# X-ray observations of low-power radio galaxies from the B2 catalogue

## 1 Introduction

It is widely accepted that the radio emission in BL Lac objects is dominated by synchrotron radiation from a relativistic jet pointed towards the observer, thus explaining, among other things, the superluminal velocities and rapid variability observed in several objects. Such a favourable orientation to the observer should not be common, implying the presence of a population of double radio sources whose twin jets lie in the plane of the sky. Low-power radio galaxies can be found which match BL Lac objects in extended radio power (Wardle, Moore & Angel 1984) and galaxy magnitude (Ulrich 1989). There is evidence that the total soft X-ray luminosity in such galaxies is correlated with the radio-core luminosity, implying a nuclear jet-related origin for at least some of the X-ray emission (Fabbiano et al. 1984). High spatial resolution X-ray measurements have further strengthened this argument by separating point-like emission from hot X-ray emitting atmospheres (Worrall & Birkinshaw 1994; Edge & Röttgering 1995; Feretti et al. 1995; Worrall 1997; Hardcastle & Worrall 1999). What has been lacking is high-resolution X-ray observations of a large unbiased sample of low-power radio galaxies with which to investigate the association of unresolved X-ray emission with the nuclear radio jet.

The B2 bright sample of radio galaxies consists of Bologna Catalogue 408-MHz radio sources identified with elliptical galaxies brighter than m<sub>Zwicky</sub>=15.4 (Colla et al. 1975, Ulrich 1989). The radio survey occupies 0.84 steradian, and is complete at 408 MHz down to 0.2 Jy for (B1950) declinations between 29.5 and 34 degrees, 0.25 Jy for declinations between 24 and 29.5 degrees and between 34 and 40 degrees, and 0.5 Jy between 21.4 and 24 degrees (Colla et al. 1970, 1975; Ulrich 1989), although none of the bright sample radio galaxies lies in the last declination range. The sample comprises 50 galaxies, of which 47 are at $z \le 0.065$ (see Table 1). The high Galactic latitudes imply a relatively small Galactic neutral hydrogen column density and small resultant X-ray absorption. The B2 sample has been shown to be well matched with radio-selected BL Lac objects in their extended radio properties and galaxy magnitudes (Ulrich 1989).

In this paper we present ROSAT X-ray measurements of 40 galaxies which constitute an unbiased subsample of the 47 galaxies at $z \le 0.065$. The data were taken from our pointed observations or from data in the ROSAT public archives. Section 2 discusses the sample of galaxies observed with ROSAT and the general properties of the X-ray data. Details of our analysis and notes on some of the sources are in Section 3. Section 4 describes X-ray – radio comparisons. Section 5 contains our conclusions. A Friedmann cosmological model with $H_0=50$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $q_0=0$ is used throughout this paper.

## 2 X-ray data

Table 1 lists the sample of 50 B2 radio sources associated with elliptical galaxies; we have not included B2 1101+38 and B2 1652+39, as these are the well-known BL Lac objects Mkn 421 and Mkn 501. We tabulate the pointed observations taken with ROSAT using the instrument with the highest spatial resolution, the High Resolution Imager (HRI; David et al. 1997), or, if no HRI observations were made, the Position Sensitive Proportional Counter (PSPC; Trümper 1983; Pfeffermann et al. 1987).
Where multiple observations exist, we present results for the longest on-axis viewings, concentrating where possible on HRI data. Most observations result in a detection. In some cases where good data exist from both the HRI and the PSPC, both were analysed and compared (this was the case for B2 0055+30, 0120+33, 0149+35, 1122+39, 1217+29 and 2335+26). Within the same ROR (ROSAT observation request), some HRI observations were split into observing periods about 6 months or a year apart. Where this was the case, the data have been merged after the individual 'observations' were checked for anomalies. In many cases resolved X-ray emission (measured better with the PSPC) is seen in addition to point-like emission. Complementary work discussing the extended X-ray emission and X-ray spectra of B2 radio galaxies based on PSPC observations can be found in Worrall & Birkinshaw (in preparation). Our treatment of the resolved emission in this paper is restricted to modelling it sufficiently well to determine a best estimate for the contribution from central unresolved emission.

40 of the full sample of 50 B2 radio galaxies (and of the 47 at $z \le 0.065$) listed in Table 1 were observed with ROSAT: the demise of the satellite in early 1999 prevented observation of the remaining 7 objects with $z \le 0.065$. No known bias was introduced into the set of 40 objects for which we have data by the process of prioritizing sources for ROSAT observation. This is illustrated in Fig. 1, which shows histograms of 1.4-GHz extended radio power ($P_{\rm ext}$), redshift, and absolute visual magnitude ($M_{\rm v}$) for the 47 galaxies at $z \le 0.065$. $P_{\rm ext}$ and $M_{\rm v}$ are indicators of isotropic unbeamed emission used to support the association of these galaxies with the hosts of BL Lac objects. A Kolmogorov-Smirnov test finds no significant ($>$90 per cent confidence) difference between the distributions of sources for which we have ROSAT data and those for which we do not, in any of the quantities $P_{\rm ext}$, $z$, or $M_{\rm v}$.

The ROSAT HRI has a roughly square field of view of 38 arcmin on a side. A functional form for the azimuthally averaged point spread function (PSF) can be found in David et al. (1997). The core of the HRI radial profile of a point-like source may be wider than the nominal PSF, and ellipsoidal images are sometimes seen, due to a blurring attributable to residual errors in the aspect correction. The major axes of such ellipsoidal images are not aligned with the satellite wobble direction (the wobble is employed to ensure that no sources are imaged only in hot pixels in the HRI or hidden behind the PSPC window-support structure) and depend unpredictably on the day of observation and therefore the satellite roll angle. The asymmetry is strongest between 5 arcsec and 10 arcsec from the centroid of the image. The PSF of the HRI begins to be noticeably influenced by the off-axis blur of the X-ray telescope at $\sim 7$ arcmin off-axis. All HRI observations discussed here are essentially on axis. The ROSAT PSPC has a circular field of view of diameter 2 degrees, and the PSF has a FWHM of about 30 arcsec at the centre of the field, degrading only marginally out to off-axis angles of about 20 arcmin. At larger radii the mirror blur dominates the spatial resolution, and at 40 arcmin off axis the FWHM of the PSF is about 100 arcsec.
Only one of the 40 sources discussed here (B2 0722+30) is significantly affected by mirror blur, as all others were observed in the central part of the field of view. We used the Post Reduction Off-Line Software (PROS; Worrall et al. 1992) to generate radial profiles of the X-ray data. Background was taken from a source-centred annulus of radii given in Table 3. Where the radial profile has been used to probe the extended emission, the contribution from extended-emission models to the background region is taken into account (Worrall & Birkinshaw 1994). We excluded confusing sources, defined as those separated from the target by $\ge 15$ arcsec (HRI) or $\ge 30$ arcsec (PSPC), showing up at $3\sigma$ above the background level and overlapping the on-source or background regions (or lying slightly beyond, but still affecting the background due to the broad wings of the PSF). Optical images (e.g. the Palomar Sky Survey plates, digitized by the Space Telescope Science Institute) and radio maps were overlaid on the X-ray images, to check the identity of each target source and any neighbouring X-ray sources, and to help to classify and further limit the effects of confusing sources. Our luminosity determination for unresolved emission assumes a power-law spectrum with an energy index $\alpha$ of 0.8 ($f_\nu \propto \nu^{-\alpha}$) modified by Galactic absorption. Variations in spectral form affect the luminosity, but this is normally a small error compared with statistical uncertainties.

## 3 Radial profiles and nuclear X-ray emission

We analyse the structure of an X-ray source by extracting a radial profile and fitting various models convolved with the energy-weighted PSF of the detector in question [the nominal PSF for the HRI (David et al. 1997) and the PSF for the PSPC (Belloni, Hasinger & Izzo 1994)]. As well as point-source models, we have fitted our radial profiles with $\beta$ models (Sarazin 1986), which describe gas in hydrostatic equilibrium. The values used for $\beta$ were 0.3, 0.35, 0.4, 0.5, 0.67, 0.75 and 0.9. For each source the models used are (a) point source, (b) $\beta$ model and (c) point source + $\beta$ model. Model parameters for sources where an extended component is detected are given in Table 2. In most cases we find that for observations with enough counts a significantly better fit to the data is achieved by fitting the composite model (c) rather than either (a) or (b) individually (see Table 2). Residual errors in aspect correction affect the HRI PSF and must be taken into account when evaluating the contribution from a point source [see Worrall et al. (1999), where it was found that a point source can appear more like a $\beta$ model with a core radius of up to about 5 arcsec (dependent on the $\beta$ assumed) because of the aspect smearing]. Of the sources in this paper only 1833+32 has a high enough count rate to perform the dewobbling procedure described by Harris et al. (1998); we find no substantial difference in the best-fit parameters after dewobbling. For observations where the data show extension on small scales, it is difficult to decide whether the source is really point-like or indeed slightly extended. In some cases, the fit to the point+$\beta$ model has a substantially lower $\chi^2$ than the $\beta$-model fit alone, and here we regard the point-like component as well measured. Where $\chi^2$ remains unchanged between these two models, the total counts have been taken as an upper limit on the point-source counts.
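The $\beta$ models referred to above have the familiar surface-brightness form (a sketch of the standard parametrization, see Sarazin 1986; the fits themselves use this profile convolved with the instrumental PSF):

$$S(r) = S_0\left[1 + \left(\frac{r}{r_c}\right)^2\right]^{\frac{1}{2}-3\beta},$$

where $r_c$ is the core radius and $S_0$ the central surface brightness, so the tabulated values of $\beta$ fix the asymptotic slope of the resolved emission while $r_c$ and $S_0$ are fitted.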
A literature search into the environments of the sources and a cross-correlation of the HRI and PSPC results, where possible, help us further validate our HRI findings and place limits on the likelihood of a source being primarily point-like. In the 7 cases where there are not enough counts to perform adequate radial-profile fitting, the total counts are taken as an upper limit to the contribution of point-like emission. For non-detections, a 3$\sigma$ upper limit, derived by applying Poisson statistics to a 5 by 5 arcsec detection cell for the HRI on-axis observations (3 cases), or a 120 by 120 arcsec detection cell for the PSPC observation of the off-axis source B2 0722+30, centred on the position of the radio and optical core, is taken as an upper limit on both the total and the unresolved X-ray emission.

Table 2 gives results for the 16 sources with enough counts to allow radial-profile model fitting. Table 3 presents the net counts within a circle of specified radius and, for the sources with enough counts to allow radial profiling (requiring a minimum of $\sim 70$ counts over 3 data bins), the point-source contribution. There is a wide range in the ratio of unresolved to resolved counts. X-ray core flux and luminosity densities calculated from the unresolved count rates, and radio core flux and luminosity densities taken from the literature, are given in Table 4.

### 3.1 Notes on individual sources

Where analysis of sample sources is present in the literature, comparisons have been made with the results presented here. Results for B2 0055+26 are taken from Worrall, Birkinshaw & Cameron (1995). B2 0326+39, 1040+31 and 1855+37 are discussed in detail in Worrall & Birkinshaw (in preparation). The HRI results for B2 2229+39 are consistent with the findings of Hardcastle, Worrall & Birkinshaw (1998) for a PSPC observation of the source.

B2 0120+33: Identified with the galaxy NGC 507, the source lies in Zwicky cluster 0107+3212 and is one of the brightest galaxies in a very dense region. Extended X-ray emission is seen out to a radius of at least 16 arcmin, and there is evidence for the presence of a cooling flow and possible undetected cooling clumps distributed at large radii (Kim & Fabbiano 1995). B2 0120+33 has a steep radio spectrum and a weak core. It may be a source with particularly weak jets, or possibly a remnant of a radio galaxy whose nuclear engine is almost inactive and whose luminosity has decreased due to synchrotron or adiabatic losses (Fanti et al. 1987). The HRI map shows the central region of this source to be asymmetrical, with a large extended emission region to the SW. Radial-profile fitting of the innermost parts of this galaxy with point, $\beta$, and point+$\beta$ models shows that a good fit to the data is achieved by using a single $\beta$ model with $\beta$=0.67, which gives a core radius of 4 arcsec. The $\beta$+point model gives a marginally better fit, but the additional component is not significant on an F-test at the 90 per cent confidence level. Therefore, we have taken the total counts from the inner regions of this source as an upper limit on any unresolved emission present, keeping in mind that this may also include a cooling-flow contribution (this also holds for other sources with possible unresolved cooling flows such as B2 0149+35, 1346+26 and 1626+39).

B2 0149+35: B2 0149+35 is identified with NGC 708 and is associated with the brightest galaxy in the cluster Abell 262.
Braine & Dupraz (1994) suggest that it contains a cooling flow which may contribute excess central X-rays, and this may explain why B2 0149+35 has a higher point-like X-ray luminosity and flux than expected based on other sample members. It is not possible to separate spatially a cooling-flow contribution from unresolved X-ray emission using the PSPC observation, and the asymmetry of the source makes the extraction of a radial profile difficult. The HRI observation is split into 2 OBIs (Observation Intervals), one of which shows a barred N-S structure. Each OBI was individually analysed by taking close-in source regions, and this gives results which are consistent with the PSPC data from the inner region of B2 0149+35. In the longer, and more reliable, of the two OBIs (12.7 ks), 207 counts were detected. The point-like contribution to the net emission from this source, however, is not significant at the 95 per cent level when an F-test is performed. The detected counts are therefore taken as an upper limit on the point-like emission. The shorter OBI was not used.

B2 0207+38: This source is described as being more similar to an S0 or to a spiral galaxy than to an elliptical (Parma et al. 1986). It has also been called a post-eruptive Sa (Zwicky et al. 1968) or a distorted Sa (de Vaucouleurs, de Vaucouleurs & Corwin 1975). The radio structure is disc-like and there is no sign of either a radio core, or of jets or radio lobes (Parma et al. 1986). It is probably a starburst, like B2 1318+34. There are no ROSAT X-ray data for this source.

B2 0836+29A: This object (4C 29.30) has often been confused in the literature with the cD galaxy B2 0836+29 at $z=0.079$, which is the brightest galaxy in Abell 690.

B2 0924+30: B2 0924+30 appears to be a remnant radio galaxy whose nuclear engine is inactive (Fanti et al. 1987; Cordey 1987; Giovannini et al. 1988). It is the brightest member of a Zwicky cluster (Ekers et al. 1981), and the X-ray data suggest extended X-ray emission, although the detection is of marginal significance. The relatively high X-ray emission for a source with no detectable core radio emission may therefore be due to the extended gas in the cluster. We have taken the detected emission as an upper limit on possible point-source emission.

B2 1122+39: Analysis of this source, both in the PSPC and in the HRI, shows that $\sim 3$ per cent of the total emission is contributed by an unresolved source. This is consistent with the findings of Massaglia et al. (1996), who find a contribution from a point source of $<6$ per cent.

B2 1217+29: PSPC and HRI analyses of this source are consistent. A $\beta$+point model fits the data better than a $\beta$ model alone (see Table 2).

B2 1254+27: There is a large discrepancy between the positions given for this object in the NASA Extragalactic Database (NED) and the SIMBAD Astronomical Database. This is because the radio source has, in some cases, been incorrectly associated with the galaxy NGC 4819 rather than the true host galaxy for the radio emission, which is NGC 4839. NGC 4839 is classified, confusingly, as morphological type S0 (Eskridge & Pooge 1991), E/S0 (Jorgensen, Franx & Kjaegaard 1992) or as a cD (Gonzalez-Serrano, Carballo & Perez-Fournon 1993; Fisher, Illingworth & Franx 1995). Andreon et al. (1996) also mention that the low average surface brightness suggests that this galaxy is dominated by an extended disc.
The X-ray map of the source shows extended large-scale emission to the SW [described by Dow & White (1995) as being in the process of interacting with the intracluster medium of the main (Coma) cluster]. This goes beyond the size of the optical galaxy and has been excluded here so as not to affect the background emission. About 88 per cent of the net counts arise from a point-like emission component.

B2 1257+28: The region of enhanced X-ray emission in B2 1257+28 in the Coma cluster is substantially smaller than the size of the optical galaxy. Small on-source (12 arcsec radius) and background (15–22.5 arcsec) source-centred circles were used in order to verify the contribution from unresolved X-ray emission given by our best-fit model.

B2 1317+33: B2 1317+33 (NGC 5098A) has a companion galaxy (NGC 5098B) at a distance of $\sim 40$ arcsec. We have checked that the X-ray and radio emission come from NGC 5098A by overlaying the radio, optical and X-ray maps.

B2 1318+34: B2 1318+34 is a classic merger-induced starburst, whose total radio flux can be attributed to starburst activity rather than an active nucleus (Condon, Huang & Yin 1991).

B2 1346+26: The source is a cD galaxy in Abell 1795, identified with 4C 26.42. It contains a central cooling-flow component, as discussed in Fabian et al. (1994). HST WFPC2 images of the core of this cooling flow are presented in Pinkney et al. (1996). Our analysis of the HRI data detects a central point source which is significant on an F-test at the 95 per cent confidence level (see Table 2). About 1 per cent of the total counts lie in this point-like component (see Table 3).

B2 1422+26: B2 1422+26 is not radially symmetric in the X-ray. An off-axis X-ray source in the same field of view gives a good fit to the nominal PSF, and so we can rule out the possibility that the X-ray extension seen in B2 1422+26 is due to the ROSAT aspect-correction problem. The possible detection of a point-like component is not significant on an F-test at the 95 per cent level (although it passes at the 90 per cent level). We have nevertheless taken the point counts calculated from these model fits as our best estimate of the central emission, though the errors are large.

B2 1615+35: Our HRI analysis is consistent with Feretti et al. (1995). The X-ray emission is largely point-like (see Table 3), with $\sim 60$ per cent of the net counts coming from the point source.

B2 1621+38: B2 1621+38 was analysed by Feretti et al. (1995), who found a point-source contribution of $\sim 50$ per cent of the total X-ray flux. Our analysis is consistent with this result; we find the point-source contribution to be 18$\pm$4 per cent.

B2 1626+39: B2 1626+39 lies in a cluster (A2199) with a prototypical cooling flow. Owen & Eilek (1998) conclude that the radio source is relatively young and has been disrupted by the surrounding gas. The ROSAT HRI data set for this source consists of 2 OBIs roughly 7 months apart. Only in the second observation does the source appear extended, with two adjacent peaks. We have taken this to be due to errors in the aspect correction or processing effects and therefore have used only the first OBI in our analysis.

B2 1833+32: B2 1833+32 is an FRII radio galaxy (Laing, Riley & Longair 1983; Black et al. 1992) with broad emission lines (Osterbrock, Koski & Phillips 1975; Tadhunter, Perez & Fosbury 1986; Kaastra, Kunieda & Awaki 1991).
Its higher than expected X-ray flux, as compared with the core radio strength, may arise from emission in the central accretion disc around the active nucleus, seen owing to an advantageous viewing angle as indicated by the broad emission lines.

B2 2236+35: This source has a double symmetric radio jet embedded in a low surface-brightness region. The two extended lobes are similar in strength and size (Morganti et al. 1987). The X-ray emission at radii greater than about 20 arcsec seems aligned with the radio jets in this source. Model fitting shows $\sim 25$ per cent of the total counts to be in the point source.

## 4 X-ray – radio comparisons

In Table 3 we list the net counts detected for each source within a specified radius and our best estimate of the contribution from unresolved emission, derived as described above. 1-keV luminosity densities and broad-band soft X-ray luminosities calculated from the values for unresolved emission are given in Table 4, along with the radio core flux density and luminosity density. In Fig. 2 we show a logarithmic plot of radio core luminosity density against the unresolved core X-ray emission; the corresponding flux density-flux density plot appears in Fig. 3. Both the logarithmic X-ray and radio flux densities and the corresponding luminosities are correlated at the $>99.99$ per cent significance level on a modified Kendall's $\tau$-test which takes upper limits into account, as implemented in ASURV (LaValley, Isobe & Feigelson 1992). The flux-flux correlation gives us confidence that the luminosity-luminosity relationship is not an artificially introduced redshift effect.

To determine the slope of the core flux-flux and luminosity-luminosity plots, a generalised version of the Theil-Sen estimator was used, as presented in Akritas, Murphy & LaValley (1995). This takes into account the nature of the upper limits by assuming that the individual points are all part of the same parent population. For more details of this analysis and its advantages over the more commonly used survival-analysis method, see Hardcastle & Worrall (1999), where it is explained how using the bisector of two regression lines provides a more robust estimate of the slope. This method, however, does not give a value for the intercept of the best-fit line. To determine the best-fit line plotted on Fig. 2, a regression based on the bisector of slopes determined by the Schmitt algorithm as implemented in ASURV was used, because it does allow us to determine an intercept.

For the whole sample, the Theil-Sen luminosity-luminosity slope is 1.05 with 90 per cent confidence limits 0.86–1.28. There are, however, a few galaxies that should be removed. These are the starburst object B2 1318+34 (which is not a member of a strict AGN sample) and the FRII B2 1833+32 (which is a broad-line galaxy). For comparison purposes we have retained these objects on Figs. 2 & 3. With the omission of these two sources, we find a Theil-Sen slope of 0.96 for the luminosity-luminosity relation, with a 90 per cent confidence range (derived from simulation) of 0.78–1.21. The median logarithmic dispersion about the regression line is $\sim 0.2$. Schmitt regression analysis of the slope gives consistent results, with a slope of 1.15 and a 90 per cent confidence range of 0.92–1.39. Normalisation of the Schmitt slope at a radio luminosity of 10<sup>22</sup> W Hz<sup>-1</sup> sr<sup>-1</sup> gives a best-fit normalisation value of 15.55 (i.e. the predicted X-ray luminosity density at that radio luminosity is 10<sup>15.55</sup> W Hz<sup>-1</sup> sr<sup>-1</sup>); the 90 per cent confidence range on this is 15.2 to 15.9. From combining statistical methods, the best overall estimate and 90 per cent confidence uncertainty for the core X-ray–radio luminosity relation for low-power radio galaxies is given by:

$$\log(l_x) = \left(0.96^{+0.25}_{-0.18}\right)\,\log(l_r/10^{22}) + (15.55 \pm 0.35)$$
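As an illustrative application (our arithmetic, using the central values only, so an order-of-magnitude sketch rather than a definitive prediction): a source with core radio luminosity density $l_r = 10^{24}$ W Hz<sup>-1</sup> sr<sup>-1</sup> is predicted to have

$$\log(l_x) \approx 0.96 \times \log(10^{24}/10^{22}) + 15.55 = 17.47,$$

i.e. a 1-keV luminosity density of about $3\times 10^{17}$ W Hz<sup>-1</sup> sr<sup>-1</sup>, with the median logarithmic dispersion of $\sim 0.2$ quoted above setting the expected scatter of individual sources about this value.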
A couple of sources lie significantly away from the regression line (B2 1254+27 and B2 1257+28). Both lie in the Coma cluster and may be contaminated by cluster emission. B2 0120+33 has been classified as a possible remnant of a radio galaxy whose nuclear engine is inactive (Fanti et al. 1987), and is shown as an upper limit above the value expected from the X-ray correlation for other sample members. Its high total X-ray flux may be due to the cooling flow seen by Kim & Fabbiano (1995) (see Section 3 of this paper), which makes the extraction of the core flux difficult. B2 0924+30 is also a relic source. B2 0149+35 contains a cooling flow which may contribute excess central X-rays (Braine & Dupraz 1994).

The correlation shown in Figs. 2 & 3 suggests a physical relationship between the soft X-ray emission of radio galaxies and the jet-generated radio core emission. Correlations between the total X-ray emission and the radio core emission have been seen in Einstein Observatory data (e.g. Fabbiano et al. 1984), but those data did not have the spatial resolution necessary to separate point and extended components. Our ROSAT analysis, with its decomposition of the X-ray emission into resolved and unresolved components, now shows that the nuclear X-ray emission is strongly correlated with the nuclear radio-core emission. This favours models which imply a nuclear jet-related origin for at least some of the X-ray emission.

## 5 Conclusions

Radial profiling and model fitting of ROSAT data, primarily from the HRI, have allowed us to separate point-like contributions from the overall X-ray emission in low-power radio galaxies from a well-defined, nearby sample. We find flux-flux and luminosity-luminosity core X-ray/radio correlations for such sources, with slopes that are consistent with unity. This suggests a physical relationship between the soft X-ray emission of radio galaxies and the jet-generated radio core emission, with the clear implication that at least some of the X-ray emission is related to the nuclear radio jet. In future work we will estimate X-ray beaming parameters under the assumption that radio galaxies are the parent population of BL Lac objects.

## Acknowledgments

This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Support from NASA grant NAG 5-1882 is gratefully acknowledged. We thank the referee for useful comments.
# Neutrino Masses and Oscillations in Models with Large Extra Dimensions

## I Introduction

The past year has seen an explosion of interest and activity in theories with large extra dimensions. It has been realized that extra dimensions almost as large as a millimeter could apparently be hidden from many extremely precise measurements that exist in particle physics. What makes such an idea exciting is the hope that the concept of hidden space dimensions can be probed by collider as well as other experiments in the not too distant future. Furthermore, it brings the string scale closer to a TeV in some scenarios, placing the details of string physics within reach of experiment. On the theoretical side, recent developments in strongly coupled string theories have given a certain amount of credibility to such speculations, in that the size of extra dimensions can be proportional to the string coupling, and therefore in the context of strongly coupled strings such large dimensions are quite plausible.

This new class of theories has a sharp distinction from the conventional grand unified theories as well as the old weakly coupled string models, in that in the latter case most scales other than the weak scale and the QCD scale were assumed to be in the range of $10^{13}$ to $10^{16}$ GeV. That made it easier to understand observations such as small neutrino masses and a highly stable proton. Now that most scales are allowed to be small in the new models, the two particularly urgent questions that need to be answered are why the proton is stable and why the neutrinos are so light. It has been speculated that proton stability may be understood by conjecturing the existence of U(1) symmetries that forbid the process. On the other hand, no such simple argument for understanding small neutrino masses seems to exist. The familiar seesaw mechanism is not implementable in simple versions of these models, where all scales are assumed to be low. Understanding the lightness of neutrinos is therefore a challenge to such models.

One approach to this problem discussed in the recent literature is to use neutrinos that live in the bulk (they are therefore necessarily singlet, or sterile, with respect to normal weak interactions) and to observe that their coupling to the known neutrinos in the brane is inversely proportional to the square root of the bulk volume. The neutrino mass (which is now a Dirac mass) can be shown to be $\frac{h v M^*}{M_{Pl}}$, where $M^*$ is the string scale. If the string scale is in the TeV range, this leads to a neutrino mass of order $10^{-4}$ eV or so. This therefore explains why the neutrino masses are small. A key requirement for small neutrino masses is therefore that the string scale be in the few TeV range (by choosing the Yukawa coupling to be smaller, the string scale could of course be pushed higher). Furthermore, these models also generally predict oscillations between the modes of the bulk neutrino and the known neutrinos. Thus if one wants an explanation of the solar neutrino deficit via neutrino oscillations, this implies that there must be at least one extra dimension with size in the micrometer range. The latter provides an interesting connection between neutrino physics, gravity experiments searching for deviations from Newton's law at sub-millimeter distances, and possible collider searches for TeV string excitations. It then follows that larger values of the string scale would jeopardize this simple explanation of the small neutrino mass.
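As a rough numerical check of this estimate (our arithmetic, taking $h \simeq 1$, $v = 246$ GeV and $M_{Pl} \simeq 1.2\times 10^{19}$ GeV; the exact prefactor depends on conventions for the vev and the coupling):

$$m_\nu \simeq \frac{h v M^*}{M_{Pl}} \approx \frac{246\ {\rm GeV} \times 5\ {\rm TeV}}{1.2\times 10^{19}\ {\rm GeV}} \approx 1\times 10^{-4}\ {\rm eV},$$

which reproduces the order of magnitude quoted above for a string scale of a few TeV.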
To the extent that larger values of the string scale are as plausible as the TeV value, one might search for alternative ways to understand the small neutrino masses. It is the goal of this paper to outline such a scenario and study its consequences. The particular example we consider illustrates this scenario in models with large extra dimensions and a generic brane-bulk picture for the particles, where the string scale is necessarily bigger ($10^8$ GeV or so) and solar or atmospheric neutrino oscillations require at least one extra dimension to be in the micrometer range.

The new ingredient of the class of models we discuss here is that we include the right-handed neutrino in the brane and consider the gauge interactions to be described by a left-right symmetric model. In these models, the left-handed neutrinos are not allowed to form mass terms with the bulk neutrino due to extra gauge symmetries; instead it is only the right-handed neutrino which is allowed to form mass terms with the bulk neutrinos. This leads to a different profile for the neutrino masses and mixings. In particular, we find that in this model the left-handed neutrino is exactly massless, whereas the bulk sterile neutrinos have masses related to the size of the extra dimensions, of order $10^{-3}$ eV if there is at least one large extra dimension with size in the micrometer range. A key distinguishing feature of this model from the existing ones is that the string scale is now necessarily much larger than a TeV. We also find that the pattern of the neutrino oscillations is different from the previous case.

As mentioned, the minimal gauge model where our scheme is realized is the left-right symmetric model where the right-handed symmetry is broken by the doublet Higgs bosons $\chi_R(1,2,1)$. The notation we follow is that the three numbers inside the parenthesis correspond to the quantum numbers under $SU(2)_L\times SU(2)_R\times U(1)_{B-L}$. We do not need supersymmetry for our discussion and will therefore work within the context of nonsupersymmetric left-right models.

To set the stage for our discussion, let us start with a review of the neutrino mass mechanism in models with large extra dimensions discussed in Ref. . The basic idea is to include the coupling of the bulk neutrino $\nu_B(x^\mu, y)$ (which is a standard model singlet) to the standard model lepton doublet $L(x^\mu, y=0)$. The Lagrangian that is responsible for the neutrino masses in this model is:

$$\mathcal{S} = \kappa \int d^4x\, \bar{L} H \nu_B(x, y=0) + \int d^4x\, dy\, \bar{\nu}_B(x,y)\, \Gamma^5 \partial_5\, \nu_B(x,y) + {\rm h.c.} \qquad (1)$$

Writing the four-component spinor $\nu_B \equiv \begin{pmatrix} \nu_B^1 \\ i\sigma_2 \nu_B^2 \end{pmatrix}$, we get for the neutrino mass matrix:

$$(\bar{\nu}_{eL}\ \bar{\nu}'_{BL}) \begin{pmatrix} \kappa v & \sqrt{2}\,\kappa v \\ 0 & \partial_5 \end{pmatrix} \begin{pmatrix} \nu_{0B} \\ \nu'_{BR} \end{pmatrix} \qquad (6)$$

This is a compact way of writing the KK excitations along the fifth dimension. Notice that the $\sqrt{2}$ in the off-diagonal term appears to compensate the different normalization of the zero mode in the Fourier expansion of the bulk field in terms of $\sin(ny/R)$ and $\cos(ny/R)$ ($R$ being the radius of the fifth dimension). Also notice that only the last terms may couple to the pure four-dimensional fields, and we may always choose these modes to have positive KK masses.
In the above equation $\kappa$ embodies the features of the string scale and the radius of the extra dimension, i.e. $\kappa \simeq \frac{M^*}{M_{Pl}}$. For $\kappa v \ll \mu_0$, where $\mu_0 = R^{-1}$, we get the mixing of the $\nu_e$ with the bulk modes to be

$$\tan\theta_{eB} \simeq \frac{\sqrt{2}\,\kappa v\, \partial_5}{\partial_5^2 - \kappa^2 v^2} \qquad (7)$$

Substituting the eigenvalues of the operator $\partial_5 = n\mu_0$, we get for the n-th KK excitation a mixing $\tan\theta_n \simeq \frac{\xi}{n}$, where $\xi \equiv \frac{\sqrt{2}\kappa v}{\mu_0}$. This expression is the same as in Ref. . An important point to note here is that since $\nu_e$ has a mass of $\kappa v$, present neutrino mass limits lead to an upper limit on $\kappa$ and hence on the string scale. For instance, if we choose the present tritium decay bound of $m_{\nu_e} \le 2.5$ eV, we get $M^* \le 10^7$ GeV. This bound gets considerably strengthened if we further require that the solar neutrino puzzle be solved via $\nu_e \to \nu_s$ oscillation (where we have called the typical excited modes of the bulk neutrino "sterile neutrinos"). The reason is that this requires $\Delta m^2 \simeq 10^{-5}$ eV<sup>2</sup>, and in the absence of any unnatural fine tuning we will have to assume that $m_{\nu_e} \simeq m_{\nu_s} \simeq 10^{-3}$ eV. This implies that $M^* \simeq 10$ TeV and the bulk radius is given by $R^{-1} \simeq 10^{-3}$ eV, implying $R \simeq 0.2$ mm. Another prediction of this model is that the mixing of the $\nu_e$ with the bulk neutrinos goes down like $(1/n)$, where $n$ denotes the level of the Kaluza-Klein excitation.

Let us now proceed to the new case we are considering. The part of the action relevant to our discussion is given by:

$$\mathcal{S} = \int d^4x\, \left[\kappa \bar{L}\chi_L \nu_B(x,y=0) + \kappa \bar{R}\chi_R \nu_B(x,y=0) + h \bar{L}\varphi R\right] + \int d^4x\, dy\, \bar{\nu}_B \Gamma^5 \partial_5 \nu_B + {\rm h.c.} \qquad (8)$$

where $L^T = (\nu_{eL}, e_L)$ and $R^T = (\nu_{eR}, e_R)$, and $\varphi$ is the bidoublet Higgs field that breaks the electroweak symmetry and gives mass to the charged fermions. We assume that the $SU(2)_R$ gauge group is broken by $\langle\chi_R^0\rangle = v_R$ with $\langle\chi_L^0\rangle = 0$. (In general, in the left-right model, there is a coupling of the form $M_0 \bar{\chi}_L \varphi \chi_R$ which induces a vev for the $\chi_L^0$ field of order $\frac{M_0 v_{wk}}{M_{str}}$, which for a choice of $M_0 = v_{wk}$ gives $\langle\chi_L^0\rangle \simeq 0.1$ MeV. We assume $M_0$ to be smaller, so that this contribution to the neutrino masses is negligible. It could also be that the symmetries of the charged fermion sector are such that they either forbid such a coupling or give such a vev to $\varphi$ that this term does not affect the value of the potential at the minimum. We thank the referee for pointing this out.) The $SU(2)_L\times U(1)_Y$ symmetry is broken by ${\rm Diag}\langle\varphi\rangle = (v, v')$. The profile of the neutrino mixing matrix in this case is given by

$$(\bar{\nu}_{eL}\ \bar{\nu}_{0BL}\ \bar{\nu}'_{BL}) \begin{pmatrix} hv & 0 \\ \kappa v_R & 0 \\ \sqrt{2}\,\kappa v_R & \partial_5 \end{pmatrix} \begin{pmatrix} \nu_{eR} \\ \nu'_{BR} \end{pmatrix} \qquad (14)$$

where $hv$ is the usual Dirac mass term present in models with the seesaw mechanism, normally assumed to be of the order of typical charged fermion masses (we will also make the plausible assumption that $hv$ is of the order of a few MeV for the first generation, which will be the focus of this article). We can now proceed to find the eigenstates and neutrino mixings.
The first point to note is that this matrix, being rectangular, has one zero eigenvalue corresponding to the state

$$\nu_0 = \cos\zeta\, \nu_{eL} - \sin\zeta\, \nu_{0BL} \qquad (15)$$

where $\sin\zeta = \frac{hv}{\sqrt{(\kappa v_R)^2 + (hv)^2}}$. If we want the lightest eigenstate to be predominantly the electron neutrino, so that the observed universality of the charged current weak interaction is maintained, we must demand that $\kappa v_R \gg hv$. Since $\kappa \simeq \frac{M^*}{M_{Pl}}$, this constraint will imply a constraint on the string scale $M^*$. Let us see under what circumstances this condition is satisfied. Since the Dirac mass $hv \simeq$ a few MeV, we would like $\kappa v_R \gg$ a few MeV. Let us assume that there is one extra dimension with large size (of order a millimeter, denoted by $R_1$) and that all other extra dimensions have very small sizes, assumed to be equal. Then the observed strength of the gravitational interaction implies the relation

$$M_{Pl}^2 = M^{*\,n+2}\, R^{n-1}\, R_1 \qquad (16)$$

where $R_1$ is the largest dimension and the rest of the $R$'s are small, as required by neutrino physics. For simplicity let us identify the right-handed symmetry breaking scale, the string scale and the inverse radii of the small dimensions $R^{-1}$. Then we have the approximate relation

$$\kappa \simeq \frac{1}{\sqrt{M^{*\,n} R^{n-1} R_1}} \simeq \frac{M^*}{M_{Pl}} \qquad (17)$$

Requiring $\kappa v_R \gg$ a few MeV (say, 100 MeV) implies that $M^* \simeq v_R \simeq R^{-1} \simeq 10^{8.5}$ GeV. In fact it is not hard to see that to satisfy the relation in Eq. (16), the radii of the "small" compact dimensions must also be of order $M^{*\,-1}$, and that of the large dimension is of course in the sub-millimeter range. While this is a generic possibility, one can of course make many variations on this general theme. The results of this paper are not affected by such variations. Thus it appears that our scheme prefers a high string scale, in contrast with the earlier proposal.

We can now look at the rest of the neutrino spectrum arising from the KK excitations of the bulk mode as well as the right-handed neutrino. They can be studied by looking at the "$2\times 2$" matrix after extracting the zero mode discussed above. Defining the orthogonal combination to $\nu_0$ as $\tilde{\nu}_{0L}$, we have:

$$(\bar{\tilde{\nu}}_{0L}\ \bar{\nu}'_{BL}) \begin{pmatrix} \kappa v_R & 0 \\ \sqrt{2}\,\kappa v_R & \partial_5 \end{pmatrix} \begin{pmatrix} \nu_{eR} \\ \nu'_{BR} \end{pmatrix} \qquad (22)$$

where $\tilde{\nu}_{0L} = \cos\zeta\, \nu_{0BL} + \sin\zeta\, \nu_{eL}$ and we have used the approximation $\kappa v_R \gg hv$. In terms of this shorthand notation, the characteristic equation of the Dirac mass matrix above is easily computed to be

$$(m^2 - \partial_5^2)\left[m^2 - \kappa^2 v_R^2 + \frac{2 m^2 \kappa^2 v_R^2}{\partial_5^2 - m^2}\right] = 0, \qquad (23)$$

This, in a manner similar to that noted by Dienes et al., translates into the transcendental equation

$$m_n = \pi \kappa^2 v_R^2 R \cot(\pi m_n R) \qquad (24)$$

where $m_n$ is the mass of the n-th KK state.
The tower of eigenstates, symbolically denoted by $\nu_{nL}$, are exactly given by

$$\nu_{nL} = \frac{1}{\eta_n}\left[\tilde{\nu}_{0L} + \frac{\sqrt{2}\, m_n^2}{m_n^2 - \partial_5^2}\, \nu'_{BL}\right], \qquad (25)$$

with the normalization factor $\eta_n$ given as

$$\eta_n^2 = 1 + \sum_{k=1}^{\infty} \frac{2\, m_n^4 R^4}{(k^2 - m_n^2 R^2)^2}. \qquad (26)$$

As long as $\kappa v_R R \gg 1$, all the mass eigenvalues $m_n$ up to $m_n \simeq \kappa v_R$ satisfy $\cot(\pi m_n R) \simeq 0$. Therefore, the masses of all these states are $\simeq (2n-1)/2R$. Thus the effect of the mixing with the brane right-handed neutrinos is to shift the bulk neutrino levels below and near $\kappa v_R$ downward. This is similar to what was noted, for all states, in the case without the $\nu_R$ in the brane. On the other hand, using the same equation (Eq. (11)) it is easy to see that the states much heavier than $\kappa v_R$ have masses $\simeq n/R$, which also means that they basically decouple from the light sector. For the eigenstates in the middle, those with masses just beyond $\kappa v_R$, their mixing with the lightest states strongly suppresses their contribution to $\tilde{\nu}_{0L}$, by an amount $\sim 1/(\kappa v_R R)$, as can be checked from Eq. (26) by summing over those elements for which $k \gtrsim \kappa v_R R$. There are, of course, similar results for the right-handed states, which involve the right-handed bulk neutrinos. In fact, they are identified as the negative solutions to the characteristic equation, since the same mass matrix gives the masses of both left- and right-handed sectors and those are degenerate.

For discussing neutrino oscillation, the left-handed eigenstates are the only ones relevant. For this purpose let us write down the weak eigenstate $\nu_e$ in terms of the mass eigenstates through their mixing into $\tilde{\nu}_0$. Since $\nu_e = c_\zeta \nu_0 + s_\zeta \tilde{\nu}_0$, with $c_\zeta$ ($s_\zeta$) standing for $\cos\zeta$ ($\sin\zeta$), the survival probability of $\nu_e$ oscillations reads

$$P(\nu_e \to \nu_e(t)) = c_\zeta^4 + s_\zeta^4\, P(\tilde{\nu}_0 \to \tilde{\nu}_0(t)) + 2\, c_\zeta^2 s_\zeta^2\, {\rm Re}\,\langle\tilde{\nu}_0|\tilde{\nu}_0(t)\rangle, \qquad (27)$$

where $P(\tilde{\nu}_0 \to \tilde{\nu}_0(t))$ is the corresponding "survival probability" for $\tilde{\nu}_0$. In terms of the mass eigenstates,

$$\tilde{\nu}_0(t) = \sum_{n=1}^{N} \frac{e^{-i(m_n^2 t/2E)}}{\eta_n}\, \nu_n, \qquad (28)$$

where we have cut off the sum at $N \simeq \kappa v_R R$, explicitly decoupling the heavy eigenstates. This is justified by the arguments given above. For the remaining states, one can show that $\eta_n^2 \simeq \frac{\pi^2}{8}(2n-1)^2$, which follows from the following exact expression for $\eta_n^2$:

$$\eta_n^2 = 1 + \frac{1}{4}\csc^2(\pi m_n R)\left[2\pi^2 m_n^2 R^2 + \pi m_n R \sin(2\pi m_n R) + 2\cos(2\pi m_n R) - 2\right] \qquad (29)$$

It means that the main contribution to $\tilde{\nu}_0$ comes from the lightest mode, which is expected since the bulk zero mode is its main original component. The final survival probability after the neutrino traverses a distance $L$ in vacuum can be written down as

$$P_{ee}(E) = 1 - 4\, c_\zeta^2 s_\zeta^2 \sum_{n=1}^{N} \frac{1}{\eta_n^2} \sin^2\left(\frac{m_n^2 L}{4E}\right) - 4\, s_\zeta^4 \sum_{k<n}^{N} \frac{1}{\eta_n^2 \eta_k^2} \sin^2\left[\frac{(m_n^2 - m_k^2) L}{4E}\right]. \qquad (30)$$

Therefore, the oscillation length is given by $L_{osc} = E R_1^2 \simeq (E/{\rm MeV}) \times 5\times 10^4$ m for $R_1 = 0.1$ mm.
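Restoring $\hbar c$ explicitly makes the numerical value transparent (our unit-conversion check, with $\hbar c \simeq 1.97\times 10^{-7}$ eV m):

$$L_{osc} = \frac{E R_1^2}{\hbar c} \approx \frac{10^6\ {\rm eV} \times (10^{-4}\ {\rm m})^2}{1.97\times 10^{-7}\ {\rm eV\,m}} \approx 5\times 10^4\ {\rm m} \qquad (E = 1\ {\rm MeV},\ R_1 = 0.1\ {\rm mm}),$$

reproducing the estimate just quoted.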
This value for the oscillation length is right in the domain of accessibility of the KAMLAND experiment. In Fig. 1 we present the survival probability as a function of distance for specific values of $\kappa v_R$ and $hv$. We also present the probability for the earlier proposal of Ref. in Fig. 2 for comparison. Notice that the differences between the two profiles come, basically, from two facts: first, the masses of the light bulk modes have been shifted down by $\frac{1}{2}\mu_0$ in the present case. Such an effect is absent in the approach followed in the former case. This is reflected in a (four times) larger oscillation length. Secondly, the mixing of $\nu_e$ with bulk states is different in our case from that of Ref., though for large values of $n$ they coincide. The averaged probability is now obtained in a straightforward manner to be $\overline{P_{ee}} = c_\zeta^4 + \frac{2}{3} s_\zeta^4$, which is smaller than in the two-neutrino case with the same mixing angle, although for $\zeta \ll 1$ it approaches the former result $\overline{P_{ee}} \simeq 1 - 2\zeta^2$. Moreover, we may average over all the modes but the lowest-frequency one, to get

$$P_{ee}(E) = c_\zeta^4 + \frac{2}{3}\, s_\zeta^4 + \frac{16}{\pi^2}\, c_\zeta^2 s_\zeta^2 \left[1 - 2\sin^2\left(\frac{\mu_0^2 L}{16 E}\right)\right]. \qquad (31)$$

Hence, the depth of the oscillations is now of the order of $\frac{32}{\pi^2} c_\zeta^2 s_\zeta^2$.

In conclusion, we have presented a new way to understand small neutrino masses in models with large extra dimensions by including the right-handed neutrino among the brane particles. In addition to several quantitative differences from earlier works, which have only the standard model particles in the brane, we find that the string scale in the new class of models is necessarily larger. However, we need one extra dimension to be of large size if we want to solve the solar neutrino problem as well as to have other meaningful oscillations of the known neutrinos with the bulk neutrinos. We have not discussed the matter effect and the implications for the solar neutrino puzzle, nor have we discussed the astrophysical constraints on our scenario. We hope to return to these topics subsequently. But we have noted that the multiple-ripple effect induced by the KK modes in the $\nu_e$ survival probability, noted for the earlier case in Ref., remains in our case too. Secondly, we have focussed only on the first generation of neutrinos; but there is no obstacle in principle to extending the discussion to the other generations. Clearly, as in the case of Ref., attempting to solve the atmospheric neutrino puzzle by $\nu_\mu$-$\nu_{bulk}$ oscillation would require that we make the bulk radius $R_1$ smaller. Our overall impression is that while it is possible to understand the smallness of the neutrino mass using the bulk neutrinos without invoking the seesaw mechanism and the concomitant high-scale physics such as $B-L$ symmetry, a unified picture that clearly incorporates all three generations with the preferred mixing and mass patterns is yet to come, whereas in existing "four-dimensional" models there exists perhaps a surplus of ideas that lead to desirable neutrino mass textures. From this point of view, the study of neutrino physics in models with extra dimensions is still in its infancy, and whether it grows into adulthood depends much on the direction that these extra-dimensional models take in the coming years.

Acknowledgements.
The work of RNM is supported by a grant from the National Science Foundation under grant number PHY-9802551. The work of SN is supported by the DOE grant DE-FG03-98ER41076. The work of APL is supported in part by CONACyT (México). RNM would like to thank the Institute for Nuclear Theory at the University of Washington for hospitality and support during the last stages of this work. SN gratefully acknowledges the warm hospitality and support during his visit to the University of Maryland particle theory group when this work was started and to the Fermilab theory group where it was completed.
# Physics at $\gamma\gamma$ and $\gamma e$ colliders

## 1 Possible scenarios

• The developed physical programs of the new machines are based on the idea that Nature is so favourable to us that it has placed an essential fraction of the new particles within the LHC operation domain. Moreover, in these programs we believe that new particles or interactions belong to one of the known sets, and we consider the opportunity to see them. Together with the Higgs boson (or bosons) of the one- or two-doublet SM, the expected most probable variants are SUSY (the MSSM), leptoquarks, $WW$ or $WZ$ resonances, compositeness, effects from large extra dimensions,... An assumption of this type is a necessary component of the physical program of the LHC, where the discovery of something unexpected is hardly probable. The main goal of the LHC is the discovery of some of the enumerated particles or effects. The $e^+e^-$ Linear Colliders should measure the parameters of new particles and the corresponding couplings with high accuracy. Some unexpected new particles will be seen here via the threshold behaviour in two-jet or lepton production. The Photon Colliders in this approach are considered as machines for the precise study of QCD and the cross-checking of couplings measured earlier.

• In this approach one forgets some problems of the SM (including its foundation). The solutions (still unknown) could be tested with the aid of Photon Colliders only.

• We should be ready to meet the opposite opportunity: no new particles will be discovered at the LHC, except the Higgs boson (SM, 2HDM or MSSM). In this case Photon Colliders could give the best key to the discovery of a New Physics.

• In this report we consider this very opportunity. We assume that either the New Physics differs strongly from the expected variants, or the new particles in one of the expected variants are very heavy (except the Higgs boson(s)), or some difficult opportunity within the MSSM is realized (see e.g. ).

## 2 Photon Colliders. Main features

The future Linear Colliders would form complexes including both an $e^+e^-$ collider mode and a Photon Collider mode ($\gamma\gamma$ and $\gamma e$). This photon mode is based on the $e^+e^-$ one with electron energy $E$ and luminosity $\mathcal{L}_{ee}$. Here I present its main characteristics, assuming no special efforts in optimization of the photon mode at the initial stages of acceleration:

* Characteristic photon energy $E_\gamma \approx 0.8E$.
* Annual luminosity $\mathcal{L}_{\gamma\gamma} \approx 100$ fb<sup>-1</sup> ($\mathcal{L}_{\gamma\gamma} \approx 0.2\,\mathcal{L}_{ee}$).
* Mean energy spread $\langle\Delta E_\gamma\rangle \approx 0.07 E_\gamma$.
* Mean photon helicity $\langle\lambda\rangle \approx 0.95$ with easily variable sign. One can transform this polarization into the linear one.
* There are no special reasons against observations at small angles, except non-principal technical details of the design.
* The conversion region is a $\gamma e$ collider with c.m.s. energy about 1.2 MeV but with annual luminosity $\sim 10^6$ fb<sup>-1</sup>!

Below we denote by $\lambda_i$ the mean degree of the photon circular polarization (helicity) and by $\ell_i$ the mean degree of their linear polarization; $\varphi$ is the angle between the directions of these linear polarizations for the opposite photons, and $y_i = E_{\gamma i}/E$.

Approximate photon spectra. The luminosity distribution in the effective $\gamma\gamma$ mass usually has two well-separated peaks: a) a high energy peak with the mentioned mean energy spread of 7% and integrated luminosity $0.2\,\mathcal{L}_{ee}$; b) a wide low energy peak. The latter depends strongly on details of the design; it is unsuitable for the study of New Physics phenomena.
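The value $E_\gamma \approx 0.8E$ is the usual Compton edge of the backscattered laser photons. As a reminder (standard photon-collider kinematics rather than a result of this report), the maximum energy of the scattered photon is

$$\frac{\omega_{max}}{E} = \frac{x}{1+x}, \qquad x = \frac{4E\omega_0}{m_e^2},$$

where $\omega_0$ is the laser photon energy; the conventional choice $x \approx 4.8$, just below the threshold for $e^+e^-$ pair production in collisions of laser and scattered photons, gives $\omega_{max} \approx 0.83E$.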
The form of the high energy peak depends on the reduced distance $\rho$ between the conversion and interaction points. At $\rho = 0$ the separation of the peaks is practically absent. In the modern projects $\rho \approx 1$. In this case the peaks are separated well; the high energy peak contains about 30% of the geometrical luminosity and 80% of the luminosity of the peak at $\rho = 0$. At $\rho^2 < 1.3$ and beam ellipticity $A > 1.5$ the high energy peak is independent of $A$ (usually $A \gg 1$). In this region the form of the high energy peak is approximated with high precision by a convolution of two effective photon spectra with the functions $\tilde{F}(y,\rho)$ written in Ref. :

$$d\mathcal{L} = \tilde{F}(y_1,\rho)\,\tilde{F}(y_2,\rho)\, dy_1\, dy_2. \qquad (1)$$

## 3 Higgs window to a New Physics

The basic point here is the opportunity to measure the two-photon width of the SM Higgs boson with an accuracy of 2% (one can hope to improve this accuracy, since the luminosity integral used, 30 fb<sup>-1</sup>, corresponds to only three months of operation).

• Let us first discuss the opportunity that one of the discussed scenarios (2HDM, MSSM) is realized, but the new particles are heavier than those observable at the LHC (decoupling regime), with two variants:

* Strong decoupling. The additional Higgs bosons $H, A, H^\pm$ are also very heavy.
* Weak decoupling. The additional Higgs bosons $H, A, H^\pm$ are lighter than 400 GeV.

We distinguish the following variants of the SM or its extensions:

• the SM with one Higgs boson doublet — SM;
• the SM with two Higgs boson doublets — 2HDM, or its SUSY extension — the MSSM in the decoupling regime.

In this regime the difference between the MSSM and the 2HDM in the MSSM-like variant (at $M_{H^\pm}^2 \gg \lambda_5 v^2$) is hardly observable. So we usually denote both cases as MSSM. We denote by 2HDM only its general variant with $M_{H^\pm}^2 \sim \lambda_5 v^2$. Our discussion below is based on the results of Refs. .

The studies at the LHC could give us some coupling constants of the Higgs boson with matter (quarks, leptons, $W$ and $Z$) with an accuracy of about 10%. The measurements at the $e^+e^-$ Linear Collider will improve the accuracy to the level of about 1%. A deviation of these couplings from their SM values could be considered as a signal for the realization of a 2HDM or the MSSM. The measurement of the two-photon width of a (lightest) Higgs boson allows one to separate the general 2HDM from the MSSM. The additional measurement of the $HZ\gamma$ coupling in the $e\gamma \to eH$ process could support this differentiation.

We show this for a difficult enough case. Let the measured couplings of the Higgs boson with matter be given by the SM. The same values can be obtained in the 2HDM or MSSM at $\beta - \alpha = \pi/2$. In the $H\gamma\gamma$ and $HZ\gamma$ widths the effects of the $W$ and $t$ quark loops are of opposite sign. That is why the effect of very heavy charged Higgses is enhanced here, up to about 13% in the general 2HDM. This difference will be seen well even in the case of strong decoupling. In the MSSM this effect is reduced to the few per cent level (just as the effect of heavy superparticles), depending on the mass of these particles. Therefore the measurement of the Higgs boson $\gamma\gamma$ width with 2% accuracy could answer which model is realized — the general 2HDM or the MSSM. And one can get this answer before the discovery of superpartners.
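The size of the effect can be made plausible with the familiar heavy-mass limits of the one-loop $H\to\gamma\gamma$ amplitudes (a rough sketch using standard asymptotic values, not the exact results of the cited analyses): the $W$ loop contributes $A_W \approx -7$, the top loop $N_c Q_t^2 \cdot 4/3 = 16/9$, and a heavy charged Higgs of the non-decoupling 2HDM adds about $+1/3$, so

$$\frac{\Gamma^{2HDM}_{H\gamma\gamma}}{\Gamma^{SM}_{H\gamma\gamma}} \approx \left|\frac{-7 + 16/9 + 1/3}{-7 + 16/9}\right|^2 \approx 0.88,$$

i.e. a shift of order 12-13%, in line with the number quoted above, while in the decoupling (MSSM-like) case the charged-Higgs contribution is suppressed by $v^2/M_{H^\pm}^2$ and only a few per cent survives.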
This measurement could differentiate the general 2HDM from the SM as well as from the MSSM at $\beta - \alpha = \pi/2$. The discrimination of the MSSM from the MSSM-like 2HDM needs higher precision in the measurement of the discussed width. In the weak decoupling case the additional measurements of the $H\gamma\gamma$ and $A\gamma\gamma$ couplings in $\gamma\gamma$ collisions, and of the $HZ\gamma$, $AZ\gamma$ and $H^\pm W^\mp\gamma$ couplings in $\gamma e$ collisions, would be very useful to discriminate the discussed opportunities.

• Let the New Physics be different from the discussed models. When the collision energy is below the scale of New Physics $\Lambda$, the latter manifests itself via anomalies in the interactions of known particles. In this case Photon Colliders provide the best place for the discovery of this New Physics and for understanding it. Indeed, the $H\gamma\gamma$, $H\gamma Z$,... vertices in the SM are one-loop effects. Therefore the relative value of the anomalies is enhanced here in comparison with other interactions. Together with the SM effects, the above anomalies are described by an Effective Lagrangian relevant to photon collisions:

$$\mathcal{L}_{H\gamma} = \frac{G_\gamma H F^{\mu\nu} F_{\mu\nu}}{2v} + \frac{G_Z H F^{\mu\nu} Z_{\mu\nu}}{v} + \frac{\tilde{G}_\gamma H F^{\mu\nu} \tilde{F}_{\mu\nu}}{2v} + \frac{\tilde{G}_Z H \tilde{F}^{\mu\nu} Z_{\mu\nu}}{v},$$
$$G_i = \frac{\alpha \Phi_i^{SM}}{4\pi} + \theta_i \frac{v^2}{\Lambda_i^2}, \quad (i = \gamma, Z). \qquad (2)$$

Here $Z_{\mu\nu}$, $F_{\mu\nu}$ and $\tilde{F}_{\mu\nu}$ are the standard field strength tensors, and $v = 246$ GeV. The values $\Phi_i^{SM}$ are well known ($|\Phi_i| \sim 1$). For the CP-even case $\tilde{G}_i = 0$, $\theta_i = \pm 1$. For the CP-odd case all the quantities $\theta_i$, $\tilde{\theta}_i$ can be complex, $\theta_a = e^{i\varphi_a}$.

The CP-odd anomalies manifest themselves in the polarization asymmetries in the production processes $\gamma\gamma \to H$ and $e\gamma \to eH$. In particular, for the $\gamma\gamma \to H$ process we have

$$\langle\sigma\rangle(\lambda_i, \ell_i, \psi) = \langle\sigma^{SM}\rangle_{np}\, \frac{T(\lambda_i, \ell_i, \psi)}{\left|G_\gamma^{SM}\right|^2};$$
$$T(\lambda_i, \ell_i, \psi) = |G_\gamma|^2\left(1 + \lambda_1\lambda_2 + \ell_1\ell_2 \cos 2\psi\right) + |\tilde{G}_\gamma|^2\left(1 + \lambda_1\lambda_2 - \ell_1\ell_2 \cos 2\psi\right)$$
$$+\, 2\,{\rm Re}(G_\gamma^* \tilde{G}_\gamma)(\lambda_1 + \lambda_2) + 2\,{\rm Im}(G_\gamma^* \tilde{G}_\gamma)\,\ell_1\ell_2 \sin 2\psi. \qquad (3)$$

Similar equations were obtained for the $e\gamma \to eH$ process and the $HZ\gamma$ anomalies. The sensitivity of the corresponding experiments to the scale of New Physics was studied in Refs. [9-13].

## 4 Gauge boson physics

• In the discussed energy range the New Physics effects will be seen as some deviations from the predictions of the SM. These deviations can be described by anomalies in the Effective Lagrangian $\mathcal{L}_{eff}$. There are many anomalies of dimensions 6 and 8 relevant to the gauge boson interactions (CP-even and CP-odd). Each process is sensitive to several of them. The separate extraction of different anomalies is difficult in the $e^+e^-$ mode, with its only one well-measurable process $e^+e^- \to W^+W^-$. (The $\gamma WW$ and $ZWW$ vertices enter this process simultaneously.)
## 4 Gauge boson physics

• In the discussed energy range the New Physics effects will be seen as deviations from the predictions of the SM. These deviations can be described by anomalies in an Effective Lagrangian $`\mathcal{L}_{eff}`$. There are many anomalies of dimension 6 and 8 relevant to the gauge boson interactions (both CP-even and CP-odd). Each process is sensitive to several of them. The separate extraction of the different anomalies is difficult in the $`e^+e^-`$ mode, with its only one well-measurable process, $`e^+e^-\to W^+W^-`$. (The $`\gamma WW`$ and $`ZWW`$ vertices enter this process simultaneously.)

In these problems the potential of the Photon Collider is exceptional. Indeed, a large variety of processes with gauge boson production will be observed here with both high purity and high counting rates (millions or hundreds of thousands of events per year): $`\gamma \gamma \to W^+W^-`$, $`e\gamma \to W\nu `$, $`\gamma e\to eWW`$, $`\gamma e\to \nu WZ`$, $`\gamma \gamma \to WWZ`$, $`\gamma \gamma \to WWWW`$, $`\gamma \gamma \to WWZZ`$, $`\gamma e\to \nu e^+e^-W`$, …. This large variety makes it possible to separate the different anomalies. For example, one can pursue the following program: first, extract the $`\gamma WW`$ anomalies from the $`e\gamma \to \nu W`$ process; then, extract the $`ZWW`$ anomalies from the $`e^+e^-\to W^+W^-`$ process; last, extract the $`\gamma \gamma WW`$ anomaly from $`\gamma \gamma \to W^+W^-`$. In the same way the process $`\gamma e\to eWW`$ allows one to study the $`\gamma ZWW`$ anomaly. The study of particular distributions in specific regions of parameter space can enhance the effect of some anomalies (in comparison with the entire cross section). For example, the total cross section of the process $`\gamma e\to eWW`$ is determined mainly by the $`\gamma \gamma \to W^+W^-`$ subprocess, whereas the $`Z\gamma WW`$ subprocess becomes essential in the cross section at electron transverse momenta $`p_\perp >30`$ GeV; the polarization asymmetry is sensitive to possible CP-odd anomalies.

• The dynamical $`SU(2)\times U(1)`$ symmetry breaking is also often considered. Here the breaking is caused by a strong interaction of the $`W`$ bosons (their longitudinal components), instead of Higgs bosons, at $`E\gtrsim 4\pi v\approx 3`$ TeV. At lower energies the amplitudes differ only weakly from their SM values. The $`\gamma e`$ collider allows one to study this strong interaction at smaller energies. For this goal, one should study the process $`\gamma e\to eWW`$ and consider the charge asymmetry of the produced $`W`$'s (longitudinally polarized ones, if possible), caused by the interference between the t-channel $`\gamma /Z`$ exchange diagram and the corresponding bremsstrahlung diagram. Even if the cross section itself differs weakly from its SM value, this asymmetry is proportional to $`\mathrm{cos}(\delta _0-\delta _1)`$, where $`\delta _i`$ are the phases of the strongly interacting $`WW`$ amplitudes.

• The reactions $`\gamma \gamma \to W^+W^-`$ and $`e\gamma \to W\nu `$ will give about 10 million $`W`$'s per year. This provides an opportunity to measure the corresponding cross sections with two-loop accuracy. The EW theory is a standard QFT based on the complete set of asymptotic states of the fundamental particles. It is the basis for the construction of perturbation theory with the standard particle propagators. But the fundamental particles of the theory ($`W`$, $`Z`$, $`H`$) are unstable. A QFT with unstable fundamental particles is unknown up to now. Without such a theory a precise description of EW processes is impossible. From this point of view, the breaking of gauge invariance in calculations of processes with gauge boson production (like $`e^+e^-\to W^+W^-\to \mu \overline{\nu }q\overline{q}`$) is not the main effect but a signal of the unsatisfactory state of the EW theory. This signal should be used in the construction of a satisfactory scheme. We hope that new features of such a scheme (as compared with existing recipes like that of Ref. ) will be seen at the expected two-loop accuracy level. The solution of this problem will be an essential step in the construction of a QFT relevant for the description of the real world.
• An additional interesting field here is the study of the QCD radiative corrections to the $`\gamma \gamma \to W^+W^-`$ process in the Pomeron regime (two-gluon, etc., exchange between quarks from the $`W`$ decays or in their polarization operator at $`s\gg M_W^2`$). There is a large area for new work here.

## 5 The discovery of new unexpected particles

The experiments at the LHC could discover many expected particles, but the discovery there of some unexpected particle is a very difficult task. Assuming that the decay products of an unexpected charged particle contain known particles, such new particles can be discovered at a Linear Collider in both the $`e^+e^-`$ and the $`\gamma \gamma `$ mode. The higher production cross section in the $`\gamma \gamma `$ mode compared with the $`e^+e^-`$ mode compensates for the difference in the luminosities of these modes. The values of these cross sections depend on $`s/M^2`$ and on the charge and spin of the produced particle. In the $`e^+e^-`$ mode the additional dependence on the coupling to the $`Z`$ makes an unambiguous reconstruction of the charge and spin of the produced particle from the data difficult. The $`\gamma \gamma `$ mode is free from this difficulty. Additionally, the polarization dependence is useful for determining the spin of the produced particle independently of its charge. Near the threshold this cross section in the $`\gamma \gamma `$ mode is proportional to

$$(1+\lambda _1\lambda _2\pm \ell _1\ell _2\mathrm{cos}2\varphi )$$ (4)

with sign $`+`$ for scalars and sign $`-`$ for spinors. These cross sections decrease slowly as the energy grows and remain high at large enough energies. This allows one to study the decay products in a region where they do not mix with each other.

## 6 The $`\gamma \gamma \to \gamma \gamma `$ process for nonstandard New Physics.

Some authors discuss cases in which this process becomes observable due to loops of some new particles $`F`$. This is possible if the c.m.s. energy is larger than $`2M_F`$ (usually much larger). These effects cannot give us new information about the particles $`F`$, because processes like $`\gamma \gamma \to F\overline{F}`$ have higher cross sections and are usually observable at lower energy. So I discuss here only two topics related to nonstandard New Physics: • a heavy point-like Dirac monopole; • the effect of extra dimensions. In both cases we consider the process far below the new-particle production threshold. Denoting the corresponding mass by $`M`$, the cross section can be written in the form

$$\sigma (\gamma \gamma \to \gamma \gamma )=\frac{A}{32\pi s}\left(\frac{s}{4M^2}\right)^4$$ (5)

with a specific angular distribution (roughly isotropic) and polarization dependence. Wide-angle elastic light-to-light scattering has an excellent signature and a small QED background. The observation of strong elastic $`\gamma \gamma `$ scattering, rising quickly with energy, would be a signal for one of the mentioned mechanisms. The study of the polarization and angular dependence at a photon collider can discriminate which mechanism is at work.

• Point-like Dirac monopole. The existence of this particle would explain the mysterious quantization of electric charge. There is no place for it in modern theories of our world, but there are no firm reasons against its existence either. At $`s\ll M^2`$ the electrodynamics of monopoles is expected to be similar to standard QED, with effective perturbation parameter $`g\sqrt{s}/(4\pi M)`$, where the coupling constant is $`g=n/(2e)`$. The effect is described by a monopole loop, so the coefficient $`A`$ in Eq. (5) is calculated within QED. It is proportional to $`g^8`$ and depends strongly on the spin $`J`$ of the monopole (just as the details of the angular and polarization dependence do). For example, $`A(J=1)/A(J=0)\approx 1900`$. The effect can be seen at TESLA500 for monopole masses $`M<4`$–$`10`$ TeV (depending on the monopole spin). The modern limit obtained at the Tevatron is about one order of magnitude lower.
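A small sketch of the energy dependence implied by Eq. (5). The values $`A=1`$ and $`M=2`$ TeV are placeholders (the text explains that $`A`$ encodes the mechanism: $`A\sim 1`$ for extra dimensions, $`A\propto g^8`$ for a monopole loop), and the GeV<sup>-2</sup>-to-fb conversion constant is the standard one.

```python
import math

# sigma(gamma gamma -> gamma gamma) = A/(32 pi s) * (s/4M^2)^4, Eq. (5);
# note the steep growth sigma ~ s^3 below the new-physics threshold.

GEV2_TO_FB = 3.894e11   # 1 GeV^-2 = 3.894e11 fb

def sigma_fb(sqrt_s_gev, M_gev, A=1.0):
    s = sqrt_s_gev ** 2
    return A / (32.0 * math.pi * s) * (s / (4.0 * M_gev ** 2)) ** 4 * GEV2_TO_FB

for rs in (200.0, 400.0, 500.0):
    print(f"sqrt(s) = {rs:5.0f} GeV: sigma ~ {sigma_fb(rs, M_gev=2000.0):.3g} fb")
```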
• Effect of extra dimensions. Nowadays a scenario is considered where gravity propagates in the $`(4+n)`$-dimensional bulk of spacetime, while gauge and matter fields are confined to the (3+1)-dimensional world volume of a brane configuration. The extra $`n`$ dimensions are compactified with some length scale $`R`$, which results in Kaluza–Klein excitations with masses $`\sim \pi n/R`$. The corresponding scale $`M`$ in our world is assumed to be a few TeV. The particles of our world interact via the set of these Kaluza–Klein excitations of spin 2 or 0. In this approach all unknown coefficients are accumulated in the definition of $`M`$ in the equation for the cross section (with $`A\sim 1`$). The angular and polarization dependences of the cross section are also known. Similar results were obtained for other processes $`B\overline{B}\to C\overline{C}`$. The two-photon final state has the best signature and the lowest SM background. The two-photon initial state has a numerical advantage compared with the $`e^+e^-`$ one. The limit on the effect of extra dimensions attainable at the Photon Collider based on TESLA500 is $`M\approx 2.2`$ TeV. It is higher than that attainable at the LHC.

## 7 Axions, etc., … from the conversion region

Some very light and elusive particles $`a`$ (axions, majorons, …) are expected in many schemes. They can be produced in the conversion region, which is a $`\gamma e`$ collider with $`\sqrt{s_{e\gamma _0}}\approx 1.2`$ MeV or $`\sqrt{s_{\gamma \gamma _0}}\approx 1`$ MeV and with an annual luminosity of about a million fb<sup>-1</sup>. The production processes are

$$e\gamma _0\to ea,\qquad \gamma \gamma _0\to a.$$ (6)

The angular spread of these $`a`$ is very narrow, and they interact with matter very weakly. So the registration scheme can be of the following type: after the photon beam dump, one places a lead cylinder of diameter about 3–5 cm and length 300 m – 1 km (within the angular spread of the produced particles). Behind it, a set of scintillators covering a circle of radius about 3 m will register most of the products of the interaction of these particles with the lead nuclei within the cylinder.

## 8 Final notes

The schedule of operation of the different modes and energies of a Linear Collider depends on the results of the LHC studies. Two variants should be considered:

• Suppose some new particles (SUSY-like) are discovered at the LHC. In this case the natural continuation of the LC500 program will be LC800, with Photon Colliders after that.

• Suppose no new particles (except the Higgs boson) are discovered at the LHC. In this case the Photon Collider should operate as soon as possible. For example, a Photon Collider with c.m.s. energy $`\sqrt{s}\sim M_h\sim 100`$–$`200`$ GeV, mainly for the study of the Higgs boson, could be the first stage of the entire LC project. The advantages of this path are that the basic electron energy is lower and that a positron beam is unnecessary.

## Acknowledgment

I am grateful to J.J. van der Bij and S. Söldner-Rembold for the kind invitation to Freiburg and this conference. This work was also supported by grant RFBR 99-02-17211 and a grant of the Sankt-Petersburg Center of Fundamental Sciences.
# Self-organized criticality in a model of biological evolution with long range interactions

## Abstract

In this work we study the effects of introducing long-range interactions in the Bak-Sneppen (BS) model of biological evolution. We analyze a recently proposed version of the BS model where the interactions decay as $`r^{-\alpha }`$; in this way the first-nearest-neighbors model is recovered in the limit $`\alpha \to \infty `$, and the random-neighbors version for $`\alpha =0`$. We study the space and time correlations and analyze how the critical behavior of the system changes with the parameter $`\alpha `$. We also study the sensitivity to initial conditions of the model using the spreading-of-damage technique. We find that the system displays several distinct critical regimes as $`\alpha `$ is varied from $`\alpha =0`$ to $`\alpha \to \infty `$.

In recent years an increasing number of systems that present Self-Organized Criticality have been widely investigated. The general approach of statistical physics, where simple models try to capture the essential ingredients responsible for a given complex behavior, has turned out to be very powerful for the study of this kind of problem. In particular, Bak and Sneppen have introduced a simple model which has been shown to reproduce evolutionary features such as punctuated equilibrium. Although this model does not intend to give an accurate description of Darwinian evolution, it captures in a single and very simple scheme (it is based on very simple dynamical rules) several features that are expected to be present in evolutionary processes, namely punctuated equilibrium, Self-Organized Criticality (SOC) and weak sensitivity to initial conditions (WSIC), i.e., chaotic behaviour where the trajectories depart as a power law of time instead of exponentially. In this sense, an important question arises about the robustness of such properties against modifications (i.e., complexifications) of the simple dynamical rules on which the model is based.

The original model, hereafter referred to as the first-nearest-neighbors (FNN) version, includes only nearest-neighbor interactions in a one-dimensional chain. This model presents SOC and weak sensitivity to initial conditions. On the other hand, another version of the model, with interactions between sites randomly chosen in the lattice (which can therefore be regarded as a mean-field version of the FNN), hereafter referred to as the random-neighbors (RN) version, does not present SOC. Moreover, it is not expected to present WSIC (and we shall show in this work that this is indeed the case).

Systems of coevolving species are expected to have distance-decaying interactions, thus lying somewhere between the two previous schemes. Although not well defined, the concept of “distance” between species in these scenarios may be regarded as associated with some complex network of relationships, including competition for resources and predator-prey relations, among many others. In this sense, the environmental modifications produced by the extinction of one species may be expected to affect many others not directly related to it, with the intensity of this influence depending on the above-mentioned distance. Along this line, in this Letter we focus on the robustness of the SOC and the weak sensitivity to initial conditions of the Bak and Sneppen model against the introduction of long-range, distance-dependent interactions.
To this end, we consider a generalization of the model, recently proposed by Cafiero et al., that takes into account long-range interactions between species which decay as $`r^{-\alpha }`$, where $`r`$ represents the distance between species (measured in lattice units, i.e., $`r=1,2,\mathrm{}`$ in a chain) and $`\alpha >0`$ is a parameter that controls the effective range of the interactions. The major value of this generalization, unlike others introduced in the literature, resides in the fact that it allows one to retrieve the two above-mentioned models simply by varying the parameter $`\alpha `$ continuously: when $`\alpha \to 0`$ we recover the RN version, while for $`\alpha \to \infty `$ we recover the FNN one.

The model consists of an $`N`$-site linear lattice with periodic boundary conditions (i.e., a ring of $`N`$ sites), where each site represents a species. Each species has an associated real variable $`b_j`$, $`0\le b_j\le 1`$, that measures the relative fitness barrier. Starting from a random barrier distribution, at each successive time step we identify the smallest barrier $`b_j`$ and modify it by choosing a new random value from a uniform distribution. This change represents a jump of a species across its fitness barrier to a mutated species. This mutation must also affect other species in the chain, and to take this phenomenon into account one defines a neighborhood which will also be modified in the same way. In Ref. ($`\alpha \to \infty `$) the authors considered the case in which this neighborhood consists of the two nearest species of the mutating one, while in Ref. ($`\alpha =0`$) the neighborhood consists of $`K-1`$ species chosen at random among all the species of the chain. In order to generalize these models, we choose the neighborhood (of $`K-1`$ species) at random with a probability that decays as $`r_{ij}^{-\alpha }`$, where $`r_{ij}`$ is the distance of a given species $`j`$ to the species $`i`$ with the smallest barrier, and give them new random values chosen from a uniform distribution. In this way, for $`K=3`$ and $`\alpha \to \infty `$ we recover the two-nearest-neighbors model, while for $`\alpha =0`$ we reproduce the random-neighbors mean-field version.

To determine whether the system attains a self-organized critical state, we analyze the following quantities: the barrier distribution $`P(b)`$ in the final steady state, the spatial correlation $`C(r)`$ and the first-return-time distribution $`C(t)`$. In order to study the sensitivity to initial conditions, we calculate the time evolution of the Hamming distance $`D(t)`$ between two different replicas submitted to the same noise (the damage-spreading method). Since our main interest is to analyze the crossover between the limits $`\alpha =0`$ and $`\alpha \to \infty `$, we restrict ourselves to the $`K=3`$ case. Later on we will briefly discuss the effect of increasing $`K`$.

Figure 1 presents the distribution $`P(b)`$ of barrier values for three different $`\alpha `$ values. Note that, independently of $`\alpha `$, the curves are qualitatively similar. The typical behavior of these curves can be characterized by the value of $`P(b\to 1)`$ (i.e., the saturation value of the distribution), which is displayed in Fig. 2 as a function of $`\alpha `$ for three different system sizes. We can clearly distinguish three different regimes. For $`\alpha <1`$ the value of $`P(b\to 1)`$ is independent of $`N`$, and the behavior observed for $`P(b)`$ collapses onto the one observed in the RN Bak-Sneppen model.
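A minimal simulation sketch of the dynamics just described, for $`K=3`$: the minimum-barrier site plus two others, drawn on the ring with probability proportional to $`r^{-\alpha }`$, receive new uniform random barriers at each step. System size, $`\alpha `$ and the number of steps are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_bs(N=512, alpha=2.0, K=3, steps=200000):
    """Long-range Bak-Sneppen model; returns final barriers and minima history."""
    b = rng.random(N)
    min_sites = np.empty(steps, dtype=int)
    offsets = np.arange(1, N)
    r = np.minimum(offsets, N - offsets)        # distance on the ring
    p = r.astype(float) ** (-alpha)
    p /= p.sum()                                # selection probability ~ r^-alpha
    for t in range(steps):
        i = int(np.argmin(b))                   # species with the smallest barrier
        min_sites[t] = i
        picks = rng.choice(offsets, size=K - 1, replace=False, p=p)
        b[i] = rng.random()                     # mutate the minimum ...
        b[(i + picks) % N] = rng.random(K - 1)  # ... and its K-1 neighbors
    return b, min_sites

b, min_sites = run_bs()
# After the transient, a histogram of b approximates the stationary P(b),
# whose saturation value P(b -> 1) depends on alpha as described above.
print("fraction of barriers above 0.6:", np.mean(b > 0.6))
```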
There exists some value $`\alpha =\alpha _c`$ such that for intermediate values $`1<\alpha <\alpha _c`$ the value of $`P(b\to 1)`$ is very sensitive to changes in $`\alpha `$, increasing as $`\alpha `$ grows; finally, for $`\alpha >\alpha _c`$, $`P(b\to 1)`$ reaches a saturation value, and we recover the behavior of $`P(b)`$ of the FNN model when $`N\to \infty `$. The value of $`\alpha _c`$ can be roughly estimated from numerical extrapolations of the curves to $`1/N\to 0`$. We obtained $`\alpha _c\approx 4`$ for $`K=3`$ (further analysis of the critical exponents will confirm this estimate).

We now consider the spatial and temporal correlations between the minimum barriers in order to determine the presence of SOC. Figure 3 presents in a log-log plot the probability $`C(r)`$ that the minimum barriers at two successive updates will be separated by $`r`$ sites. We observe a power-law behavior $`C(r)\propto r^{-\pi }`$ for all $`\alpha \ne 0`$. In Fig. 4 we show how $`\pi `$ changes with $`\alpha `$. When $`\alpha =0`$ the spatial correlation is constant ($`\pi =0`$), as in the RN model. As $`\alpha `$ grows, $`\pi `$ increases until it reaches a saturation value $`\pi =3.2\pm 0.2`$ for $`\alpha >\alpha _c`$, in agreement with the results observed in the FNN model.

Next we calculate the first-return-time distribution $`C(t)`$, defined as the probability that, if a given site is the minimum at time $`t_0`$, it will again be the minimum for the first time at time $`t_0+t`$. In Fig. 5 we present our results for four different values of $`\alpha `$ (0.5, 1.5, 2.0 and 3.0). For $`\alpha \ge 2`$ the first-return-time distribution clearly presents a power-law behavior $`C(t)\propto t^{-\tau }`$, even for finite system sizes. For $`\alpha <2`$ the system displays finite-size effects, as can be seen in Fig. 6, where we present $`C(t)`$ for $`\alpha =0.5`$ and different system sizes; it is clear that a power-law decay emerges as $`N`$ grows. In Fig. 7 we show how the first-return-time exponent $`\tau `$ depends on $`\alpha `$. Here again we find three different regimes. For $`\alpha <1`$ (unlike the spatial-correlation exponent) all the curves $`C_\alpha (t)`$ collapse and $`\tau =1.5`$, displaying the same behavior found in the RN Bak-Sneppen model with $`K=3`$, where $`\tau =3/2`$ exactly. For $`1<\alpha <\alpha _c`$ the value of $`\tau `$ depends strongly on $`\alpha `$, with a minimum at $`\alpha \approx 1.6`$, in agreement with the results of Cafiero et al. For $`\alpha >\alpha _c`$ the value of $`\tau `$ attains a saturation value $`\tau =1.56\pm 0.05`$, in agreement with the value observed in the FNN Bak-Sneppen model, where $`\tau =1.6`$.

Summarizing the results displayed in these figures: for $`0<\alpha <1`$, since $`\tau `$ presents the same trivial value observed in the RN Bak-Sneppen model, we cannot regard the system as critical. For $`1<\alpha <\alpha _c`$ the exponents depend strongly on $`\alpha `$, and since the exponents are non-trivial, we regard this as a strong indicator of criticality in the system. Finally, for $`\alpha >\alpha _c`$ the exponents become independent of $`\alpha `$, taking the short-range values observed in the FNN Bak-Sneppen model. We have observed that as we increase the number of interacting sites $`K`$, the value of $`\alpha _c`$ decreases, slowly converging to $`\alpha _c=2`$. This behavior is reminiscent of the one-dimensional ferromagnetic Ising model with the same type of interactions considered here, where the borderline between the short- and long-range critical regimes is $`\alpha =2`$.
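The first-return-time distribution $`C(t)`$ can be estimated directly from the sequence of minimum sites produced by a run such as the sketch above; a log-binned histogram and a straight-line fit in the log-log plane then give the exponent $`\tau `$.

```python
import numpy as np

def first_return_times(min_sites):
    """Time intervals between successive visits of the minimum to a site."""
    last_seen, times = {}, []
    for t, i in enumerate(min_sites):
        if i in last_seen:
            times.append(t - last_seen[i])
        last_seen[i] = t
    return np.asarray(times)

def log_binned(times, nbins=25):
    edges = np.unique(np.logspace(0, np.log10(times.max()), nbins).astype(int))
    hist, _ = np.histogram(times, bins=edges, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])
    return centers, hist

# usage with the min_sites array of the previous sketch:
# t_c, C_t = log_binned(first_return_times(min_sites))
# tau = -np.polyfit(np.log(t_c[C_t > 0]), np.log(C_t[C_t > 0]), 1)[0]
```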
Next, we study the sensitivity to initial conditions of this model and its dependence on $`\alpha `$. To do so, we use the spreading-of-damage technique, which had previously been applied to the FNN model. In this particular limit it was shown that the system presents a weak sensitivity to initial conditions, characterized by a power-law growth, as time goes on, of the Hamming distance between replicas of the system. This behavior is reminiscent of that observed at the edge of chaos in dynamical systems with few degrees of freedom. The procedure is as follows: given a configuration of $`N`$ barrier values $`(\{b_j^{(1)}\})`$ in the self-organized critical state, we create a replica of the system $`(\{b_j^{(2)}\})`$ by choosing a site randomly and interchanging the value of this site with the value of the site with the smallest barrier. From then on we use the same random numbers for updating the barrier values in both replicas. We define the Hamming distance between the two replicas as:

$$D(t)\equiv \frac{1}{N}\sum _{j=1}^{N}|b_j^{(1)}(t)-b_j^{(2)}(t)|$$ (1)

If the Hamming distance goes to zero we say that the system is in a frozen phase. On the other hand, if the Hamming distance remains nonzero we say that the system is chaotic, in analogy with dynamical systems. Regarding the behavior of the average normalized Hamming distance $`D(N,t)\equiv \langle D(t)\rangle /D(1)`$, we observed two different regimes as $`\alpha `$ is varied. For $`\alpha <2`$, $`D(N,t)`$ reaches a saturation value $`D(N,\infty )`$ in just one step. The quotient $`D(N,\infty )/N\to 0`$ when $`N\to \infty `$, and therefore the system does not present sensitivity to the initial conditions. The typical temporal behaviour of $`D(N,t)`$ for $`\alpha \ge 2`$ is displayed in Fig. 8 for $`\alpha =2.5`$ and three different system sizes (the results presented correspond to averages over $`500N`$ realizations). We see that $`D(N,t)\propto t^\delta `$ for $`t\ll N^z`$ and that it saturates at a system-size-dependent value for large times, clearly showing WSIC. Moreover, for $`\alpha \ge 2`$ we verified the finite-size scaling behavior:

$$D(N,t)\propto N^{z\delta }F\left(\frac{t}{N^z}\right)$$ (2)

where $`\delta =0.40`$ and $`z=1.7\pm 0.2`$ is the dynamical exponent defined by $`t_s\propto N^z`$, $`t_s`$ being the value of $`t`$ at which the increasing regime crosses over to the saturation regime (given by the intersection of the linearly increasing branch of the curve and the horizontal branch). Both exponents are independent of $`\alpha `$.
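A sketch of the damage-spreading measurement of Eq. (1): the two replicas differ initially by the interchange of the minimum barrier with a randomly chosen site and are then driven by the same random numbers. Parameters are illustrative.

```python
import numpy as np

def damage_spreading(N=256, alpha=2.5, K=3, steps=5000, seed=1):
    rng = np.random.default_rng(seed)
    b1 = rng.random(N)
    b2 = b1.copy()
    i0, j = int(np.argmin(b2)), rng.integers(N)
    b2[i0], b2[j] = b2[j], b2[i0]          # initial damage
    offs = np.arange(1, N)
    p = np.minimum(offs, N - offs).astype(float) ** (-alpha)
    p /= p.sum()
    D = np.empty(steps)
    for t in range(steps):
        picks = rng.choice(offs, size=K - 1, replace=False, p=p)
        new = rng.random(K)                # identical noise for both replicas
        for b in (b1, b2):
            i = int(np.argmin(b))          # each replica updates its own minimum
            b[i] = new[0]
            b[(i + picks) % N] = new[1:]
        D[t] = np.abs(b1 - b2).mean()      # Hamming distance, Eq. (1)
    return D

D = damage_spreading()
# For alpha >= 2 a log-log plot of D(t)/D(0) shows power-law growth t^delta
# followed by size-dependent saturation (Fig. 8); for alpha < 2 the
# distance saturates immediately.
```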
Concluding, we have studied how long-range interactions affect the criticality of the stationary state of this model and its sensitivity to initial conditions. Concerning the SOC, we observed three different regimes depending on $`\alpha `$. For $`\alpha >\alpha _c`$ we can speak of a short-range critical regime, where the system presents SOC. Moreover, we observe that this property displays universality, in the sense that most of the associated critical exponents are independent of $`\alpha `$. For $`0\le \alpha \le 1`$ the system does not present SOC, although $`C(r)`$ displays a non-trivial power-law decay with $`r`$, unlike the RN model, for which $`C(r)`$ is constant. Moreover, all the relevant state functions and distributions become independent of $`\alpha `$. This behavior has already been observed in a variety of systems with long-range interactions, related to both equilibrium and non-equilibrium properties.

In all these systems it has been observed that the mean-field behavior becomes dominant when $`0\le \alpha \le d`$, $`d`$ being the dimensionality of the underlying lattice. Hence, in our case we can speak of a “mean-field” (non-critical) regime, i.e., that of the RN model. Finally, for $`1<\alpha <\alpha _c`$ we have a long-range critical regime, where the system presents non-universal SOC, i.e., the associated critical exponents depend strongly on $`\alpha `$. Concerning the sensitivity to initial conditions, we observed two regimes: one for $`0\le \alpha <2`$, where the system does not present sensitivity to initial conditions of any type, and one for $`\alpha >2`$, where it displays universal WSIC, in the sense that the exponents of the scaling law (2) are independent of $`\alpha `$. We see that at least one of the borderline values (and probably all of them) that separate the different regimes appears to be directly related to the dimensionality of the system. Hence, this dimensionality appears as a fundamental parameter in determining the robustness of the model against variations in the range of the interactions.

Fruitful discussions and suggestions from S. Boettcher are acknowledged. This work was partially supported by the following agencies: CONICOR (Córdoba, Argentina), Secretaria de Ciencia y Tecnología de la Universidad Nacional de Córdoba and CONICET (Argentina).
## 1 Introduction

In recent years several experiments have been dedicated to high-precision measurements of deep inelastic lepton scattering (DIS) off nuclei. Experiments at CERN and Fermilab focus especially on the region of small values of the Bjorken variable $`x=Q^2/2M\nu `$, where $`Q^2=-q^2`$ is the squared four-momentum transfer, $`\nu `$ the energy transfer and $`M`$ the nucleon mass. The data, taken over a wide kinematic range $`10^{-5}\lesssim x\lesssim 0.1`$ and $`0.05\,GeV^2\lesssim Q^2\lesssim 100\,GeV^2`$, show a systematic reduction of the nuclear structure function $`F_2^A(x,Q^2)/A`$ with respect to the free nucleon structure function $`F_2^N(x,Q^2)`$. This phenomenon is known as the shadowing effect.

The analysis of the shadowing corrections for the nuclear case in deep inelastic scattering (DIS) has been extensively discussed. It is motivated by the prospect that in the near future an experimental investigation of nuclear shadowing at small $`x`$ and $`Q^2\gg 1\,GeV^2`$ using $`eA`$ scattering could occur at the DESY Hadron Electron Ring Accelerator (HERA). Measurements over the extended $`x`$ and $`Q^2`$ ranges which would become possible at HERA will give more information to discriminate between the distinct models of shadowing and to understand the phenomenon which limits the rise of the proton structure function $`F_2`$ at small $`x`$.

The deep inelastic scattering off a nucleus is usually interpreted in a frame where the nucleus is moving very fast. In this case the nuclear shadowing is a result of an overlap in the longitudinal direction of the parton clouds originating from different bound nucleons. It corresponds to the fact that small-$`x`$ partons cannot be localized longitudinally to better than the size of the nucleus. Thus low-$`x`$ partons from different nucleons overlap spatially, creating much larger parton densities than in the free nucleon case. This leads to a large amplification of the nonlinear effects expected in QCD at small $`x`$. In the target rest frame, electron-nucleus scattering at HERA allows a new regime to be probed experimentally for the first time: a regime in which the virtual photon interacts coherently with all the nucleons at a given impact parameter. This can be visualized in terms of the propagation of a small $`q\overline{q}`$ pair in high-density gluon fields through much larger distances than is possible with free nucleons.

A few years ago, a perturbative approach was developed to calculate the gluon distribution in a nucleus using perturbative QCD at small $`x`$. This approach, known as the Glauber-Mueller (GM) approach, is formulated in the target rest frame and takes into account the fluctuations of the hard probe. It includes the shadowing corrections (SC) due to parton rescatterings inside the nucleus and provides the SC to the nuclear gluon distribution using the solution of the DGLAP evolution equations for the nucleon case. As a result, the behavior of related observables ($`F_2^A,dF_2^A/dlogQ^2,F_L^A`$, …) at high energies can be calculated. The GM approach was extended to the nucleon case in Ref. , and a comprehensive phenomenological analysis of the behavior of distinct observables ($`F_2,F_L,F_2^c`$) was made for the $`ep`$ HERA kinematical region using this approach.
Our main conclusion was that the unitarity corrections are large in the HERA kinematical region, but only new data, with better statistics, will allow one to discriminate these corrections from the DGLAP predictions. The recent ZEUS data for the slope of the proton structure function present a ’turn over’ which cannot be reproduced by the DGLAP evolution equations with the GRV95 parameterization. Initially, this behavior was interpreted as the first evidence of the shadowing corrections in the kinematic region of the $`ep`$ HERA collider. However, the MRST and GRV groups have produced new sets of parameterizations of the parton distributions which also reproduce the data. Therefore the current $`ep`$ HERA data still cannot clearly demonstrate the presence of the shadowing corrections. This conclusion motivates an analysis of these corrections in other processes. In this Letter we analyze the $`A`$ dependence of the slope of the nuclear structure function, which should be measured in the future at the $`eA`$ HERA collider.

Let us start from the space-time picture of $`eA`$ processes. The deep inelastic scattering $`eA\to e+X`$ is characterized by a large electron energy loss $`\nu `$ (in the target rest frame) and an invariant momentum transfer $`q^2=-Q^2`$ between the incoming and outgoing electron, such that $`x=Q^2/2m_N\nu `$ is fixed. The general features of the time development can be established using only Lorentz invariance and the uncertainty principle. The incoming physical electron state can, at a given instant of time, be expanded in terms of its (bare) Fock states

$`|e>_{phys}=\psi _e|e>+\psi _{e\gamma }|e\gamma >+\cdots `$ (1)

The amplitudes $`\psi _i`$ depend on the kinematic variables describing the states $`|i>`$ and have the time dependence $`exp(iE_it)`$, where $`E_i=\sum _j\sqrt{m_j^2+\stackrel{}{p}_j^{\,2}}`$ is the energy of the state. The ’lifetime’ $`\tau _i\sim 1/(E_i-E_e)`$ of a Fock state $`|i>`$ is given by the time interval after which the relative phase $`exp[i(E_i-E_e)t]`$ is significantly different from unity. If $`\tau _i>R_A`$ the Fock state forms long before the electron arrives at the nucleus, and it lives long after its passage. New Fock states are not formed inside the nucleus; therefore, the scattering inside the nucleus is diagonal in the Fock basis. If the state $`|i>`$ contains particles with mass $`m_j`$, energy fraction $`x_j`$ and transverse momentum $`p_{tj}`$, the transverse velocities $`v_{tj}=p_{tj}/x_jE_e`$ are small at large $`E_e`$. Hence the impact parameters (transverse coordinates) of all particles are preserved.

In terms of Fock states we then view the $`eA`$ scattering as follows: the electron emits a photon ($`|e>\to |e\gamma >`$) with $`E_\gamma =\nu `$ and $`p_{t\gamma }^2\approx Q^2`$; the photon then splits into a $`q\overline{q}`$ pair ($`|e\gamma >\to |eq\overline{q}>`$) and typically travels a distance $`l_c\approx 1/m_Nx`$, referred to as the ’coherence length’, before interacting in the nucleus. For small $`x`$, the photon converts to a quark pair at a large distance before it interacts with the target; for example, at the $`ep`$ HERA collider, where one can study structure functions at $`x\sim 10^{-5}`$, the coherence length is as large as $`10^4fm`$, much larger than the nuclear radii. Consequently, the space-time picture of DIS in the target rest frame can be viewed as the decay of the virtual photon at high energy (small $`x`$) into a quark-antiquark pair long before the interaction with the target.
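A quick numerical check of the coherence-length estimate above, converting with the standard value of hbar*c:

```python
# Coherence length l_c ~ 1/(m_N x), converted to fm with hbar*c.

HBARC = 0.1973   # GeV fm
M_N = 0.938      # nucleon mass in GeV

def l_c_fm(x):
    return HBARC / (M_N * x)

for x in (1e-2, 1e-3, 1e-5):
    print(f"x = {x:.0e}: l_c ~ {l_c_fm(x):.3g} fm")
# At x ~ 1e-5 the coherence length is ~2e4 fm, of the order quoted above
# and far larger than any nuclear radius.
```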
The $`q\overline{q}`$ pair subsequently interacts with the target. In the small-$`x`$ region, where $`x\ll \frac{1}{2mR}`$, the $`q\overline{q}`$ pair crosses the target with fixed transverse distance $`r_t`$ between the quarks. This allows one to factorize the total cross section into the wave function of the photon and the interaction cross section of the quark-antiquark pair with the target. The photon wave function is calculable, and the interaction cross section is modelled. Therefore, the nuclear structure function is given by

$$F_2^A(x,Q^2)=\frac{Q^2}{4\pi \alpha _{em}}\int dz\int \frac{d^2r_t}{\pi }|\mathrm{\Psi }(z,r_t)|^2\sigma ^{q\overline{q}+A}(z,r_t),$$ (2)

where

$$|\mathrm{\Psi }(z,r_t)|^2=\frac{6\alpha _{em}}{(2\pi )^2}\sum _{f=1}^{n_f}e_f^2\{[z^2+(1-z)^2]ϵ^2K_1^2(ϵr_t)+m_f^2K_0^2(ϵr_t)\},$$ (3)

$`\alpha _{em}`$ is the electromagnetic coupling constant, $`ϵ^2=z(1-z)Q^2+m_f^2`$, $`m_f`$ is the quark mass, $`n_f`$ is the number of active flavors, $`e_f^2`$ is the square of the parton charge (in units of $`e`$), $`K_{0,1}`$ are the modified Bessel functions and $`z`$ is the fraction of the photon’s light-cone momentum carried by one of the quarks of the pair. In the leading $`log(1/x)`$ approximation we can neglect the change of $`z`$ during the interaction and describe the cross section $`\sigma ^{q\overline{q}+A}(z,r_t^2)`$ as a function of the variable $`x`$.

We estimate the unitarity corrections considering the Glauber multiple scattering theory, which was proven in QCD. The nuclear collision is analysed as a succession of collisions of the probe with individual nucleons within the nucleus; summarizing, we obtain that the $`F_2`$ structure function can be written as

$$F_2^A(x,Q^2)=\frac{R_A^2}{2\pi ^2}\sum _{i=1}^{n_f}ϵ_i^2\int _{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}}\frac{d^2r_t}{\pi r_t^4}\{C+ln(\kappa _q(x,r_t^2))+E_1(\kappa _q(x,r_t^2))\},$$ (4)

where $`A`$ is the number of nucleons, $`R_A`$ is the mean nuclear radius, $`C`$ is the Euler constant, $`E_1`$ is the exponential integral, and $`\kappa _q=(2\alpha _sA/3R_A^2)\pi r_t^2xG_N(x,\frac{1}{r_t^2})`$ (see Ref. for details). The slope of the nuclear structure function can be obtained directly from expression (4). We obtain

$$\frac{dF_2^A(x,Q^2)}{dlogQ^2}=\frac{R_A^2Q^2}{2\pi ^2}\sum _{i=1}^{n_f}ϵ_i^2\{C+ln(\kappa _q(x,Q^2))+E_1(\kappa _q(x,Q^2))\},$$ (5)

which predicts the $`x`$, $`Q^2`$ and $`A`$ dependence of the shadowing corrections for the $`F_2^A`$ slope. We see that the behavior of $`F_2^A`$ and its slope depends strongly on the nucleon gluon distribution. This is a common characteristic of observables in the small-$`x`$ region, where the gluon distribution dominates. Therefore, before we make predictions for these observables, an analysis of the behavior of the gluon distribution should be made. In the nucleon case, a strong growth of the nucleon structure function is observed in the $`ep`$ HERA data, which violates the unitarity boundary. Consequently, we expect that unitarity corrections will be present in the $`ep`$ HERA kinematical region. In Ref. we have shown that the ZEUS data for the $`F_2^p`$ slope, which present a turn over, can only be successfully described considering expression (5) for the nucleon case together with a shadowed gluon distribution (quark + gluon sectors).
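A sketch of the $`\kappa `$-dependence of the slope in Eq. (5). Treating $`C`$ as Euler's constant and $`E_1`$ as the exponential integral follows the standard Glauber-Mueller expressions; the toy gluon distribution $`xG\propto x^{-0.3}`$ and the radius parameterization $`R_A=1.2A^{1/3}`$ fm are illustrative assumptions, not the shadowed distribution actually used in the text.

```python
import numpy as np
from scipy.special import exp1

C_EULER = 0.5772156649
ALPHA_S = 0.25
HBARC = 0.1973                                    # GeV fm

def kappa_q(x, Q2, A):
    """kappa_q of Eq. (4) with r_t^2 -> 1/Q^2 and a toy gluon xG ~ x^-0.3."""
    R_A2 = (1.2 * A ** (1.0 / 3.0) / HBARC) ** 2  # R_A^2 in GeV^-2 (assumption)
    xG = 3.0 * x ** (-0.3)                        # hypothetical nucleon gluon
    return (2.0 * ALPHA_S * A / (3.0 * R_A2)) * np.pi / Q2 * xG

def gm_bracket(kappa):
    """{C + ln(kappa) + E1(kappa)}: ~kappa for small kappa (linear regime),
    growing only logarithmically at large kappa (saturation)."""
    return C_EULER + np.log(kappa) + exp1(kappa)

for Q2 in (1.0, 5.0, 20.0):
    x = Q2 / (9e4 * 0.25)   # x = Q^2/(s y), the relation used in the analysis below
    k = kappa_q(x, Q2, A=40)
    print(f"Q2 = {Q2:5.1f} GeV^2: kappa = {k:6.3f}, bracket = {gm_bracket(k):6.3f}")
```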
In this Letter we consider that the nucleon gluon distribution is calculated using the GM approach, i.e., we consider that the behavior of the gluon distribution has been modified by the unitarity corrections (see Ref. for details).

The behavior of the nuclear structure function was analysed in Ref. using the Glauber-Mueller approach. We have shown that the ratio $`R_1=F_2^A/(AF_2^p)`$ is strongly modified by the shadowing corrections and that it saturates in the perturbative regime ($`Q^2\gtrsim 1\,GeV^2`$) when both the quark and gluon sectors are considered. Here we estimate the shadowing corrections for the $`F_2^A`$ slope in the HERA kinematic region, where $`s=9\times 10^4GeV^2`$. Following Ref. , where the data points correspond to different $`x`$ and $`Q^2`$, we consider that the variables $`x`$ and $`Q^2`$ are related by the expression $`x=Q^2/(sy)`$ and that the inelasticity variable $`y`$ is given by $`y=0.25`$. This is a typical value in the measurements at HERA.

In Figure (1) we present our predictions (solid curve) for the behavior of the $`F_2^A`$ slope using expression (5). We compare our results with the predictions of the DGLAP evolution equations using the GRV parameterization (dashed curve), without any nuclear effect. We see that a ’turn over’ is present in both the DGLAP (GRV) and GM predictions. However, the remarkable property of the result is that the shadowing corrections shift the maximum of each slope, in a way that is $`A`$ dependent. This is expected from the formalism based on the GM approach, as extensively explained in Ref. . In Table (1) we present explicitly the $`A`$ dependence of the ’turn over’ in the $`F_2^A`$ slope. In the Calcium case ($`A=40`$) we predict that the ’turn over’ occurs at $`Q^2=5GeV^2`$, in contrast to the DGLAP (GRV) case, for which it occurs at $`Q^2=1.7GeV^2`$ independently of $`A`$. We believe that this behavior cannot be mimicked by modifications of the parton parameterizations, which makes $`dF_2^A/dlogQ^2`$ a sensitive probe of the shadowing corrections.

The behavior of the $`F_2^A`$ slope can be understood intuitively. The $`A`$ dependence of the ’turn over’ is associated with the regime in which the partons in the nucleus form a dense system with mutual interactions and recombinations. The recombinations, i.e., the shadowing corrections, occur predominantly at large density. As the partonic density grows at larger values of the number of nucleons $`A`$ and smaller values of $`x`$, the same density at $`A=1,40,197`$ is obtained at correspondingly larger values of $`x`$ and, by extension, of $`Q^2`$. This behavior of the recombinations is verified in the $`F_2^A`$ slope.

Our main conclusion is that the analysis of the slope of the nuclear structure functions at $`eA`$ HERA energies will make it possible to discriminate the presence of the shadowing corrections from the DGLAP predictions. Our result has important implications for the nucleon case and for QCD at high densities. In the nucleon case, evidence of an $`A`$ dependence of the ’turn over’ will demonstrate that the correct way to estimate the observables at the $`ep`$ HERA collider is to consider the shadowing corrections, without modifying the parton distributions. On the other hand, in the near future, collider facilities such as the BNL Relativistic Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC) ($`p\overline{p}`$, $`AA`$) will be able to probe new regimes of dense quark matter at very small Bjorken $`x`$ and/or at large $`A`$, with rather different dynamical properties.
The description of these processes is directly associated with a correct description of the dynamics of minijet production, which will be strongly modified by shadowing corrections. We expect that this result will help motivate the running of nuclei at HERA in the future.

## Acknowledgments

This work was partially financed by CNPq and by Programa de Apoio a Núcleos de Excelência (PRONEX), BRAZIL.

## Tables

## Figure Captions

Fig. 1: The $`F_2^A`$ slope as a function of the variable $`x`$ at different values of $`A`$. Each value of $`x`$ is related to the virtuality $`Q^2`$ by the expression $`x=Q^2/(sy)`$, where we have assumed $`s=9\times 10^4GeV^2`$ and $`y=0.25`$. See text.
# Photometric/Spectroscopic Redshift Identification of Faint Galaxies in STIS Slitless Spectroscopy Observations

## 1. Introduction

Observations of distant galaxies have advanced rapidly as a result of the Keck telescope, which because of its large collecting area is sensitive to faint objects. But despite extensive searches, only four galaxies of redshift $`z>5`$ have as yet been identified spectroscopically using the Keck telescope (Dey et al. 1998; Hu, McMahon, & Cowie 1998; Weymann et al. 1998; Spinrad et al. 1998). It has proven extremely difficult to identify high-redshift galaxies solely on the basis of ground-based spectroscopy, because (1) galaxies lack prominent narrow-band features at rest-frame ultraviolet wavelengths (which are redshifted to observed-frame optical or infrared wavelengths) and (2) background sky light is the dominant source of noise at near-infrared wavelengths.

We have sought to identify distant galaxies in very deep slitless spectroscopy observations acquired by STIS on board HST by combining a new spectrum extraction technique with photometric and spectroscopic analysis techniques. Our analysis was designed to identify redshifts of galaxies in the slitless data by means of broad-band photometric techniques and to test these redshifts by identifying narrow-band emission, absorption, and continuum features in the same spectra. As a result of this analysis, we obtained optimally extracted spectra and self-confirming redshift measurements for roughly 250 objects, including a galaxy at $`z=6.68`$, and 5 isolated emission-line objects.

## 2. Data

The very deep observations, obtained by HST using STIS toward a region of sky flanking the Hubble Deep Field, consisted of pairs of images: a direct image taken using no filter and a dispersed image taken using the G750L grating. Additional observations consisted of only a direct image. The integration time of the direct images totaled 4.5 h over 82 exposures, and the integration time of the dispersed images totaled 13.5 h over 60 exposures. We summed the direct and dispersed images using conventional image processing techniques. The summed image covers a sky area of $`51\times 51`$ arcsec<sup>2</sup>. The spatial resolution of the summed direct and dispersed images is $`\mathrm{FWHM}\approx 0.08`$ arcsec, the $`1\sigma `$ detection threshold of the summed direct image is $`\approx 26.2`$ mag arcsec<sup>-2</sup>, and the $`1\sigma `$ detection threshold of the summed dispersed image at $`\lambda \approx 9800`$ Å is $`\approx 5.5\times 10^{-18}`$ erg s<sup>-1</sup> cm<sup>-2</sup> Å<sup>-1</sup> arcsec<sup>-1</sup>.

## 3. Optimal Extraction of Slitless Spectra

The spectrum extraction from the dispersed image is made especially difficult because the image of the field is covered by light of faint galaxies, which when dispersed overlaps in the spatial direction and is blurred in the spectral direction. To spatially deblend and spectrally deconvolve the spectra, we used the summed direct image to determine not only the exact object locations but also the exact two-dimensional spatial profiles of the spectra on the summed dispersed image. The spatial profiles of these objects are crucial because (1) they provide the “weights” needed to optimally extract the spectra, (2) they provide the models needed to deblend the overlapping spectra and determine the background sky level, and (3) they provide the spectral templates needed to optimally deconvolve the spectral blurring of extended objects.
First, we identified objects in the summed direct image using the SExtractor program of Bertin & Arnouts (1996). Roughly 250 objects were identified in the summed direct image. Next, we modeled each pixel $`(i,j)`$ of the summed dispersed image as a linear sum of contributions from (1) relevant portions of all overlapping neighboring objects and (2) background sky:

$$\mathcal{F}_{i,j}=\sum _kS_{i-i_k-\mathrm{\Delta }+1}^k\sum _{i_k^{\prime \prime }}f_{i_k-i_k^{\prime \prime },j}^k+B_{i,j},$$ (1)

where $`S_l^k`$ are the spectral elements of the $`k`$th object, $`i_k`$ is the object position along the spectral direction in the direct image, $`\mathrm{\Delta }`$ is the constant offset between the object position and the starting pixel of the spectrum along the spectral direction, $`f`$ is the object profile measured in the direct image, and $`B`$ is the model sky (for which we found that a fourth-order polynomial is necessary and sufficient). We treated the problem as a $`\chi ^2`$ minimization problem, where the data were the pixel values of the summed dispersed image, the model was a linear sum of appropriate elements of the spatial profiles, and the parameters of the model were the values of the spectral pixels. The $`\chi ^2`$ is written

$$\chi ^2=\sum _{i,j}\frac{(\mathcal{F}_{i,j}-\stackrel{~}{\mathcal{F}}_{i,j})^2}{\sigma _{i,j}^2},$$ (2)

where $`\stackrel{~}{\mathcal{F}}_{i,j}`$ is the flux measurement and $`\sigma _{i,j}`$ is the 1-$`\sigma `$ error at pixel $`(i,j)`$. Finally, we minimized $`\chi ^2`$ between the model and the data with respect to the model parameters and obtained estimates of all model parameters simultaneously to form one-dimensional spectra. The model parameters included roughly 250,000 “object” parameters (from roughly 1000 spectral pixels each of roughly 250 objects) and four thousand more “sky” parameters. To zeroth-order approximation, the spectral elements of individual objects are independent of each other, and so we can solve the minimum $`\chi ^2`$ separately for individual columns. The $`\chi ^2`$ for column $`i`$ is

$$\chi _i^2=\sum _j\frac{\left(\sum _kS_{i-i_k-\mathrm{\Delta }+1}^k\sum _{i_k^{\prime \prime }}f_{i_k-i_k^{\prime \prime },j}^k+\sum _{\alpha =0}^La_i^\alpha j^\alpha -\stackrel{~}{\mathcal{F}}_{i,j}\right)^2}{\sigma _{i,j}^2}.$$ (3)

Now we only need to solve for the approximately 250 spectral elements $`S_l^k`$ and the $`(L+1)`$ sky parameters $`a_i^\alpha `$ (here $`L=4`$) that minimize each $`\chi _i^2`$, and repeat the calculation for all columns. Errors on the model parameters were obtained from the Hessian matrix at the minimum $`\chi _i^2`$.

This new spectrum extraction method is superior to other extraction methods in many ways. First, the method employs optimal weights for the spectral extraction. Second, the method deblends the spectra. Third, the method determines the precise sky background. Because the dispersed image is covered by light of extremely faint galaxies over most of its area, the spatial templates are needed to find the rare “clear” regions of sky background between the object spectra. Finally, the method estimates errors correctly by taking into account the correlations between spectral elements of overlapping objects. As a result of this spectrum extraction technique, we obtain 250 optimally extracted spectra for redshift analysis.
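A schematic single-column version of the $`\chi ^2`$ minimization of Eq. (3), with toy Gaussian profiles standing in for the measured ones. The weighted linear least-squares solve and the errors from the inverse normal (Hessian) matrix mirror the procedure described above.

```python
import numpy as np

def solve_column(data_col, sigma_col, profiles, L=4):
    """Solve one image column: spectral elements + polynomial sky.

    data_col, sigma_col : (Ny,) fluxes and 1-sigma errors of the column.
    profiles            : list of (Ny,) spatial profiles of objects crossing it.
    """
    j = np.arange(data_col.size, dtype=float)
    # design matrix: one column per spectral element, then sky polynomial terms
    M = np.column_stack(profiles + [j ** a for a in range(L + 1)])
    w = 1.0 / sigma_col
    theta, *_ = np.linalg.lstsq(M * w[:, None], data_col * w, rcond=None)
    cov = np.linalg.inv((M * w[:, None]).T @ (M * w[:, None]))  # inverse Hessian
    return theta, np.sqrt(np.diag(cov))

# toy demonstration: two blended profiles plus a flat sky
y = np.arange(64, dtype=float)
prof = [np.exp(-0.5 * ((y - c) / 2.0) ** 2) for c in (28.0, 34.0)]
rng = np.random.default_rng(2)
data = 5.0 * prof[0] + 3.0 * prof[1] + 0.1 + rng.normal(0, 0.05, y.size)
theta, err = solve_column(data, np.full(y.size, 0.05), prof)
print("recovered spectral elements:", theta[:2], "+/-", err[:2])
```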
To determine the redshifts of extremely faint galaxies in the STIS slitless spectra, we first measured photometric redshifts by means of a redshift likelihood technique (Fernández-Soto, Lanzetta, & Yahil 1999) and then verified these redshifts by identifying narrow-band emission, absorption, and continuum features in the same spectra. The goal is to obtain self-confirming photometric redshift estimates.

## 4. Noise Characteristics

The new spectrum extraction technique adopts object profiles determined from the direct image as model templates to determine the exact sky and to optimally extract spectra from the dispersed image. The analysis involves complicated smoothing and deblending. It is therefore not unreasonable to suspect that the noise characteristics of the extracted spectra have been skewed and no longer follow a normal distribution. To discuss the significance of detections of any spectral features, it is necessary to first understand the noise characteristics of the final products of the analysis.

To examine the noise characteristics, we form a histogram of the pixel values of about 15 rows randomly chosen from the residual image. The histogram is shown in Figure 1, in which we also plot the best-fit Gaussian distribution, of full width at half maximum 0.004. The corresponding 1-$`\sigma `$ deviation is 0.0017, which is slightly larger than the 1-$`\sigma `$ deviation measured in the dispersed image (0.0015). The fact that the noise in the residual image follows a Gaussian distribution with only a slightly larger 1-$`\sigma `$ deviation justifies our reference to a normal noise distribution in all further statistical analysis.
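A sketch of the noise check just described, with synthetic residual pixels standing in for the actual residual-image rows; FWHM $`=2.355\sigma `$ connects the quoted numbers (0.004 and 0.0017).

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sig):
    return amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)

# stand-in for ~15 rows of residual-image pixels
residuals = np.random.default_rng(3).normal(0.0, 0.0017, 15 * 1024)

hist, edges = np.histogram(residuals, bins=60)
centers = 0.5 * (edges[:-1] + edges[1:])
popt, _ = curve_fit(gauss, centers, hist, p0=[hist.max(), 0.0, 0.002])
sigma = abs(popt[2])
print(f"fitted sigma = {sigma:.4f}, FWHM = {2.355 * sigma:.4f}")  # ~0.0017, ~0.004
```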
## 5. Properties of Isolated Emission Lines

In addition to the 250 objects observed in the direct image, which included a galaxy at $`z=6.68`$, we identified five isolated emission lines in the dispersed image that were not accounted for by objects detected in the direct image. The $`z=6.68`$ galaxy is the most distant galaxy that has yet been spectroscopically identified. A detailed analysis has been presented elsewhere (Chen, Lanzetta, & Pascarelle 1999). Here we focus on the five emission-line objects.

To objectively identify these isolated emission lines, we applied the SExtractor program to the smoothed dispersed image, setting the detection threshold such that nothing was detected in the negative of the image. Next, we removed the lines that correspond to objects identified in the direct image. Consequently, five isolated emission lines remained in the dispersed image.

The nature of these isolated emission lines is uncertain because of ambiguities in the wavelength determination. It is impossible to determine the wavelengths, and therefore to calibrate the fluxes, of these lines without first knowing their positions on the sky. We show in Figure 2 that the 1-$`\sigma `$ flux threshold of an unresolved emission line in the dispersed image may vary with wavelength by as much as a factor of four across the entire spectral range. However, given the 1-$`\sigma `$ single-pixel detection threshold of the direct image, $`\approx 26.2`$ mag arcsec<sup>-2</sup>, we can work out the 3-$`\sigma `$ lower limit to the observed equivalent width (EW) as a function of wavelength for these isolated emission lines. The dot-dashed curve in Figure 2 indicates the 3-$`\sigma `$ lower limit to the observed EW of an unresolved line detected at a 3-$`\sigma `$ significance level in the dispersed image versus wavelength. The actual observed EW limit as a function of wavelength for a particular line can be obtained by scaling the dot-dashed curve to the significance of the line detection. According to the EW limits shown in Figure 2, the five observed emission lines are most likely to be high-redshift $`\mathrm{Ly}\alpha `$ emission lines, rather than low-redshift \[O II\] or H$`\alpha `$ lines. The sensitivity curve shown in Figure 2 also indicates that the STIS observations are more sensitive to faint emission-line galaxies than existing deep narrow-band surveys from the ground (e.g. Hu, Cowie, & McMahon 1998).

If these lines are high-redshift $`\mathrm{Ly}\alpha `$ emission lines, the statistics drawn from this observation may place a strong constraint on the number density of high-redshift $`\mathrm{Ly}\alpha `$ emission-line galaxies and on the star formation rate density contributed by this population. However, the study is made extremely difficult by the ambiguities in the wavelength (and therefore redshift) determination. To address this problem, we applied a likelihood analysis to the observed emission lines, assuming that all five emission lines are high-redshift $`\mathrm{Ly}\alpha `$ lines and that the number density of these lines is well represented by a simple $`\delta `$-function luminosity function,

$$\varphi (L/L_{\ast})=\varphi _{\ast}\delta (L/L_{\ast}-1).$$ (4)

The likelihood analysis returns a best-fit characteristic luminosity $`L_{\ast}=0.4\times 10^{42}h^{-2}\mathrm{ergs}\mathrm{s}^{-1}`$. Given the fact that five lines were detected, we find the number density to be $`\varphi _{\ast}=0.04h^3\mathrm{Mpc}^{-3}`$.

On the basis of the simple $`\delta `$-function galaxy luminosity function determined for the observed $`\mathrm{Ly}\alpha `$ emission-line objects, we estimated the statistical properties of the galaxy population. First, we calculated the mean redshift of the observed lines and found $`\langle z\rangle =4.69`$. Second, we estimated the star formation rate density, using the relationship between H$`\alpha `$ luminosity and star formation rate of Madau, Pozzetti, & Dickinson (1998) and assuming case B recombination, for which $`\mathrm{Ly}\alpha `$/H$`\alpha \approx 10`$ (Osterbrock 1989). It turns out that the star formation rate density of $`\mathrm{Ly}\alpha `$ emission-line galaxies is $`\approx 0.01h\mathrm{M}_{\odot}\mathrm{yr}^{-1}\mathrm{Mpc}^{-3}`$ for $`q_0=0.5`$ (without correction for extinction). This is about 15% of the star formation rate density measured by Steidel et al. (1999) for the Lyman break galaxies at $`z>4`$, suggesting that galaxies exhibiting the $`\mathrm{Ly}\alpha `$ emission feature represent only a small portion of the whole galaxy population at high redshifts. We therefore conclude that high-redshift galaxies are more efficiently and objectively identified using broad-band photometric rather than narrow-band imaging/spectroscopic techniques.

Finally, we calculated the surface density of $`\mathrm{Ly}\alpha `$ emission-line galaxies expected in observations of a given sensitivity. Figure 3 shows that we expect to detect no more than five $`\mathrm{Ly}\alpha `$ emission-line galaxies per arcmin<sup>2</sup> at $`z>4`$ that are stronger than $`1\times 10^{-17}\mathrm{ergs}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$ and no more than 10 $`\mathrm{Ly}\alpha `$ emission-line galaxies per arcmin<sup>2</sup> at $`z>5`$ that are stronger than $`5\times 10^{-18}\mathrm{ergs}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$.
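The star-formation-rate estimate above can be reproduced in a few lines, assuming the Madau, Pozzetti, & Dickinson (1998) conversion SFR $`=L(\mathrm{H}\alpha )/1.6\times 10^{41}`$ (in solar masses per year, for $`L`$ in ergs s<sup>-1</sup>) and $`\mathrm{Ly}\alpha `$/H$`\alpha \approx 10`$, as used in the text.

```python
# delta-function luminosity function: rho_SFR = phi_star * SFR(L_star)

L_star = 0.4e42            # best-fit Lya luminosity, h^-2 ergs/s
phi_star = 0.04            # number density, h^3 Mpc^-3
L_Halpha = L_star / 10.0   # case B: Lya/Halpha ~ 10
sfr_per_galaxy = L_Halpha / 1.6e41      # Madau et al. (1998) conversion, M_sun/yr
rho_sfr = phi_star * sfr_per_galaxy     # h M_sun / yr / Mpc^3
print(f"rho_SFR ~ {rho_sfr:.3f} h M_sun/yr/Mpc^3")   # ~0.01, as quoted above
```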
### Acknowledgments.

This research was supported by NASA grant NACW-4422 and NSF grant AST-9624216.

## References

Bertin, E. & Arnouts, S. 1996, A&AS, 117, 393
Chen, H.-W., Lanzetta, K. M., & Pascarelle, S. 1999, Nature, 398, 586
Dey, A., et al. 1998, ApJ, 498, L93
Fernández-Soto, A., Lanzetta, K. M., & Yahil, A. 1999, ApJ, 513, 34
Hu, E. M., Cowie, L. L., & McMahon, R. G. 1998, ApJ, 502, L99
Madau, P., Pozzetti, L., & Dickinson, M. 1998, ApJ, 498, 106
Osterbrock, D. E. 1989, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei (Mill Valley: University Science Books)
Spinrad, H., et al. 1998, AJ, 116, 2617
Steidel, C. C., et al. 1999, ApJ, in press (astro-ph/9811399)
Weymann, R. J., et al. 1998, ApJ, 505, L95
# Quasiperiodic Envelope Solitons

## Abstract

We analyse nonlinear wave propagation and cascaded self-focusing due to second-harmonic generation in Fibonacci optical superlattices and introduce a novel concept of nonlinear physics, the quasiperiodic soliton, which describes spatially localized self-trapping of a quasiperiodic wave. We point out a link between the quasiperiodic soliton and the partially incoherent spatial solitary waves recently generated experimentally.

For many years, solitary waves (or solitons) have been considered as coherent localized modes of nonlinear systems, with particle-like dynamics quite dissimilar to the irregular and stochastic behaviour observed for chaotic systems. However, about 20 years ago Akira Hasegawa, while developing a statistical description of the dynamics of an ensemble of plane waves in nonlinear strongly dispersive plasmas, suggested the concept of an incoherent temporal soliton, a localised envelope of random-phase waves. Because of the relatively high powers required for generating self-localised random waves, this notion remained a theoretical curiosity until recently, when the possibility of generating spatial optical solitons by a partially incoherent source was discovered in a photorefractive medium, known to exhibit strong nonlinear effects at low powers.

The concept of incoherent solitons can be compared with a different problem: the propagation of a soliton through a spatially disordered medium. Indeed, due to random scattering on defects, the phases of the individual components forming a soliton experience random fluctuations, and the soliton itself becomes partially incoherent in space and time. For a low-amplitude wave (linear regime), spatial incoherence is known to lead to a fast decay. As a result, the transmission coefficient vanishes exponentially with the length of the system, the phenomenon known as Anderson localisation. However, for large amplitudes (nonlinear regime), when the nonlinearity length is much smaller than the Anderson localization length, a soliton can propagate almost unchanged through a disordered medium, as predicted theoretically in 1990 and recently verified experimentally.

These two important physical concepts, spatial self-trapping of light generated by an incoherent source in a homogeneous medium, and suppression of Anderson localisation for large-amplitude waves in spatially disordered media, both result from the effect of strong nonlinearity. When the nonlinearity is sufficiently strong it acts as an effective phase-locking mechanism by producing a large frequency shift of the different random-phase components, thereby introducing an effective order into an incoherent wave packet and thus enabling the formation of localised structures. In other words, both phenomena correspond to the limit in which the ratio of the nonlinearity length to the characteristic length of (spatial or temporal) fluctuations is small. In the opposite limit, when this ratio is large, the wave propagation is basically linear. What will happen in the intermediate case, when the length scales of nonlinearity and fluctuations become comparable? It is usually believed that localised structures would not be able to survive such incoherent wave propagation and should rapidly decay. In this Letter we show that, at least for aperiodic inhomogeneous structures, solitary waves can exist in the form of quasiperiodic nonlinear localised modes.
As an example we consider second-harmonic generation (SHG) and nonlinear beam propagation in Fibonacci optical superlattices, and demonstrate numerically the possibility of spatial self-trapping of quasiperiodic waves whose envelope amplitude varies quasiperiodically, while still maintaining a stable, well-defined spatially localised structure, a quasiperiodic envelope soliton. We consider the interaction of a fundamental wave (FW) with the frequency $`\omega `$ and its second harmonic (SH) in a slab waveguide with quadratic (or $`\chi ^{(2)}`$) nonlinearity. Assuming the $`\chi ^{(2)}`$ susceptibility to be modulated and the nonlinearity to be of the same order as diffraction, we write the dynamical equations in the form $$\begin{array}{c}i\frac{\partial w}{\partial z}+\frac{1}{2}\frac{\partial ^2w}{\partial x^2}+d(z)w^{}ve^{i\beta z}=0,\hfill \\ i\frac{\partial v}{\partial z}+\frac{1}{4}\frac{\partial ^2v}{\partial x^2}+d(z)w^2e^{-i\beta z}=0,\hfill \end{array}$$ (1) where $`w(x,z)`$ and $`v(x,z)`$ are the slowly varying envelopes of the FW and SH, respectively. The parameter $`\beta =\mathrm{\Delta }k|k_\omega |x_0^2`$ is proportional to the phase mismatch $`\mathrm{\Delta }k=2k_\omega -k_{2\omega }`$, $`k_\omega `$ and $`k_{2\omega }`$ being the wave numbers at the two frequencies. The transverse coordinate $`x`$ is measured in units of the input beam width $`x_0`$, and the propagation distance $`z`$ in units of the diffraction length $`l_d=x_0^2|k_\omega |`$. The spatial modulation of the $`\chi ^{(2)}`$ susceptibility is described by the quasi-phase-matching (QPM) grating function $`d(z)`$. In the context of SHG, the QPM technique is an effective way to achieve phase matching, and has been studied intensively (see Ref. for a comprehensive review). Here we consider a QPM grating produced by a quasiperiodic nonlinear optical superlattice. Quasiperiodic optical superlattices, one-dimensional analogs of quasicrystals, are usually designed to study the effect of Anderson localisation in the linear regime of light propagation. For example, Gellermann et al. measured the optical transmission properties of quasiperiodic dielectric multilayer stacks of SiO<sub>2</sub> and TiO<sub>2</sub> thin films and observed a strong suppression of the transmission. For QPM gratings, a nonlinear quasiperiodic superlattice of LiTaO<sub>3</sub>, in which two antiparallel ferro-electric domains are arranged in a Fibonacci sequence, was recently fabricated by Zhu et al., who measured multi-colour SHG with energy conversion efficiencies of 5%–20%. This quasiperiodic optical superlattice in LiTaO<sub>3</sub> can also be used for efficient direct third-harmonic generation. The quasiperiodic QPM gratings have two building blocks A and B of lengths $`l_A`$ and $`l_B`$, respectively, which are ordered in a Fibonacci sequence [Fig. 1(a)]. Each block has a domain of length $`l_{A_1}=l`$ ($`l_{B_1}=l`$) with $`d=+1`$ (shaded) and a domain of length $`l_{A_2}=l(1+\eta )`$ [$`l_{B_2}=l(1-\tau \eta )`$] with $`d=-1`$ (white). In the case of $`\chi ^{(2)}`$ nonlinear QPM superlattices this corresponds to positive and negative ferro-electric domains, respectively. The specific details of this type of Fibonacci optical superlattice can be found elsewhere (see, e.g., Ref. and references therein). For our simulations presented below we have chosen $`\eta =2(\tau -1)/(1+\tau ^2)=0.34`$, where $`\tau =(1+\sqrt{5})/2`$ is the so-called golden ratio. This means that the ratio of length scales is also the golden ratio, $`l_A/l_B=\tau `$.
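For illustration, the block construction described above fits into a few lines of code. The following is a minimal sketch of ours (all function and variable names are our own choices, not taken from the original work); it generates the Fibonacci word by the substitution rule A → AB, B → A and samples the grating function $`d(z)`$. An FFT of $`d(z)`$ on a fine $`z`$ grid should then reproduce the dominant Fourier peaks discussed next.

```python
import numpy as np

# Parameters as chosen in the text: l = 0.1, eta = 2(tau - 1)/(1 + tau^2) ~ 0.34.
tau = (1 + np.sqrt(5)) / 2           # golden ratio
eta = 2 * (tau - 1) / (1 + tau**2)   # ~ 0.34
l = 0.1

def fibonacci_word(n_iter):
    """Block sequence from the substitution A -> AB, B -> A."""
    word = "A"
    for _ in range(n_iter):
        word = "".join("AB" if c == "A" else "A" for c in word)
    return word

def build_grating(word):
    """Domain edges and signs of d(z).  Block A = [+1 over l, -1 over l(1+eta)];
    block B = [+1 over l, -1 over l(1 - tau*eta)]."""
    z_edges, signs = [0.0], []
    for c in word:
        neg_len = l * (1 + eta) if c == "A" else l * (1 - tau * eta)
        for length, sign in ((l, +1), (neg_len, -1)):
            z_edges.append(z_edges[-1] + length)
            signs.append(sign)
    return np.array(z_edges), np.array(signs)

def d_of_z(z, z_edges, signs):
    """Evaluate d(z) by locating z in the domain grid (works on arrays too)."""
    idx = np.clip(np.searchsorted(z_edges, z, side="right") - 1, 0, len(signs) - 1)
    return signs[idx]
```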
Furthermore, we have chosen $`l=0.1`$. The grating function $`d(z)`$, which varies between $`+1`$ and $`-1`$ according to the Fibonacci sequence, can be expanded in a Fourier series $$d(z)=\sum _{m,n}d_{m,n}e^{iG_{m,n}z},G_{m,n}=\frac{2\pi (m+n\tau )}{D},$$ (2) where $`D=\tau l_A+l_B=0.52`$ for the chosen parameter values. Hence the spectrum is composed of sums and differences of the basic wavenumbers $`\kappa _1=2\pi /D`$ and $`\kappa _2=2\pi \tau /D`$. These components fill the whole Fourier space densely, since $`\kappa _1`$ and $`\kappa _2`$ are incommensurate. Figure 1(b) shows the numerically calculated Fourier spectrum $`G_{m,n}`$. The lowest-order “Fibonacci modes” are clearly the most intense. From Eq. (2) and the numerically found spectrum we identify the six most intense modes presented in Table 1. The corresponding wavenumbers $`G_{m,n}`$ are in good agreement with Eq. (2).

| $`m`$ | 1 | 0 | 1 | 2 | 1 | 2 |
| --- | --- | --- | --- | --- | --- | --- |
| $`n`$ | 1 | 1 | 2 | 3 | 0 | 4 |
| $`G_{m,n}`$ | 31.42 | 19.42 | 50.83 | 82.25 | 12.00 | 101.66 |

TABLE 1. The six most intense Fibonacci modes $`G_{m,n}`$.

To analyse the beam propagation and SHG in a quasiperiodic QPM grating one could simply average Eqs. (1). To lowest order this approach always yields a system of equations with constant mean-value coefficients, which cannot describe oscillations of the beam amplitude and phase. However, here we wish to go beyond the averaged equations and consider the rapid large-amplitude variations of the envelope functions. This can be done analytically for periodic QPM gratings. However, for the quasiperiodic gratings we have to resort to numerical simulations. Thus we have solved Eqs. (1) numerically with a second-order split-step routine, in which the linear part is solved with the fast-Fourier-transform (FFT) method and the nonlinear part with a fourth-order Runge-Kutta scheme. The step length is adapted to the local domain length of the QPM grating. At the input of the crystal we excite the fundamental beam (corresponding to unseeded SHG) with a Gaussian profile, $$w(x,0)=A_we^{-x^2/10},v(x,0)=0.$$ (3) We consider the quasiperiodic QPM grating with matching to the peak at $`G_{2,3}`$, i.e., $`\beta =G_{2,3}=82.25`$.
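The split-step scheme just described is compact enough to be sketched explicitly. The fragment below is our own fixed-step illustration (the actual computation adapts the step to the local domain length); it assumes a grating function like the `d_of_z` sketched above, the standard FFT wavenumber grid `k`, and $`\beta `$ as given in the text.

```python
import numpy as np

beta = 82.25   # phase mismatch, matched to the G_{2,3} peak as in the text

def split_step(w, v, z, dz, k, d_func):
    """One second-order split step for Eqs. (1): exact linear (diffraction)
    half-steps in Fourier space around a full fourth-order Runge-Kutta step
    for the z-dependent nonlinear coupling; d_func(z) is the grating sign."""
    def linear_half(w, v):
        w = np.fft.ifft(np.fft.fft(w) * np.exp(-0.25j * k**2 * dz))   # 1/2 over dz/2
        v = np.fft.ifft(np.fft.fft(v) * np.exp(-0.125j * k**2 * dz))  # 1/4 over dz/2
        return w, v

    def rhs(w, v, zz):   # w_z = i d w* v e^{i beta z},  v_z = i d w^2 e^{-i beta z}
        d = d_func(zz)
        return (1j * d * np.conj(w) * v * np.exp(1j * beta * zz),
                1j * d * w**2 * np.exp(-1j * beta * zz))

    w, v = linear_half(w, v)
    k1 = rhs(w, v, z)
    k2 = rhs(w + 0.5 * dz * k1[0], v + 0.5 * dz * k1[1], z + 0.5 * dz)
    k3 = rhs(w + 0.5 * dz * k2[0], v + 0.5 * dz * k2[1], z + 0.5 * dz)
    k4 = rhs(w + dz * k3[0], v + dz * k3[1], z + dz)
    w = w + dz / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    v = v + dz / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return linear_half(w, v)
```

Starting from the Gaussian input of Eq. (3), $`w=A_we^{-x^2/10}`$ and $`v=0`$, on a grid $`x`$ with $`k=2\pi \,\mathrm{fftfreq}(N,\mathrm{\Delta }x)`$, repeated calls to this step should reproduce the qualitative behaviour shown in Figs. 2 and 3.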
First, we study the small-amplitude limit, in which the FW is injected with a low amplitude. Figures 2(a,b) show an example of the evolution of the FW and SH in this effectively linear regime. As is clearly seen from Fig. 2(b), the SH wave is excited, but both beams eventually diffract. When the amplitude of the input beam exceeds a certain threshold, self-focusing and localisation should be observed for both harmonics. Figures 2(c,d) show an example of the evolution of a strong input FW beam and its corresponding SH. Again the SH is generated, but now the nonlinearity is so strong that it leads to self-focusing and mutual self-trapping of the two fields, resulting in a spatially localised two-component soliton, despite the continuous scattering by the quasiperiodic QPM grating. It is important to notice that the two-component localised beam created by the self-trapping effect is itself quasiperiodic. As a matter of fact, after an initial transient its amplitude oscillates in phase with the quasiperiodic QPM modulation $`d(z)`$. This is illustrated in Fig. 3, where we show in more detail the peak intensities in the asymptotic regime of the evolution. Since the oscillations shown in Fig. 3 are in phase with the oscillations of the QPM grating $`d(z)`$, their spectra should be similar. This is confirmed by Fig. 4, which gives the spectrum of the peak intensity $`|w(z,0)|^2`$ of the FW. Note that the Fibonacci peak at $`k=82.25`$ is suppressed (or reduced) because the identical mismatch $`\beta `$ down-converts it to the dc component. Sum and difference wavenumbers between $`\beta `$ and $`G_{m,n}`$ appear, which are generated by the nonlinearity. For example, the component at $`k=62.8`$ is the difference between $`\beta =82.25`$ and $`G_{0,1}=19.42`$. Our numerical results show that the quasiperiodic envelope solitons can be generated for a broad range of the phase mismatch $`\beta `$. The amplitude and width of the solitons depend on the effective mismatch, which is the separation between $`\beta `$ and the nearest strong peak $`G_{m,n}`$ in the Fibonacci QPM grating spectrum. Thus, low-amplitude broad solitons are excited for $`\beta `$-values in between peaks, whereas high-amplitude narrow solitons are excited when $`\beta `$ is close to a strong peak, as shown in Figs. 2(c,d). The existence of spatially localised self-trapped states in nonlinear quasiperiodic media should not depend on the particular kind of nonlinearity. The dependence on $`\beta `$ observed here for the $`\chi ^{(2)}`$ gratings arises simply because the “real” strength of the $`\chi ^{(2)}`$ nonlinearity is inversely proportional to the phase mismatch. In fact, it is well known that for large values of the mismatch $`\beta `$ the quadratic nonlinearity becomes effectively cubic. Thus, our findings are directly applicable to nonlinear optical superlattices in cubic (or $`\chi ^{(3)}`$) nonlinear media. To analyse in more detail the transition between the linear (diffraction) and nonlinear (self-trapping) regimes, we have made a series of careful numerical simulations. In Fig. 5 we show the transmission coefficients and the beam widths at the output of the crystal versus the intensity of the FW input beam, for a variety of $`\beta `$-values. These dependences clearly illustrate the universality of the generation of localised modes for varying strength of nonlinearity, i.e. a quasiperiodic soliton is generated only for sufficiently high amplitudes. This is of course a general phenomenon also observed in many nonlinear isotropic media. However, here the self-trapping occurs for quasiperiodic waves, with the quasiperiodicity being preserved in the variation of the amplitude of both components of the soliton. Numerical simulations for other values of the phase mismatch $`\beta `$ reveal the same basic property of quasiperiodic self-trapping: spatial solitons are formed in Fibonacci quadratic nonlinear slab waveguides above a certain power threshold, and such solitons are always quasiperiodic, i.e. they exhibit large-amplitude oscillations along $`z`$ which are composed of a mixing of the two incommensurate Fibonacci wavenumbers $`\kappa _1`$ and $`\kappa _2`$. The amplitude and width of these solitons depend on the difference between the phase mismatch $`\beta `$ and the nearest strong peak $`G_{m,n}`$ in the Fibonacci spectrum. Finally, we would like to emphasize that the phenomenon described here is qualitatively different from the propagation of topological and nontopological kinks in disordered and quasiperiodic nonlinear media.
Such kinks can be well approximated by an effective structureless particle, which either preserves its identity, as in the case of topological kinks, or decays rapidly into radiation. In conclusion, we have analysed SHG, self-focusing, and nonlinear beam propagation in Fibonacci optical superlattices with a quadratic nonlinear response. We have predicted spatial self-trapping of quasiperiodic waves and the formation of quasiperiodic solitons. Such solitons have a localised envelope that traps the random-phase components through the phase- and frequency-locking effect of strong nonlinearity, and whose amplitude undergoes clearly detectable quasiperiodic oscillations. The results presented here allow the concepts of self-localisation and self-modulation of nonlinear waves to be extended to a broader class of spatially inhomogeneous media, and similar effects can also be found in systems of a different physical context. The authors acknowledge support from the Danish Technical Research Council (Talent Grant no. 9800400), the Danish Natural Science Research Council (Grant no. 9600852), and the Department of Industry, Science, and Tourism (Australia).
no-problem/9907/hep-ph9907280.html
ar5iv
text
## 1 Introduction

Before the prediction of a perturbative QCD calculation (“parton-level” cross section) can be compared to a measured “hadron-level” jet cross section, the size of non-perturbative contributions (“hadronization corrections”) has to be estimated. Advanced techniques based on “power corrections” are presently only available for the mean values of event shape variables and predict very large hadronization corrections for most of the HERA kinematic range, preventing these observables from being used for stringent tests of perturbative QCD. For such tests observables with small hadronization corrections are needed, for example the production rate of jets with high transverse energies<sup>1</sup> (Throughout the whole paper “transverse energy” always refers to transverse energies in the Breit frame, where “transverse” means the direction perpendicular to the z-axis, which is given by the direction of the incoming proton and the exchanged virtual photon.). Predictions of hadronization corrections to these observables are presently only available in the form of phenomenological fragmentation models such as the Lund string model (as implemented in JETSET) and the HERWIG cluster fragmentation model. These models are implemented in event generators that include leading order matrix elements and a perturbative parton cascade which is matched to the hadronization model. Based on these models, hadronization corrections are compared for different jet definitions, including a new angular ordered jet clustering algorithm (“Aachen algorithm”). The model dependence and the dependence on model parameters are investigated. These model estimates are usually needed for comparisons of perturbative QCD in next-to-leading order (NLO) to measured data distributions. We therefore also discuss the compatibility of jet topologies in parton cascade models and in NLO. Finally we review the uncertainties of NLO predictions and compare their size to the uncertainties of the estimates of hadronization corrections.

## 2 Definitions

The present study includes four different jet clustering algorithms which differ in two aspects of how they define jets. The first aspect is the ordering in the clustering of particles. This is either done in the order of smallest relative transverse momenta (“$`k_\perp `$ ordering”) or in the order of smallest angles (“angular ordering”) between particles. The second aspect concerns the definition of the jets inside the event. In one case all particles are clustered either to one of the hard jets or to the proton remnant (“exclusive” definitions), while in the other case only some particles are clustered into the hard jets, while other particles remain outside the hard jets (“inclusive” definitions). The following four jet definitions are used, all in the Breit frame:

* the exclusive $`k_\perp `$ ordered algorithm as proposed in.
* the Cambridge algorithm as proposed in, but modified for DIS to consider the proton remnant as a particle of infinite momentum according to the prescription in. This algorithm is similar to the exclusive $`k_\perp `$ algorithm but uses angular ordering.
* the inclusive $`k_\perp `$ ordered algorithm as proposed in.
* the Aachen algorithm: this is a new jet definition, invented for these comparisons. In analogy to the modification from the exclusive $`k_\perp `$ algorithm to the Cambridge algorithm, we have modified the inclusive $`k_\perp `$ algorithm to obtain an inclusive algorithm with angular ordering.
The definition is very simple: particles with smallest $`R_{ij}^2=\mathrm{\Delta }\eta _{ij}^2+\mathrm{\Delta }\varphi _{ij}^2`$ are successively clustered into jets, until all distances $`R_{ij}`$ between jets are above some value $`R_0`$ (as for the inclusive $`k_\perp `$ algorithm we set $`R_0=1`$). The jets with highest $`E_T`$ are considered in the analysis. In dijet production in the Breit frame this definition is, at NLO, identical to the inclusive $`k_\perp `$ algorithm. For the exclusive jet definitions the recombination of particles is performed in the “$`E`$-scheme” (addition of four-vectors), while for the inclusive definitions it is done in the “$`E_T`$-scheme” (the $`E_T`$ of the jet is the scalar sum of the particle $`E_T`$s). To obtain jet cross sections of similar size for all jet definitions the following parameters are used<sup>2</sup> (These parameters are identical to those used in a recent dijet analysis by the H1 collaboration.):

* for the inclusive jet definitions jets are required to have $$E_{T\mathrm{jet}}>5\text{GeV},E_{T\mathrm{jet1}}+E_{T\mathrm{jet2}}>17\text{GeV}$$
* for the exclusive jet definitions, the resolution of jets in the event is defined by the resolution parameter $`y_{\mathrm{cut}}`$ $$y_{\mathrm{cut}}<k_{ij}^2/100\text{GeV}^2\text{with}\text{ }k_{ij}^2=2\mathrm{min}(E_i^2,E_j^2)(1-\mathrm{cos}\theta _{ij})\text{and}\text{ }y_{\mathrm{cut}}=1.$$

Only events with (at least) two jets in the central region of the detector acceptance ($`-1<\eta _{\mathrm{jet},\mathrm{lab}}<2.5`$) are accepted. The studies are performed in the kinematic range $`0.2<y<0.6`$ and $`150<Q^2<5000\text{GeV}^2`$ (unless stated otherwise). We define the hadronization corrections to an observable $`𝒪`$ as the ratio of its value in a perturbative calculation (“parton-level”: $`𝒪_{\mathrm{parton}}`$) and its value in a calculation including perturbative and non-perturbative contributions (“hadron-level”: $`𝒪_{\mathrm{hadron}}`$), $`c_{\mathrm{hadr}.\mathrm{corr}.}=𝒪_{\mathrm{parton}}/𝒪_{\mathrm{hadron}}.`$ All predictions have been obtained by the QCD models HERWIG5.9 (using leading order matrix elements (LO ME), parton shower and cluster fragmentation), LEPTO6.5 (LO ME, parton shower and string fragmentation) and ARIADNE4.08 (LO ME, dipole cascade and string fragmentation). The calculations have been performed for the HERA data-taking in 1997 ($`820\text{GeV}`$ protons collided with $`27.5\text{GeV}`$ positrons) using CTEQ4L parton distributions and the 1-loop formula for the running of $`\alpha _s`$. The LEPTO predictions are obtained without the soft color interaction model. The NLO calculations are performed using the program DISENT in the $`\overline{\mathrm{MS}}`$-scheme for CTEQ4M parton distributions and the 2-loop formula for the running of $`\alpha _s`$. The renormalization scale is set to the average transverse energy of the dijet system, $`\mu _r=\overline{E}_T`$, the factorization scale to the mean $`E_T`$ of the jets, $`\mu _f=E_T\approx 14\text{GeV}`$.
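Before turning to the results, the two clustering distances used in these definitions can be made concrete. The following is a minimal, unoptimized sketch of ours (an $`O(N^3)`$ illustration only; the actual analyses use dedicated implementations, and the particle representation and the simplified $`E_T`$-weighted recombination are our assumptions):

```python
import numpy as np

def k_ij2(E_i, E_j, theta_ij):
    """Exclusive-algorithm distance: k_ij^2 = 2 min(E_i^2, E_j^2)(1 - cos theta_ij)."""
    return 2.0 * min(E_i, E_j)**2 * (1.0 - np.cos(theta_ij))

def R_ij2(eta_i, phi_i, eta_j, phi_j):
    """Aachen/inclusive distance: R_ij^2 = (delta eta)^2 + (delta phi)^2."""
    dphi = (phi_i - phi_j + np.pi) % (2.0 * np.pi) - np.pi   # wrap to [-pi, pi]
    return (eta_i - eta_j)**2 + dphi**2

def aachen_cluster(particles, R0=1.0):
    """Greedy Aachen clustering: merge the pair with smallest R_ij until all
    pairwise distances exceed R0.  particles: dicts with keys ET, eta, phi.
    A naive ET-weighted eta/phi average stands in for the full ET-scheme."""
    jets = list(particles)
    while len(jets) > 1:
        r2, i, j = min((R_ij2(a["eta"], a["phi"], b["eta"], b["phi"]), i, j)
                       for i, a in enumerate(jets)
                       for j, b in enumerate(jets) if i < j)
        if r2 > R0**2:
            break
        a, b = jets[i], jets[j]
        et = a["ET"] + b["ET"]
        merged = {"ET": et,
                  "eta": (a["ET"] * a["eta"] + b["ET"] * b["eta"]) / et,
                  "phi": (a["ET"] * a["phi"] + b["ET"] * b["phi"]) / et}
        jets = [p for n, p in enumerate(jets) if n not in (i, j)] + [merged]
    return sorted(jets, key=lambda p: p["ET"], reverse=True)
```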
## 3 Size and Model Dependence of the Predictions

The hadronization corrections as defined above are shown in Fig. 1 for the HERWIG model as a function of $`Q^2`$ for the different jet definitions. While at $`Q^2>1000\text{GeV}^2`$ all jet definitions have similar and reasonably small corrections (below 10%), at smaller $`Q^2`$ large differences are seen. In all cases the corrections are smaller for inclusive jet definitions than for exclusive definitions, and smaller for $`k_\perp `$ ordered algorithms than for angular ordered ones. Only the inclusive $`k_\perp `$ algorithm shows a small $`Q^2`$ dependence and acceptably small corrections (below 10%), even down to very small $`Q^2`$ values. For this definition we will study differential distributions in more detail. In Fig. 2 the hadronization corrections from different models are shown as a function of the average transverse jet energy $`\overline{E}_T`$ and the reconstructed parton momentum fraction $`\xi `$ in different regions of $`Q^2`$. While the corrections for the $`\xi `$ distribution are flat in all $`Q^2`$ regions, we observe a slight decrease towards higher $`\overline{E}_T`$. The predicted corrections agree within 3% between the different models. The predictions of these models may, of course, depend on parameters that define the perturbative parton cascade, as well as on parameters of the hadronization model. We have investigated the sensitivity of the LEPTO/JETSET model predictions to variations of some parameters as listed in Table 1. Fig. 3 gives an overview of the effects of these variations, which are seen to be small in all cases (below the 4% level).

## 4 Parton Cascade Models vs. NLO Calculations

There is no unique way to separate perturbative and non-perturbative contributions in theoretical calculations. A consistent treatment requires a well-defined matching of both contributions, e.g. by the introduction of an “infrared matching scale”. This, however, is not (yet) available for high $`E_T`$ jet cross sections in DIS. The only available predictions are those of the hadronization models mentioned above. The following question therefore arises: “What are the uncertainties if we nevertheless use these model predictions for estimating the hadronization corrections to be applied to NLO calculations?” Our attempt to tackle this problem is to assume that non-perturbative effects alter the production rates of multi-jet events only due to the change of the final state topology. Hadronization effects, for example, cause particles to migrate out of the phase space considered for a particular jet, leading to a decrease of the jet's transverse energy. For a fixed $`E_{T\mathrm{jet}}`$ selection cut the resulting jet cross section will be reduced in this case. The argument is therefore the following: if the final states in the NLO calculation and the parton shower models show the same properties, the same influence of hadronization processes is to be expected for both. In the following we address this question by comparing the predictions of the parton cascade models and NLO for the distribution of jets inside the event (angular jet distributions), the internal structure of the single jets (subjet multiplicities) and the dependence of the dijet cross section on the $`R_0`$ parameter in the jet definition.

### 4.1 Higher Order Corrections to Angular Jet Distributions

The pseudorapidity distributions of jets are shown in Fig. 4 for the forward and the backward jet in the HERA laboratory frame (top), for the forward jet in the Breit frame (bottom left) and for the average jet pseudorapidity of the dijet system (bottom right).
Compared are the predictions from the leading-order matrix elements (LO, necessarily the same for DISENT, HERWIG, LEPTO and ARIADNE) and those including higher order corrections, either from the next-to-leading order or as given by parton showers (HERWIG, LEPTO) or the dipole cascade (ARIADNE), for the inclusive $`k_\perp `$ algorithm. All angular jet distributions are shifted by the NLO corrections towards the forward (i.e. the proton) direction, a feature which is reproduced by all parton cascade models (with the exception of the shift in $`\eta _{\mathrm{forward},\mathrm{lab}}`$ by ARIADNE). We therefore do not derive any uncertainty on the estimation of hadronization corrections for NLO from the study of the angular jet distributions.

### 4.2 Internal Jet Structure

Another test of the comparability of the different approaches is the internal structure of jets. We decided to compare the average number of subjets that are resolved at a resolution scale $`y_{\mathrm{cut}}`$ which is a fraction of the transverse jet energy (a detailed definition of this observable can be found in the literature). The average number of subjets is shown in Fig. 5 as a function of the resolution parameter $`y_{\mathrm{cut}}`$ in different regions of $`E_{T\mathrm{jet}}`$ for an inclusive jet sample in the same $`\eta _{\mathrm{lab}}`$ region where the dijet sample is defined. These subjet multiplicities are sensitive to perturbative processes at larger $`y_{\mathrm{cut}}`$ values, while towards smaller $`y_{\mathrm{cut}}`$ non-perturbative contributions become increasingly important. At smaller $`y_{\mathrm{cut}}`$ the $`𝒪(\alpha _s^2)`$ calculation<sup>3</sup> (While the $`𝒪(\alpha _s^2)`$ calculation makes next-to-leading order predictions for dijet cross sections, it describes the internal structure of jets only at leading order.) behaves very differently from the parton cascade models: in the latter the number of subjets is limited by the available number of partons, due to the cutoff in the parton shower, while the $`𝒪(\alpha _s^2)`$ calculation smoothly approaches the divergence at $`y_{\mathrm{cut}}\to 0`$. These differences become smaller towards higher $`E_T`$, where both approaches show similar qualitative behavior, although significant differences remain. In particular, the dipole cascade in ARIADNE gives a much smaller number of subjets than the $`𝒪(\alpha _s^2)`$ calculation. The best agreement with the $`𝒪(\alpha _s^2)`$ calculation is observed for HERWIG at larger values of $`y_{\mathrm{cut}}`$ (which characterize the last steps in the clustering procedure). Larger values of $`y_{\mathrm{cut}}`$ are hence connected to the coarse structure of jets, which is the region where parton cascades and NLO can be compared. It is important to note that the spread of the models in the relevant region of larger $`y_{\mathrm{cut}}`$ is of the same order as their difference from the NLO calculation. In the previous section we demonstrated that the predicted hadronization corrections agree well between the different models. We therefore conclude that (1) the hadronization corrections are not sensitive to differences in the subjet multiplicities, and (2) the observed differences between model predictions and NLO in the subjet multiplicities do not enter as an uncertainty in the estimation of the hadronization corrections to NLO predictions.
### 4.3 Radius Dependence of the Dijet Cross Section

The definition of the inclusive $`k_\perp `$ algorithm contains a single free parameter $`R_0`$ which defines the maximal distance within which particles are clustered in each step. It follows that the final jets are all separated by distances above $`R_0`$. The dependence of the dijet cross section on the value of $`R_0`$ in the jet definition is directly correlated with the broadness of the jets. In Fig. 6 we compare the $`R_0`$ dependence of the dijet cross section for parton jets (left) and for hadron jets (right). Shown is the ratio of the dijet cross section at $`R_0`$ to the dijet cross section at $`R_0=1`$ in the range $`0.6<R_0<1.2`$. For the (broader) hadron jets a large $`R_0`$ dependence is observed (HERWIG: $`\pm 13\%`$ for a reasonable variation $`0.8<R_0<1.2`$ around the preferred value of $`R_0=1`$). This dependence is reduced for the parton jets, but slightly different for NLO ($`\pm 4\%`$) than for the parton cascade models (HERWIG: $`\pm 7\%`$). The different behavior of the NLO dijet cross sections and the model predictions for the same observable as a function of $`R_0`$ constitutes an uncertainty when applying the model predictions of the hadronization corrections to the NLO calculations. This difference (and the corresponding uncertainty) is, however, below 5%.

## 5 Properties of the Perturbative Cross Sections

In this section we give a brief overview of some properties and uncertainties of the perturbative NLO cross sections. Fig. 7 (left) shows the size of the NLO corrections (i.e. the k-factor, which is defined as the ratio of the NLO and the LO cross section) for the inclusive $`k_\perp `$ algorithm as a function of $`Q^2`$. To be sensitive to the fraction of the $`𝒪(\alpha _s^2)`$ contributions, both the NLO and the LO calculations have been performed using the same (CTEQ4M) parton densities and the 2-loop formula for the running of $`\alpha _s`$. The k-factor is shown for two different choices of the renormalization scale: $`\mu _r=\overline{E}_T`$ and $`\mu _r=Q`$. For both scales the k-factor shows a strong dependence on $`Q^2`$. While at large $`Q^2`$ the NLO corrections are small, they become sizeable for $`Q^2<100\text{GeV}^2`$. Throughout it is seen that the k-factor is smaller for a renormalization scale of the order of the transverse jet energies. It is usually assumed that the scale dependence of a cross section is somehow correlated with the possible size of higher order corrections, and is therefore a measure of the uncertainty. Fig. 7 (right) shows the relative change of the dijet cross section when the renormalization scale $`\mu _r^2`$ is changed by a factor of four up and down. The comparison is made for the scales $`\mu _r=\overline{E}_T`$ and $`\mu _r=Q`$. The dependence on the renormalization scale becomes large at small $`Q^2`$. Only for $`Q^2>100\text{GeV}^2`$ is this dependence reasonably small (below 10%). Over the whole range of $`Q^2`$ the renormalization scale dependence is smaller for the scale $`\mu _r=\overline{E}_T`$ than for the scale $`\mu _r=Q`$. The same variation has been studied for the factorization scale and yields a negligible dependence (below 2%) over the whole range.

## 6 Summary and Conclusions

Hadronization corrections to jet cross sections in deep-inelastic scattering have been investigated based on predictions from the hadronization models HERWIG and JETSET as implemented in the event generators HERWIG, LEPTO and ARIADNE.
It is seen that these corrections are smaller for inclusive and $`k_\perp `$ ordered jet definitions as compared to exclusive and angular ordered algorithms. For reasonably large transverse jet energies, the inclusive $`k_\perp `$ algorithm has hadronization corrections below 10% over very large regions of phase space. The predictions from different models are in very good agreement and show only a weak dependence on the settings of specific model parameters. For the inclusive $`k_\perp `$ algorithm the corresponding uncertainties are not larger than 4%. A rigorous and consistent treatment of hadronization corrections for perturbative next-to-leading order (NLO) predictions requires a well-defined matching of a hadronization model to the NLO calculation. This is, however, not (yet) available. Any other approach can only be an approximation and is subject to various uncertainties. Based on the assumption that these uncertainties are directly connected to the differences in the final state topology in parton cascade models and in NLO calculations, we have compared their predictions for various topological variables. It is seen that changes in angular jet distributions w.r.t. leading order calculations are very similar for the parton cascade models and NLO and hence do not contribute to the uncertainty. The subjet multiplicities show differences in their behavior between the NLO calculations and the parton cascade models. These differences can, however, be shown to have no significant influence on the predictions of the hadronization corrections. The dependence on the radius parameter $`R_0`$, which is directly correlated with the broadness of the jets, turns out to be different for NLO and the parton cascades, leading to an uncertainty of less than 5%. We conclude that for the inclusive $`k_\perp `$ jet algorithm at sufficiently large $`Q^2`$ and $`E_{T\mathrm{jet}}`$ the hadronization corrections are under control, with uncertainties not larger than those of the perturbative NLO calculations. This will allow meaningful tests of perturbative QCD with a precision of better than 10%.
no-problem/9907/hep-ph9907306.html
ar5iv
text
## 1 Introduction

The absence of flavor changing neutral currents at tree level is one of the most severe criteria for selecting among the various candidates which could generalize the Glashow-Salam-Weinberg model of electroweak interactions. They are in general linked to higher order corrections, and their experimental quest is of high interest for the pursuit of theoretical investigations and the search for hints of new physics beyond the standard model. Among the decays of particular interest involving flavor changing neutral currents are $$K^+\to \pi ^+\ell \overline{\ell }$$ (1) where the $`\ell `$'s stand for leptons which can be charged (electrons or muons) or neutral (neutrinos). One event compatible with a two-neutrino final state has recently been observed, corresponding to a branching ratio $$BR(K^+\to \pi ^+\nu \overline{\nu })_{exp}=4.2_{-3.5}^{+9.7}\times 10^{-10},$$ (2) while recent theoretical calculations find, for massless neutrinos, an absolute upper bound $$BR(K^+\to \pi ^+\nu \overline{\nu })_{th}<1.22\times 10^{-10},$$ (3) and the authors of claim “a clear conflict with the Standard Model if $`BR(K^+\to \pi ^+\nu \overline{\nu })`$ should be measured at $`2\times 10^{-10}`$”, that is, less than one-half the experimental average (2). At the same time, we are witnessing a dramatic change in our perception of neutrinos, since there seems to be more and more compelling evidence that their flavor eigenstates oscillate during their travel across vacuum or matter, which is the best model-independent sign that they are massive. The question that can naturally be raised is whether massive neutrinos can substantially increase the rate of the above decay in the standard framework, especially through penguin-like diagrams where the Higgs boson couples to the internal $`W`$ gauge boson or to the very massive top quark. The computations have just been performed in the case of $`K_L\to \pi ^0\nu \overline{\nu }`$ and showed that the influence of massive neutrinos is totally negligible when mass and flavour leptonic eigenstates coincide, and has a relative upper bound of no more than $`1/10`$ when flavour mixing is allowed. Rare semi-leptonic $`K`$ decays consequently provide a good testing ground for physics beyond the standard model. I investigate below the influence of neutrino masses on the decay $`K^+\to \pi ^+\nu \overline{\nu }`$ in the framework of the extension of the Glashow-Salam-Weinberg model to an $`SU(2)_L\times U(1)`$ gauge theory of $`J=0`$ mesons proposed in. It is built with a maximum compatibility with the standard model in the quark-gauge sector<sup>1</sup> (Relying on this, the contributions involving gauge bosons will be assumed to be of the same order of magnitude as when computed in the standard model.): chiral and electroweak properties of quarks are included from the start<sup>2</sup> (The $`SU(2)_L\times U(1)`$ electroweak group is embedded into the larger chiral $`U(N)_L\times U(N)_R`$ group ($`N/2`$ is the number of generations), which is only possible when $`N`$ is even; this is to be compared with chiral perturbation theory, which is always performed for an odd (three) number of flavours; in the present framework, the relevant chiral breaking appears instead to be the one of $`SU(2)_L\times SU(2)_R`$ into its diagonal subgroup, the custodial $`SU(2)`$.)
; it however differs in the Higgs-scalar sector in that the three Goldstones of the broken symmetry(ies) are now related through a scaling factor (see (7) below) to pseudoscalar mesons, and the Higgs boson is naturally incorporated as one among the $`J=0`$ mesons, which are all considered to transform like quark-antiquark composite fields. The process under concern can accordingly now also be mediated by the Higgs boson which, because of the Cabibbo-Kobayashi-Maskawa (CKM) rotation, connects, on one side, through the mexican-hat potential, pseudoscalar mesons with different flavors<sup>3</sup> (The occurrence of these flavor changing neutral currents has already been mentioned in the decays of the $`Z`$ boson into two leptons and two pseudoscalar mesons.) and, on the other side, couples through Yukawa couplings to massive neutrinos.

## 2 The (non-standard) Higgs contribution to $`K^+\to \pi ^+\ell \overline{\ell }`$

### 2.1 Theoretical framework

The theoretical framework has been set in and (section 2); we work in the approximation where the electroweak quadruplet of $`J=0`$ mesons containing the Higgs boson and the three (pseudoscalar) Goldstone bosons is<sup>4</sup> (The $`N^2/2`$ quadruplets of $`J=0`$ mesons corresponding to $`N`$ flavors of quarks are labeled by $`(N/2)^2`$ real matrices $`𝔻`$ of dimension $`N/2\times N/2`$ and $`𝔻_1`$ is the corresponding unit matrix; see for the notations.) $$(H,\vec{G})=(𝕊^0,\vec{ℙ})(𝔻_1);$$ (4) this is akin to taking as non-vanishing only the flavor-diagonal quark condensates and to choosing all of them to be identical<sup>5</sup> (In this limit, the two neutral kaons are not coupled to the Higgs boson, which consequently does not participate, in particular, in their decays into $`\pi ^0\ell \overline{\ell }`$.). The quartic term in the “mexican hat” potential triggers, after symmetry breaking, a coupling between the Higgs boson and two of the three Goldstones of the broken electroweak symmetry (or, equivalently, of the breaking of $`SU(2)_L\times SU(2)_R`$ into the custodial $`SU(2)_V`$). Because of the CKM rotation, the Goldstones are not flavor eigenstates but mixtures of them, which entails a new type of flavour changing neutral currents, connecting in particular $`K^+`$ and $`\pi ^+`$ mesons. On the other side, the Yukawa couplings that are introduced between leptons and the real quadruplet (complex doublet) (4) to give Dirac masses to the former couple the Higgs boson to leptonic mass eigenstates (which may not be flavour eigenstates in the case of neutrinos). So, working with two generations ($`N=4`$), the two vertices involved in the diagram of Fig. 1 read
$$V_{hK\pi }=i\sqrt{2}c_\theta s_\theta \lambda v,$$ (5) $$V_{h\ell \overline{\ell }}=i\frac{m_\ell ^D}{v/\sqrt{2}},$$ (6) where $`\lambda `$ is the coupling constant of the quartic term in the mexican-hat potential, $`c_\theta `$ and $`s_\theta `$ are the cosine and sine of the Cabibbo angle, $`v/\sqrt{2}`$ is the vacuum expectation value of the Higgs boson, and $`m_\ell ^D`$ is the Dirac mass of the outgoing lepton. The mass of the Higgs boson is $`M_h^2=\lambda v^2`$; as it is supposed to be much larger than the mass of the incoming mesons, we shall neglect the momentum dependence of the Higgs propagator, which makes the amplitude independent of $`\lambda `$.

Fig. 1: The Higgs-mediated decay $`K^+\to \pi ^+\ell \overline{\ell }`$

### 2.2 Calculation of the decay rate for $`K^+\to \pi ^+\nu \overline{\nu }`$

The calculation includes the normalization factor $$\text{b}=\frac{\langle H\rangle }{2f_0}=\frac{v}{2\sqrt{2}f_0}$$ (7) which relates the fields of dimension $`[mass]`$ in the Lagrangian to observed asymptotic mesons; $`f_0`$ is the generic leptonic decay constant of pseudoscalar mesons, which we take to be the same for $`K`$ and $`\pi `$; b modifies accordingly (see) the phase-space measure for the outgoing pion, which becomes itself proportional to $`\text{b}^2`$; this makes the decay rate (8) below proportional to $`1/\text{b}^2`$, that is to $`f_0^2`$, and yields a $`G_F^3`$ dependence even though one deals with a tree-level diagram. The decay rate reads $$\mathrm{\Gamma }_{K^+\to \pi ^+\nu \overline{\nu }}=(s_\theta c_\theta )^2\frac{\sqrt{2}f_0^2G_F^3(m_\nu ^D)^2}{\pi ^3M_K^3}\left[\frac{(M_K^2-M_\pi ^2)^3}{3}+\frac{M_K^2+M_\pi ^2}{2}\left(M_K^4-M_\pi ^4-2M_K^2M_\pi ^2\mathrm{ln}\frac{M_K^2}{M_\pi ^2}\right)\right],$$ (8) where neglecting the neutrino masses in the phase-space integration enabled us to obtain an analytic expression, in which the sole dependence on the Dirac neutrino mass $`m_\nu ^D`$ comes from their coupling to the Higgs boson. The value of the corresponding branching ratio is plotted in Fig. 2 as a function of $`m_\nu ^D`$.

Fig. 2: The branching ratio $`\mathrm{\Gamma }_{K^+\to \pi ^+\nu \overline{\nu }}/\mathrm{\Gamma }_{K^+\to \mathrm{all}}`$ as a function of $`m_\nu ^D`$

### 2.3 Influence of the neutrino spectrum

We have supposed a hierarchical scheme for the neutrino masses and only considered the coupling of the Higgs to the heaviest one. In case the non-sterile neutrinos are roughly degenerate, the three corresponding, nearly equivalent, amplitudes should be added; as a result, Fig. 2 should be read for $`3m_\nu ^D`$ instead of $`m_\nu ^D`$. As detecting and identifying the outgoing neutrinos by their flavor properties is well beyond present experimental ability, there is no purpose in introducing the leptonic mixing matrix and studying a precise channel.
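Equation (8) is easy to evaluate numerically. The sketch below is our own cross-check (the input constants, in particular the value and normalization convention of $`f_0`$, are our assumptions and may differ from the ones used for Fig. 2); it converts the rate into a branching ratio with the $`K^+`$ lifetime and reproduces the $`10^{-10}`$ scale for Dirac masses in the MeV range.

```python
import numpy as np

# Our input choices (the paper's exact conventions, e.g. for f0, may differ):
GF = 1.166e-5                 # Fermi constant [GeV^-2]
MK, MPI = 0.4937, 0.1396      # K+ and pi+ masses [GeV]
F0 = 0.093                    # generic pseudoscalar decay constant [GeV]
S, C = 0.22, np.sqrt(1.0 - 0.22**2)          # Cabibbo sine and cosine
GAMMA_K = 6.582e-25 / 1.238e-8               # hbar [GeV s] / K+ lifetime [s]

def gamma_K_pi_nunu(m_nu):
    """Eq. (8): Higgs-mediated K+ -> pi+ nu nubar rate for a Dirac mass m_nu [GeV]."""
    bracket = ((MK**2 - MPI**2)**3 / 3.0
               + 0.5 * (MK**2 + MPI**2)
               * (MK**4 - MPI**4 - 2.0 * MK**2 * MPI**2 * np.log(MK**2 / MPI**2)))
    return ((S * C)**2 * np.sqrt(2.0) * F0**2 * GF**3 * m_nu**2
            / (np.pi**3 * MK**3) * bracket)

for m in (1e-3, 3e-3, 5.5e-3):               # 1, 3 and 5.5 MeV
    print(f"m_nu = {1e3 * m:.1f} MeV  ->  BR ~ {gamma_K_pi_nunu(m) / GAMMA_K:.1e}")
```

Since the rate scales as $`(m_\nu ^D)^2`$, the branching ratio grows quadratically with the neutrino mass.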
### 2.4 Calculation of the decay rates for $`K^+\to \pi ^+e^+e^{-}`$ and $`K^+\to \pi ^+\mu ^+\mu ^{-}`$

It is important to check that the same mechanism does not grossly alter the standard predictions in the cases where the two outgoing leptons are charged (electrons or muons). The calculations go along the same way, except that one has to keep the dependence on their masses when performing the phase space integral; the final evaluation can then only be numerical. One gets $$\mathrm{\Gamma }_{K^+\to \pi ^+e^+e^{-}}=6.4\times 10^{-29}\,\mathrm{GeV}$$ (9) and $$\mathrm{\Gamma }_{K^+\to \pi ^+\mu ^+\mu ^{-}}=8.37\times 10^{-25}\,\mathrm{GeV};$$ (10) this corresponds to the branching ratios $$BR(K^+\to \pi ^+e^+e^{-})=1.19\times 10^{-12}$$ (11) and $$BR(K^+\to \pi ^+\mu ^+\mu ^{-})=1.56\times 10^{-8},$$ (12) to be compared with the experimental values $$BR_{exp}(K^+\to \pi ^+e^+e^{-})=(2.74\pm 0.23)\times 10^{-7}$$ (13) and $$BR_{exp}(K^+\to \pi ^+\mu ^+\mu ^{-})=(5\pm 1)\times 10^{-8}.$$ (14) This shows that the non-standard Higgs contribution is negligible in the case of two outgoing electrons, and within the range of the experimental uncertainty in the case of two outgoing muons. In these last two cases, our mechanism is consequently not expected to modify present theoretical calculations.

## 3 An upper bound for the Dirac mass term of the heaviest non-sterile neutrino

We suppose that standard calculations could at the maximum account for a branching ratio $`BR_{K^+\to \pi ^+\nu \overline{\nu }}\simeq 1.5\times 10^{-10}`$; having no information on the relative sign between the standard amplitude and the new Higgs-mediated contribution, we can only say that the latter dominates when it yields a partial decay rate at least twice as large as the limit above, that is for $`m_\nu ^D\gtrsim 2.5\,\mathrm{MeV}`$. Then the experimental upper bound (2) entails $$m_\nu ^D\lesssim 5.5\,\mathrm{MeV}$$ (15) which is three times lower than the direct bound $$m_{\nu _\tau }<18.2\,\mathrm{MeV}.$$ (16) The average experimental value (2) corresponds to $$m_\nu ^D\simeq 3\,\mathrm{MeV}.$$ (17) This value is much higher than the generally presumed orders of magnitude for the masses of non-sterile neutrinos coming from recent results on solar and atmospheric neutrinos, setting the scales for the mass splittings, combined with absolute upper bounds on neutrino masses, in particular the ones coming from studying the spectrum of the $`\beta `$ decay of $`{}^{3}H`$ $$m_{\nu _e}<3.9\,\mathrm{eV}\,(90\%\,\mathrm{CL}).$$ The only known mechanism which could account for such a discrepancy between the observed neutrino mass and a Dirac mass term is the so-called “see-saw” mechanism, in which, in addition to the Dirac mass term, a Majorana mass term $`M\gg m_\nu ^D`$ is generated through a coupling to a new triplet of scalars; it is associated with a new scale of physics (right-handed gauge fields, grand unified theories etc). It is thus of interest to study this case here. However, as shown below, the only effect of invoking such a mechanism is to replace, in the expression of the decay rate (8), the Dirac neutrino mass $`m_\nu ^D`$ by that of the lightest Majorana eigenstate. The conclusion is thus maintained that the new process advocated here has to be considered only for neutrino masses in the $`MeV`$ range, and is negligible if they lie in the $`eV`$ range. After the diagonalisation of this general mass matrix involving the two types of mass terms, the two eigenstates are Majorana neutrinos $`\nu _1=i(\nu _L-(\nu _L)^c),\nu _2=\nu _R+(\nu _R)^c`$, with masses respectively $$m_1=(m_\nu ^D)^2/M,m_2\simeq M.$$ (18) But, while the left-handed neutrino is mostly made of the lightest eigenstate $`\nu _1`$, the right-handed one which, in the process under scrutiny, is coupled through the Higgs boson to the left-handed neutrino, is mostly made of the heaviest one $`\nu _2`$, which one does not expect to be produced here, plus only a very small admixture of the light one, in the proportion $`m_\nu ^D/m_2\simeq m_\nu ^D/M`$.
The decay rate (8) has thus now to be multiplied by $`(m_\nu ^D/M)^2=(m_1/m_\nu ^D)^2`$, where we have used (18). This has the global effect of replacing in (8) the factor $`(m_\nu ^D)^2`$ by the square of the light neutrino mass, $`m_1^2`$.

## 4 Conclusion

By providing a unified view of $`J=0`$ mesons which includes the Higgs boson, the extension of the electroweak standard model enables, in this sector, predictions which depart from the Glashow-Salam-Weinberg model. After $`K\to \pi \pi `$ decays and the disintegrations of the $`Z`$ boson into two pseudoscalar mesons and two leptons, we have extended here our investigations to the rare semi-leptonic decays of kaons; we have shown that, in this framework and unlike in the standard model, decay rates for $`K^+\to \pi ^+\nu \overline{\nu }`$ in agreement with present experimental bounds can be accounted for with neutrino masses in the $`MeV`$ range, which is not yet excluded experimentally.

Acknowledgments: It is a pleasure to thank S.T. Petcov and X.Y. Pham for suggestions, comments and advice, and also the referee for pointing out a misinterpretation in the first version of this work.
no-problem/9907/hep-ph9907470.html
ar5iv
text
## 1 Introduction and General Framework

Previous studies have exposed the photoproduction of (di-)jets in longitudinally polarized $`ep`$ collisions at HERA as a very promising and feasible tool to measure the parton densities of circularly polarized photons in ‘resolved’-photon processes. It should be stressed that these photonic parton distributions, defined as $$\mathrm{\Delta }f^\gamma (x,Q^2)\equiv f_+^{\gamma _+}(x,Q^2)-f_{-}^{\gamma _+}(x,Q^2),$$ (1) where $`f_+^{\gamma _+}`$ ($`f_{-}^{\gamma _+}`$) denotes the density of a parton $`f`$ with helicity ‘+’ (‘$`-`$’) in a photon with helicity ‘+’, are completely unmeasured so far. The $`\mathrm{\Delta }f^\gamma `$ contain information different from that included in the unpolarized $`f^\gamma `$ [defined by taking the sum in (1)], and their measurement is indispensable for a thorough understanding of the partonic structure of the photon. As in, we will exploit the predictions of two very different models for the $`\mathrm{\Delta }f^\gamma `$, and study the sensitivity of di-jet production to these unknown quantities. In the first case (‘maximal scenario’) we saturate the positivity bound $`|\mathrm{\Delta }f^\gamma (x,Q^2)|\le f^\gamma (x,Q^2)`$ at a low input scale $`\mu \approx 0.6\,\mathrm{GeV}`$, using the unpolarized GRV densities $`f^\gamma `$. The other extreme input (‘minimal scenario’) is defined by a vanishing hadronic input at the same scale $`\mu `$. We limit ourselves to leading order (LO) QCD, which is entirely sufficient for our purposes; however, both scenarios can be straightforwardly extended to the next-to-leading order (NLO) of QCD. The generic expression for polarized photoproduction of two jets with laboratory system rapidities $`\eta _1`$, $`\eta _2`$ reads in LO $$\frac{d^3\mathrm{\Delta }\sigma }{dp_Td\eta _1d\eta _2}=2p_T\sum _{f^e,f^p}x_e\mathrm{\Delta }f^e(x_e,\mu _f^2)x_p\mathrm{\Delta }f^p(x_p,\mu _f^2)\frac{d\mathrm{\Delta }\widehat{\sigma }}{d\widehat{t}},$$ (2) where $`p_T`$ is the transverse momentum of one of the two jets (which balance each other in LO), $`x_e\equiv p_T/(2E_e)\left(e^{-\eta _1}+e^{-\eta _2}\right)`$, and $`x_p\equiv p_T/(2E_p)\left(e^{\eta _1}+e^{\eta _2}\right)`$. The $`\mathrm{\Delta }f^p`$ in (2) denote the spin-dependent parton densities of the proton, and<sup>1</sup> (The direct (‘unresolved’) photon contribution to (2) is obtained by setting $`\mathrm{\Delta }f^\gamma (x_\gamma ,\mu _f^2)\to \delta (1-x_\gamma )`$ in (3).) $$\mathrm{\Delta }f^e(x_e,\mu _f^2)=\int _{x_e}^1\frac{dy}{y}\mathrm{\Delta }P_{\gamma /e}(y)\mathrm{\Delta }f^\gamma (x_\gamma =\frac{x_e}{y},\mu _f^2),$$ (3) where $`\mathrm{\Delta }P_{\gamma /e}`$ is the polarized ‘equivalent-photon’ spectrum, for which we will use<sup>2</sup> (Very recently the non-logarithmic corrections to (4) have been calculated in. They typically lead to an $`𝒪(10\%)`$ correction which, however, cancels to a large extent in the experimentally relevant spin asymmetry $`\mathrm{\Delta }\sigma /\sigma `$, and thus can be safely neglected here.) $$\mathrm{\Delta }P_{\gamma /e}(y)=\frac{\alpha _{em}}{2\pi }\left[\frac{1-(1-y)^2}{y}\right]\mathrm{ln}\frac{Q_{\mathrm{max}}^2(1-y)}{m_e^2y^2},$$ (4) with the electron mass $`m_e`$ and $`Q_{\mathrm{max}}^2=4\,\mathrm{GeV}^2`$. Needless to say, the unpolarized LO jet cross section $`d^3\sigma `$ is obtained by using the corresponding unpolarized quantities in (2)-(4).
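For orientation, Eqs. (3) and (4) amount to a one-dimensional convolution that is easily coded. The following is a minimal sketch of ours (the toy photon density in the usage line is for illustration only and is not one of the two scenarios):

```python
import numpy as np

ALPHA_EM = 1.0 / 137.036
M_E = 0.511e-3            # electron mass [GeV]
Q2_MAX = 4.0              # GeV^2, as in the text

def dP_gamma_e(y):
    """Polarized equivalent-photon spectrum of Eq. (4)."""
    return (ALPHA_EM / (2.0 * np.pi) * (1.0 - (1.0 - y)**2) / y
            * np.log(Q2_MAX * (1.0 - y) / (M_E**2 * y**2)))

def delta_f_e(x_e, delta_f_gamma, n=2000):
    """Eq. (3): convolute the spectrum with a polarized photon density."""
    y = np.linspace(x_e, 1.0, n)[1:-1]   # trim endpoints (log singular at y = 1)
    return np.trapz(dP_gamma_e(y) * delta_f_gamma(x_e / y) / y, y)

# Example with a toy density:  delta_f_e(0.1, lambda x: x * (1.0 - x))
```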
The appropriate LO $`2\to 2`$ partonic cross sections $`d(\mathrm{\Delta })\widehat{\sigma }`$ in (2) for the direct ($`\gamma b\to cd`$) and resolved ($`ab\to cd`$) cases can be found, for instance, in. The key feature of di-jet production is that a measurement of both jet rapidities allows for fully reconstructing the kinematics of the underlying hard subprocess and thus for determining the variable $`x_\gamma ^{\mathrm{OBS}}=\sum _{\mathrm{jets}}p_T^{\mathrm{jet}}e^{-\eta ^{\mathrm{jet}}}/(2yE_e)`$, which to LO equals $`x_\gamma =x_e/y`$, with $`y`$ being the fraction of the electron's energy taken by the photon. In this way it becomes possible to experimentally suppress the direct contribution by introducing some suitable cut $`x_\gamma ^{\mathrm{OBS}}\le 0.75`$, or by scanning different $`x_\gamma ^{\mathrm{OBS}}`$ bins<sup>3</sup> (To achieve a similar ‘separation’ for single-inclusive jet production one has to ‘look’ into different rapidity directions, since the direct (resolved) contribution dominates in the electron (proton) direction.).
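A minimal sketch of this estimator and of the resolved-enriched selection (our own illustration; the jet representation is an assumption) reads:

```python
import numpy as np

def x_gamma_obs(jets, y, E_e=27.5):
    """x_gamma^OBS = sum_jets p_T e^{-eta} / (2 y E_e); jets = [(p_T, eta), ...],
    with eta in the laboratory frame and the proton along +z; E_e in GeV."""
    return sum(pt * np.exp(-eta) for pt, eta in jets) / (2.0 * y * E_e)

def resolved_enriched(jets, y, cut=0.75):
    """Suppress the direct contribution by requiring x_gamma^OBS <= cut."""
    return x_gamma_obs(jets, y) <= cut
```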
The usefulness of this method was also demonstrated for the polarized case. In addition it was shown that the LO QCD parton level calculations nicely agree with ‘real’ jet production processes including initial and final state QCD radiation as well as non-perturbative effects such as hadronization, as modeled using the SPHINX Monte-Carlo. These results were all very encouraging; however, it was not studied how one can actually unfold the $`\mathrm{\Delta }f^\gamma `$ from such a measurement. This question will be addressed here. In this context the concept of ‘effective parton densities’, developed many years ago and recently revived, proves to be a useful tool. We will first recall the basic idea behind this approximation and subsequently discuss its extension to the spin-dependent case.

## 2 ‘Effective’ Parton Densities Revisited

Obviously it would be a very involved task to unfold the $`\mathrm{\Delta }f^\gamma `$ from a jet measurement since many subprocesses and combinations of parton densities contribute to the cross section (2). Some handy but still accurate approximation for (2) is certainly required to facilitate this job. In the unpolarized case a useful approximation procedure was developed in. It was observed that the ratios of the dominant, properly symmetrized, LO subprocesses are roughly independent of the c.m.s. partonic scattering angle $`\mathrm{\Theta }`$ and, most importantly, that for $`\mathrm{cos}\mathrm{\Theta }=\pm 1`$ all ratios tend to the same value determined by the color factors $`C_A`$ and $`C_F`$: $$\frac{\widehat{\sigma }_{qq^{\prime }}}{\widehat{\sigma }_{qg}}|_{\mathrm{cos}\mathrm{\Theta }=\pm 1}=\frac{\widehat{\sigma }_{qq}}{\widehat{\sigma }_{qg}}|_{\mathrm{cos}\mathrm{\Theta }=\pm 1}=\frac{\widehat{\sigma }_{qg}}{\widehat{\sigma }_{gg}}|_{\mathrm{cos}\mathrm{\Theta }=\pm 1}=\frac{C_F}{C_A}=\frac{4}{9}.$$ (5) Making use of (5) for all values of $`\mathrm{\Theta }`$ and introducing the ‘effective’ parton density combinations $$f_{\mathrm{eff}}^{(p,\gamma )}\equiv \sum _q[q^{(p,\gamma )}+\overline{q}^{(p,\gamma )}]+\frac{9}{4}g^{(p,\gamma )},$$ (6) the jet cross section factorizes into these densities times a single subprocess cross section (cf. Eq. (9) below). The ratios of the parton cross sections are depicted in Fig. 1, and, although they considerably deviate from $`4/9`$ for $`\mathrm{cos}\mathrm{\Theta }\ne \pm 1`$, the approximation works amazingly well at a level of about $`𝒪(10\%)`$ accuracy. Unfortunately this approximation has no straightforward extension to the spin-dependent case, as is obvious from Fig. 1. The ratios of the LO polarized subprocess cross sections obey $$\frac{\mathrm{\Delta }\widehat{\sigma }_{qq^{\prime }}}{\mathrm{\Delta }\widehat{\sigma }_{qg}}|_{\mathrm{cos}\mathrm{\Theta }=\pm 1}=\frac{4}{11},\frac{\mathrm{\Delta }\widehat{\sigma }_{qq}}{\mathrm{\Delta }\widehat{\sigma }_{qg}}|_{\mathrm{cos}\mathrm{\Theta }=\pm 1}=\frac{8}{33},\frac{\mathrm{\Delta }\widehat{\sigma }_{qg}}{\mathrm{\Delta }\widehat{\sigma }_{gg}}|_{\mathrm{cos}\mathrm{\Theta }=\pm 1}=\frac{22}{81}$$ (7) rather than approaching a common value for $`\mathrm{cos}\mathrm{\Theta }=\pm 1`$, and, consequently, the factorization as outlined above is bound to fail. However, one also notices that all spin-dependent ratios in Fig. 1 are more flattish w.r.t. $`\mathrm{cos}\mathrm{\Theta }`$ than in the unpolarized case, and $`qq^{\prime }/qg=4/11`$ is exact for all values of $`\mathrm{cos}\mathrm{\Theta }`$. It turns out that by approximating all ratios by $`4/11`$ and introducing $$\mathrm{\Delta }f_{\mathrm{eff}}^{(p,\gamma )}\equiv \sum _q[\mathrm{\Delta }q^{(p,\gamma )}+\mathrm{\Delta }\overline{q}^{(p,\gamma )}]+\frac{11}{4}\mathrm{\Delta }g^{(p,\gamma )},$$ (8) the effective parton density approximation works remarkably well also in this case, and (2) factorizes, e.g., for the resolved contribution, schematically into $$d\mathrm{\Delta }\sigma ^{2\mathrm{jet}}\propto \mathrm{\Delta }f_{\mathrm{eff}}^\gamma \otimes \mathrm{\Delta }f_{\mathrm{eff}}^p\otimes d\mathrm{\Delta }\widehat{\sigma }_{qq^{\prime }\to qq^{\prime }}.$$ (9) For all relevant purposes the approximated and the exact polarized LO di-jet cross sections agree within $`5\%`$, even better than what is achieved in the unpolarized case.
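In code, the effective combinations of Eqs. (6) and (8) are a one-liner each; the sketch below is ours, taking per-flavour quark and antiquark densities evaluated at fixed $`(x,\mu _f^2)`$:

```python
def f_eff(quarks, antiquarks, gluon, polarized=False):
    """Effective density: sum of quark and antiquark densities plus the gluon
    weighted by 9/4 (unpolarized, Eq. (6)) or 11/4 (polarized, Eq. (8))."""
    weight = 11.0 / 4.0 if polarized else 9.0 / 4.0
    return sum(quarks) + sum(antiquarks) + weight * gluon
```

With these combinations, the approximated di-jet cross section of Eq. (9) reduces to a single convolution with one subprocess cross section.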
Figure 2 shows the polarized effective photon density according to (8) for the two extreme scenarios specified above at a scale relevant for the production of jets with $`p_T`$ values of about $`5-10\,\mathrm{GeV}`$.

## 3 Results and Conclusions

Figure 3 shows the experimentally relevant di-jet spin asymmetry $`A^{2\mathrm{jet}}\equiv d\mathrm{\Delta }\sigma /d\sigma `$, for three different bins in $`x_\gamma `$, using similar cuts as in: the difference of the jet pseudorapidities is required to be $`|\mathrm{\Delta }\eta ^{\mathrm{jets}}|<1`$, for the average rapidity we demand $`0<(\eta _1+\eta _2)/2<1`$, and $`0.2<y<0.83`$. The factorization scale $`\mu _F`$ in (2) was chosen to be equal to $`p_T`$, but the asymmetry is largely independent of that choice. Very recently, the complete NLO QCD corrections to polarized jet-(photo)production have become available. They lead to an improved scale dependence of the cross sections. Moderate NLO corrections for the asymmetry were found for the single-inclusive case; similar results should be expected also for di-jet production. As can be inferred from Fig. 3, the effective parton density approximation works very well. It is only for $`0.4\le x_\gamma \le 0.75`$ and large $`p_T`$ that the deviations from the exact results become more pronounced. Also shown in Fig. 3 is the expected statistical accuracy for such measurements, assuming three bins in $`p_T`$ for each $`x_\gamma `$ bin, an integrated luminosity of $`200\,\mathrm{pb}^{-1}`$, and $`70\%`$ beam polarizations. Given these error bars the prospects for distinguishing between different scenarios for $`\mathrm{\Delta }f_{\mathrm{eff}}^\gamma `$ are rather promising, provided the proton densities $`\mathrm{\Delta }f_{\mathrm{eff}}^p`$, also entering (9), are known fairly well, which is clearly not the case yet. However, our ignorance of the $`\mathrm{\Delta }f^p`$ will be vastly reduced by the upcoming polarized $`pp`$ collider RHIC and ongoing efforts in the fixed-target sector by HERMES and (soon) by COMPASS. It should be kept in mind that so far nothing at all is known about the $`\mathrm{\Delta }f^\gamma `$, and even to establish the very existence of a resolved component also in the spin-dependent case would be an important step forward.
no-problem/9907/astro-ph9907064.html
ar5iv
text
# The HST view of FR I radio galaxies: evidence for non-thermal nuclear sources

Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555 and by STScI grant GO-3594.01-91A

## 1 Introduction

Optical studies of radio galaxies are central for the understanding of the physics of their nuclei, by making it possible to investigate the relationship between the environment/host galaxy and the occurrence of activity (e.g. the role of merging in the evolution of the nuclear fueling, Colina & De Juan 1995), the formation of jets, and the ‘schemes’ unifying different classes of Active Galactic Nuclei (AGN), in particular radio galaxies with objects dominated by relativistically beamed emission (blazars) (e.g. Urry & Padovani 1995). The original classification of radio galaxies by Fanaroff & Riley (1974) is based on a morphological criterion, i.e. edge-darkened (FR I) vs edge-brightened (FR II) radio structure. It was later discovered that this dichotomy corresponds to a (continuous) transition in total radio luminosity (at 178 MHz) which formally occurs at $`L_{178}=2\times 10^{26}`$ W Hz<sup>-1</sup>. In this paper we focus on the properties of FR I radio galaxies, which represent the lower power objects. From the optical point of view FR I are associated with elliptical galaxies, are generally found in regions of galaxy density higher than those of powerful radio sources (Zirbel 1997) and often show signs of interactions (Gonzales-Serrano et al. 1993). Their optical spectra are dominated by starlight, with no evidence for a continuum component directly related to the active nucleus. In general faint narrow lines are present, while no broad lines are detected (Morganti et al. 1992, Zirbel & Baum 1995). Significant progress in the understanding of the inner structure of FR I has been obtained thanks to HST observations. Most importantly, they revealed the presence of dusty or gaseous kpc-scale disks in the nuclear regions of several FR I radio galaxies (Ford et al. 1994; Jaffe et al. 1993; De Koff et al. 1995; Van der Marel & Van den Bosch 1998). The study of the dynamics of these disks provides one of the strongest pieces of evidence to date of the presence of supermassive black holes associated with the activity in galactic nuclei, with masses reaching $`10^9M_{\odot }`$ (Harms et al. 1994, Macchetto et al. 1997, Ferrarese et al. 1996, Van der Marel & Van den Bosch 1998, Bower et al. 1998). A further, newly discovered feature in FR I, on which we will concentrate, is the presence of faint nuclear optical components, which might represent the (as yet) elusive emission associated with the AGN.
The possibility of directly detecting this component in the optical band, thanks to the HST capabilities, has been explored by Capetti & Celotti (1999). They studied five radio galaxies whose extended nuclear discs can be used as indicators of the radio source orientation. The ratio of the nuclear luminosities of FR I and BL Lacs with similar extended properties shows a suggestive correlation with the orientation of the radio galaxies. This behavior is quantitatively consistent with a scenario in which the FR I emission is also dominated by the beamed radiation from a relativistic jet. Further, independent support for this interpretation comes from the rapid variability of the central source of M 87, the only object for which multi-epoch HST data were available (Tsvetanov et al. 1998). The role of obscuration in low luminosity radio galaxies is also still a matter of debate. Within the unification scheme for Seyfert galaxies, it is believed that in type 2 objects the Broad Line Region (BLR) and the nuclear continuum source are hidden by an absorbing, edge-on torus, while in Seyfert 1 our line of sight is within absorption-free visibility cones (Antonucci & Miller 1985). Similarly, a combination of obscuration and beaming is essential for the unification of powerful radio sources (Barthel 1989). However, although circumnuclear tori appear to be commonly associated with active galactic nuclei, there is as yet no evidence in favour of nuclear obscuring material in FR I (and this is not required by the FR I / BL Lac unified scheme; Urry & Padovani 1995). A search for H<sub>2</sub>O megamasers (which have been successfully used to probe the dense molecular gas associated with the torus in Seyfert galaxies; Miyoshi et al. 1995, Braatz et al. 1996) in a sample of FR I galaxies gave negative results (Henkel et al. 1998). In order to investigate these issues further and more thoroughly, we consider a complete sample of 33 3CR radio galaxies, morphologically identified as FR I sources. For 32 of these, HST/WFPC2 images are available in the public archive. Most of the images were taken as part of the HST snapshot survey of the 3C radio source counterparts, and are already presented by Martel et al. (1998) (objects with $`z<0.1`$) and by De Koff et al. (1996) (objects with $`0.1<z<0.5`$). Here we specifically focus on the origin of the unresolved nuclear sources, which we found to be present in most of the objects of the sample and to which we refer as Central Compact Cores (CCC). The selection of the sample is presented and discussed in Sect. 2, while in Sect. 3 we describe the HST observations. In Sect. 4 and Sect. 5 we focus on the detection and origin of the CCC component, respectively, and in Sect. 6 we discuss some of the consequences of our results. Our findings are summarized in the final Sect. 7.

## 2 The sample
Our sample comprises all radio galaxies belonging to the 3CR catalogue (Spinrad et al. 1985) and morphologically identified as FR I radio sources by Laing et al. (1983) and/or Zirbel & Baum (1995) (see Table 1). However, the powerful but simple original morphological FR I/II classification has often proven inadequate, being somewhat subjective and depending sensitively on the quality, resolution and frequency of the available radio maps.
Also, several radio sources show a complex morphology: for example, signatures of FR I structures (such as extended plumes and tails) can be detected together with typical characteristics of FR II sources (narrow jets and hot spots) in sources which might represent transition FR I/II objects (see e.g. Parma et al. 1987, Capetti et al. 1995). Furthermore, even among the edge-darkened radio galaxies, a large variety of structures is present, including wide and narrow angle tails, fat doubles and twin jet sources. Although all these objects are classified as FR I and share the common characteristic of a low total radio luminosity, it is far from obvious that they represent a well defined class. Therefore, in order to establish possible differences among the optical properties of the various subclasses of FR I galaxies, and also to directly re-assess their radio morphology against erroneous or doubtful identifications, we searched the literature for recent radio maps of each object of our sample. The radio structure of at least four sources is peculiar, there are several transition FR I/II objects, and each of the FR I morphological 'types' described above is represented in the sample. In view of this ambiguity of the simple morphological classification, in the following we will also consider separately objects below and above a total radio luminosity of $`L_{178}=2\times 10^{26}`$ W Hz<sup>-1</sup>, i.e. the fiducial radio power separating FR I and FR II. Two thirds of the sources of our sample lie below this value. Having excluded only 3C 231 (M 82) from the original list, as it is in fact a well known starburst galaxy, the remaining radio galaxies constitute a complete, flux limited sample of 33 FR I sources. In Table 1 we report redshifts, radio fluxes and total luminosities as taken from the literature: redshifts span the range $`z=0.0037`$–$`0.29`$, with a median value of $`z=0.03`$, and total radio luminosities at 178 MHz are between $`10^{23.7}`$ and 10<sup>28.1</sup> W Hz<sup>-1</sup> ($`H_0=75`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=0.5`$ are adopted hereafter).

## 3 HST observations
HST observations are available in the public archive (up to July 1998) for 32 out of the 33 sources (only 3C 76.1 has not been observed). The HST images were taken using the Wide Field and Planetary Camera 2 (WFPC2). The pixel size of the Planetary Camera, in which the target is always located, is 0<sup>′′</sup>.0455, and the 800 $`\times `$ 800 pixels cover a field of view of $`36^{\prime \prime }\times 36^{\prime \prime }`$. The whole sample was observed using the F702W filter as part of the HST snapshot survey of 3C radio galaxies (Martel et al. 1998, De Koff et al. 1996). For about half of the sources, additional archival images taken through narrow and broad filters are also available. The HST observation log is reported in Table 2. The data have been processed through the PODPS (Post Observation Data Processing System) pipeline for bias removal and flat fielding (Biretta et al. 1995). Individual exposures in each filter were combined to remove cosmic ray events. In Figs. 1, 2 we present the final broad band images of the innermost regions (1.5<sup>′′</sup> – 6<sup>′′</sup>) of our 32 radio galaxies. The most interesting feature, present in the great majority of them, is indeed an unresolved central source.
## 4 The Central Compact Cores
### 4.1 Identification of CCCs
While it is straightforward to identify unresolved sources when they are isolated, the situation is more complex when they are located at the center of a galaxy, superposed on a brightness distribution with large gradients, whose behaviour in the innermost regions cannot be extrapolated from its large scale structure with any degree of confidence. We therefore adopted a simple operative approach: we derived the radial brightness profiles of the nuclear regions of all galaxies using the IRAF RADPROF task and measured the FWHM, setting the background level at the intensity measured at a distance of ∼5 pixels (∼0.23<sup>′′</sup>) from the center. In 22 cases the measured FWHM is in the range 0.05<sup>′′</sup> – 0.08<sup>′′</sup>, i.e. indicative of the presence of an unresolved source at the HST resolution. In 5 cases, namely 3C 28, 3C 89, 3C 314.1, 3C 424 and 3C 438, the fitting procedure yields a FWHM larger than $`0.15^{\prime \prime }`$. The behaviour of these sources is radically different from those in which a CCC is detected, and therefore we believe that, even with this operative definition, no ambiguity exists on whether or not a central unresolved source is present. The remaining 5 sources have complex nuclear morphologies. The central regions of 3C 75, 3C 293, 3C 305 and 3C 433 are covered by dust lanes. Bright compact knots are seen, but they are completely resolved and offset from the center of the galaxy. Conversely, 3C 315 has a peculiar, highly elongated structure, contrasting with the typical roundness of FR I host galaxies, and no central source is seen.
### 4.2 CCC photometry
The F702W transmission curve covers the wavelength range 5900 – 8200 Å and thus, within our redshift range, includes the H$`\alpha `$ and [N II] emission lines. To estimate the continuum emission of the CCC we therefore preferred, when possible, to use images obtained with the F814W or F791W filters, which cover relatively line-free spectral regions up to a redshift of 0.075. We performed aperture photometry of the 22 CCCs, adopting the internal WFPC2 flux calibration, which is accurate to better than 5 per cent. However, the dominant photometric error is the determination of the background in regions of varying absorption and steep brightness gradients, especially for the faintest CCCs, resulting in a typical error of 10 to 20 per cent. Narrow-band images were used, where available, to remove the line contamination from the F702W images. We found that the line contribution is typically 5 to 40 per cent of the total flux measured in the F702W filter; therefore even the uncorrected fluxes are (for our purposes) reliable estimates of the continuum level. Optical fluxes of the CCCs are given in Table 3. HST images in two broad filters are available for 9 galaxies. However, only in 5 cases (3C 78, 3C 84, 3C 264, 3C 272.1, 3C 274) is the accuracy of the photometry sufficient to deduce reliable estimates of the optical slope. For the last three objects, data were taken simultaneously in the two bands, which avoids uncertainties due to possible variability. Galactic extinction is significant (and we corrected for it) only in the case of 3C 84, for which $`A_B=0.7`$. In 3C 272.1 we estimated, by comparing the F547M and the F814W images, that the extended nuclear dust lane produces an absorption of 1 – 2 mag in the V band. The derived spectral indices (corrected for reddening) are in the range $`\alpha _o=0.7`$–$`1.3`$ ($`F_\nu \propto \nu ^{-\alpha }`$).
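Returning to the identification criterion of Sect. 4.1: the profile-based test is easy to emulate outside IRAF. The following sketch is ours, not the survey pipeline; it assumes a centred cutout, azimuthal averaging in integer-radius annuli, and the WFPC2/PC pixel scale quoted above.

```python
import numpy as np

def radial_fwhm(img, cx, cy, pix=0.0455, r_bkg=5):
    """Azimuthally averaged radial profile and its FWHM in arcsec.

    img   : 2D image array with the nucleus near (cx, cy)
    pix   : PC pixel scale in arcsec (0.0455" for WFPC2/PC)
    r_bkg : radius (pixels) at which the local background is taken (~0.23")
    """
    y, x = np.indices(img.shape)
    rbin = np.hypot(x - cx, y - cy).astype(int)
    # mean intensity in integer-radius annuli
    prof = np.bincount(rbin.ravel(), img.ravel()) / np.bincount(rbin.ravel())
    prof = prof - prof[r_bkg]                # subtract background at ~5 px
    half = 0.5 * prof[0]
    i = np.argmax(prof < half)               # first annulus below half maximum
    frac = (prof[i - 1] - half) / (prof[i - 1] - prof[i])
    return 2.0 * (i - 1 + frac) * pix        # linearly interpolated FWHM, arcsec
```

A result in the 0.05<sup>′′</sup> – 0.08<sup>′′</sup> range would flag an unresolved core under the criterion above, while values near or above 0.15<sup>′′</sup> would not.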
For the galaxies showing only diffuse emission, we set as upper limits the light excess of the central 3×3 pixels with respect to the surrounding galaxy background. For the complex sources no photometry was performed.

## 5 Origin of the Central Compact Cores
As already pointed out in the introduction, although the presence of nuclear optical emission associated with the active nuclei of FR I radio galaxies is not a surprising feature, it is important to investigate its origin. With this aim we now explore possible (cor-)relations between the CCC flux/luminosity and other observed properties. No trend is found between CCC luminosities and total radio power or absolute visual magnitude of the host galaxy. Conversely, the CCC emission bears a clear connection with the radio core. In Fig. 3 we plot the optical flux of the CCCs $`F_o`$ versus the core radio flux $`F_r`$ (at 5 GHz): a clear trend is visible. In order to quantify this relation, let us consider separately the low and high luminosity sub-samples. The 18 CCCs associated with the low luminosity sources show a tight correlation between $`F_o`$ and $`F_r`$. Only one point, representing 3C 386, is well separated from the others. We performed a non-weighted least squares fit (excluding 3C 386): the correlation coefficient is $`r=0.88`$, which gives a probability that the points are drawn from a random distribution of $`P=3.1\times 10^{-6}`$. The dotted lines in Fig. 4 are the fits to the data using each of the two fluxes as independent variable. The best fit is represented by the bisector of these two regressions (dashed line) and has a slope of $`0.95\pm 0.10`$. The statistical parameters of the fits are reported in Table 4. (We also performed the linear fit using a weighted chi-square method, adopting ten per cent errors on both variables, and a "robust" least absolute deviation method; the results, both in terms of probability and of linear fit parameters, are fully consistent, with even smaller errors, with those reported above.) A similarly strong correlation ($`P=6.0\times 10^{-6}`$) is also present between radio core and CCC luminosities (Fig. 5). The fact that the correlation is found both in flux and in luminosity gives us confidence that it is not induced by either selection effects or a common redshift dependence. Moreover, the 3CR sample has been selected according to the total radio flux at low frequency, which is only weakly correlated with the core properties entering the correlations (e.g. Giovannini et al. 1988). Let us then consider the case of 3C 386, which we excluded from the statistical analysis. While all points are closely clustered around the linear fit, with a dispersion of only ∼0.4 dex, 3C 386 deviates by 3 orders of magnitude, clearly indicating that its optical/radio properties differ from the rest of the sample. This suggestion is strengthened by the presence of a further peculiarity in the optical band of this object, i.e. the detection of a broad H$`\alpha `$ line (Simpson et al. 1996), atypical of FR I radio galaxies. In any case, although the inclusion of 3C 386 in the analysis decreases the statistical significance of the correlation to $`P=2.6\times 10^{-3}`$, it would not affect our conclusions. Turning now to the high luminosity sub-sample, five out of the nine points are upper limits, since no CCC is detected. This prevents us from performing a meaningful statistical analysis.
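The "bisector of the two regressions" used above can be reproduced in a few lines; the sketch below is illustrative only, with synthetic fluxes and 0.4 dex scatter standing in for the Table 4 data.

```python
import numpy as np

def bisector_slope(logx, logy):
    """OLS(y|x), OLS(x|y), and the bisector of the two regression lines."""
    b1 = np.polyfit(logx, logy, 1)[0]          # y-on-x slope
    b2 = 1.0 / np.polyfit(logy, logx, 1)[0]    # x-on-y, expressed as y vs x
    # bisector: the line whose angle halves the two regression angles
    theta = 0.5 * (np.arctan(b1) + np.arctan(b2))
    return b1, b2, np.tan(theta)

# toy example: a slope-one relation with ~0.4 dex scatter, as in the text
rng = np.random.default_rng(1)
logFr = rng.uniform(-17.0, -13.0, 18)          # hypothetical log radio fluxes
logFo = logFr + rng.normal(0.0, 0.4, 18)       # linear relation plus scatter
b1, b2, bis = bisector_slope(logFr, logFo)
r = np.corrcoef(logFr, logFo)[0, 1]
print(f"r = {r:.2f}; slopes: y|x = {b1:.2f}, x|y = {b2:.2f}, bisector = {bis:.2f}")
```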
Note, however, that the four detected CCC fluxes of this sub-sample lie along the same correlation defined by the lower luminosity objects (see Fig. 5). We conclude that a striking linear correlation is present between the radio core and CCC fluxes (and luminosities): it extends over four orders of magnitude, has an extremely high statistical significance, a small dispersion and a slope consistent with unity. Since the radio core emission certainly originates as synchrotron radiation, this tight link strongly suggests that the optical CCC emission is also produced by the same non-thermal process. Independent support for this hypothesis comes from the spectral information relative to the CCCs, which can be compared with that of sources where synchrotron emission dominates in the optical as well as in the radio band (e.g. blazars, optical jets). First of all, we find that the radio–optical spectral indices of the CCCs of our radio galaxies span the range $`\alpha _{ro}\simeq 0.6`$–$`0.9`$, similar to those of optical jets (e.g. Sparks et al. 1994) and at the upper end of the spectral indices of blazars, for which $`\alpha _{ro}\simeq 0.2`$–$`0.9`$ (e.g. Fossati et al. 1998). Furthermore, the CCC optical spectral indices (determined, however, for only five sources) are in the range $`\alpha _o\simeq 0.7`$–$`1.3`$, also typical of the synchrotron emitting sources mentioned above.
### 5.1 Alternative explanations
Even though synchrotron radiation provides a rather convincing interpretation of the nature of CCCs, in the following we consider and discuss other emission processes usually occurring in the nuclei of active galaxies. We should, however, already note that none of these mechanisms seems to naturally account for the linear correlation between the radio and optical CCC luminosities, and even less for its small dispersion, as radio cores are strongly affected by relativistic beaming while in all the alternative scenarios the optical emission is essentially quasi-isotropic. • Nuclear cusps or star clusters: recent work has shown that stellar concentrations are often present in the nuclear regions of elliptical galaxies (e.g. Lauer et al. 1995). However, the CCC optical spectral slopes are not compatible with the 'red' colors typical of old stellar populations. • Nuclear starburst: as CCCs are observed in the great majority of the galaxies of our sample, star formation should be maintained continuously for a timescale comparable to the lifetime of the radio sources, i.e. $`10^{7-8}`$ yr (Parma et al. 1999). Although some ad hoc mechanism regulating the star formation rate might exist, this possibility appears implausible. • Accretion discs: the optical spectral information also contrasts with the emission expected, at least in the simplest hypothesis of a Shakura-Sunyaev geometrically thin, optically thick disk, which generally predicts a harder spectral slope ($`\alpha \lesssim 0.3`$). A softer index is expected only at higher frequencies (UV–soft X; e.g. Szuszkiewicz et al. 1996). No observational constraints are provided by more complex disc models (e.g. the recently proposed case of low density, radiatively inefficient accretion flows, ADAF; Rees et al. 1982; Narayan & Yi 1995; Chen et al. 1997), as they can reproduce widely different spectral slopes in the optical band. • Emission line region: as already pointed out, although line emission contributes from 5 to 40 per cent, continuum emission provides the bulk of the total CCC flux.
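For reference, the two-point radio–optical index used earlier in this section follows directly from the measured fluxes. In the sketch below the frequency choices (5 GHz for the radio core, roughly the R band for the CCC) and the flux values are our placeholder assumptions, not table entries from this paper.

```python
import numpy as np

def two_point_alpha(f1, nu1, f2, nu2):
    """Spectral index alpha with the F_nu ~ nu**(-alpha) convention."""
    return -np.log10(f2 / f1) / np.log10(nu2 / nu1)

nu_r, nu_o = 5.0e9, 4.3e14        # 5 GHz core, ~R-band CCC (assumed)
f_r, f_o = 1.0e-25, 1.0e-28       # hypothetical fluxes, erg/s/cm^2/Hz
print(f"alpha_ro = {two_point_alpha(f_r, nu_r, f_o, nu_o):.2f}")
# with these placeholders alpha_ro ~ 0.6, inside the range quoted above
```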
We conclude from this analysis that there is no other likely explanation for the origin of CCCs except non-thermal synchrotron emission.

## 6 Discussion
### 6.1 Implications for unified models
For the first time, thanks to the high spatial resolution of HST, it has been possible to separate the contribution of the host galaxy from the genuine nuclear emission in FR I radio galaxies, which manifests itself as a Central Compact Core. CCCs appear to be associated with non-thermal synchrotron emission. Three pieces of evidence also suggest that the CCC radiation is anisotropic due to relativistic beaming: 1. Capetti & Celotti (1999) studied five radio galaxies in which HST images revealed the presence of extended nuclear discs, which appear to be useful indicators of the radio source orientation. The FR I CCC luminosity shows a suggestive correlation with the orientation of the radio galaxies with respect to the line of sight; 2. Sparks et al. (1995) argued that jets are detected in the optical band only when pointing towards the observer. This would explain why jets with optical counterparts are smaller (they are foreshortened), brighter and one-sided (they are relativistically beamed) with respect to typical radio jets. Five sources of our sample (namely 3C 66B, 3C 78, 3C 264, 3C 274 and 3C 346) indeed show optical jets, and these very same objects clearly stand out as being among the brightest CCCs of their respective subsamples (see Fig. 6, where we report the CCC luminosity versus the total radio power). Sources with optical jets are also the only FR I galaxies detected in UV HST observations (Zirbel & Baum 1998); 3. as noted above, the radio core emission is certainly beamed, and the corresponding orientation dependence is reflected in the large spread found when comparing the core to the extended (quasi-isotropic) radio luminosities (Giovannini et al. 1988). Therefore, if the optical emission were isotropic, the $`F_o`$ vs $`F_r`$ correlation would show at least a similarly large scatter. The beamed synchrotron scenario which emerges for the origin of the CCCs is therefore strong evidence in favour of the idea that FR I are the misoriented counterparts of BL Lacs. The large spread (3 orders of magnitude) in CCC luminosity among objects of similar extended properties (see Fig. 6) can be ascribed to different orientations. An extensive and quantitative analysis of this issue, and of the relationship between FR I and BL Lacs in the light of these results, will be presented in a forthcoming paper (Chiaberge et al., in prep).
### 6.2 Are there obscuring tori in FR I?
The observation of optical synchrotron emission has important consequences for the role and geometry of absorbing structures in the nuclear regions of such objects. A very important result of our analysis is in fact the high fraction of galaxies in which the central source has been detected. Limiting ourselves, for the moment, to the low luminosity subsample, we found a CCC in 85% of the objects. The sources in which we do not detect one (namely 3C 75, 3C 293 and 3C 305) are three of the four cases in which the center of the galaxy is affected by obscuration from a large scale dust structure (the fourth galaxy is 3C 272.1, whose CCC, although reddened, shines through the dust lane). In order to estimate the optical depth of the obscuring material in these sources, we derived their expected optical flux from the CCC vs radio core flux correlation.
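That estimate amounts to converting a flux deficit into an extinction and then into a column density. A minimal sketch, assuming placeholder fluxes and the standard Galactic gas-to-dust ratio N_H/A_V ≈ 1.9×10<sup>21</sup> cm<sup>-2</sup> mag<sup>-1</sup> (neither number is taken from this paper):

```python
import numpy as np

NH_PER_AV = 1.9e21   # cm^-2 per mag, standard Galactic gas-to-dust ratio

def hiding_extinction(f_predicted, f_upper_limit):
    """Extinction (mag) needed to push a predicted flux below an upper limit,
    and the corresponding hydrogen column density (cm^-2)."""
    a_v = 2.5 * np.log10(f_predicted / f_upper_limit)
    return a_v, a_v * NH_PER_AV

# placeholder fluxes: prediction from the F_o-F_r correlation vs. measured limit
a_v, n_h = hiding_extinction(f_predicted=2.5e-16, f_upper_limit=1.0e-18)
print(f"A_V ~ {a_v:.1f} mag  ->  N_H ~ {n_h:.1e} cm^-2")
```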
We found that an extinction of only a few magnitudes ($`A_V\lesssim 6`$ mag, corresponding to a column density of $`N_H\lesssim 1.2\times 10^{22}`$ cm<sup>-2</sup>) is sufficient to hide the CCC optical emission. Therefore, even in these three cases the presence of an optically (Thomson) thick structure is not required (although it cannot be ruled out), and the CCC emission might simply be obscured by the foreground dust. Infrared observations can address this issue. The behaviour of the 11 sources with higher total radio luminosity is probably different and certainly more complex: – four of them have CCC fluxes and luminosities well consistent with the correlations found for the lower power FR I, and would simply lie on the high luminosity part of the radio–optical correlation (see Fig. 5); – two of them show complex (possibly absorbed) nuclei; – in the last five objects no unresolved component is detected (upper limits in Figs. 3, 6, 5; see also §4). The upper limits derived could be in general agreement with the $`F_o`$ vs $`F_r`$ CCC correlation, except possibly in the case of 3C 89 (the lowest point in Fig. 3). The smaller number of CCCs found could simply be due to the fact that this subsample is on average at higher redshift. Clearly, we cannot exclude the alternative possibility, i.e. that these sources are indeed obscured, which might indicate that the degree of obscuration increases with the source power (a crucial test here is the inclusion in the sample of high luminosity FR II radio galaxies; Chiaberge et al., in prep). The detection of CCCs indicates that we have a direct view of the innermost nuclear regions of FR I. Limits on the extension of the CCC are implied by the variability of the nucleus of M 87 on time scales of two months (Tsvetanov et al. 1998). Furthermore, since the optical emission in relativistic jets is likely to be produced co-spatially with, or even closer to the black hole than, the radio emission, we can infer a further constraint on the CCC extension from VLBI observations. These in fact show that most of the radio emission comes from a source unresolved at mas resolution, which can be as small as 0.01 pc, i.e. only ∼100 Schwarzschild radii for a ∼10<sup>9</sup> M<sub>⊙</sub> black hole (e.g. Junor & Biretta 1995). It therefore appears that a "standard", pc-scale, geometrically thick torus is not present in low luminosity radio galaxies. Any absorbing material must be distributed in a geometrically thin structure (our CCC detection rate implies a thickness-over-size ratio $`\lesssim 0.15`$), or thick tori are present only in a minority of FR I. This result is particularly intriguing since dusty nuclear discs on kpc scales have been discovered in several FR I, and they are indeed geometrically thin (Jaffe et al. 1996). In this sense, the lack of broad emission lines in FR I cannot be accounted for by obscuration.
### 6.3 Limits on thermal disc emission and AGN efficiency
Except for blazars, whose overall spectral energy distributions are almost always dominated by the non-thermal emission from a relativistic jet, the nuclear emission of AGNs from the optical to the soft X-ray band is generally interpreted as thermal emission from accreting material. Conversely, we find that in FR I a non-thermal synchrotron component dominates the emission in all sources.
In fact, an isotropic optical component sufficiently bright would produce a flattening in the radio/optical CCC correlation, due to the presence of an additional source of optical flux (with no radio counterpart), which we do not observe (see Figs. 4 and 5). This is surprising if one considers that, in a complete sample, a substantial fraction of the objects are observed at large angles from the line of sight, so that the emission from their jets is expected to be strongly de-beamed, favouring the detection of any isotropic (disc) emission. The CCC fluxes thus set upper limits on the disc emission, which in turn imply an extremely low radiative efficiency for the accretion process. Note, in fact, that the observed CCC emission corresponds to $`\lesssim 10^{-7}`$–$`10^{-5}`$ of the Eddington luminosity of a $`10^9M_{\odot}`$ black hole, which appears to be typical for these radio galaxies. While these values argue against the presence of a radiatively efficient accretion phase, they are still compatible with the expected radiative cooling rate of low density, high temperature accreting plasma in which the electron–ion coupling is ineffective and most of the thermal energy is thus advected inwards rather than radiated (Rees et al. 1982; also ADAF, e.g. Narayan & Yi 1995, Chen et al. 1997; and ADIOS, Blandford & Begelman 1999). This latter possibility has indeed been proposed to account for the paucity of emission in radio galaxies harbouring supermassive black holes (Fabian & Rees 1995). The HST observations of CCCs set consistent but independent constraints on the optical emission in these systems. This low efficiency in producing thermal emission might also account for the lack of broad lines in FR I spectra, which could be attributed simply to the lack of those ionizing photons which, in the other classes of active nuclei, illuminate the dense clouds forming the Broad Line Region. And indeed Zirbel & Baum (1995) have suggested the possibility that FR I sources produce far less UV radiation than FR II, on the basis of a comparison between the emission line and radio luminosities of radio galaxies. Intriguingly, the only object in which a broad line has been detected is 3C 386, whose CCC presents a much larger optical luminosity than sources with similar radio core power, possibly indicative of a thermal contribution.

## 7 Summary and conclusions
HST images of a complete sample of 33 FR I radio galaxies belonging to the 3CR catalogue have revealed that an unresolved nuclear source (Central Compact Core, CCC) is present in the great majority of these objects. The CCC emission is found to be strongly connected with the radio core emission and anisotropic. We propose that the CCC emission can be identified with optical synchrotron radiation produced in the inner regions of a relativistic jet. Support for this possibility comes also from the spectral information. These results are qualitatively consistent with the unifying model in which FR I radio galaxies are misoriented BL Lac objects. The identification of the CCC radiation with misoriented BL Lac emission opens the possibility of studying this class of AGNs from a different line of sight. This can be particularly useful in understanding the jet structure and the level of the activity occurring near the central object, whose emission in blazars is swamped by the highly beamed component.
Further information on the nature of the CCCs can be inferred from simultaneous studies of radio core and CCC optical variability, which could establish whether their emission is indeed produced in the same region. The detection of CCCs indicates that we have a direct view of the innermost regions of the AGN ($`\lesssim 100R_S`$). If we restrict the analysis to objects with a total radio power $`<2\times 10^{26}`$ W Hz<sup>-1</sup>, a CCC is found in all galaxies except three, where absorption from extended dust structures clearly plays a role. This casts serious doubts on the presence of obscuring thick tori in FR I as a whole. Given the dominance of non-thermal emission, the CCC luminosity represents a firm upper limit to any thermal component, which translates into an optical luminosity of only $`\lesssim 10^{-7}`$–$`10^{-5}`$ times the Eddington one (for a $`10^9M_{\odot}`$ black hole). This limit on the radiative output of accreting matter is independent of, but consistent with, those inferred in X-rays for large elliptical galaxies, thus suggesting that accretion might take place in a low radiative-efficiency regime (Fabian & Rees 1995). The picture which emerges is that the innermost structure of FR I radio galaxies differs in many crucial aspects from that of the other classes of AGN: they lack the substantial BLR, tori and thermal disc emission which are usually associated with active nuclei. Similar studies of higher luminosity radio galaxies will clearly be crucial to determine whether a continuity between low and high luminosity sources exists or, alternatively, whether they represent substantially different manifestations of the accretion process onto a supermassive black hole.
###### Acknowledgements. The authors thank G. Bodo and E. Trussoni for useful comments on the manuscript and acknowledge the Italian MURST for financial support. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
no-problem/9907/quant-ph9907101.html
ar5iv
text
# Expanding Hermitean Operators in a Basis of Projectors on Coherent Spin States ## Acknowledgements The author acknowledges financial support by the Schweizerische Nationalfonds.
no-problem/9907/physics9907005.html
ar5iv
text
# Quantum superluminal communication must exist ## Abstract We show that quantum superluminal communication based on the quantum nonlocal influence must exist if a basic principle of science remains valid, namely: if we have demonstrated the existence of something real, we can confirm its existence. Having shown that quantum superluminal communication, if it exists, does not result in a causal loop, we will further demonstrate that quantum superluminal communication must exist, owing to the validity of this basic principle of science. First, as we have demonstrated, the existence of the quantum nonlocal influence requires that there exist a preferred Lorentz frame for consistently describing the quantum nonlocal process; otherwise we meet a logical contradiction in our explanation of the experimental results concerning such a process. This conclusion is independent of the existence of the reality behind the process. Then, according to the basic principle of science stated above, since we have demonstrated the existence of the preferred Lorentz frame, we can find it. This principle is the only assumption in our demonstrations, and one impressive example of its validity is the existence of the atom. Finally, we demonstrate that in order to confirm the existence of the preferred Lorentz frame, quantum superluminal signalling or communication must exist. As we have analyzed, in order to respect special relativity, the existence of the preferred Lorentz frame, which is required only by the quantum nonlocal process, will influence only the time order of the correlated quantum nonlocal events, as in a Bell experiment: in the preferred Lorentz frame the correlated quantum nonlocal events are simultaneous, while in other inertial frames they are not. Thus, in order to confirm or find the existence of the preferred Lorentz frame, we need to identify the time order of the correlated quantum nonlocal events in different inertial frames. For a Bell experiment, this requires that we can distinguish the change of the measured single state due to the quantum nonlocal influence in quantum measurement, in order to identify the time order of the change. This further means that we can distinguish nonorthogonal single states, such as $`\psi _1+\psi _2`$ and $`\psi _1`$, which could easily be used to achieve quantum superluminal communication. Thus we demonstrate that, in order to confirm the existence of the preferred Lorentz frame, quantum superluminal communication must exist.
no-problem/9907/astro-ph9907412.html
ar5iv
text
# Ionospheric corrections via PIM and real-time data

## 1 Introduction
The Earth's ionosphere imparts additional phase/group delay ($`\tau _{\mathrm{iono}}`$), Faraday rotation ($`\psi _{\mathrm{iono}}`$), refraction, and absorption to cosmic radio signals propagating through it. These ionospheric effects arise because of the dispersive index of refraction for a cold, weak plasma (and the presence of Earth's magnetic field in the case of Faraday rotation). For $`\nu \gg 10`$ MHz, we can safely ignore absorption (see, e.g., Davies 1990, §7.4.1), and will do so henceforth. We can then write the collision-free Appleton-Hartree index of refraction:
$$\mu _p^2=1-\frac{2X(1-X)}{2(1-X)-Y_{\perp }^2\pm \left[Y_{\perp }^4+4(1-X)^2Y_{\parallel }^2\right]^{1/2}},$$ (1)
where $`X\equiv (e^2/4\pi ^2\epsilon _0m_e)\,n_e/\nu ^2`$, $`Y\equiv (e/2\pi m_e)\,B/\nu `$, and the "$`\perp `$" and "$`\parallel `$" subscripts for $`Y`$ refer to $`B_{\perp }`$ and $`B_{\parallel }`$, the components of the Earth's magnetic field perpendicular and parallel to the direction of propagation. The "$`\pm `$" shows that there are two characteristic modes, giving rise to Faraday rotation for linear polarizations. Equation (1) can be expanded to yield $`\mu _p`$ to the desired order; except for $`\nu `$ well below 50 MHz at times of solar maximum, a first-order expansion suffices. Any ionospheric effect will then be proportional to $`\int f(n_e,B_{\perp },B_{\parallel })\,dl`$, where the integration is along the propagation path. To first order, $`\tau _{\mathrm{iono}}\propto \int n_e\,dl`$ \[$`\equiv `$ TEC\], and $`\psi _{\mathrm{iono}}\propto \int n_eB_{\parallel }\,dl`$. Because ionospheric effects are dispersive, dual-frequency observations are the most straightforward way of correcting for them (again, to first order). However, such a tactic is not conducive to some types of observations (e.g., should a source's spectral index preclude X-band detection). This paper discusses one approach for removing ionospheric contributions from single-frequency data. This approach comprises two components: an ionospheric "climatology" model that derives $`n_e(\stackrel{}{r},t)`$, and the capability to incorporate contemporaneous ionospheric data, if available, to update the modeled $`n_e`$.

## 2 PIM: Parameterized Ionospheric Model
PIM is a theoretical model of ionospheric climatology developed at USAF Phillips Laboratory. It forms the basis for PRISM, a global real-time ionospheric model for operational use at the Air Force Space Forecast Center (see §3). PIM compiles runs of the more general Phillips Lab Global Theoretical Ionospheric Model (GTIM), spanning parameter space in local time, latitude, season, solar activity, geomagnetic activity, and interplanetary magnetic-field direction. GTIM calculates $`n_e`$ by solving ion-balance equations along magnetic flux tubes, taking into account production, loss, and transport processes. Daniell et al. (1995) describes PIM itself; Anderson (1993a) and Anderson et al. (1996) discuss the physics underlying GTIM. These references also discuss the operational (and philosophical) differences between theoretical and empirical climatologies. To illustrate representative ionospheric morphology in PIM, Fig. 1 shows a global map of vertical TEC at 20UT around the vernal equinox and solar maximum. The contours are labeled in units of TECU ($`=10^{16}`$ m<sup>-2</sup>; a rule of thumb: 1 TECU $`\approx \frac{4}{3\nu }`$ cycles of ionospheric delay, for $`\nu `$ in GHz). The thick vertical line represents the longitude of the sun, and the thick dashed line is the magnetic equator.
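Before turning to the climatology itself, note that the first-order relations above reduce to simple engineering constants. The sketch below is ours; it uses the standard dispersive constant 40.3 m<sup>3</sup> s<sup>-2</sup> (a textbook value, not one quoted in this paper) to recover the 4/(3ν) rule of thumb.

```python
C = 2.998e8      # speed of light, m/s
K = 40.3         # standard ionospheric dispersion constant, m^3/s^2

def iono_delay(tec_tecu, freq_hz):
    """First-order excess path (m), group delay (s), and phase cycles."""
    tec = tec_tecu * 1e16                        # electrons/m^2
    path = K * tec / freq_hz**2                  # excess path length, m
    return path, path / C, path * freq_hz / C

path, delay, cycles = iono_delay(tec_tecu=1.0, freq_hz=1.0e9)
print(f"1 TECU at 1 GHz: {path:.2f} m, {delay*1e9:.2f} ns, {cycles:.2f} cycles")
# prints ~0.40 m, ~1.34 ns, ~1.34 cycles, i.e. the 4/(3*nu_GHz) rule of thumb
```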
Returning to Fig. 1: roughly speaking, the sun "pulls" the entire pattern, sliding along the magnetic equator, from east to west. We can see that the predicted horizontal gradients have significant structure in both E–W and N–S directions, especially for lines of sight transecting low-latitude regions. The blobs of anomalously high TEC lying at $`\mathrm{\Phi }\approx \pm 15^{\circ }`$ during the local evening illustrate the interplay between production, loss, and transport processes in the ionosphere. Neutral winds in the early afternoon in equatorial regions tend westward. The differential collisional coupling of the neutrals to electrons and ions generates an eastward horizontal E. Acting with the northward B, also predominantly horizontal near the equator, an $`𝐄\times 𝐁`$ drift pushes electrons up, to regions of lower neutral densities, and hence slower recombination (loss). These blobs of electrons fall back down along lines of B, moving away from the magnetic equator and persisting a few hours longer than they would have otherwise. This process also causes a diurnal variation in the height of maximum electron density ($`h_\mathrm{m}F_2`$), typically in the range 250–400 km. A general weakness of vertical TEC maps like Fig. 1 is that they mask the fact that vertical $`n_e`$ profiles can change significantly, in their shape and in $`h_\mathrm{m}F_2`$, on time-scales corresponding to diurnal, seasonal, and solar-cycle factors. I am working on a variant of PIM (unimaginatively called PIMVLBI), originally motivated by application to VLBI astrometry. The user needs only to provide the coordinates of the radio antennas and sources, and the JD/UT of observations (e.g., input of a Mk III/HOPS export file suffices to pass all necessary information). The program automatically looks up the relevant geophysical/solar parameters from files downloaded from NOAA and GSFC. Internally, PIM computes $`n_e`$ at sample points along the propagation path from the source; PIMVLBI includes IGRF routines to calculate $`B_{\perp }`$ and $`B_{\parallel }`$ at these points. The slant $`\tau _{\mathrm{iono}}`$ and $`\psi _{\mathrm{iono}}`$ can then be integrated from these values. This approach directly takes into account the more realistic horizontal gradients and vertical profiles in PIM, obviating the need for purely geometric slant factors about fixed-height sub-ionospheric points, as are "traditionally" used for converting vertical to slant TEC.

## 3 Real-Time Data
There are, however, processes whose effects PIM does not model: global magnetic storms, traveling ionospheric disturbances, scintillation (see, e.g., Davies 1990). For this reason, inclusion of contemporaneous ionospheric observations should improve the reliability of the modeled $`n_e`$. Much of the data used to constrain PRISM (Anderson 1993b) are available only to the military (as is the PRISM code itself). I have instead incorporated a real-time adjustment (RTA) algorithm devised for range/refraction corrections at an ARPA radar in the Marshall Islands (Daniell et al. 1996). This RTA distinguishes between two types of real-time data: integral and profile. TEC observations from a collocated GPS receiver would be an example of integral data, providing a ratio of GPS/PIM TEC along the line of sight to each observed GPS satellite (SV). In this situation, one may view PIM as a physics-based mapping function for spatial interpolation among the GPS TEC values to the direction of the source. Fig. 2 plots the GPS/PIM TEC ratios observed from the IGS receiver at Goldstone on 9 June 1998.
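The slant integrations PIMVLBI performs internally are straightforward to emulate once $`n_e`$ and $`B_{\parallel }`$ have been sampled along the ray. In the sketch below, a toy Chapman-like layer stands in for the PIM $`n_e`$ grid, the path geometry is a crude stand-in for a real ray trace, and the Faraday constant 2.62×10<sup>-13</sup> (SI units) is the textbook value rather than anything from this paper.

```python
import numpy as np

def slant_integrals(s, n_e, b_par):
    """Slant TEC (m^-2) and rotation measure (rad/m^2) along a sampled ray.

    s     : path length samples, m
    n_e   : electron density at those samples, m^-3
    b_par : line-of-sight magnetic field component, tesla
    """
    tec = np.trapz(n_e, s)
    rm = 2.62e-13 * np.trapz(n_e * b_par, s)   # standard SI constant
    return tec, rm

# toy Chapman-like layer: peak 1e12 m^-3 at 350 km, 60 km scale height
s = np.linspace(0.0, 1.0e6, 2000)              # path samples along the ray, m
z = (s * 0.5 - 3.5e5) / 6.0e4                  # altitude term, ~30 deg elevation
n_e = 1e12 * np.exp(1.0 - z - np.exp(-z))
b_par = np.full_like(s, 3e-5)                  # ~0.3 gauss, taken as constant
tec, rm = slant_integrals(s, n_e, b_par)
print(f"slant TEC = {tec/1e16:.1f} TECU, RM = {rm:.2f} rad/m^2")
```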
For legibility, the 30s-sampled IGS data shown in Fig. 2 have been culled to 10-minute sampling. The error bars stem from the uncertainty in the GPS TEC measurements, for which I use the minimum of the running 30-minute standard deviation in the fit of differential carrier phase and code range for each satellite pass. Ratios for all observed SVs are plotted regardless of their direction. If PIM reproduced all spatial gradients perfectly, each such ratio would equal a single constant. Scatter among the ratios at a specific time therefore implies unmodeled structure in $`(Az,El)`$ space. We can see that there are a couple of individual SVs diverging from the trend between 18–22 LT on that day; these lie at low elevations off to the south-west, sampling the equatorial anomaly. Except for these cases, the GPS/PIM ratio behaves well, especially so from 08–16 LT. Ionosondes would be a representative source of profile data. The RTA currently uses $`h_\mathrm{m}F_2`$ and the maximum density ($`N_\mathrm{m}F_2`$) from the ionosonde profile. Data from ionosondes can neatly complement GPS TEC constraints. One problem with the latter is that SVs may not lie along the lines of sight you would prefer to sample at a given time. In principle, an ionosonde station could be located where ionospheric effects along specific lines of sight from a telescope would be most sensitive to changes in $`h_\mathrm{m}F_2`$ and $`N_\mathrm{m}F_2`$. For example, $`h_\mathrm{m}F_2`$ rises towards 400 km in the equatorial anomaly. Low elevation southerly lines of sight from Europe (as far north as WSRT) intersect this altitude around the African Mediterranean coast, just where Fig. 1 shows the equatorial anomaly in the evening. There are UMass/Lowell digisondes (digital ionosondes) in southern Spain and Italy (Klobuchar 1997); access to their data could increase the reliability of PIMVLBI $`n_e`$ estimates along such directions for EVN stations, regardless of whether an SV happened to be conveniently located. Other types of profile-data instruments include incoherent scatter radars and occultations of low-earth-orbiting satellites carrying a GPS receiver. There are also other possibilities for expanding the Daniell et al. (1996) RTA. That algorithm was developed to provide real-time corrections for a site literally in the middle of the ocean; the typical situation for a radio-astronomy antenna providing data for later VLBI correlation is not so severe on either count. PIMVLBI already takes advantage of the relaxation of the real-time requirement by using data from the complete passes of relevant SVs when computing GPS TEC values at a specific time (allowing us to use low-elevation GPS satellites with noise characteristics comparable to higher-elevation ones when anti-spoofing is on; conversely, Daniell et al. (1996) used no GPS satellite with $`El<35^{\circ }`$). However, the PIMVLBI RTA still treats the GPS/PIM ratio "field" as an independent spatial function at each instant. This leads to situations where, say, an SV yields a non-unity GPS/PIM ratio at a time just before it sets. Immediately afterwards, the information contained in the ratio is lost, even though the actual ionospheric $`n_e`$ distribution is not changing so abruptly. An improved algorithm might incorporate some physically reasonable temporal smoothing of the spatial ratio-field in response to such discontinuous changes in the SV sampling. Further, we may also be able to incorporate GPS data from non-collocated receivers nearby.
GPS satellites unfortunately move too slowly to allow proper tomographic reconstruction of $`n_e`$ above an array of receivers (as could be done with the old TRANSIT satellites for meridional arrays), but inclusion of PIM and of ionosonde $`h_\mathrm{m}F_2`$ and $`N_\mathrm{m}F_2`$ data could perhaps provide sufficient external constraints to increase the reliability of the slant-to-vertical and horizontal-translation transformations required to use non-collocated GPS/PIM ratios in the traditional way.

## 4 Concluding Thoughts
The aeronomy community has made a great deal of progress in ionospheric modeling since the mid-70's that has not been thoroughly absorbed by radio astronomy, and from which we could profit. The PIM model just described falls in this category. Among its advantages, it is freely available and is upgraded as model refinements proceed. It calculates $`n_e(\stackrel{}{r})`$, allowing direct integration, to arbitrary order, of effects along the specific lines of sight from our antennas to our sources. It can readily incorporate various types of external contemporaneous data should they be available, but does not require them should they not. The next step, once all the code is in place, involves pursuing validation, especially via the extensive store of IGS GPS data (e.g., more "Figures 2"), to characterize its strengths and weaknesses as correlated with location, local time, season, solar cycle, etc. Such work would complement USAF PRISM-validation efforts, for which GPS data play a decidedly subsidiary role compared to ionosondes and in situ ionospheric data. There has also been preliminary discussion about including ionospheric modeling capability in AIPS++. For example (in broadest strokes of AIPS++ parlance), the PIM databases and geophysical-parameter files could become global data; GPS TEC measurements from individual stations could become part of their measurement set; and the user could have the flexibility to explore various combinations of PIM and the available real-time data as she sees fit during the course of analysis.
no-problem/9907/hep-th9907073.html
ar5iv
text
# Bose-Einstein condensation and superfluidity of a weakly-interacting photon gas in a nonlinear Fabry-Perot cavity

## 1 Introduction
This work originates from a very recent experimental proposal to detect Bose-Einstein condensation and a new superfluid state of light. The apparatus consists basically of a planar Fabry-Perot cavity filled with a nonlinear polarizable medium responsible for an effective repulsive short-range four-photon interaction. The mirrors of the cavity have a low but finite transmittivity, allowing photons of an incident laser beam to enter and leave the cavity, so that a steady-state condition is achieved after many photon-photon collisions. Following that proposal, the weak interaction between the photons can be viewed as a four-photon term arising from a repulsive pairwise interaction provided by a self-defocusing Kerr nonlinear medium inside the Fabry-Perot cavity. Moreover, due to the boundary conditions required by the Fabry-Perot, one is left with an effective two-dimensional nonrelativistic massive weakly-interacting photon gas confined in the cavity. The nonrelativistic regime is due to the paraxial propagation of the light inside the resonator. This effective 2D weakly-interacting photon gas displays a Bogoliubov-type dispersion relation, suggesting the existence of a Bose-Einstein photon condensate and of a possible superfluid state of light. In particular, one of the tasks of the experiment is to investigate the existence of the sound-like waves corresponding to the collective phonon excitations in the photon superfluid, which should propagate with a velocity $`v_c`$ of a few thousandths of the vacuum speed of light, $`v_c=4.2\times 10^7\ \mathrm{cm/s}`$. A follow-up experiment is being planned in order to demonstrate that the sound wave velocity $`v_c`$ is a critical velocity for the photon (super)fluid, corresponding to the existence of persistent currents. It is useful now to spend a few words on the meaning of Bose-Einstein condensation for this 2D photon gas, as it is well known that there are quite severe restrictions on the existence of a Bose-Einstein condensation in momentum space for a weakly-interacting 2D gas in the thermodynamic limit at any nonvanishing temperature. We underline first of all that the system realized by this experimental set-up is a zero-temperature Bose gas. We know that for an ideal Bose gas a macroscopic number $`N_0`$ of particles will condense, occupying the zero-momentum state; however, the presence of interactions could strongly modify this picture, in such a way that the presence of the condensate is no longer so obvious. Nevertheless, in our case the interaction is very weak, and we may expect that $`N_0`$ remains a macroscopic number. Moreover, the presence of a nonvanishing interaction is crucial, since it implies a redefinition of the spectrum of the excitations which gives rise to the Bogoliubov dispersion relation. Another remarkable issue is the fact that the photon gas ought to be considered here as a genuine finite-sized system, as the whole apparatus possesses a finite small volume (i.e. the cavity volume) and the average number $`N`$ of photons inside the cavity is kept finite as well. Also, the Fabry-Perot boundary conditions play a crucial role in providing an effective mass for the photons, which turns out to be proportional to the inverse of the separation length $`L`$ between the mirrors of the cavity.
The situation looks very close to that of trapped Bose gases, for which Bose-Einstein condensation has been observed even in the case of 1D dimensionally reduced systems. For these inhomogeneous finite-sized systems the existence of a Bose-Einstein condensation is not, strictly speaking, a phase transition. Rather, it is a direct consequence of the experimental evidence of a macroscopic occupation of the lowest state. It is worth remarking that the number $`N_0`$ of photons occupying the lowest state has been estimated to be of the order of $`N_0=8\times 10^{11}`$. The aim of the present work is to propose and analyse a possible theoretical set-up. We shall be able to show that the effective 2D weakly-interacting massive photon gas can actually be obtained from a four-photon QED-inspired Hamiltonian, once the gauge freedom and the Fabry-Perot boundary conditions have been properly taken into account.

## 2 The Effective Hamiltonian
As the starting QED-inspired effective Hamiltonian describing a weak repulsive four-photon interaction we take the following gauge invariant expression
$$H_{\mathrm{eff}}=\int _Vd^3x\left(\frac{1}{2}\left(\vec{E}^2+\vec{B}^2\right)+\frac{\lambda }{4}\left(A_\mu ^TA^{T\mu }\right)^2\right),$$ (2.1)
$`V`$ being the volume of the Fabry-Perot cavity (as we shall see in the next section, the boundary conditions required by the Fabry-Perot do allow for the usual integration by parts). The coupling constant $`\lambda `$ is positive and the $`A_\mu ^T`$ stand for the transverse gauge invariant components of the gauge field, i.e.
$$A_\mu ^T=\left(g_{\mu \nu }-\frac{\partial _\mu \partial _\nu }{\partial ^2}\right)A^\nu =\frac{1}{\partial ^2}\partial ^\nu F_{\nu \mu },$$ (2.2)
where $`g_{\mu \nu }=\mathrm{diag}(+,-,-,-)`$ is the flat Minkowski metric and
$$F_{\mu \nu }=\partial _\mu A_\nu -\partial _\nu A_\mu .$$ (2.3)
In order to motivate the choice of the Hamiltonian (2.1), we underline that, in the paraxial approximation, the field propagation inside the cavity is described by a nonlinear Schrödinger equation. The effective Hamiltonian (2.1) can then be obtained by requiring that the Heisenberg equations of motion for the field reproduce the nonlinear Schrödinger equation in the semiclassical limit. Although not needed, it is worth remarking that expression (2.1) can be immediately generalized in a gauge invariant way to a typical two-body interaction, namely
$$H_{\mathrm{eff}}=\int _Vd^3x\,\frac{1}{2}\left(\vec{E}^2+\vec{B}^2\right)+\frac{1}{4}\int _Vd^3x\,d^3y\,\left(A^TA^T\right)(x)\,U(x-y)\,\left(A^TA^T\right)(y),$$ (2.4)
for some short-range repulsive potential $`U(x-y)`$. Being interested in the analysis of the ground state of the Hamiltonian (2.1), we shall work in the static situation in which all fields are assumed to be time-independent.
Accordingly, we shall make use of the so-called temporal gauge
$$A_0=0,\qquad \vec{A}=\vec{A}(\vec{x}),$$ (2.5)
which implies
$$A_0^T=0,\qquad A_i^T=\left(\delta _{ij}-\frac{\partial _i\partial _j}{\partial ^2}\right)A_j=\frac{1}{\partial ^2}\partial _jF_{ji},\quad i,j=1,2,3,$$ (2.6)
where $`1/\partial ^2`$ is the Green's function of the three-dimensional laplacian,
$$\frac{1}{\partial ^2}=-\frac{1}{4\pi \left|\vec{x}-\vec{y}\right|}.$$ (2.7)
For the Hamiltonian (2.1) we get
$$H_{\mathrm{eff}}=\int d^3x\left(\frac{1}{4}F_{ij}F^{ij}+\frac{\lambda }{4}\left(A_i^TA_i^T\right)^2\right).$$ (2.8)
Obviously, expression (2.8) is left invariant by the time-independent gauge transformations, i.e.
$$\delta A_i=\partial _i\eta (\vec{x}).$$ (2.9)
This spatial type of gauge invariance can be fixed by imposing the axial gauge condition
$$A_3(\vec{x})=0,$$ (2.10)
naturally suggested by the geometry of the Fabry-Perot cavity. Here the $`z`$-axis is chosen to coincide with the direction of propagation of the laser beam incident on the Fabry-Perot. The mirrors of the cavity lie in the transverse $`xy`$ plane. However, as is well known, condition (2.10) does not fix the gauge freedom completely and allows for a further residual local invariance, corresponding to gauge transformations in the plane $`xy`$ orthogonal to the $`z`$-axis. In fact, owing to equation (2.10), for the field strength $`F_{ij}`$ we get
$$F_{a3}=-\partial _3A_a,\qquad F_{ab}=\partial _aA_b-\partial _bA_a,\quad a,b=1,2.$$ (2.11)
It is apparent then that the components of $`F_{ij}`$ are left invariant by the following $`z`$-independent transformations
$$\delta A_a=\partial _a\eta (x_{\perp }),$$ (2.12)
where $`\vec{x}_{\perp }=(x,y)`$. This further residual gauge invariance (2.12) can be fixed by requiring the additional condition
$$\partial _aA_a=0,$$ (2.13)
from which it follows that the two gauge fields $`A_a`$ can be identified with their transverse components. Finally, for the fully gauge-fixed effective Hamiltonian we obtain
$$H_{\mathrm{eff}}=\int d^2x_{\perp }\,dz\left(\frac{1}{2}F_{3a}F^{3a}+\frac{1}{4}F_{ab}F^{ab}+\frac{\lambda }{4}\left(A_aA_a\right)^2\right)=\int d^2x_{\perp }\,dz\left(\frac{1}{2}(\partial _3A_a)(\partial _3A_a)-\frac{1}{2}A_a\partial _{\perp }^2A_a+\frac{\lambda }{4}\left(A_aA_a\right)^2\right),$$
where $`\partial _{\perp }^2=\partial _a\partial _a`$ is the two-dimensional laplacian. This Hamiltonian will be the starting point for the analysis of the spectrum of the excitations of the weakly coupled photon gas.

## 3 The spectrum of the excitations
In order to analyse the spectrum of the Hamiltonian (2), we first have to take the boundary conditions of the problem properly into account. These require the vanishing of the fields at the reflecting surfaces of the mirrors of the Fabry-Perot cavity, i.e.
$$A_a(x_{\perp },z)=\frac{1}{\sqrt{L}}\tilde{A}_a(x_{\perp })\,\mathrm{sin}\left(\frac{\pi }{L}n_0z\right),$$ (3.15)
where $`L`$ is the distance between the mirrors and the fixed integer $`n_0`$ is related to the frequency $`\omega `$ of the laser beam incident on the cavity through $`n_0\pi /L=\omega `$. It should be pointed out that the form of the field (3.15) requires the assumption that the spacing between the modes of the cavity is so large that only one longitudinal mode is excited by the laser beam. Concerning the $`xy`$ plane, periodic boundary conditions will be assumed.
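For later use, note the elementary $`z`$-integrals behind the dimensional reduction performed next: for any integer $`n_0`$,
$$\int _0^L\mathrm{sin}^2\left(\frac{\pi n_0z}{L}\right)dz=\frac{L}{2},\qquad \int _0^L\mathrm{sin}^4\left(\frac{\pi n_0z}{L}\right)dz=\frac{3L}{8};$$
together with the $`1/\sqrt{L}`$ normalization of (3.15), these produce the $`m^2/4`$ mass term and the $`3\lambda /32L`$ quartic coefficient obtained below.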
Inserting equation (3.15) into expression (2) and performing the integration over the $`z`$-axis, we easily get the following 2D dimensionally reduced effective Hamiltonian
$$H_{\mathrm{eff}}=\int d^2x_{\perp }\left(-\frac{1}{4}\tilde{A}_a\partial _{\perp }^2\tilde{A}_a+\frac{m^2}{4}\tilde{A}_a\tilde{A}_a+\frac{3\lambda }{32L}\left(\tilde{A}_a\tilde{A}_a\right)^2\right),$$ (3.16)
where $`m=n_0\pi /L=\omega `$ is the effective mass of the photon gas confined in the Fabry-Perot cavity. It is worth mentioning that the paraxial approximation guarantees that the photons have a finite effective mass also when tunneling effects due to the low but finite transmittivity of the mirrors are taken into account. Setting
$$\tilde{A}_1=(\phi +\phi ^{\dagger }),\qquad \tilde{A}_2=i(\phi -\phi ^{\dagger }),$$ (3.17)
we obtain the final form of the four-photon Hamiltonian
$$H_{\mathrm{eff}}=\int d^2x_{\perp }\left(-\phi ^{\dagger }\partial _{\perp }^2\phi +m^2\phi ^{\dagger }\phi +\frac{3\lambda }{2L}\left(\phi ^{\dagger }\phi \right)^2\right),$$ (3.18)
describing an effective weakly-interacting massive 2D photon gas. This Hamiltonian displays a $`U(1)`$ global phase symmetry, which follows from the $`O(2)`$ rotational invariance in the $`xy`$ plane of the dimensionally reduced 2D effective Hamiltonian (3.16). Moreover, for paraxial propagation of the light inside the cavity we have $`p_{\perp }=\sqrt{p_1^2+p_2^2}\ll m`$, so that the photon gas is in fact nonrelativistic. In order to obtain the spectrum of the Hamiltonian we expand the fields $`\phi `$, $`\phi ^{\dagger }`$ in Fourier modes
$$\phi =\frac{1}{\sqrt{A}}\underset{p}{\sum }a_pe^{i\vec{p}_{\perp }\cdot \vec{x}_{\perp }},\qquad \phi ^{\dagger }=\frac{1}{\sqrt{A}}\underset{p}{\sum }a_p^{\dagger }e^{-i\vec{p}_{\perp }\cdot \vec{x}_{\perp }},$$ (3.19)
where $`A`$ is the available area of the cavity in the transverse plane $`xy`$. Thus
$$H_{\mathrm{eff}}=\underset{p}{\sum }ϵ_pa_p^{\dagger }a_p+\frac{3\lambda }{2}\frac{1}{V}\underset{p_1+p_2=p_3+p_4}{\sum }a_{p_1}^{\dagger }a_{p_2}^{\dagger }a_{p_3}a_{p_4},$$ (3.20)
where $`V=AL`$ is the volume of the cavity and
$$ϵ_p=\sqrt{\vec{p}_{\perp }^2+m^2}.$$ (3.21)
Of course, expression (3.20) is nothing but the starting Hamiltonian of the original proposal. Now we can proceed with the standard analysis of the weakly-interacting Bose gas within the Bogoliubov approximation. Assuming that the number $`N_0`$ of excitations occupying the zero-momentum state is macroscopic, i.e. $`N_0\gg 1`$ (we recall that $`N_0`$ is of the order of $`8\times 10^{11}`$, so that this condition is in fact verified), and neglecting higher order interaction terms above the condensate, the Hamiltonian (3.20) is diagonalized by means of a Bogoliubov transformation
$$a_p=u_p\alpha _p+v_p\alpha _{-p}^{\dagger },\qquad a_p^{\dagger }=u_p\alpha _p^{\dagger }+v_p\alpha _{-p},$$ (3.22)
with
$$u_p^2-v_p^2=1.$$
The resulting spectrum is easily worked out and turns out to be given by
$$H_{\mathrm{eff}}=\underset{p\ne 0}{\sum }\stackrel{~}{ϵ}_p\alpha _p^{\dagger }\alpha _p+E_0,$$ (3.23)
with
$$\stackrel{~}{ϵ}_p=\left[\left(ϵ_p-m+3\lambda \frac{N}{V}\right)^2-\left(3\lambda \frac{N}{V}\right)^2\right]^{1/2},$$ (3.24)
$`N`$ being the total average number of photons in the cavity.
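It is instructive to expand (3.24) in the nonrelativistic regime $`p_{\perp }\ll m`$, where $`ϵ_p-m\simeq p_{\perp }^2/2m`$:
$$\stackrel{~}{ϵ}_p=\left[\left(\frac{p_{\perp }^2}{2m}\right)^2+\frac{3\lambda }{m}\frac{N}{V}\,p_{\perp }^2\right]^{1/2}\longrightarrow p_{\perp }\sqrt{\frac{3\lambda }{m}\frac{N}{V}}\quad \text{as }p_{\perp }\to 0,$$
i.e. a linear phonon branch at small momenta, crossing over to the free-particle form $`p_{\perp }^2/2m`$ at large momenta.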
As is well known, expression $`\left(\text{3.24}\right)`$ provides a nonvanishing critical velocity for the phonon sound-like waves

$$v_c=\underset{p\to 0}{lim}\frac{\stackrel{~}{\epsilon }_p}{p}=\sqrt{\frac{3\lambda }{m}\frac{N}{V}}. \qquad (3.25)$$

Let us finally display the time evolution of the fields in the Bogoliubov approximation $`\left(\text{3.23}\right)`$. Making use of the relations

$$u_p^2+v_p^2 = \frac{\left(\epsilon _p-m+3\lambda \frac{N}{V}\right)}{\stackrel{~}{\epsilon }_p}, \qquad 2u_pv_p = -3\frac{N}{V}\frac{\lambda }{\stackrel{~}{\epsilon }_p}, \qquad (3.26)$$

for the time evolution of the mode $`a_p`$ we obtain

$$i\frac{\partial a_p(t)}{\partial t} = \left(\epsilon _p-m\right)a_p(t)+3\lambda \frac{N}{V}\left(a_p(t)+a_{-p}^{\dagger }(t)\right), \qquad a_p(t) = e^{iH_{\mathrm{eff}}t}a_pe^{-iH_{\mathrm{eff}}t}, \qquad (3.27)$$

in agreement with the classical nonlinear argument of .

## 4 Conclusion

In this work we have derived a possible theoretical set-up for the photon condensation effect proposed in , starting from a nonlinear QED-inspired Hamiltonian. In particular, we have shown that the dynamics of the photon gas inside the Fabry-Perot cavity filled with a nonlinear polarizable medium is described in terms of a $`2D`$ dimensionally reduced effective Hamiltonian of a massive complex scalar field. This implies that the transverse dimensions of the cavity should play a crucial role in the photon condensation. In fact, according to our field theory description, if one takes as a possible thermodynamic limit $`N,A\rightarrow \infty `$ with $`N/A=`$ const. (we recall that $`A`$ is the available area of the cavity in the transverse plane $`xy`$), the lowest-lying phonon of the Bogoliubov spectrum would play the role of the Goldstone boson corresponding to the spontaneous breaking of the $`U(1)`$ symmetry of the Hamiltonian (3.18). However, it is well known that spontaneous symmetry breaking cannot take place in two dimensions, due to the infrared divergences associated with massless scalar fields . Therefore, as in the case of trapped Bose gases, the photon gas has to be considered as a genuine finite-sized system, the transverse dimensions of the cavity providing a natural infrared cut-off (notice in fact that finite-size effects are rather relevant in the condensation of 2D dimensionally reduced atomic gases, for very similar reasons: long-wavelength phonons destabilize the long-range order of the condensate ). Thus, we expect that these finite-size conditions should actually be realized in the experimental framework in order to observe the photon condensate. The role played by the finiteness of the system is under investigation. In this context, it would also be very interesting to study the possibility of the existence of a Kosterlitz-Thouless phase transition for the $`2D`$ photon (super)fluid.

## 5 Acknowledgements

We are very grateful to R.Y. Chiao for valuable suggestions and comments. We are indebted to P. Di Porto, D. Barci and E. Fraga for fruitful discussions. The Conselho Nacional de Desenvolvimento Científico e Tecnológico CNPq-Brazil, the Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (Faperj) and the SR2-UERJ are acknowledged for the financial support.
# Liquid Helium and Liquid Neon - Sensitive, Low Background Scintillation Media For The Detection Of Low Energy Neutrinos

## I Introduction

The observed deficit in solar neutrino flux at the Earth's surface is now well established; the neutrino detection rates measured in the Homestake, SAGE/GALLEX, and Kamiokande/Super-Kamiokande experiments are each significantly less than predicted by the Standard Solar Model (SSM), but taken together are also logically incompatible with any current solar model. Resolution of this problem remains a tantalizing goal. It is plausible that the correct model explaining the observed neutrino detection rates involves flavor oscillation of massive neutrinos. The several scenarios for flavor conversion will most likely be discriminated through measurement of the solar neutrino flux, including temporal variations, at all energies and for all neutrino species. Distortions of the predicted solar neutrino energy spectra could indicate neutrino flavor oscillations, as could daily or seasonal variation of the detected neutrino flux. With these motivations, it is no surprise that real-time detection of neutrinos is rapidly becoming more sophisticated, with many new detectors either in development or recently implemented. One of the most daunting experimental challenges in neutrino observation is the real-time measurement of the full flux of low energy neutrinos from the solar reaction $`\mathrm{p}+\mathrm{p}\rightarrow \mathrm{e}^++\mathrm{d}+\nu _\mathrm{e}`$. This "pp" reaction is the most intense source of solar neutrinos, and initiates the chain of fusion reactions in the sun. The emitted pp neutrinos range in energy from 0 to 420 keV and have a precisely predicted flux of $`5.94\times 10^{10}\mathrm{s}^{-1}\mathrm{cm}^{-2}`$ at the Earth. Despite this high flux, the pp neutrinos have proven difficult to characterize in real time; low energy neutrinos yield low energy scattering events, and these are difficult to detect and discriminate from radioactive backgrounds. In order to characterize and monitor the pp neutrino flux, a detector is needed that has a high signal yield for neutrino-induced events, a high rate of such events, and a low background rate from intrinsic radioactivity. We are familiar with several approaches to the real-time detection of pp neutrinos: bolometric detection of helium atoms liberated by rotons from a liquid helium bath (HERON), measurement of electron tracks generated in a pressurized He (HELLAZ) or $`\mathrm{CF}_4`$ (SUPER-MuNu) gas-filled time projection chamber, and the use of a low energy neutrino absorbing nuclide that follows absorption with a delayed gamma emission (LENS). Here we propose a detector that uses liquid helium or neon as a scintillation target. This scheme offers the advantages of high scintillation yield, high neutrino detection rate, low intrinsic radioactivity, and simplicity.

## II Experimental Overview

Detection of neutrinos in our proposed experiment is based on neutrino-electron elastic scattering, $`\nu _\mathrm{x}+\mathrm{e}^-\rightarrow \nu _\mathrm{x}+\mathrm{e}^-`$, where x = ($`\mathrm{e}`$, $`\mu `$, $`\tau `$). For pp neutrinos, the scattered electron can range in energy from 0 to 260 keV. The scattering cross-section for electron neutrinos is about $`1.2\times 10^{-45}\mathrm{cm}^2`$ (about 4 times larger than for $`\mu `$ or $`\tau `$ neutrinos). This small cross-section leads to the need for a large detector.
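The scale this sets can be seen from a one-line estimate (a sketch; the 10-ton target and its $`3\times 10^{30}`$ electrons are the configuration adopted below):

```python
# Back-of-envelope pp scattering rate implied by the numbers above;
# the 10-ton target (~3e30 electrons) is the configuration adopted below.
flux  = 5.94e10          # pp neutrinos / (cm^2 s) at Earth (SSM)
sigma = 1.2e-45          # cm^2, nu_e - e elastic cross-section
n_e   = 3.0e30           # electrons in a 10-ton helium or neon target
print(flux * sigma * n_e * 86400.0)   # ~ 18 nu_e-like events per day
```

This reproduces the pp part of the daily rate quoted below; the remainder of the total solar rate comes from the higher-energy branches of the solar spectrum.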
With 10 tons of active scintillator ($`3\times 10^{30}`$ electrons), a total solar neutrino scattering rate of roughly 27 per day will occur, with about 18 of these from pp neutrinos (according to the SSM). This mass of liquid helium (neon) fills a 5.1 (2.6) meter diameter sphere. We have diagrammed our proposed experiment in Figure 1. The design characteristics are similar to those used currently in the Borexino experiment, with crucial differences arising from the choice of scintillator and associated cryogenics. A spherical geometry is chosen for conceptual simplicity (a cylindrical volume, for example, could be used instead). In the center of the experiment is an active region (10 tons) of liquid helium or neon. Surrounding the active region is a thin shell of transparent material. On the inner surface of this shell is evaporated a layer of tetraphenyl butadiene (TPB), a wavelength shifting fluor. Around the active (inner) region is a shielding (outer) region filled with either liquid neon or liquid helium. If neon is used as a shielding medium, it should be about 2 meters thick, while if the shielding region is liquid helium, this region should be 5 meters thick. These liquids are held in a large transparent tank (or 2 separate tanks, see below). Surrounding the central tank(s), separated by vacuum, is another transparent tank filled with pure liquid nitrogen. Outside the cryogens, at room temperature, is a large array of low-activity photomultiplier tubes, all facing the interior and fitted with light concentrators. Around the entire assembly is a stainless-steel tank, filled with water. Detection of solar neutrinos is via scintillation originating from neutrino-electron scattering that occurs in the active region. These events cause intense emission of extreme ultraviolet light (EUV), centered at a wavelength of approximately 80 nm. This light is absorbed by the TPB waveshifter, causing fluorescence in the blue ($`\sim `$ 430 nm). The blue light travels through the shield region, through the transparent acrylic walls and liquid nitrogen, and is detected by the photomultipliers at room temperature. Detection electronics are triggered by multiple photomultiplier coincidence, indicating a potential neutrino scattering event. There are several aspects of this geometry that lead to important advantages. EUV light that originates in the active region will hit the TPB film and be converted into blue light, but EUV light that originates outside the active region will simply be absorbed and will not contribute to the background. The liquid nitrogen acts both as black-body radiation shielding and gamma ray shielding, while the tank of deionized water outside the photomultipliers acts as further shielding. The entire experiment will be located deep underground to reduce cosmic ray events. Muon events will be actively vetoed. Vetoing could be done using a set of photomultipliers to detect Cerenkov light in the water tank.

## III Signal

A relatively clear model of scintillations in liquid helium and neon can be elucidated from the numerous experimental characterizations of charged-particle-induced scintillation in condensed noble gases. When an energetic charged particle passes through the liquid, numerous ion-electron pairs and excited atoms are created. The ions immediately attract surrounding ground state atoms and form ion clusters. When the ion clusters recombine with electrons, excited diatomic molecules are created.
Similarly, the excited atoms react with surrounding ground state atoms, also forming excited diatomic molecules. Fluorescence in condensed noble gases is observed to be almost entirely composed of a wide continuum of EUV light, emitted when these excited diatomic molecules decay to the monoatomic ground state. The energy of emission is less than the difference in energies between the ground state (two separated atoms) and the first atomic excited state for any given noble gas. The scintillation target is thus transparent to its own scintillation light, and a detector based on a condensed noble gas can be built to essentially arbitrary size without signal loss from reabsorption. Liquid helium scintillations have been more quantitatively studied than neon scintillations. It has been found that conversion of electron kinetic energy into prompt scintillation light is highly efficient; about 24% of the energy of an energetic electron is converted into prompt EUV light, corresponding to 15,000 photons per MeV of electron energy. Recent work towards detection of ultracold neutrons trapped in liquid helium has resulted in the characterization of efficient wavelength shifting fluors that convert EUV light into blue visible light. This blue light is well matched to the peak sensitivity of available photomultiplier tubes. TPB is the fluor of choice, having a (prompt, $`<`$ 20 ns) photon-to-photon conversion efficiency from the EUV to the blue of at least 70% (and a total conversion efficiency of 135%). The prompt scintillation component from the combined liquid helium-waveshifter system has been measured to have a 20 ns width, allowing the use of coincidence techniques to reduce background. (In liquid argon and liquid xenon, the prompt ultraviolet photon yield has been measured to be even larger; Doke et al. have measured yields of 40,000 and 42,000 photons/MeV, respectively. This indicates that it is likely that neon has a comparable yield.) Given a scintillation yield of 15,000 photons per MeV, a waveshifting efficiency of 70%, a photomultiplier covering fraction of 70%, and a bialkali photocathode quantum efficiency of 20%, a total photoelectron yield of about 1500 per MeV could be achieved from the prompt component. With this expected photoelectron yield, the energy of a 100 keV neutrino-electron scattering event could be measured with an average of 150 photoelectrons, attaining 16% energy resolution. Liquid neon can be expected to be a similarly fast and efficient scintillation medium, with properties similar to those found in liquid helium. Packard et al. have found that the electron-excited emission spectrum of liquid neon peaks at 77 nm. Liquid neon should also have an intense afterpulsing component due to the extreme ultraviolet radiation of triplet molecules. In liquid helium, the lifetime of this slow component has been measured to be 13 seconds, close to the radiative lifetime of the ground state triplet molecule. But the theoretically predicted lifetime of ground state triplet neon molecules is only 11.9 $`\mu \mathrm{s}`$. In liquid neon, the ground triplet molecular lifetime has been measured to be 2.9 $`\mu \mathrm{s}`$. Intense afterpulsing following neutrino scattering events could be used to positively identify events within the active neon, and could also be added into the prompt signal to improve pulse height resolution. However, our detection scheme does not necessarily require the use of this afterpulsing signal.
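The photoelectron budget quoted above is simple enough to tabulate explicitly (a sketch using the efficiencies assumed in the text; Poisson statistics set only the purely statistical part of the resolution):

```python
import math

# Prompt photoelectron yield from the efficiencies quoted above.
photons_per_MeV = 15000.0      # prompt EUV photons per MeV (liquid helium)
eta_wls   = 0.70               # TPB EUV -> blue conversion (prompt)
eta_cover = 0.70               # photomultiplier covering fraction
eta_qe    = 0.20               # bialkali photocathode quantum efficiency

pe_per_MeV = photons_per_MeV * eta_wls * eta_cover * eta_qe
n_pe = 0.1 * pe_per_MeV        # a 100 keV scattered electron
print(pe_per_MeV, n_pe)        # ~ 1470 p.e./MeV, ~ 147 p.e. at 100 keV
print(1.0 / math.sqrt(n_pe))   # ~ 8% relative Poisson spread (1 sigma)
```

The 16% figure quoted above is then consistent with roughly a two-sigma (FWHM-like) reading of this Poisson width.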
## IV Cryogenics

We describe here the cryogenic and structural requirements for a low energy neutrino detector whose active region is a 10-ton reservoir of liquid helium or neon. We consider three cases. The backgrounds due to construction materials are discussed in section V. Case A: Liquid neon active region, liquid neon shielding region. Here the transparent tank holding the shielding and active regions would be constructed of a copper grid and a transparent, low radioactivity material, such as quartz or acrylic. Copper is used to give the tank walls high thermal conductivity and structural rigidity, while the quartz or acrylic allows scintillation light through to the photomultipliers. Given a total surface area of $`\pi (6.6\mathrm{m})^2=137\mathrm{m}^2`$ and a conservatively estimated emissivity of 1, a total of 270 W is absorbed by the tank walls and routed through a copper heat link to a closed-cycle helium gas refrigerator outside the shielding. If the copper grid covers 20% of the tank surface, has a bulk thermal conductivity of 15 W $`\mathrm{cm}^{-1}`$ $`\mathrm{K}^{-1}`$, and this copper is 10 cm thick, then the power absorbed from 77 K blackbody radiation results in a temperature difference across the tank of no more than 2 degrees. The use of copper to maintain a low thermal gradient is necessary because of the narrow temperature window at which neon is liquid (24.5-27.1 K) and the poor thermal conductivity ($`10^{-3}`$ W $`\mathrm{cm}^{-1}`$ $`\mathrm{K}^{-1}`$) of liquid neon. The cryogenic constraints on this tank may be relaxed if convection in the liquid neon is found to play an appreciable role in the flow of heat through its volume. The active and shielding regions are separated by a thin ($`\sim `$ 0.1 mm) shell of transparent plastic or quartz. This shell simply floats in the neon and is held in place by nylon strings connecting the shell to the copper tank. The shell may have small holes in it to allow liquid neon to flow freely between the active and shielding regions. Case B: Liquid helium active region, liquid neon shielding region. As in case A, the active and shielding regions are held in a copper grid composite tank. The tank must however be of larger diameter (9.1 m instead of 6.6 m) to accommodate the larger active region. Also, the active and shielding regions must be separated by a vacuum space because of the different temperatures of the liquid neon and liquid helium. The separation of the active and shielding regions must be accomplished with as little material as possible so as to minimize radioactive backgrounds. Appropriate separation may be possible using a 1 mm thick Kevlar-acrylic composite shell, with shielding and active regions held apart using small acrylic pegs. Case C: Liquid helium active region, liquid helium shielding region. Liquid helium is not an effective enough gamma ray absorber to protect the active region from copper activity. Therefore the tank must be made from a transparent, low radioactivity material such as acrylic. The heat load from 77 K is large (1430 W), but by cooling the helium through its superfluid transition temperature (2.2 K) to achieve high thermal conductivity, the temperature of the helium may be made constant throughout its volume. The high thermal load on the helium may be handled with a large pumped helium system outside the stainless steel tank. As in Case A, the active and shielding regions may be separated with a thin sheet of plastic or quartz. General Considerations.
The liquid nitrogen shielding may be held in either a copper grid composite or acrylic tank. The nitrogen should be thick enough (1-2 m) to sufficiently absorb gamma rays from the photomultipliers and stainless tank. Acrylic is a low activity, transparent, strong material. At low temperatures, acrylic remains strong and tough. The yield strength of acrylic increases significantly as temperature is lowered, while the fracture toughness remains roughly constant. Nevertheless, any acrylic containers will have to be designed carefully to avoid unnecessary thermal and mechanical stresses, as the cryogens are of larger scale than is common in low temperature work.

## V Backgrounds

Condensed noble gases have an important advantage over organic scintillators: they have no $`{}_{}{}^{14}\mathrm{C}`$ contamination. But among the condensed noble gases, only liquid neon and liquid helium can satisfy the strictest requirements of low radioactive contamination. Natural argon is contaminated by the two long-lived isotopes $`{}_{}{}^{39}\mathrm{Ar}`$ and $`{}_{}{}^{42}\mathrm{Ar}`$, and natural krypton contains $`{}_{}{}^{85}\mathrm{Kr}`$ that precludes its use in low background detectors. Liquid xenon would need to be cleaned of Ar and Kr, and double beta decay of $`{}_{}{}^{136}\mathrm{Xe}`$ would have to be addressed. In addition, while liquid xenon has been put to increasing use in searches for dark matter, its high price (at least $1,000,000 per ton) makes liquid xenon unattractive for use in a large low energy neutrino detector. Helium and neon have no unstable naturally occurring isotopes and therefore no inherent radioactive backgrounds. They do however need to be cleaned of dissolved Ar and Kr, as well as possible low-level contamination by K, U, and Th, but their low boiling temperatures allow for simple and effective solutions to these problems. Distillation can effectively remove argon and krypton, and by passing the helium or neon through a cold trap, the non-noble radioactive contaminants can be frozen out. In neon one remaining possible radioactive contaminant is tritium. If it is found that commercially available neon is contaminated with low levels of tritium, then it can be easily removed by chemical means. Impurities within the helium or neon are therefore not expected to be a significant source of background. Helium and neon are also relatively inexpensive. Because liquid helium and neon are easily cleaned of radioactive isotopes, the limiting backgrounds are expected to arise from the various construction materials. Copper (used in cases A and B) has been shown to possess low levels of radioactive impurities; an estimate of the activity of copper stored underground for a year gives 0.02 events $`\mathrm{kg}^{-1}`$ $`\mathrm{minute}^{-1}`$. Possible impurity levels of other necessary materials can be estimated from the results of the BOREXINO and SNO collaborations. It is found that acrylic is commercially available with U and Th levels of less than $`10^{-13}`$ g/g. Photomultiplier assemblies can be constructed with U and Th levels of $`10^{-8}`$ g/g. Gamma rays emitted from the copper, acrylic, photomultipliers, stainless steel tank, and heat link will Compton scatter in the nitrogen and shielding regions, producing Cerenkov light that can be detected by the photomultipliers. There will be a significant rate of such events; for example, the BOREXINO group reports a gamma flux of $`2\times 10^6\mathrm{day}^{-1}\mathrm{m}^{-2}`$ from their photomultiplier assembly.
Fortunately, the light yield from gamma Compton scattering events should be relatively small. Cerenkov light should result in no more than 10 photoelectrons per MeV, and visible scintillation light should contribute even less. In liquid helium scintillations, the visible light output has been measured to be 500 times less intense than the extreme ultraviolet output. Furthermore, the visible output is concentrated in wavelengths greater than 640 nm, where photocathode responsivities can be chosen to be low. In liquid neon, the visible light emissions are similarly weak, with wavelengths that are shifted even further into the infrared. As a result, the outer neon region, without exposure to an ultraviolet waveshifter, will yield an insignificant amount of visible light from gamma scattering events within its volume. However, even with these effects the high rate of gamma scattering events in the shielding will produce significant background at low photoelectron number. This will therefore set a low energy threshold for neutrino events of roughly 20 keV. This leaves only 10% of solar neutrinos undetected. With a 2 (5) meter thick liquid neon (helium) shielding region, the rate of gammas entering the active volume should be less than 1/day, compared to the predicted 27/day solar neutrino counting rate. Also, gamma rays that penetrate the shielding region will have relatively high energies and are likely to deposit most of their energy in the active region, allowing energy cuts to further reduce background. The background levels arising from events in the shielding regions can be independently tested by running the experiment without any waveshifter. A variety of other effects may help to decrease background counts. The three-dimensional photomultiplier arrangement will allow rough determination of the event location. Events in the active volume will be more evenly spread over the photomultipliers than events in the liquid nitrogen and shielding volume. Also, the light concentrators affixed to the photomultiplier tubes will restrict their immediate field of vision to the active volume. The expected intense ultraviolet afterpulsing from the active liquid neon (see section III) could also provide an important test against background events. Radioactive contamination requirements of the materials separating the active and shield regions are stringent. However, very little of these materials is necessary. If clear plastic is used as a divider between the active and shielding regions, radioactive background from U and Th should be insignificant (given U and Th levels of less than $`10^{-13}`$ g/g). However, $`{}_{}{}^{14}\mathrm{C}`$ contamination is a serious issue. In the BOREXINO experiment, $`{}_{}{}^{14}\mathrm{C}`$ levels were demonstrated to be less than $`1.9\times 10^{-18}`$ $`{}_{}{}^{14}\mathrm{C}`$/C in organic scintillator synthesized from petroleum. The theoretical estimate for $`{}_{}{}^{14}\mathrm{C}/\mathrm{C}`$ in old petroleum is $`5\times 10^{-21}`$, and the higher measured value is presumed to arise during scintillator synthesis or later handling. A $`1.9\times 10^{-18}`$ $`{}_{}{}^{14}\mathrm{C}`$/C level in a 100 $`\mu \mathrm{m}`$ thick plastic divider would result in roughly 80 (30) events per day if helium (neon) is used as the active medium. This would obscure the expected 27 neutrino events per day.
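The scale of this $`{}_{}{}^{14}\mathrm{C}`$ problem follows from a short decay-rate tally (a sketch; the carbon mass fraction and divider mass are assumptions used only for illustration):

```python
import math

# Decay rate implied by a 14C/C ratio of 1.9e-18 (half-life 5730 yr).
N_A, t_half = 6.022e23, 5730.0 * 3.156e7          # /mol, seconds
lam = math.log(2.0) / t_half                      # decay constant, 1/s
per_kg_C = (1000.0 / 12.0) * N_A * 1.9e-18 * lam  # decays/s per kg of carbon
print(per_kg_C * 86400.0)                         # ~ 32 decays/day per kg C

# ~10 kg acrylic divider, assumed ~60% carbon by mass:
print(10.0 * 0.6 * per_kg_C * 86400.0)            # ~ 190 decays/day
```

This crude tally lands within a factor of a few of the 80 (30) per day quoted above; the precise figure depends on the divider geometry and on which decays deposit visible energy.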
However, the fact that very little material is required ($`\sim `$ 10 kg of plastic compared to 100 tons of organic scintillator used in the BOREXINO experiment) suggests it is reasonable to expect that the $`{}_{}{}^{14}\mathrm{C}`$ concentration could be held to an acceptable level. In scheme B, a strong, largely transparent material is needed to separate the liquid helium and liquid neon shielding regions. Because the amount of plastic needed is larger than in cases A and C, a lower level of radioactive impurities is necessary. A second option is to use thin quartz sheet as a substrate. If old silicon is used (older than 50,000 years), then $`{}_{}{}^{32}\mathrm{Si}`$ and $`{}_{}{}^{14}\mathrm{C}`$ are not a problem. But, of course, $`{}_{}{}^{238}\mathrm{U}`$, $`{}_{}{}^{40}\mathrm{K}`$, $`{}_{}{}^{232}\mathrm{Th}`$, $`{}_{}{}^{3}\mathrm{H}`$ and $`{}_{}{}^{22}\mathrm{Na}`$ must be shown to contribute less than 1 event per day in the energy range of interest. This should be possible because cleanliness levels of less than $`10^{-12}`$ g/g are routinely achieved in pure Si through zone-refining techniques. By converting this clean Si into silane ($`\mathrm{SiH}_4`$) gas, ridding the silane gas of radioactive impurities, and then oxidizing, sufficiently clean $`\mathrm{SiO}_2`$ could be produced. Again, the fact that very little quartz is needed makes this contamination level a reasonable requirement. Contamination requirements on the TPB are not so stringent, as only $`0.2\mathrm{mg}\mathrm{cm}^{-2}`$ is necessary for efficient wavelength shifting. Muons are another potential source of background. Muons will pass through the experiment at a rate of about $`25\mathrm{day}^{-1}\mathrm{m}^{-2}`$ (at Gran Sasso). These prompt events can be eliminated through active vetoing. One way to do this is to detect the Cerenkov radiation in the ultrapure water tank using a second set of photomultipliers. In addition, muons that pass through the active region will produce extremely bright, easily distinguishable scintillation pulses. In the neon experiment, neutrons and radioactive species can be produced by muons stopping in the active volume. With only a small fraction ($`\sim `$ 0.008) of muons stopping, and with 40% of these stopped muons absorbed by neon nuclei, a rate of muon radiogenesis of about 0.5 per day follows. Most of these events result in the production of $`{}_{}{}^{19}\mathrm{F}`$, a stable isotope. Prompt muon coincidence rejection and energy cuts will reduce background due to the remaining events (e.g. prompt gammas from neutron absorption, decay of long-lived nuclei) to negligible levels. Muons can also lead to the production of neutrons in the surrounding rock. These neutrons, as well as those emitted from fission products and ($`\alpha `$, n) reactions, will be moderated and absorbed in the ultrapure water tank, possibly with the help of boric acid dissolved in the water, and are not expected to constitute a significant source of background.

## VI Conclusion

There are several other experimental programs currently underway to develop real time detectors of pp neutrinos. We believe the method described above compares favorably to all of these. However, making exact technical comparisons with HELLAZ, SUPER-MuNu, and LENS is beyond the scope of this paper. Because the HERON experiment also uses a liquid cryogen it is possible to make a few simple comparisons.
The HERON program uses liquid helium as a neutrino scattering medium, and bolometers to detect helium atoms liberated by rotons from the liquid helium surface. The possible event rate achievable with HERON is similar to that possible using our proposed scintillation technique with helium as the active scintillator. If liquid neon is used, however, the event rate is 8 times larger for a given active volume. Our design is technically simpler because it requires temperatures of only 27 K (2 K) for liquid neon (helium), while HERON requires 30 mK superfluid helium to avoid roton scattering. HERON has the requirement (not present in our proposed design) that the helium be isotopically pure to avoid $`{}_{}{}^{3}\mathrm{He}`$-roton scattering centers. The added effort and complexity of isotopic purification of 10 tons of helium is significant. A significant technical requirement present in our proposed experiment and not in HERON is the need for large, strong clear plastic tanks at low temperatures. Also, unlike HERON, our proposed experiment relies almost entirely on high purity shielding materials to reduce background, obviating the need for precise event reconstruction for background reduction but requiring additional materials processing. The use of liquid helium or neon as a scintillation medium is a promising method for the detection of low energy neutrinos. First, the background level should be very low because of the extreme cleanliness possible in the active region. All other materials (with higher levels of contamination) can either be well shielded from the active volume or are present in such small amounts that their contribution may be made negligible. Second, the photoelectron output from neutrino scattering events should be high because of the intense extreme ultraviolet scintillation yield. Detection with standard PMTs is made possible by the availability of efficient wavelength shifters. Third, the rate of detected neutrino scattering events will be comparable or larger than those expected in other experimental techniques. Finally, this experiment uses only existing technologies; a small “proof of principle” apparatus could be constructed and tested in relatively little time. Along with the calibration and monitoring of the pp neutrino flux, this detector will be sensitive to other neutrino sources. For example, the relative and absolute intensities of the $`{}_{}{}^{7}\mathrm{Be}`$ and pep solar neutrino lines might be measured using this sort of detector, yielding a good diagnostic test of what happens to neutrinos after they are emitted. Whether these line intensities could be measured over radioactive background (and other neutrino spectra) must be tested by Monte Carlo methods. We conclude that liquid helium and neon are intriguing possible detectors for solar neutrinos. An efficient real-time neutrino detector based on this technique could be used to calibrate the pp neutrino flux from the sun, look for time variation signatures of neutrino oscillations, and provide detailed energy information over the entire solar neutrino spectrum. ## VII Acknowledgements We would like to thank J.N. Bahcall and G.W. Seidel for stimulating discussions. This work was supported by National Science Foundation Grant No. PHY-9424278. Correspondence Information Correspondence and requests for materials should be addressed to D.N.M.
# Vortex structures in dilute quantum fluids

## 1 Introduction

The breakdown of superfluidity in liquid helium (HeII) , dissipation in superconductors , and topological defects in cosmology all involve vortices. Given their prevalence, it is perhaps surprising how little is known about their structure and the exact mechanism of their formation. For example, in HeII vortex rings can be produced experimentally by injecting ions into the fluid . However, it is not known whether the rings emerge via quantum transition, where the ion creates an encircling ring and subsequently attaches itself to the vortex core; or peeling, where the ion creates a vortex loop which grows to form a pinned ring . In principle, this question could be addressed by simulation, except that a complete hydrodynamical model of HeII has yet to be established. One can formulate a simple model of a dilute Bose fluid based on the Gross-Pitaevskii or non-linear Schrödinger equation (NLSE) . This approach can be expected to provide an accurate quantitative description in the case of dilute Bose-Einstein condensates , and offer considerable insight into the physics of HeII. Vortex solutions of the NLSE equation have been studied for a number of simple geometries: Flow past an object has been studied in one and two dimensions , and it is found that stationary vortex solutions exist only for motion slower than a critical velocity. Above this velocity, one observes the periodic emission of vortices leading to a pressure imbalance which produces drag on the object . Consequently, the critical velocity also determines the transition between superfluid and normal flow. In this paper, we investigate stationary vortex solutions near an object in three dimensions. For a spherical object, there are two solutions: the encircling ring, where the object is positioned at the centre of the ring; and the pinned ring, where the object centre lies within the core of the vortex line. For all object potentials, laminar flow evolves smoothly into the encircling ring solution, suggesting that the quantum transition model of ring nucleation is favoured. Finally, we study flows parallel to a plane and illustrate the effect of surface roughness on the critical velocity.

## 2 Numerical method

Consider an object described by a potential $`V(𝒓)`$ moving with velocity $`𝒗`$ through a fluid of interacting bosons with mass $`m`$ and uniform asymptotic number density $`n_0`$. In the fluid rest frame, $`𝒓^{\prime }=𝒓+𝒗t`$, the dynamics can be described in terms of the order parameter, $`\psi (𝒓^{\prime },t)`$, which is a solution of the NLSE,

$$i\hbar \frac{\partial }{\partial t}\psi (𝒓^{\prime },t)=-\frac{\hbar ^2}{2m}\nabla ^2\psi (𝒓^{\prime },t)+V(𝒓^{\prime }-𝒗t)\psi (𝒓^{\prime },t)+C|\psi (𝒓^{\prime },t)|^2\psi (𝒓^{\prime },t), \qquad (1)$$

where the nonlinear coefficient, $`C=4\pi \hbar ^2a/m`$, and $`a`$ is the $`s`$-wave scattering length. If distance and velocity are measured in terms of the healing length, $`\xi =\hbar /\sqrt{mn_0C}`$, and the speed of sound, $`c=\sqrt{n_0C/m}`$, respectively, and the asymptotic number density is rescaled to unity, then Eq. (1) in the frame of the object becomes

$$i\partial _t\psi =-\frac{1}{2}\nabla ^2\psi +V\psi +|\psi |^2\psi +i𝒗\cdot \nabla \psi . \qquad (2)$$

Stationary states, $`\psi (𝒓,t)=\mathrm{e}^{-i\mu t}\varphi (𝒓)`$, corresponding to a chemical potential, $`\mu =1`$, are found by numerical solution of the discretized system of equations. In the special case of rotational symmetry about the axis of flow, the problem can be solved using 2D cylindrical polar coordinates $`(\rho ,z)`$.
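A one-dimensional toy version of this procedure may make it concrete (a sketch only, with an assumed barrier potential and hard walls at the box edges; the production scheme, including the linearization solved by the biconjugate-gradient method, is described next):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# 1D toy version of the stationary problem (v = 0, mu = 1, real phi):
# residual  f(phi) = -phi''/2 + V*phi + phi^3 - phi  =  0
N, dx = 400, 0.25
x = (np.arange(N) - N // 2) * dx
V = 10.0 * (np.abs(x) < 2.0)          # assumed repulsive barrier
phi = np.ones(N)                       # initial guess: uniform fluid

lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dx**2
for _ in range(30):
    f = -0.5 * (lap @ phi) + V * phi + phi**3 - phi
    if np.linalg.norm(f) < 1e-10:
        break
    J = -0.5 * lap + diags(V + 3.0 * phi**2 - 1.0)   # Jacobian df/dphi
    dphi, info = bicgstab(J.tocsc(), -f)             # Newton update
    phi = phi + dphi
print(float(phi.min()))   # density depletion inside the barrier
```

The 3D calculation replaces this with the full complex-field residual and the analogous linearization, Eq. (4) below.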
In the general case, we use a Cartesian 3D grid $`\varphi (𝒓)=\varphi (x,y,z)`$. Defining $`\varphi _{ijk0}=\mathrm{Re}(\varphi (x_i,y_j,z_k))`$ and $`\varphi _{ijk1}=\mathrm{Im}(\varphi (x_i,y_j,z_k))`$, and taking the flow direction to define the $`z`$-axis, Eq. (2) becomes

$$f_{ijkr}\equiv -\left(\varphi _{i+1jkr}+\varphi _{i-1jkr}+\varphi _{ij+1kr}+\varphi _{ij-1kr}+\varphi _{ijk+1r}+\varphi _{ijk-1r}-6\varphi _{ijkr}\right)/2\mathrm{\Delta }^2+V_{ijk}\varphi _{ijkr}+\left(\varphi _{ijk0}^2+\varphi _{ijk1}^2\right)\varphi _{ijkr}-\varphi _{ijkr}+(2r-1)v(\varphi _{ijk+1,1-r}-\varphi _{ijk-1,1-r})/2\mathrm{\Delta }=0, \qquad (3)$$

where $`r=0,1`$ labels the real and imaginary parts and $`\mathrm{\Delta }`$ is the grid spacing (values of $`\mathrm{\Delta }`$ between 0.125 and 0.4 were used). The solution of these equations is found using the linearization

$$f_{ijkr}(\varphi _{lmns}^{(p)})+\sum _{lmns}\left(\varphi _{lmns}^{(p+1)}-\varphi _{lmns}^{(p)}\right)\left[\frac{\partial f_{ijkr}}{\partial \varphi _{lmns}}\right]^{(p)}\approx 0, \qquad (4)$$

where $`\varphi ^{(p+1)}`$ is determined from the approximation $`\varphi ^{(p)}`$ by solving Eq. (4) using the bi-conjugate gradient method . Large box sizes ($`0\leq x<\infty `$) are simulated by employing the nonlinear transformation $`\widehat{x}=x/(x+D)`$, where $`D`$ is chosen to give sufficient point coverage within the healing length. The iterative solution depends on the initial guess $`\varphi ^{(0)}`$; for example, laminar flows are found by choosing $`\varphi ^{(0)}=\mu =1`$, whereas vortex solutions are found by imposing the vortex phase pattern on $`\varphi ^{(0)}`$, and then relaxing the imposed phase after a few iterations.

## 3 Free vortex rings

For an unbounded flow ($`V=0`$) one finds free vortex rings. Fig. 1 shows a surface plot of ring solutions with circulation, $`\kappa =2\pi `$, $`4\pi `$, and $`6\pi `$. The double ($`\kappa =4\pi `$) and triple ($`\kappa =6\pi `$) rings only exist for $`v>0.56`$ and $`v>0.67`$, and their cores consist of 2 and 3 lines of zero density, respectively. The separation of the density minima depends on the radius, increasing up to about two healing lengths for $`R=5`$. For $`\kappa =6\pi `$, the central minimum has a larger radius. Similar core structures are also found in the corresponding 2D solutions. Although vortex rings with multiple circulation have higher energy ($`E\propto \kappa ^2`$) than the corresponding number of single rings ($`E\propto \kappa `$), we find that they are stable when subject to a perturbation in a time-dependent simulation (similar robustness has also been found for vortex lines with multiple circulation ). Fig. 2 shows a plot of the velocity (equivalent to the ring propagation velocity in a stationary fluid) and momentum as a function of ring radius, $`R`$. For $`R>5`$, the velocity can be predicted accurately by $`v=\kappa \left[\mathrm{ln}(8R/a)-b\right]/4\pi R`$, where $`a`$ is the core radius ($`a=1/\sqrt{2}`$ in our units) and $`b`$ is a constant which depends on the structure of the core ($`b=0.25`$ for a classical fluid, whereas $`b=0.615`$ has been predicted for a dilute quantum fluid ). Numerical fits give $`b=0.615`$ (as expected), 1.58, and 2.00 for $`\kappa =2\pi `$, $`4\pi `$ and $`6\pi `$, respectively. For $`\kappa =2\pi `$ (single ring), the ring disappears (i.e. the on-axis density becomes zero) when $`v=0.88`$; however, a solution other than laminar flow exists up to $`v=1.0`$ (see below).

## 4 Flow past a sphere

The free ring solutions discussed above are modified by the presence of an object or surface.
In this section, we present results for a spherical object with radius $`R`$ and potential height $`V`$, i.e., $`V(r)=V`$ ($`r\leq R`$) and $`V(r)=0`$ ($`r>R`$). Fig. 3 shows surface density images of the three possible solutions: laminar flow; the pinned ring or vortex loop; and the encircling ring (the corresponding 2D solutions, laminar flow, a free and a bound vortex, and a vortex pair, were studied by Huepe and Brachet ). Fig. 4 shows a section of the velocity field pattern around the obstacle for each case. As the flow velocity increases, the vortices move closer to the object and eventually merge into the surface. Close to the critical velocity, the flow patterns converge (Fig. 4 lower).

## 5 Vortex energy

The Lagrangian density of the NLSE (1) in scaled units is given by

$$\mathcal{L}(\psi ,\nabla \psi ,\dot{\psi })=i\psi ^{*}\dot{\psi }-\frac{1}{2}\nabla \psi ^{*}\cdot \nabla \psi -V\psi ^{*}\psi -\frac{1}{2}\left(\psi ^{*}\psi \right)^2. \qquad (5)$$

From the Hamiltonian, $`H=\int d𝒓[(\partial \mathcal{L}/\partial \dot{\psi })\dot{\psi }-\mathcal{L}]`$, we define the energy relative to the ground state (i.e. laminar flow with $`V=0`$ and having the same number of particles) as

$$E=\int d𝒓\left\{\frac{1}{2}\left|\nabla \varphi \right|^2+V\left|\varphi \right|^2+\frac{1}{2}\left(\left|\varphi \right|^2-1\right)^2\right\}. \qquad (6)$$

Any deviation of the local particle density $`\left|\varphi \right|^2`$ from $`1`$ constitutes an excitation. Eq. (6) can be rewritten as

$$E=\int d𝒓\left\{\frac{1}{2}\left(1-\left|\varphi \right|^4\right)\right\}+𝒗\cdot 𝑷, \qquad (7)$$

where $`𝑷=-i\int d𝒓\left\{\varphi ^{*}\nabla \varphi \right\}`$ is the ring momentum, and $`E`$ and $`𝑷`$ are measured in units of $`\hbar n_0c\xi ^2`$ and $`\hbar n_0\xi ^2`$, respectively. The energy as a function of flow velocity for different obstacle heights is shown in Fig. 5. With no object ($`V=0`$), Fig. 5 (upper left), the energy of the ring decreases with increasing velocity, reaching a minimum at $`v=0.93`$, which corresponds to the cusp in the dispersion curve (see inset). For $`v>0.93`$, the collapsed ring leaves a lower density, higher velocity region with energy $`E\approx cP`$, christened a rarefaction pulse by Jones and Roberts . Inserting $`E=cP`$ in Eq. (7), one finds that as $`v`$ decreases, $`|\varphi |^2`$ must also decrease, but this becomes impossible when $`|\varphi |^2=0`$, so the rarefaction pulse is replaced by a vortex ring. The process of supersonic flow creating a localised sound wave which evolves into a vortex ring (or pair in 2D) appears to be central to the mechanism of vortex nucleation in dilute quantum fluids (see e.g. Fig. 4 in Ref. ). For $`V>0`$, the energy of the laminar flow solution is no longer zero, and the ring solution splits into two branches corresponding to the pinned ring and the encircling ring (Fig. 5 upper right). For low velocities (large radii), the energy of the encircling ring is higher than that of the pinned ring by an amount corresponding to the energy of the ring segment inside the object. Note that the pinned and encircling ring solutions always merge below the critical velocity, Fig. 5 lower right (inset). In a time-dependent simulation we observe a smooth transition (with no energy barrier) between laminar flow and the encircling ring, supporting the quantum transition model of ring formation .

## 6 Flow adjacent to a plane boundary

For flow adjacent to a plane boundary, three classes of solution may be distinguished: laminar flow, vortex loops, and vortex lines parallel to the plane.
The loop behaves similarly to a free ring, i.e., the radius decreases with increasing velocity, and it merges into the plane at a critical velocity, $`v=1`$. To comment on the flow of superfluids in real systems, we consider the effect of a surface bump. In this case, the vortex loop can either encircle or pin to the bump (again the pinned loop has a lower energy), and the vortex line acquires undulations, whose wavelength decreases with increasing flow velocity (Fig. 6). The key effect of the bump is to reduce the critical velocity, $`v_\mathrm{c}`$ (Fig. 6 lower right). In the limit of large radius, $`v_\mathrm{c}\rightarrow 0.55`$ for both surface and volume defects. As $`v_\mathrm{c}`$ coincides with the appearance of drag in superfluids, one may conjecture that surface roughness is a significant factor in determining the dissipation at low flow velocities.

## 7 Conclusions

We have investigated vortex line and ring structures in dilute quantum fluids. We find that vortex tubes with multiple circulation consist of separate minima, and that near an object or surface lower-energy, pinned solutions exist. Both laminar flow and vortex solutions become unstable at a critical velocity which is equal to the speed of sound for an unbounded flow or adjacent to a wall, but which decreases near an object or surface bump. As the critical velocity corresponds to a transition between normal and drag-free flow, this dependence indicates how surface roughness can produce a marked effect on the flow of superfluids.

Financial support for this project was provided by the EPSRC. TW is supported by the Studienstiftung des Deutschen Volkes.
# Flux tube dynamics in the dual superconductor

## 1 INTRODUCTION

't Hooft and Mandelstam proposed long ago that quark confinement in QCD would come about as the result of the confinement of color electric flux into flux tubes, and that such flux tubes would form in a dual superconductor. One possible pathway to formation of this dual superconductor was offered later by 't Hooft . He showed that an Abelian projection of a non-Abelian gauge theory contains magnetic monopoles. Though the effective interaction among these monopoles is hard to calculate, it is not unreasonable to suppose that they form a condensate like that of the Cooper pairs in a superconductor. This magnetic condensate would then bring about an electric Meissner effect, confining electric flux and electric charge. In order to study particle dynamics in QCD, one has to study dynamics of the flux tube. Casher, Kogut, and Susskind argued that particle creation in the flux tube is the soft process responsible for the inside-outside cascade in $`e^+e^{-}`$ annihilation; Casher, Neuberger, and Nussinov calculated the particle spectrum via WKB (generalizing Schwinger's famous formula), and this picture then entered phenomenology via the Lund Monte Carlo program and its descendants. The flux tube has also been much studied in the context of $`pA`$ and $`AA`$ collisions. These and subsequent studies generally lacked any dynamics for the structure of the flux tube itself. We have taken the first step of studying the dynamics of classical charges moving in an electric flux tube and the reaction of the flux tube in the dual superconductor .

## 2 THE DUAL SUPERCONDUCTOR

To specify the model, we begin with Maxwell's equations coupled to both magnetic and electric currents,

$$\partial _\mu F^{\mu \nu }=j_e^\nu , \qquad (1)$$
$$\partial _\mu \stackrel{~}{F}^{\mu \nu }=j_g^\nu . \qquad (2)$$

Eq. (2) is no longer just a Bianchi identity; thus a vector potential can be introduced only if a new term is added to take care of the magnetic current, viz.,

$$F^{\mu \nu }=\partial ^\mu A^\nu -\partial ^\nu A^\mu +ϵ^{\mu \nu \lambda \sigma }G_{\lambda \sigma }, \qquad G^{\mu \nu }=n^\mu (n\cdot \partial )^{-1}j_g^\nu . \qquad (3)$$

This vector potential can be coupled to electric charges as usual; in order to introduce magnetic charges, one introduces a dual potential via

$$\stackrel{~}{F}^{\mu \nu }=\partial ^\mu B^\nu -\partial ^\nu B^\mu +ϵ^{\mu \nu \lambda \sigma }M_{\lambda \sigma }, \qquad M^{\mu \nu }=n^\mu (n\cdot \partial )^{-1}j_e^\nu . \qquad (4)$$

Now we can write a model for the monopoles, for which the simplest is an Abelian Higgs theory ,

$$D_\mu ^BD^{\mu B}\psi +\lambda (|\psi |^2-v^2)\psi =0, \qquad (5)$$

where

$$D_\mu ^B\equiv \partial _\mu -igB_\mu . \qquad (6)$$

This theory should produce the desired magnetic condensate to confine electric charge. The magnetic current appearing in (2) is

$$j_g^\mu =2g\,\mathrm{Im}\,\psi ^{*}D^{\mu B}\psi . \qquad (7)$$

We envision a long flux tube with some charge distribution $`\pm Q(r)`$ at the ends. Far from the ends, the flux tube is initially the well-known cylindrically symmetric solution of the field equations above (and we impose cylindrical symmetry on the subsequent evolution). We release into this flux tube a fluid of electrically charged particles, realized via simple two-fluid MHD (this means no Schwinger pair creation as yet; it absolves us, however, of the need to calculate the electric vector potential $`A_\mu `$).
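For concreteness, the cylindrically symmetric flux-tube solution referred to above can be written in the standard Nielsen-Olesen form. The following is a sketch for a unit-winding tube in the conventions of Eqs. (5)-(7); these are the textbook profile equations, not expressions taken from this paper:

$$\psi =v\,f(\rho )\,e^{i\theta }, \qquad gB_\theta =\frac{a(\rho )}{\rho },$$

$$f^{\prime \prime }+\frac{f^{\prime }}{\rho }-\frac{(1-a)^2}{\rho ^2}f-\lambda v^2(f^2-1)f=0, \qquad a^{\prime \prime }-\frac{a^{\prime }}{\rho }+2g^2v^2f^2(1-a)=0,$$

with boundary conditions $`f(0)=a(0)=0`$ and $`f(\infty )=a(\infty )=1`$. The longitudinal electric field of the tube is then $`E_z(\rho )=a^{\prime }(\rho )/g\rho `$, carrying the quantized total flux $`2\pi /g`$.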
The two fluids ($`+`$ and $`-`$) obey the Euler equations

$$m\left[\frac{\partial 𝐯^\pm }{\partial t}+(𝐯^\pm \cdot \nabla )𝐯^\pm \right]=\pm e𝐄\pm e𝐯^\pm \times 𝐇-\frac{1}{n_e}\nabla P$$

and the continuity equation

$$\frac{\partial n_e}{\partial t}+\nabla \cdot (n_e𝐯^\pm )=0.$$

The electric current is thus

$$𝐣_e=n_ee\left(𝐯^+-𝐯^{-}\right).$$

This fluid will flow under the influence of the initial electric field and will screen the charges at the ends; the fluid's inertia will set up plasma oscillations. Since the flux tube geometry is dynamic, the tube itself will react to the weakening of the field by contracting under the pressure of the vacuum field $`|\psi (\infty )|=v`$; then it will be forced open as the fluid overshoots and oscillates. The final ingredient is the equation of state of the fluid, which we take to be that of a quark-gluon plasma.

## 3 PLASMA OSCILLATIONS

The superconductor has two length scales, the vector mass $`m_V=\sqrt{2}gv`$ and the scalar (Higgs) mass $`m_H=\sqrt{2\lambda }v`$. In condensed matter language, we have the London penetration depth $`\lambda _L=m_V^{-1}`$ and the coherence length $`\xi =m_H^{-1}`$; their ratio $`\kappa =\lambda _L/\xi `$ determines whether we have a Type I ($`\kappa <1`$) or a Type II ($`\kappa >1`$) superconductor. Another scale in our problem, introduced by the charged fluid, is the plasma frequency $`\omega _p=\sqrt{2n_ee^2/m}`$. If $`\omega _p<m_V`$, one expects that electromagnetic radiation will be unable to penetrate into the superconductor, with the Meissner effect providing dynamical confinement as well as static; the regime $`\omega _p>m_V`$, where photons (i.e., Abelian gluons) propagate freely, is presumably outside the range of applicability of the model. Our numerical results, presented in the figures, belie our expectations. We show in Figure 1 the on-axis electric field for the Type I case, with $`\omega _p<m_V`$. There is strong non-linear modification of the plasma oscillations. Figure 2 presents snapshots of the electric field $`E(r)`$ and the Higgs field $`\rho (r)=|\psi (r)|`$. It is clear that electric flux penetrates far outside the initial radius of the flux tube, accompanied by strong oscillations in $`\rho `$. (The Type II case is not too different, though less spectacular.) This surely casts doubt on the usefulness of this model for the study of dynamical confinement phenomena.
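These scales and the regime classification are easy to evaluate numerically (a sketch; all parameter values below are illustrative assumptions, as the model leaves $`g`$, $`v`$, $`\lambda `$, $`e`$, $`m`$ and $`n_e`$ free):

```python
import numpy as np

# Characteristic scales of the dual superconductor plus charged fluid;
# all parameter values here are assumed for illustration only.
g, v, lam = 2.0, 1.0, 1.0
e, m, n_e = 0.3, 0.5, 0.02

m_V = np.sqrt(2.0) * g * v              # vector (dual gauge boson) mass
m_H = np.sqrt(2.0 * lam) * v            # Higgs (monopole condensate) mass
kappa = m_H / m_V                       # lambda_L / xi
omega_p = np.sqrt(2.0 * n_e * e**2 / m)

print("kappa  =", kappa, "->", "Type II" if kappa > 1 else "Type I")
print("omega_p =", omega_p, "m_V =", m_V,
      "->", "omega_p < m_V (confining regime)" if omega_p < m_V
      else "omega_p > m_V (outside model's range)")
```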
# Low-lying 2⁺ states in neutron-rich oxygen isotopes in quasiparticle random phase approximation

## Abstract

The properties of the low-lying, collective $`2_1^+`$ states in neutron-rich oxygen isotopes are investigated in the framework of self-consistent microscopic models with effective Skyrme interactions. In RPA the excitation energies $`E_{2_1^+}`$ can be well described but the transition probabilities are much too small as compared to experiment. Pairing correlations are then accounted for by performing quasiparticle RPA calculations. This improves considerably the predictions of B(E2) values and it enables one to calculate more reliably the ratios $`M_n/M_p`$ of neutron-to-proton transition amplitudes. A satisfactory agreement with the existing experimental values of $`M_n/M_p`$ is obtained.

PACS numbers: 23.20.Js, 21.60.Jz, 21.10.Re

The prospects of nuclear physics studies with nuclei far from stability open up a wide range of possibilities for refining our understanding of nuclear properties in terms of microscopic descriptions and effective nucleon-nucleon interactions. One of the important aspects is the ability to disentangle neutron and proton contributions to collective transitions between low-lying states and the ground state. Experimentally, this can be done in a phenomenological way by combining the information obtained in measurements involving various hadronic probes and purely electromagnetic probes. For instance, numerous experimental studies have been performed on $`{}^{18}`$O using different hadronic probes like proton, neutron, or pion scattering. More recently, proton scattering on $`{}^{20}`$O yielded information on the first $`2^+`$ state in this unstable neutron-rich oxygen isotope. On the other hand, studies involving only electromagnetic properties such as electron scattering, Coulomb excitation or lifetime measurement have also been done for these nuclei. While the excitation processes of purely electromagnetic nature are sensitive only to the protons and give access to the proton transition amplitudes and transition densities, the hadronic processes are sensitive to both proton and neutron transition densities. Therefore, it is possible by a combined analysis of the data from electromagnetic and hadronic processes to determine experimentally for a given excited state the transition amplitudes $`M_p`$ and $`M_n`$ corresponding to protons and neutrons, respectively. Proton scattering experiments yielding $`M_n/M_p`$ values have been recently performed on neutron-rich sulfur and oxygen isotopes . In this work, we present microscopic calculations of low-lying $`2^+`$ states in neutron-rich oxygen isotopes. These calculations are based on effective Skyrme interactions and they are performed in the framework of the random phase approximation (RPA) and the quasiparticle random phase approximation (QRPA). In microscopic models the properties of the states depend on two main inputs, the single-particle spectrum and the residual two-body interaction. In the present approach these two features are linked since the same effective interaction determines the Hartree-Fock (HF) single-particle spectrum and the residual particle-hole interaction. This approach has proved to be an efficient means of predicting properties of collective excitations like giant resonances and it has also been used for calculating low-lying collective states in closed-shell nuclei.
In unstable nuclei we usually don't deal with closed-shell or closed-subshell systems and therefore, the HF and RPA calculations must be done with additional approximations. The HF calculations are carried out assuming spherical symmetry and using the standard filling approximation with equal occupation numbers for all $`(jm)`$-substates of the partially filled $`j`$-subshell. The RPA calculations are then carried out taking into account these occupation numbers. However, pairing correlations can be important in such nuclei and they must be taken into account. Their effects will be described by HFBCS calculations for the ground states and by QRPA calculations for the excitation spectra. The HFBCS method in spherical nuclei with Skyrme interactions is well-known. For the pairing interaction we simply choose a constant gap given by:

$$\mathrm{\Delta }=12\,A^{-1/2}\,\mathrm{MeV}. \qquad (1)$$

In a more realistic treatment of the pairing, the gap would depend on the single-particle state considered and it would tend to zero when the subshell is far from the Fermi level. Thus, in the constant gap approximation it is necessary to introduce a cut-off in the single-particle space. Above this cutoff subshells don't participate in the pairing effect. In the case of oxygen isotopes, we choose the BCS subspace to include the $`1s,1p`$ and $`2s1d`$ major shells. The results of the HF and HFBCS calculations performed for the nuclei $`{}^{18,20,22}`$O using typical Skyrme interactions are summarized in Table I. For all the interactions, the binding energies per particle decrease with increasing neutron number but SGII has a different behavior as compared to the other interactions and it predicts larger values of $`B/A`$. The pairing correlations decrease $`B/A`$ by about 4% in all cases. The neutron radii increase substantially from $`{}^{18}`$O to $`{}^{22}`$O and the proton radii also increase slightly as a result of neutron-proton attraction. The effect of neutron pairing correlations is to redistribute neutron densities to the tail region and therefore, this leads to a small increase in $`r_n`$ (and also in $`r_p`$ for the reason mentioned above). The HF-RPA model with Skyrme effective forces is also well-known. We only mention that in this work we solve the RPA equations in configuration space, choosing the particle-hole space so as to exhaust the energy-weighted sum rules. The continuous part of the single-particle spectrum is discretized by diagonalizing the HF Hamiltonian on a harmonic oscillator basis. To generalize the HF-RPA to the QRPA model we follow the standard procedure. Denoting by $`a_\alpha ^{\dagger }`$, $`a_\alpha `$ the creation and annihilation operators of a particle in a HF state $`\alpha =(j_\alpha ,m_\alpha )`$ and by $`c_\alpha ^{\dagger }`$, $`c_\alpha `$ the corresponding operators for a quasiparticle state, we have:

$$c_{j_\alpha m_\alpha }^{\dagger }=u_\alpha a_{j_\alpha m_\alpha }^{\dagger }-v_\alpha (-1)^{j_\alpha +m_\alpha }a_{j_\alpha \,-m_\alpha }, \qquad (2)$$

where the BCS amplitudes $`u_\alpha ,v_\alpha `$ satisfy the normalization condition:

$$u_\alpha ^2+v_\alpha ^2=1. \qquad (3)$$

These amplitudes are determined by solving the BCS equations.
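The numbers involved are easily tabulated (a minimal sketch; the single-particle energy `eps` relative to the Fermi level is an assumed value, and the occupation formula is the standard textbook BCS expression, not specific to this paper):

```python
import numpy as np

# Constant pairing gap of Eq. (1) for the isotopes studied here.
for A in (18, 20, 22):
    print(f"A={A}: Delta = {12.0 / np.sqrt(A):.2f} MeV")

# BCS amplitudes for a level at assumed energy eps above the Fermi level.
Delta, eps = 12.0 / np.sqrt(20.0), 1.5        # MeV
E_qp = np.sqrt(eps**2 + Delta**2)             # quasiparticle energy
v2 = 0.5 * (1.0 - eps / E_qp)                 # occupation v_alpha^2
print(f"E_qp = {E_qp:.2f} MeV, v^2 = {v2:.2f}, u^2 = {1.0 - v2:.2f}")
```

For these light nuclei the gap is around 2.6-2.8 MeV, so partially filled $`sd`$ subshells acquire substantially fractional occupations, which is what feeds the QRPA amplitudes below.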
One can then build the two-quasiparticle creation operators in an angular momentum coupled scheme:

$$C_{\alpha \beta }^{\dagger }(JM)=(1+\delta _{\alpha \beta })^{-1/2}\sum _{m_\alpha m_\beta }(j_\alpha j_\beta m_\alpha m_\beta |JM)c_\alpha ^{\dagger }c_\beta ^{\dagger }. \qquad (4)$$

In QRPA the nuclear excitations correspond to phonon operators which are linear combinations of two-quasiparticle creation and annihilation operators:

$$Q^{\nu \dagger }(JM)=\sum _{\alpha \beta }X_{\alpha \beta }^\nu (J)C_{\alpha \beta }^{\dagger }(JM)+(-1)^MY_{\alpha \beta }^\nu (J)C_{\alpha \beta }(J\,-M). \qquad (5)$$

Making use of the condition that the QRPA ground state $`|\stackrel{~}{0}\rangle `$ is a phonon vacuum:

$$Q^\nu (JM)|\stackrel{~}{0}\rangle =0, \qquad (6)$$

one can then derive the QRPA equations whose solutions yield the excitation energies $`E_\nu `$ and amplitudes $`X_{\alpha \beta }^\nu ,Y_{\alpha \beta }^\nu `$ of the excited states. An important quantity that characterizes a given state $`\nu =(E_\nu ,LJ)`$ is its transition density:

$$\delta \rho ^\nu (𝐫)\equiv \langle \nu |\sum _i\delta (𝐫-𝐫_𝐢)|\stackrel{~}{0}\rangle , \qquad (7)$$

and a similar definition of the neutron (proton) transition density $`\delta \rho _n^\nu `$ ($`\delta \rho _p^\nu `$) with the summation in Eq. (7) restricted to neutrons (protons). In QRPA the radial part of the transition density is:

$$\delta \rho ^\nu (r)=\sum _{\alpha \beta }\phi _\alpha (r)\phi _\beta ^{*}(r)\langle \beta ||Y_{L0}||\alpha \rangle \left\{X_{\alpha \beta }^\nu (J)-Y_{\alpha \beta }^\nu (J)\right\}\left\{u_\alpha v_\beta +(-1)^Jv_\alpha u_\beta \right\}, \qquad (8)$$

where $`\phi _\alpha (r)`$ is the radial part of the wavefunction of the quasiparticle state $`\alpha `$. As an example, the QRPA neutron and proton transition densities of the first $`2^+`$ state in $`{}^{20}`$O calculated with the interaction SGII are shown in Fig. 1. The neutron transition density is shifted outwards as compared to the proton transition density due to the presence of a neutron skin. Clearly, the two transition densities do not scale like $`N/Z`$ as it is sometimes assumed, and this has quantitative consequences as we shall see below. The neutron and proton matrix elements $`M=\langle \nu |r^LY_{L0}|\stackrel{~}{0}\rangle `$ of a multipole operator are obtained by integrating the corresponding transition densities over $`r`$:

$$M_{n,p}=\int \delta \rho _{n,p}^\nu (r)r^{L+2}dr, \qquad (9)$$

and the reduced electric multipole transition probabilities are calculated as

$$B(EL)_{n,p}=|M_{n,p}|^2. \qquad (10)$$

We have calculated the $`J^\pi =2^+`$ states in $`{}^{18,20,22}`$O using RPA and QRPA with SIII, SGII and SLy4 interactions. The energies and $`B(E2)_p`$ values of the first $`2^+`$ states are shown in Fig. 2 together with the existing experimental values in $`{}^{18}`$O and $`{}^{20}`$O. The data come from experiments involving electromagnetic processes such as Coulomb excitation or lifetime measurement. For the $`E_{2^+}`$ energies, standard RPA reproduces the experimental values very well, especially for SGII and SLy4. On the other hand, QRPA deteriorates this agreement and it predicts the first $`2^+`$ states at somewhat higher energies. The theoretical prediction of the energies of low-lying states in models based on a HF or HFBCS mean field is a delicate task because these energies are sensitive to the spin-orbit part of the mean field while the spin-orbit component of the two-body effective interaction is not so well determined. For the three interactions used here there is a clear difference in the QRPA $`E_{2^+}`$ energies between SGII and the other interactions.
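To make the link between a transition density and a B(E2) value concrete, the short sketch below evaluates Eqs. (9)-(10) for an assumed, surface-peaked (Tassie-like) transition density; the Fermi-shape parameters and normalisation are illustrative only and are not the QRPA output of this work:

```python
import numpy as np

# Illustration of Eqs. (9)-(10) with an assumed Tassie-like delta-rho.
r = np.linspace(0.0, 12.0, 2400)             # radial grid, fm
R0, a0 = 2.7, 0.55                           # assumed radius/diffuseness, fm
rho = 1.0 / (1.0 + np.exp((r - R0) / a0))    # ground-state density shape
delta_rho = -r * np.gradient(rho, r)         # Tassie form, arbitrary norm

L = 2
f = delta_rho * r**(L + 2)
M = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))   # trapezoidal Eq. (9)
print(M, M**2)                               # M and B(E2) = |M|^2, arb. units
```

Applied separately to $`\delta \rho _n^\nu `$ and $`\delta \rho _p^\nu `$, the same integral gives the $`M_n/M_p`$ ratios discussed below.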
We have calculated the $`J^\pi `$ = 2<sup>+</sup> states in <sup>18,20,22</sup>O using RPA and QRPA with the SIII, SGII and SLy4 interactions. The energies and B(E2)<sub>p</sub> values of the first 2<sup>+</sup> states are shown in Fig.2 together with the existing experimental values for <sup>18</sup>O and <sup>20</sup>O. The data come from experiments involving electromagnetic processes, such as Coulomb excitation or lifetime measurements. For the $`E_{2^+}`$ energies, standard RPA reproduces the experimental values very well, especially for SGII and SLy4. On the other hand, QRPA deteriorates this agreement and predicts the first 2<sup>+</sup> states at somewhat higher energies. The theoretical prediction of the energies of low-lying states in models based on a HF or HFBCS mean field is a delicate task, because these energies are sensitive to the spin-orbit part of the mean field, while the spin-orbit component of the two-body effective interaction is not so well determined. For the three interactions used here there is a clear difference in the QRPA $`E_{2^+}`$ energies between SGII and the other interactions. In the case of the B(E2)<sub>p</sub> values, RPA predicts values that are too small with all three interactions. However, the agreement becomes very good in QRPA, both in <sup>18</sup>O and <sup>20</sup>O, especially for the SGII and SLy4 interactions. This indicates that pairing effects are important for transition probabilities of the first 2<sup>+</sup> state in these nuclei. In Fig.3 are shown the ratios $`M_n/M_p`$ calculated with the three interactions within RPA and QRPA. On the same figure are displayed the experimental values taken from Ref. . It can be seen that the RPA results are somewhat larger than those of QRPA, due to the very small B(E2)<sub>p</sub> values obtained in RPA. In comparison with the data, the QRPA results are in good agreement in <sup>20</sup>O, whereas they are slightly too large in <sup>18</sup>O. The three interactions predict different $`A`$ dependences of the QRPA ratios in these oxygen isotopes. It would be interesting to obtain the experimental value of $`M_n/M_p`$ in <sup>22</sup>O, as well as the corresponding B(E2) transition probability. If one assumes the quadrupole excitation to be purely isoscalar, the ratio $`M_n/M_p`$ would be equal to $`N/Z`$. Taking as a guideline the QRPA results calculated with SGII, one sees that the ratio of neutron-to-proton transition amplitudes is about (1.8 - 2.0)$`N/Z`$, thus indicating that the low-lying 2<sup>+</sup> state has an important isovector component. This is not surprising, since in these neutron-rich nuclei there are neutron particle-hole configurations at low energy which have no counterpart on the proton side; these neutron configurations necessarily introduce both isoscalar and isovector types of excitations. In summary, we have investigated the properties of the low-lying, collective $`2_1^+`$ states in neutron-rich oxygen isotopes in the framework of self-consistent microscopic models. Within the RPA model the excitation energy $`E_{2_1^+}`$ can be well described, but the transition probabilities are much too small as compared to experiment. In these open-subshell nuclei the pairing correlations can be important, and therefore we have extended the previous model to the HFBCS approximation at the mean-field level and we have performed QRPA calculations for the excited states. The quasiparticle microscopic description considerably improves the predictions of B(E2) values and enables us to calculate more reliably the ratios $`M_n/M_p`$ of neutron-to-proton transition amplitudes. These calculated values differ noticeably from the naive $`N/Z`$ estimate and are in satisfactory agreement with experiment. However, the QRPA overestimates the $`E_{2_1^+}`$ energies. The sensitivity of positive-parity low-lying states to the effective interaction should give one a handle on some specific components of the force, for instance the two-body spin-orbit part. Further proton scattering results on <sup>18</sup>O and <sup>20</sup>O will be available soon, yielding energies and $`M_n/M_p`$ ratios for the first 2<sup>+</sup> and 3<sup>-</sup> states. It would also be useful to perform experiments on more neutron-rich oxygen isotopes to establish firmly the trend of $`M_n/M_p`$ as a function of $`N/Z`$. We would like to thank G. Colò, T. Suomijärvi and C. Volpe for useful discussions. Figure captions Figure 1. Neutron and proton transition densities of the first 2<sup>+</sup> state in <sup>20</sup>O, calculated in QRPA with the interaction SGII. Figure 2.
Energies and B(E2)<sub>p</sub> values of the first 2<sup>+</sup> states in oxygen isotopes. Open and black symbols correspond to RPA and QRPA calculations, respectively. Three effective interactions are used: SIII (circles), SGII (triangles), SLy4 (stars). Experimental values are shown as crosses, with error bars for B(E2)<sub>p</sub>. Figure 3. The $`M_n/M_p`$ ratios in oxygen isotopes. The notations are the same as in Fig. 2.
# Nonlinear Modes of a Macroscopic Quantum Oscillator ## Abstract We consider the Bose-Einstein condensate in a parabolic trap as a macroscopic quantum oscillator and describe, analytically and numerically, its collective modes, a nonlinear generalisation of the (symmetric and antisymmetric) Hermite-Gauss eigenmodes of a harmonic quantum oscillator. The recent observation of different types of Bose-Einstein condensation (BEC) in atomic clouds led to the foundation of a new direction in the study of macroscopic quantum phenomena. From a general point of view, the dynamics of gases of cooled atoms confined in a magnetic trap at very low temperatures can be described by an effective equation for the condensate wave function known as the Gross-Pitaevskii (GP) equation . This is a classical nonlinear equation that takes into account the effects of the particle interaction through an effective mean field, and therefore it can be treated as a nonlinear generalization of a text-book problem of quantum mechanics, i.e. as a macroscopic quantum oscillator. Similar models of the confined dynamics of macroscopic quantum systems appear in other fields, e.g. in the case of an electron gas confined in a quantum well , or optical modes in a photonic microcavity . In all such systems, confined single-particle states are restricted to a set of discrete energies that form a set of eigenmodes. A classical and probably most familiar example of such a system is a harmonic quantum oscillator with equally spaced energy levels . When, instead of single-particle states, we describe quasiclassically a system of interacting bosons in a macroscopic ground state confined in an external potential, a standard application of the mean-field theory allows us to introduce a macroscopic wave function as a classical field $`\mathrm{\Psi }(𝐑,t)`$ having the meaning of the order parameter. The equation for the function $`\mathrm{\Psi }(𝐑,t)`$ looks similar to that of a single-particle oscillator, but it also includes the effect of interparticle interaction, taken into account as a mean-field nonlinear term. Then, the important questions are: Does the physical picture of eigenmodes remain valid in the nonlinear case, and what is the effect of nonlinearity on the modes? In this paper we analyse nonlinear eigenmodes of a macroscopic quantum oscillator as a set of nonlinear stationary states that extend the well-known Hermite-Gauss eigenfunctions. We also make a link between seemingly different approximations, the well-known Thomas-Fermi approximation and the perturbation theory developed here for the case of weak nonlinearity. For both attractive and repulsive interaction, we demonstrate a close connection between the nonlinear modes and (bright and dark) multi-soliton stationary states. We consider the macroscopic dynamics of condensed atomic clouds in a three-dimensional, strongly anisotropic, external parabolic potential created by a magnetic trap. The BEC collective dynamics can be described by the GP equation, $$i\hbar \frac{\partial \mathrm{\Psi }}{\partial t}=-\frac{\hbar ^2}{2m}\nabla ^2\mathrm{\Psi }+V(𝐑)\mathrm{\Psi }+U_0|\mathrm{\Psi }|^2\mathrm{\Psi },$$ (1) where $`\mathrm{\Psi }(𝐑,t)`$ is the macroscopic wave function of a condensate, $`V(𝐑)`$ is a parabolic trapping potential, and the parameter $`U_0=4\pi \hbar ^2a/m`$ characterises the two-particle interaction, proportional to the s-wave scattering length $`a`$.
When $`a>0`$, the interaction between the particles in the condensate is repulsive, whereas for $`a<0`$ the interaction is attractive. In fact, the scattering length $`a`$ can be continuously detuned from positive to negative values by varying the external magnetic field near the so-called Feshbach resonances . First of all, we derive from Eq. (1) an effective one-dimensional model, assuming the case of a highly anisotropic (cigar-shaped) trap of axial symmetry, $`V(𝐑)=\frac{1}{2}m\omega _{\perp }^2(R_{\perp }^2+\lambda X^2)`$, where $`R_{\perp }=\sqrt{Y^2+Z^2}`$. This means that $`\lambda \equiv \omega _x^2/\omega _{\perp }^2\ll 1`$, and the transverse structure of the condensate, being close to a Gaussian in shape, is mostly defined by the trapping potential . Measuring the spatial variables in units of the longitudinal harmonic oscillator length $`a_{ho}=(\hbar /m\omega _{\perp }\sqrt{\lambda })^{1/2}`$, and the wavefunction amplitude in units of $`(\hbar \omega _{\perp }/2U_0\sqrt{\lambda })^{1/2}`$, we obtain the following dimensionless equation: $$i\frac{\partial \mathrm{\Psi }}{\partial t}+\nabla ^2\mathrm{\Psi }-[\lambda ^{-1}(y^2+z^2)+x^2]\mathrm{\Psi }+\sigma |\mathrm{\Psi }|^2\mathrm{\Psi }=0,$$ (2) where time is measured in units of $`2/\omega _{\perp }\sqrt{\lambda }`$, $`(x,y,z)=(X,Y,Z)/a_{ho}`$, and the sign $`\sigma =\mathrm{sgn}(a)=\pm 1`$ in front of the nonlinear term is defined by the sign of the s-wave scattering length of the two-body interaction. We assume that in Eq. (1) the nonlinear interaction is weak relative to the trapping potential force in the transverse dimensions, i.e. $`\lambda \ll 1`$. Then, it follows from Eq. (2) that the transverse structure of the condensate is of order of $`\lambda `$, and the condensate has a cigar-like shape. Therefore, we can look for solutions of Eq. (2) in the form $`\mathrm{\Psi }(r,x,t)=\mathrm{\Phi }(r)\psi (x,t)e^{-2i\gamma t}`$, where $`r=\sqrt{y^2+z^2}`$, and $`\mathrm{\Phi }(r)`$ is a solution of the auxiliary problem for the 2D radially symmetric quantum harmonic oscillator, $`\nabla _{\perp }^2\mathrm{\Phi }+2\gamma \mathrm{\Phi }-(r^2/\lambda )\mathrm{\Phi }=0`$, which we take in the form of the no-node ground state, $`\mathrm{\Phi }_0(r)=C\mathrm{exp}(-\gamma r^2/2)`$, where $`\gamma =1/\sqrt{\lambda }`$. To preserve all the information about the structure of the 3D condensate in an asymmetric trap while describing its properties by the effective GP equation for the longitudinal profile, we impose the normalisation for $`\mathrm{\Phi }_0(r)`$ that yields $`C^2=\gamma /\pi `$. After substituting such a factorized solution into Eq. (2), dividing by $`\mathrm{\Phi }`$ and integrating over the transverse cross-section of the cigar-shaped condensate, we finally derive the following 1D nonstationary GP equation: $$i\frac{\partial \psi }{\partial t}+\frac{\partial ^2\psi }{\partial x^2}-x^2\psi +\sigma |\psi |^2\psi =0.$$ (3) The number of condensate particles $`N`$ is now defined as $`N=(\hbar \omega _{\perp }/2U_0\sqrt{\lambda })Q`$, where $$Q=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}|\psi (x,t)|^2dx$$ (4) is the integral of motion for the normalised nonstationary GP equation (3). Equation (3) includes all the terms of the same order, and it describes the longitudinal profile of the condensate state in a highly anisotropic trap.
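As an aside, the ground state of Eq. (3) can be found numerically, for instance by split-step imaginary-time propagation at fixed norm $`Q`$. The following minimal sketch (with illustrative grid, norm and step parameters, not those used for the figures below) also extracts the corresponding nonlinear eigenvalue $`\mathrm{\Omega }`$.

```python
import numpy as np

# Minimal sketch: ground state of Eq. (3) by split-step imaginary-time
# propagation at fixed norm Q. Grid, Q and time step are illustrative.
sigma, Q = -1.0, 20.0                      # repulsive case, fixed norm
x = np.linspace(-10, 10, 512)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
dt = 1e-3

psi = np.exp(-x**2 / 2).astype(complex)    # Gaussian initial guess
for _ in range(20000):
    # half-step of potential + nonlinearity, full kinetic step, half-step again
    psi *= np.exp(-0.5 * dt * (x**2 - sigma * np.abs(psi) ** 2))
    psi = np.fft.ifft(np.exp(-dt * k**2) * np.fft.fft(psi))
    psi *= np.exp(-0.5 * dt * (x**2 - sigma * np.abs(psi) ** 2))
    psi *= np.sqrt(Q / (np.sum(np.abs(psi) ** 2) * dx))   # renormalise to Q

# nonlinear eigenvalue Omega from the stationary equation
phi = psi.real
lap = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2
Omega = np.sum(phi * (-lap + x**2 * phi - sigma * phi**3)) * dx / Q
print(f"Omega = {Omega:.3f} for Q = {Q}")
```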
In the linear limit, i.e. when formally $`\sigma \to 0`$, Eq. (3) becomes the well-known equation for a harmonic quantum oscillator. Its stationary localised solutions, $$\psi (x,t)=\varphi (x)e^{-i\mathrm{\Omega }t},$$ (5) exist only for discrete values of $`\mathrm{\Omega }`$, such that $`\mathrm{\Omega }_n=1+2n,n=0,1,2,\mathrm{\dots }`$, and they are defined through the Hermite-Gauss polynomials, $`\varphi _n(x)=c_n\mathrm{e}^{-x^2/2}H_n(x)`$, where $`c_n=(2^nn!\sqrt{\pi })^{-1/2}`$, and $$H_n(x)=(-1)^n\mathrm{e}^{x^2}\frac{d^n(\mathrm{e}^{-x^2})}{dx^n},$$ (6) so that $`H_0=1`$, $`H_1=2x`$, etc. In general, the localised solutions of Eq. (3) for $`\sigma \ne 0`$ can be found only numerically. All such solutions can be characterised by the dependence of the invariant (4) on the effective nonlinear frequency $`\mathrm{\Omega }`$. However, in some particular limits we can employ different approximate methods to find the localised solutions analytically. First of all, to describe the effect of weak nonlinearity, we use the perturbation theory based on the expansion of the general solution of Eq. (3) in the infinite set of the eigenfunctions (6). A similar approach has been used earlier in the theory of dispersion-managed optical solitons . To apply such a perturbation theory, we look for solutions of Eq. (3) in the form \[cf. Eq. (5)\] $$\psi (x,t)=e^{-i\mathrm{\Omega }t}\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}B_n\varphi _n(x),$$ (7) where $`\varphi _n(x)`$ are the eigenfunctions of the linear equation for a harmonic oscillator that satisfy the equation $$\frac{d^2\varphi _n}{dx^2}-x^2\varphi _n+\mathrm{\Omega }_n\varphi _n=0.$$ (8) Inserting the expansion (7) into Eq. (3), multiplying by $`\varphi _n`$ and averaging, we obtain a system of algebraic equations for the coefficients: $$(\mathrm{\Omega }-\mathrm{\Omega }_m)B_m-\sigma \underset{n,l,k}{\sum }V_{m,n,l,k}B_nB_lB_k=0,$$ (9) where $`V_{m,n,l,k}=\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\varphi _m(x)\varphi _n(x)\varphi _l(x)\varphi _k(x)\,dx.`$ Equation (9) can also be rewritten in the traditional form $`\delta (H+\mathrm{\Omega }Q)=0`$, and it allows us to develop a perturbation theory for small nonlinearities, for any Hermite-Gaussian eigenmode. For example, let us consider the ground-state mode at $`n=0`$. Assuming $`B_0\gg B_m`$ for $`m\ne 0`$ and the condition of a symmetric solution, $`\varphi (x)=\varphi (-x)`$, i.e. $`B_{2k+1}=0`$ for any $`k`$, we find the corrections $`\mathrm{\Omega }\approx \mathrm{\Omega }_0+\sigma V_{0,0,0,0}|B_0|^2`$ and $`B_{2k}=\frac{\sigma V_{2k,0,0,0}}{(\mathrm{\Omega }-\mathrm{\Omega }_{2k})}|B_0|^2B_0,k\ne 0.`$ This allows us to calculate the asymptotic expansion of the invariant $`Q`$ for small nonlinearities, $$Q=\underset{k}{\sum }|B_{2k}|^2\approx \sigma a_0(\mathrm{\Omega }-\mathrm{\Omega }_0)[1+b_0(\mathrm{\Omega }-\mathrm{\Omega }_0)^2],$$ (10) where the coefficients are: $`a_0=\sqrt{2\pi },b_0=\underset{k=1}{\overset{\mathrm{\infty }}{\sum }}\frac{(2k)!}{(4k)^2(k!\,2^{2k})^2}.`$ Higher-order modes can be considered in a similar way, and the results are similar to Eq. (10), where $`a_0`$ and $`b_0`$ change to $`a_n`$ and $`b_n`$, which depend on the mode order.
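The overlap integrals $`V_{m,n,l,k}`$ are straightforward to evaluate numerically. The minimal sketch below uses Gauss-Hermite quadrature with the normalised Hermite-Gauss modes and reproduces $`V_{0,0,0,0}=1/\sqrt{2\pi }`$, i.e. the coefficient $`a_0=\sqrt{2\pi }`$ appearing in Eq. (10).

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import eval_hermite, factorial

# Minimal sketch: overlap integrals V_{m,n,l,k} of Eq. (9) by Gauss-Hermite
# quadrature with the normalised Hermite-Gauss eigenmodes phi_n.
def phi(n, x):
    c = (2.0**n * factorial(n) * np.sqrt(np.pi)) ** (-0.5)
    return c * np.exp(-x**2 / 2) * eval_hermite(n, x)

def V(m, n, l, k, npts=60):
    # the integrand is e^{-2x^2} times a polynomial; substitute x = t/sqrt(2)
    # so that the Gauss-Hermite weight e^{-t^2} is matched exactly
    t, w = hermgauss(npts)
    x = t / np.sqrt(2.0)
    vals = phi(m, x) * phi(n, x) * phi(l, x) * phi(k, x) * np.exp(2 * x**2)
    return np.sum(w * vals) / np.sqrt(2.0)

V0 = V(0, 0, 0, 0)
print(f"V_0000 = {V0:.6f}, expected 1/sqrt(2*pi) = {1 / np.sqrt(2 * np.pi):.6f}")
print(f"a_0 = 1/V_0000 = {1 / V0:.4f}  (= sqrt(2*pi) in Eq. (10))")
```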
In the opposite limit, i.e. when the nonlinear term or the potential are large in comparison with the kinetic term given by the second-order derivative, we can use two different approximations for describing localized modes. For $`\sigma =+1`$ and large negative $`\mathrm{\Omega }`$, localised modes are described by the stationary solutions of the nonlinear Schrödinger (NLS) equation, which appears when we neglect the trapping potential. The NLS one-soliton solution is $`\varphi _s(x)=\sqrt{-2\mathrm{\Omega }}\,\mathrm{sech}(x\sqrt{-\mathrm{\Omega }})`$, so that the dependence $`Q(\mathrm{\Omega })`$ coincides with the soliton invariant $`Q_s=4\sqrt{-\mathrm{\Omega }}`$. For $`\sigma =-1`$ and large positive $`\mathrm{\Omega }`$, the ground-state solution can be obtained by using the so-called Thomas-Fermi approximation, based on neglecting the kinetic term; this yields $`\varphi _{\mathrm{TF}}(x)\approx \sqrt{\mathrm{\Omega }-x^2}`$. In general, we should solve Eq. (3) numerically. Figures 1(a) and 1(b) present examples of the numerically found ground-state solutions of Eq. (3) in the form (5) as continuous functions of the dimensionless parameter $`\mathrm{\Omega }`$, for both negative (a) and positive (b) scattering length. For $`\mathrm{\Omega }\to 1`$, i.e. in the limit of the harmonic oscillator ground-state mode, the solution is close to a Gaussian in both cases. When $`\mathrm{\Omega }`$ deviates from 1, the solution profile is defined by the type of nonlinearity. For attraction ($`\sigma =+1`$), the profile approaches the sech-type soliton, whereas for repulsion ($`\sigma =-1`$) the solution flattens, and it is better described by the Thomas-Fermi approximation, which is good except at the edge points. In Fig. 2 we present the dependence of the invariant $`Q`$ on the parameter $`\mathrm{\Omega }`$ for both types of the ground-state solution, corresponding to the two different signs of the scattering length. The dashed curve for the zero-order mode (marked as 0th-mode) shows the dependence $`Q_\mathrm{s}(\mathrm{\Omega })`$ for the soliton solution of the NLS equation without a trapping potential. In the asymptotic region of negative $`\mathrm{\Omega }`$, i.e. say for $`\mathrm{\Omega }<-2`$, the dependence $`Q(\mathrm{\Omega })`$ for the BEC condensate in a trap approaches the curve $`Q_\mathrm{s}(\mathrm{\Omega })`$. This means that for such a narrow localised state the effect of a parabolic potential is negligible, and the condensate ground state becomes localised mostly due to the attractive interparticle interaction. In contrast, for large positive $`\mathrm{\Omega }`$ the effect of a trapping potential is crucial, and the solution of the Thomas-Fermi approximation, $`\varphi _{\mathrm{TF}}(x)=\sqrt{\mathrm{\Omega }-x^2}`$, defines the common asymptotics for all the modes, $`Q_{\mathrm{TF}}\approx \frac{4}{3}\mathrm{\Omega }^{3/2}`$ (dashed-dotted curves in Fig. 2). As has been mentioned above, in the linear limit ($`\sigma \to 0`$) Eq. (3) possesses a discrete set of localised modes described by the Hermite-Gauss polynomials. We have demonstrated that all such modes can be readily calculated by the perturbation theory in the weakly nonlinear approximation, and therefore they should all exist for the nonlinear problem as well, describing an analytical continuation of the Hermite-Gauss linear modes to a set of nonlinear stationary states. In application to the BEC theory, these non-ground-state solutions were first discussed by Yukalov et al . Figures 1(c) to 1(f) show examples of the first- and second-order modes for both negative and positive scattering length, respectively. In the limit $`\mathrm{\Omega }\to \mathrm{\Omega }_n`$, all those modes transform into the corresponding eigenfunctions of a linear harmonic oscillator. It is clear that nonlinearity has a different effect for negative and positive scattering length.
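One standard way to obtain such numerical solutions is a shooting method for the stationary equation $`\varphi ^{\prime \prime }-x^2\varphi +\mathrm{\Omega }\varphi +\sigma \varphi ^3=0`$: the amplitude $`\varphi (0)`$ of an even mode is tuned until the solution decays at large $`x`$. The sketch below uses illustrative parameters, not those of the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Minimal sketch: even stationary modes of phi'' - x^2 phi + Omega phi
# + sigma phi^3 = 0 by shooting from x = 0 with phi'(0) = 0. Parameters
# (sigma, Omega, amplitude scan) are illustrative choices.
sigma, Omega, xmax = 1.0, -1.0, 8.0        # attractive case

def rhs(x, y):
    phi, dphi = y
    return [dphi, (x**2 - Omega) * phi - sigma * phi**3]

def tail(amplitude):
    sol = solve_ivp(rhs, [0.0, xmax], [amplitude, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]                    # phi(xmax): zero for a bound mode

# scan amplitudes for a sign change of the tail, then refine with brentq;
# the bracket below works for these parameters (an assumption, not general)
amps = np.linspace(0.5, 3.0, 26)
tails = [tail(a) for a in amps]
i = next(j for j in range(len(amps) - 1) if tails[j] * tails[j + 1] < 0)
a = brentq(tail, amps[i], amps[i + 1])

sol = solve_ivp(rhs, [0.0, xmax], [a, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
xg = np.linspace(0.0, xmax, 400)
phi = sol.sol(xg)[0]
print(f"phi(0) = {a:.5f},  Q = {2 * np.trapz(phi**2, xg):.4f}")  # even mode
```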
For the negative scattering length (attraction), the higher-order modes transform into multi-soliton states consisting of a sequence of solitary waves with alternating phases \[see Figs. 1(c) and 1(e)\]. This is further confirmed by the analysis of the invariant $`Q`$ vs. $`\mathrm{\Omega }`$, where all the branches of the higher-order modes approach asymptotically the soliton dependences $`Q_n\approx (n+1)Q_\mathrm{s}`$, where $`n`$ is the order of the mode ($`n=0,1,\mathrm{\dots }`$). From the physical point of view, in the case of attractive interaction the higher-order stationary modes exist due to a balance between the repulsion of out-of-phase bright NLS solitons and the attraction imposed by the trapping potential. The analysis of the global stability of such higher-order multihump multi-soliton modes is still an open problem; however, recent results indicate that, at least in some nonlinear models, multihump soliton states can be stable . For the positive scattering length ($`\sigma =-1`$), the higher-order modes transform into a sequence of dark solitons (or kinks) , so that the first-order mode corresponds to a single dark soliton, the second-order mode to a pair of dark solitons, etc. \[see Figs. 1(d) and 1(f)\]. Again, these stationary solutions satisfy a force balance condition: the repulsion between dark solitons is exactly compensated by the attractive force of the trapping potential. The modal structure of the condensate macroscopic states described above and summarised in Fig. 2, for both positive and negative values of the scattering length, allows us to draw an analogy between BEC in a trap and guided-wave optics, where the condensate dynamics in time corresponds to the stationary mode propagation along an optical waveguide, with the parameter $`\mathrm{\Omega }`$ as the propagation constant. As is well known from different problems of guided-wave optics, in the presence of interaction the guided modes become coupled, and the coupling can lead to both power exchange and nonlinear phase shifting between the modes. In application to the BEC theory, the mode coupling resembles a kind of internal Josephson effect. These issues are beyond the scope of this paper and will be analysed elsewhere . The theory of nonlinear stationary modes of a macroscopic quantum oscillator developed above for the 1D analog of BEC can be easily extended to both the 2D and 3D cases. Moreover, the coupled-mode theory for a single condensate is closely connected to the dynamics of strongly coupled two-component BECs, where excitation of an antisymmetric (or, in our notation, first-order) collective mode, in the form of collapses and revivals, has been recently observed experimentally . Finally, we would like to mention that the basic concepts and results presented above can find applications in other fields. For example, the effect of nonlinearity can lead to mode coupling in the so-called “photonic atom”, a micrometer-sized piece of semiconductor that traps photons inside , or in two such ‘photonic atoms’ coupled together, a ‘photonic molecule’ . In such photonic microcavity structures, the macroscopic nature of the states may lead to different nonlinear effects, including mode mixing and power exchange. In conclusion, we have analysed nonlinear stationary modes of a macroscopic quantum oscillator considering, as an example, the cigar-shaped Bose-Einstein condensate in a parabolic trap.
# Glass transitions and dynamics in thin polymer films: dielectric relaxation of thin films of polystyrene ## I Introduction Recently, glass transitions in amorphous materials have been investigated by many researchers. However, the mechanism involved in glass transitions is not yet fully understood . Understanding the behavior of the characteristic length scale of the dynamics of supercooled liquids near the glass transition is the most important problem to be solved in such studies. In Adam and Gibbs’ theory, it is assumed that there is a domain in which collective particle motion can occur and its size grows as the temperature is lowered. This domain is called the cooperatively rearranging region (CRR) . In connection with the CRR, recent molecular dynamics simulations have revealed the existence of significant large-scale heterogeneity in particle displacements, so-called dynamical heterogeneity in supercooled liquids . As the temperature decreases toward $`T_\mathrm{g}`$, the dynamical heterogeneity grows. Experimental studies using multi-dimensional NMR , dielectric hole burning and photobleaching have produced evidence of dynamical heterogeneity. These topics concerning heterogeneity are closely related to the length scale of dynamics near glass transitions. Glass transitions in finite systems confined to nanopores and thin films have recently attracted much attention, because such systems can be regarded as model systems for studying the length scale of glass transitions. In such systems, deviation from bulk properties is expected to appear if the system size is comparable to the characteristic length scale. In particular, $`T_\mathrm{g}`$ and the thermal expansion coefficient $`\alpha _\mathrm{n}`$ of thin films have been measured using several experimental techniques, including ellipsometry , positron annihilation lifetime spectroscopy (PALS) , Brillouin light scattering , and X-ray reflectivity . For the first time, Keddie et al. investigated $`T_\mathrm{g}`$ and the thermal expansion coefficient $`\alpha _\mathrm{n}`$ of thin polymer films supported on substrate. For polystyrene films on hydrogen-passivated Si, $`T_\mathrm{g}`$ was found to decrease with decreasing film thickness $`d`$ for $`d`$$`<`$40nm . The value of $`\alpha _\mathrm{n}`$ below $`T_\mathrm{g}`$ was found to increase with decreasing $`d`$ approaching the value characterizing liquid states. It was suggested that this decrease in $`T_\mathrm{g}`$ is caused by the presence of a liquid-like layer at the polymer-air interface in this case; in the case of freely standing polystyrene films, $`T_\mathrm{g}`$ decreases much more rapidly with decreasing film thickness . These results suggest that the interaction between polymers and the substrate competes with surface effects. This competition leads to a more gradual decrease of $`T_\mathrm{g}`$ in the former case. For a strong attractive interaction between polymers and the substrate, an increase in $`T_\mathrm{g}`$ with decreasing $`d`$ was observed . Positron annihilation lifetime measurements reveal that the observed $`T_\mathrm{g}`$ values of supported PS films are similar to those obtained by Keddie et al. However, the thermal expansion coefficient obtained by PALS is independent of $`d`$ below $`T_\mathrm{g}`$, while it decreases with decreasing thickness above $`T_\mathrm{g}`$. It was proposed that there is a dead layer near the interface between polymers and the substrate in addition to a liquid-like layer at the polymer-air interface. 
In the case of thin polymer films supported on substrate, the glass transition temperature and thermal properties strongly depend on the competition between interfacial and surface effects. There are still controversial experimental results for such systems. The dynamics related to the glass transition in thin films have been investigated using several methods . Second harmonic generation reveals that the distribution of relaxation times broadens with decreasing film thickness, while the average relaxation time of the $`\alpha `$-process remains constant for supported films of a random copolymer . Ultrasonic measurements have shown that the temperature $`T_{\mathrm{max}}`$ at which the ultrasonic absorption exhibits a maximum for a given frequency has a $`d`$ dependence similar to that of $`T_\mathrm{g}`$ obtained by ellipsometric measurements for thin polystyrene films supported on substrate. In the case of freely standing films of polystyrene, photon correlation spectroscopy studies indicate that the relaxation behavior of the $`\alpha `$-process in thin films is similar to that of bulk samples of polystyrene, except for the reduction of the $`\alpha `$-relaxation time. Atomic force microscopy studies have revealed the existence of a mobile layer near the free surface of films of polystyrene. Because there are only a few experimental observations on the dynamics of thin polymer films, it is not yet clear whether properties of the $`\alpha `$-process change together with $`T_\mathrm{g}`$ as the film thickness decreases, or whether the obtained results depend on the methods used for the measurements or on the details of the individual samples. Dielectric measurement is one of the most powerful experimental tools to investigate the dynamics of the $`\alpha `$-process in amorphous materials. Recently we applied this method to the determination of the glass transition temperature through measurements of the thermal expansion coefficient . Bauer et al. also used this method and further extended it to thermal expansion spectroscopy . By virtue of dielectric measurements, it is possible to simultaneously measure the glass transition temperature and determine the relaxation behavior of the $`\alpha `$-process of a single sample, even for thin films. In a previous paper , we reported that $`T_\mathrm{g}`$ for thin polystyrene films supported on glass substrate can be determined from the temperature change of the electric capacitance during heating and cooling processes, and that the dynamics of the $`\alpha `$-process can be determined from the dielectric loss of the films. We were able to obtain the distinct thickness dependences of $`T_\mathrm{g}`$ and of $`T_\alpha `$, the temperature at which the dielectric loss exhibits a peak value for a fixed frequency due to the $`\alpha `$-process. In this paper, the results obtained through dielectric measurements are described in detail, and the dynamics of the $`\alpha `$-process are investigated through measurements of the frequency dispersion of the dielectric loss, for the purpose of clarifying the relations involving the thickness dependence of $`T_\mathrm{g}`$, the thermal expansion coefficients, and the dynamics of the $`\alpha `$-process. Based on the results of simultaneous measurements, we discuss the relationship between $`T_\mathrm{g}`$ and the dynamics of the $`\alpha `$-process of thin polymer films. A possible explanation of our experimental results for $`T_\mathrm{g}(d)`$ and the dynamics of the $`\alpha `$-process is given in terms of a three-layer model.
This paper consists of five sections. In Sec.II, experimental details and the principles of our determination of $`T_\mathrm{g}`$ and the thermal expansion coefficient using electric capacitance measurements are given. The experimental results for the thermal expansion coefficient $`\alpha _\mathrm{n}`$ and $`T_\mathrm{g}`$ obtained from our measurements are given in Sec.III. A three-layer model, which can account for the observed thickness dependences of $`\alpha _\mathrm{n}`$ and $`T_\mathrm{g}`$, is also introduced there. In Sec.IV the dynamics of the $`\alpha `$-process of thin films are investigated in reference to the peak profile in the dielectric loss due to the $`\alpha `$-process in the frequency domain. In Sec.V overall discussion and a summary of this paper are given. ## II Experimental details ### A Sample preparation and measurement procedures Four different atactic polystyrenes (a-PS) were used. These were purchased from Scientific Polymer Products, Inc. ($`M_\mathrm{w}`$=2.8$`\times `$10<sup>5</sup>), the Aldrich Co., Ltd. ($`M_\mathrm{w}`$= 1.8$`\times `$10<sup>6</sup>, $`M_\mathrm{w}/M_\mathrm{n}`$=1.03), and Polymer Source, Inc. ($`M_\mathrm{w}`$= 3.6$`\times `$10<sup>4</sup>, $`M_\mathrm{w}/M_\mathrm{n}`$=1.06 and $`M_\mathrm{w}`$= 3.6$`\times `$10<sup>3</sup>, $`M_\mathrm{w}/M_\mathrm{n}`$=1.06). Thin films of a-PS with various thicknesses from 6 nm to 489 nm were prepared on an Al-deposited slide glass using a spin-coat method from a toluene solution of a-PS. The thickness was controlled by changing the concentration of the solution. After annealing at 70°C in a vacuum system for several days to remove solvents, Al was vacuum-deposited again to serve as an upper electrode. Heating cycles in which the temperature was changed between room temperature and 110°C ($`>`$$`T_\mathrm{g}`$) were applied prior to the dielectric measurements to relax the as-spun films and obtain reproducible results. ‘Bulk’ films of a-PS (each with $`d>`$100$`\mu `$m) were made by oil-pressing samples melted at about 200°C for a few minutes, and gold was vacuum-deposited onto both sides of the films to serve as electrodes. Dielectric measurements were made using an LCR meter (HP4284A) in the frequency range from 20 Hz to 1 MHz during heating (cooling) processes in which the temperature was changed at a rate of 2 K/min. For dielectric measurements of very thin films, the resistance of the electrodes cannot be neglected. This leads to an extra loss peak on the high-frequency side, which results from the fact that the system is equivalent to a series circuit of a capacitor and a resistor, where the capacitance is that of the sample and the resistance is that of the electrodes . The peak shape in the frequency domain can be fitted well by a simple Debye-type equation. Data obtained in the frequency domain, therefore, can be accurately corrected by subtracting this ‘C-R peak’, assuming the validity of the Debye equation. Data corrected in this manner were used for further analysis in this paper.
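For illustration, a minimal sketch of how such a correction might be implemented is given below: the series combination of a sample capacitance $`C`$ and an electrode resistance $`R`$ produces a Debye-type loss peak at $`\omega =1/RC`$, which can be fitted and subtracted. All numbers are hypothetical, not the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of the 'C-R peak' correction: a series R-C circuit makes the
# measured loss C'' acquire a Debye-type peak at omega = 1/(RC).
f = np.logspace(1, 6, 200)                 # Hz
w = 2 * np.pi * f
C, R = 2.0e-9, 200.0                       # hypothetical sample C (F), lead R (ohm)

C_meas = C / (1 + 1j * w * R * C)          # measured complex capacitance
loss = -C_meas.imag                        # apparent C''(omega), Debye-shaped

def debye(w, amp, tau):
    return amp * w * tau / (1 + (w * tau) ** 2)

popt, _ = curve_fit(debye, w, loss, p0=(1e-9, 1e-7))
corrected = loss - debye(w, *popt)         # loss with the C-R artefact removed
print(f"fitted tau = {popt[1]:.3e} s, expected RC = {R * C:.3e} s")
```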
### B Relation involving the electric capacitance, thickness and thermal expansion coefficients In this section, we give the relation between the temperature change of the electric capacitance and the thermal expansion coefficient. A similar discussion was also given by Bauer et al. . In our measurements, film thickness was evaluated from the capacitance at room temperature of as-prepared films by using the formula for the capacitance $`C^{\prime }`$ of a flat-plate condenser, $`C^{\prime }`$=$`ϵ^{\prime }ϵ_0S/d`$, where $`ϵ^{\prime }`$ is the permittivity of a-PS, $`ϵ_0`$ is the permittivity of the vacuum, $`S`$ is the area of the electrode ($`S`$=8.0mm<sup>2</sup>), and $`d`$ is the thickness of the films. In general, the geometrical capacitance is given by $`C_0(T)\equiv ϵ_0\frac{S}{d}\approx ϵ_0\frac{S_0}{d_0}(1+(2\alpha _t-\alpha _n)\mathrm{\Delta }T),`$ (1) and the permittivity is expressed by $`ϵ^{\prime }(\omega ,T)=ϵ_{\mathrm{\infty }}(T)+ϵ_{\mathrm{disp}}(\omega ,T),`$ (2) where $`ϵ_{\mathrm{\infty }}`$ is the permittivity in the high-frequency limit, $`\alpha _t`$ is the linear thermal expansion coefficient parallel to the film surface, $`\alpha _n`$ is the linear thermal expansion coefficient normal to the film surface, $`\mathrm{\Delta }T=T-T_0`$, and $`T_0`$ is a standard temperature. The second term $`ϵ_{\mathrm{disp}}`$ on the right-hand side of Eq.(2) is related to the frequency dispersion of the dielectric loss due to the $`\alpha `$-process, the $`\beta `$-process, and so on. If we here assume that the films are constrained along the substrate surface, $`\alpha _t`$ and $`\alpha _n`$ are given by $`\alpha _t=0\text{and}\alpha _n=\frac{1+\nu }{1-\nu }\alpha _{\mathrm{\infty }},`$ (3) where $`\nu `$ is Poisson’s ratio and $`\alpha _{\mathrm{\infty }}`$ is the bulk linear coefficient of thermal expansion . It should be noted that this case corresponds to that of ‘constant area conditions’ in Ref. . In the temperature range where the effect of the dielectric dispersion, $`i.e.`$, the second term in the expression for $`ϵ^{\prime }`$, can be neglected, we obtain $`ϵ^{\prime }(T)=ϵ_{\mathrm{\infty }}(T)\approx ϵ_{\mathrm{\infty }}(T_0)(1-\eta _0\alpha _n\mathrm{\Delta }T),`$ (4) where $`ϵ_{\mathrm{\infty }}(T_0)=\frac{1+2\xi _0}{1-\xi _0},`$ (5) $`\eta _0\equiv \frac{3\xi _0}{(1-\xi _0)(1+2\xi _0)},`$ (6) $`\xi _0\equiv \frac{1}{3ϵ_0}\underset{j}{\sum }N_{j,0}\overline{\alpha _j}.`$ (7) Here $`N_{j,0}\equiv N_j(T_0)`$, $`N_j(T)`$ is the number density of the j-th atom at $`T`$, and $`\overline{\alpha _j}`$ is the polarizability of the j-th atom. The Clausius-Mossotti relation, $`(ϵ_{\mathrm{\infty }}-1)/(ϵ_{\mathrm{\infty }}+2)=(1/3ϵ_0)\underset{j}{\sum }N_j\overline{\alpha _j}`$, where $`N_j(T)=N_j(T_0)(1-\alpha _\mathrm{n}\mathrm{\Delta }T)`$, has been used. In the case of a-PS, the dielectric constant $`ϵ^{\prime }`$ of bulk samples is 2.8 at room temperature . If we assume that $`ϵ^{\prime }(T_0)\approx ϵ_{\mathrm{\infty }}(T_0)`$=2.8, then $`\xi _0=0.375`$ and $`\eta _0`$ is nearly equal to 1. Using Eqs.(1), (3) and (4), we obtain the temperature coefficient of the capacitance $`\stackrel{~}{\alpha }`$ as follows: $`\stackrel{~}{\alpha }\equiv -\frac{1}{C^{\prime }(T_0)}\frac{dC^{\prime }(T)}{dT}`$ (8) $`=-\left(\frac{1}{ϵ^{\prime }(T_0)}\frac{dϵ^{\prime }}{dT}+\frac{1}{C_0(T_0)}\frac{dC_0}{dT}\right)`$ (9) $`=(1+\eta _0)\alpha _\mathrm{n}\approx 2\alpha _\mathrm{n}.`$ (10) We thus see that the temperature coefficient of $`C^{\prime }`$ is proportional to $`\alpha _\mathrm{n}`$. It is therefore expected that the temperature coefficient changes at $`T_\mathrm{g}`$. In the literature we find $`\nu `$=0.325 and $`\alpha _{\mathrm{\infty }}`$=0.57$`\times 10^{-4}`$/K for $`T<T_\mathrm{g}`$, and $`\nu `$=0.5 and $`\alpha _{\mathrm{\infty }}`$=1.7$`\times 10^{-4}`$/K for $`T>T_\mathrm{g}`$.
Hence, for bulk samples of a-PS, it can be expected that $$\stackrel{~}{\alpha }=\left\{\begin{array}{cc}2.2\times 10^{-4}\ \mathrm{K}^{-1},& T<T_\mathrm{g}\\ 10.2\times 10^{-4}\ \mathrm{K}^{-1},& T>T_\mathrm{g}.\end{array}\right.$$ (13) ## III Glass transition temperature of thin films ### A Glass transition temperature and thermal expansion coefficients Figure 1 displays the temperature dependence of the capacitance, normalized with respect to the value at 303K, during the heating processes. In Fig.1(a) we can see that the values at thickness 91 nm for different frequencies fall along a single line and decrease with increasing temperature for the temperature range from room temperature to approximately 370K. At higher temperature the normalized capacitance decreases with increasing temperature more steeply than at lower temperature. In this range, the values for different frequencies can no longer be fitted by a single line. Here, they are dispersed due to the appearance of the $`\alpha `$-process. For the temperature range shown in the figure, however, it is apparent that the effect of the dispersion is quite weak above 10kHz. Therefore, for such frequencies the temperature at which the slope of $`C^{\prime }(T)`$ changes discontinuously can be determined unambiguously as the crossover temperature between the line characterizing the lower temperature side and that characterizing the higher temperature side. This crossover temperature can be regarded as $`T_\mathrm{g}`$, because the thermal expansion coefficient changes through the crossover temperature, as can be expected from the discussion in Sec.II.

FIG. 1.: Temperature dependence of the capacitance normalized with respect to the values at 303K during the heating process for various frequencies from 100Hz to 10kHz and three different thicknesses: (a) $`d`$=$`91`$nm and $`M_\mathrm{w}`$=1.8$`\times `$10<sup>6</sup>; (b) $`d`$=$`20`$nm and $`M_\mathrm{w}`$= 2.8$`\times `$10<sup>5</sup>; (c) $`d`$=$`11`$nm and $`M_\mathrm{w}`$=1.8$`\times `$10<sup>6</sup>. The solid lines were obtained by fitting the data for 10kHz to a linear function below and above $`T_\mathrm{g}`$. The arrows indicate the values of $`T_\mathrm{g}`$.

As $`d`$ decreases, $`T_\mathrm{g}`$ also decreases, as shown in Figs.1(b) and (c). Figure 2 displays the $`d`$ dependence of $`T_\mathrm{g}`$ for a-PS films with four different molecular weights. In each case, the values of $`T_\mathrm{g}`$ for the various values of $`d`$ were determined as the crossover temperatures at which the temperature coefficient of the capacitance at 10 kHz changes during the heating process.
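This crossover construction is simple to automate. The sketch below applies it to synthetic $`C^{\prime }(T)`$ data generated with the bulk slopes of Eq. (13); the noise level and fitting windows are illustrative choices, not those used for the actual analysis.

```python
import numpy as np

# Minimal sketch: T_g as the crossover of two straight-line fits to the
# normalised capacitance C'(T). Synthetic data, illustrative parameters.
T = np.linspace(310, 420, 111)                         # K
Tg_true, s_glass, s_liquid = 370.0, -2.2e-4, -10.2e-4  # slopes ~ Eq. (13)
C = 1 + np.where(T < Tg_true, s_glass * (T - 303),
                 s_glass * (Tg_true - 303) + s_liquid * (T - Tg_true))
C += np.random.default_rng(0).normal(0, 2e-5, T.size)  # measurement noise

p_lo = np.polyfit(T[T < 355], C[T < 355], 1)           # glassy branch
p_hi = np.polyfit(T[T > 385], C[T > 385], 1)           # liquid branch
Tg = (p_hi[1] - p_lo[1]) / (p_lo[0] - p_hi[0])         # line intersection
print(f"crossover T_g = {Tg:.1f} K")
```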
For thick films, the values of $`T_\mathrm{g}`$ are almost equal to those for bulk PS. When the films are thinner than about 100 nm, however, a decrease in $`T_\mathrm{g}`$ is observed. The value of $`T_\mathrm{g}`$ for films of 6 nm thickness is lower by about 30K than that of films of 489 nm thickness for a-PS films with $`M_\mathrm{w}`$= 2.8$`\times `$10<sup>5</sup>. The dependence of $`T_\mathrm{g}`$ on $`d`$ can be expressed as $$T_\mathrm{g}(d)=T_\mathrm{g}^{\mathrm{\infty }}\left(1-\frac{a}{d}\right),$$ (14) where $`T_\mathrm{g}(d)`$ is the measured glass transition temperature for a film of thickness $`d`$. The values of the parameters resulting in the best fit are listed in Table I. The asymptotic value $`T_\mathrm{g}^{\mathrm{\infty }}`$ has a distinct molecular weight dependence. For bulk samples of a-PS, the variation in $`T_\mathrm{g}`$ with molecular weight is described well by the empirical equation $$T_\mathrm{g}^{\mathrm{\infty }}=\stackrel{~}{T}_\mathrm{g}^{\mathrm{\infty }}-\frac{C}{N},$$ (15) where $`N`$ is the degree of polymerization, $`\stackrel{~}{T}_\mathrm{g}^{\mathrm{\infty }}`$=373 K and C=1.1$`\times `$10<sup>3</sup> . Using Eq.(15), we obtain the values of $`T_\mathrm{g}^{\mathrm{\infty }}`$ as follows: $`T_\mathrm{g}^{\mathrm{\infty }}`$= 373 K for $`M_\mathrm{w}`$=1.8$`\times `$10<sup>6</sup> and 2.8$`\times `$10<sup>5</sup>, 370 K for $`M_\mathrm{w}`$ =3.6$`\times `$10<sup>4</sup>, and 341 K for $`M_\mathrm{w}`$=3.6$`\times `$10<sup>3</sup>. Therefore, it is found that the $`M_\mathrm{w}`$-dependence of $`T_\mathrm{g}`$ for the bulk samples can be reproduced quite well by the present measurements.
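As a quick arithmetic check of Eq. (15) (assuming a styrene monomer mass of 104 for the conversion from $`M_\mathrm{w}`$ to $`N`$), the quoted $`T_\mathrm{g}^{\mathrm{\infty }}`$ values are reproduced by:

```python
# Quick check of Eq. (15) against the T_g^infinity values quoted above;
# N is the degree of polymerisation, taken as M_w / 104 for polystyrene.
Tg_inf, C = 373.0, 1.1e3
for Mw in (1.8e6, 2.8e5, 3.6e4, 3.6e3):
    N = Mw / 104.0
    print(f"M_w = {Mw:.1e}:  T_g^inf = {Tg_inf - C / N:.0f} K")
```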
As shown in Table I, the length scale $`a`$ related to the thickness dependence of $`T_\mathrm{g}`$ ranges from 0.3-0.6nm. Taking into account the scatter in $`a`$, however, there is no distinct molecular weight dependence of $`a`$. This length scale is of the same order as that of the statistical segment of polystyrene (0.68nm) . These experimental results for $`T_\mathrm{g}^{\mathrm{\infty }}`$ and $`a`$ suggest that the thickness dependence of $`T_\mathrm{g}`$ is almost independent of the molecular weight of a-PS after rescaling with respect to the $`T_\mathrm{g}`$ of bulk samples. This seems to be consistent with the fact that the length scale $`a`$ is related not to chain lengths but rather to segment lengths. If our data are fitted by the function proposed by Keddie et al., $`T_\mathrm{g}(d)=T_\mathrm{g}^{\mathrm{\infty }}\left(1-\left(a/d\right)^{\stackrel{~}{\delta }}\right)`$, the observed data for $`M_\mathrm{w}`$=1.8$`\times `$10<sup>6</sup> and 2.8$`\times `$10<sup>5</sup> can also be fitted to this equation, and the parameter values resulting in the best fit are as follows: $`a=0.39\pm 0.10`$nm and $`\stackrel{~}{\delta }=0.96\pm 0.08`$. The revised value of $`\stackrel{~}{\delta }`$ obtained by Keddie and Jones is $`\stackrel{~}{\delta }=1.28\pm 0.20`$ . This suggests that Keddie’s equation can be replaced by Eq.(14) ($`\stackrel{~}{\delta }\approx `$1). The values of $`T_\mathrm{g}`$ obtained in our measurements also agree well with those obtained by Forrest et al. . Thus, it can be concluded that the $`T_\mathrm{g}`$ of thin films has been accurately determined by measurements of the electric capacitance. With the present method it has been confirmed that the apparent $`T_\mathrm{g}`$ obtained in our measurements decreases with decreasing film thickness. Since no general theory for the effect of finite size on the glass transition has yet been presented, Eq.(14) is just an experimental finding at the present stage. As discussed later, in the context of the three-layer model we regard the glass transition temperature described by Eq.(14) as that associated with the $`\alpha `$-process (in this case, segmental motions of polymer chains) in the bulk-like layer between the dead layer and the liquid-like one. The observed decrease in $`T_\mathrm{g}`$ may be attributed to the distribution of the relaxation times of the $`\alpha `$-process, which comes from the existence of the boundaries between the liquid-like layer and the bulk-like layer, and between the bulk-like layer and the dead layer. Here, it should be noted that the temperature coefficient $`\stackrel{~}{\alpha }`$ of the electric capacitance also changes with film thickness, as shown in Fig.1. To clarify the $`d`$ dependence of $`\stackrel{~}{\alpha }`$, Fig.3 displays $`\stackrel{~}{\alpha }`$ as a function of the inverse of the film thickness for thin films. It is found that $`\stackrel{~}{\alpha }`$ increases with decreasing thickness below $`T_\mathrm{g}`$, while it decreases with decreasing film thickness above $`T_\mathrm{g}`$. In both cases, the thickness dependence of $`\stackrel{~}{\alpha }`$ can be expressed as a linear function of the inverse of the film thickness. This observed $`d`$ dependence of $`\stackrel{~}{\alpha }`$ seems to be independent of the molecular weight within experimental accuracy, except for the case with $`M_\mathrm{w}`$=3.6$`\times `$10<sup>3</sup> below $`T_\mathrm{g}`$. The values of $`\stackrel{~}{\alpha }`$ for bulk samples can be obtained by taking $`1/d`$ to zero as follows: $`\stackrel{~}{\alpha }`$=9.0$`\times `$10<sup>-4</sup> K<sup>-1</sup> for $`T>T_\mathrm{g}`$ and 2.8$`\times `$10<sup>-4</sup> K<sup>-1</sup> for $`T<T_\mathrm{g}`$. These values agree well with those predicted by Eq.(13) in Sec.II, and hence the temperature coefficient of the electric capacitance $`\stackrel{~}{\alpha }`$ observed in the present measurements can be regarded as the linear thermal expansion coefficient normal to the substrate, $`\alpha _\mathrm{n}`$, multiplied by a factor of 2, as shown in Eq.(8). This $`d`$ dependence of $`\stackrel{~}{\alpha }`$ and $`\alpha _\mathrm{n}`$ can be attributed to the existence of layers with different chain mobilities within thin polymer films supported on substrate, as discussed in the next section. ### B Three-layer model In order to explain the observed thickness dependence of $`\stackrel{~}{\alpha }`$, we introduce a three-layer model, in which it is assumed that a thin polymer film on substrate consists of three layers with different molecular mobilities . Near the interface between the glass substrate and the polymers there is a dead layer which has almost no mobility. On the other hand, near the free surface there is a liquid-like layer which has higher mobility. Here, we assume that the thicknesses of the two layers are $`\delta `$ (dead layer) and $`\xi `$ (liquid-like layer). Between these two layers there is a bulk-like layer which has the same mobility as that of the bulk samples. In this model, below the apparent $`T_\mathrm{g}`$ the observed linear thermal expansion coefficient $`\alpha _\mathrm{n}`$ normal to the surface of the substrate is given by $$\alpha _\mathrm{n}=\frac{\xi }{d}\alpha _\mathrm{l}^{\mathrm{\infty }}+\left(1-\frac{\delta +\xi }{d}\right)\alpha _\mathrm{g}^{\mathrm{\infty }},$$ (16) and above $`T_\mathrm{g}`$ by $$\alpha _\mathrm{n}=\left(1-\frac{\delta }{d}\right)\alpha _\mathrm{l}^{\mathrm{\infty }},$$ (17) where $`\alpha _\mathrm{g}^{\mathrm{\infty }}`$ ($`\alpha _\mathrm{l}^{\mathrm{\infty }}`$) is the linear thermal expansion coefficient of the bulk glassy (liquid) state. Therefore, this simplified model can reproduce the observed thickness dependence of $`\stackrel{~}{\alpha }`$(=2$`\alpha _\mathrm{n}`$) both below and above $`T_\mathrm{g}`$. By fitting the observed results given in Fig.3 to Eqs. (16) and (17), the thicknesses of the dead and liquid-like layers are obtained: $`\delta `$=$`(2.5\pm 0.3)`$nm and $`\xi `$=$`(7.5\pm 0.3)`$nm.
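Since Eqs. (16) and (17) are linear in $`1/d`$, $`\delta `$ and $`\xi `$ follow directly from the slopes and intercepts of the two branches of Fig. 3. The sketch below illustrates the extraction on data generated from the model itself, with the bulk values of Eq. (13); it is not a re-analysis of the measured points.

```python
import numpy as np

# Minimal sketch of Eqs. (16)-(17): alpha_n is linear in 1/d both below and
# above T_g, so delta and xi follow from fitted slopes and intercepts.
# The alpha_n(d) 'data' are generated from the model itself, for illustration.
a_g, a_l = 1.1e-4, 5.1e-4                  # bulk alpha_n below/above T_g (1/K)
delta, xi = 2.5, 7.5                       # nm, values quoted in the text
d = np.array([15.0, 20, 30, 50, 80, 120, 200, 400])

an_glass = (xi / d) * a_l + (1 - (delta + xi) / d) * a_g      # Eq. (16)
an_liquid = (1 - delta / d) * a_l                             # Eq. (17)

s_g, i_g = np.polyfit(1 / d, an_glass, 1)  # slope = xi*a_l - (delta+xi)*a_g
s_l, i_l = np.polyfit(1 / d, an_liquid, 1) # slope = -delta*a_l, intercept = a_l
d_fit = -s_l / i_l
xi_fit = (s_g + d_fit * i_g) / (i_l - i_g)
print(f"delta = {d_fit:.2f} nm, xi = {xi_fit:.2f} nm")
```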
Keddie et al. estimated the thickness of a liquid-like layer near the free surface to be (8.0$`\pm `$0.8)nm. DeMaggio et al. obtained a thickness of the dead layer between polymer films and substrate of (5.0$`\pm `$0.5)nm and also proposed the existence of a mobile surface layer with thickness 2 nm. The values of $`\delta `$ and $`\xi `$ observed in the present dielectric measurements are found to be consistent with those obtained by other experimental techniques: PALS and ellipsometry. Therefore, we conclude that the three-layer model can successfully be applied in our case. The values of $`\delta `$ and $`\xi `$ for the lowest molecular weight sample ($`M_\mathrm{w}`$=3.6$`\times `$10<sup>3</sup>) are $`\delta `$=(3.7$`\pm `$1.0)nm and $`\xi `$=(15.2$`\pm `$0.9)nm. These values are clearly different from those for the other molecular weights. This difference may be due to the entanglement effect, because the critical molecular weight at the entanglement limit is 1.3$`\times `$10<sup>4</sup> for a-PS . In the present measurements no obvious molecular weight dependence was observed within experimental accuracy, except for $`M_\mathrm{w}`$=3.6$`\times `$10<sup>3</sup>. However, it is plausible that the values of $`\delta `$ and $`\xi `$ are functions of the molecular weight. More precise measurements will be required to detect such molecular weight dependence. Thin polymer films with both substrate and upper electrodes of Al were used in the present dielectric measurements. Due to the presence of these electrodes, for our thin films there was no real free surface at the air-polymer interface. The samples discussed presently were capped supported films, according to the terminology given in Ref.. However, the values of $`T_\mathrm{g}`$ observed in the present measurements agree with those obtained by ellipsometry for uncapped films. It has also been reported that there are no obvious differences between the results obtained for uncapped and capped supported films . These experimental results support the conclusion that there is no appreciable difference in $`T_\mathrm{g}`$ for capped and uncapped supported thin films; i.e., the upper electrode of our samples can be assumed to have no effect on the thermal properties of the polymer films, in particular the thermal expansivity along the direction normal to the substrate. Recent DSC measurements have revealed the existence of a surface mobile layer of PS spheres dispersed in Al<sub>2</sub>O<sub>3</sub> powders . According to the discussion given by Mayes , the glass transition temperature of a (surface) layer can be depressed only if the end concentration of the layer is higher than that of bulk samples. The existence of a true free surface is not a necessary condition for the decrease of $`T_\mathrm{g}`$ and the existence of a liquid-like layer.

FIG. 4.: Reduced dielectric loss as a function of temperature for various film thicknesses ($`M_\mathrm{w}`$=2.8$`\times `$10<sup>5</sup> and $`f`$=100Hz). The three symbols correspond to $`d=105`$ nm, $`d=26`$ nm and $`d=9`$ nm. The curves were obtained by fitting the data to the equation $`\stackrel{~}{ϵ}^{\prime \prime }=\stackrel{~}{ϵ}_{\mathrm{max}}^{\prime \prime }/(1+((T-T_\alpha )/\mathrm{\Delta }T_\alpha )^2)`$.

Recent measurements of the mass density of a-PS thin films supported on Si using neutron reflectivity show that the average mass density within the films is near the bulk value regardless of film thickness .
Sound velocities in thin freely standing PS films measured by Brillouin light scattering are reported to be the same for all films with various film thicknesses. This also suggests that the average mass density of thin films is the same as that of bulk samples . In the layer model considered presently it is assumed that there are thin liquid-like and dead layers in addition to the layer with bulk properties. Because the liquid-like layer has a lower mass density and the dead layer has a higher mass density than the bulk layer, it is not unreasonable to assume that the average mass density of these thin films is the same as that of bulk samples. In the case of freely standing films, because there is no dead layer, the average mass density is expected to become lower for very thin films than that of bulk samples. However, the observed value of the average mass density does not change with film thickness. The simple layer model may no longer be valid for freely standing films, and it may be the case that another physical factor must be taken into account. It should also be noted here that picosecond acoustic techniques reveal an increase in the longitudinal sound velocity for thin films of poly(methyl methacrylate) and PS . This suggests a change in the average mass density of thin films from that of bulk samples. This result disagrees with that obtained in Ref. . ## IV Dynamics of the $`\alpha `$-process of thin films In this section, results concerning the dielectric loss during the heating process are given to allow for discussion of how the dynamics of the $`\alpha `$-process change with decreasing $`T_\mathrm{g}`$ as a result of decreasing thickness. First, the temperature dependence of the dielectric loss at a fixed frequency is investigated to directly compare the thickness dependence of $`T_\mathrm{g}`$ with that of the dynamics of the $`\alpha `$-process. Second, the results for the dielectric loss in the frequency domain under an isothermal condition are given to confirm the results obtained at fixed frequency and to allow for discussion of the relaxation behavior due to the $`\alpha `$-process in thin polymer films. ### A Dielectric loss with fixed frequency Figure 4 shows the reduced dielectric loss $`\stackrel{~}{ϵ}^{\prime \prime }/\stackrel{~}{ϵ}_{\mathrm{max}}^{\prime \prime }`$ as a function of temperature at 100Hz for a-PS samples of thickness 9nm, 26nm and 105nm with $`M_\mathrm{w}`$=2.8$`\times `$10<sup>5</sup>. Here, the reduced dielectric loss is defined by $`\stackrel{~}{ϵ}^{\prime \prime }/\stackrel{~}{ϵ}_{\mathrm{max}}^{\prime \prime }=(C^{\prime \prime }(T)-C^{\prime \prime }(T_0))/(C_{\mathrm{max}}^{\prime \prime }(T_\alpha )-C^{\prime \prime }(T_0))`$, where $`C^{\prime \prime }`$ is the imaginary part of the complex capacitance, $`T_0`$ is a standard temperature (in this case, room temperature), and $`C_{\mathrm{max}}^{\prime \prime }`$ is the peak value of $`C^{\prime \prime }(T)`$ due to the $`\alpha `$-process. Above $`T_\mathrm{g}`$ the dielectric loss $`\stackrel{~}{ϵ}^{\prime \prime }/\stackrel{~}{ϵ}_{\mathrm{max}}^{\prime \prime }`$ for a given frequency displays an anomalous increase with temperature due to the $`\alpha `$-process, and it possesses a maximum at the temperature $`T_\alpha `$. The value of $`T_\alpha `$ and the width of the $`\alpha `$-peak, $`\mathrm{\Delta }T_\alpha `$, also depend on $`d`$, as shown in Fig.4. Here, $`\mathrm{\Delta }T_\alpha `$ is defined as the temperature difference between $`T_\alpha `$ and the lower temperature at which $`\stackrel{~}{ϵ}^{\prime \prime }`$ is half its peak value.
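The peak parameters can be extracted by fitting the empirical Lorentzian form quoted in the caption of Fig. 4. A minimal sketch on synthetic data (illustrative values only, not the measured loss) is:

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of the peak analysis of Fig. 4: fit the reduced loss to
# the Lorentzian-in-temperature form and read off T_alpha and Delta T_alpha.
def peak(T, T_alpha, dT_alpha):
    return 1.0 / (1.0 + ((T - T_alpha) / dT_alpha) ** 2)

T = np.linspace(370, 430, 121)
rng = np.random.default_rng(1)
data = peak(T, 402.0, 12.0) + rng.normal(0, 0.02, T.size)   # synthetic loss

(T_a, dT_a), _ = curve_fit(peak, T, data, p0=(400.0, 10.0))
print(f"T_alpha = {T_a:.1f} K, Delta T_alpha = {dT_a:.1f} K")
```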
As shown in Fig.5(b), the width $`\mathrm{\Delta }T_\alpha `$ begins to increase at about 100 nm and continues to increase monotonically with decreasing $`d`$. The $`d`$ dependence of $`\mathrm{\Delta }T_\alpha `$ can be expressed as $$\mathrm{\Delta }T_\alpha (d)=\mathrm{\Delta }T_\alpha ^{\mathrm{\infty }}\left(1+\frac{a^{\prime }}{d}\right),$$ (18) where $`a^{\prime }`$=6.9$`\pm `$0.8 nm and $`\mathrm{\Delta }T_\alpha ^{\mathrm{\infty }}`$=10.3$`\pm `$0.4 K. Comparing the $`d`$-dependence of $`\mathrm{\Delta }T_\alpha `$ with that of $`T_\mathrm{g}`$ (Fig.2), we find that the decrease of $`T_\mathrm{g}`$ is directly correlated with the broadening of the $`\alpha `$-peak as $`\delta (T_\mathrm{g}(d))/T_\mathrm{g}^{\mathrm{\infty }}`$=$`-C_1\delta (\mathrm{\Delta }T_\alpha (d))/\mathrm{\Delta }T_\alpha ^{\mathrm{\infty }}`$, where $`\delta T_\mathrm{g}(d)`$=$`T_\mathrm{g}(d)-T_\mathrm{g}^{\mathrm{\infty }}`$, $`\delta (\mathrm{\Delta }T_\alpha (d))`$=$`\mathrm{\Delta }T_\alpha (d)-\mathrm{\Delta }T_\alpha ^{\mathrm{\infty }}`$, and $`C_1`$ is a constant (4.8$`\times `$10<sup>-2</sup> to 6.5$`\times `$10<sup>-2</sup>). In other words, it can be concluded that the broadening of the distribution of relaxation times for the $`\alpha `$-process is closely correlated with the decrease of $`T_\mathrm{g}`$. Contrastingly, Fig.5(a) shows that $`T_\alpha `$ remains almost constant as $`d`$ is decreased, down to a critical thickness $`d_\mathrm{c}`$, at which point it begins to decrease linearly with decreasing $`d`$. Therefore, $`T_\alpha `$ is given as follows: $$T_\alpha (d)=\left\{\begin{array}{cc}T_\alpha ^{\mathrm{\infty }},& d>d_\mathrm{c}\\ T_\alpha ^{\mathrm{\infty }}\left(1+\frac{d-d_\mathrm{c}}{\zeta }\right),& d<d_\mathrm{c},\end{array}\right.$$ (21) where $`T_\alpha ^{\mathrm{\infty }}`$ and $`\zeta `$ are constants. The functional form of $`T_\alpha `$ with respect to $`d`$ is independent of $`M_\mathrm{w}`$, because Eq.(21) reproduces the experimental values of $`T_\alpha `$ well for the two different molecular weights $`M_\mathrm{w}`$=2.8$`\times `$10<sup>5</sup> and 1.8$`\times `$10<sup>6</sup>. The parameters $`d_\mathrm{c}`$ and $`\zeta `$ show a distinct molecular weight dependence, as shown in Table II. The $`M_\mathrm{w}`$ and $`d`$ dependences of $`T_\alpha `$ are quite different from those of $`T_\mathrm{g}`$ and $`\mathrm{\Delta }T_\alpha `$ found in the present and previous measurements on supported PS films . They are similar to those of $`T_\mathrm{g}`$ for freely standing films of a-PS . The values of $`d_\mathrm{c}`$ listed in Table II are $`d_\mathrm{c}`$=11 nm for $`M_\mathrm{w}`$=2.8$`\times `$10<sup>5</sup> and $`d_\mathrm{c}`$=20–23 nm for $`M_\mathrm{w}`$=1.8$`\times `$10<sup>6</sup>. These values seem to be related to the radius of gyration of the bulk polymer coil ($`R_\mathrm{g}`$=0.028$`\times `$$`\sqrt{M}`$ (nm) ): $`R_\mathrm{g}`$=15 nm for $`M_\mathrm{w}`$=2.8$`\times `$10<sup>5</sup> and 38 nm for $`M_\mathrm{w}`$=1.8$`\times `$10<sup>6</sup>. Furthermore, if we assume that $`d_\mathrm{c}`$ and $`\zeta `$ can be scaled in the form $`d_\mathrm{c}\propto M^ϵ`$ and $`\zeta \propto M^\gamma `$, where $`M`$ is the molecular weight of the polymers, the values of $`ϵ`$ and $`\gamma `$ can be estimated as follows: $`ϵ`$=0.38$`\pm `$0.10 and $`\gamma `$= 0.51$`\pm `$ 0.08.
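A back-of-the-envelope check of the quoted exponent follows from the two $`d_\mathrm{c}`$ values above (taking the midpoint of the 20–23 nm range; this is a rough estimate, not the actual fit):

```python
import numpy as np

# Quick check of d_c ~ M^epsilon from the two critical thicknesses above;
# 21.5 nm is the midpoint of the quoted 20-23 nm range.
M1, d1 = 2.8e5, 11.0
M2, d2 = 1.8e6, 21.5
eps = np.log(d2 / d1) / np.log(M2 / M1)
print(f"epsilon = {eps:.2f}  (quoted: 0.38 +/- 0.10; Gaussian-chain nu = 0.5)")
```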
These values are nearly equal or similar to the exponent of the radius of gyration for Gaussian chains ($`\nu =0.5`$). In the molten state, polymer chains can be regarded as Gaussian chains. This result suggests that a length scale such as the radius of gyration of the bulk polymer may control the drastic change of $`T_\alpha `$ near and below $`d_\mathrm{c}`$. In other words, the deformation of the random coils of polymer chains confined in thin films may cause the observed decrease in $`T_\alpha `$ with decreasing film thickness. As discussed later, one possible origin of this deformation is the competition between the liquid-like layer and the dead layer. The investigation of the $`d`$ dependence of $`T_\alpha `$ for samples with various $`M_\mathrm{w}`$ will reveal the detailed mechanism of the drastic decrease in $`T_\alpha `$ with decreasing $`d`$. Here, it should be noted that the length scale $`a^{\prime }`$ included in the expression for $`\mathrm{\Delta }T_\alpha `$ as a function of $`1/d`$ is much larger than $`a`$ in $`T_\mathrm{g}`$: $`a^{\prime }/a=15.3`$. This can be explained in the following way. It is assumed that the shape of the loss peak due to the $`\alpha `$-process in the plot of $`ϵ^{\prime \prime }`$ vs. $`T`$ can be expressed by the same function of $`T_\alpha `$ and $`\mathrm{\Delta }T_\alpha `$ for any film thickness $`d`$, and that $`T_\mathrm{g}`$ can be regarded as the temperature at which the dielectric loss begins to increase due to the $`\alpha `$-process. From this assumption, the following relation can be obtained: $$T_\alpha (d)-T_\mathrm{g}(d)=A\times \mathrm{\Delta }T_\alpha (d),$$ (22) where $`A`$ is a constant. Taking into account that for $`d>d_\mathrm{c}`$, $`T_\alpha (d)=T_\alpha ^{\mathrm{\infty }}`$, and substituting Eqs.(14) and (18) into Eq.(22), we obtain the relations $`T_\alpha ^{\mathrm{\infty }}-T_\mathrm{g}^{\mathrm{\infty }}=A\mathrm{\Delta }T_\alpha ^{\mathrm{\infty }}`$ and $`T_\mathrm{g}^{\mathrm{\infty }}a=A\mathrm{\Delta }T_\alpha ^{\mathrm{\infty }}a^{\prime }`$. Hence, the value of $`a^{\prime }/a`$ is expressed by $$\frac{a^{\prime }}{a}=\frac{T_\mathrm{g}^{\mathrm{\infty }}}{T_\alpha ^{\mathrm{\infty }}-T_\mathrm{g}^{\mathrm{\infty }}}.$$ (23) Using Eq.(23) with the observed values of $`T_\mathrm{g}^{\mathrm{\infty }}`$ and $`T_\alpha ^{\mathrm{\infty }}`$ we obtain $`a^{\prime }/a\approx `$17 for $`f`$=100Hz and 12 for $`f`$=1kHz. Although the errors in $`\mathrm{\Delta }T_\alpha `$ prevent us from obtaining the frequency dependence of $`a^{\prime }`$, the values of $`a^{\prime }/a`$ evaluated using the above assumption agree well with those found in the present measurements. Recently, Forrest et al. obtained $`\alpha `$-relaxation data with a characteristic time scale $`\tau \approx `$2$`\times `$10<sup>-4</sup>s using a quartz crystal microbalance technique applied to supported PS films covered with SiC particles . It was reported that the small dissipation peak $`T_{\mathrm{max}}`$, which corresponds to $`T_\alpha `$ in this paper, exhibits the same $`d`$ dependence as $`T_\mathrm{g}`$ when the values of $`T_\mathrm{g}`$ are shifted by 20K. The $`d`$ dependence of $`T_{\mathrm{max}}`$ found in their measurements seems to be different from that found in the present measurements. The values of $`T_\mathrm{g}`$ used for comparison with $`T_{\mathrm{max}}`$ were observed by ellipsometry for PS films supported on a hydrogen-passivated Si substrate with a free surface . Hence, the comparison between $`T_\mathrm{g}`$ and $`T_{\mathrm{max}}`$ in Ref.
The results obtained in Ref. may have been affected by a small difference in experimental conditions between the two different measurements of $`T_\mathrm{g}`$ and $`T_{\mathrm{max}}`$. Furthermore, $`T_{\mathrm{max}}`$ plotted in the inset of Fig.2 of Ref. can also be fitted by the equation proposed for $`T_\alpha `$ in this paper (see Eq.(21)), and the critical thickness $`d_\mathrm{c}`$ is found to be 35 nm.

FIG. 7.: Peak frequency of the dielectric loss due to the $`\alpha `$-process as a function of the inverse of temperature for thin films of a-PS with various film thicknesses ($`M_\mathrm{w}`$=1.8$`\times `$10<sup>6</sup>). Solid curves were obtained by fitting the data to the VFT equation, $`f_{\mathrm{max}}=f_0\mathrm{exp}(-U/(T-T_V))`$, where $`f_0`$, $`U`$, and $`T_V`$ are constants.

### B Dielectric relaxation behavior of thin films

Here, we give the results for the imaginary part of the complex capacitance (dielectric loss) as a function of frequency, to facilitate discussion of the dynamics of thin films of a-PS with various film thicknesses between 14 nm and 194 $`\mu `$m. Figure 6 displays the dielectric loss vs. frequency at various temperatures above $`T_\mathrm{g}`$ for a-PS ($`M_\mathrm{w}`$=1.8$`\times `$10<sup>6</sup>) with film thicknesses of (a) 194 $`\mu `$m (bulk sample), (b) 91 nm and (c) 14 nm. The peak in Fig.6 is that due to the $`\alpha `$-process. It is found that the relaxation behavior of the $`\alpha `$-process changes with temperature and thickness. The peak frequency shifts to the higher frequency side as the temperature increases. In Fig.7 the peak frequency $`f_{\mathrm{max}}`$, which corresponds to the inverse of the relaxation time $`\tau _\alpha `$ of the $`\alpha `$-process, is plotted as a function of the inverse temperature. It is found that the values of $`\tau _\alpha `$ for the films with thicknesses from 33 nm to 194 $`\mu `$m fall on the same curve, which can be described by the Vogel-Fulcher-Tammann (VFT) equation.

FIG. 8.: Dependence of the normalized dielectric loss on the logarithm of the normalized frequency ($`M_\mathrm{w}`$=1.8$`\times `$10<sup>6</sup>). The two axes are normalized with respect to the peak position due to the $`\alpha `$-process, corresponding to $`ϵ_{\mathrm{max}}^{\prime \prime }`$ and $`f_{\mathrm{max}}`$. The numbers given in the right margin are the film thicknesses. The solid curves for each thickness were obtained by fitting the data to the HN equation; the best-fit parameter values are given in Table III. The various plotting symbols denote the measurement temperatures (380.5 K to 408.1 K, depending on thickness).

Here it should be noted that the values of $`T_\mathrm{g}`$ for the films with thicknesses of 33 nm and 91 nm are smaller than those of thicker films, although $`\tau _\alpha `$ remains constant.
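The VFT fit quoted in the Fig. 7 caption can be sketched as follows; the data points here are synthetic stand-ins for the measured peak frequencies, and the starting constants are assumptions. Fitting $`\mathrm{ln}f_{\mathrm{max}}`$ rather than $`f_{\mathrm{max}}`$ keeps the least-squares problem well conditioned.

```python
import numpy as np
from scipy.optimize import curve_fit

def ln_vft(T, ln_f0, U, T_V):
    """Logarithm of the VFT law from the Fig. 7 caption:
    f_max = f0 * exp(-U / (T - T_V))  ->  ln f_max = ln f0 - U / (T - T_V)."""
    return ln_f0 - U / (T - T_V)

rng = np.random.default_rng(0)
T = np.linspace(385.0, 415.0, 10)   # K (synthetic temperatures)
ln_f = ln_vft(T, np.log(1e9), 1500.0, 330.0) + 0.05 * rng.standard_normal(T.size)

popt, _ = curve_fit(ln_vft, T, ln_f, p0=(20.0, 1000.0, 320.0))
print("ln f0 = %.1f, U = %.0f K, T_V = %.0f K" % tuple(popt))
```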
As the thickness decreases further, down to 14 nm, $`\tau _\alpha `$ becomes much shorter than that for thicker films. It follows from this result that the relaxation time $`\tau _\alpha `$ of the $`\alpha `$-process remains constant down to a critical thickness, below which it begins to decrease. This result is consistent with that extracted from the experimental observations of the dielectric loss at fixed frequency discussed in Sec.IV.A.

FIG. 9.: The relaxation function $`\varphi `$ as a function of the logarithm of the reduced time $`t/\tau _{\text{KWW}}`$ for a-PS thin films with various thicknesses. The relaxation functions plotted by the 6 different symbols were calculated using Eq.(24) and Eq.(25) with the best-fit parameters $`\alpha _{\mathrm{HN}}`$ and $`\beta _{\mathrm{HN}}`$ listed in Table III for each thickness. The solid curves were obtained by fitting the relaxation functions to the KWW equation.

In order to discuss the relaxation behavior of the $`\alpha `$-process, the profiles in Fig.6 are replotted by scaling them with respect to the peak positions and peak heights. Figure 8 shows that the profiles of dielectric loss vs. frequency can be reduced in this way to a single master curve over the temperature range above $`T_\mathrm{g}`$ described in the figure captions. It is clearly found that the peak profiles become broader as the film thickness decreases. The master curve for each film thickness can be fitted by using the Havriliak-Negami (HN) equation , $`ϵ^{\prime \prime }(\omega )=\mathrm{Im}{\displaystyle \frac{\mathrm{\Delta }ϵ}{[1+(i\omega \tau _0)^{1-\alpha _{HN}}]^{\beta _{HN}}}},`$ (24) where $`\mathrm{\Delta }ϵ`$ is the dielectric strength, $`\omega `$ is the angular frequency (= $`2\pi f`$), $`\tau _0`$ is the apparent relaxation time, and $`\alpha _{HN}`$ and $`\beta _{HN}`$ are shape parameters. The values of $`\alpha _{\mathrm{HN}}`$ and $`\beta _{\mathrm{HN}}`$ resulting in the best fit are given in Table III. The solid curves given in Fig.8 are calculated using the HN equation. Here, the data in the frequency domain are converted into the time domain.

TABLE III.: The values of $`\alpha _{\mathrm{HN}}`$, $`\beta _{\mathrm{HN}}`$ and $`\beta _{\text{KWW}}`$ resulting in the best fit for thin films of a-PS with $`M_\mathrm{w}`$=1.8$`\times `$10<sup>6</sup>.

| $`d`$ (nm) | $`\alpha _{\mathrm{HN}}`$ | $`\beta _{\mathrm{HN}}`$ | $`\beta _{\text{KWW}}`$ |
| --- | --- | --- | --- |
| 194$`\times `$10<sup>3</sup> | 0.22 ± 0.01 | 0.46 ± 0.01 | 0.419 ± 0.012 |
| 408 | 0.22 ± 0.01 | 0.48 ± 0.02 | 0.435 ± 0.017 |
| 187 | 0.25 ± 0.01 | 0.46 ± 0.03 | 0.399 ± 0.021 |
| 91 | 0.25 ± 0.01 | 0.47 ± 0.01 | 0.406 ± 0.014 |
| 33 | 0.38 ± 0.01 | 0.51 ± 0.02 | 0.344 ± 0.017 |
| 14 | 0.40 ± 0.02 | 0.37 ± 0.03 | 0.271 ± 0.019 |

FIG. 10.: The KWW exponent $`\beta _{\text{KWW}}`$ as a function of the inverse of the thickness $`d`$ ($`M_\mathrm{w}`$=1.8$`\times `$10<sup>6</sup>). The values of $`\beta _{\text{KWW}}`$ were obtained by fitting the relaxation function to the KWW equation. The solid line was plotted using Eq.(27).
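A minimal sketch of the HN form of Eq. (24), evaluated with the bulk and 14 nm entries of Table III ($`\mathrm{\Delta }ϵ`$ and $`\tau _0`$ are set to unity, which only fixes the scales):

```python
import numpy as np

def hn_loss(omega, d_eps, tau0, alpha_hn, beta_hn):
    """Dielectric loss from the Havriliak-Negami form, Eq. (24):
    eps''(omega) = Im[ d_eps / (1 + (i*omega*tau0)^(1 - alpha))^beta ]."""
    z = d_eps / (1.0 + (1j * omega * tau0) ** (1.0 - alpha_hn)) ** beta_hn
    return np.abs(z.imag)   # absolute value: the sign convention varies

omega = np.logspace(-3, 3, 7)   # rad/s
for label, (a, b) in (("bulk ", (0.22, 0.46)), ("14 nm", (0.40, 0.37))):
    print(label, np.round(hn_loss(omega, 1.0, 1.0, a, b), 4))
```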
Using the equation $`\varphi (t)={\displaystyle \frac{2}{\pi }}{\displaystyle \int _0^{\infty }}{\displaystyle \frac{ϵ^{\prime \prime }(\omega )}{\mathrm{\Delta }ϵ}}\mathrm{cos}(\omega t){\displaystyle \frac{d\omega }{\omega }},`$ (25) the relaxation function $`\varphi (t)`$ can be calculated via the HN equation with the best-fit parameters for thin films of various thicknesses, as shown in Fig.9. The shape of the relaxation function changes with film thickness. As seen in Fig.9, the relaxation function thus obtained can be fitted quite well by the Kohlrausch-Williams-Watts (KWW) equation $`\varphi (t)=\mathrm{exp}\left[-\left({\displaystyle \frac{t}{\tau _{\text{KWW}}}}\right)^{\beta _{\text{KWW}}}\right],`$ (26) for any thickness. It is also found that the decay of the relaxation function becomes slower as the film thickness decreases. Figure 10 displays the exponent $`\beta _{\text{KWW}}`$ as a function of the inverse of the film thickness. It is found that $`\beta _{\text{KWW}}`$ decreases from 0.42 to 0.27 as the thickness changes from 194 $`\mu `$m to 14 nm. The functional form of $`\beta _{\text{KWW}}`$ with respect to the inverse of the thickness is found to be linear: $`\beta _{\text{KWW}}=\beta _{\text{KWW}}^{\infty }\left(1-{\displaystyle \frac{a^{\prime \prime }}{d}}\right),`$ (27) where $`\beta _{\text{KWW}}^{\infty }`$=0.423$`\pm `$0.006 and $`a^{\prime \prime }`$=(5.2$`\pm `$0.4) nm. The value of $`\beta _{\text{KWW}}`$ is a measure of the distribution of relaxation times $`\tau _\alpha `$ of the $`\alpha `$-process; $`i.e.`$, the distribution becomes broader as $`\beta _{\text{KWW}}`$ becomes smaller. Therefore, as the thickness decreases, the distribution of the relaxation times $`\tau _\alpha `$ becomes broader according to Eq.(27).

FIG. 11.: Reduced dielectric loss as a function of temperature during the cooling process for a-PS with film thicknesses of 105 nm and 9 nm ($`M_\mathrm{w}`$=2.8$`\times `$10<sup>5</sup>): (a) $`f`$=100Hz and (b) $`f`$=1kHz. The solid curves for the $`\alpha _\mathrm{l}`$-process were obtained by fitting the data to the equation $`\stackrel{~}{ϵ}^{\prime \prime }=\sum _{j=\alpha ,\alpha _\mathrm{l}}\stackrel{~}{ϵ}_{\mathrm{max},j}^{\prime \prime }/(1+((T-T_j)/\mathrm{\Delta }T_j)^2)`$.

The characteristic length scale $`a^{\prime \prime }`$ obtained in this analysis of $`\beta _{\text{KWW}}`$ is consistent with the value $`a^{}`$ obtained for $`\mathrm{\Delta }T_\alpha `$. Furthermore, the relative change of $`\beta _{\text{KWW}}`$ measured with respect to that of the bulk sample, $`\delta \beta _{\text{KWW}}(d)=\beta _{\text{KWW}}^{\infty }-\beta _{\text{KWW}}(d)`$, is directly related to the relative change of $`T_\mathrm{g}`$ as follows: $`{\displaystyle \frac{\delta T_\mathrm{g}(d)}{T_\mathrm{g}^{\infty }}}=-9.6\times 10^{-2}\times {\displaystyle \frac{\delta \beta _{\text{KWW}}(d)}{\beta _{\text{KWW}}^{\infty }}}.`$ (28) The values of $`\beta _{\text{KWW}}`$ for freely standing films of a-PS have been evaluated using photon correlation spectroscopy and found to be indistinguishable from those of bulk PS . On the other hand, a decrease in $`\beta _{\text{KWW}}`$ has been observed for a copolymer thin film supported on quartz , as found in the present measurements on PS supported on a glass substrate. We thus conclude that there is a large difference between the dynamics of freely standing films and that of supported thin films.
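The conversion from Eq. (24) to the time domain and the subsequent KWW fit can be sketched numerically as below. The oscillatory integral of Eq. (25) is handled with a cos-weighted quadrature on a large finite interval, which is crude but sufficient for an illustration; with the bulk parameters of Table III the fitted exponent should come out near the quoted $`\beta _{\text{KWW}}`$=0.42.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

def hn_loss(w, alpha_hn, beta_hn):
    """Normalized HN loss eps''(w)/d_eps with tau0 = 1, Eq. (24)."""
    return np.abs((1.0 / (1.0 + (1j * w) ** (1.0 - alpha_hn)) ** beta_hn).imag)

def phi(t, alpha_hn, beta_hn):
    """Relaxation function from Eq. (25): (2/pi) Int [eps''(w)/d_eps] cos(w t) dw/w."""
    val, _ = quad(lambda w: hn_loss(w, alpha_hn, beta_hn) / w, 1e-8, 1e4,
                  weight="cos", wvar=t, limit=1000)
    return 2.0 * val / np.pi

def kww(t, tau, beta):
    """KWW stretched exponential, Eq. (26)."""
    return np.exp(-(t / tau) ** beta)

t = np.logspace(-2, 2, 25)
phi_t = np.array([phi(ti, 0.22, 0.46) for ti in t])   # bulk row of Table III
(tau_fit, beta_fit), _ = curve_fit(kww, t, phi_t, p0=(1.0, 0.5))
print(f"beta_KWW ~ {beta_fit:.2f}")
```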
In the case of thin films supported on a substrate, it is easily understood that the existence of the substrate may cause a broadening of the distribution of $`\alpha `$-relaxation times, because the dynamics of polymer chains near the boundary should depend on the distance from the substrate. Although only the results for the dielectric loss in thin films of a-PS with $`M_\mathrm{w}`$=1.8$`\times `$10<sup>6</sup> are given in Sec.IV.B, the results obtained for thin films with $`M_\mathrm{w}`$=2.8$`\times `$10<sup>5</sup> are consistent with these.

## V Discussion and Summary

In the case of thin polymer films supported on a substrate, not only surface effects but also the interactions between the substrate and the film strongly affect the dynamics and the glass transition of the thin films. We introduced a three-layer model in order to explain such surface and interfacial effects in Sec.III.B, following Ref.. In this model it is assumed that within a thin film there are a liquid-like layer and a dead layer in addition to a bulk-like layer. According to this model, two different $`\alpha `$-processes should exist, corresponding to the liquid-like layer and the bulk-like layer. (If the $`\alpha `$-process exists in the dead layer, it should appear only far above $`T_\mathrm{g}`$ of the bulk sample, $`i.e.`$, beyond the experimentally accessible temperature range.) We are now performing dielectric measurements to investigate the dynamical properties of a-PS thin films over a wider temperature range. Although the measurements are still in progress, direct evidence for the existence of different processes in thin films of a-PS is displayed in Fig.11. Figure 11 shows the normalized dielectric loss as a function of temperature for a-PS thin films with thicknesses of 105 nm and 9 nm and $`M_\mathrm{w}`$=2.8$`\times `$10<sup>5</sup>. In this case, the standard temperature $`T_0`$ is set to 180 K. For both film thicknesses ($`d`$=105 nm and 9 nm) the $`\alpha `$-process exists around 390 K for $`f`$=100Hz, as discussed in Sec.IV, while for the film with $`d`$=9 nm another $`\alpha `$-process exists at lower temperature. We refer to the latter as the $`\alpha _\mathrm{l}`$-process (see arrows in the figure). This peak in the dielectric loss due to the $`\alpha _\mathrm{l}`$-process shifts to the higher temperature side as the frequency of the applied electric field changes from 100Hz to 1kHz. This suggests that the dynamical behavior of this peak is similar to that of the $`\alpha `$-process. Since within the model the thickness of the liquid-like layer is assumed to be constant (independent of the thickness $`d`$), the contribution of the liquid-like layer should become more appreciable as the thickness decreases. Therefore, it is reasonable to attribute the loss peak of the $`\alpha _\mathrm{l}`$-process to the liquid-like layer. In other words, the existence of the $`\alpha _\mathrm{l}`$-process can be regarded as experimental evidence of the existence of a liquid-like layer within a thin film. Since the $`\alpha `$-process and the $`\alpha _\mathrm{l}`$-process can be attributed to the segmental motion of polymer chains in the bulk-like layer and the liquid-like layer (surface layer), respectively, we can expect that the characteristic time of the $`\alpha _\mathrm{l}`$-process is shorter than that of the $`\alpha `$-process at a given temperature, because polymer chains involved in the $`\alpha _\mathrm{l}`$-process have a higher mobility than those involved in the $`\alpha `$-process of the bulk-like layer.
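The two-peak decomposition used for Fig. 11 (the fit equation quoted in its caption) can be written compactly; all peak parameters below are assumed values for illustration only.

```python
import numpy as np

def two_peak_loss(T, peaks):
    """Sum of Lorentzian-in-temperature peaks, as in the Fig. 11 caption:
    eps'' = sum_j eps''_max,j / (1 + ((T - T_j)/Delta_T_j)^2)."""
    T = np.asarray(T, dtype=float)
    total = np.zeros_like(T)
    for eps_max, T_j, dT_j in peaks:
        total += eps_max / (1.0 + ((T - T_j) / dT_j) ** 2)
    return total

T = np.linspace(300.0, 420.0, 5)
# (height, position in K, width in K): alpha near 390 K, alpha_l at lower temperature
print(two_peak_loss(T, [(1.0, 390.0, 12.0), (0.3, 340.0, 15.0)]))
```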
At higher temperatures the segmental motions of the surface layer are expected to become less different from those of the bulk-like layer. Therefore, $`T_{\alpha _\mathrm{l}}`$ should be lower than $`T_\alpha `$ at a given frequency, and the difference between $`T_\alpha `$ and $`T_{\alpha _\mathrm{l}}`$ should become smaller with increasing frequency. Here, $`T_{\alpha _\mathrm{l}}`$ is the temperature at which the dielectric loss possesses a maximum due to the $`\alpha _\mathrm{l}`$-process at a given frequency. Because the chain end concentration in the surface layer is higher than that in the bulk-like layer, $`T_{\alpha _\mathrm{l}}`$ can be expected to have a stronger molecular-weight dependence than $`T_\alpha `$ does. If there are liquid-like and dead layers in addition to a bulk-like layer, it is plausible that there are boundary layers between the bulk-like layer and the dead layer and between the bulk-like layer and the liquid-like layer. The existence of such boundary layers causes a broadening of the distribution of the $`\alpha `$-relaxation times of the bulk-like layer, and this broadening is enhanced as the thickness decreases. Because $`T_\mathrm{g}`$ can be regarded as the temperature at which the anomalous increase in $`ϵ^{\prime \prime }`$ begins, $`T_\mathrm{g}`$ decreases as the distribution of $`\tau _\alpha `$ broadens; $`i.e.`$, as $`\mathrm{\Delta }T_\alpha `$ increases or $`\beta _{\text{KWW}}`$ decreases. Since the liquid-like and dead layers cause the distribution of $`\tau _\alpha `$ to broaden toward the shorter time side and the longer time side, respectively, the relaxation time $`\tau _\alpha `$ can remain constant as long as the two contributions cancel each other. In the present measurements, we measured the peak frequencies $`f_{\mathrm{max}}`$ of the dielectric loss due to the $`\alpha `$-process, which are equal to the inverse of $`\tau _\alpha `$, and it is found that $`f_{\mathrm{max}}`$ at a given temperature is almost independent of the thickness down to the critical thickness $`d_\mathrm{c}`$. Below the critical thickness, there is no $`\alpha `$-process with the dynamical properties observed in the bulk sample. In this case, the dynamical properties of the thin film are determined by the competition between the liquid-like layer and the dead layer within the thin film. The value of $`\tau _\alpha `$ can no longer remain constant, but decreases or increases, depending on whether the contributions from the liquid-like layer are stronger or weaker than those from the dead layer. In the present case, $`\tau _\alpha `$ and $`T_\alpha `$ decrease drastically for $`d<d_\mathrm{c}`$. This behavior can be accounted for by assuming that the contributions from the liquid-like layer are much stronger than those from the dead layer. Here, it should be noted that the observed value of $`d_\mathrm{c}`$ is larger than the sum of the thicknesses of the liquid-like layer and the dead layer, $`\xi +\delta `$: $`d_\mathrm{c}`$=11$`\sim `$23 nm, while $`\xi +\delta \sim `$10 nm. From this result it follows that for $`d`$=$`d_\mathrm{c}`$ the bulk-like layer still exists, but the relaxation time of the $`\alpha `$-process of the bulk-like layer differs from that of the bulk sample. This suggests the possibility that at $`d_\mathrm{c}`$ the thickness of the bulk-like layer ($`d-\xi -\delta `$) is comparable to the characteristic length scale $`\xi _{\text{CRR}}`$ of the $`\alpha `$-process.
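The resulting estimate of $`\xi _{\text{CRR}}`$ follows from simple arithmetic on the quoted numbers:

```python
# xi_CRR ~ d_c - (xi + delta), with d_c = 11~23 nm and xi + delta ~ 10 nm
for d_c in (11.0, 23.0):
    print(f"d_c = {d_c:4.1f} nm  ->  xi_CRR ~ {d_c - 10.0:4.1f} nm")
```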
Therefore, the value of $`\xi _{\text{CRR}}`$ can be estimated as 1$`\sim `$13 nm, if any characteristic length scale of the $`\alpha `$-process is assumed to exist at all. Because $`d_\mathrm{c}`$ depends on the molecular weight of a-PS, $`\xi _{\text{CRR}}`$ may also depend on the molecular weight. However, more precise measurements of $`\delta `$ and $`\xi `$, showing whether these values depend on the molecular weight, are necessary to elucidate the molecular weight dependence of $`\xi _{\text{CRR}}`$. In this paper, four different length scales, $`a`$, $`a^{}`$, $`a^{\prime \prime }`$ and $`d_\mathrm{c}`$, were extracted from the dielectric measurements. From the above discussion, $`d_\mathrm{c}`$ is expected to be related to the characteristic length scale $`\xi _{\text{CRR}}`$ of the $`\alpha `$-process of bulk samples, while the values of $`a`$, $`a^{}`$ and $`a^{\prime \prime }`$ are believed to be related to the heterogeneous structure of the mobility within thin films, or to surface and interfacial effects. According to the model of Adam and Gibbs, the size of the CRR increases as the temperature approaches $`T_\mathrm{g}`$. Near the glass transition temperature, the characteristic time of the $`\alpha `$-process is larger than 10<sup>3</sup> sec, and only the slow modes contribute appreciably to the dielectric loss. Hence, a more prominent $`d`$-dependence of the $`\alpha `$-process can be expected when the dielectric measurements are performed at frequencies much lower than those adopted in the present work. We plan to make such measurements in the future. In this paper, we made dielectric measurements on capped thin films of atactic polystyrene supported on an Al-deposited glass substrate. The results can be summarized as follows:

1. The glass transition temperature of thin films of a-PS has been determined using the temperature change of the electric capacitance. The observed $`T_\mathrm{g}`$ is consistent with the results obtained by other methods. A decrease in $`T_\mathrm{g}`$ with decreasing thickness has been confirmed.

2. The thermal expansion coefficient normal to the film surface, $`\alpha _\mathrm{n}`$, increases with decreasing thickness below the apparent $`T_\mathrm{g}`$, while it decreases with decreasing thickness above $`T_\mathrm{g}`$. The thickness dependence of $`\alpha _\mathrm{n}`$ can be described by a linear function of the inverse of the thickness.

3. The $`d`$ dependence of $`T_\mathrm{g}`$ is directly correlated with that of the width of the peak due to the $`\alpha `$-process in the temperature domain, and also with the distribution of relaxation times of the $`\alpha `$-process.

4. The temperature at which the dielectric loss due to the $`\alpha `$-process is maximal in the temperature domain and the $`\alpha `$-relaxation time obtained from the frequency dependence of the dielectric loss both remain constant down to the critical thickness $`d_\mathrm{c}`$, while below $`d_\mathrm{c}`$ they decrease drastically with decreasing thickness. The values of $`d_\mathrm{c}`$ depend on the molecular weight and are related to the radius of gyration of the polymer chains.

## VI Acknowledgments

This work was partly supported by a Grant-in-Aid from the Ministry of Education, Science, Sports and Culture of Japan.
# Vortex dynamics in a three-state model under cyclic dominance

## ACKNOWLEDGMENTS

We thank P. Krapivsky for a critical reading of the manuscript. G. S. acknowledges a senior research fellowship from PRAXIS (Portugal). Support from NATO (CRG-970332), PRAXIS (project PRAXIS/2/2.1/Fis/299/94) and the Hungarian National Research Fund (T-23552) is also acknowledged.
# Interactions and Interference in Quantum Dots: Kinks in Coulomb Blockade Peak Positions

## Abstract

We investigate the spin of the ground state of a geometrically confined many-electron system. For atoms, shell structure simplifies this problem: the spin is prescribed by the well-known Hund’s rule. In contrast, quantum dots provide a controllable setting for studying the interplay of quantum interference and electron-electron interactions in general cases. In a generic confining potential, the shell-structure argument suggests a singlet ground state for an even number of electrons. The interaction among the electrons produces, however, accidental occurrences of spin-triplet ground states, even for weak interaction, a limit which we analyze explicitly. Variation of an external parameter causes sudden switching between these states and hence a kink in the conductance. Experimental study of these kinks would yield the exchange energy for the “chaotic electron gas”.

The evolution of the properties of a system as a continuous change is made to it is a ubiquitous topic in quantum physics. The classic example is the evolution of energy levels as the strength of a perturbation is varied . Typically, neighboring energy levels do not cross each other, but rather come close and then repel in an “avoided crossing”. However, if there is an exact symmetry, neighboring levels can have different symmetries uncoupled by the perturbation, and in this special case they can cross. A new and powerful way of studying parametric evolution in many-body quantum systems is through the Coulomb blockade peaks that occur in mesoscopic quantum dots . The electrostatic energy of an additional electron on a quantum dot, an island of confined charge with quantized states, blocks the flow of current through the dot: the Coulomb blockade . Current can flow only if two different charge states of the dot are tuned to have the same energy; this produces a peak in the conductance through the dot. The position of the Coulomb blockade peak depends on the difference in ground state energy upon adding an electron, $`E_{\mathrm{gr}}(N+1)-E_{\mathrm{gr}}(N)`$. Thus, the evolution of the peak position reflects the evolution of these many-body energy levels as a parameter, such as magnetic field or gate voltage, is varied. Since quantum dots are generally irregular in shape, the orbital levels have no symmetry and so avoid crossing. However, the spin degrees of freedom are often decoupled from the rest of the system, so states of different total spin can cross. Such a crossing will cause an abrupt change in the evolution of the Coulomb blockade peak position, a kink, as the spin of the ground state changes, even though there is no crossing in the single-particle states. Such kinks have been observed in small circular dots, “artificial atoms”, because of their special circular symmetry . More generally, kinks should occur in generic quantum dots with no special symmetries . In fact, the data of Ref. show evidence for kinks in large dots, though they were not the subject of that investigation. Small circular dots behave much like atoms (hence “artificial atoms”): the circular symmetry causes degeneracy of the orbital levels and so a large spacing between allowed energies. In sharp contrast, there is no degeneracy in irregular dots: the typical single-particle orbital level separation is simply given by $`\mathrm{\Delta }\sim 1/\nu V`$, where $`\nu `$ is the bulk density of states and $`V`$ is the volume of the dot.
Kinks in the evolution of Coulomb blockade peak positions may occur whenever the ground state of the dot is separated from an excited state with different spin by an energy of order $`\mathrm{\Delta }`$. The interference effects causing the separation are unique to each state and change upon tuning. In fact, the two states may switch at a certain point, the former excited state becoming the ground state: such switching corresponds to a kink. Note that kinks occur in pairs: kinks in the peaks corresponding to the $`N\rightarrow N+1`$ transition and that for $`N-1\rightarrow N`$ both occur when $`E_{\mathrm{gr}}(N)`$ switches. We see from this argument that the occurrence of kinks in the evolution of peak positions with a parameter is a general property of quantum dots . Here we analyze these kinks explicitly in a particular limit. Consider a large quantum dot in which the single-particle properties are “random”: the statistics of the energies follow the classic random matrix ensembles and the wave-functions obey Gaussian statistics with a correlation function given by the superposition of random plane waves . The single-particle properties of such random systems have been extensively investigated, and it has been conjectured, with considerable evidence, that these are good models for quantum dots in which the classical dynamics is chaotic . To treat the Coulomb blockade, we must consider not only single-particle properties but interactions among the particles as well. One natural way to proceed is to treat these interactions in the basis of self-consistent single-particle states $`\{\psi _m(𝐫)\chi _\sigma (s)\}`$, where $`m`$ and $`\sigma `$ are the labels of the orbital and spin states respectively (we neglect the weak spin-orbit interaction). In the limit of zero interaction, two electrons occupy each filled orbital state, except for the top level when the total number of electrons is odd. Because of the interference produced by confinement in the dot, the electron density is not smooth but rather has small deviations from the classical-liquid result. Due to the electron-electron interaction, these deviations contribute to the ground-state energy, in addition to the conventional “classical-liquid” charging energy. If the interaction does not change the double-occupancy of the levels, one finds that the part of this contribution coming from the last doubly-occupied level $`n`$ is $`\xi _{n,n}\sim {\displaystyle \int }𝑑𝐫𝑑𝐫^{}\left[|\psi _n(𝐫)|^2-\langle |\psi _n(𝐫)|^2\rangle \right]V_{\mathrm{scr}}(𝐫-𝐫^{})\left[|\psi _n(𝐫^{})|^2-\langle |\psi _n(𝐫^{})|^2\rangle \right].`$ (2) It is appropriate to use the short-ranged screened interaction $`V_{\mathrm{scr}}`$ here since the smooth background of the other electrons provides screening; $`\langle \mathrm{\cdots }\rangle `$ denotes the standard ensemble averaging. If, because of the interactions, one of the electrons of that level is promoted to the next orbital state, the result (2) is modified to become $`\xi _{n,n+1}^\pm \sim {\displaystyle \int }𝑑𝐫𝑑𝐫^{}\left[|\psi _n(𝐫)|^2-\langle |\psi _n(𝐫)|^2\rangle \right]V_{\mathrm{scr}}(𝐫-𝐫^{})\left[|\psi _{n+1}(𝐫^{})|^2-\langle |\psi _{n+1}(𝐫^{})|^2\rangle \right]\pm {\displaystyle \int }𝑑𝐫𝑑𝐫^{}\psi _n(𝐫)\psi _{n+1}^{}(𝐫)V_{\mathrm{scr}}(𝐫-𝐫^{})\psi _n^{}(𝐫^{})\psi _{n+1}(𝐫^{}).`$ (4) The signs $`+`$ and $`-`$ in Eq. (4) correspond to the singlet and triplet states respectively.
For a large ballistic dot, $`k_FL\gg 1`$, the lack of correlation among the random wavefunctions $`\psi _n`$ and $`\psi _m`$ with $`n\ne m`$ leads to a hierarchy of the matrix elements of the interaction (here $`k_F`$ is the Fermi wave vector of the electrons in the dot, and $`L`$ is the linear size of the dot). The first integral in Eq. (4) vanishes in the limit $`k_FL\rightarrow \infty `$, and one is left only with the second, exchange interaction, term. In the same limit, the exchange interaction term has a non-zero average value and vanishingly small mesoscopic fluctuations. The lower of the two energies $`\xi _{n,n+1}^\pm `$ corresponds to the triplet state ($`-`$). Neglecting the small mesoscopic fluctuations of the energies $`\xi _{n,n}`$ and $`\xi _{n,n+1}^{-}`$, one finds the difference between the energy of the singlet state formed by doubly-occupied levels and that of the triplet state with two singly-occupied orbital levels: $`\xi \equiv \xi _{n,n}-\xi _{n,n+1}^{-}=2{\displaystyle \int }𝑑𝐫𝑑𝐫^{}V_{\mathrm{scr}}(𝐫-𝐫^{})\left|F(𝐫-𝐫^{})\right|^2=2\overline{\widehat{V}_{\mathrm{scr}}(𝐤-𝐤^{})}.`$ (6) Here $`F(𝐫)\equiv \overline{\mathrm{exp}(i𝐤^{}𝐫)}`$, with the bar denoting the average over the Fermi surface $`|𝐤^{}|=k_F`$. In the above argument we have implicitly assumed the absence of time-reversal symmetry. For $`B=0`$, the same basic argument holds, but the interaction in the Cooper channel should be included; this increases $`\xi `$, making the proportionality constant in Eq. (3) larger than 2. We thus consider a model with a single non-zero interaction constant $`\xi `$. This quantity is simply related to the usual electron gas parameter $`r_s`$ (the ratio of the Coulomb energy at the mean interparticle distance to the Fermi energy): $`\xi \approx (1/\sqrt{2}\pi )r_s\mathrm{ln}(1/r_s)\mathrm{\Delta }`$ for small $`r_s`$ and in the absence of time-reversal symmetry. As $`r_s`$ increases, the considerations discussed here apply until $`\xi `$ exceeds $`\mathrm{\Delta }`$, at which point the Stoner criterion for a magnetic instability is approached. For instance, for $`r_s=1`$, averaging the Thomas-Fermi screened potential over the Fermi surface yields $`\xi =0.6\mathrm{\Delta }`$ in two dimensions. The distribution of electrons among the levels depends on the single-particle level spacing compared to $`\xi `$. This is particularly clear when the total number of electrons $`N`$ is even: the top two electrons can either be in the same orbital level at a cost of $`\xi `$ in interaction energy, or one can be in level $`N/2`$ and the other in $`N/2+1`$ at an energy cost of $`ϵ_{N/2+1}-ϵ_{N/2}`$. Since the magnitudes of both $`\xi `$ and $`ϵ_{N/2+1}-ϵ_{N/2}`$ are of order $`\mathrm{\Delta }`$, sometimes the top level is doubly occupied and sometimes not. In the case of double occupation the state is, of course, a singlet; if two sequential levels are occupied, the exchange interaction leads to a triplet state. If at most two orbital levels are singly occupied, the ground state energy of the dot is, then, a sum of three terms: $$E_{\mathrm{gr}}=E_{\mathrm{ch}}+\underset{(n,\sigma )\mathrm{filled}}{}ϵ_{n,\sigma }+M\xi $$ (8) where $`M`$ is the number of doubly occupied levels. For our arguments the number of electrons is constant, and so the charging energy $`E_{\mathrm{ch}}`$ is irrelevant. Note that the energy (8) is equivalent to the simultaneous filling of two sequences of levels, one of which is rigidly shifted by $`\xi `$ from the other .
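In two dimensions the Fermi-surface average defining $`F(𝐫)`$ reduces to a Bessel function, $`\overline{\mathrm{exp}(i𝐤^{}𝐫)}=J_0(k_Fr)`$, which is easy to verify numerically; the short sketch below assumes a circular Fermi surface. The exchange constant $`\xi =2\overline{\widehat{V}_{\mathrm{scr}}(𝐤-𝐤^{})}`$ then follows from a one-dimensional angular integral once a screened potential is chosen.

```python
import numpy as np
from scipy.special import j0

k_F, r = 1.0, 2.3
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
# Average of exp(i k'.r) over directions of k' on the Fermi circle:
avg = np.exp(1j * k_F * r * np.cos(theta)).mean()
print(avg.real, j0(k_F * r))   # the two numbers agree
```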
Finally, if several orbital level spacings in sequence are small, more complicated configurations occur for both even and odd $`N`$ . Moreover, the problem of the ground-state spin of a mesoscopic system becomes very complicated upon approaching the Stoner instability . As a parameter is varied, the single-particle energies change and may cause a change in the level occupations and so a kink. This is explained and illustrated in Fig. 1. The distribution of the kinks in the parameter space follows from a random matrix model. We assume that the Hamiltonian of the dot follows the Gaussian Orthogonal Ensemble in the absence of a magnetic field (GOE, $`\beta =1`$) and the Gaussian Unitary Ensemble for nonzero $`B`$ (GUE, $`\beta =2`$) . The dependence on the parameter $`X`$ is included by means of a Gaussian process ; we consider the process $$H(X)=H_1\mathrm{cos}(X)+H_2\mathrm{sin}(X)$$ (9) where the distributions of $`H_1`$ and $`H_2`$ follow the appropriate Gaussian ensemble. Extensive work on parametric correlations has shown that the properties of Gaussian processes are universal when the energy is measured in units of the mean level separation $`\mathrm{\Delta }`$ and the parameter is scaled by the typical rate at which energies are perturbed, $`\langle (dϵ/dX)^2\rangle ^{1/2}`$ . For simplicity we restrict our attention to kinks caused by a change in configuration of the top two electrons when $`N`$ is even. The first quantity to consider is the mean density of kinks, $`\rho _{\mathrm{kink}}`$. First, because a kink occurs when $`ϵ_{N/2+1}-ϵ_{N/2}=\xi `$, this density is proportional to the probability of having such a level separation. Second, $`\rho _{\mathrm{kink}}`$ must reflect the rate at which the levels change as a function of the parameter. In fact, it is known that the distribution of the slopes of the levels, $`dϵ/dX`$, is Gaussian and independent of the distribution of the levels themselves . Thus the two contributions are simply multiplied: $`\rho _{\mathrm{kink}}={\displaystyle \frac{2}{\sqrt{\pi }}}p_ϵ^{(\beta )}(\xi )\langle ({\displaystyle \frac{dϵ}{dX}})^2\rangle ^{1/2}`$ (10) $`\propto \xi ^\beta \text{ for small }\xi `$ (11) where $`p_ϵ^{(\beta )}(s)`$ is the distribution of nearest-neighbor level separations, for which the Wigner surmise is an excellent approximation. $`\rho _{\mathrm{kink}}`$ has a strong dependence on $`\beta `$ when $`\xi `$ is small because of the symmetry dependence of $`p_ϵ^{(\beta )}`$, and so will be sensitive to a magnetic field. In fact, the sensitivity to a magnetic field could be used to extract experimental values for $`\xi `$ in quantum dots, a direct measure of the strength of interactions. Next, an important property is the spacing in $`X`$ between two neighboring kinks. For $`\xi \ll \mathrm{\Delta }`$, kinks occur when two orbital levels come very close and so are caused by avoided crossings of single-particle levels. Since each avoided crossing produces two kinks, kinks will occur in pairs, with small intra-pair and large inter-pair separations. The behavior near an avoided crossing is dominated by just two levels and characterized by 3 parameters: the mean and difference of the slopes far from the crossing, and the smallest separation. Wilkinson and Austin have derived the joint probability distribution of these parameters for Gaussian random processes .
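A direct simulation of the Gaussian process of Eq. (9) illustrates the kink mechanism. The sketch below uses a small GUE pair $`(H_1,H_2)`$, a crude global unfolding, and counts crossings of the central level gap through $`\xi `$ (each such crossing marks a kink); it is an illustration, not the scaled procedure used for the figures.

```python
import numpy as np

rng = np.random.default_rng(1)

def gue(n):
    """Random GUE matrix (Hermitian, Gaussian entries)."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2.0

N, XI = 100, 0.5
H1, H2 = gue(N), gue(N)

def central_gap(x):
    """Gap between the two levels nearest the band center of H(X), Eq. (9)."""
    e = np.sort(np.linalg.eigvalsh(H1 * np.cos(x) + H2 * np.sin(x)))
    return e[N // 2] - e[N // 2 - 1]

X = np.linspace(0.0, 2.0 * np.pi, 2000)
gap = np.array([central_gap(x) for x in X])
gap /= gap.mean()   # crude unfolding: gap in units of the mean central spacing
print("kinks along the scan:", np.count_nonzero(np.diff(np.sign(gap - XI))))
```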
By expressing the intra-pair separation in terms of the avoided-crossing parameters and integrating over the joint distribution, we find that the distribution of intra-pair separations is $`P_{\mathrm{intra}}(x)=2{\displaystyle \frac{x}{\xi ^2}}{\displaystyle \int _0^{\xi ^2/x^2}}𝑑u\mathrm{exp}(-u)\times \{\begin{array}{cc}u^2,\hfill & \beta =2\hfill \\ u^{3/2}(1-ux^2/\xi ^2)^{-1/2}/\sqrt{\pi },\hfill & \beta =1\hfill \end{array}`$ (15) For small $`x`$, $`P_{\mathrm{intra}}`$ is linear in the separation $`x`$ both with and without a magnetic field. The separation between pairs is usually large for small $`\xi `$: avoided crossings with a small gap are rare, typically many correlation lengths apart. Hence there is no correlation between pairs: $`P_{\mathrm{inter}}(x)`$ is Poisson (exponential) for large $`x`$. Correlation suppresses the probability of two close crossings. We make a simple model for this by assuming $`P_{\mathrm{inter}}\propto x`$ for $`x<x_0`$ and so approximate $`P_{\mathrm{inter}}`$ by $$P_{\mathrm{inter}}(x)=\frac{C}{\alpha }\{\begin{array}{cc}\mathrm{exp}(-(x-x_0)/\alpha ),\hfill & x>x_0\hfill \\ x/x_0,\hfill & x<x_0\hfill \end{array}$$ (16) where $`C`$ is a normalization constant. $`x_0`$ should be of order 1 in scaled units; we choose it to be the minimum of the correlation function of $`dϵ/dX`$: $`x_0=0.85`$ ($`0.6`$) for the GOE (GUE). The mean density Eq. (10), combined with the distribution $`P_{\mathrm{intra}}`$, then sets $`\alpha `$ through $$1/\rho _{\mathrm{kink}}=\langle x\rangle =\frac{1}{2}[\langle x\rangle _{\mathrm{intra}}+\langle x\rangle _{\mathrm{inter}}].$$ (17) In this way we have a parameter-free expression for the distribution of the separation between adjacent kinks. While the above theory is for $`\xi \ll \mathrm{\Delta }`$, in the experiments $`r_s\sim 1`$, so that $`\xi \sim \mathrm{\Delta }`$. Hence we turn to numerical calculation to test the range of validity of our expressions. Gaussian processes were produced using the Hamiltonian (9) with matrix size $`200`$ over the full interval $`X\in [0,2\pi ]`$; the middle third of the spectrum was used. Sample energy levels for $`\beta =2`$ are shown in Fig. 1. Fig. 2 shows the distribution of kink spacings for the experimentally relevant value $`\xi =0.5`$ and $`\beta =1,2`$. Though outside the regime of immediate applicability, the theoretical curves agree closely with the numerical data. Thus the simple two-level calculation captures the main features of the kink distribution for $`\xi \lesssim 1`$ and so is adequate for describing experiments in large dots . In conclusion, through the properties of kinks in the Coulomb blockade traces, experiments on quantum dots can directly determine the main interaction parameter in these dots and so obtain the exchange energy for the chaotic electron gas. We thank C.M. Marcus for a stimulating conversation which helped initiate this work and acknowledge valuable discussions with K.A. Matveev and I.L. Aleiner. HUB and LIG appreciate the hospitality of the ICCMP, Brasilia, Brazil, where this work started. After completion of this work, we learned of Ref. by P.W. Brouwer, Y. Oreg, and B.I. Halperin in which some similar results were obtained. Finally, we acknowledge support of NSF Grant DMR-9731756; the LPTMS is “Unité de recherche de l’Université Paris 11 associée au C.N.R.S.”.
# Measurements of the radiation hardness of selected scintillating and light guide fiber materials

## 1 Introduction

Recently a fiber detector was developed as an alternative solution for the inner tracker of the HERA-B experiment . With an accelerator cycle of 96 nsec, four events per cycle, and a charged multiplicity of about 200, the detector modules have to work for several years under an estimated integral dose in the inner tracker region of about 1 Mrad per year. The light produced by particles crossing the scintillating fibers of the detector is transported by 3 m long light guide fibers to 64 channel multianode photomultipliers Hamamatsu R5900-M64 (Hamamatsu Photonics K.K., Electron tube division, 3124-5, Shimokanzo, Tokyooka Village, Iwatagun, Shizuoka-ken, Japan), available only with bialkali photocathodes. The best light output and long term stability were obtained for KURARAY (KURARAY Co. Ltd., Nikonbashi, Chuo-ku, Tokyo 103, Japan) scintillating double clad fibers SCSF-78M and clear double clad fibers from KURARAY and pol.hi.tech. (pol.hi.tech. s.r.l., S.P. Turanense, 67061 Carsoli (AQ), Italy). The corresponding radiation hardness studies were performed with high dose rates in the 70 MeV proton and 2 MeV electron beams of the Hahn-Meitner Institute Berlin and with photons from a <sup>60</sup>Co source . In the first case in-situ measurements of the light output, even with spectral resolution, were possible. In the past there were several arguments that the presence of oxygen during and after the irradiation may be important for the observed damage. In this case also the dose rate may influence the final result, because diffusion processes are time dependent. Our results from high dose rates using charged particle beams will be summarized below and compared to new data from low dose rate exposures of the same materials. The new tests were performed in air and in nitrogen atmosphere with glued and non-glued scintillating fibers and compared with non-irradiated test samples.

## 2 Experimental conditions

### 2.1 Fiber samples

The fiber samples for proton and electron irradiation have the same global structure, as shown in Fig. 1. For high rate irradiation 4$`\times `$4 fibers of 0.48 mm diameter were glued together, resulting in a cross section of about 2$`\times `$2 mm<sup>2</sup>. The samples for electron low dose rate irradiation consist of an arrangement of 1$`\times `$7 fibers of the same diameter, forming a fiber road as in the later detector. Coupling pieces are mounted at both ends of the 30 cm long samples, in which the ends of the fibers are inserted, glued and polished. This allows an optical coupling to light guides or photomultipliers with light losses of less than 10 %. For low dose rate electron irradiation there were two types of samples. The first type is fully glued to shield the fibers from the gaseous environment, whereas the second type is mounted using a minimum of glue in thin strips near the connectors, in order to allow the gaseous atmosphere to have contact with the fiber material. For the in-situ measurements single fibers were coupled at one or both ends to glass fibers which transport the light to the corresponding spectrometers (see Fig. 2).

### 2.2 Irradiation setup

A schematic view of the irradiation setup in the proton and electron beams is given in Fig. 3.
For electron irradiation the beam was extracted from the vacuum system through a window of 100 $`\mu `$m thick Aluminium and 40 $`\mu `$m Inconel. A metallic aperture of 3$`\times `$12 mm<sup>2</sup> was used for beam profile definition. In the case of the proton irradiation the beam was extracted through a 7 $`\mu `$m thick Tantalum foil. The beam size and the emittance angle were limited by two PMMA (polymethyl methacrylate) apertures. The total range of protons in the fiber material is about 39 mm, which was checked via the profile of the colour changes in the PMMA aperture during the irradiation. The spot size and position were additionally monitored by polyvinylalcohol (PVA) methylene blue plastic detector foils . The dye is radiation sensitive and its degradation yield is proportional to the irradiated particle fluence. The degradation of the dye in the foil has been determined by UV-VIS spectroscopy. A typical result of such a beam homogeneity control for proton irradiation is depicted in Fig. 4. The higher the transparency, the higher was the irradiation dose in the given area. At positions 1 and 2 a high radiation level with low restriction in the field distribution is registered. The profile created by the plastic apertures is given by curves 3 and 4. A low, non-structured irradiation level is characterized by curves 5 and 6. For the in-situ registration of beam excited scintillation spectra, fiber optic PC-plug-in spectrometers (Ocean Optics Inc., 380 Main Street, Dunedin, FL 34698, USA) were used. They were placed outside the cave in order to suppress the high radiation background, using 22 m long light-guiding glass fibers . A detailed description of the experimental setups used in both cases can be found in . The proton irradiation was performed quasi point-like at two positions along the sample, with a dose of $`\sim `$1 Mrad at 20 cm and of 0.1 Mrad at 10 cm respectively, within a few minutes. The dose rate was about 30 Mrad/h. The irradiation of short areas of the samples made it possible to separate the damage of the scintillator and of the optical matrix. The same irradiation procedure was applied for in-situ measurements of radiation damage using a corresponding electron beam. High current irradiations were only carried out under ambient atmosphere, using cooling by a powerful fan. The temperature rise during the irradiation could be neglected . A new series of tests has been performed irradiating the fiber material with a relatively low dose rate of 2 MeV electrons to approximate the later experimental conditions. A dose of about 1 Mrad was applied during five periods within about nine weeks. The particle flux was monitored by a matrix of Faraday cups. The distance between the scatter foil and the sample plane was about 1.5 m. In this case the samples were kept either in air or in nitrogen atmosphere.

### 2.3 Measurement procedure

In-situ registration of the scintillation spectra, first described in , was performed for the beam excited regions 1 and 2 (see Fig. 2). The spectra were measured during the whole irradiation time in the first irradiated region 1 of the fibers, and after that in the second region 2, under the influence of the high absorption in the presumably predamaged region 1, in order to determine the change of the absorption coefficient during the irradiation. Between irradiation procedures 1 and 2 a preparation time of a few minutes was necessary.
The beam excited scintillation spectrum served in the second case as a variable light source for absorption measurements in a limited spectral region. For recovery measurements in the laboratory a few hours after irradiation, the optical excitation was realized by a high pressure Hg-lamp at $`\lambda `$ = 365 nm . In addition to the in-situ measurements, which used single fiber samples, and the UV-excitation measurements in the laboratory, investigations were done using multi-fiber bundles. The irradiated multi-fiber samples (see section 2.1) were evaluated using a $`{}_{}{}^{106}Ru`$ source. The fiber sample was mounted within a source collimator slit. The light signal was measured using a Philips (Philips Photonique, Av. Roger Roacier, B.P. 520, F-19106 Brive, France) XP 2020 photomultiplier and analyzed by an analog-to-digital converter (ADC). The ADC was triggered by a threefold coincidence of the signals coming from a 5 mm thick plastic scintillator mounted behind the fibers, read out by two Philips XP 1911 photomultipliers, and from a second photomultiplier XP 2020 coupled to the second coupling piece of the fiber sample. The light output measurement was performed before and after irradiation. In addition, the light output of the non-irradiated scintillator reference samples and the light attenuation of the light guide reference samples were regularly measured to minimize systematic errors.

## 3 Results

From in-situ observations of proton and electron excited spectra no remarkable difference could be found . Consequently, we report here representative results valid for both kinds of excitation. As described in , all in-situ measured spectra show a two-stage decay of the scintillation light intensity as a function of the deposited energy (or irradiation time). In Fig.5 a typical degradation process for an electron excited fiber is presented, with the wavelength as parameter. At the beginning of the irradiation a similar time constant for all wavelengths can be observed. A faster, wavelength-dependent decay appears for higher doses of about 1 Mrad. The short-wavelength emission shows the fastest degradation in time (see also Fig. 2 in ). A recovery of the damaged fibers could be observed already during the in-situ measurements. A considerable increase in light output was observed several times during the irradiation procedure after switching off the beam for only three minutes (see Fig. 3 in ). Exciting the same fibers by UV-light in the laboratory a few hours after irradiation, a long term recovery was measured. After 40 hours a SCSF-78M fiber irradiated to 8.1 Mrad showed 90 $`\%`$ of the light output with respect to pre-irradiation (Fig.4 in ). This process seems however to depend on the fiber material and the integral dose (compare Fig.2 in ). The kind of excitation seems to be of particular importance for the measured fiber light output. This will also influence the observed recovery after irradiation and may explain the corresponding different time constants for in-situ measurements and UV-excitation. In a real experiment the scintillation light in fibers will be produced by crossing charged particles. Therefore the multi-fiber test samples were exposed to electrons from a Ru-source before and after beam irradiation to measure light output and transmission. Indeed a different behaviour was found.
As described in , the strongest damage was observed only about 30 hours after irradiation with a dose of 1 Mrad, for both light emission and transparency, with a complete recovery after two days (see Fig. 4 of ). A dose of 1 Mrad, expected for the inner tracker of the HERA-B experiment within one year, was delivered to the above test samples within a few minutes. This may have influenced the observed results in an inadmissible way, in particular if oxygen diffusion is important for damage and recovery. To be closer to the experimental conditions, the irradiation of scintillator and light guide samples was performed up to a dose of 1.4 Mrad within 70 days. Half of the fiber samples were covered by glue. The irradiation was done in air and in nitrogen atmosphere. The results are displayed in Figs. 6 and 7. Measurements extending over about half a year, using the same samples many times, are difficult to perform while keeping systematic errors small, due to some instability of the setup in time and mechanical damage of the fragile samples. To minimize those effects, non-irradiated samples were measured every time in addition. All results are presented as ratios of irradiated to non-irradiated fibers, R<sub>S</sub> and R<sub>L</sub> for scintillators and light guides respectively. The maximum errors of these ratios have been estimated to be about 30 %, including effects which may arise from sample production. The time profile of the irradiation is shown in Fig. 6a. The corresponding damage and recovery of four fiber samples is shown in Fig. 6b. No effect could be observed outside the 30 % error band. Neglecting the measurement errors and relying on the pre-irradiation data points, some damage may have happened up to the maximum dose, followed by a long term recovery. The damage seems to be smaller for glued fibers, in particular in nitrogen atmosphere. Non-glued fibers seem to recover only partly. From Fig. 7a it can be seen that clear fibers were only irradiated up to a dose of 400 krad. For all measurements they were coupled to scintillating fibers which were excited by electrons from a Ru-source. Also here a maximum error of 30 % has to be kept in mind for the ratio R<sub>L</sub> of irradiated to totally non-irradiated (clear plus scintillating) fibers shown in Fig. 7b. Neglecting the error band, no damage is seen for the clear fiber irradiation itself. However, irradiating the scintillator to more than 1 Mrad caused a decrease of the light output from a coupled clear fiber by more than one half, i.e. more than observed for the scintillating sample alone. After two weeks complete recovery was found. The behaviour is the same for KURARAY and pol.hi.tech. clear fibers, in air and in nitrogen.

## 4 Summary

Several radiation hardness tests were performed for KURARAY scintillating fibers SCSF-78M and clear fibers from KURARAY and pol.hi.tech. Using high current proton and electron beams, the irradiation was performed both with very high and with low dose rates. In-situ observations demonstrated a strong damage of scintillating fibers for high dose rate exposures. Both light emission and transparency were decreased, down to 20 % for 1 Mrad. Short and long time recovery effects followed the irradiation. For low dose rate conditions closer to a later experiment, a 30 % decrease of the scintillating fiber light output, recovering after three weeks, could not be excluded. No significant influence of the fiber coverage or of the atmosphere during irradiation was found.
Clear fibers are apparently not damaged for doses up to 400 krad. Coupled to irradiated scintillating fibers, the effective damage of the system seems to increase.

## Acknowledgement

The fiber irradiation tests were possible only due to the kind support of the Hahn-Meitner-Institute Berlin. In particular we want to thank the ISL accelerator team.

Figure captions

Fig. 1 : Sketch of a multifiber test sample with indication of the irradiation and measurement positions. The signal is measured via coupling piece S1; the coupling piece S2 is used for the extraction of a trigger signal.

Fig. 2 : Fiber sample for in-situ measurements, with the irradiation position and the connections of the fibers to the spectrometers. LGF: light guide fiber; SP1, SP2: coupling to spectrometers 1 and 2.

Fig. 3 : Irradiation setup of the high dose rate proton and electron irradiation. The arrows give the positions of the plastic detectors for the beam profile measurement according to Fig. 4.

Fig. 4 : Beam profile measured using the polyvinylalcohol methylene blue plastic detector. The numbers belonging to each curve correspond to the arrows in Fig. 3, giving the positions of the plastic films in the irradiation setup: 1 - behind the exit window, 2 - before aperture 1, 3 - between apertures 1 and 2, 4 - behind aperture 2, 5 - before the Faraday cup, 6 - behind the Faraday cup.

Fig. 5 : In-situ measurement of the emission spectra of a scintillating fiber excited by high dose rate electron irradiation, in dependence on the irradiation time in seconds.

Fig. 6 : a.) Time dependence of the irradiation dose for scintillating fibers; b.) ratio R<sub>S</sub> of the light output from irradiated to non-irradiated scintillating fiber samples, in dependence on the measurement time with respect to the first irradiation.

Fig. 7 : a.) Time dependence of the irradiation dose for scintillating fibers and light guides; b.) ratio R<sub>L</sub> of the light output from irradiated to non-irradiated clear and scintillating fiber samples, in dependence on the measurement time with respect to the first irradiation. K<sub>L</sub>: light guide fiber from KURARAY; K<sub>S</sub>: scintillating fiber from KURARAY; P<sub>L</sub>: light guide fiber from pol.hi.tech.
HZPP-9904
July 25, 1999

# Model Investigation of Non-Thermal Phase Transition in High Energy Collisions

Wang Qin, Li Zhiming, Liu Lianshou
(Institute of Particle Physics, Huazhong Normal University, Wuhan, 430079)

Abstract The non-thermal phase transition in high energy collisions is studied in some detail in the framework of the random cascade model. The relation between the characteristic parameter $`\lambda _q`$ of the phase transition and the rank $`q`$ of the moments is obtained using Monte Carlo simulation, and the existence of two phases in self-similarly cascading multiparticle systems is shown. The dependence of the critical point $`q_c`$ of the phase transition on the fluctuation parameter $`\alpha `$ is obtained and compared with the experimental results from NA22. The same study is carried out also by analytical calculation under the central limit approximation. The range of validity of the central limit approximation is discussed.

Keywords: random cascade, multifractal, anomalous scaling, non-thermal phase transition

Recently, the prediction that the multiparticle final states of high energy hadron-hadron collisions possess the property of self-affine fractality in anisotropic phase space has been confirmed by experiments. This breakthrough in the nonlinear study of high energy physics places the further study of the nonlinear properties of multiparticle final states on the agenda. In this respect, the non-thermal phase transition is a problem worthy of further study. In the presently available experiments, due to the restriction of energy, the average multiplicity is very low, and the rank of the factorial moments could not be high. So, no clear evidence of a non-thermal phase transition has been seen. The new Large Hadron Collider (LHC), which is being built and will be put into operation in the beginning of the next century, will dramatically raise the collision energy and multiplicity, providing perfect conditions for the study of the non-thermal phase transition. As a theoretical preparation, it is necessary to carry out a detailed discussion of this phase transition and to clarify its properties. The aim of this short paper is to make a model study of the non-thermal phase transition, especially to clarify the relation between the critical point of the non-thermal phase transition and the strength of the dynamical fluctuations. The random cascading $`\alpha `$ model is widely used in the study of the nonlinear properties of multiparticle final states in high energy collisions. Using this model, it is easy to obtain a system possessing the properties of intermittency and fractality. We will show that a non-thermal phase transition does exist in this system, so that the relation between the critical point of the phase transition and the fluctuation-strength parameter $`\alpha `$ of the model can be obtained and compared with the experimental data. Firstly, let us briefly recall the random cascading $`\alpha `$ model with probability conservation. Consider a region $`\mathrm{\Delta }`$ of one-dimensional phase space. Divide it into $`\lambda `$ cells. The probability of particles falling into the $`i`$th cell is $$p_i=p_0\omega _i,$$ (1) where $`p_0=1`$ is the probability in the phase space region $`\mathrm{\Delta }`$ and $`\omega _i`$ is the probability of the elementary partition. Next, we divide each sub-bin into $`\lambda `$ even smaller sub-bins.
The probability in the $`ij`$th bin ($`i=1,2,\dots ,\lambda ;j=1,2,\dots ,\lambda `$) is $$p_{ij}=p_i\omega _j.$$ $`(2)`$ After $`\nu `$ steps, the probability in a sub-bin is $$p_{i_1i_2\mathrm{}i_\nu }=\prod _{k=1}^\nu \omega _{i_k}.$$ $`(3)`$ The total number of intervals is $`M=\lambda ^\nu `$. In order to guarantee the conservation of probability at each step of the cascade, we choose the elementary probabilities $`\omega _i`$ for $`\lambda =2`$ as $$\omega _1=\frac{1+\alpha r}{2},\omega _2=\frac{1-\alpha r}{2},$$ $`(4)`$ where $`r`$ is a random number distributed uniformly in the interval $`[-1,1]`$ and $`\alpha `$ is a model parameter describing the strength of the nonlinear dynamical fluctuations ($`0<\alpha <1`$). The definitions of the probability moments and factorial moments are $$C_q=\frac{\frac{1}{M}\sum _{m=1}^Mp_m^q}{\left(\frac{1}{M}\sum _{m=1}^Mp_m\right)^q}=M^{q-1}\sum _{m=1}^Mp_m^q,$$ $`(5)`$ $$F_q\left(M\right)=\frac{1}{M}\sum _{m=1}^M\frac{\langle n_m\left(n_m-1\right)\cdots \left(n_m-q+1\right)\rangle }{\langle n_m\rangle ^q}.$$ $`(6)`$ It can easily be proved that under the assumption of Poisson or Bernoulli type statistical fluctuations the normalized factorial moments $`F_q`$ are equal to the normalized probability moments $`C_q`$. The character of the dynamical fluctuations can be expressed as the anomalous scaling of the probability (or factorial) moments: $$C_q\left(M\right)\propto M^{\phi _q},$$ or equivalently $$\mathrm{ln}C_q(M)=A+\phi _q\mathrm{ln}M\left(M\to \infty \right),$$ $`(7)`$ where $`\phi _q`$ is called the intermittency index. In order to see the anomalous scaling of the probability moments more clearly, we choose the fluctuation-strength parameter $`\alpha =0.5`$, the elementary partition number $`\lambda =2`$, the number of division steps $`\nu =12`$, the ranks of moment $`q=5,10,15,20,25,30`$, and make use of Eq.(5) to simulate the relation between $`\mathrm{ln}C_q`$ and $`\mathrm{ln}M`$. The results are shown in Fig.1. The intermittency indices $`\phi _q`$ are obtained through a linear fit. We can see from the figure that the higher the rank $`q`$, the larger the slope $`\phi _q`$. A parameter $`\lambda _q`$ has been introduced in the multifractal analysis to characterise the non-thermal phase transition in multiparticle systems. It is related to the intermittency index $`\phi _q`$ by the relation $$\lambda _q=\left(\phi _q+1\right)/q.$$ $`(8)`$ We will evaluate this parameter both through analytic calculation and by Monte Carlo simulation. In the random cascading $`\alpha `$ model, the probability moment is $$C_q\left(M\right)=\frac{\langle \omega ^q\left(1\right)\rangle \cdots \langle \omega ^q\left(\nu \right)\rangle }{\langle \omega \rangle ^{q\nu }}.$$ $`(9)`$ It can be rewritten as $$C_q\left(M\right)=\lambda ^{q\nu }\langle \omega ^q\left(1\right)\rangle \cdots \langle \omega ^q\left(\nu \right)\rangle =\lambda ^{q\nu }\langle \mathrm{exp}\left(-q\sum _{i=1}^\nu \epsilon _i\right)\rangle ,$$ $`(10)`$ where $`\epsilon _i=-\mathrm{ln}\omega \left(i\right)`$. The quantity $`\zeta =\sum _{i=1}^\nu \epsilon _i`$ in the above equation is the sum of $`\nu `$ independent random numbers.
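Before turning to the analytic treatment, note that the exact moments are easy to generate numerically. The following minimal Python sketch (our own illustration, not part of the original paper; the function names and the choice of 2000 events are arbitrary) builds the cascade of Eqs. (1)-(4) for $`\lambda =2`$, evaluates the moments of Eq. (5) averaged over many cascades, and extracts $`\phi _q`$ from a linear fit of $`\mathrm{ln}C_q`$ versus $`\mathrm{ln}M`$:

```python
import numpy as np

def phi_q(alpha, q, nu=12, events=2000, seed=1):
    """Intermittency index phi_q from a linear fit of ln<C_q> vs ln M, Eq. (7).

    Each event is one random cascade, Eqs. (1)-(4): every bin is split in two,
    with weights (1 + alpha*r)/2 and (1 - alpha*r)/2, r uniform in [-1, 1].
    """
    rng = np.random.default_rng(seed)
    sums = np.zeros(nu)                      # accumulates <C_q> at M = 2**(step+1)
    for _ in range(events):
        p = np.array([1.0])                  # probability of the whole region
        for step in range(nu):
            r = rng.uniform(-1.0, 1.0, size=p.size)
            p = np.column_stack(((1 + alpha * r) * p / 2,
                                 (1 - alpha * r) * p / 2)).ravel()
            sums[step] += p.size ** (q - 1) * np.sum(p ** q)   # Eq. (5)
    lnM = np.log(2.0 ** np.arange(1, nu + 1))
    lnC = np.log(sums / events)
    slope, _ = np.polyfit(lnM, lnC, 1)
    return slope

for q in (5, 10):
    print(q, phi_q(0.5, q), (q - 1) * q * 0.5**2 / (6 * np.log(2)))
```

The fitted slopes grow with $`q`$, reproducing the trend of Fig. 1; the last printed column anticipates, for comparison, the central limit estimate of Eq. (14) derived next.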
Under the central limit approximation $`\zeta `$ approaches a Gaussian distribution, so that $$C_q\left(M\right)=\lambda ^{q\nu }\langle \mathrm{e}^{-q\zeta }\rangle =\mathrm{exp}\left(q\nu \mathrm{ln}\lambda +\frac{\nu \sigma ^2q^2}{2}-q\overline{\zeta }\right).$$ $`(11)`$ Using $`C_1(M)=1`$, we get $$C_q\left(M\right)=\mathrm{e}^{\nu \sigma ^2q\left(q-1\right)/2}.$$ $`(12)`$ The intermittency indices can be deduced as $$\phi _q=\frac{\left(q-1\right)q\sigma ^2}{2\mathrm{ln}2}.$$ $`(13)`$ We also have the relation $$\sigma ^2=\langle \mathrm{ln}^2\omega \rangle -\langle \mathrm{ln}\omega \rangle ^2=\frac{1}{3}\alpha ^2+O(\alpha ^4),$$ which under the linear approximation becomes $`\sigma ^2=\alpha ^2/3`$. Substituting into Eq.(13) we get $$\phi _q=\frac{q\left(q-1\right)\alpha ^2}{6\mathrm{ln}2},$$ $`(14)`$ and from Eq.(8): $$\lambda _q=\frac{\phi _q+1}{q}=\frac{\left(q-1\right)\alpha ^2}{6\mathrm{ln}2}+\frac{1}{q}.$$ $`(15)`$ The resulting $`\lambda _q`$ versus $`q`$ curves are plotted in Fig.2($`a`$) for $`\alpha =0.2,0.3,0.4,0.5`$ respectively. Fig.2($`a`$) is the result under the central limit approximation. The exact relation cannot be calculated analytically; therefore, we use Monte Carlo simulation. The resulting $`\lambda _q`$ versus $`q`$ curves for $`\alpha =0.2,0.3,0.4,0.5`$ are shown in Fig.2$`(b)`$. It can be seen from the figures that the $`\lambda _q`$–$`q`$ curves from both the Monte Carlo simulation and the analytical calculation under the central limit approximation have the same trend, i.e. with increasing $`q`$, $`\lambda _q`$ reaches a minimum at a point $`q_c`$. This means that a non-thermal phase transition really exists in the self-similar cascading model and that two different phases do indeed coexist; $`q_c`$ is the critical point of the phase transition. In Fig.2 we also show the experimental data from NA22 (open circles). The data stop at rank $`q=5`$, and it is unclear whether there is a minimum at some higher rank, as required by a non-thermal phase transition. The open triangles in the figure are the results from the same experiment selecting only the particles with low transverse momenta ($`p_t<0.15`$ GeV/$`c`$). In this case, with increasing $`q`$ (from 4 to 5), $`\lambda _q`$ increases. This seems to show that there is a phase transition with critical point $`q_c<5`$. As is well known, the strength of intermittency increases when only particles with low transverse momenta are selected. This experimental observation therefore shows that the system with lower transverse momenta, which has larger intermittency strength, has a lower critical point for the non-thermal phase transition. This is qualitatively the same as the result of our model, where the phase transition point shifts to lower $`q`$ with increasing fluctuation strength (when $`\alpha `$ increases, $`q_c`$ decreases). In order to see more clearly the relation between the phase transition point $`q_c`$ and the fluctuation parameter $`\alpha `$ of the model, we plot $`q_c`$ versus $`\alpha `$ in Fig.3. We can see from the figure that the larger $`\alpha `$ is, the earlier the minimum $`q_c`$ appears. Comparing the exact values of $`q_c(\alpha )`$ from the Monte Carlo simulation with the analytical results under the central limit approximation, it can be seen that both decrease monotonically; however, the values of $`q_c`$ in the central limit approximation are generally smaller than the exact values.
This shows that the central limit approximation reflects qualitatively the properties of the non-thermal phase transition, but with a noticeable quantitative deviation. Fig.1 Log-log plot of probability moments of various ranks versus the partition number in the $`\alpha `$ model Fig. 2 Relation between the parameter $`\lambda _q`$ and the rank $`q`$ of the moments. The vertical lines indicate the positions of the minima. ($`a`$) Analytical results under the central limit approximation. The open circles are the experimental results from NA22. Open triangles are the results from the same experiment taking only low transverse momentum particles ($`p_t<0.15`$ GeV/$`c`$). Data taken from Ref.. ($`b`$) Results of the Monte Carlo simulation. Fig. 3 The relation between the phase transition point $`q_c`$ and the fluctuation strength $`\alpha `$
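For completeness, the analytic curves of Fig. 2($`a`$) and Fig. 3 are easy to regenerate from Eq. (15). A short Python sketch (our own illustration; treating $`q`$ as continuous only to locate the minimum analytically):

```python
import numpy as np

LN2 = np.log(2.0)

def lambda_q(q, alpha):
    """Eq. (15): lambda_q under the central limit approximation."""
    return (q - 1) * alpha**2 / (6 * LN2) + 1.0 / q

def q_c(alpha):
    """Minimum of Eq. (15): d(lambda_q)/dq = 0  =>  q_c = sqrt(6 ln2)/alpha."""
    return np.sqrt(6 * LN2) / alpha

for alpha in (0.2, 0.3, 0.4, 0.5):
    q = np.arange(2, 31)
    qmin = q[np.argmin(lambda_q(q, alpha))]   # minimum over integer ranks
    print(alpha, qmin, q_c(alpha))            # cf. Figs. 2(a) and 3
```

The monotonic fall of $`q_c`$ with $`\alpha `$ printed here is the central-limit counterpart of the Monte Carlo trend shown in Fig. 3.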
no-problem/9907/quant-ph9907096.html
ar5iv
text
# Protecting Quantum Information Encoded in Decoherence Free States Against Exchange Errors ## I Introduction Preserving the coherence of quantum states and controlling their unitary evolution is one of the fundamental goals of Quantum Information Processing. When the system Hamiltonian is invariant under particle permutations, the exchange operator $`E_{ij}`$ interchanging particles $`i`$ and $`j`$ is a constant of the motion, and the definite symmetry of a state will be conserved. Models of quantum computers based on identical bosons or fermions must of course respect this elementary requirement. It was pointed out in a recent paper that active quantum error correcting codes (QECCs) designed to correct independent single-qubit errors will fail for identical particles in the presence of exchange errors. The reason is that exchange acts as a two-qubit error which has the same effect as a simultaneous bit flip on two different qubits. Of course, QECCs dealing explicitly with multiple-qubit errors are also available, so that exchange errors can readily be dealt with provided one accepts longer codewords than are needed to deal with single-qubit errors. For example, in Ref. a nine-qubit code is presented which can correct all single-qubit errors and all Pauli exchange errors. This is to be compared with the five-qubit “perfect” code which protects (only) against all single-qubit errors. While the nine-qubit code is longer than the “perfect” code, it is shorter than a code required to protect against all two-qubit errors. A different error model which has been considered by several authors is that in which qubits undergo collective, rather than independent, errors. The underlying physics of this model has a rich history: it dates back at least to Dicke’s quantum optics work on superradiance of atoms coupled to a radiation field, where it arose in the consideration of systems confined to a region whose linear dimensions are small compared to the shortest wavelength of the field. The model was later treated extensively by Agarwal in the context of spontaneous emission. It was only recently realized, however, that in the collective decoherence model there exist large decoherence-free subspaces (DFSs), which are “quiet” Hilbert subspaces in which no environmentally-induced errors occur at all. Such subspaces offer passive protection against decoherence. Collective decoherence is an assumption about the manner in which the environment couples to the system: instead of independent errors, as assumed in the active QECC approach, one assumes that errors are strongly correlated, in the sense that all qubits can be permuted without affecting the coupling between system and bath. This is clearly a very strong assumption, and it may not hold exactly in a realistic system-bath coupling scenario. To deal with this limitation, we have shown recently how DFSs can be stabilized in the presence of errors that perturb the exact permutation symmetry, by concatenating DFSs with QECCs. Concatenation is a general technique that is useful for achieving fault tolerant quantum computation, and trades stability of quantum information for the price of longer codewords. It is our purpose here to analyze the effect of exchange errors on DFSs for collective decoherence. These errors are fundamentally different from those induced by the system-bath coupling, since they originate entirely from the internal system Hamiltonian. We will show that by use of the very same concatenation scheme as introduced in Ref.
(which was designed originally to deal with system-bath induced errors), a DFS can be stabilized in the presence of exchange errors as well. The structure of the paper is as follows. We begin by briefly recalling the origin of the exchange interaction in Sec. II and present some Hamiltonians modelling this interaction. We then present, in Sec. III, a short review of the Hamiltonian theory of DFSs. Next we discuss in Sec. IV the simplest model, of constant exchange matrix elements, and show that DFSs are immune to exchange errors in this case. Our main result is then presented in Sec. V, where we analyze the effect of exchange errors in the case of arbitrary exchange matrix elements. We show that a DFS is invariant under such errors, and conclude that concatenation with a QECC can generally stabilize DFSs against exchange. ## II Modelling Exchange in Qubit Arrays The exchange interaction arises by virtue of permutation symmetry between identical particles, in addition to some interaction potential. Exchange is caused by the system Hamiltonian, and is unrelated to the coupling to an external environment. Exchange thus induces an extraneous unitary evolution on the system, but does not lead to decoherence. To model exchange it is sufficient to consider a Hamiltonian of the form $$H_{\mathrm{ex}}=\frac{1}{2}\sum _{i\ne j}^KJ_{ij}E_{ij},$$ (1) where the sum is over all qubit pairs, $`J_{ij}`$ are appropriate matrix elements, and $$E_{ij}|ϵ_1,\mathrm{},ϵ_i,\mathrm{},ϵ_j,\mathrm{},ϵ_K\rangle =|ϵ_1,\mathrm{},ϵ_j,\mathrm{},ϵ_i,\mathrm{},ϵ_K\rangle .$$ (2) $`E_{ij}`$ thus written is a general exchange operator operating on qubits $`i`$ and $`j`$ of a $`K`$-qubit state. Typical examples of Hamiltonians leading to exchange are: (i) the Heisenberg interaction between spins $$H_{\mathrm{Heis}}=\frac{1}{2}\sum _{i\ne j}J_{ij}^\mathrm{H}𝐒_i𝐒_j,$$ (3) where $`𝐒_i=(\sigma _i^x,\sigma _i^y,\sigma _i^z)`$ is the Pauli matrix vector of spin $`i`$; (ii) the Coulomb interaction $$H_{\mathrm{Coul}}=\frac{1}{2}\sum _{i\ne j}\sum _{\sigma ,\sigma ^{\prime }}J_{ij}^\mathrm{C}a_{i\sigma }^{\dagger }a_{i\sigma ^{\prime }}a_{j\sigma ^{\prime }}^{\dagger }a_{j\sigma },$$ (4) where $`a_{i\sigma }^{\dagger }`$ ($`a_{i\sigma }`$) is the creation (annihilation) operator of an electron of spin $`\sigma `$ in Wannier orbital $`i`$. $`J_{ij}`$ is the exchange matrix element and is given for electrons by $$J_{ij}^\mathrm{C}=e^2\int d𝐫\,d𝐫^{\prime }\frac{w^{*}(𝐫-𝐑_i)w(𝐫^{\prime }-𝐑_i)w^{*}(𝐫^{\prime }-𝐑_j)w(𝐫-𝐑_j)}{|𝐫-𝐫^{\prime }|},$$ (5) where $`𝐑_i`$ is a lattice vector and $`w`$ is a Wannier function. This is a rather generic form for the exchange matrix element; in other cases $`w`$ would be replaced by the appropriate wave function and the Coulomb interaction $`e^2/|𝐫-𝐫^{\prime }|`$ by the appropriate potential. The important point to notice is that the exchange integral depends on the overlap between the wave functions at locations $`i`$ and $`j`$. Thus exchange effects generally decay rapidly as the distance $`|𝐑_i-𝐑_j|`$ increases. An important simplification is possible when interactions beyond nearest neighbors can be neglected (i.e., $`J_{ij}=0`$ if $`i`$ and $`j`$ are not nearest neighbors), in which case the approximation $`J_{ij}\approx J`$ is often made. In the Coulomb case the interpretation of $`a_{i\sigma }^{\dagger }a_{i\sigma ^{\prime }}a_{j\sigma ^{\prime }}^{\dagger }a_{j\sigma }`$ as an exchange operator is quite clear: spin $`\sigma `$ is destroyed at orbital $`j`$ and is created at orbital $`i`$, while spin $`\sigma ^{\prime }`$ is destroyed at orbital $`i`$ and is created at orbital $`j`$.
The net effect is that spins $`\sigma `$ and $`\sigma ^{\prime }`$ are swapped between the electrons in orbitals $`i`$ and $`j`$. In the Heisenberg case one can verify that the operator $`𝐒_i𝐒_j`$ also implements an exchange. Let $`I`$ denote the identity operator, $`X_i`$ the Pauli matrix $`\sigma _i^x`$ operating on qubit $`i`$, etc. A qubit state is written as usual as a superposition over $`\sigma _z`$ eigenstates $`|0\rangle `$ and $`|1\rangle `$. Then, defining $$E_{ij}\equiv \frac{1}{2}\left(I+𝐒_i𝐒_j\right)=\frac{1}{2}\left(I+X_iX_j+Y_iY_j+Z_iZ_j\right),$$ (6) it is easily checked that Eq. (2) is satisfied. ## III Review of Decoherence Free Subspaces We briefly recall the Hamiltonian theory of DFSs. Given is a system-bath interaction Hamiltonian $$H_{\mathrm{SB}}=\sum _\lambda F_\lambda \otimes B_\lambda ,$$ (7) where $`F_\lambda `$ and $`B_\lambda `$ are, respectively, the system and bath operators. The decoherence free states are those, and only those, states $`\{|\psi \rangle \}`$ which are simultaneous degenerate eigenvectors of all system operators appearing in $`H_{\mathrm{SB}}`$: $$F_\lambda |\psi \rangle =c_\lambda |\psi \rangle .$$ (8) The eigenvalues $`\{c_\lambda \}`$ do not depend on $`|\psi \rangle `$. The subspace spanned by these states is a DFS, meaning that under $`H_{\mathrm{SB}}`$ the evolution in this subspace is unitary, and there is no decoherence. This results in a passive protection against errors, to be contrasted with the active QECC approach. Of particular interest is the case where the $`\{F_\lambda \}`$ are collective operators, such as the total spin operators $$S_\alpha =\sum _{i=1}^K\sigma _i^\alpha ,\alpha =x,y,z.$$ (9) These operators satisfy $`su(2)`$ commutation relations, just like the local $`\sigma _i^\alpha `$ Pauli operators: $$[S_\alpha ,S_\beta ]=2i\epsilon _{\alpha \beta \gamma }S_\gamma .$$ (10) This situation, referred to above as collective decoherence, arises when the bath couples in a permutation-invariant fashion to all qubits. In this paper we shall confine our attention to collective decoherence, and shall employ the term DFS exclusively in this context. With a system-bath interaction of the form $`H_{\mathrm{SB}}=\sum _\alpha S_\alpha \otimes B_\alpha `$ (as, e.g., in the Lamb-Dicke limit of the spin-boson model), a combinatorial calculation shows (see appendix) that the number of encoded qubits is $`\mathrm{log}_2\{K!/\left[(K/2+1)!(K/2)!\right]\}\stackrel{K\to \mathrm{\infty }}{\longrightarrow }K-\frac{3}{2}\mathrm{log}_2K`$. The resulting decoherence free code thus asymptotically approaches unit efficiency (number of encoded qubits per physical qubit), and is therefore of significant interest. In the collective decoherence case, since the $`S_\alpha `$ are the generators of the semisimple Lie algebra $`su(2)`$, the DFS condition Eq. (8) is satisfied with $`c_\alpha =0`$. This means that the decoherence free states $`\{|j\rangle \}`$ are $`su(2)`$ singlets: they are states of zero total spin, and belong to the one-dimensional irreducible representation of $`su(2)`$. For example, for $`K=2`$ qubits undergoing collective decoherence, there is just one decoherence free state: $`(|01\rangle -|10\rangle )/\sqrt{2}`$, i.e., the familiar singlet state of two spin 1/2 particles. For as few as $`K=4`$ there are already two singlet states, spanning a full encoded decoherence free qubit. ## IV Decoherence Free States and Exchange with Constant Matrix Elements A simple situation arises when we can assume that $`J_{ij}\equiv J/K`$ for all $`i,j`$, i.e., without the restriction to nearest neighbor interactions.
This long-range Ising model is thermodynamically equivalent to the mean-field theory of metallic ferromagnets, and there exist some examples of metals (e.g., HoRh<sub>4</sub>B<sub>4</sub>) that are well described by it. At present the relevance of such materials to quantum computer architectures is not clear. We also stress that in the vast majority of physical examples exchange correlations decay exponentially fast with the distance between particles. The case of arbitrary exchange matrix elements is dealt with in the next section. We consider the long-range model here mainly for its simplicity and for the remarkable result that DFSs are completely immune to exchange errors in this case. We have for $`𝐒=(S_x,S_y,S_z)`$ $$S^2=𝐒𝐒=3KI+2\sum _{i<j}\left(X_iX_j+Y_iY_j+Z_iZ_j\right),$$ (11) so that the exchange Hamiltonian can be rewritten as $`H_{\mathrm{ex}}`$ $`=`$ $`{\displaystyle \frac{J}{4K}}{\displaystyle \sum _{i<j}^K}\left(I+X_iX_j+Y_iY_j+Z_iZ_j\right)`$ (12) $`=`$ $`{\displaystyle \frac{J}{8K}}\left[\left(K^2-4K\right)I+S^2\right].`$ (13) Whereas the DFS condition guarantees that no decoherence is caused by the coupling to the bath, uncontrolled unitary evolution due to the system Hamiltonian may still pose a significant problem. This is exactly the case in the presence of exchange errors, as described above. However, using Eq. (13) and recalling that the DFS states have zero total spin, we see that in the collective decoherence case the DFS is in fact automatically protected against exchange errors: $$H_{\mathrm{ex}}|\psi \rangle =\left[\nu I+\frac{J}{8K}S^2\right]|\psi \rangle =\nu |\psi \rangle ,$$ (14) where $`|\psi \rangle `$ is a DFS state and $`\nu \equiv (J/8K)(K^2-4K)`$. Since the constant $`\nu `$ does not depend on $`\psi `$, this implies that under the unitary evolution generated by $`H_{\mathrm{ex}}`$, a DFS state accumulates an overall, global phase $`e^{-i\nu t}`$. This phase is not measurable and does not affect the decoherence time. Thus in the $`J_{ij}\equiv J/K`$ model a DFS does not undergo exchange errors, and the smallest DFS ($`K=4`$ physical qubits) already suffices to encode a full logical qubit. ## V Decoherence Free States and Arbitrary Exchange Matrix Elements We now analyze the effect of arbitrary exchange errors on DFS states for collective decoherence. We show that by concatenation with QECCs, DFSs can be stabilized against these errors. ### A Decoherence Free Subspaces are Invariant Under Exchange The exchange operator commutes with the total spin operators. To see this, use the definitions of these operators in Eqs. (2) and (9), and let $`S_\alpha ^{ij}\equiv \sum _{k\ne i,j}^K\sigma _k^\alpha `$. Since they act on different qubits, $`S_\alpha ^{ij}`$ clearly commutes with $`E_{ij}`$.
Now, using $`\sigma ^\alpha \sigma ^\beta =\delta _{\alpha \beta }I+i\epsilon _{\alpha \beta \gamma }\sigma ^\gamma `$: $`S_\alpha E_{ij}`$ $`=`$ $`\left[S_\alpha -\left(\sigma _i^\alpha +\sigma _j^\alpha \right)\right]E_{ij}+\left(\sigma _i^\alpha +\sigma _j^\alpha \right)E_{ij}`$ (15) $`=`$ $`\left({\displaystyle \sum _{k\ne i,j}^K}\sigma _k^\alpha \right)E_{ij}+{\displaystyle \frac{1}{2}}\left(\sigma _i^\alpha +\sigma _j^\alpha \right)\left(I+{\displaystyle \sum _{\beta =x,y,z}}\sigma _i^\beta \sigma _j^\beta \right)`$ (16) $`=`$ $`S_\alpha ^{ij}E_{ij}+\sigma _i^\alpha +\sigma _j^\alpha +{\displaystyle \frac{i}{2}}{\displaystyle \sum _{\beta ,\gamma }}\epsilon _{\alpha \beta \gamma }\left(\sigma _i^\beta \sigma _j^\gamma +\sigma _i^\gamma \sigma _j^\beta \right).`$ (17) The last term in this expression vanishes since $`\epsilon _{\alpha \beta \gamma }=-\epsilon _{\alpha \gamma \beta }`$ and we are summing over all $`\beta ,\gamma `$ values. Thus $$S_\alpha E_{ij}=S_\alpha ^{ij}E_{ij}+\sigma _i^\alpha +\sigma _j^\alpha =E_{ij}S_\alpha .$$ (18) Now let $`|\psi \rangle `$ be a decoherence free state (which it is, for collective decoherence, iff $`S_\alpha |\psi \rangle =0`$). Since $`S_\alpha \left(E_{ij}|\psi \rangle \right)=E_{ij}S_\alpha |\psi \rangle =0`$, it follows that $`E_{ij}|\psi \rangle `$ is also decoherence free. We have thus proved: Theorem I. Let $`\tilde{\mathcal{H}}`$ be a decoherence free subspace against collective decoherence errors, and $`E_{ij}`$ an exchange operation on qubits $`i`$ and $`j`$. Then $`E_{ij}\tilde{\mathcal{H}}=\tilde{\mathcal{H}}`$. The significance of this result is that exchange errors act as errors on the encoded DFS qubits, i.e., they keep decoherence free states inside the DFS. The exact way in which these errors are manifested is a difficult problem. Exchange operations are transpositions in the language of the permutation group $`S_K`$, and are known to generate this group. For a given number $`K`$ of physical qubits the action of the exchange operators will realize a $`2^K`$-dimensional reducible representation of $`S_K`$. The DFS for collective decoherence on these $`K`$ qubits is the set of one-dimensional irreducible subspaces in the irreducible representations (irreps) of $`S_K`$, which appear with multiplicity $`\frac{K!}{(K/2+1)!(K/2)!}`$ (see appendix). For $`K=4`$ the DFS is 2-dimensional (the multiplicity of the 1D irreps), encoding one qubit. Therefore in this case exchange errors will act as the usual Pauli errors on a single (encoded) qubit. Correction of exchange errors for $`K=4`$ can then be done entirely within the DFS by using a quantum error correcting code for single-qubit errors. This observation naturally leads one to consider concatenating the DFS codewords with such a code, as done in the concatenated code of Ref. . That paper showed that the concatenated DFS-QECC code can in fact deal with the more general case of both errors inside the DFS (as is our case here), and errors that take states outside of the DFS. We investigate the correction of exchange errors in detail for the $`K=4`$ case in the next subsection. For $`K>4`$ qubits, the dimension of the DFS is greater than 2 (e.g., for $`K=6`$ it is 5), and the action of exchange errors will correspondingly be represented by higher dimensional irreps of $`S_K`$. To correct such unitary errors it will be necessary to resort to codes for “qu$`k`$its” ($`k>2`$), such as stabilizer codes for higher-dimensional systems, or polynomial codes.
We defer the discussion of this case to a future publication and focus here on the $`K=4`$ case. ### B Effect of Exchange Errors on the Four Qubit Decoherence Free Subspace Suppose that the qubits undergo collective decoherence in clusters of four identical particles, but different clusters are independent (as they might, e.g., in a polymer with an AAAABBBBAAAA… type of order). Each cluster would then support a two-dimensional DFS, accommodating a single encoded DFS qubit. The $`K=4`$ physical-qubit DFS states can then be written as $$|\tilde{0}\rangle =\frac{|a\rangle -|b\rangle }{2},|\tilde{1}\rangle =\frac{2|c\rangle -|a\rangle -|b\rangle }{2\sqrt{3}},$$ (19) where $$|a\rangle \equiv |0110\rangle +|1001\rangle ,|b\rangle \equiv |1010\rangle +|0101\rangle ,|c\rangle \equiv |0011\rangle +|1100\rangle .$$ (20) Note that the mutually orthogonal states $`|a\rangle `$, $`|b\rangle `$ and $`|c\rangle `$ are sums of complementary states. Moreover, the four qubits play a symmetrical role (i.e., $`0`$ and $`1`$ appear equally in all four positions in both $`|\tilde{0}\rangle `$ and $`|\tilde{1}\rangle `$). This dictates that exchanges of qubits in symmetrical positions should have the same effect. In other words, we expect $`E_{12}`$ to be indistinguishable from $`E_{34}`$, and similarly for $`\{E_{13},E_{24}\}`$ and $`\{E_{23},E_{14}\}`$ (although for a linear geometry most physical exchange mechanisms will yield $`|J_{23}|>|J_{14}|`$). This expectation is borne out; in the $`\{|\tilde{0}\rangle ,|\tilde{1}\rangle \}`$ basis we find, using straightforward algebra, that the six exchange operators can be written as $`E_{12}`$ $`=`$ $`E_{34}=\left(\begin{array}{cc}-1& 0\\ 0& 1\end{array}\right)=-\overline{Z}`$ (23) $`E_{13}`$ $`=`$ $`E_{24}=\stackrel{~}{R}\left(\pi /3\right)={\displaystyle \frac{\sqrt{3}}{2}}\overline{X}+{\displaystyle \frac{1}{2}}\overline{Z}`$ (24) $`E_{14}`$ $`=`$ $`E_{23}=\stackrel{~}{R}\left(-\pi /3\right)=-{\displaystyle \frac{\sqrt{3}}{2}}\overline{X}+{\displaystyle \frac{1}{2}}\overline{Z},`$ (25) where $`\stackrel{~}{R}\left(\theta \right)=R\left(\theta \right)\overline{Z}`$, and $`R\left(\theta \right)=\left(\begin{array}{cc}\mathrm{cos}\theta & -\mathrm{sin}\theta \\ \mathrm{sin}\theta & \mathrm{cos}\theta \end{array}\right)`$. Thus $`\stackrel{~}{R}\left(\theta \right)`$ is a reflection about the $`x`$-axis followed by a counter-clockwise rotation by $`\theta `$ in the $`x,y`$ plane. In writing these expressions, the matrices operate on column vectors such that $`|\tilde{0}\rangle =(1,0)^T`$ and $`|\tilde{1}\rangle =(0,1)^T`$, and $`\overline{X},\overline{Z}`$ are the encoded Pauli matrices, i.e., the Pauli matrices acting on the DFS states (and not on the physical qubits). Thus, exchange errors act as encoded Pauli errors on the DFS states. Using this observation, it is possible to protect DFS states against such errors by concatenation with a QECC designed to correct single qubit errors. The critical point is that this QECC will now correct single encoded-qubit errors. This requires an additional encoding layer to be constructed. In particular, suppose we add such an encoding layer by using DFS qubits to build codewords of the five-qubit “perfect” QECC. These codewords have the form $`|\tilde{ϵ}_1\rangle |\tilde{ϵ}_2\rangle |\tilde{ϵ}_3\rangle |\tilde{ϵ}_4\rangle |\tilde{ϵ}_5\rangle `$, where $`ϵ=0,1`$, and the index $`j`$ in $`\tilde{ϵ}_j`$ is now a cluster index. Since the five-qubit QECC can correct any single qubit error, in particular it can correct the specific errors of Eq. (25) which the encoded DFS qubits would undergo under an exchange interaction on the physical qubits in a given cluster.
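Eq. (25) is also easy to check numerically. The following short Python sketch (our own verification, not part of the original paper; all function names are ours) builds the four-qubit states of Eqs. (19)-(20), applies the six transpositions, and prints their 2×2 matrices in the $`\{|\tilde{0}\rangle ,|\tilde{1}\rangle \}`$ basis, confirming Theorem I at the same time since every image lies entirely inside the DFS:

```python
import numpy as np
from itertools import product

def basis_state(bits):
    """Computational basis vector |b1 b2 b3 b4> as a length-16 array."""
    v = np.zeros(16)
    v[int("".join(map(str, bits)), 2)] = 1.0
    return v

def exchange(i, j):
    """Matrix of E_ij (qubits numbered 1..4) in the computational basis."""
    E = np.zeros((16, 16))
    for bits in product((0, 1), repeat=4):
        swapped = list(bits)
        swapped[i - 1], swapped[j - 1] = swapped[j - 1], swapped[i - 1]
        E += np.outer(basis_state(swapped), basis_state(bits))
    return E

a = basis_state((0, 1, 1, 0)) + basis_state((1, 0, 0, 1))
b = basis_state((1, 0, 1, 0)) + basis_state((0, 1, 0, 1))
c = basis_state((0, 0, 1, 1)) + basis_state((1, 1, 0, 0))
ket0 = (a - b) / 2                      # Eq. (19)
ket1 = (2 * c - a - b) / (2 * np.sqrt(3))
dfs = np.column_stack((ket0, ket1))     # orthonormal DFS basis

for (i, j) in [(1, 2), (3, 4), (1, 3), (2, 4), (1, 4), (2, 3)]:
    E = exchange(i, j)
    image = E @ dfs
    # Theorem I: the image has no component outside the DFS
    assert np.allclose(image, dfs @ (dfs.T @ image))
    print((i, j))
    print(np.round(dfs.T @ image, 3))   # reproduces the matrices of Eq. (25)
```

The pairs $`(1,2)/(3,4)`$, $`(1,3)/(2,4)`$ and $`(1,4)/(2,3)`$ indeed give identical matrices, confirming the symmetry argument above.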
However, the error detection and correction procedure must be carried out sufficiently fast so that exchange errors affecting multiple blocks at a time do not occur, or else concatenation with a code that can deal with $`t>1`$ independent errors is needed. The typical time scale for exchange errors to occur is $`1/(2|J_{ij}|)`$, where $`J_{ij}`$ is the relevant exchange matrix element. This 20-qubit concatenated DFS-QECC code is precisely the one discussed in Ref. , where it was shown that it offers protection against general perturbations that break the collective decoherence symmetry. Our present result shows that this concatenated code is stable against exchange errors as well. We note that it is certainly possible to find a shorter QECC than the five-qubit one to protect against the restricted set of errors in Eq. (25). However, such a code would not offer the full protection against general errors that is offered by concatenation with the perfect five-qubit code, and thus would not be as useful. ## VI Summary and Conclusions To conclude, in this paper we considered the effect of unitary exchange errors between identical qubits on the protection of quantum information by decoherence free subspaces (DFSs) defined for a qubit array. We showed that in the important case of ideal collective decoherence (qubits coupled symmetrically to the bath), for which a perfectly stable DFS is obtained, DFSs are additionally invariant under exchange errors. Thus such errors generate rotations inside the DFS, but do not take decoherence free states outside of the DFS. Consequently it is possible to use, without any modification, the concatenated DFS-QECC scheme of Ref. in order to protect DFSs against exchange errors, while at the same time relaxing the constraint of ideal collective decoherence and allowing for symmetry breaking perturbations. This is useful for quantum memory applications. Since exchange interactions preserve a DFS, an interesting further question is whether they can be used constructively in order to perform controlled logic operations inside a DFS. We have found the answer to be positive: it is actually possible to perform universal computation in a fault tolerant manner inside a DFS for collective decoherence using only two-body exchange interactions. Acknowledgments.— This material is based upon work supported by the U.S. Army Research Office under contract/grant number DAAG55-98-1-0371, and in part by NSF CHE-9616615. ## Dimension of Decoherence Free Subspaces for Collective Decoherence In view of the fact that the total spin operators $`S_\alpha `$ satisfy $`su(2)`$ commutation relations, it follows from the addition of angular momentum that the operators $`S^2`$ and $`S_z`$ have simultaneous eigenstates given by $$S^2|S,m\rangle =S(S+1)|S,m\rangle ,S_z|S,m\rangle =m|S,m\rangle ,$$ (26) where $`m=-S,-S+1,\mathrm{},S`$ and $`S=0,1,\mathrm{},K/2`$ (for $`K`$ even), $`S=1/2,3/2,\mathrm{},K/2`$ (for $`K`$ odd). The $`|S,m\rangle `$ states are known as Dicke states. The degeneracy of a state with given $`S`$ is $$\frac{K!(2S+1)}{(K/2+S+1)!(K/2-S)!},$$ (27) which for $`S=0`$, i.e., the singlet states, coincides with the dimension of the DFS for $`K`$ qubits undergoing collective decoherence cited in the text. It is interesting to derive this formula from combinatorial arguments relating to the permutation group of $`K`$ objects, which we will do for $`S=0`$. The result follows straightforwardly from the Young diagram technique.
As is well known (see, e.g., ), the singlet states of $`su(2)`$ belong to the rectangular Young tableaux of $`K/2`$ columns and $`2`$ rows. The multiplicity $`\lambda `$ of such states is the number of “standard tableaux” (tableaux containing an arrangement of numbers which increase from left to right in a row and from top to bottom in a column), which is also the dimension of the irreducible representation of the permutation group corresponding to the Young diagram $`\eta _{K/2,2}`$ (an empty tableau) of $`K/2`$ columns and $`2`$ rows. This number is found using the “hook recipe” , where one writes the “hook length” $`g_i`$ (the sum of the number of positions to the right of box $`i`$, plus the number of positions below it, plus one) of each box $`i`$ in the Young diagram: $$\lambda (\eta )=\frac{K!}{\prod _{i=1}^Kg_i}.$$ (28) E.g., for $`\eta _{c,2}`$ the hook lengths are: | $`c+1`$ | $`c`$ | $`c-1`$ | $`\mathrm{}`$ | $`3`$ | $`2`$ | | --- | --- | --- | --- | --- | --- | | $`c`$ | $`c-1`$ | $`c-2`$ | $`\mathrm{}`$ | $`2`$ | $`1`$ | (29) and one finds, with $`c=K/2`$: $$\lambda (\eta _{K/2,2})=\frac{K!}{(K/2+1)!(K/2)!},$$ (30) which is indeed the $`S=0`$ case of the general degeneracy formula, Eq. (27).
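As a quick sanity check, the hook-length count of Eq. (30) can be compared with a direct construction of the singlet space. The small Python sketch below (our own illustration) computes both quantities for the first few even $`K`$; the singlet space is obtained as the common null space of the total spin operators of Eq. (9):

```python
import numpy as np
from math import factorial

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def total_spin_op(sigma, K):
    """S_alpha = sum_i sigma_i^alpha on K qubits (Eq. (9))."""
    S = np.zeros((2**K, 2**K), dtype=complex)
    for i in range(K):
        ops = [I2] * K
        ops[i] = sigma
        term = ops[0]
        for op in ops[1:]:
            term = np.kron(term, op)
        S += term
    return S

def dfs_dimension(K):
    """Dimension of the common null space of S_x, S_y, S_z (the singlets)."""
    M = np.vstack([total_spin_op(s, K) for s in (sx, sy, sz)])
    return 2**K - np.linalg.matrix_rank(M)

for K in (2, 4, 6, 8):
    hook = factorial(K) // (factorial(K//2 + 1) * factorial(K//2))  # Eq. (30)
    print(K, hook, dfs_dimension(K))   # 1, 2, 5, 14 -- the two counts agree
```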
no-problem/9907/cond-mat9907301.html
ar5iv
text
# Pairing Correlations on 𝑡-𝑈-𝐽 Ladders ## Abstract Pairing correlations on generalized $`t`$-$`U`$-$`J`$ two-leg ladders are reported. We find that the pairing correlations on the usual $`t`$-$`U`$ Hubbard ladder are significantly enhanced by the addition of a nearest-neighbor exchange interaction $`J`$. Likewise, these correlations are also enhanced for the $`t`$-$`J`$ model when the onsite Coulomb interaction is reduced from infinity. Moreover, the pairing correlations are larger on a $`t`$-$`U`$-$`J`$ ladder than on a $`t`$-$`J_{\text{eff}}`$ ladder in which $`J_{\text{eff}}`$ has been adjusted so that the two models have the same spin gap at half-filling. This enhancement of the pairing correlations is associated with an increase in the pair-binding energy and the pair mobility in the $`t`$-$`U`$-$`J`$ model, and points to the importance of the charge transfer nature of the cuprate systems. Various ab initio quantum chemistry calculations as well as model Hamiltonian studies have been used to determine the electronic properties of Cu-oxide clusters. In particular, these calculations have provided parameters for simpler, effective one-band Hubbard and $`t`$-$`J`$ models, which have then been used to study many-body correlations in larger systems. However, both the one-band Hubbard and the $`t`$-$`J`$ models differ in an essential manner from the high $`T_c`$ cuprates, which are known to be charge transfer insulators in their undoped state. Thus, the one-band Hubbard model at half-filling is characterized by a Mott-Hubbard gap which is set by $`U`$, while in the $`t`$-$`J`$ model $`U`$ is taken to infinity with the constraint of no double occupancy. Therefore, while Coulomb fluctuations associated with double occupancy of a site are controlled by $`U`$ in the Hubbard model, $`U`$ also determines the strength of the exchange coupling: in the Hubbard model, as $`U`$ increases beyond the bandwidth, $`J`$ decreases as $`4t^2/U`$. Although $`J`$ is an independent parameter in the $`t`$-$`J`$ model, $`U`$ is infinite in this model, suppressing charge fluctuations. While we believe that the basic pairing mechanism arises from the exchange correlations, the charge transfer nature of the cuprates can play an essential role in the doped systems, where it allows for a more flexible arrangement between $`J`$ and $`U`$ than reflected in either the one-band Hubbard or $`t`$-$`J`$ models. To explore this, we have carried out density-matrix renormalization group (DMRG) calculations of the pairing correlations on two-leg $`t`$-$`U`$-$`J`$ ladders. Ladders are known to provide model systems which exhibit various phenomena similar to those of the cuprates. In particular, when doped away from half-filling they are known to have power-law pairing correlations which have opposite, $`d_{x^2-y^2}`$-like, signs between the rung-rung and rung-leg correlations. These correlations have previously been investigated for both Hubbard and $`t`$-$`J`$ models. Here we will study a generalized $`t`$-$`U`$-$`J`$ model which includes both an onsite Coulomb repulsion $`U`$ and a nearest neighbor exchange $`J`$. While both Hubbard and $`t`$-$`J`$ ladders show pairing correlations when doped, we find that these correlations can be significantly enhanced in a model with both $`U`$ and $`J`$. We argue that, in fact, a $`t`$-$`U`$-$`J`$ model is appropriate for a charge-transfer material. The basic one-band Hubbard model is characterized by a one-electron nearest-neighbor hopping $`t`$ and an onsite Coulomb interaction $`U`$.
$$H=-t\sum _{\langle ij\rangle ,\sigma }\left(c_{i\sigma }^{\dagger }c_{j\sigma }+c_{j\sigma }^{\dagger }c_{i\sigma }\right)+U\sum _in_{i\uparrow }n_{i\downarrow }.$$ (1) Here $`c_{i\sigma }^{\dagger }`$ creates an electron with spin $`\sigma `$ on site $`i`$ and $`\langle ij\rangle `$ runs over nearest neighbor sites. As is well known, when $`U/t`$ is large, a strong coupling expansion of Eq. (1) leads to the $`t`$-$`J`$ Hamiltonian $$H=-t\sum _{\langle ij\rangle ,\sigma }\left(c_{i\sigma }^{\dagger }c_{j\sigma }+c_{j\sigma }^{\dagger }c_{i\sigma }\right)+J\sum _{\langle ij\rangle }\left(𝐒_i𝐒_j-\frac{n_in_j}{4}\right)-\frac{J}{4}\sum _{i,\delta \ne \delta ^{\prime },\sigma }\left(c_{i+\delta ,\sigma }^{\dagger }c_{i,\overline{\sigma }}^{\dagger }c_{i,\overline{\sigma }}c_{i+\delta ^{\prime },\sigma }-c_{i+\delta ,\sigma }^{\dagger }c_{i,\overline{\sigma }}^{\dagger }c_{i,\sigma }c_{i+\delta ^{\prime },\overline{\sigma }}\right)$$ (3) with $`J=4t^2/U`$, where $`\delta ,\delta ^{\prime }`$ are vectors connecting nearest neighbor sites and $`\overline{\sigma }`$ denotes the spin opposite to $`\sigma `$. Here there is the important restriction that no site can be doubly occupied. Typically in Eq. (3), $`t`$ and $`J`$ are treated as independent parameters, and for doping near half-filling the latter three-site term is dropped. Now, while these effective models both describe certain aspects of the cuprate systems, they lack the flexibility to describe an important feature that arises from the charge-transfer nature of these materials. Specifically, in the insulating state the one-band Hubbard model at large $`U`$ has a Mott-Hubbard gap set by $`U`$ rather than a charge-transfer gap set by the difference in the oxygen and copper site energies. Furthermore, when $`U`$ is large, $`J\simeq 4t^2/U`$ decreases as $`U`$ increases rather than saturating at a value set by the charge-transfer gap. That is, in strong coupling, the three-band Hubbard model gives $$J=4\left(\frac{t_{\text{pd}}^2}{\mathrm{\Delta }_{\text{pd}}}\right)^2\left[\frac{1}{U}+\frac{1}{\mathrm{\Delta }_{\text{pd}}}\right]$$ (4) with $`t_{\text{pd}}`$ the Cu ($`d_{x^2-y^2}`$) - O($`p\sigma `$) hopping, $`\mathrm{\Delta }_{\text{pd}}`$ the Cu-O site energy difference and $`U`$ the Cu Coulomb energy. There are in fact further contributions to Eq. (4) from O-O hopping terms, as well as modifications due to O and Cu-O Coulomb interactions. However, the basic point is that when $`U`$ is large compared to the effective Cu-Cu hopping $`t_{\text{pd}}^2/\mathrm{\Delta }_{\text{pd}}`$, the exchange remains finite rather than going to zero. Likewise, in the $`t`$-$`J`$ model, while $`J/t`$ can be set to a physical value, one has in effect an infinite onsite Coulomb repulsion arising from the restriction of no double occupancy. The suppression of double occupancy reduces the mobility of the pairs, missing the physics associated with the partial occupation of the O sites surrounding a Cu. To address these limitations, we will study a $`t`$-$`U`$-$`J`$ model in which there is a finite Coulomb interaction and an explicit exchange term $`J`$. In the limit in which $`J=0`$ this is just the one-band Hubbard model, while in the limit $`U/t\gg 1`$ it goes over to the $`t`$-$`J`$ model. The DMRG calculations reported here have been carried out on open ended ladders (up to $`2\times 48`$ sites) keeping up to 800 states, so that the maximum weight of the discarded density matrix eigenvalues is $`10^{-6}`$. We first examine the rung-rung pair-field correlation function $$D(\ell )=\langle \mathrm{\Delta }_{i+\ell }\mathrm{\Delta }_i^{\dagger }\rangle $$ (5) for a doped (8 holes) $`2\times 32`$ ladder.
The operator $$\mathrm{\Delta }_i^{\dagger }=c_{i1\uparrow }^{\dagger }c_{i2\downarrow }^{\dagger }-c_{i1\downarrow }^{\dagger }c_{i2\uparrow }^{\dagger }$$ (6) creates a singlet pair on the $`i^{\text{th}}`$ rung and $`\mathrm{\Delta }_{i+\ell }`$ destroys it on the $`(i+\ell )^{\text{th}}`$ rung. A similar calculation in which a singlet pair is created on the $`i^{\text{th}}`$ rung and a singlet pair is destroyed on one of the legs at $`i+\ell `$ has an opposite sign, indicating the $`d_{x^2-y^2}`$-like structure of the pairing. Because of the finite length of the ladder, we have kept $`\ell \le 12`$ in the plots of $`D(\ell )`$, with the measurements made in the central portion of the ladder. In this region the effects of the open ends are negligible. In Fig. 1 we show the effect of adding an additional exchange term $`J`$ to a Hubbard model with $`U=6`$. Here and in the following we will measure energy in units of $`t`$. As seen, the addition of $`J`$ clearly enhances the pairing. In all of the plots it is important to recognize that the pair has an internal structure, so that $`\mathrm{\Delta }_i^{\dagger }`$ and $`\mathrm{\Delta }_{i+\ell }`$ have only a partial overlap with the state in which a pair is added at the $`i^{\text{th}}`$ rung or removed from the $`(i+\ell )^{\text{th}}`$ rung, and the basic size of $`D(\ell )`$ is reduced by the square of this overlap. As seen in Fig. 1, adding an additional exchange strongly enhances the pair-field correlations. Similarly, in Fig. 2a, we examined the effect of $`U`$ on the pairing correlations of a $`t`$-$`U`$-$`J`$ ladder with $`J=0.25`$. For $`U\gg 1`$, we have the usual $`t`$-$`J`$ result. As $`U`$ initially decreases, there is again a significant enhancement of the pairing correlations, but eventually, as $`U`$ decreases below the bandwidth, the pairing correlations are reduced. This is also shown in Fig. 2b, where we have plotted $$\overline{D}=\sum _{\ell =8}^{12}D(\ell )$$ (7) versus $`U`$ for $`J=0.25`$. Here $`\overline{D}`$ reaches a maximum for $`U\approx 6`$. One would, of course, expect that the pairing correlations would depend on the total effective exchange interaction, both the explicit “$`J`$” exchange and the additional exchange associated with a finite $`U`$. Thus, in the $`t`$-$`U`$-$`J`$ model, as $`U`$ initially increases, the effective exchange increases, and then as $`U`$ exceeds the bandwidth its contribution to the exchange decreases as $`4t^2/U`$. However, there is more to this than just the enhancement of the exchange interaction, which can be seen by comparing the two models. A half-filled Hubbard ladder with $`U=6`$ and $`J=0.25`$ has a spin gap $`\mathrm{\Delta }_s=0.22`$, corresponding to an effective exchange $`J_{\text{eff}}\simeq 2\mathrm{\Delta }_s=0.44`$. Using this value for the exchange in a $`t`$-$`J`$ model, we have calculated the pair-field correlation function $`D(\ell )`$ in Fig. 3 and compared it with the pair-field correlations found for the corresponding $`t`$-$`U`$-$`J`$ model. Although both of these models have the same spin gap at half-filling, it is clear that the $`t`$-$`U`$-$`J`$ ladder has significantly stronger pairing correlations. In order to understand the reasons for this, we have calculated the pair-binding energy and the pair mobility for both of these models. The pair-binding energy is $$E_{\text{pb}}=2E_0(1)-E_0(2)-E_0(0)$$ (8) with $`E_0(n)`$ the ground-state energy with $`n`$ holes. We find $`E_{\text{pb}}`$ is equal to $`0.34`$ for the $`t`$-$`U`$-$`J`$ model with $`U=6`$ and $`J=0.25`$.
For the $`t`$-$`J_{\text{eff}}`$ ladder with $`J_{\text{eff}}=0.44`$, adjusted so that the two models have the same spin gap at zero doping, the pair-binding energy is $`0.23`$. We have also calculated the effective hopping $`t_{\text{eff}}`$ of a hole pair from the dependence of $$ϵ_p(L_x)=E_0(2)-E_0(0)$$ (9) on the length of the ladder, for ladders with $`L_x`$ up to 48. In ladders with open boundary conditions, $`ϵ_p(L_x)`$ varies as $$ϵ_p(L_x)=ϵ_p(\mathrm{})+t_{\text{eff}}\frac{\pi ^2}{\left(L_{\text{eff}}+1\right)^2},$$ (10) where the effective length differs from the actual ladder length $`L_x`$ because of end effects. For large enough systems, the difference $`L_{\text{eff}}-L_x=\delta L`$ tends to a constant and is treated as a fitting parameter. Fig. 4 shows the results for the $`t`$-$`U`$-$`J`$ and the $`t`$-$`J_{\text{eff}}`$ models. The effective hopping, given by the slope divided by $`\pi ^2`$, is $`t_{\text{eff}}=0.99`$ for the $`t`$-$`U`$-$`J`$ ladder and $`t_{\text{eff}}=0.39`$ for the $`t`$-$`J_{\text{eff}}`$ ladder. The enhancement of the effective pair hopping which occurs when $`U`$ is finite can be understood as arising from virtual states involving doubly occupied sites. An example of this is illustrated in Fig. 5. Here a pair of holes on the top rung hops to the bottom rung via a set of intermediate states. In this sequence, the second intermediate state, shown in the middle of the figure, has a doubly occupied site. In the $`t`$-$`J`$ model this would not be allowed, leading to a reduction in the effective pair hopping. This effect not only enhances the pair-field correlations on the $`t`$-$`U`$-$`J`$ ladder, but, we believe, would also act to reduce the stripe stiffness in the 2D $`t`$-$`J`$ problem. This would favor a $`d_{x^2-y^2}`$-pairing state over the striped state we have typically found in DMRG calculations on $`n`$-leg $`t`$-$`J`$ ladders. Thus we conclude that the charge transfer nature of the cuprates can be more appropriately described using a $`t`$-$`U`$-$`J`$ model. Furthermore, this model exhibits enhanced pairing correlations due to (1) an additional exchange coupling reflecting the exchange path in which there is a virtual double occupancy on the oxygen rather than the Cu, and (2) an enhanced pair hopping allowed by a finite value of $`U`$, which reflects the alternate paths for electron transfer in the charge transfer system. We wish to thank D. Duffy and M. Fisher for helpful discussions. SD acknowledges support from the Swiss National Science Foundation and the ITP under NSF grant PHY94-07194. DJS and SRW wish to acknowledge partial support from the US Department of Energy under Grant No. DE-FG03-85ER45197.
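The finite-size analysis behind Fig. 4 amounts to a three-parameter fit of Eq. (10). The following Python sketch (our own illustration; the ε_p values are synthetic data generated from Eq. (10) itself, not the DMRG energies of the paper) shows how $`t_{\text{eff}}`$ is recovered from $`ϵ_p(L_x)`$:

```python
import numpy as np
from scipy.optimize import curve_fit

def eps_p(L, eps_inf, t_eff, dL):
    """Eq. (10): finite-size dependence of the pair energy eps_p(L_x)."""
    return eps_inf + t_eff * np.pi**2 / (L + dL + 1.0)**2

# Synthetic eps_p(L_x) data standing in for the DMRG values (illustration only)
L = np.array([16, 24, 32, 40, 48], dtype=float)
rng = np.random.default_rng(0)
data = eps_p(L, -1.20, 0.99, 2.0) + rng.normal(0, 1e-4, L.size)

(eps_inf, t_eff, dL), _ = curve_fit(eps_p, L, data, p0=(-1.0, 0.5, 0.0))
print(t_eff)   # recovers the t_eff = 0.99 used to generate the data
```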
no-problem/9907/nucl-th9907084.html
ar5iv
text
# Interactions of quarkonium at low energies ## 1 INTRODUCTION The heavy quarkonium (for which we will use the generic notation $`\mathrm{\Phi }`$) has a small size ($`r\sim 1/(\alpha _sm_Q)`$) and a large binding energy ($`ϵ\sim \alpha _s^2m_Q`$). When we consider an interaction with light hadrons ($`h,h^{\prime }`$) at a low (compared to $`ϵ`$) energy, the amplitude can be expanded in multipoles: $`\mathcal{M}=\sum _ic_i\langle h^{\prime }|O_i(0)|h\rangle `$, where $`c_i`$ are the Wilson coefficients (polarizabilities), which reflect the structure of the quarkonium. The matrix elements of the gauge-invariant local operators $`O_i(x)`$ over the light hadron state contain the information about the long distance part of the process; we assume that the factorization scale in this formula is $`ϵ`$. This approach has been applied to the evaluation of inclusive cross sections of heavy quarkonium dissociation in a hadron gas. Here we evaluate exclusive cross sections of $`\pi \mathrm{\Phi }`$ interactions. Previously these interactions were addressed in Refs. ; a detailed description of our formalism and results can be found in Ref. . The leading operator in the OPE for the amplitude is $`\frac{1}{2}g^2𝐄^{a2}`$, which describes the emission of two gluons in the color-singlet state. (The magnetic coupling is suppressed by the velocity $`v\sim \alpha _s\ll 1`$ for a heavy $`\mathrm{\Phi }`$.) One can re-write this operator in the Lorentz covariant form, $`\frac{1}{2}g^2𝐄^{a2}=\frac{4\pi ^2}{9}\theta _\mu ^\mu +\frac{1}{2}g^2\theta _{00}^{(G)}`$. As will be discussed below, the scale anomaly implies that the matrix elements of the stress tensor trace, $`\theta _\mu ^\mu =-(9g^2/32\pi ^2)(G_{\mu \nu }^a)^2`$, do not depend on the coupling $`g^2`$ and dominate over the matrix elements of the gluon tensor operator, $`\theta _{00}^{(G)}`$, which is manifestly suppressed by $`g^2`$. In other words, the scale anomaly effectively eliminates the factor of $`g^2`$ in the amplitude! ## 2 $`\pi \mathrm{\Phi }`$ INTERACTIONS Let us consider $`\mathrm{\Phi }`$ interactions with pions, which are the dominant degrees of freedom at low energies/temperatures. Since $`\theta _\mu ^\mu `$ is scale invariant, we can match it onto its expression in terms of the effective pion lagrangian, and find a beautiful relation $`\langle \pi ^+(p^{\prime })|\theta _\mu ^\mu |\pi ^+(p)\rangle =(p^{\prime }-p)^2=t`$, valid in the chiral limit. In the leading order in OPE, the amplitude of the $`\pi \mathrm{\Phi }`$ elastic scattering can be written as $$\mathcal{M}^{\mathrm{\Phi }\pi }=\overline{d}_2\frac{a_0^2}{ϵ_0}\frac{4\pi ^2}{9}\langle \pi ^+(p^{\prime })|\theta _\mu ^\mu |\pi ^+(p)\rangle =\overline{d}_2\frac{a_0^2}{ϵ_0}\frac{4\pi ^2}{9}tF(t),$$ (1) where, in the polarizability $`\overline{d}_2a_0^2/ϵ_0`$, we have factored out the Bohr radius $`a_0`$ and the Rydberg energy $`ϵ_0`$ of the quarkonium, respectively; $`F(t)`$ is the pion formfactor in the scalar-isoscalar channel, which takes account of the resonances in this channel. The resulting elastic cross section of a $`\mathrm{\Phi }`$ with mass $`M`$, $$\sigma ^{\mathrm{el}}(s)=\frac{1}{4\pi s}\frac{M^2}{4p_{\mathrm{cm}}^2}\left(\overline{d}_2\frac{a_0^2}{ϵ_0}\right)^2\left(\frac{4\pi ^2}{9}\right)^2\int _0^{4p_{\mathrm{cm}}^2}d(-t)\,t^2|F(t)|^2,$$ (2) appears to be very small (see Fig. 1) because of the factor $`a_0^4/ϵ_0^2`$ stemming from the double-dipole nature of the interaction. Moreover, it is further suppressed by the dependence of the cross section on the momentum transfer $`t`$, dictated by chiral symmetry. In Fig.
2 we show the $`\pi \mathrm{\Phi }`$ elastic cross section for the $`J/\psi `$ case; one can see that the cross section is very small. This result implies that the $`J/\psi `$’s produced in heavy ion collisions interact with the surrounding $`\pi `$ gas very weakly. The $`p_T`$ broadening of the $`\mathrm{\Phi }`$’s by interactions inside the pion gas is therefore negligible. Next we consider the inelastic excitation process, $`\pi \mathrm{\Phi }\to \pi \mathrm{\Phi }^{\prime }`$. The energy transfer in this process is of the order of the binding energy, $`\mathrm{\Delta }=M^{\prime }-M=O(ϵ_0)`$, which may invalidate our starting assumption of factorization. Fortunately, the double-dipole form remains intact, but becomes non-local in time. We replace the gluon energy appearing in the energy denominator with the typical value, $`\mathrm{\Delta }`$, as an approximation, to make the gluonic operator local: $$\mathcal{M}^{\pi \mathrm{\Phi }\to \pi \mathrm{\Phi }^{\prime }}\simeq \frac{1}{3N_c}\langle \varphi ^{\prime }|r^i\frac{1}{H_a+ϵ-\mathrm{\Delta }}r^i|\varphi \rangle \langle \pi |\frac{1}{2}g^2𝐄^2|\pi \rangle \equiv \overline{d}_2^{\prime }\frac{a_0^2}{ϵ_0}\langle \pi |\frac{1}{2}g^2𝐄^2|\pi \rangle ,$$ (3) where $`\varphi `$ ($`\varphi ^{\prime }`$) is the internal wave function of $`\mathrm{\Phi }`$ ($`\mathrm{\Phi }^{\prime }`$), and $`H_a(r)`$ is the effective hamiltonian describing the intermediate, $`SU(N_c)`$ color-adjoint quark–anti-quark state. With this amplitude we get the transition cross section shown in Fig. 2. Our replacement of the gluon energy in the energy denominator is an approximation with an accuracy that cannot be evaluated a priori. We can, however, apply our formula to the decay process $`\psi ^{\prime }\to J/\psi \pi \pi `$ and compare our results, $`\mathrm{\Gamma }^{\pi \pi }`$=260 (70) keV obtained with (without) taking account of the formfactor $`F(s)`$, with the experimentally measured value of 135$`\pm `$20 keV. From this comparison we conclude that our calculations are probably valid up to a factor of 2. Our results support the idea that the interactions of heavy quarkonia in the hadron gas and in the quark-gluon plasma are very different. ## 3 ONIUM-ONIUM SCATTERING AT LOW ENERGY Low-energy onium-onium scattering is a very interesting subject, which was addressed by several authors . One may regard this problem as a first step toward the understanding of nuclear force from QCD. Here we derive a sum rule for this interaction . After the multipole expansion for the interactions of both $`\mathrm{\Phi }`$’s with the gluon field is performed, the potential between them can be expressed through the $`\theta _\mu ^\mu `$ correlator: $`V(R)=-\mathcal{M}^{\mathrm{\Phi }\mathrm{\Phi }}(R)`$ $`=`$ $`-i{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}dt\left(\overline{d}_2{\displaystyle \frac{a_0^2}{ϵ_0}}\right)^2\left({\displaystyle \frac{4\pi ^2}{b}}\right)^2\langle 0|\text{T}\theta _\mu ^\mu (x)\theta _\nu ^\nu (0)|0\rangle `$ (4) $`=`$ $`-\left(\overline{d}_2{\displaystyle \frac{a_0^2}{ϵ_0}}\right)^2\left({\displaystyle \frac{4\pi ^2}{b}}\right)^2{\displaystyle \int _0^{\mathrm{\infty }}}d\sigma ^2\rho _\theta (\sigma ^2){\displaystyle \frac{1}{4\pi R}}e^{-\sigma R},`$ where we have introduced the spectral function for the $`\theta _\mu ^\mu `$ correlator as $`\langle 0|\text{T}\theta _\mu ^\mu (x)\theta _\nu ^\nu (0)|0\rangle =\int d\sigma ^2\rho _\theta (\sigma ^2)\mathrm{\Delta }_F(x;\sigma ^2)`$ with the coordinate-space scalar Feynman propagator, $`\mathrm{\Delta }_F`$; $`b=(11N_c-2N_f)/3=9`$.
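Passing from the spectral superposition in Eq. (4) to the sum rule derived next rests on the elementary integral $`\int d^3𝐑\,e^{-\sigma R}/(4\pi R)=1/\sigma ^2`$ for each mass $`\sigma `$. A two-line symbolic verification in Python (our own check, using sympy):

```python
import sympy as sp

R, sigma = sp.symbols('R sigma', positive=True)
# angular integration already done: d^3R -> 4*pi*R**2 dR
integral = sp.integrate(4*sp.pi*R**2 * sp.exp(-sigma*R) / (4*sp.pi*R),
                        (R, 0, sp.oo))
print(sp.simplify(integral))   # prints 1/sigma**2
```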
An important theorem for the $`\theta _\mu ^\mu `$ correlator relates it to the non-perturbative energy density of the QCD vacuum: $$i\int d^4xe^{iqx}\langle 0|\text{T}\theta _\mu ^\mu (x)\theta _\nu ^\nu (0)|0\rangle =\int d\sigma ^2\frac{\rho _\theta (\sigma ^2)}{\sigma ^2-q^2-iϵ}\stackrel{q\to 0}{\longrightarrow }4\langle 0|\theta _\mu ^\mu (0)|0\rangle =16ϵ_{\mathrm{vac}}.$$ (5) Since the vacuum energy density is divergent in perturbation theory, we define the r.h.s. by subtracting the perturbative part. This relates the integral of the spectral density to $`ϵ_{\mathrm{vac}}`$ as $`\int (d\sigma ^2/\sigma ^2)(\rho _\theta ^{\mathrm{phys}}(\sigma ^2)-\rho _\theta ^{\mathrm{pt}}(\sigma ^2))=16ϵ_{\mathrm{vac}}`$. In the heavy quark limit, this relation leads to the following sum rule for the potential $$\int _r^{\mathrm{\infty }}d^3𝐑\,(V_\theta (R)-V_\theta ^{\mathrm{pt}}(R))=\left(\overline{d}_2\frac{a_0^2}{ϵ_0}\right)^2\left(\frac{4\pi ^2}{b}\right)^216|ϵ_{\mathrm{vac}}|,$$ (6) where $`r`$ is the size of the onium. This sum rule relates the overall strength of the interaction between the dipoles to the energy density of the QCD vacuum. The short- and long-distance limits of the potential are analyzed in Ref. . In particular, it appears that the long-distance limit of the potential has an interesting $`R`$ dependence, $`R^{-5/2}\mathrm{exp}(-2\mu _\pi R)`$ ($`\mu _\pi `$ is the pion mass), with a non-trivial dependence on the numbers of colors $`N_c`$ and flavors $`N_f`$, $`(N_f^2-1)^2/(11N_c-2N_f)^2`$.
no-problem/9907/cond-mat9907180.html
ar5iv
text
# Nesting properties and anisotropy of the Fermi surface of LuNi2B2C ## Abstract The rare earth nickel borocarbides, with the generic formula $`R`$Ni<sub>2</sub>B<sub>2</sub>C, have recently been shown to display a rich variety of phenomena. Most striking has been the competition between, and even coexistence of, antiferromagnetism and superconductivity. We have measured the Fermi surface (FS) of LuNi<sub>2</sub>B<sub>2</sub>C, and shown that it possesses nesting features capable of explaining some of the phenomena experimentally observed. In particular, it had previously been conjectured that a particular sheet of FS is responsible for the modulated magnetic structures manifest in some of the series. We report the first direct experimental observation of this sheet. The discovery of superconductivity in the rare earth nickel borocarbides has raised fundamental questions regarding its nature. The presence of moment-bearing rare-earth atoms in addition to high Ni contents makes the very existence of superconductivity surprising, and some of the phenomena associated with its interplay with magnetism have not been observed in any other superconducting material. With the generic formula $`R`$Ni<sub>2</sub>B<sub>2</sub>C, some members of the system ($`R=`$Y, Lu, Tm, Er, Ho and Dy) exhibit moderately high values of T<sub>c</sub> ($`\sim `$16 K for $`R`$=Y, Lu), and are either antiferromagnetic or non-magnetic at low temperature. Two other systems, with $`R=`$Gd, Tb, are not superconducting, while the Yb compound shows heavy fermion behavior. One of the most striking observations has been the antiferromagnetic ordering, and its competition (and even coexistence) with superconductivity, conjectured to be driven by a nesting feature in the Fermi surface (FS). Neutron and x-ray scattering techniques have revealed incommensurate magnetic structures in superconducting Er and Ho compounds, and in the non-superconducting Tb and Gd compounds, characterized by a wave vector $`𝐐_\text{m}\approx (0.55,0,0)`$. In the Lu compound, the 4$`f`$ band is fully occupied and it is therefore non-magnetic. Since the $`f`$ electrons occupy localized core-like states, the FS is expected to be similar to that of the other compounds; unfettered by the complications introduced by the presence of magnetism, the Lu compound is ideal for investigating the origin of the superconductivity in these materials. Where there is magnetic order, the mutual influence of the moments must occur through indirect Ruderman-Kittel-Kasuya-Yosida (RKKY) type interactions; the resulting magnetic structures would then be determined by maxima in the generalized magnetic susceptibility of the conduction electrons. Band theoretical calculations of this susceptibility, $`\chi (𝐪)`$, in LuNi<sub>2</sub>B<sub>2</sub>C show a peak around the aforementioned vector, $`𝐐_\text{m}`$, indicating that the magnetic ordering in those other compounds is a result of a common FS nesting feature. Moreover, the presence of strong Kohn anomalies in the phonon dispersion curves of LuNi<sub>2</sub>B<sub>2</sub>C for wave vectors close to $`𝐐_\text{m}`$ has lent additional support to this conjecture. Interest in the FS topology has been enhanced by the observation of a four-fold symmetry in the anisotropy of the upper critical field of LuNi<sub>2</sub>B<sub>2</sub>C. This has been interpreted in terms of (a) the anisotropy of the underlying FS topology, and (b) the presence of possible three-dimensional $`d`$-wave superconductivity.
In the absence of any corroborative evidence for the latter suggestion, we shall concentrate on the former. The appearance of the square flux-line lattice (FLL) has been successfully explained through nonlocal corrections to the London theory. In general, there is a nonlocal relationship between the supercurrent, $`𝐣`$, and the vector potential, $`𝐀`$, of the magnetic field in a superconductor, arising from the spatial extent ($`\xi _0`$) of the Cooper pair. In this scenario, the shape of the FS (which influences the coupling of the supercurrents with the crystal lattice) is responsible for the square FLL observed at high fields in the mixed state and which reverts to the triangular FLL at lower fields. However, despite extensive theoretical work based on the band structures of the non-magnetic Y and Lu compounds, very few experimental determinations of the electronic structure exist, and many of the predictions, including a “nestable” FS sheet, remain unverified. Most importantly, the de Haas-van Alphen experiments performed on YNi<sub>2</sub>B<sub>2</sub>C in the superconducting and normal states have not delivered the topological information necessary to isolate any such features in the FS. In this Letter, we present a joint experimental and theoretical study of the FS of LuNi<sub>2</sub>B<sub>2</sub>C. The experiment reveals the first direct evidence for the presence of a sheet capable of nesting. Furthermore, we will show that the anisotropy of this FS, as determined by our band structure calculations, is consistent with Kogan’s model for the FLL. The occupied momentum states, and hence the FS, can be accessed via the momentum distribution using the 2-Dimensional Angular Correlation of electron–positron Annihilation Radiation (2D-ACAR) technique. A 2D-ACAR measurement yields a 2D projection (integration over one dimension) of an underlying two-photon momentum density, $`\rho (𝐩)=\underset{j,𝐤,𝐆}{\sum }n^j(𝐤)|C_{𝐆,j}(𝐤)|^2\delta (𝐩-𝐤-𝐆),`$ (1) where $`n^j(𝐤)`$ is the electron occupation density in $`𝐤`$-space in the $`j^{\text{th}}`$ band, the $`C_{𝐆,j}(𝐤)`$ are the Fourier coefficients of the interacting electron-positron wave function product and the delta function expresses the conservation of crystal momentum. $`\rho (𝐩)`$ is a single-centered distribution having the full point symmetry of the crystal lattice in question. In a metal, this distribution contains discontinuities at various points $`𝐩_\text{F}=(𝐤_\text{F}+𝐆)`$ when a band crosses the Fermi level, $`E_\text{F}`$. When the FS is of paramount interest, the Lock-Crisp-West procedure is often followed. Here the various FS discontinuities are superimposed by folding $`\rho (𝐩)`$ (or its measured projections) back into the first Brillouin zone (BZ). The result is a new $`𝐤`$-space density, $`\underset{j,𝐤}{\sum }n^j(𝐤)\underset{𝐆}{\sum }|C_{𝐆,j}(𝐤)|^2`$, which aside from the factor $`\underset{𝐆}{\sum }|C_{𝐆,j}(𝐤)|^2`$ (usually a weak function of $`𝐤`$ within each band) is simply the electron occupation density. This well-established technique has recently been used by some of the present authors to identify and measure the FS topology in pure Y and in disordered Gd–Y alloys. The virtue of the 2D-ACAR technique in such studies is that it reveals directly the shape of the FS, and hence any propensity for nesting. The experiments were performed on a single crystal of LuNi<sub>2</sub>B<sub>2</sub>C, grown by a Ni<sub>2</sub>B flux method.
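As an aside, the LCW folding step just described lends itself to a compact numerical illustration. The following is a minimal numpy sketch on a toy two-dimensional, free-electron-like momentum density; the lattice constant, the Fermi radius and the 10% weight of the higher-zone replicas (a stand-in for the $`|C_{𝐆,j}(𝐤)|^2`$ factors) are arbitrary illustrative choices, not parameters of LuNi<sub>2</sub>B<sub>2</sub>C or of our actual analysis.

```python
import numpy as np

# Toy Lock-Crisp-West (LCW) folding: a momentum density rho(p) sampled
# over several zones is mapped back into the first Brillouin zone,
# k = p (mod G), to recover an occupation-like density n(k).
a = 1.0                      # arbitrary lattice constant
G = 2 * np.pi / a            # reciprocal-lattice vector length
kf = 0.45 * G                # arbitrary "Fermi radius" of a toy band

p = np.linspace(-2 * G, 2 * G, 401)
px, py = np.meshgrid(p, p, indexing="ij")

# central occupied disk, plus weak replicas in higher zones standing in
# for the |C_{G,j}(k)|^2 weights of Eq. (1)
rho = (px**2 + py**2 < kf**2).astype(float)
for gx in (-G, 0.0, G):
    for gy in (-G, 0.0, G):
        if gx or gy:
            rho += 0.1 * ((px - gx)**2 + (py - gy)**2 < kf**2)

# fold every p into the first BZ and average rho over equivalent points
kx = (px + G / 2) % G - G / 2
ky = (py + G / 2) % G - G / 2
box = [[-G / 2, G / 2], [-G / 2, G / 2]]
w, _, _ = np.histogram2d(kx.ravel(), ky.ravel(), bins=40, range=box,
                         weights=rho.ravel())
cnt, _, _ = np.histogram2d(kx.ravel(), ky.ravel(), bins=40, range=box)
occupancy = w / np.maximum(cnt, 1)   # ~ electron occupation density n(k)
print(occupancy.max(), occupancy.min())
```

In a real measurement the same folding is applied to the measured 2D projections rather than to $`\rho (𝐩)`$ itself, and the FS is then calipered from the steps in the folded density.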
The sample was cooled to $`\sim `$50 K, at which temperature the overall momentum resolution of the Bristol 2D-ACAR spectrometer corresponded to $`\sim `$10% of the larger BZ dimension. Projections were measured along two different crystallographic directions, $`a`$ and $`c`$. More than 400 million (effective) counts were collected in each spectrum. The spin-dependent momentum densities were calculated using the linearized muffin-tin orbital (LMTO) method within the atomic sphere approximation (ASA), including combined-correction terms. The exchange-correlation part of the potential was described in the local density approximation. The self-consistent band-structure was calculated at 594 $`k`$-points in the irreducible 1/16<sup>th</sup> part of the BZ, which to simplify the calculations had the same volume as the standard BZ but a simpler tetragonal shape. We used a basis set of $`s`$, $`p`$, $`d`$ and $`f`$ functions for the Lu, and a reduced ($`s`$,$`p`$,$`d`$) basis set for Ni, B and C. For the calculation of the density-of-states (DOS), and the FS, a denser mesh of 3696 $`k`$-points was used. The lattice parameters used were experimental values of $`a=3.464`$ Å and $`c=10.631`$ Å. Electronic wave functions from the dense $`k`$-mesh were then used to generate the electron-positron momentum densities, using 1149 reciprocal lattice vectors. A full description of the technique is given in the references. The band-structure obtained (not shown) is very similar to those of Pickett and Singh and Mattheiss, and shows that the electronic structure is certainly three-dimensional (in contrast to the two-dimensional features exhibited in, e.g., the high-T<sub>c</sub> cuprates); experimentally, one observes an almost isotropic resistivity, which is consistent with this picture. These bands predict a rather complicated FS (shown in Fig. 1), the principal character being Lu $`d`$, with some Ni $`d`$, and B and C $`p`$ character. The first sheet is a very small electron pocket centered at $`\mathrm{\Gamma }`$, while the second is slightly larger and rather more cuboid in shape. It is the third sheet that possesses the nesting properties previously remarked upon, but it can also be seen that the second sheet has a square cross section between $`M`$ and $`R`$. In Figs. 2 and 3, the BZ electron occupancies (both calculated and experimental) are shown, projected along the $`c`$ and $`a`$ axes, respectively. The occupied states are indicated by the white areas, and unoccupied by black. The agreement between the positron experiment and the LMTO theory is excellent; one can clearly discern the presence of the $`\mathrm{\Gamma }`$-centered electron sheets. The most striking features of Fig. 2 are the electron surfaces with square cross-sections at the corners of the projected BZ. Also noteworthy are the shapes described by the contour lines around this sheet; the protuberances point towards the $`\mathrm{\Gamma }`$-point in both the calculation and experiment. Fig. 3 shows fewer features, but demonstrates that the agreement persists when the projection is along a different direction. Considerable investment has been made in establishing methods of reliably and accurately locating the FS in 2D-ACAR data. Recently, it has been demonstrated that it is possible to accurately caliper the FS from 2D-ACAR data using a criterion based upon edge-detection algorithms employing band-pass filtering or Maximum Entropy techniques. In Fig.
4 we present the experimental FS thus derived, together with a (001) section (passing through the $`\mathrm{\Gamma }`$-point) of the calculated third FS sheet. The nesting is indicated by the arrow, and is calipered from the experimental data to be $`(0.54\pm 0.02)\times (2\pi /a)`$ (our calculation gives $`0.56\times (2\pi /a)`$). Given that the experimental data represent a projection of the FS, one does not expect a perfect match between the top and bottom halves of the figure. However, the nesting feature is unequivocally revealed, and its size is in excellent agreement with the pronounced Kohn anomalies close to a wave vector $`𝐐_\text{m}\approx (0.5,0,0)`$, and the observed incommensurate ordering with $`𝐐_\text{m}\approx (0.55,0,0)`$ found in the antiferromagnetic compounds. In conjunction with the calculation, it was possible to estimate from the experiment that the fraction of the FS that would be able to participate in nesting is $`(4.4\pm 0.5)\%`$, which is consistent with the small increase in the resistivity observed when the current is along [100]. Having shown that the calculated FS topology is in good agreement with our experiment, we derived pertinent quantities from our band structure. The DOS at the Fermi energy, $`N(E_F)=4.3`$ (eV cell)<sup>-1</sup>, is similar to the values presented by Pickett and Singh and Mattheiss (4.8 (eV cell)<sup>-1</sup>). Most importantly, some of the quantities essential to Kogan’s theory can be derived from the band structure itself. For the calculation of the Fermi velocities, a special mesh was used, comprising six additional points around each original $`k`$-point, to enable accurate evaluation of the relevant derivatives. The Fermi velocities in the $`ab`$ plane and in the $`c`$ direction were $`\langle v_{F,ab}^2\rangle ^{1/2}=2.62\times 10^7`$ cm s<sup>-1</sup> and $`\langle v_{F,c}^2\rangle ^{1/2}=2.23\times 10^7`$ cm s<sup>-1</sup>, with masses $`m_{ab}=0.91`$, $`m_c=1.26`$, implying an average out-of-plane anisotropy in the upper critical field ($`(H_{c2}^{\langle 100\rangle }+H_{c2}^{\langle 110\rangle })/2H_{c2}^{\langle 001\rangle }`$) of 1.17 compared to an experimental value of 1.16. A calculation for the isovalent compound YNi<sub>2</sub>B<sub>2</sub>C predicted 1.02 for this anisotropy, in good agreement with the experimental value of 1.01 and highlighting its sensitivity to the band structure. In conclusion, we have shown experimentally that the FS topology of LuNi<sub>2</sub>B<sub>2</sub>C does support nesting, thereby accounting for the anomalies observed in its phonon spectrum, and the propensity for magnetic ordering found in some of the other rare-earth nickel borocarbides. In addition, our own calculations of the electronic structure yield a FS in excellent agreement with the experiments, whose anisotropy is consistent with the observation of a square FLL, and with the observed behavior of the upper critical field. The authors would like to thank the EPSRC (UK) for financial support, and B. Harmon and R. Evans for useful discussions. One of us (SBD) acknowledges support from the Lloyd’s of London Tercentenary Foundation. Ames Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No. W-7405-Eng-82. This work was supported by the Director for Energy Research, Office of Basic Energy Sciences.
no-problem/9907/astro-ph9907325.html
ar5iv
text
# Rapid X-ray Variability of the BL Lacertae Object PKS 2155-304 ## 1 Introduction BL Lacertae objects represent a subclass of Active Galactic Nuclei (AGNs), emitting non-thermal radiation from radio to gamma-rays, even up to TeV energies. A defining property of BL Lac objects is that the radiation is strongly variable from radio to gamma-rays on different timescales. The mechanisms responsible for the non-thermal emission over such a wide energy range are commonly believed to be synchrotron and inverse Compton scattering from plasma in a relativistic jet oriented at a small angle with respect to the line of sight. PKS 2155−304 is the brightest BL Lac object at UV wavelengths and one of the brightest in the X-ray band. It was detected in gamma-rays by the EGRET experiment on CGRO (Vestrand, Stacy & Sreekumar 1995; Sreekumar & Vestrand 1997), and it is one of the few BL Lacs observed at TeV energies (Chadwick et al. 1999). Its broad band spectrum shows two peaks: the first one is synchrotron emission peaking at UV and/or soft X-rays, as in most X-ray selected BL Lac objects (high-frequency peaked BL Lac objects, HBLs; Padovani & Giommi 1995). The other one is around the gamma-ray region and is attributed to Compton scattering by the same high energy electrons which are radiating via synchrotron. It has a very hard gamma-ray spectrum in the 0.1–10 GeV region, with a power-law spectral index of $`\alpha _\gamma \approx 0.71`$ (Vestrand et al. 1995), and a time-averaged integral flux of $`4.2\times 10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup> above 300 GeV (Chadwick et al. 1999). PKS 2155−304 has been one of the best targets of multiwavelength campaigns because of its brightness. This kind of study has proved to be a powerful tool to constrain radiation models through the study of correlated variability among different bands. The first multiwavelength campaign was performed, from radio to X-ray wavelengths, in 1991 November, by ROSAT, IUE and ground-based telescopes, and correlated variability was observed between UV and soft X-rays with the UV lagging by $`\sim `$2 hours (Edelson et al. 1995). However, the source showed a distinctly different variability behavior in the 1994 May campaign based on IUE, EUVE and ASCA data. Correlated variability was observed with larger amplitude at shorter wavelengths, and significant soft lags, i.e. the UV lagging the EUV by 1 day, and the EUV in turn lagging the X-rays by 1 day (Urry et al. 1997). Variability can be characterized by the power density spectrum (PDS hereafter) and inter-band correlations: the PDS slopes and the measured time lags impose strong constraints on radiation models. The three long-duration, high-time-resolution observations by BeppoSAX and ASCA are well suited to carrying out temporal studies. In the present paper we perform detailed timing analysis for these observations, and compare the results. A preliminary cross-correlation analysis with Monte Carlo simulations was first reported in Treves et al. (1999). We briefly summarize the observations in section 2; the light curves and variability analysis are presented in section 3, followed by the PDS analysis in section 4; in section 5 we carry out a comprehensive cross-correlation analysis with detailed Monte Carlo simulations to determine the uncertainties on inter-band lags. The physical implications of the results are discussed in section 6 and conclusions are drawn in the final section 7. ## 2 Observations The BeppoSAX payload (Boella et al.
1997a) consists of four Narrow Field Instruments (NFIs) which point in the same direction, namely one Low Energy Concentrator Spectrometer (LECS) sensitive in the 0.1–10 keV range (Parmar et al. 1997), and three identical Medium Energy Concentrator Spectrometers (MECS) sensitive in the 1.5–10 keV band (Boella et al. 1997b). Both the LECS and MECS detectors are Gas Scintillation Proportional Counters (GSPC) and are in the focus of the four identical X-ray telescopes. There are two more collimated instruments: the High Pressure Gas Scintillation Proportional Counter (HPGSPC) (Manzo et al. 1997) and the Phoswich Detector System (Frontera et al. 1997), which are not however suitable to perform temporal analysis because of the high background and limited statistics on a source like PKS 2155−304. Therefore, for the following timing analysis, only LECS and MECS data are used. BeppoSAX NFIs observed PKS 2155−304 for more than 2 days during the Performance Verification phase on 20-22 November 1996 (SAX96), and for slightly less than 1.5 days during our AO1 observation from 22 to 24 November 1997 (SAX97). The effective exposure times for the MECS and LECS were 63 ks and 22 ks for SAX97, and 108 ks and 36 ks for SAX96, respectively. The BeppoSAX data reduction procedure is described in detail by Chiappetti et al. (1999). The light curves were first presented by Giommi et al. (1998) and Chiappetti et al. (1999). In particular, in the latter work the presence of a soft lag of about 10<sup>3</sup> s and a tendency of the amplitude of variability to increase with energy were found in the SAX97 data. PKS 2155−304 was also monitored by the ASCA satellite for more than two days from 19 to 21 May 1994 (ASCA94), coordinated with multiwavelength monitoring from radio to X-rays (Pesce et al. 1997; Pian et al. 1997; Urry et al. 1997). ASCA includes two SIS and two GIS focal-plane detectors (Tanaka, Inoue, & Holt 1994). The X-ray light curve considered here – retrieved from the archive – was taken from the GIS detectors. Preliminary results were presented by Makino et al. (1996). In this paper we will perform detailed temporal analysis of the different observations and compare the relative results. The log of the three observations is shown in Table 1. ## 3 Variability Analysis We analyze the light curves with the timing analysis software package XRONOS (Stella & Angelini 1993). Unless otherwise specified, we a priori separate the energy range into the following three bands: (1) 0.1–1.5 keV as the soft energy band, referred to as the LE band; (2) 1.5–3.5 keV as the first medium X-ray band, which we refer to as the ME1 band; (3) 3.5–10 keV as the second medium energy band, i.e. the ME2 band. Note that the LE band of the ASCA observation is 0.5–1.5 keV. The light curves binned over 1000 or 2000 s are shown in Figures 1, 2 and 3 for SAX97, SAX96 and ASCA94, respectively. We compute the hardness ratios (HR) of ME1 to LE (HR1) and of ME2 to ME1 (HR2), which are also presented in the same figures. ### 3.1 Variability parameters In order to quantify the variability properties, here we summarize the general definition of the fractional $`rms`$ variability parameter $`F_{var}`$ (e.g. Rodríguez-Pascual et al. 1997). The data series $`F_i(t)`$ of the light curve has a variance $`\sigma _F^2=\frac{1}{N-1}\sum _{i=1}^N(F_i(t)-\overline{F})^2`$, where $`\overline{F}`$ is the mean count rate.
In addition, we define the expected variance, due to random errors $`\sigma _i(t)`$ associated with $`F_i(t)`$, as $`\mathrm{\Delta }_F^2=\frac{1}{N}\sum _{i=1}^N\sigma _i^2(t)`$. The excess variance, $`\sigma _{exc}`$, is then defined through the difference between the variance $`\sigma _F^2`$ and the expected variance $`\mathrm{\Delta }_F^2`$, i.e. $`\sigma _{exc}^2=\sigma _F^2-\mathrm{\Delta }_F^2`$, from which we can define the fractional $`rms`$ variability parameter as $`F_{var}=\sigma _{exc}/\overline{F}`$. The above parameters only characterize the mean variability of a source. However, a direct measurement of the fastest timescale on which the intensity can change is crucial as it may constrain the source size, and thus luminosity density, accretion efficiency or beaming parameters, and black hole mass. This requires identifying rapid variability events rather than the average variability properties. One often considers the so-called “doubling time” as a reasonable measure of the fastest and meaningful timescale of a source (e.g. Edelson 1992). More precisely, here we define the “doubling time” as $`T_2=|\frac{F\mathrm{\Delta }T}{\mathrm{\Delta }F}|`$, where $`\mathrm{\Delta }T=T_j-T_i`$, $`\mathrm{\Delta }F=F_j-F_i`$, and $`F=(F_j+F_i)/2`$, and consider the minimum value of $`T_2^{ij}`$ over any data pairs as the shortest timescale for each observation, keeping in mind that this quantity is ill-defined, strongly depending on the sampling rate, length and signal-to-noise ratio of the observation (Press 1978). The error on $`T_2^{ij}`$ is propagated through the errors on the fluxes $`F_i`$ and $`F_j`$, and we a priori neglect the value of $`T_2^{ij}`$ if the error is larger than $`20\%`$. The variability parameters defined above are reported in Table 2. ### 3.2 Results #### 3.2.1 SAX97 Figure 1 presents the light curves and hardness ratios. At the beginning of the observation PKS 2155−304 exhibited a large flare, with a variation by a factor $`\sim `$4, followed by two other events of smaller amplitude. The second flare presents similar rising and declining timescales. As shown in Table 2, the variability amplitude is to some extent different in the three bands, increasing with increasing energy ($`F_{var}`$ is 0.22, 0.27 and 0.30 in the LE, ME1 and ME2 bands, respectively). No variations on timescales of less than $`\sim `$1 hour are found. The most rapid variation event – the fastest among the three observations – occurred during the first flare, with minimum values of $`T_2`$ of about 3.4, 1.9 and 1.8 hours in the LE, ME1 and ME2 bands, respectively. We notice that these timescales are much shorter and the fluxes about $`50\%`$ higher than those of SAX96 (see Table 2), indicating faster variability at higher intensity. From the last two panels of Figure 1, one can see that HR1 presents a global trend similar to that of the intensities (see Chiappetti et al. 1999 for more details). However, no statistically significant correlation seems to be present, as HR1 has the same value during the first two peaks, which have significantly different intensities, and is smaller towards the end of the observation, although the average intensity is similar to that of the second peak. HR2 does not show any trend.
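To make the estimators of section 3.1 concrete, the following is a minimal Python sketch of the computation. It is an illustration under stated assumptions (in particular, the error on $`T_2`$ is approximated by its dominant $`\mathrm{\Delta }F`$ term), not the actual XRONOS-based implementation used for Table 2.

```python
import numpy as np

def variability_params(t, f, sig, max_rel_err=0.2):
    """Fractional rms F_var and minimum 'doubling time' T_2 of a light
    curve with times t, count rates f and 1-sigma errors sig."""
    n = len(f)
    fbar = f.mean()
    var = np.sum((f - fbar)**2) / (n - 1)    # sigma_F^2
    noise = np.mean(sig**2)                  # Delta_F^2, expected variance
    exc = var - noise                        # excess variance sigma_exc^2
    fvar = np.sqrt(exc) / fbar if exc > 0 else 0.0

    t2_min = np.inf
    for i in range(n - 1):
        for j in range(i + 1, n):
            df = f[j] - f[i]
            if df == 0.0:
                continue
            t2 = abs(0.5 * (f[i] + f[j]) * (t[j] - t[i]) / df)
            # approximate fractional error on T_2 (Delta F term only);
            # pairs with errors above 20% are discarded, as in the text
            if np.hypot(sig[i], sig[j]) / abs(df) <= max_rel_err:
                t2_min = min(t2_min, t2)
    return fvar, t2_min
```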
#### 3.2.2 SAX96 As shown in Figure 2, an approximately symmetric flare was seen in the middle of the observation, which is well resolved with similar rising and decaying time scales. A flare of lower intensity is visible at the beginning, while a larger flare probably occurred towards the end of the observation, although the observation is incomplete. Some small-amplitude variability is also detected. The $`F_{var}`$ are comparable for the ME1 and ME2 bands ($`\sim `$0.13), and are $`\sim `$30% larger than that relative to the LE band. The estimated “doubling times” are about 22, 14 and 8 hours for the LE, ME1 and ME2 bands, respectively. From Figures 1, 2 and Table 2, it is clear that during SAX96 PKS 2155−304 was in a relatively faint state with smaller amplitude and longer timescale variability, compared to SAX97. The hardness ratio HR1 shows a behavior similar to that of the light curves, in the sense that the spectrum is harder at higher intensities, while again HR2 does not follow any trend (see Figure 2). #### 3.2.3 ASCA94 The light curves and hardness ratios relative to this observation are plotted in Figure 3. A large amplitude flare, with an approximately symmetric shape, is clearly seen at the beginning of the observation, although the rising portion of the event is not fully sampled. PKS 2155−304 was more variable in this period than during the other two observations, as can be seen from $`F_{var}`$ (Table 2), with a flux intermediate between SAX97 and SAX96. The estimated “doubling times” are about 5.6, 4.8, and 4.5 hours in the LE, ME1 and ME2 bands, respectively. A significant characteristic of ASCA94 is that the hardness ratios present a trend of linear decrease over the whole period, which is a general signature that the spectra become softer when the source is fainter. Urry et al. (1997) showed the same trend through the spectral fits. ## 4 PDS Analysis AGN variability can be statistically characterized by its PDS. The PDS of a few Seyfert galaxies and of PKS 2155−304 generally behave as power laws, proportional to $`f^{-\alpha }`$ over some temporal frequency range, where $`f`$ is the temporal frequency (e.g. Edelson & Nandra 1999; Hayashida et al. 1998; Tagliaferri et al. 1991). For PKS 2155−304, the durations of the observations considered here are much longer ($`\sim `$2 days) than previous ones (e.g. EXOSAT), allowing us to determine the PDS over a range extending towards relatively lower frequencies. Because of the low exposure efficiency of the LECS ($`\sim `$20%), here we focus on the BeppoSAX MECS and ASCA light curves in the 1.5–10 keV region. The PDS analysis is carried out with the direct Fourier transform algorithm which is included in the timing series analysis package XRONOS. For these observations, the PDS is calculated for the background-subtracted light curves with 10 s time resolution, as each PDS in our cases approaches the (white) noise level before $`10^{-2}`$ Hz, clearly smaller than the Nyquist frequency of $`5\times 10^{-2}`$ Hz at the 10 s bin size. The average count rate is subtracted from the bins before the PDS is calculated. In order to improve the signal-to-noise ratio and study the mean variability properties of PKS 2155−304, the light curves are divided into several short intervals with each interval sampling 4096 points.
The SAX97 light curve presents 3 good intervals; the SAX96 light curve has 4 good intervals, while we neglect the last part of the light curve which contains a long interruption towards the end of the observation; the ASCA94 observation is divided into 4 good intervals. For each light curve, the power spectra from each interval are then averaged. An important issue, discussed in detail by Tagliaferri et al. (1991), is the data gap filling, which is unavoidable for a low orbit X-ray satellite. The gap-filling procedure could strongly affect the derived PDS slope, artificially increasing the power at high frequencies and introducing spurious quasi-periodic oscillations (QPOs) (Tagliaferri et al. 1996). In order to decrease the effect of data gaps in determining the PDS, we adopt the gap filling procedure defined as “running mean gap filling” in XRONOS. This method replaces the data gaps with the moving average of the light curve calculated in our cases over a duration of about 1.5 hours. In this way, the gaps are bridged in a smooth way, which not only simulates real events but also reduces the bias introduced by the window function. We also determined a posteriori, by considering a somewhat different duration (e.g. 2 hours), that the slope of the PDS is rather insensitive to the filling duration over which the running mean is calculated. This indicates that the running mean could follow the light curve behavior on time scales of hours as long as the gap filling duration is substantially shorter than the whole interval and longer than the data gaps, so that the gap center could be “linked” with the running mean from a relatively high number of points and the statistical fluctuations are reduced. The average PDS (after average noise subtraction) obtained in this way from each individual observation are shown in Figure 4a,b,c for the SAX97, 96 and ASCA94 light curves, respectively. These are rebinned in logarithmic intervals of 0.18 (factor 1.5) to reduce the noise and allow the estimation of error bars. This means that the first point is still the lowest frequency point, but the second point is derived by averaging the next two points, etc. In such a way the PDS appear nearly equi-spaced in a log-log diagram. Each PDS is normalized so that its integral gives the squared $`rms`$ fractional variability, $`F_{var}^2`$ (therefore the power spectrum is in units of $`F_{var}^2/\mathrm{Hz}`$), which is normalized to the squared average count rate. The expected (white) noise power level must be subtracted to obtain the $`F_{var}`$ of the light curve (this level is about 1.5, 1.6 and 1.2 for the SAX97, 96 and ASCA94 data, respectively). The error bars represent the standard deviation of the average power in each rebinned frequency interval, where the power in each bin is $`\chi ^2`$ distributed with $`2N`$ degrees of freedom, where $`N=ML`$ is the total number of points used to produce the mean power in each frequency bin (from $`M`$ intervals and $`L`$ independent Fourier frequencies). From Figure 4, one can see that each PDS shows a strong red noise spectral component which decreases with increasing frequency, without any significant narrow feature that would be indicative of periodic or quasi-periodic variability. This component approaches the noise level at $`6\times 10^{-3}`$ Hz for the SAX97, 96 light curves, and at $`1\times 10^{-3}`$ Hz for the ASCA94 data set.
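The essential steps just described (mean subtraction, an $`rms`$-normalized periodogram, and logarithmic rebinning by a factor of 1.5) can be sketched as follows. This is a simplified stand-in for the XRONOS analysis: it assumes an evenly sampled, gap-free input, so the running-mean gap filling is omitted, and the error estimate shown is one simple choice.

```python
import numpy as np

def rms_normalized_pds(rate, dt):
    """Periodogram of an evenly sampled count-rate series, normalized
    so that its integral over frequency gives the squared fractional
    rms, i.e. units of F_var^2/Hz."""
    x = rate - rate.mean()
    n = len(x)
    power = 2.0 * dt * np.abs(np.fft.rfft(x))**2 / (n * rate.mean()**2)
    freq = np.fft.rfftfreq(n, d=dt)
    return freq[1:], power[1:]        # drop the zero-frequency bin

def log_rebin(freq, power, factor=1.5):
    """Average the periodogram in logarithmically spaced bins."""
    fc, pc, pe = [], [], []
    lo = freq[0]
    while lo < freq[-1]:
        m = (freq >= lo) & (freq < lo * factor)
        if m.any():
            fc.append(freq[m].mean())
            pc.append(power[m].mean())
            pe.append(power[m].std() / np.sqrt(m.sum()))
        lo *= factor
    return np.array(fc), np.array(pc), np.array(pe)

# usage: f, p = rms_normalized_pds(counts_per_second, dt=10.0)
#        fb, pb, eb = log_rebin(f, p)
# the power-law slope then follows from a fit of log(pb) vs log(fb)
```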
In addition, we note some differences among the three PDS. The SAX97 PDS clearly shows more power than the SAX96 one at lower frequencies, indicating a flatter PDS for SAX96 (this is easy to reconcile with the fact that the $`F_{var}`$ of SAX96 ($`\sim `$0.13) is much less than that of SAX97 ($`F_{var}\approx 0.3`$)). The ASCA94 PDS has much less power than that of SAX97 over the whole range of temporal frequencies considered here, consistent with the fact that the ASCA94 PDS approaches the noise level at relatively lower frequencies. However, this does not agree with their corresponding $`F_{var}`$ values. Let us consider the origin of this discrepancy. The SAX97, 96 light curves show more or less identical amplitudes of variability over the whole observations, i.e. similar $`F_{var}`$ for each interval, while the ASCA94 light curve does not present pronounced variability after the large flare at the beginning of the observation. Thus $`F_{var}`$ for the ASCA94 dataset changes significantly from one interval to another, being about 0.23 and 0.11 for the flare and (almost) constant flux intervals of the ASCA94 light curve, respectively, but 0.35 when calculated over the whole duration. This makes the ASCA94 data set more variable if we consider it as a whole. For this reason we first compute the PDS in the different intervals, normalizing each so that its integral gives that interval’s own squared $`F_{var}`$ value, and then obtain the average PDS by averaging the power spectra from the intervals. Because the light curve of ASCA94 is characterized by different $`F_{var}`$, we should use a mean value averaged over the (four) intervals considered in deriving the average PDS. This average $`F_{var}`$, which is much smaller than those of the SAX97, 96 light curves, indeed agrees well with the average PDS. Note that instead, for the SAX97 and SAX96 light curves, the average $`F_{var}`$ from each interval is identical to that of the whole observation. In order to quantify the slope of the PDS, a power-law model is fitted to each average power spectrum in the frequency interval $`6\times 10^{-5}`$ to $`6\times 10^{-3}`$ Hz (SAX97, SAX96) or to $`1.5\times 10^{-3}`$ Hz (ASCA94). The lowest frequency point of each PDS was ignored because such points tend to be noisier, and also for comparison with previous PDS analyses. The best-fit power-law slopes are 2.2, 1.5 and 2.2 for the SAX97, 96 and ASCA94 PDS, respectively. We also compute in the same way the average PDS after the removal of a linear trend from the light curves; the power-law slopes for the “de-trended” PDS are consistent with the above values within $`1\sigma `$. The fitting details are shown in Table 3. ## 5 Cross Correlation Analysis The first clear result visible from the light curves is that the variations in the different X-ray bands are all correlated. Indeed, these intensive monitorings with high time resolution and long duration allow detailed measurements of the inter-band cross correlation properties, and in particular to make quantitative estimates of the degree of correlation and of any lags between variations at different X-ray wavelengths. Two cross correlation methods, namely the Discrete Correlation Function (DCF) and Modified Mean Deviation (MMD), are applied. In the following, a positive lag indicates the higher energy X-rays leading the lower energy ones, while a negative lag indicates the opposite. ### 5.1 Cross Correlation Analysis Technique #### 5.1.1 DCF The DCF is analogous to the classical correlation function (e.g. Press et al.
1992) except that it can work with unevenly sampled data. The DCF technique was described in detail by Edelson & Krolik (1988) and applied to PKS 2155−304 by Edelson et al. (1995) and Urry et al. (1997) to measure the time lags between the UV and X-rays during the two multiwavelength campaigns mentioned above. Here we bin the original light curves and fix the DCF resolution according to the following criteria: (1) the bin sizes in both the light curves and the DCF should be at least 3 times smaller than any possible lag; (2) the bin size should also be as large as possible to reduce the error on the DCF. The resulting DCFs, only for the LE/ME2 case, are shown in Figure 5a,c,e (left panels) for the three observations, respectively. In order to quantify any time lag, we fit the DCF with a Gaussian function plus a constant, and take the Gaussian centroid, rather than the DCF peak, as the lag between the two energy bands (see arguments by Edelson et al. 1995 and Peterson et al. 1998). The two main advantages of this are: (1) the Gaussian fit takes into account the overall symmetry of the cross-correlation distribution around the peak, reducing the possibility of spurious lags due to a particular DCF point that could originate from statistical errors; (2) we found that – under the two conditions mentioned above – the lag and its uncertainty derived from a Gaussian fit are insensitive to the bin sizes of both the light curves and the DCF. The Gaussian fits to the DCFs are also shown in the same figures, and Table 4 reports the fitting results for the LE/ME1, ME1/ME2 and LE/ME2 cases. It should be noted that a Gaussian fit – although representative of the peak position and dispersion for both the DCF and MMD – does not necessarily provide a statistically adequate fit to these functions. #### 5.1.2 MMD In order to check the results suggested by the DCF technique, we perform a similar analysis by using the MMD method introduced by Hufnagel & Bregman (1992). The MMD considers the mean deviation of the two cross correlated time series as the correlation estimator, and the minimum value of the MMD should correspond to the best correlation point (lag). Thus, unlike the DCF, it cannot be used to estimate the significance of the correlation between different wavelengths. As with the DCF, we take the centroid of a Gaussian fit as the measured lag. The MMD results with their Gaussian fits, only for the LE/ME2 case, are shown in Figure 5b,d,f (right hand panels), and the results of the fits are reported in Table 4 for the LE/ME1, ME1/ME2 and LE/ME2 cases. ### 5.2 Monte Carlo Simulations As suggested by Peterson et al. (1998), the uncertainty on the cross-correlation lag depends on both the flux uncertainties in individual measurements and the observational sampling uncertainties of the light curves. Therefore, the statistical significance of the detection of a lag cannot be assessed just by a cross-correlation analysis. In order to test the dependence of our findings on photon statistics, in this section we apply to our data the model-independent Monte Carlo simulation method introduced by Peterson et al. (1998). Because of the uncertainties just mentioned, the method considers “flux randomization” (FR) and “random subset selection” (RSS). FR assumes that the errors on fluxes resulting from the total photon number in a bin (several hundred photons in our cases) are normally distributed.
Thus FR just takes each real flux $`F_i`$ and modifies it by adding a random Gaussian deviate based on the quoted error $`\sigma _i`$ for each data point of the light curves. The modification of each data point is thus statistically independent of each of the others, and therefore the dependence of lags on flux errors can be easily assessed through the FR simulations. RSS instead tests the sensitivity of a cross correlation lag by considering only subsets of the original light curves, with no dependence on previous selections but still preserving the temporal order of the data points. The probability of random removal of any data point is $`1/e\approx 0.37`$, which is a Poisson probability. Thus each RSS realization is based on a randomly selected subset which is typically $`37\%`$ smaller than the real data set. Peterson et al. (1998) argue that RSS gives a fairly conservative estimate of the uncertainties due to sampling. We thus take the combination of FR and RSS in a single simulation to test together the sensitivity of the cross-correlation lags to flux uncertainties and sampling characteristics. We apply the DCF and the MMD to each FR/RSS Monte Carlo realization to determine individual lags, obtained from the centroid of the Gaussian fit to each independent realization. The same process is repeated 2000 times to build up a cross-correlation peak distribution (CCPD; Maoz & Netzer 1989), which is not necessarily a normal distribution (e.g. Peterson et al. 1998). The CCPDs for the three observations are displayed in Figures 6–8 (their different widths result from the different photon statistics), respectively. From the CCPD we can determine the probability that a given lag falls in some particular likelihood range. In our cases (2000 realizations), we determine the lower (upper) extrema of the $`68\%`$ and $`90\%`$ confidence ranges by taking the 320th (1680th) and 100th (1900th) smallest values from all realizations, respectively. The results of the simulations are shown in Table 5. In addition, we tested that the results are insensitive to the bin sizes of both the light curves and the cross-correlation. In Figure 9 the lags derived from the DCF/MMD methods directly and through the simulations are compared: indeed they are fully consistent within the uncertainties estimated from the FR/RSS simulations. ### 5.3 Results #### 5.3.1 SAX97 We recall that during SAX97 the source was in a relatively high state compared to SAX96, and variability was more pronounced. The inter-band correlation coefficients $`r_0`$ (see Table 4) indicate that the X-rays in different bands are highly correlated. The cross correlation analysis shows a very short soft lag between the LE and ME2 bands ($`\sim `$1000 s), while the lags for LE/ME1 and ME1/ME2 are consistent with zero (see Figure 5(a,b) and Table 4). The FR/RSS Monte Carlo simulations confirm these findings with high significance (see Figure 6 and Table 5). #### 5.3.2 SAX96 The values of $`r_0`$ derived for the LE/ME1, ME1/ME2 and LE/ME2 correlations (see Table 4) suggest that the light curves are also strongly correlated during this relatively faint state. However, in contrast to SAX97, we find significant soft X-ray lags relative to higher energy X-rays.
It is apparent from Figures 5(c,d) and 7, and from Tables 4 and 5, that the lags estimated with the DCF and MMD methods are compatible within the uncertainties of the FR/RSS Monte Carlo simulations, indicating the presence of a soft positive lag of $`\sim `$4 hours between the LE and ME2 bands. Soft lags of about 2 hours are also shown by the LE/ME1 and ME1/ME2 cross correlation functions. Note also that the soft X-ray lags in this case are the largest recorded so far for BL Lac objects in the X-rays. #### 5.3.3 ASCA94 The state of the source during the ASCA94 observation is intermediate between the two $`Beppo`$SAX observations. These data also show strong correlations among the different bands (see Table 4). The DCF and MMD analysis (see Figure 5(e,f) and Table 4) reveal soft lags intermediate between those of the SAX97 and SAX96 data sets: the LE lags the ME2 by about 0.8 hours, while the LE lags the ME1 and the ME1 lags the ME2 by $`\sim `$0.4 hours. These results are also confirmed by the FR/RSS Monte Carlo simulations (see Figure 8 and Table 5). #### 5.3.4 Comparisons The results from the three observations, corresponding to different intensity states of PKS 2155−304, suggest that the soft time lags are variable and possibly related to the source intensity, the soft lag becoming larger when the source is fainter. We illustrate this behavior in Figure 10a, where the lags between LE and ME2 are plotted against the mean fluxes in the ME2 band. A similar behavior is also present between the lags and the ratios of the maximum to the minimum count rate (see Figure 10b). As a comparison, in Figure 10c we include the upper limit to the soft lag between the 0.1–2 and the 3–6 keV bands obtained from the EXOSAT observation of 24 Oct 1985 (Tagliaferri et al. 1991). This figure shows a power-law relation (logarithmic axes) between the lags and the fluxes. This suggestive trend might give crucial clues on the emission processes and physical parameters in PKS 2155−304, and strongly calls for a comparison with the results of time-dependent emission models. We also notice that the lags ($`\tau `$) are qualitatively anti-correlated with the correlation coefficients ($`r_0`$) between the different energy bands. From Table 4, it can be seen that the $`r_0`$ of SAX97 and ASCA94 are significantly higher than those of SAX96. On the contrary, the soft lags of SAX97 and ASCA94 are smaller than those of SAX96. This anti-correlation is indeed expected: if obvious soft lags were present, the maximum amplitude of the cross correlation function would be significantly shifted away from the zero-lag point, and thus the standard correlation coefficient $`r_0`$ would decrease, at least if the cross correlation function were a smooth Gaussian function (though not necessarily in general). Therefore, variable lags in the different states are qualitatively suggested by the variable degrees of correlation even without any measured time shift. This behavior occurs also between the $`\tau `$ and $`r_0`$ within the different inter-band correlations (for example, the lags become larger as $`r_0`$ becomes smaller in SAX96).
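For concreteness, the two ingredients used repeatedly in this section, the DCF estimator of Edelson & Krolik (1988) and a single FR/RSS realization in the spirit of Peterson et al. (1998), can be sketched as follows. This is a simplified illustration (equal-width lag bins, no further refinements), not the exact pipeline behind Tables 4 and 5.

```python
import numpy as np

def dcf(t1, f1, e1, t2, f2, e2, lag_edges):
    """Discrete correlation function (Edelson & Krolik 1988), evaluated
    in the lag bins given by 'lag_edges'; here dt = t2 - t1."""
    # noise-corrected rms; assumes intrinsic variance exceeds the noise
    s1 = np.sqrt(f1.var(ddof=1) - np.mean(e1**2))
    s2 = np.sqrt(f2.var(ddof=1) - np.mean(e2**2))
    udcf = np.outer(f1 - f1.mean(), f2 - f2.mean()) / (s1 * s2)
    dt = t2[None, :] - t1[:, None]          # all pairwise time lags
    centers, values, errors = [], [], []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        m = (dt >= lo) & (dt < hi)
        if m.sum() > 1:
            centers.append(0.5 * (lo + hi))
            values.append(udcf[m].mean())
            errors.append(udcf[m].std(ddof=1) / np.sqrt(m.sum()))
    return np.array(centers), np.array(values), np.array(errors)

def fr_rss(t, f, e, rng):
    """One flux-randomization / random-subset-selection realization:
    resample epochs with replacement (unique epochs kept, ~63% of the
    points) and perturb each flux by its Gaussian error."""
    keep = np.unique(rng.integers(0, len(t), len(t)))
    return t[keep], rng.normal(f[keep], e[keep]), e[keep]

# repeating fr_rss 2000 times, recomputing the DCF and refitting a
# Gaussian plus constant to its peak builds up the CCPD of the lag
```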
Interestingly, variations of the soft lags found here in the X-ray bands are reminiscent of the variations of the UV lags with respect to the X-rays (from $`\sim `$2 hours to $`\sim `$2 days) between the 1991 and 1994 intensity states of this same source (see Introduction). ## 6 Discussion The high degree of correlation and the time lags between variations at different wavelengths provide strong constraints on the physical parameters of blazars. The previous multiwavelength monitoring campaigns of PKS 2155−304 found different variability behaviors (Edelson et al. 1995; Urry et al. 1997). In particular, the 1991 campaign showed the soft X-rays leading the UV by just $`\sim `$2 hours (the result of the cross correlation analysis was recently confirmed by Peterson et al. (1998) on the basis of simulations similar to those used in this paper). However, during the 1994 campaign the UV lagged the X-rays by $`\sim `$2 days. Time lags of the soft relative to the hard X-rays were suggested by the ASCA observations of PKS 2155−304 and MKN 421 (Makino et al. 1996; Takahashi et al. 1996), while EXOSAT observations of PKS 2155−304 showed no evidence of lags, with upper limits of a few hundred seconds (Tagliaferri et al. 1991). In order to interpret the variable inter-band time lags, the development of time-dependent models taking into account the effects of particle injection/acceleration, cooling and diffusion would be required. However, the time-dependent problem is in general very complicated and only some simplified and specific cases have been considered so far. Mastichiadis & Kirk (1997) showed that, within the assumptions of a homogeneous SSC model, an increase in the maximum energy of the injected electron population can reproduce the rapid X-ray flares as well as the spectral evolution of blazars like MKN 421. Interestingly, they also show that these features cannot be due to changes of the magnetic field or of the amount of injected electrons. In addition, Chiaberge & Ghisellini (1999) pointed out the importance, for both spectral evolution and time lags, of delays due to light crossing the radiating region. This effect is superposed on the wavelength-dependent timescales due to the different cooling times of radiating electrons. In contrast to the above studies, Georganopoulos & Marscher (1998) modelled, using a time-dependent inhomogeneous accelerating jet model, the evolution of flares during the two multiwavelength campaigns on PKS 2155−304. Within this scenario, the different variability features could be reproduced by assuming that plasma disturbances with different physical properties occur in an underlying jet characterized by the same physical parameters. The small time lag between the UV and X-ray bands in the 1991 November campaign would indicate quasi-co-spatiality of the regions radiating at these frequencies, assuming an injected electron distribution similar to that characterizing the underlying jet emission. However, the clear time lag between these same bands in the 1994 May campaign is interpreted as an indication of spatial separation of the emitting regions. The separation can be due to the propagation downstream of the electrons while they progressively radiate at lower frequencies. This, however, also requires the injected electron distribution to be narrower in energy than during the 1991 November event. Clearly, in order to pin down the origin and nature of the variations, both systematic observational trends and a thorough analysis with the different models are needed.
In this work, we have concentrated on the first aspect, but let us examine the simplest (and analytical) considerations one can draw from the observational results. Obviously, the homogeneous synchrotron self–Compton model is the simplest interpretation for the X-ray emission and overall spectral energy distribution (SED) of PKS 2155−304 (e.g. Chiappetti et al. 1999). According to this picture a quasi-stationary population of particles is responsible for a “quiescent” flux level, while flares result from a uniform injection and/or acceleration of relativistic electrons over a time interval $`\mathrm{\Delta }t`$. The evolution of the particle distribution is governed by the radiative cooling through synchrotron emission, which dominates in the X-ray band. As the radiative losses are energy dependent, that is, the radiative lifetime of the electrons decreases with the emitted frequency, low energy photons are expected to lag high energy ones (e.g. Urry et al. 1997; Takahashi et al. 1996). In particular, within this simple scenario, it is possible to relate the observed time lag to the physical parameters of the source (see also Tavecchio, Maraschi & Ghisellini 1998), and in the observer’s frame this can be expressed as $$B\delta ^{1/3}=223.5\left(\frac{1+z}{E_l}\right)^{1/3}\left[\frac{1-(E_l/E_h)^{1/2}}{\tau _{obs}}\right]^{2/3}(\mathrm{G})$$ where $`E_l`$ and $`E_h`$ refer to the low and high X-ray energies (in units of keV), and $`\tau _{obs}`$ (sec) is the observed lag between $`E_l`$ and $`E_h`$ photons. Under the synchrotron cooling assumption, the observed time lag $`\tau _{obs}`$ depends only on the magnetic field intensity $`B`$ and the bulk Doppler factor $`\delta `$ of the radiating region. If we take $`E_l`$ as 0.8 keV, $`E_h`$ as 7 keV, and $`\tau _{obs}=4.0,0.8`$, and $`0.4`$ hours for each of the observations, our results would imply $`B\delta ^{1/3}\approx `$ 0.32, 0.94 and 1.49 G for SAX96, ASCA94 and SAX97, respectively. Interestingly, Chiappetti et al. (1999) found that the model parameters derived through the fitting of the broad band spectrum during the SAX97 observation are consistent with those estimated from the observed soft lag. A further piece of information is given by the trend between the observed soft time lags and fluxes (see Figure 10). If we for example assume that $`\delta `$ has not changed, the simplest scenario would suggest that $`B`$ varied by a factor $`\sim `$5 from SAX96 to SAX97. Although qualitatively consistent with the variation in the flux, this cannot quantitatively reproduce the observed correlation. In fact, under the (simplistic) assumption of variations occurring only in the magnetic field, one would expect $`F\propto B^{1+\alpha _x}`$ and $`B\delta ^{1/3}\propto \tau _{obs}^{-\frac{2}{3}}`$, thus implying that the relation between intensity and lag is given by $`F\propto \tau _{obs}^{-\frac{2(1+\alpha _x)}{3}}`$ (assuming $`\delta `$ constant), where $`\alpha _x`$ is the X-ray spectral index. For $`\alpha _x\approx 1.0`$, we have $`F\propto \tau _{obs}^{-\frac{4}{3}}`$. For example, the change in the lag by a factor $`\sim `$10 from SAX97 to SAX96 would imply a flux variation of a factor $`\sim `$22. However, the corresponding observed flux just changed by a factor $`\sim `$1.4 in the 0.1–1.5 and 3.5–10 keV bands.
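As a check on the numbers quoted above, the $`B\delta ^{1/3}`$ estimates can be reproduced directly from the lag formula. In the short sketch below the redshift $`z=0.116`$ of PKS 2155−304 is an assumed input (it is not stated in this section):

```python
# B * delta^(1/3) from the synchrotron-cooling lag formula above,
# with E_l = 0.8 keV and E_h = 7 keV; z = 0.116 is the redshift of
# PKS 2155-304, taken as given here.
z, E_l, E_h = 0.116, 0.8, 7.0
for tau_hours in (4.0, 0.8, 0.4):           # SAX96, ASCA94, SAX97 lags
    tau = 3600.0 * tau_hours                # hours -> seconds
    b = 223.5 * ((1 + z) / E_l)**(1 / 3) \
        * ((1 - (E_l / E_h)**0.5) / tau)**(2 / 3)
    print(f"lag = {tau_hours:3.1f} h  ->  B*delta^(1/3) = {b:.2f} G")
# prints ~0.32, 0.94 and 1.49 G, matching the values quoted above
```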
The predicted relation between the lags and the fluxes under this hypothesis is also shown in Figure 10a,c, where it can be clearly seen that it is much flatter than the observed one across the three observations. Therefore, as one might expect, other physical quantities, such as the density and energy spectrum of the electron population and/or the Doppler factor, have to vary if the observed relation between flux and lag is to hold within the homogeneous synchrotron self–Compton scenario. One more interesting piece of information for PKS 2155−304 is given by the good correlation found between the variability parameters and the source intensity. These have been plotted in Figure 11, as the fractional variability parameter ($`F_{var}`$) and the “doubling time” ($`T_2`$) against the source flux in the 1.5–10 keV band: as the source gets brighter, the average amplitude of variability is larger, and the fastest variability timescale shorter. Although this is only a suggestive trend, due to the limited statistics, it seems to indicate that the properties characterizing the variability are not random: any mechanism(s) invoked to account for (variable) radiative dissipation has to intrinsically produce this behavior. More observations with high time resolution are clearly required to confirm and quantify this trend. We also (qualitatively) stress that the flux–time lag relationship could be associated with the importance of light crossing effects with respect to the cooling timescales. A more intense flux could be associated with relatively efficient dissipation, e.g. occurring at a shock front, which for a quasi–planar geometry (shock-in-jet model, in which a thin shock wave moves down a cylindrically symmetric jet; Marscher & Gear 1985) could imply that light crossing effects do not dominate, and thus small time lags. A low source state, more similar to the quiescent underlying jet emission, might be associated with an acceleration/injection of particles in a larger region: due to significant light crossing effects, the observed variability would be smoother and result in a larger time lag between different frequencies. Finally, let us consider the information given by the PDS, which statistically characterizes the variability of a source. It should be noted that with the available data it is not easy to determine the typical (or minimum) variability timescale through the PDS, as the light curves become very noisy on small time bins. More powerful techniques, e.g. the structure function, have to be considered. In general, the amplitude of variability decreases as the timescales become shorter. Previous studies, mainly of Seyfert galaxies, show that their PDS can be approximated by power laws with slopes ranging between 1.5 and 2.0 (e.g. McHardy 1999), providing valuable constraints to discriminate among possible models. However, the PDS of BL Lac objects has not yet been well studied in the X-ray band. The best determined PDS has been derived by Tagliaferri et al. (1991) for PKS 2155−304 using EXOSAT observations. An average power-law slope of $`\alpha \approx 2.5`$ was obtained for the PDS in the 1–6 keV band, which however reduced to 1.9 after the removal of the linear trend (as required in that case).
Our analysis shows, over the same temporal frequencies, that the slope of the average PDS from each observation is consistent within $`1\sigma `$ with that of the “de-trended” PDS derived from the EXOSAT data. The fastest variability time scale inferred from the PDS may reach $`\sim `$1000 s, although this is largely uncertain because of the noisy PDS. We note that this time scale is consistent with that estimated from the PDS of the EXOSAT observations (Tagliaferri et al. 1991). Moreover, Paltani (1999) recently determined a similar minimum time scale ($`\sim `$600 s) from the EXOSAT data by using the structure function. Interestingly, the most rapid variability estimated from the “doubling time” in the three observations occurred on a similar timescale ($`\sim `$1 hour), at least in SAX97. Of course, longer and uninterrupted X-ray monitoring will be crucial for constraining the PDS of blazars. ## 7 Conclusions We have considered three long-duration X-ray light curves of PKS 2155−304 with high time resolution, and performed detailed time series analysis on them. The intensities in the soft and medium X-rays are always well correlated, but with significantly different soft lags, suggesting that the variability properties are time dependent and/or that different mechanisms responsible for the variability may be at work. The three light curves presented here, which are sampled over short timescales, do not seem to show any direct correspondence with the overall/long-term variability properties of the source, as suggested by the two closely similar ROSAT light curves with about 5 years separation discussed by Brinkmann & Siebert (1999). The most important conclusions presented in this paper can be summarized as follows: (1) PKS 2155−304 shows several well-defined symmetric X-ray flares with similar rising and declining timescales. The amplitude of variability increases with increasing frequency. Very rapid variability events are not found on timescales of less than 1 hour; (2) the average PDS of SAX97 has significantly more power than those relative to the other two observations, indicating that PKS 2155−304 was more variable in this period. In addition, the rapid timescales and average amplitudes of variability may correlate with the source intensity, in the sense that higher brightness corresponds to shorter timescales and larger amplitudes; (3) the different X-ray bands are highly correlated in all cases, but show different soft time lags, possibly correlated with the source intensity. During SAX96 the source was in a relatively low state and showed the longest soft lag ($`\sim `$4 hours) recorded so far at X-ray wavelengths in BL Lac objects. The SAX97 light curves, which correspond to a high state, do not show a significant time lag, while the ASCA94 light curves present an intermediate time lag; (4) within the simple homogeneous synchrotron (self–Compton) model for PKS 2155−304, the time lags could be interpreted as related to the cooling time scale of the relativistic emitting electrons, although the simplest change in the field intensity cannot quantitatively account for the observed dependence of lag on intensity; (5) the variability of the (X-ray) inter-band soft time lags of PKS 2155−304 is reminiscent of the variations of the lags between the UV and X-ray bands observed during the 1991 and 1994 multiwavelength campaigns (Edelson et al. 1995; Urry et al. 1997). The anonymous referee is thanked for constructive comments. YHZ and AC acknowledge the Italian MURST for financial support.
This work was partly done in the research network “Accretion onto black holes, compact stars and protostars” funded by the European Commission under contract number ERBFMRX-CT98-0195.
no-problem/9907/cond-mat9907269.html
ar5iv
text
# SCALING OF FOLDING PROPERTIES IN SIMPLE MODELS OF PROTEINS Recent advances in understanding of protein folding have been made, to a large extent, through studies of lattice heteropolymers with a small number of beads, $`N`$. In these toy models of proteins, the beads represent amino acids. Lattice models allow for an exact determination of the native state, i.e. of the ground state of the system, and are endowed with a simplified dynamics. An $`N`$ of order 125 is considered to be large in such studies and then special sequences are considered. There are real-life proteins with $`N`$ as small as of order 30, but most of them are built of several hundred amino acids. Apparently there is no protein with $`N`$ exceeding 5000, which is orders of magnitude smaller than the number of base pairs in a DNA. The question we ask in this Letter is: how do the folding properties of proteins scale with $`N`$, and can they lead to a deterioration in the stability and kinetic accessibility of the native state that exceeds the bounds of functionality? A previous numerical analysis of the scaling has been done by Gutin, Abkevich, and Shakhnovich who studied three-dimensional (3$`D`$) lattice sequences with $`N`$ up to 175. For each $`N`$, they considered 5 sequences and selected the one that folded the fastest under its optimal temperature $`T_{min}`$. The corresponding folding time, $`t_{01}`$, was the quantity that was used in studies of scaling. They discovered that $`t_{01}`$ grows as a power law with the system size: $$t_{01}\sim N^\lambda .$$ (1) The exponent $`\lambda `$ was found to be non-universal – it depended on the kind of distribution of the contact energies $`B_{ij}`$ in the Hamiltonian $$H=\underset{i<j}{\sum }B_{ij}\mathrm{\Delta }_{ij},$$ (2) which pointed to the existence of a variety of kinds of energy landscapes. In eq. (2), $`\mathrm{\Delta }_{ij}`$ is either 1 or 0 depending on whether or not the monomers $`i`$ and $`j`$ face each other while not being adjacent along the chain. For random and designed sequences, with the $`B_{ij}`$’s generated from the data base of Ref. , $`\lambda \approx 6`$ and $`4`$, respectively. Finally, for the Go model , in which $`B_{ij}=-1`$ for native contacts and 0 for non-native contacts, $`\lambda \approx 2.7`$. There were also phenomenological arguments which suggested that the folding times scale with $`N`$ exponentially for all temperatures. Thus the nature of the scaling laws for the folding times remains puzzling. Perhaps more importantly, Gutin et al. did not study the scaling of any of the characteristic temperatures that are relevant for folding, nor were the effects of dimensionality explored. In this Letter, we report on studies of the 2 and 3$`D`$ Go model, with $`N`$ up to 56 and 100 respectively. In the 2$`D`$ and $`N`$=16 case, we consider all 37 maximally compact conformations (there are 69 such conformations but only 38 of them are distinct due to the end-to-end symmetry of the Go model; furthermore, one conformation cannot be accessed kinetically). In the remaining cases, we study 15 conformations, except for $`N`$=80 and 100 when only 10 and 5, respectively, are considered. Note that each of these structures is equally designable within the model because each is a nondegenerate ground state of one Go sequence. We demonstrate that in this case, $`t_{01}`$ is indeed given by eq. (1). In 2$`D`$, $`\lambda `$ is $`5.9\pm 0.2`$. Thus the constraint for the heteropolymer to lie in a plane increases $`\lambda `$ compared to the 3$`D`$ Go model.
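As a concrete illustration of eq. (2) with the Go prescription (attractive unit contacts only where the native conformation has them), a minimal sketch of the energy evaluation for a lattice conformation might look as follows; the tiny $`N`$=4 example is purely illustrative.

```python
import numpy as np

def contacts(conf):
    """Contact set of a lattice conformation. conf is an (N, d) integer
    array of bead positions; a contact (Delta_ij = 1) is a pair of beads
    at unit lattice distance that are not adjacent along the chain."""
    n = len(conf)
    return {(i, j) for i in range(n) for j in range(i + 2, n)
            if np.abs(conf[i] - conf[j]).sum() == 1}

def go_energy(conf, native_contacts):
    """Eq. (2) with the Go choice B_ij = -1 on native contacts, 0 else."""
    return -len(contacts(conf) & native_contacts)

# example: N = 4 beads in 2D; the square native shape has one contact
native_conf = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])
native = contacts(native_conf)
print(go_energy(native_conf, native))   # -1, the native-state energy
```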
Our larger statistics also allow us to study median values, not just minimal values, of the folding times. The median values also follow the power law, with an effective $`\lambda `$ of $`6.3\pm 0.2`$ and $`3.1\pm 0.1`$ in 2 and 3$`D`$ respectively. Actually, the effective $`\lambda `$ depends on whether the folding is studied at $`T_{min}`$ or at the folding temperature $`T_f`$. $`T_f`$ is defined operationally as the temperature at which the equilibrium probability of finding the native state is $`\frac{1}{2}`$. We find that in 2$`D`$ and at $`T_f`$, $`\lambda =6.6\pm 0.1`$ (the exponent for the minimal time at $`T_f`$ is $`6.3\pm 0.3`$), which means that moving away from the conditions that are optimal for the folding kinetics generates a somewhat larger exponent in the power law. Notice that good folding takes place for sequences for which $`T_f`$ is comparable to or bigger than $`T_{min}`$; otherwise the folding is poor. An important novel aspect of our research is that we determine the scaling properties of $`T_{min}`$ and of the folding temperature $`T_f`$. We conclude that, both in $`2`$ and $`3D`$, there are indications of a size-related limit to good foldicity. We find that $`T_{min}`$ grows linearly with $`N`$, whereas $`T_f`$ first grows like $`T_{min}`$ but then falls off and possibly saturates asymptotically. This makes the gap between $`T_{min}`$ and $`T_f`$ increase linearly with $`N`$ asymptotically, which would degrade the folding kinetics from excellent to bad. One stumbling block in studies of the scaling of random systems is the necessity to compare statistically averaged quantities and to have some control over the statistical ensemble used. The advantage of the Go model is that there is no randomness in the values of the contact energies, and the ensemble is generated by the set of possible maximally compact conformations that can act as native states – i.e. the variety is due only to the geometry of the native states. The advantage of studying 2$`D`$ models is that, for $`N`$=16, it is feasible to determine the full distribution of $`T_f`$, $`T_{min}`$, and of the folding times among all of the 37 targets, and then to verify that the median folding time probes the vicinity of the peak in the distribution. Thus, on going to larger $`N`$ and taking, as we usually do, 15 targets, it is reasonable to expect that the corresponding median time still probes the region of good foldicity. Median quantities are, in addition, generally more stable statistically whenever one deals with wide distributions. As to the selection of the 15 native maximally compact targets: in 2$`D`$, 10 were obtained by a random construction and 5 by multiple quenching of randomly shaped homopolymers until a maximally compact conformation was obtained. The homopolymers had identical attraction in each possible contact. Both methods generate targets to which there is a path of kinetic access. In 3$`D`$, all targets were obtained by the random construction. Figure 1 illustrates the definitions of the quantities studied here. It shows the dependence of the median folding time, $`t_{fold}`$, on temperature for one target. The target has $`N`$=16 and is shown in the center of the figure. The optimal temperature, $`T_{min}`$, is where $`t_{fold}`$ is shortest. $`T_{min}`$ signifies the onset of glassy kinetics.
This quantity is better suited for studying scaling than the glass transition temperature $`T_g`$, because the latter involves a cutoff time which must necessarily be $`N`$-dependent. $`t_{fold}`$ at $`T_{min}`$ will be denoted by $`\tau _1`$, and $`\tau _2`$ is defined to be $`t_{fold}`$ at $`T_f`$ ($`T_f`$ is larger than $`T_{min}`$ for the target shown in Figure 1). In the statistical ensemble, $`t_1`$ is defined to be the median value of $`\tau _1`$ and $`t_2`$ the median value of $`\tau _2`$. We also study $`t_{01}`$ and $`t_{02}`$, which are the minimal values of $`\tau _1`$ and $`\tau _2`$ among the targets considered. The folding times were obtained through a Monte Carlo procedure that satisfies the detailed balance condition and was motivated by studies presented in Ref. . For each conformation of the polymer, one first determines the number of possible single- and double-monomer (crankshaft) moves – these numbers will be denoted here by $`A_1`$ and $`A_2`$ respectively. The maximum value of $`A_1+A_2`$ among all conformations is $`A_{max}=N+2`$. The probability to attempt a single-monomer move is taken to be $`rA_1/A_{max}`$ ($`r`$=0.2); for a double-monomer move it is $`(1-r)A_2/A_{max}`$. The attempts are rejected or accepted as in the standard Metropolis method. The folding time is defined as the first passage time and is measured by the number of Monte Carlo attempts divided by $`A_{max}`$. For $`N>16`$, it is determined from 50 to 200 trajectories. It should be noted that Ref. does not specify whether the detailed balance condition was enforced there. Figure 2 shows the distribution of $`\tau _1`$ and $`\tau _2`$ for all targets with $`N`$=16. There is a substantial scatter in the values of $`\tau _i`$, so the use of the median $`t_i`$ appears justified. The inset shows the corresponding distributions of $`T_f`$ and $`T_{min}`$. Both are well centered, and the median and mean values almost coincide. Note that there is very little variation in $`T_f`$: all Go targets with $`N`$=16 have almost identical stability properties, with $`T_f`$ varying between 0.489 and 0.565. On going to larger $`N`$’s, the distributions of $`\tau _1`$ remain clustered around $`t_1`$, but the long-time tail appears to extend towards longer and longer times. This results in an overall flattening of the distributions on the scale set by $`t_1`$. For $`N`$=16, the exact distribution of $`\tau _1/t_1`$ ends at about 8, whereas our sampling for $`N`$=20 and 42 yields tails in $`\tau _1/t_1`$ located at around 16 and 10 respectively. Within our statistics, we have not spotted any comparably long-lasting folding processes for other values of $`N`$; however, their very existence for $`N`$=20 and 42 suggests an emergence of such tails in the distributions if these could be sampled fully. Figure 3 summarizes our results on the scaling of folding times. It demonstrates the validity of the power laws both for the median and for the minimal folding times. The effective exponents $`\lambda `$ depend on $`T`$, i.e., on whether the kinetics was monitored at $`T_{min}`$ or $`T_f`$. This dependence is not substantial, but it indicates variations of the free energy landscape with $`T`$ and underscores a more general lack of universality. The generic power laws obtained by Gutin et al. and by us contradict the exponential laws derived in the random energy model. They support the generally accepted view that the folding process is a finite-volume version of a first-order transition.
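A minimal sketch of one attempt of the move scheme described above may be helpful. The enumeration of allowed single-monomer and crankshaft moves is lattice-specific and is assumed here to be supplied by the caller (the lists `single_moves` and `double_moves` of candidate conformations are our hypothetical abstraction), so the sketch only shows the attempt-selection probabilities and the Metropolis test; whether this precisely reproduces the detailed-balance bookkeeping of our full code is not guaranteed.

```python
import math
import random

R = 0.2      # single-move attempt weight r from the text
K_B = 1.0    # Boltzmann constant in model units

def mc_attempt(conf, energy, single_moves, double_moves, n_beads, T):
    """One Monte Carlo attempt: a single-monomer move is attempted with
    probability r*A1/Amax, a crankshaft move with (1-r)*A2/Amax, and an
    attempted move passes the standard Metropolis test."""
    a1, a2 = len(single_moves), len(double_moves)
    a_max = n_beads + 2
    u = random.random()
    if u < R * a1 / a_max:
        trial = random.choice(single_moves)
    elif u < (R * a1 + (1 - R) * a2) / a_max:
        trial = random.choice(double_moves)
    else:
        return conf                      # no move attempted this step
    dE = energy(trial) - energy(conf)
    if dE <= 0 or random.random() < math.exp(-dE / (K_B * T)):
        return trial                     # accepted
    return conf                          # rejected
```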
In this first-order picture, one may visualise the transition stage as an inhomogeneous mixture of the "new" phase in a sea of the "old" phase. The random energy model does not capture such inhomogeneities. Figure 4 shows the $`N`$-dependence of the characteristic temperatures. In both 2 and 3$`D`$, $`T_{min}`$ grows linearly with $`N`$, whereas $`T_f`$ shows a more complex behavior. For $`N>16`$, $`T_f`$ is determined from the Monte Carlo simulations: a) we vary $`T`$ in steps of 0.05 or smaller, b) at each $`T`$ we start from the native state and monitor the probability of occupying it, c) in most cases the results are averaged over 50 different trajectories. The number of Monte Carlo steps for each $`T`$ depends on $`N`$ and ranges from $`10^6`$ to 7$`\times 10^6`$. We checked that doubling the selected cutoff times had a negligible effect on $`T_f`$. The procedure yields results which agree with those obtained by exact enumeration for $`N`$=16. In 2$`D`$, the dependence of $`<T_f>`$ on $`N`$ initially follows that of $`T_{min}`$. However, on crossing $`N_c`$=36, $`T_f`$ falls off and may saturate, as suggested by its declining rate of growth. Thus 2$`D`$ Go conformations appear to have intrinsic limits to their thermodynamic stability. Beyond $`N_c`$, the foldicity becomes gradually poorer and poorer. The same scenario appears to be present in the 3$`D`$ case, except that at small $`N`$ the value of $`T_f`$ is substantially larger than $`T_{min}`$. $`T_f`$ starts showing signs of saturation around $`N`$=80. We were unable to explore values of $`N`$ larger than 100, but a saturation of $`T_f`$ is expected on general grounds due to the existence of the (first-order) phase transition to the folded phase in the thermodynamic limit. $`T_{min}`$, on the other hand, is expected to grow indefinitely due to the growth of the kinetic barriers to be crossed. In 3$`D`$, $`T_f`$ and $`T_{min}`$ appear to cross somewhere around $`N_c`$=300. In conclusion, we have studied the scaling properties not only of the fastest sequences, as in Ref. , but also of those with typical folding rates. The exponents in the resulting power laws for the folding times depend on $`D`$, on the values of the $`B_{ij}`$’s, and on $`T`$. In addition to the deterioration of the folding kinetics with $`N`$, as described by the growth of $`T_{min}`$ and of the folding times, a relative deterioration of the thermodynamic stability also appears to set in. Thus there will be no rapidly folding heteropolymers of a large size. It would be interesting to determine the scaling properties for more realistic models of proteins. This work was supported by KBN (Grant No. 2P03B-025-13). Fruitful discussions with Jayanth R. Banavar and Dorota Cichocka are gratefully acknowledged.
## I Introduction

Owing to the emergence of new concepts involving the wave function, such as quantum cosmology and protective measurement, the demand for an ultimate objective interpretation of the wave function has been growing rapidly in recent years. On the other hand, analyses of motion have never ceased since ancient Greek times, but from Zeno's paradoxes to Einstein's relativity, only classical continuous motion has been discussed. In fact, other modes of motion exist in nature, and the mysterious quantum motion of the microscopic world may be one of these neglected modes. In this paper we present one kind of new motion, which we call quantum discontinuous motion, and after a careful analysis we find that the ultimate reality underlying the wave function is just this new motion undergone by the microscopic particle. From a historical point of view, interpreting the wave function in terms of a new motion of particles inherits the essence of thought accumulated since the foundation of quantum mechanics. The first essential concept is the ontology concept widely adopted in interpreting quantum mechanics, which aims at finding the ultimate reality behind the mysterious wave function. This concept can be traced back to Einstein and Schrödinger, and to followers such as Bohm and Bell; especially in recent years, the protective measurement presented by Aharonov et al. has not only consolidated this concept from inside quantum mechanics, but also provided experiments to confirm it. This advance calls for an objective elucidation of the reality underlying the wave function more strongly than ever, and it may now be time to disclose the whole quantum mystery. The second is the particle concept held by Born's generally accepted probability interpretation and by nearly all subsequent interpretations of quantum mechanics. According to this particle concept, the ultimate reality described by the wave function in quantum mechanics is essentially a particle, which is in only one position in space at any instant. The most prominent character of this concept is that it can not only describe the particle picture in nonrelativistic quantum mechanics, but can also be extended to describe the field picture in relativistic quantum field theory when the new motion is united with special relativity; on the other hand, it provides a precondition for further study of the vexing collapse process as one kind of objective process. The third is the renunciation of classical continuous motion. For example, it is widely argued and accepted by ontologists that the microscopic particle cannot pass through only one slit, uninfluenced by the other slit, in the two-slit experiment, and sometimes they even say that the microscopic particle "passes through both slits" or is "in two positions at the same time". Although this kind of description is too vague to form a strict definition or elucidation of the objective quantum motion undergone by the microscopic particle, which may be extremely different from classical continuous motion, the renunciation of classical continuous motion nevertheless sheds light on the road to interpreting the wave function in terms of a motion other than classical continuous motion, and the vague description "in two positions at the same time" indeed grasps something real and moves a peg along this road. Certainly, there exist other, deeper logical reasons leading to an interpretation of the wave function in terms of a new motion of particles; some of them are as follows:
(1). The linear superposition principle of the wave function in quantum mechanics strongly implies that the real object described by the wave function, if any, is not a field but a particle. For a field, its different branches will generally interact, and the linear superposition principle of the wave function describing the field would be broken; a particle, by contrast, is in only one position in space at any instant, and since in nonrelativistic quantum mechanics the transfer of interaction is instantaneous, the different branches of the wave function describing the quantum discontinuous motion of a particle will not interact. (2). Since the founding of science, people have been studying the classical continuous motion of particles and taking it for granted that it is the only type of motion in Nature. Mathematical analysis shows, however, that there exists a complementary motion mode to classical continuous motion, which can be called quantum discontinuous motion; together they form the complete set of motion modes of a particle, and the latter seems in a sense more natural than the former. In physics, as we will demonstrate, the macroscopic world is governed by classical continuous motion, while the microscopic world is governed by quantum discontinuous motion. (3). In order to unify particle and field, we have been searching for the lost connection between them; the new motion – quantum discontinuous motion – is just this lost bridge. On the one hand, the object undergoing the new motion is a particle; this is the particle-like aspect of the new motion. Owing to this particle-like property, the new motion of the particle provides an essential basis for accounting for the objective collapse process of the wave function during quantum measurement, since a definite measurement result, such as one spot on a photographic plate, is produced in only one local region, not in many different local regions in space. On the other hand, the particle undergoing the new motion moves throughout the whole space with a certain position measure density $`\rho (x,t)`$ during an infinitesimal time interval; this is the field-like aspect of the new motion. Owing to this field-like property, the new motion of the particle can describe the objective evolution of the wave function during normal evolution, which determines the interference pattern of the particle, and it will provide the objective origin of probability in quantum mechanics. (4). When the new motion is united with special relativity, we find that it can be extended to describe the quantum field picture, and quantum mechanics is naturally taken over by quantum field theory. Concretely speaking, in a relativistic quantum field the transfer velocity of interaction is finite; thus for the wave function describing the quantum discontinuous motion of a particle, its different branches will interact through the transfer particle of the interaction. For example, in quantum electrodynamics the different branches of the electron wave function interact through the photon, which is characterized by the interaction term $`\overline{\psi }\gamma ^\mu A_\mu \psi `$.
(5). The new motion of particles provides a broader framework for objectively studying microscopic processes, in which the notorious collapse problem may be naturally solved. For example, there may exist many kinds of concrete motion modes among the new motion, and the new motion may display itself differently in the nonrelativistic and relativistic domains; especially when relativistic gravity is involved, the new motion of particles may naturally provide the origin of randomness in the collapse process and further result in an objective collapse process. (6). It is generally accepted that the main objective interpretations of quantum mechanics are Everett's relative state interpretation and Bohm's hidden variables interpretation, but besides other unsatisfactory features, they interpret neither the wave function nor the Schrödinger equation for the wave function. In the former, the wave function is directly taken as a physical entity with no a priori interpretation; in the latter, the wave function is taken as a real field in configuration space, which is assumed to satisfy the Schrödinger equation of quantum mechanics with no further interpretation – in fact, it does not interpret this objective field in real space either. Now, in this paper we will interpret both the wave function and its Schrödinger equation in terms of the new motion of particles. The plan of this paper is as follows. In Sect. 2 we first give three general presuppositions, which relate physical reality to abstract mathematics and are the basis of the following analysis of motion. In Sect. 3 we give a strict mathematical analysis of motion using point set theory; in particular, the discontinuous motion described by a regular dense point set is analyzed in detail. In Sect. 4 a strict physical definition of the new motion – quantum discontinuous motion – is given; the wave function in quantum mechanics is interpreted as a mathematical complex describing the particle undergoing quantum discontinuous motion, and the consistent axiom system of quantum mechanics is derived; in particular, the Schrödinger equation for the wave function is shown to be the simplest nonrelativistic evolution equation for a particle undergoing quantum discontinuous motion. Furthermore, in Sect. 5 we demonstrate that quantum measurement theory confirms the existence of the quantum discontinuous motion of the microscopic particle described by the wave function. Finally, in Sect. 6 the notorious characteristics of the wave function are consistently interpreted, and two concrete examples are given to explain the weird displays of the wave function in the microscopic world in terms of the quantum discontinuous motion of particles.

## II Three general presuppositions

First, we give three general presuppositions about the relation between physical reality and abstract mathematics. These are the basic conceptions and correspondence rules needed before we discuss the physical motion of particles, and we suppose they are common to all kinds of physical motion of particles. (1). Time and space in which the particle moves are both continuous point sets. (2). The moving particle is represented by one point in time and space. (3). The motion of the particle is represented by a point set in time and space. For simplicity but without loss of generality, in the following we mainly analyze one-dimensional motion, namely point sets in two-dimensional time and space.
## III Mathematical analysis about new motion

### A Point set and its law

As we know, point set theory has been deeply studied since the beginning of this century, and nowadays we can grasp it more easily. According to this theory, the general point set is the dense point set, whose basic property is its measure, while the continuous point set is one particular kind of dense point set, whose basic property is its length. Naturally, for point sets in two-dimensional time and space, the general situation is a dense point set in this two-dimensional space, while the continuous curve is an extremely particular dense point set; it is indeed a wonder that so many points bind together in order to form one continuous curve – in fact, the probability of its formation is zero. Now we analyze the law of a point set in general. The law governing the individual points of a point set, which can be called a point law, is the most familiar kind of law, and it is widely taken as the only rational kind. For example, for a continuous curve in two-dimensional time and space there may exist an expressible analytical formula for the points of this particular point set (people cherish such point laws owing to their infrequent existence, but perhaps Nature detests and rejects them, since the probability of creating them is zero). For a dense point set in two-dimensional time and space, however, a point law does not exist: the dense point set is discontinuous everywhere, so even if the difference in time is very small, or infinitesimal, the difference in space can be very large. An infinitesimal error in time then results in a finite error in space; thus even if such a law existed we could not formulate it, and owing to the finite error of any time measurement we could not confirm it either, let alone predict the evolution of the point set with it. In one word, a point law for a dense point set exists neither in mathematics nor in physics. Because of the nonexistence of a point law for the general dense point set, people cherish only the particular dense point set – the continuous curve with its point law, which corresponds to classical continuous motion – and detest the general dense point set without a point law, let alone regard it as another kind of real motion mode. But when we consider how a law is confirmed, we find more truth about laws for point sets. The point law of a continuous curve must be confirmed by means of the limiting process $`\Delta t\to dt\to 0`$. Among these steps, the process $`\Delta t\to dt`$ is complete for confirming the differential law of the point set, while the process $`dt\to 0`$ is necessary only for the confirmation of the point law; but evidently this latter process cannot be achieved in reality. In fact, only the process $`\Delta t\to dt`$ possesses real physical meaning, through testing the law more and more accurately. Thus a point law exists neither for the general dense point set nor for the continuous curve, and the privileged position of the continuous curve – and of the corresponding classical continuous motion – is also lost. On the other hand, in physics there exist only dynamical quantities defined during an infinitesimal time interval; this fact can be seen from the familiar differential quantities such as $`dt`$ and $`dx`$, whereas point quantities come only from mathematics. People always mix up these two kinds of quantities, and this has been a huge obstacle to the development of physics.
Thus we can only discuss quantities defined during an infinitesimal time interval, as well as their differential laws, if we study the point sets corresponding to real physical motion.

### B Deep analysis about dense point set

Now we study the differential description of point sets in detail. First, in order to find the differential description of the peculiar dense point set – the continuous curve – we may measure the rise or fall $`\Delta x`$ of the space coordinate corresponding to any finite time interval $`\Delta t`$ near each instant $`t_j`$. At any instant $`t_j`$ we then obtain approximate information about the continuous curve through the quantities $`\Delta t`$ and $`\Delta x`$ at that instant, and when the time interval $`\Delta t`$ becomes smaller, we obtain more accurate information about the curve. Theoretically we can get the complete information through this infinite process; that is to say, in theory we can build up the basic descriptive quantities for the continuous curve, which are the differential quantities $`dt`$ and $`dx`$. Given the initial condition, the relation between $`dt`$ and $`dx`$ at all instants then completely describes the continuous curve. Next, we analyze the differential description of the general dense point set. For this kind of point set, we still need to study the concrete situation of the point set corresponding to a finite time interval near every instant. When time runs through the interval $`\Delta t`$ near the instant $`t_j`$, the points in space are no longer limited to a local space interval $`\Delta x`$; they are distributed throughout the whole space instead. So we should study this new nonlocal point set, which is also a dense point set. For simplicity but without loss of generality, we consider a finite space region. First, we divide the whole space into small equal intervals $`\Delta x`$ and then calculate the measure of the local dense point set in the space interval $`\Delta x`$ near each $`x_i`$, which can be written as $`M_{\Delta x,\Delta t}(x_i,t_j)`$. Since the measure sum of all local dense point sets in the time interval $`\Delta t`$ just equals the length of the continuous time interval $`\Delta t`$, we have: $$\underset{i}{\sum }M_{\Delta x,\Delta t}(x_i,t_j)=\Delta t$$ (1) On the other hand, since the measure of the local dense point set in the space interval $`\Delta x`$ and time interval $`\Delta t`$ also goes to zero when the intervals $`\Delta x`$ and $`\Delta t`$ go to zero, it is not by itself a useful quantity, and we have to construct a new quantity on the basis of this measure.
Through further analysis, we find that the new quantity $`\rho _{\Delta x,\Delta t}(x_i,t_j)=M_{\Delta x,\Delta t}(x_i,t_j)/(\Delta x\Delta t)`$, which can be called the average measure density, is a useful one. It generally does not go to zero when $`\Delta x`$ and $`\Delta t`$ go to zero; in particular, if the limit $`\underset{\Delta x\to 0,\Delta t\to 0}{lim}\rho _{\Delta x,\Delta t}(x_i,t_j)`$ exists, it no longer depends on the observation sizes $`\Delta x`$ and $`\Delta t`$, so it can accurately describe the whole dense point set, as well as all local dense point sets near every instant. We therefore let: $$\rho (x,t)=\underset{\Delta x\to 0,\Delta t\to 0}{lim}\rho _{\Delta x,\Delta t}(x,t)$$ (2) and we obtain: $$\int _\mathrm{\Omega }\rho (x,t)dx=1$$ (3) This is just the normalization formula, where $`\rho (x,t)`$ is called the position measure density and $`\mathrm{\Omega }`$ denotes the whole integration region. We call this kind of dense point set a regular dense point set. Now we analyze the new quantity $`\rho (x,t)`$ in detail. First, the position measure density $`\rho (x,t)`$ is not a point quantity; it is defined during an infinitesimal interval. This fact is very important, since it means that if the measure density $`\rho (x,t)`$ exists, it is continuous in both $`t`$ and $`x`$; that is to say, contrary to the position function $`x(t)`$, no discontinuous situation exists for the measure density function $`\rho (x,t)`$. Furthermore, this fact also implies that the continuous function $`\rho (x,t)`$ is the final useful quantity for describing the regular dense point set. Secondly, the essential meaning of the position measure density $`\rho (x,t)`$ lies in the fact that it represents the denseness of the points of the point set in two-dimensional space and time: the points are denser where the position measure density $`\rho (x,t)`$ is larger.

### C The evolution of regular dense point set

Now we discuss the evolution law of the regular dense point set. Just like the continuous position function $`x(t)`$, the continuous position measure density function $`\rho (x,t)`$, although it completely describes the regular dense point set, is a static description of the point set and cannot itself be used for prediction. In order to predict the evolution of the regular dense point set, we must construct some quantity describing its change. Enlightened by fluid mechanics, we can define a flux for the position measure density $`\rho (x,t)`$ through the relation: $$\frac{\partial \rho (x,t)}{\partial t}+\frac{\partial j(x,t)}{\partial x}=0$$ (4) We call this new quantity $`j(x,t)`$ the position measure fluid density; it evidently describes the change of the measure density of the regular dense point set. The general evolution equations of the regular dense point set can thus be written as: $$\frac{\partial \rho (x,t)}{\partial t}+\frac{\partial j(x,t)}{\partial x}=0$$ (5) $$\frac{\partial j(x,t)}{\partial t}+f(\rho ,j,\frac{\partial \rho }{\partial x},\frac{\partial j}{\partial x},\mathrm{})=0$$ (6) where $`f(\rho ,j,\frac{\partial \rho }{\partial x},\frac{\partial j}{\partial x},\mathrm{})`$ is a certain function of $`\rho (x,t)`$, $`j(x,t)`$ and their partial derivatives with respect to $`x`$.
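Before turning to physics, the construction of Sect. III B can be illustrated numerically: sampling one position per instant from an assumed toy density (a standard normal – our illustrative choice, not anything derived here) and binning, the quantity $`M/(\Delta x\Delta t)`$ recovers that density and obeys eqs. (1) and (3). A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# A sampled "regular dense point set": at each of K instants within a time
# window of length dt, the particle occupies one position, drawn here from
# an assumed toy density (a standard normal -- our choice, not the paper's).
K, dt, dx = 100_000, 1e-3, 0.1
positions = rng.normal(size=K)

edges = np.arange(-6.0, 6.0 + dx, dx)
counts, _ = np.histogram(positions, bins=edges)

M = counts / K * dt          # measure of the instants spent in each bin
rho = M / (dx * dt)          # average measure density M / (dx * dt)

print(np.isclose(M.sum(), dt))              # eq. (1): the measures add up to dt
print(np.isclose((rho * dx).sum(), 1.0))    # eq. (3): normalization
centers = 0.5 * (edges[1:] + edges[:-1])
print(np.abs(rho - np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)).max() < 0.05)
```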
## IV Physical analysis about new motion

In this part, we return to the physical world and analyze physical motion.

### A The definition of new motion

On the basis of the presuppositions presented at the beginning of this paper, we give a strict physical definition of the new motion in three-dimensional space; the definition for other abstract spaces or for many-particle situations can easily be extended. (1). The new motion of a particle in space is described by a regular dense point set in four-dimensional space and time. (2). The new motion state of a particle in space is described by the position measure density $`\rho (x,y,z,t)`$ and the position measure fluid density $`\mathbf{j}(x,y,z,t)`$ of the corresponding regular dense point set; in the simplest situation they form an integrated abstract wave function $`\psi (x,y,z,t)=\rho ^{1/2}e^{iS(x,y,z,t)/\mathrm{}}`$, where $`S(x,y,z,t)=m\int _{\mathrm{}}^xj_x(x^{},y,z,t)/\rho (x^{},y,z,t)dx^{}`$, $`m`$ is the mass of the particle, and $`\mathrm{}`$ is a constant. (3). The evolution of the new motion corresponds to the evolution of the regular dense point set, and one of its evolution equations, containing the wave function $`\psi (x,y,z,t)`$, is the Schrödinger equation of quantum mechanics. We call the new motion quantum discontinuous motion. Compared with classical continuous motion, the common feature of these two kinds of motion is that the particle exists in only one position in space at any one instant; their difference lies in the behavior of the particle during an infinitesimal time interval \[t,t+dt\]: in classical continuous motion, the particle is confined to a certain local space interval \[v,v+dv\], while in quantum discontinuous motion, the particle moves throughout the whole space with a certain position measure density $`\rho (x,y,z,t)`$. On the other hand, in order to analyze the physical evolution of the new motion, one point needs to be emphasized. According to the above definition of the new motion, the particle is in only one position at any instant, but this is not a direct physical statement; it is a simple metaphysical statement about the time-divided existence of the reality undergoing the new motion, logically inferred from the experimental fact that the wave function satisfies the linear superposition principle, i.e., that there is no self-interaction of the wave function in quantum mechanics. Strictly speaking, the particle may not be in only one position at one instant (we cannot confirm this in physics either); the only rational physical requirement is that the set of instants at which the particle is in only one position forms a dense point set whose measure equals the length of the time interval. In fact, the following physical states of the new motion are all defined during an infinitesimal time interval in the sense of measure, not at one instant; for example, the momentum eigenstate $`\psi _p(x,t)=e^{ipx-iEt}`$, and even the position eigenstate $`\psi (x,t)=\delta (x-x_0)`$, are still defined during an infinitesimal time interval.
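Given discretized $`\rho `$ and $`\mathbf{j}`$, point (2) of the definition can be checked directly. The sketch below builds $`\psi =\rho ^{1/2}e^{iS/\mathrm{}}`$ from an assumed drifting Gaussian state (our toy choice, with $`j=\rho v_0`$) and verifies that $`\rho `$ and $`j`$ are recovered from $`\psi `$, using the standard current expression $`j=\frac{\mathrm{}}{2mi}(\psi ^{}\partial _x\psi -\psi \partial _x\psi ^{})`$ that is derived later in this section.

```python
import numpy as np

hbar, m = 1.0, 1.0
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

# Assumed toy state: Gaussian measure density drifting at velocity v0,
# so that jx = rho * v0 (an illustrative choice, not derived in the text).
v0 = 0.7
rho = np.exp(-x**2) / np.sqrt(np.pi)
jx = rho * v0

# S(x) = m * cumulative integral of jx/rho (trapezoidal rule)
v = jx / rho
S = m * np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dx)))
psi = np.sqrt(rho) * np.exp(1j * S / hbar)

# Recover rho and jx from psi and compare with the inputs
rho_back = np.abs(psi)**2
dpsi = np.gradient(psi, dx)
jx_back = ((hbar / (2j * m)) * (np.conj(psi) * dpsi - psi * np.conj(dpsi))).real

print(np.allclose(rho_back, rho))
print(np.allclose(jx_back, jx, atol=1e-3))
```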
### B The evolution of new motion

In the following, we give the main clues for finding the possible evolution equations of the new motion; at the same time, the axiom system of quantum mechanics is deduced through a logical analysis, and the Schrödinger equation proves to be just the simplest nonrelativistic evolution equation of the new motion. For simplicity but without loss of generality, we mainly analyze one-dimensional motion, but the results can easily be extended to the three-dimensional situation.

1. Two kinds of description bases

As we know, contrary to classical continuous motion, the state of quantum discontinuous motion is generally nonlocal, and the particle undergoing this new motion moves throughout the whole space with a certain position measure density $`\rho (x,t)`$ during an infinitesimal time interval. Thus there exist, in essence, two kinds of description bases for quantum discontinuous motion: one is the position of the particle, which is the local description basis; the other is the corresponding nonlocal description basis, which we will demonstrate to be the momentum of the particle. In order to find the nonlocal description basis, we need only analyze the simplest situation, the free evolution of the new motion, in which $`\rho (x,t)`$ is constant during the evolution. For this situation we easily find that $`j(x,t)`$ is also constant, and we have: $$\frac{\partial \rho (x,t)}{\partial t}=0$$ (7) $$\frac{\partial j(x,t)}{\partial x}=0$$ (8) $$\frac{\partial j(x,t)}{\partial t}=0$$ (9) $$\frac{\partial \rho (x,t)}{\partial x}=0$$ (10) The mathematical and physical meaning of these four equations needs to be analyzed. Firstly, these equations also hold in classical fluid mechanics, but for describing the evolution of the new motion of a particle their meaning is very different. According to these four equations, the position measure density $`\rho (x,t)`$ and position measure fluid density $`j(x,t)`$ are constant, irrespective of $`x`$ and $`t`$. Generally we can let $`\rho (x,t)`$=1; then we have $`j(x,t)`$=$`\rho (x,t)v_0`$=$`v_0`$=$`p_0/m`$, where $`m`$ is the mass of the particle. These two results mean that for a free particle undergoing the new motion with one constant momentum, its position spreads throughout the whole space with the same position measure density. Thus we demonstrate that the momentum of the particle is just the nonlocal description basis, and that the momentum state of the new motion is completely nonlocal.

2. One-to-one relation

Now we have shown that there are two kinds of description bases for quantum discontinuous motion; but it is evident that there exists only one definite motion state at any instant, so the state descriptions using these two kinds of bases should be equivalent. This means that there exists a one-to-one relation between the two descriptions, and that this relation is irrelevant to the concrete motion state. In the following we discuss how to find this one-to-one relation, and our analysis will also show that this relation essentially determines the distinctive evolution of quantum discontinuous motion, as well as the axiomatic framework of Hilbert space for quantum mechanics. Like the measure density $`\rho (x,t)`$ and measure fluid density $`j(x,t)`$ for the local position $`x`$ of the particle, we can also define the measure density $`f(p,t)`$ and measure fluid density $`J(p,t)`$ for the nonlocal momentum $`p`$ of the particle, and according to the above analysis there should exist a one-to-one relation between the local position description $`(\rho ,j)`$ and the nonlocal momentum description $`(f,J)`$. First, it is evident that there exists no direct one-to-one relation between the measure density functions $`\rho (x,t)`$ and $`f(p,t)`$: even for the simplest situation above, we have $`\rho (x,t)=1`$ and $`f(p,t)=\delta ^2(p-p_0)`$ (this result can be obtained directly by considering the general normalization relation $`\int _\mathrm{\Omega }\rho (x,t)dx=\int _\mathrm{\Omega }f(p,t)dp`$), and there is no one-to-one relation between them.
Then, in order to obtain the one-to-one relation, we have to construct new properties on the basis of the above position description $`(\rho ,j)`$ and momentum description $`(f,J)`$. This needs a little more mathematical work; here we give only the main clues, and the detailed mathematical demonstrations are omitted. First, we disregard the time variable $`t`$ and let $`t=0`$. For the above free evolution state with one momentum, we have $`(\rho ,j)=(1,p_0/m)`$ and $`(f,J)=(\delta ^2(p-p_0),0)`$. Thus we need to construct a new position state function $`\psi (x,0)`$ using $`1`$ and $`p_0/m`$, and a new momentum state function $`\phi (p,0)`$ using $`\delta ^2(p-p_0)`$ and $`0`$, and to find the one-to-one relation between these two state functions. This means there exists a one-to-one transformation between the state functions $`\psi (x,0)`$ and $`\phi (p,0)`$, which we generally write as: $$\psi (x,0)=\int _{\mathrm{}}^{+\mathrm{}}\phi (p,0)T(p,x)dp$$ (11) where $`T(p,x)`$ is called the transformation function and is generally continuous and finite for finite $`p`$ and $`x`$. Since the function $`\phi (p,0)`$ contains some form of the basic element $`\delta ^2(p-p_0)`$, we may normally expand it as $`\phi (p,0)=\sum _{i=1}^{\mathrm{}}a_i\delta ^i(p-p_0)`$, while the function $`\psi (x,0)`$ contains the momentum $`p_0`$ and is generally continuous and finite for finite $`x`$. It is then evident that the function $`\phi (p,0)`$ can only contain the term $`\delta (p-p_0)`$, because the other terms would lead to divergences. On the other hand, since the result $`\phi (p,0)=\delta (p-p_0)`$ implies the relations $`f(p,0)=\phi ^{}(p,0)\phi (p,0)`$ and $`\rho (x,0)=\psi ^{}(x,0)\psi (x,0)`$, we may let $`\psi (x,0)=e^{iG(p_0,x)}`$, so that $`T(p,x)=e^{iG(p,x)}`$. Then, considering the symmetry between the properties position and momentum (this symmetry essentially stems from the equivalence between the two kinds of descriptions; its direct implication is that for $`\rho (x,0)=\delta ^2(x-x_0)`$ we also have $`f(p,0)=1`$), we have the expansion $`G(p,x)=\sum _{i=1}^{\mathrm{}}b_i(px)^i`$. But the symmetry between position and momentum further results in the symmetry between the transformation $`T(p,x)`$ and its inverse transformation $`T^{-1}(p,x)`$, where $`T^{-1}(p,x)`$ satisfies the relation $`\phi (p,0)=\int _{\mathrm{}}^{+\mathrm{}}\psi (x,0)T^{-1}(p,x)dx`$. Thus we can only have the term $`px`$ in the function $`G(p,x)`$; in this situation the symmetry relation between the two transformations is $`T^{-1}(p,x)=T^{}(p,x)=e^{-ipx}`$, and we let $`b_1=1/\mathrm{}`$, where $`\mathrm{}`$ is a constant quantity; for simplicity we let $`\mathrm{}=1`$ in the following discussions. Thus, owing mainly to the essential symmetry involved in the new motion, we work out the basic one-to-one relation: $`\psi (x,0)=\int _{\mathrm{}}^{+\mathrm{}}\phi (p,0)e^{ipx}dp`$, where $`\psi (x,0)=e^{ip_0x}`$ and $`\phi (p,0)=\delta (p-p_0)`$.
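The one-to-one relation can be checked numerically by direct quadrature for a normalizable state. The sketch below uses an assumed Gaussian state and places the $`2\pi `$ in the inverse transform, one consistent reading of the $`\mathrm{}=1`$ convention above (the text leaves the normalization implicit, so the factor placement is our choice).

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 2001); dx = x[1] - x[0]
p = np.linspace(-10.0, 10.0, 1001); dp = p[1] - p[0]

psi = np.pi**-0.25 * np.exp(-x**2 / 2)            # normalized Gaussian state

# Forward and inverse transforms by direct quadrature; the 2*pi sits in the
# inverse, so that psi -> phi -> psi returns the original state.
phi = np.exp(-1j * np.outer(p, x)) @ psi * dx
psi_back = np.exp(1j * np.outer(x, p)) @ phi * dp / (2 * np.pi)

print(np.allclose(psi_back.real, psi, atol=1e-6))            # round trip
print(np.isclose((np.abs(phi)**2).sum() * dp / (2 * np.pi), 1.0))  # measure kept
```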
In fact, there may exist other, more complex forms for the state functions $`\psi (x,0)`$ and $`\phi (p,0)`$; for example, they may be not the above simple number functions but multidimensional vector functions such as $`\psi (x,0)=(\psi _1(x,0),\psi _2(x,0),\mathrm{},\psi _n(x,0))`$ and $`\phi (p,0)=(\phi _1(p,0),\phi _2(p,0),\mathrm{},\phi _n(p,0))`$. The above one-to-one relation still holds for every component function, however, and these vector functions still satisfy the above modulus-squared relations, namely $`\rho (x,0)=\sum _{i=1}^n\psi _i^{}(x,0)\psi _i(x,0)`$ and $`f(p,0)=\sum _{i=1}^n\phi _i^{}(p,0)\phi _i(p,0)`$. These complex forms correspond to more complex theories, say, involving more inner properties of the particle such as charge and spin. Now, since the one-to-one relation between the position state description and the momentum state description is irrelevant to the concrete motion state of the new motion, the above one-to-one relation for the free motion state with one momentum should apply to any motion state of the new motion, and the states which satisfy the one-to-one relation will be the possible motion states of the new motion. Furthermore, it is evident that this one-to-one relation directly results in the famous uncertainty relation $`\Delta x\Delta p\ge \mathrm{}/2`$, and, as we will demonstrate, it essentially results in the consistent axiom system of quantum mechanics.

3. Axiom I of quantum mechanics

The direct linear superposition of the above momentum states $`\psi _p(x,0)=e^{ipx}`$, which can be called momentum eigenstates, evidently satisfies the one-to-one relation and should be a possible motion state of the new motion. At the same time, Fourier analysis further shows that any normal motion state can be expanded as a linear superposition of the momentum eigenstates; all normal motion states of the new motion therefore form an abstract complex linear space, which is just the Hilbert space. Thus we have shown that any motion state of a system undergoing the new motion corresponds to a state vector in Hilbert space – namely Axiom I of quantum mechanics. For example, the free new-motion state with two momenta corresponds to a state vector in Hilbert space which is a certain linear superposition of two free new-motion states, each with one momentum.

4. Axiom II of quantum mechanics

From a mathematical point of view, the above one-to-one relation also physically determines the linear operator structure in Hilbert space, which gives a more abstract but deeper description of the new motion. Thus Axiom II of quantum mechanics is further included, namely: every observable of the system corresponds to a self-adjoint operator in the Hilbert state space. The self-adjointness requirement guarantees the real value of the corresponding physical observable, which may be the position observable $`x`$, the momentum observable $`p`$, the energy observable $`E`$, etc., and in general the relations between these observables can be extended to the corresponding operators. Furthermore, through defining the linear operators $`\widehat{x}`$ and $`\widehat{p}`$, the above one-to-one relation describing the new motion results in the famous commutation relation $`[\widehat{x},\widehat{p}]=i\mathrm{}`$, which is taken as the basis for quantizing anything.
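The commutation relation can be checked on a grid. No finite-dimensional matrix pair can satisfy $`[\widehat{x},\widehat{p}]=i\mathrm{}`$ exactly (the trace of a commutator vanishes), but acting on a smooth state that is negligible at the grid boundary, a central-difference $`\widehat{p}=-i\mathrm{}d/dx`$ reproduces it pointwise. A minimal sketch, with our toy Gaussian test state:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-10.0, 10.0, 2001); dx = x[1] - x[0]
psi = np.exp(-x**2)                     # smooth test state, tiny at the edges

def p_op(f):                            # p = -i*hbar d/dx, central differences
    return -1j * hbar * np.gradient(f, dx)

comm = x * p_op(psi) - p_op(x * psi)    # (x p - p x) acting on psi
print(np.allclose(comm[5:-5], 1j * hbar * psi[5:-5], atol=1e-3))  # True
```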
5. Axiom III of quantum mechanics

Now we demonstrate that Axiom III of quantum mechanics, as well as the irreducibility of probability in quantum mechanics, results from the objective nature of quantum discontinuous motion. First, a proper measurement in physics should reflect the properties of the measured system as faithfully as possible, and it is rational to presuppose the existence of such a proper measurement for the new motion, as for other realities. Secondly, if we assume that each measurement on a system undergoing the new motion brings about only one definite result, then the objective measure density $`\rho `$ of the new motion naturally results in the probability distribution of the measurement results, which is $`P=|\psi |^2`$ for a discrete observable and $`P=\int _{E_1}^{E_2}|\psi |^2dE`$ for a continuous observable, where $`[E_1,E_2]`$ is the result interval. This is just Axiom III of quantum mechanics. Furthermore, since in essence no point description of quantum discontinuous motion exists, the probability of a single definite measurement result is irreducible; it is essentially determined by the discontinuity and value-dispersion nature of quantum discontinuous motion.

6. Axiom IV of quantum mechanics

Now we work out the dynamical evolution law of the new motion, namely Axiom IV of quantum mechanics. First, in order to find how the time variable $`t`$ enters the functions $`\psi (x,t)`$ and $`\phi (p,t)`$, we may consider the linear superposition of two momentum eigenstates, namely $`\psi (x,t)=\frac{1}{\sqrt{2}}[e^{ip_1x-ic_1(t)}+e^{ip_2x-ic_2(t)}]`$. The position measure density is then $`\rho (x,t)=[1+\mathrm{cos}(\Delta c(t)-\Delta p\,x)]/2`$, where $`\Delta c(t)=c_2(t)-c_1(t)`$ and $`\Delta p=p_2-p_1`$. Now we let $`\Delta p\to 0`$; then we have $`\rho (x,t)\to 1`$ and $`\Delta c(t)\to 0`$. In particular, using the conservation relation we can get $`dc(t)/dt=p\,dp/m`$, namely $`dc(t)=d(p^2/2m)\,t`$, or $`dc(t)=dE\,t`$, where $`E=p^2/2m`$ is the energy of the particle in the nonrelativistic domain. Thus for any momentum eigenstate we have the time-included formula $`\psi (x,t)=e^{ipx-iEt}`$.
Now, for the free motion state with one momentum, namely the momentum eigenstate $`\psi (x,t)=e^{ipx-iEt}`$, using the nonrelativistic relation $`E=\frac{p^2}{2m}`$ and reinstating the constant quantity $`\mathrm{}`$, we can easily find its nonrelativistic evolution law, which is $$i\mathrm{}\frac{\partial \psi (x,t)}{\partial t}=-\frac{\mathrm{}^2}{2m}\frac{\partial ^2\psi (x,t)}{\partial x^2}$$ (12) Then, owing to the linearity of this equation, the evolution equation also applies to linear superpositions of the momentum eigenstates, namely to all possible free motion states; in other words, it is the free evolution law of the new motion. Secondly, we consider the evolution law of the new motion under an outside potential. When the potential $`U(x,t)`$ is a constant $`U`$, the evolution equation is $$i\mathrm{}\frac{\partial \psi (x,t)}{\partial t}=-\frac{\mathrm{}^2}{2m}\frac{\partial ^2\psi (x,t)}{\partial x^2}+U\psi (x,t)$$ (13) When the potential $`U(x,t)`$ depends on $`x`$ and $`t`$, the above equation essentially determines the evolution equation as $$i\mathrm{}\frac{\partial \psi (x,t)}{\partial t}=-\frac{\mathrm{}^2}{2m}\frac{\partial ^2\psi (x,t)}{\partial x^2}+U(x,t)\psi (x,t)$$ (14) and in the three-dimensional situation the equation is $$i\mathrm{}\frac{\partial \psi (\mathbf{x},t)}{\partial t}=-\frac{\mathrm{}^2}{2m}\nabla ^2\psi (\mathbf{x},t)+U(\mathbf{x},t)\psi (\mathbf{x},t)$$ (15) This is just the Schrödinger equation of quantum mechanics; thus we have deduced Axiom IV of quantum mechanics. On the other hand, according to Axiom II of quantum mechanics, every observable of the system corresponds to a self-adjoint operator in the Hilbert state space, and in general the relations between these observables can be extended to the corresponding operators. The above evolution equation of the new motion can thus be reduced to the familiar nonrelativistic energy equality $`E=p^2/2m+U`$, and the equation can be rewritten as $`\widehat{E}\psi (\mathbf{x},t)=[\widehat{p}^2/2m+U(\widehat{\mathbf{x}},t)]\psi (\mathbf{x},t)`$, where $`\widehat{E}=i\mathrm{}\partial /\partial t`$ and $`\widehat{p}=-i\mathrm{}\nabla `$. Finally, the above analysis also shows that the state function $`\psi (x,t)`$ provides a complete description of quantum discontinuous motion. The new motion is completely described by the measure density $`\rho (x,t)`$ and measure fluid density $`j(x,t)`$, and according to the above evolution equation the state function $`\psi (x,t)`$ can be constructed from these two functions, namely $`\psi (x,t)=\rho ^{1/2}e^{iS(x,t)/\mathrm{}}`$, where $`S(x,t)=m\int _{\mathrm{}}^xj(x^{},t)/\rho (x^{},t)dx^{}`$. (Note: in three-dimensional space the formula for $`S(x,y,z,t)`$ is $`S(x,y,z,t)=m\int _{\mathrm{}}^xj_x(x^{},y,z,t)/\rho (x^{},y,z,t)dx^{}=m\int _{\mathrm{}}^yj_y(x,y^{},z,t)/\rho (x,y^{},z,t)dy^{}=m\int _{\mathrm{}}^zj_z(x,y,z^{},t)/\rho (x,y,z^{},t)dz^{}`$, since in general the relation $`\nabla \times [\mathbf{j}(x,y,z,t)/\rho (x,y,z,t)]=0`$ holds when $`\rho (x,y,z,t)\ne 0`$.) Conversely, these two functions can be expressed through the state function, namely $`\rho (x,t)=|\psi (x,t)|^2`$ and $`j(x,t)=\frac{\mathrm{}}{2mi}[\psi ^{}\partial \psi /\partial x-\psi \partial \psi ^{}/\partial x]`$. Thus there exists a one-to-one relation between $`(\rho (x,t),j(x,t))`$ and $`\psi (x,t)`$, and the state function $`\psi (x,t)`$ also provides a complete description of quantum discontinuous motion.
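As a consistency check connecting the free form of eq. (15) with the continuity equation (5), the sketch below evolves an assumed Gaussian packet exactly in momentum space and verifies $`\partial \rho /\partial t+\partial j/\partial x\approx 0`$ on the grid; the packet parameters are arbitrary toy choices.

```python
import numpy as np

hbar, m = 1.0, 1.0
N = 2048
x = np.linspace(-40.0, 40.0, N, endpoint=False); dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Assumed toy initial state: a Gaussian packet with mean momentum 2 (hbar = 1)
psi0 = np.pi**-0.25 * np.exp(-x**2 / 2 + 2j * x)

def evolve(psi, t):
    """Exact free evolution of eq. (15) with U = 0, done in momentum space."""
    return np.fft.ifft(np.exp(-1j * hbar * k**2 * t / (2 * m)) * np.fft.fft(psi))

def current(psi):
    """j = (hbar/2mi)(psi* dpsi/dx - psi dpsi*/dx), with a spectral derivative."""
    dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))
    return ((hbar / (2j * m)) * (np.conj(psi) * dpsi - psi * np.conj(dpsi))).real

dt = 1e-4
drho_dt = (np.abs(evolve(psi0, dt))**2 - np.abs(psi0)**2) / dt   # centered at dt/2
j_mid = current(evolve(psi0, dt / 2))
dj_dx = np.fft.ifft(1j * k * np.fft.fft(j_mid)).real

print(np.allclose(drho_dt + dj_dx, 0.0, atol=1e-6))              # eq. (5) holds
```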
On the other hand, we can see that the absolute phase of the wave function $`\psi (x,t)`$, which may depend on time in the nonrelativistic domain, is useless for describing the new motion: according to the above analysis it disappears in the measure density $`\rho (x,t)`$ and measure fluid density $`j(x,t)`$, which completely describe the quantum discontinuous motion. From the point of view of the new motion it is thus natural that the absolute phase of the wave function possesses no physical meaning.

7. Axiom V of quantum mechanics

As we know, quantum mechanics is self-consistent when defined by the above four axioms, since the elucidation of the corresponding relation between physical reality and mathematical language is just a conditional statement: if the measurement on the quantum system brings about only one definite result, then the probability distribution of the measurement results satisfies the formula, say for a discrete observable, $`P=|\psi |^2`$. But if only the above four axioms are included, quantum mechanics is evidently incomplete, since it does not describe the actual measurement result. Its founders therefore resorted to the projection postulate, or Axiom V, in order to account for the measurement process. This postulate, however, is still a direct description of the measurement result; it says nothing about how the measurement can and does bring about one definite result, and so it needs to be further explained in physics. The new motion of particles provides just such a broad framework for objectively studying the microscopic world that it may solve the collapse problem: there may exist many kinds of concrete motion modes among the new motion, and the new motion may display itself differently in the nonrelativistic and relativistic domains; especially when gravity is involved, the new motion of particles may naturally result in an objective collapse process. Owing to the formidable difficulty of this problem, we will tackle it in another paper.
8. The value of $`\mathrm{}`$ in quantum mechanics

Up to now one problem is still left: how to determine the value of $`\mathrm{}`$ in quantum mechanics, or in our world. According to the above analysis we only know that the constant $`\mathrm{}`$ possesses a finite nonzero value. Certainly, just like the other physical constants such as $`c`$ and $`G`$, its value can be determined by experience, but its existence needs to be explained, and the above analysis of the new motion provides the answer. The existence of $`\mathrm{}`$ essentially results from the irreducibility of the nonlocal momentum definition, i.e., from the nonexistence of the velocity or local momentum of the particle undergoing the new motion. This irreducibility means that momentum is no longer related to space-time through a velocity, as it is for classical continuous motion; it is an essential property of the new motion, just as position is. In particular, momentum provides an equivalent nonlocal description of the new motion, while position provides a local description. This equivalence further results in the one-to-one relation between the two kinds of descriptions, and it is just this one-to-one relation that requires the existence of a certain constant $`\mathrm{}`$ to cancel out the units of the physical quantities $`px`$ and $`Et`$ in the relation. At the same time, the existence of $`\mathrm{}`$ also indicates some kind of balance between the properties (concretely speaking, their value-distribution dispersions) constrained by the one-to-one relation (there is no such constraint for classical continuous motion, for which $`\mathrm{}=0`$); or we can say that the existence of $`\mathrm{}`$ essentially indicates some kind of balance between the nonlocality and locality of the new motion in space-time. Certainly, the new motion provides a broader framework for the motion of particles in the microscopic world, within which we can understand objectively and consistently the weird displays of microscopic objects, which cannot be grasped consistently in the old framework of classical continuous motion; but it cannot by itself give the concrete value of $`\mathrm{}`$ in our world, just as special relativity cannot determine the value of the light velocity $`c`$, and general relativity cannot determine the value of the gravitational constant $`G`$. Surely there may exist some deeper reasons for the particular value of $`\mathrm{}`$ in our universe, but the new motion cannot determine this value alone; the solution may have to resort to other subtle realities of this world – for example, gravity ($`G`$), space-time ($`c`$), or even the existence of mankind.

## V The confirmation of the new motion

In the following, we give two main methods to confirm the new motion underlying the wave function in the microscopic world.

### A Protective measurement

The first method is protective measurement. It aims to measure the new motion state of a single particle by repeatedly measuring it without destroying its state; in a real experiment a small ensemble of similar particles may be required.
With this kind of measurement, the new motion state, or wave function, of a particle does not change appreciably while the measurement is being made. The clever trick is to let the system undergo a suitable interaction so that it is in a non-degenerate eigenstate of the whole Hamiltonian; the measurement is then made adiabatically, so that the new motion state described by the wave function neither changes appreciably nor becomes entangled with the measurement device. This suitable interaction is called the protection. In the following, we demonstrate how to use protective measurement to confirm the new motion of a single particle in the microscopic world, described by the wave function and the Schrödinger equation. For simplicity but without loss of generality, we consider a particle in a discrete nondegenerate energy eigenstate $`\psi (x)`$. The interaction Hamiltonian for measuring the value of an observable $`A_n`$ in this state is $`H=g(t)PA_n`$, which couples the system to a measuring device with coordinate and momentum denoted respectively by $`Q`$ and $`P`$, where $`A_n`$ is the normalized projection operator on a small region $`V_n`$ having volume $`v_n`$, namely: $$A_n=\{\begin{array}{cc}\frac{1}{v_n},\hfill & \text{if }x\in V_n\text{,}\hfill \\ 0,\hfill & \text{if }x\notin V_n\text{.}\hfill \end{array}$$ The time-dependent coupling $`g(t)`$ is normalized to $`\int _0^Tg(t)dt=1`$; we let $`g(t)=1/T`$ for most of the time $`T`$ and assume that $`g(t)`$ goes to zero gradually before and after the period $`T`$, so as to obtain an adiabatic process when $`T\to \mathrm{}`$. The initial state of the pointer is taken to be a Gaussian centered around zero, and the canonical conjugate $`P`$ is bounded and is a constant of motion not only of the interaction Hamiltonian but of the whole Hamiltonian. With this kind of protective measurement, the measurement of $`A_n`$ yields the result: $$\langle A_n\rangle =\frac{1}{v_n}\int _{V_n}|\psi |^2dv=|\psi _n|^2$$ The result $`|\psi _n|^2`$ is just the average of the measure density $`\rho (\mathbf{x})=|\psi (\mathbf{x})|^2`$ over the small region $`V_n`$, so by letting $`v_n\to 0`$ and performing measurements in sufficiently many regions $`V_n`$, we can find the measure density $`\rho (\mathbf{x})`$ of the new motion state of the measured particle. We can then measure the measure fluid density $`\mathbf{j}(\mathbf{x})`$ of the new motion state, namely the value of an observable $`B_n`$ in this state, where $`B_n=\frac{\mathrm{}}{2mi}(A_n\nabla +\nabla A_n)`$. The measurement result is $`\langle B_n\rangle `$, and it is just the average value of the measure fluid density $`\mathbf{j}(\mathbf{x})=\frac{\mathrm{}}{2mi}(\psi ^{}\nabla \psi -\psi \nabla \psi ^{})`$ over the region $`V_n`$. So by letting $`v_n\to 0`$ and performing measurements in sufficiently many regions $`V_n`$, we can also find the measure fluid density $`\mathbf{j}(\mathbf{x})`$ of the new motion state of the measured particle. Thus we have demonstrated that the new motion of a single particle, described by the measure density $`\rho (\mathbf{x})`$ and measure fluid density $`\mathbf{j}(\mathbf{x})`$, or by the abstract wave function $`\psi (\mathbf{x})`$, can be confirmed through the above protective measurement.
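What the adiabatic readout returns can be sketched on a grid: $`\langle A_n\rangle `$ is the average of $`|\psi |^2`$ over $`V_n`$, and $`\langle B_n\rangle `$ the average of $`\mathbf{j}`$, which vanishes for the real eigenstate chosen here. The particle-in-a-box ground state below is our toy example, not a case treated in the text, and only the readout formulas are evaluated, not the measurement dynamics.

```python
import numpy as np

hbar, m, L = 1.0, 1.0, 1.0
x = np.linspace(0.0, L, 10001); dx = x[1] - x[0]
psi = np.sqrt(2 / L) * np.sin(np.pi * x / L)   # box ground state (toy eigenstate)

def region_average(f, a, b):
    """Average of a grid function over the region V_n = [a, b)."""
    mask = (x >= a) & (x < b)
    return f[mask].mean()

rho = np.abs(psi)**2
# <A_n> over a small V_n around x = 0.3 approaches rho(0.3) as v_n -> 0:
print(region_average(rho, 0.295, 0.305))       # ~ 1.309
print(2 * np.sin(0.3 * np.pi)**2)              # rho(0.3) exactly

# <B_n> reads out the average of j over V_n; for this real eigenstate j = 0:
dpsi = np.gradient(psi, dx)
jx = ((hbar / (2j * m)) * (np.conj(psi) * dpsi - psi * np.conj(dpsi))).real
print(abs(region_average(jx, 0.295, 0.305)) < 1e-12)   # True
```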
### B Standard impulse measurement

Certainly, the standard impulse measurement of quantum mechanics can also confirm the new motion of a single particle described by the wave function in the microscopic world. The trick here is that we first prepare an ensemble of a large number of particles in the same new motion state, and then, using this kind of measurement, measure every particle in the ensemble only once, so that we need not repeatedly measure the same particle. Even though the new motion state, or wave function, of a single particle is destroyed after each measurement, so that a subsequent measurement would no longer reveal the real information about the original new motion state, according to quantum mechanics all the individual measurement results for the ensemble reveal the state of the ensemble, and hence the new motion state of a single particle in the ensemble, since every particle in the ensemble is in the same new motion state.

### C Understanding the wave function in terms of new motion

As we know, many ontological interpretations of quantum mechanics still consider the wave function as one kind of objective field; as Bell said, "No one can understand this theory until he is willing to think of $`\psi `$ as a real objective field rather than a 'probability amplitude'". But the difficulties involved in this kind of ontological interpretation have been pointed out since the beginning of quantum mechanics: for example, the existence of a complex wave function, the multidimensionality of the wave function, and the representation problem cannot be solved within the framework of an objective field. These difficulties have greatly prevented physicists from accepting an objective view of the microscopic object described by the wave function. Now, with the new quantum discontinuous motion, we can easily overcome these difficulties within the framework of an objective particle. (1). As to the problem of the complex wave function: the wave function is not an objective field at all; it is an indirect, abstract mathematical symbol used to describe the objective quantum discontinuous motion of particles in the microscopic world. Concretely speaking, it is an abstract complex of the measure density $`\rho (x,t)`$ and measure fluid density $`j(x,t)`$, which directly describe the state of the new motion in physics, and its appearance essentially results from the symmetry involved in the new motion and the resulting linear evolution principle. Thus whether the wave function is complex or not poses no problem for the new motion it describes.
### C Understanding the wave function in terms of new motion
As we know, many ontological interpretations of quantum mechanics still regard the wave function as a kind of objective field. As Bell said, "No one can understand this theory until he is willing to think of $`\psi `$ as a real objective field rather than a 'probability amplitude'". But the difficulties involved in this kind of ontological interpretation have been pointed out since the beginning of quantum mechanics; for example, the complex-valuedness of the wave function, its multidimensionality, the representation problem, etc., cannot be resolved in the framework of an objective field, and these difficulties have greatly prevented physicists from accepting an objective view of the microscopic object described by the wave function. According to the new quantum discontinuous motion, we can easily overcome these difficulties in the framework of an objective particle:
(1) As to the problem of the complex wave function: the wave function is not a kind of objective field at all; it is an indirect, abstract mathematical symbol used to describe the objective quantum discontinuous motion of particles in the microscopic world. Concretely speaking, it is an abstract complex of the measure density $`\rho (x,t)`$ and the measure current density $`j(x,t)`$, which directly describe the state of the new motion in physics, and its appearance essentially results from the symmetry involved in the new motion and the resulting linear evolution principle. Thus whether the wave function is complex or not poses no problem for the new motion it describes.
(2) The multidimensionality of the wave function is very natural from the point of view of the new motion of particles. The measure density $`\rho (𝐱,t)`$, or wave function $`\psi (𝐱,t)`$, of a single particle depends on three space variables. For two particles, however, we generally cannot define separate measure densities $`\rho _1(𝐱,t)`$ and $`\rho _2(𝐱,t)`$, or wave functions $`\psi _1(𝐱,t)`$ and $`\psi _2(𝐱,t)`$; rather, according to point-set theory we should define their joint measure density $`\rho (𝐱_1,𝐱_2,t)`$, the measure density for particle 1 being at position $`𝐱_1`$ and particle 2 at position $`𝐱_2`$. This further implies that the one-to-one relation involves a two-fold Fourier transformation and a wave function $`\psi (𝐱_1,𝐱_2,t)`$ depending on six space variables, namely $$\psi (𝐱_1,𝐱_2,t)=\int _{-\infty }^{+\infty }\int _{-\infty }^{+\infty }\phi (𝐩_1,𝐩_2,t)e^{i(𝐱_1𝐩_1+𝐱_2𝐩_2)}\,d𝐩_1\,d𝐩_2$$ (16) and the Schrödinger equation for this two-particle situation is $$i\hbar \frac{\partial \psi (𝐱_1,𝐱_2,t)}{\partial t}=-\frac{\hbar ^2}{2m}[\nabla _1^2+\nabla _2^2]\psi (𝐱_1,𝐱_2,t)+U(𝐱_1,𝐱_2,t)\psi (𝐱_1,𝐱_2,t)$$ (17) Certainly, when these two particles are independent, the joint measure density $`\rho (𝐱_1,𝐱_2,t)`$ reduces to $`\rho _1(𝐱_1,t)\rho _2(𝐱_2,t)`$, and the joint wave function $`\psi (𝐱_1,𝐱_2,t)`$ likewise reduces to $`\psi _1(𝐱_1,t)\psi _2(𝐱_2,t)`$. Certainly, as Bohr taught us, whoever is not puzzled by quantum mechanics has surely not understood the theory yet. Indeed, the quantum discontinuous motion in the microscopic world is extremely different from the familiar classical continuous motion in the macroscopic world, so it is natural that intuition drawn from classical mechanics contradicts the picture of the new motion. In particular, a particle undergoing the new motion can move far away within a very small, or even infinitesimal, time interval, which appears to conflict with the local propagation of energy. In fact, this just provides the objective origin of quantum nonlocality, which has been confirmed in experiments. On the other hand, in confirming this kind of weird display of the new motion we have to consider the still weirder quantum measurement, which needs to be studied further in another paper. Here we should keep one thing in mind: when we enter the new field of quantum discontinuous motion, we should in principle re-examine the meaning of everything we carry over, including energy, the accepted law that it propagates locally, and even the concept of particle, if we still use them. In one word, our understanding of reality can only be determined by reality itself, not by our beliefs.
## VI Explaining the weird display of the wave function in terms of new motion
Finally, we give two familiar examples, which plainly manifest the existence of the new motion in the microscopic world, to explain the weird displays of the wave function in terms of the new motion of particles.
The first is the ground state of the hydrogen atom. Its position distribution density, which can be found through the above measurements, is $$\rho (𝐱)=|\psi (𝐱)|^2=\frac{1}{\pi a_0^3}\mathrm{exp}(-\frac{2r}{a_0})$$ (18) According to the new motion of the particle, at any instant the electron is in only one position in space, but during the infinitesimal time interval \[t,t+dt\] it moves throughout the whole region where the above function is nonzero, and its position measure density is the same as the position distribution density obtained from the wave function. Thus, during the infinitesimal time interval \[t,t+dt\], the charge distribution of the whole system is, according to Gauss's theorem, equivalent to a zero net charge distribution for an outside observer, and the whole charge distribution does not change in time either; it can then be easily understood that no energy is radiated during any finite, or even infinitesimal, time interval. This is just the objective origin of the mysterious stability of the atomic world. The second example is the double-slit experiment. People have long been trying to understand the formation of the double-slit interference pattern objectively, but few have been able to give an ontological description of it up to now; the essential reason, as we see it, is that the difference between an instant and an infinitesimal time interval has been ignored. By means of the new quantum discontinuous motion of the particle, the mystery of this process can be disclosed. The real process should be that a particle undergoing the new motion passes through both slits in the double-slit experiment. This means that the particle is still in only one of the two slits at any instant, but during the time interval $`\mathrm{\Delta }t`$, which can approach zero, the particle moves throughout both slits and passes through them, and the position measure density of the particle always satisfies $`\rho (𝐱,t)=|\psi (𝐱,t)|^2`$, which is finite and the same in both slits. Since the particle undergoing the new motion can pass through both slits in this objective way, the formation of the double-slit interference pattern, which is not a simple superposition of two single-slit patterns, becomes much easier to understand.
## VII Conclusions
On the whole, a new point of view about the motion of particles has been presented to interpret the weird display of the wave function describing microscopic objects. Mathematically, point-set theory casts new light on the study of physical motion; in particular, through a deep analysis of the regular dense point set we present a new kind of motion, called quantum discontinuous motion in contrast to classical continuous motion. The physical meaning of this new motion is then carefully examined. At the same time, we give a strict physical definition of quantum discontinuous motion, show that the notorious wave function is just a mathematical complex describing the new motion of the microscopic particle, and show that the Schrödinger equation of quantum mechanics is just the simplest nonrelativistic evolution equation for the new motion; in particular, a consistent axiom system for quantum mechanics is also deduced.
Finally, protective measurement and standard impulse measurement are used to confirm the existence of the new motion of the microscopic particle described by the wave function, and two famous examples are given to explain the weird display of the wave function in terms of the new motion.
Acknowledgments
Thanks for helpful discussions with X.Y. Huang (Peking University), A. Jadczyk (University of Wroclaw), P. Pearle (Hamilton College), F. Selleri (University of Bari), Y. Shi (University of Cambridge), A. Shimony, A. Suarez (Center for Quantum Philosophy), L.A. Wu (Institute of Physics, Academia Sinica), Dr. S.X. Yu (Institute of Theoretical Physics, Academia Sinica), and H.D. Zeh.
no-problem/9907/hep-th9907064.html
ar5iv
text
In two spatial dimensions the group relevant to the quantum statistics of particles is the braid group, rather than the permutation group. As a result, the possibility of non-standard statistics exists. A well-studied case is (abelian) anyons, transforming in a unitary abelian representation of the braid group. Anyons in the lowest Landau level, in particular, are relevant to the quantum Hall effect and constitute realizations of ideal exclusion statistics. A natural generalization is nonabelian anyon statistics, based on nonabelian representations of the braid group. These would be the anyonic analogs of parastatistics. Just as abelian anyons can be thought of as ordinary (bosonic or fermionic) particles interacting through a non-dynamical abelian gauge field, nonabelian anyons can be thought of as particles carrying internal degrees of freedom in some irreducible representation $`R`$ of a nonabelian group $`SU(n)`$ and interacting through an appropriate non-dynamical nonabelian gauge field. What fixes the statistics, then, is the group $`SU(n)`$, the representation $`R`$ and the coupling strength $`g`$ of the internal degrees of freedom to the gauge field. A field-theoretic approach to achieving such statistics, in analogy to the abelian case, is to couple the particles to a nonabelian gauge field with a Chern-Simons action. Such a particle-field model was proposed by Verlinde. $`g`$, then, is essentially the inverse of the coefficient of the Chern-Simons term and, as such, inherits the quantization condition $$g=\frac{2}{n},\qquad n\text{ an integer}$$ (1) This condition does not seem to be crucial for the purely first-quantized approach and, at any rate, will not play any role in this paper. It is of interest to derive the thermodynamics and statistical mechanical properties of nonabelian anyons in order to probe the possibility of new physics deriving from the nonabelian nature of the system. In a recent paper, Isakov, Lozano and Ouvry examined these questions for the simplest case of $`SU(2)`$ anyons in the fundamental (spin-half) representation. They found that the virial coefficients up to the fifth one do not depend on the statistics parameter $`g`$. They conjectured that this holds for all the coefficients and posed the question of a possible underlying symmetry that explains this vanishing dependence. The purpose of this note is to give a complete proof of the independence of all virial coefficients of this model of the statistics parameter $`g`$, valid for any group and any representation. It is based on a diagrammatic expression of the cluster coefficients which is useful in deriving them in a simple way and reveals their scaling properties with the volume. It will be apparent that the only feature of the statistical interaction relevant to this result is that it is a traceless operator in the space of internal degrees of freedom of the particles. We repeat here the main results for the system that will be used in this paper, as presented in Ref. . The model consists of $`N`$ non-interacting spinless particles on the plane with internal degrees of freedom transforming in some finite-dimensional unitary irreducible representation $`R`$ of $`SU(n)`$. (We shall refer to these degrees of freedom as flavor.) In the gauge where the hamiltonian of the particles is free, the nontrivial statistics manifests in the wavefunction of the system, which is not single valued.
Under an exchange of particles following a path belonging to an element of the braid group, the wavefunction transforms in some nonabelian representation of the braid group parametrized by the irrep $`R`$ of $`SU(n)`$ and a statistics parameter $`g`$. In principle an abelian part can also be included, parametrized by a second coupling constant $`\alpha `$, endowing the particles with (abelian) anyonic statistics. The contributions of the abelian and nonabelian parts to the statistical mechanics decouple, however, as will become apparent in the sequel, so we are not going to be concerned with the abelian part. The particles are taken as bosons as far as their abelian statistics is concerned. In analogy with the abelian case, we can perform a singular nonabelian gauge transformation that makes the wavefunction single-valued and bosonic, at the expense of introducing a gauge field coupling the particles. We also introduce an external strong constant magnetic field $`B=2\omega _c/e`$, as well as an external rotationally invariant harmonic oscillator potential of frequency $`\omega `$ (which serves as a 'box' to bound the particles). Upon extracting from the wavefunction an analytic nonabelian factor that accounts for its short-distance analytic and braiding behavior and a gaussian factor, we are left with an effective hamiltonian reading $$H=\sum _i\left(-2\partial _i\overline{\partial }_i+(\omega _t-\omega _c)z_i\partial _i+(\omega _t+\omega _c)\overline{z}_i\overline{\partial }_i+\omega _t\right)-2g\sum _{i<j}T_i^AT_j^A\left(\frac{\overline{\partial }_i-\overline{\partial }_j}{z_i-z_j}-\frac{\omega _t-\omega _c}{2}\right)$$ (2) In the above $`z=x+iy`$ is a complex coordinate on the plane, $`\partial =\partial /\partial z`$ is the corresponding derivative, and $`\omega _t^2=\omega _c^2+\omega ^2`$. $`T_i^A`$ are generators of the group $`SU(n)`$ in the $`R`$-representation, each acting in the flavor space of particle $`i`$; so the $`T^A`$ are $`d_R\times d_R`$ dimensional matrices and $`A=1,\mathrm{},n^2-1`$. Summation over repeated indices is always implied. All homogeneous analytic wavefunctions are eigenstates of the above hamiltonian. When $`B>0`$ the analytic wavefunctions become degenerate in the pure magnetic field limit $`\omega \rightarrow 0`$ and constitute the lowest Landau level (LLL) of the system. For large $`B`$ all higher levels acquire a large gap and decouple. Good analytic behavior of the wavefunction near coincidence points in that case requires $`g>0`$. Conversely, for $`B<0`$ we can extract an anti-analytic nonabelian factor from the wavefunction and arrive at an analogous expression for $`H`$. In that case it is the anti-analytic wavefunctions that constitute the LLL and we must have $`g<0`$. From now on we consider the case $`B,g>0`$, the opposite one being similar. We shall also assume that $`g`$ is not too big, so that no new states descend to the LLL from the excited spectrum. The end result is that on states in the LLL the hamiltonian assumes the form $$H=H_o+S$$ (3) where $`H_o`$ is the hamiltonian of a non-interacting bosonic system and $`S`$ is the statistics part coupling the internal degrees of freedom of the particles: $$H_o=N\omega _t+(\omega _t-\omega _c)\sum _iz_i\partial _i,\qquad S=g(\omega _t-\omega _c)\sum _{i<j}T_i^AT_j^A$$ (4) The spectrum of the above hamiltonian can be easily obtained. $`H_o`$ essentially counts the degree of homogeneity in $`z_i`$ of the analytic wavefunction, which can then be chosen to be a homogeneous polynomial in the $`z_i`$.
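As a quick symbolic check (our illustration, not part of the original derivation), one can verify that any homogeneous polynomial is an eigenstate of $`H_o`$ with eigenvalue $`N\omega _t+(\omega _t-\omega _c)`$ times its degree; the two-particle degree-3 monomial below is an arbitrary choice.

```python
import sympy as sp

# Check that a homogeneous polynomial is an eigenstate of
# H_o = N*omega_t + (omega_t - omega_c) * sum_i z_i d/dz_i  (Eq. 4).
z1, z2, wt, wc = sp.symbols('z1 z2 omega_t omega_c')
psi = z1**2 * z2          # homogeneous of degree 3, N = 2 particles
N = 2
H0_psi = N*wt*psi + (wt - wc)*(z1*sp.diff(psi, z1) + z2*sp.diff(psi, z2))
print(sp.simplify(H0_psi / psi))   # -> 2*omega_t + 3*(omega_t - omega_c)
```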
$`S`$ can be expressed as $$S=\frac{g}{2}(\omega _t-\omega _c)\left[\left(\sum _iT_i^A\right)^2-\sum _i\left(T_i^A\right)^2\right]$$ (5) Under the total flavor group with generators $`T^A=\sum _iT_i^A`$, states transform in the tensor product of $`N`$ $`R`$-irreps, $`R\times \mathrm{}\times R`$, which can be decomposed into irreducible components. On states transforming under an irreducible representation $`R_t`$ of the total flavor, $`S`$ becomes $$S=\frac{g}{2}(\omega _t-\omega _c)\left[C_2(R_t)-NC_2(R)\right]$$ (6) where $`C_2(R)`$ and $`C_2(R_t)`$ are the quadratic Casimirs of $`R`$ and $`R_t`$. The total wavefunction must carry the $`R_t`$ representation of total flavor and be symmetric under total particle exchange (coordinates and internal degrees of freedom). This calls for some group theory for constructing the states . From this spectrum the partition function, cluster and virial coefficients can be calculated. This approach was followed in (for $`R`$ the spin-half of $`SU(2)`$) and the first few virial coefficients in the thermodynamic limit were thus calculated. We shall take here an alternative route, based on a diagrammatic expansion. The facts central to the derivation are: 1. The cluster and virial coefficients in the thermodynamic limit can be calculated by taking the strength of the external potential to zero (corresponding to taking the volume $`V`$ of the 'box' to infinity). The correct scaling limit for the $`k`$-th cluster coefficient is $$\frac{1}{k\beta (\omega _t-\omega _c)}\rightarrow V\frac{\omega _c}{\pi }$$ (7) 2. The statistical interaction $`S`$ is of order $`1/V`$. 3. The statistical interaction $`S`$ is a sum of two-body terms, each of which is traceless with respect to the internal space of each particle. We now give the rules for the path-integral representation of the system. (For a more detailed discussion see .) The $`N`$-body partition function $`Z_N`$ can be expressed as a many-body path integral in periodic euclidean time $`\beta `$. For short, we shall call such path-integral configurations diagrams. Since the particles are identical, the configuration at time $`\tau =\beta `$ can be any permutation of the one at $`\tau =0`$. This means that the paths of particles can braid and interchange as they go round the periodic time direction. Such periodically connected paths of $`p`$ particles constitute one 'thread' wrapping $`p`$ times around the time circle. Appropriate symmetry factors must be included in each diagram to avoid overcounting of degrees of freedom. Further, since the particles have flavor degrees of freedom, each path is also labeled by a flavor index $`a=1,\mathrm{},d_R`$; summation over all possible values of such indices is assumed. The interaction $`S`$ can be taken into account perturbatively. It is two-body and instantaneous, so each insertion corresponds to coupling two distinct particle paths at a given time. Since it acts on the flavor space of the two particles, it changes the flavor index on the two paths before and after the interaction, say from $`a`$ to $`b`$ on one and from $`c`$ to $`d`$ on the other. The strength of this interaction is given by the matrix element $$S_{ab;cd}=g(\omega _t-\omega _c)(T^A)_{ab}(T^A)_{cd}$$ (8) The symmetry factors of diagrams with such insertions are modified, since the paths connected by $`S`$ are obviously singled out. A typical configuration for the path integral in the case of five particles and two insertions of $`S`$ is depicted in fig.
1.a, where the constraints of periodicity for the paths and their flavors have been taken into account. For our purposes only the topology and connectivity of these diagrams will be important, so we depict them in the simplified fashion of fig. 1.b. The grand partition function of the system, $`𝒵`$, is given by the sum of the $`N`$-body partition functions for all $`N`$, weighted by fugacity factors $`z^N=e^{\mu \beta N}`$ with $`\mu `$ the chemical potential: $$𝒵=\sum _{N=0}^{\infty }Z_Nz^N$$ (9) As such, it is the sum of all many-body diagrams. The grand potential $`\mathrm{\Omega }`$ is the logarithm of $`𝒵`$ and, by the standard argument, it is given by the sum of all connected diagrams. Two parts of a diagram are disconnected if the particle paths of each part do not mix with the other and if there are no interactions $`S`$ coupling the two parts. The coefficients $`b_k`$ of the expansion of $`\mathrm{\Omega }`$ in powers of $`z`$ are the cluster coefficients: $$\mathrm{\Omega }=\sum _{k=1}^{\infty }b_kz^k$$ (10) Therefore the $`k`$-th cluster coefficient $`b_k`$ is simply the sum of all connected $`k`$-particle diagrams. We come now to the question of determining $`b_k`$ in the thermodynamic limit. We must isolate, in the class of $`k`$-body connected diagrams, the leading contribution in $`V`$ (or, equivalently, in $`(\omega _t-\omega _c)^{-1}`$), which, for a proper extensive behavior, must be of order $`V`$. To achieve this, note that each topologically connected part of a diagram, consisting of a single thread looping $`p`$ times, in the absence of interaction insertions is of order $`V`$. Indeed, this is simply the $`p`$-th cluster coefficient of noninteracting bosons coming in $`d_R`$ flavors, which is properly of order $`V`$. (Alternatively, if the infrared regulator were a flat 'box' rather than an oscillator potential, the factor $`V`$ would come from the translation invariance of the connected diagram within the box.) Thus, if a diagram contains $`q`$ topologically connected components, it will be, a priori, of order $`V^q`$. For it to be connected, there must be enough insertions of $`S`$ to connect the $`q`$ components to each other. We must have a minimum number of $`q-1`$ insertions in order to fully connect the components in a tree-like topology (fig. 2). Since each insertion of $`S`$ contributes a factor $`1/V`$, such minimally connected diagrams are of order $`V`$. Any further insertion of $`S`$ will give a sub-leading in $`V`$ diagram. In fact, by simple topological counting arguments, we see that the number of loops in non-minimally connected diagrams counts the sub-leading powers of $`1/V`$. So far we have concluded that $`b_k`$ will be given by the sum of all minimally connected $`k`$-particle diagrams with any number of components $`q`$ ($`1\le q\le k`$). Now comes the final observation: each tree must have two or more 'endpoints,' that is, components to which only one insertion of $`S`$ connects. The entire thread of such a component must clearly carry the same flavor index $`a`$; thus the corresponding matrix element for the insertion $`S`$ connecting to this component is $`S_{aa;bc}`$, with $`b,c`$ the flavor indices connecting at the other end of the insertion. Upon summing over $`a`$ we have $$\sum _aS_{aa;bc}=g(\omega _t-\omega _c)\sum _a(T^A)_{aa}(T^A)_{bc}=g(\omega _t-\omega _c)(T^A)_{bc}\mathrm{tr}T^A=0$$ (11) Therefore, all such diagrams vanish.
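A one-line numerical check of this tracelessness argument (our illustration, using the fundamental of $`SU(2)`$ as an arbitrary example) confirms that the endpoint contraction vanishes identically:

```python
import numpy as np

# Verify Eq. (11): the endpoint contraction sum_a S_{aa;bc} is proportional
# to sum_A tr(T^A) (T^A)_{bc}, which vanishes for traceless generators.
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]
T = [0.5 * p for p in paulis]                    # SU(2) fundamental generators
endpoint = sum(np.trace(TA) * TA for TA in T)    # sum_A tr(T^A) T^A
print(np.allclose(endpoint, 0.0))                # True: endpoint diagrams vanish
```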
The only surviving diagram is the one with a single topologically connected component and no $`S`$ insertions, which reproduces the cluster coefficient of free bosons with $`d_R`$ flavors. Since the virial coefficients are uniquely expressed in terms of the cluster coefficients, we have proved that the virial coefficients of the system are independent of their nonabelian statistics, for any $`R`$ of any $`SU(n)`$. The above reasoning can also be used to show that the contribution of the abelian part is the same as in the absence of the nonabelian part. An abelian part can be included by appending a $`U(1)`$ generator $`T^0=Q`$ to the $`T^A`$, proportional to the unit matrix. The trace in the above insertion then gives $$\sum _aS_{aa;bc}=g(\omega _t-\omega _c)\sum _a(T^0)_{aa}(T^0)_{bc}=gd_RQ^2(\omega _t-\omega _c)\delta _{bc}$$ (12) This is a contribution proportional to an anyonic parameter $`\alpha =gQ^2d_R`$. The effect of the insertion on the flavor indices of the remaining diagrams is effectively suppressed (since, due to $`\delta _{bc}`$, $`b=c`$). Repeating the argument with all endpoint graphs, we eventually reduce the whole graph to a set of components with non-interacting flavor indices and the standard abelian statistics interaction between the components. The cluster coefficients are simply $`d_R`$ times the single-flavor anyonic coefficients. We remark here that the above techniques can be used to obtain easily the subleading in $`1/V`$ contributions to the cluster and virial coefficients. To each component of the diagram at least two $`S`$ insertions must attach (else the diagram vanishes by the previous argument). Summing over the flavor indices on a component to which $`m`$ insertions attach gives a term proportional to $$g^m(\omega _t-\omega _c)^mD_R^{A_1\mathrm{}A_m}$$ (13) where the $`m`$-index symbol $`D_R`$ is $$D_R^{A_1\mathrm{}A_m}=\mathrm{tr}(T^{A_1}\mathrm{}T^{A_m})$$ (14) and the total diagram involves multiplying the $`D`$-symbols of each component and contracting the group indices $`A_i`$ according to the connectivity of the components through $`S`$-insertions. A diagram with $`q`$ components and $`q+s-1`$ insertions will be of subleading order $`1/V^s`$ and of order $`g^{q+s-1}`$ in the statistics parameter. Since $`q`$ can range from 1 to $`k`$ for a $`k`$-particle diagram, we conclude that the $`1/V^s`$ correction to the cluster coefficient $`b_k`$ will be a polynomial in $`g`$ with powers ranging from $`g^s`$ to $`g^{k+s-1}`$. The task of calculating the above corrections simplifies further in the special case that $`R`$ is the fundamental of $`SU(n)`$. In that case a well-known completeness relation simplifies $`S`$: $$S_{ab;cd}=g(\omega _t-\omega _c)\sum _A(T^A)_{ab}(T^A)_{cd}=g(\omega _t-\omega _c)\frac{1}{2}(\delta _{ad}\delta _{cb}-\frac{1}{n}\delta _{ab}\delta _{cd})$$ (15) So $`S`$ is the sum of a part that simply interchanges the flavor indices of the strands that it couples plus a part proportional to the identity operator. The evaluation of diagrams in this case becomes a simple matter, with no group theory required. Having said the above, we should still point out that the $`1/V`$ corrections obtained this way are specific to the 'harmonic box' regularization of the system. They are thus likely not universal and of little interest.
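The completeness relation (15) is easy to verify numerically; the following check (our illustration) does so for the fundamental of $`SU(2)`$, i.e. $`n=2`$, with all four indices written out explicitly.

```python
import numpy as np

# Check Eq. (15): sum_A (T^A)_{ab}(T^A)_{cd} = (1/2)(d_{ad} d_{cb} - d_{ab} d_{cd}/n).
n = 2
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]
T = [0.5 * p for p in paulis]
lhs = sum(np.einsum('ab,cd->abcd', TA, TA) for TA in T)
d = np.eye(n)
rhs = 0.5 * (np.einsum('ad,cb->abcd', d, d) - np.einsum('ab,cd->abcd', d, d) / n)
print(np.allclose(lhs, rhs))   # True
```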
Concluding, the results of this paper are somewhat disappointing, since they indicate that no new physics is expected in the thermodynamic limit from any nonabelian statistics of particles in the LLL. The result seems completely generic, since it relies on little else than the very nonabelian nature of the statistics, that is, the vanishing of the trace of its generators. Still, it is expected that nonabelian statistics will influence the properties of systems not in the LLL. A calculation of the properties of such systems along the lines presented here may be an interesting endeavor. I am thankful to S. Isakov and to S. Ouvry for discussing their results on nonabelian anyons prior to publication, and to the Les Houches 1998 organizers for hosting an exciting summer school where the ideas in this paper were initiated.
no-problem/9907/quant-ph9907071.html
ar5iv
text
# Nonclassical effects in a driven atoms/cavity system in the presence of arbitrary driving field and dephasing
## I Introduction
In this paper we report on extensions to previous work on dynamical cavity QED effects in the photon statistics of transmitted light from a driven optical cavity coupled to an ensemble of two level atoms. Much work has been done on structural cavity QED effects such as energy level shifts and the modification of spontaneous emission rates. These structural effects can be seen to arise from semiclassical models. In addition, work has been done on dynamical effects where the coupling between the cavity field and atoms has a significant effect on the evolution of the system, in particular in the strong coupling regime where a single quantum of energy, and hence single quantum fluctuations, gives rise to nontrivial dynamics. In this regime the field can neither be viewed as mildly perturbed by the atoms (good cavity limit), nor are the atoms mildly perturbed by the field (bad cavity limit). For a review of the work on structural and dynamical effects in cavity QED, see Ref. . The problem of a single two level atom coupled to a single mode field was originally studied by Jaynes and Cummings and extended to many atoms by Tavis and Cummings. These models have been extended in recent theoretical work to include spontaneous emission and cavity field decay, and atomic transit time broadening and detunings. Nonclassical correlations in photon statistics which violate a Schwarz inequality have been predicted for this system, including photon antibunching \[defined here as $`g^{(2)}(0)_+>g^{(2)}(0)`$\] and sub-Poissonian statistics \[$`g^{(2)}(0)<1`$\]. Other effects have also been predicted, which we refer to as overshoots and undershoots \[$`|g^{(2)}(\tau )-1|>|g^{(2)}(0)-1|`$, where $`g^{(2)}(\tau )`$ is the normalized second order correlation function\]. Examples of these nonclassical correlations from previous weak field results are shown in Fig. 1. Figure 1(a) shows photon antibunching and sub-Poissonian statistics, (b) shows an overshoot violation, and (c) shows an undershoot violation. Photon antibunching has been seen experimentally in this system by Rempe et al. . Overshoot violations have recently been seen by Mielke, Foster, and Orozco . In general the theory matches the experiments qualitatively, while the quantitative size of the nonclassical effects does not agree. This has led us to consider complications in the experiments which may be responsible for the discrepancy, including deviations from the weak field limit and dephasing due to atoms entering and leaving the cavity. The experiments use an atomic beam to introduce atoms into the cavity, so that the time of flight across the mode is on the order of ten spontaneous emission lifetimes . We expect that dephasing due to atomic traversal of the cavity will have a detrimental effect on nonclassical correlations. In addition, deviations from the weak driving field limit and interactions with 'spectator' atoms far from the mode waist may be important. These effects are investigated in this paper by numerically solving the master equation for the system and by quantum trajectory simulations. Rather than investigating all possible effects at once, we isolate them and try to understand what is most critical.
In Section III we discuss the photon statistics of the transmitted light outside the weak field limit. Section IV presents two models of atomic transit dephasing and the resulting photon statistics. In Section V we include the effects of a spectator atom with a coupling which is a fraction of the maximum coupling, and finally we conclude in Section VI.
## II Physical model
The system under investigation is an extension of the Jaynes-Cummings Hamiltonian which includes the effects of atomic and cavity field decay as well as a coherent driving field. A schematic diagram of the system is shown in Fig. 2. The field and atomic Hamiltonians are given by $$H_F=\hbar \omega _ca^{\dagger }a$$ (1) $$H_A=\sum _j\hbar \omega _a\sigma _z^j$$ (2) and the atom-field interaction in the rotating wave approximation is given by $$H_{AF}=\sum _ji\hbar g_j\left(a^{\dagger }\sigma _{-}^j-a\sigma _+^j\right)$$ (3) The cavity field creation and annihilation operators are $`a^{\dagger }`$ and $`a`$ respectively, and $`\sigma _\pm ^j`$ and $`\sigma _z^j`$ are Pauli operators for the $`j`$th two-level atom. The atom-field coupling strength is determined by $$g_j=\mu \left(\frac{\omega _c}{2\hbar ϵ_0V}\right)^{1/2}\mathrm{sin}kz_j$$ (4) where $`\mu `$ is the dipole transition matrix element between the two atomic states, $`V`$ is the cavity mode volume, and $`\mathrm{sin}kz_j`$ takes into account the position of the atom in the mode. In previous work it was assumed that the atoms sit at antinodes of the field where the coupling is a maximum. Here we allow the atoms to be placed anywhere in the mode, so that a range of couplings is allowed for different atoms. The cavity field is driven by a classical laser field, with the driving field-cavity field coupling described by the Hamiltonian $$H_L=i\hbar E\left(a^{\dagger }e^{-i\omega _dt}-ae^{i\omega _dt}\right)$$ (5) where $`E`$ is the classical laser intensity, scaled such that $`E/\kappa `$ is the photon flux injected into the cavity. Throughout we assume the atom, cavity field, and driving field are on resonance ($`\omega _0\equiv \omega _c=\omega _a=\omega _d`$). Dissipation in the system gives rise to nontrivial irreversible dynamics. Cavity field damping and atomic population and polarization decay are described by superoperators acting on the density matrix of the system, which are derived using standard methods . Cavity field damping is described by $$\mathcal{L}_F\rho =\kappa \left(2a\rho a^{\dagger }-a^{\dagger }a\rho -\rho a^{\dagger }a\right)$$ (6) where $`\kappa `$ is the rate of cavity field damping. Atomic population and polarization decay are described by $$\mathcal{L}_A\rho =\frac{\gamma }{2}\sum _j\left(2\sigma _{-}^j\rho \sigma _+^j-\sigma _+^j\sigma _{-}^j\rho -\rho \sigma _+^j\sigma _{-}^j\right)$$ (7) where $`\gamma `$ is the spontaneous emission rate of an atom. The full master equation in the Born-Markov approximation is then $$\dot{\rho }=-\frac{i}{\hbar }[H_A+H_F+H_{AF}+H_L,\rho ]+\mathcal{L}_F\rho +\mathcal{L}_A\rho \equiv \mathcal{L}\rho $$ (8) A numerical solution of the master equation is carried out in the Fock state basis, and a quantum trajectory simulation is also developed from the master equation.
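A compact way to reproduce the steady state and the correlation function of Eq. (20) below is with a modern master-equation solver; the following sketch assumes QuTiP (whose `correlation_3op_1t` returns $`\langle A(0)B(\tau )C(0)\rangle `$), a single maximally coupled atom, and illustrative values of the truncation and rates that are our choices, not the paper's.

```python
import numpy as np
from qutip import (destroy, qeye, tensor, sigmam, steadystate, expect,
                   correlation_3op_1t)

# One-atom version of Eqs. (1)-(8) on resonance, in the frame rotating
# at omega_0, with hbar = 1. N and the rates are illustrative.
N = 15
g, kappa, gamma, E = 1.0, 1.6, 1.0, 0.05
a = tensor(destroy(N), qeye(2))
sm = tensor(qeye(N), sigmam())
H = 1j * g * (a.dag() * sm - a * sm.dag()) + 1j * E * (a.dag() - a)
c_ops = [np.sqrt(2 * kappa) * a, np.sqrt(gamma) * sm]   # from Eqs. (6)-(7)

rho_ss = steadystate(H, c_ops)
n_ss = expect(a.dag() * a, rho_ss)
taus = np.linspace(0.0, 10.0, 201)
G2 = correlation_3op_1t(H, None, taus, c_ops, a.dag(), a.dag() * a, a)
g2 = np.real(G2) / n_ss**2                              # Eq. (20)
print(g2[0])   # g2(0) < 1 would signal sub-Poissonian statistics
```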
### A Numerical solution of the master equation
The master equation in the Fock state basis is $$\dot{\rho }_{n,+;m,+}=-g\sqrt{n+1}\rho _{n+1,-;m,+}-g\sqrt{m+1}\rho _{n,+;m+1,-}+E\sqrt{n}\rho _{n-1,+;m,+}+E\sqrt{m}\rho _{n,+;m-1,+}-E\sqrt{n+1}\rho _{n+1,+;m,+}-E\sqrt{m+1}\rho _{n,+;m+1,+}+2\kappa \sqrt{(n+1)(m+1)}\rho _{n+1,+;m+1,+}-\left[\kappa (n+m)+\gamma \right]\rho _{n,+;m,+}$$ (12) $$\dot{\rho }_{n,-;m,-}=g\sqrt{n}\rho _{n-1,+;m,-}+g\sqrt{m}\rho _{n,-;m-1,+}+E\sqrt{n}\rho _{n-1,-;m,-}+E\sqrt{m}\rho _{n,-;m-1,-}-E\sqrt{n+1}\rho _{n+1,-;m,-}-E\sqrt{m+1}\rho _{n,-;m+1,-}+2\kappa \sqrt{(n+1)(m+1)}\rho _{n+1,-;m+1,-}-\kappa (n+m)\rho _{n,-;m,-}+\gamma \rho _{n,+;m,+}$$ (15) $$\dot{\rho }_{n,+;m,-}=-g\sqrt{n+1}\rho _{n+1,-;m,-}+g\sqrt{m}\rho _{n,+;m-1,+}+E\sqrt{n}\rho _{n-1,+;m,-}+E\sqrt{m}\rho _{n,+;m-1,-}-E\sqrt{n+1}\rho _{n+1,+;m,-}-E\sqrt{m+1}\rho _{n,+;m+1,-}+2\kappa \sqrt{(n+1)(m+1)}\rho _{n+1,+;m+1,-}-\left[\kappa (n+m)+\gamma /2\right]\rho _{n,+;m,-}$$ (18) $$\dot{\rho }_{n,-;m,+}=\dot{\rho }_{m,+;n,-}^{*}$$ (19) where $`\rho _{n,\pm ;m,\pm }=\langle n,\pm |\rho |m,\pm \rangle `$ and $`+`$ and $`-`$ denote the upper and lower atomic states respectively. We have numerically solved the master equation for the steady state for arbitrary driving field by truncating the Fock basis at a point where the population of $`|n_{max},\pm \rangle `$ is less than $`10^{-4}`$. The second order correlation function $$g^{(2)}(\tau )=\frac{\langle a^{\dagger }(0)a^{\dagger }(\tau )a(\tau )a(0)\rangle }{\langle a^{\dagger }a\rangle _{ss}^2}$$ (20) is calculated from steady state matrix elements using the quantum regression theorem due to Lax .
### B Quantum trajectory simulation
We have developed a quantum trajectory simulation of this system from the master equation, following the formalism of Carmichael . We unravel the master equation into a piece describing continuous evolution and a set of collapse operators, in a way which is based on a simulated photon counting experiment: $$\mathcal{L}\rho =\left(\mathcal{L}-\mathcal{S}\right)\rho +\mathcal{S}\rho $$ (21) where $`(\mathcal{L}-\mathcal{S})\rho `$ is identified with the terms which can be written as commutators or anticommutators, and $`\mathcal{S}\rho `$ with all terms which can be written as $`\widehat{O}\rho \widehat{O}^{\dagger }`$. This particular unraveling is well suited to studies of photon statistics, as the $`\widehat{O}`$'s represent quantum jumps due to the emission of a photon. The continuous evolution of the system is described by $`(\mathcal{L}-\mathcal{S})\rho `$, while $`\mathcal{S}\rho `$ describes collapse events which punctuate the evolution. We define a closed-system Hamiltonian and a dissipative Hamiltonian from the unraveled master equation as $$\left(\mathcal{L}-\mathcal{S}\right)\rho =-\frac{i}{\hbar }[H_S,\rho ]+[H_D,\rho ]_+$$ (22) $$=-\frac{i}{\hbar }[H_A+H_F+H_{AF}+H_L,\rho ]-\left[\kappa a^{\dagger }a+\frac{\gamma }{2}\sum _j\sigma _+^j\sigma _{-}^j,\rho \right]_+$$ (23) where $`[\widehat{O},\rho ]_+`$ denotes the anticommutator of $`\widehat{O}`$ and $`\rho `$. A non-Hermitian Hamiltonian which reproduces the continuous evolution of the density matrix is defined as $$H=H_S+i\hbar H_D$$ (24) The rest of the master equation enters through collapse operators which are applied at random times when $`R(0,1)<P_c`$, where $`R(0,1)`$ is a random number between zero and one and $$P_c=\langle \psi |\widehat{O}^{\dagger }\widehat{O}|\psi \rangle dt$$ (25) The time step size is $`(20r)^{-1}`$, where $`r`$ is the fastest rate in the problem. In the event that we get two collapse processes in a single time step, we use a random number to choose one of the collapses. The time step is chosen so as to minimize such occurrences.
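A minimal sketch of one step of this trajectory algorithm is given below (our illustration, with hbar = 1). It assumes the non-Hermitian Hamiltonian of Eq. (24) and the collapse operators have been precomputed as dense matrices, which is a simplification of ours, not the paper's implementation.

```python
import numpy as np

# One step of the unraveled evolution: jump with probability P_c of Eq. (25),
# otherwise propagate under the non-Hermitian H of Eq. (24), then renormalize.
def trajectory_step(psi, H, c_ops, dt, rng):
    probs = np.array([np.real(np.vdot(C @ psi, C @ psi)) * dt for C in c_ops])
    if rng.random() < probs.sum():
        # a second random draw picks one collapse when two are possible
        k = rng.choice(len(c_ops), p=probs / probs.sum())
        psi = c_ops[k] @ psi
    else:
        psi = psi - 1j * dt * (H @ psi)   # continuous, non-unitary evolution
    return psi / np.linalg.norm(psi)
```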
For this system the collapse operators are $`\widehat{a}`$ and $`\sigma _{-}^j`$, corresponding to the emission of a photon from the cavity field and from an atom respectively. In Section IV we describe the dephasing due to an atom leaving the cavity using another collapse operator. Because this unraveling of the master equation is based on photon counting experiments, the calculation of the second order correlation function is carried out quite naturally. The collapse operator $`\widehat{a}`$ corresponds to the emission and detection of a photon from the cavity field mode. We calculate $`g^{(2)}(\tau )`$ by building up a histogram of delay times between photon detections, averaged over a long evolution time, in a way analogous to experimental measurement.
## III Non-weak driving field
The photon statistics of the transmitted light have already been calculated in the weak field limit using a truncated five-state basis in which the system holds up to two quanta of energy . The three types of nonclassical behavior previously discussed have been seen in subsequent experiments; however, these experiments are not strictly in the weak field limit. It is of interest, then, to calculate the photon statistics for arbitrary driving field and to see to what extent the nonclassical effects persist. It is expected that for a strong enough driving field the atoms saturate and the nonclassical photon correlations are washed out, because the cavity then basically contains a coherent state which is only mildly perturbed by the presence of the atom. Looking at the photon correlations in the weak field limit from the point of view of quantum trajectories, we can interpret the nonclassical effects as resulting from the collapse of the wavefunction. The detection of the first photon emitted from the steady state collapses the wavefunction of the system ($`|\psi _{ss}\rangle \rightarrow a|\psi _{ss}\rangle `$), and the subsequent time evolution as the system returns to the steady state determines the photon correlations. The second order correlation function is given by the probability of detecting a second photon, normalized to the probability of detecting a photon in the steady state. Here we start the system in the steady state, collapse the wavefunction, and let it evolve to get $$g^{(2)}(\tau )=\frac{\langle a^{\dagger }(\tau )a(\tau )\rangle _c}{\langle a^{\dagger }a\rangle _{ss}}$$ (26) This assumes that there is usually only one photon emitted as the system returns to steady state, which is a good approximation in the weak field limit. An example of this is shown in Fig. 3 for the case of the overshoot violation. Outside of the weak field limit the photon correlations are altered for two reasons. Most simply, the time evolution following a collapse from steady state is altered by the stronger driving field. The other effect is the presence of multiple collapses before the system returns to the steady state. Consider a multiple collapse process. The first photon comes from the steady state and collapses the wavefunction of the system ($`|\psi _{collapse1}\rangle =a|\psi _{ss}\rangle `$). The time evolution then occurs as before. However, the second photon collapses the system to a new state which depends on the delay time since the emission of the first photon \[$`|\psi _{collapse2}\rangle =a|\psi _{collapse1}(\tau )\rangle `$\]. If a third photon is emitted before the system returns to steady state, its delay time will depend on the details of the evolution from $`|\psi _{collapse2}\rangle `$.
When averaged over many instances, this process washes out the nonclassical effects because of the different evolution following the different possible states $`|\psi _{collapse2}\rangle `$. An example of this process is shown in Fig. 4, where the conditioned photon number undergoes two collapse events. The figure shows three delay times of $`1/\gamma `$, $`2/\gamma `$, and $`3/\gamma `$, with a different evolution of the conditioned photon number resulting in each case. Figure 5 shows the time evolution of $`\langle a^{\dagger }a\rangle _c`$ following a photon emission from the steady state for a variety of system parameters. The overshoot persists in the evolution of the field following the emission of a photon from the cavity for a driving field as large as $`E/E_{sat}=0.41`$. The undershoot and the sub-Poissonian statistics survive for driving fields as large as $`E/E_{sat}=0.8`$ and $`E/E_{sat}=0.37`$ respectively. The saturation field strength $`E_{sat}`$ is the driving field for which $$\langle n\rangle =n_{sat}=\frac{\gamma ^2}{8g^2}$$ (27) The photon statistics of the transmitted field are shown in Fig. 6 for the three types of nonclassical effects seen in this system, at a variety of driving field intensities. Figure 6(a) shows $`g^{(2)}(\tau )`$ for system parameters ($`g/\gamma =1,\kappa /\gamma =0.77`$) which produce an overshoot violation of the Schwarz inequality \[$`g^{(2)}(\tau )>g^{(2)}(0)`$\] in the weak field limit. At a driving field of $`E/E_{sat}=0.17`$ the overshoot violation is gone; this nonclassical effect is thus quite dependent on the weak driving field. Figure 6(b) shows photon statistics for system parameters ($`g/\gamma =2,\kappa /\gamma =5`$) which produce an undershoot violation of the Schwarz inequality \[$`1-g^{(2)}(\tau )_{min}>g^{(2)}(0)-1`$\] in the weak field limit. Here the nonclassical effect disappears at a driving field of $`E/E_{sat}=0.28`$, showing that this is a more robust effect. Figure 6(c) shows photon statistics for system parameters ($`g/\gamma =1,\kappa /\gamma =1.6`$) which produce photon antibunching \[$`g^{(2)}(0)_+>g^{(2)}(0)`$\] and sub-Poissonian statistics \[$`g^{(2)}(0)<1`$\] in the weak field limit. In this case the nonclassical effect persists until $`E/E_{sat}=0.16`$, where the system shows slight bunching and super-Poissonian statistics. (Notice that the nonclassical effects are not as robust as the time evolution of the cavity field would indicate; the destruction of the nonclassical effects is therefore in part a result of multiple photon processes.) For all system parameters the transmitted light becomes super-Poissonian as the driving field is increased.
## IV Atomic transit dephasing
We now turn our attention to the effects of atomic traversal of the cavity on the photon statistics. In previous work it has been assumed that the atoms are all fixed at antinodes of the cavity field and so have the maximum coupling $`g_0=\mu \left(\omega _c/2\hbar ϵ_0V\right)^{1/2}`$. Experiments on this system have used atomic beams to send atoms through a cavity. This atomic traversal of the cavity introduces two new effects. First, the atom/cavity field coupling depends on the position of the atom in the cavity. One might think that this would destroy the nonclassical correlations. However, the atoms with the largest coupling interact most strongly with the field and are most likely to contribute to the correlations. So the atoms near an antinode will have the largest contribution, and other atoms may have little effect on the correlations. This issue will be further addressed in Sec. V.
The second effect of atomic traversal is dephasing, which occurs when an atom enters or leaves the cavity. It is this effect which we consider in this section. We have used two approaches to model the dephasing due to atomic traversal. The first is to add to the master equation a term which describes nonradiative decay of the atomic polarization: $$\dot{\rho }=\mathcal{L}\rho +\gamma _{ph}\left(\sigma _z\rho \sigma _z-\rho \right)$$ (28) This term in the master equation has its origins in collisional processes, and so may or may not accurately describe the dephasing which occurs when an atom leaves the cavity. The second approach uses a quantum trajectory simulation of the system to model the dephasing. In this approach we assume that there is always exactly one atom in the cavity: an atom leaves the cavity, and another atom enters the cavity in the ground state, at a rate $`\gamma _{ph}`$. We can assume the atom enters the cavity in the ground state, but it is not immediately clear how to deal with the state of the exiting atom. This atom is in some superposition of excited and ground states, and these states are entangled with the cavity field state. One approach would be to leave the photon number distribution of the cavity field unchanged, using a collapse operator which has the following action on the state of the system: $$|\psi \rangle =\sum _n\left(c_{e,n}|e,n\rangle +c_{g,n}|g,n\rangle \right)\rightarrow |\psi _c\rangle =\sum _n\left(c_{e,n}^2+c_{g,n}^2\right)^{1/2}|g,n\rangle $$ (29) However, this is not a consistent application of the quantum trajectories. Consider the evolution of the atom after it leaves the cavity. The atom at some later time may emit a photon into the vacuum, meaning it was in the excited state when it left the cavity; or it will never emit a photon, meaning it was in the ground state when it left the cavity. In general, the atom and the environment, and by entanglement the atom/cavity system, will then be described by a density operator. However, we wish to use a pure state to describe the atom/cavity system conditioned on the detection of transmitted photons. To be consistent we must use a pure state to describe the atom after it has left the cavity. We use a collapse operator which picks either the excited-state field distribution or the ground-state field distribution of the system and then places the new atom in the ground state. This operator has the following action: $$|\psi \rangle =\sum _n\left(c_{e,n}|e,n\rangle +c_{g,n}|g,n\rangle \right)\rightarrow |\psi _c\rangle =\sum _nc_{e,n}|g,n\rangle \text{ with probability }\sum _nc_{e,n}^2$$ (31) $$|\psi \rangle =\sum _n\left(c_{e,n}|e,n\rangle +c_{g,n}|g,n\rangle \right)\rightarrow |\psi _c\rangle =\sum _nc_{g,n}|g,n\rangle \text{ with probability }\sum _nc_{g,n}^2$$ (32) This collapse operator is then applied at a Gaussian-distributed series of times with mean $`1/\gamma _{ph}`$ and full width $`1/\gamma _{ph}`$, as this approximates the traversal times of atoms with a Maxwell-Boltzmann velocity distribution. This model of dephasing differs from collisional dephasing in two important ways. First, it does not enter the deterministic Hamiltonian evolution between collapses at all, whereas the collisional dephasing in the trajectory picture would have both a collapse component and a decay of coherence in the continuous evolution. Second, this dephasing always places the atom in the ground state, whereas the collisional dephasing places the atom in the ground or excited state with probabilities determined by the relative populations.
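A small sketch of the transit collapse of Eqs. (31)-(32) is given below (our illustration); the explicit renormalization of the surviving branch is our assumption, as is the array representation of the amplitudes $`c_{e,n}`$ and $`c_{g,n}`$.

```python
import numpy as np

# Transit collapse: pick the excited- or ground-state field distribution
# with its Born probability, renormalize it, and start the entering atom
# in the ground state. c_e and c_g hold the outgoing atom's amplitudes.
def transit_collapse(c_e, c_g, rng):
    p_e = np.sum(np.abs(c_e)**2)
    branch = c_e if rng.random() < p_e else c_g
    branch = branch / np.linalg.norm(branch)
    return np.zeros_like(branch), branch   # new (c_e, c_g): atom in |g>
```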
Now we turn to our results and a comparison of the two types of dephasing. Because the traversal dephasing does not affect the deterministic evolution of the system, we expect that for a given dephasing rate it will be less destructive of the nonclassical photon statistics than collisional dephasing. In Fig. 7 we show the second order correlation function with collisional dephasing. All three types of nonclassical effects are quite sensitive to this dephasing, with $`\gamma _{ph}=0.05`$ destroying the overshoot \[Fig. 7(a)\] and the undershoot \[Fig. 7(b)\], while the sub-Poissonian statistics survive until $`\gamma _{ph}=0.2`$ in Fig. 7(c). Our results for the transit dephasing are shown in Fig. 8. The overshoot in Fig. 8(a) is again extremely sensitive to dephasing, with classical statistics for $`\gamma _{ph}/\gamma =0.05`$, while the sub-Poissonian statistics in Fig. 8(b) are more robust, surviving up to $`\gamma _{ph}/\gamma =0.5`$.
## V Two atom effects
We now consider the effect of placing two atoms inside the cavity, either both at antinodes of the field, or with one of the atoms arbitrarily placed so that its coupling lies in the range $`0`$ to $`g_0`$. Dephasing is not considered in this section. In the experiments conducted on this system it is likely that there is some effect from 'spectator atoms' which are located away from an antinode of the field and so do not contribute to the nonclassical photon statistics. If there are enough of these atoms they may simply absorb light and emit it out of the cavity, thus effectively decreasing the quality of the cavity. As a first step towards understanding the effect of spectator atoms, we place one extra atom in the cavity with a coupling which is some fraction of the original atom's coupling. We use a quantum trajectory simulation to calculate the photon statistics. In Fig. 9 we plot $`g^{(2)}(\tau )`$ while allowing the coupling of the spectator atom ($`g_2`$) to vary from $`g_0/10`$ to $`g_0`$. We see that for all three sets of parameters the photon statistics vary continuously from the single-atom result to the two-atom result, with no qualitative deviation in the photon statistics. This suggests that spectator atoms can have an observable effect on the statistics but do not destroy the nonclassical correlations, at least close to the ideal condition of having a single atom in the cavity at a time. A group of many spectator atoms may be more detrimental to the nonclassical correlations. We now consider the case in which both atoms are placed at antinodes of the field. The photon statistics for this system have been solved for an arbitrary number of maximally coupled atoms using a set of symmetrized Dicke states to describe the atomic excitation . This would correspond to an experimental setup where it is not possible to tell which atom spontaneously emitted a given photon. However, there are experimental situations where the symmetrized states are not valid states with respect to spontaneous emission. Here, using quantum trajectories, we consider both a symmetrized collapse and an unsymmetrized collapse for spontaneous emission, as illustrated in the sketch below. The operator for the Dicke collapse is $$\widehat{C}_{Dicke}=\frac{1}{\sqrt{2}}\left(\sigma _{-}^1+\sigma _{-}^2\right)$$ (33) so that when a spontaneous emission event occurs, both atoms are collapsed symmetrically. For the non-Dicke collapse the atomic operators are used separately, so that one atom or the other collapses.
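The difference between the two collapses is easy to see on a toy state (our illustration; the field is omitted and the entangled two-atom amplitudes are hypothetical, not taken from the paper):

```python
import numpy as np

# Two two-level atoms in the product basis |a1 a2>, ordering {|g>, |e>}.
sm = np.array([[0., 1.], [0., 0.]])       # sigma_- = |g><e|
I2 = np.eye(2)
sm1 = np.kron(sm, I2)                     # sigma_-^1
sm2 = np.kron(I2, sm)                     # sigma_-^2
psi = np.array([0.2, 0.5, 0.6, 0.6])      # hypothetical entangled amplitudes
psi = psi / np.linalg.norm(psi)

dicke = (sm1 + sm2) @ psi / np.sqrt(2.0)  # symmetrized collapse, Eq. (33)
single = sm1 @ psi                        # non-Dicke collapse: atom 1 emits
for v in (dicke, single):                 # the normalized post-collapse states differ
    print(np.round(v / np.linalg.norm(v), 3))
```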
For a weak driving field we expect no difference between these types of collapse, because in this limit we only detect photons emitted from the steady state. However, for stronger driving fields we begin to detect photons emitted from the collapsed state, and the two collapses give different collapsed states ($`\sigma _{-}^1|\psi _{ss}\rangle `$ or $`\sigma _{-}^2|\psi _{ss}\rangle `$ versus $`\widehat{C}_{Dicke}|\psi _{ss}\rangle `$) and, therefore, different photon statistics. In Fig. 10, $`g^{(2)}(\tau )`$ is plotted for the two types of collapse for a driving field of $`E=0.5`$. Figure 10(a) shows a significant difference, as the nonclassical statistics are completely gone for the non-symmetrized collapse. Figure 10(b) shows no dependence on the type of collapse. Figure 10(c) shows a mild dependence on the type of collapse, with a slightly larger value of $`g^{(2)}(0)`$ for the non-symmetrized collapse.
## VI Conclusion
We have investigated extensions to previous theoretical work on a driven atoms/cavity system with dissipation. We have calculated the normalized second order correlation function for the transmitted light, including the effects of arbitrary driving fields, nonradiative dephasing, and arbitrary coupling strength for multiple atoms. We have found that nonclassical field states are easily destroyed by deviations from the weak field limit and by nonradiative dephasing, modeled both as collisional dephasing and as atomic transit dephasing. We have also found that allowing two atoms in the cavity with different atom/field coupling strengths does not have a detrimental effect on the nonclassical field. The experiments which have been done on this system have not really been in the weak field limit. As $`E\rightarrow 0`$ the number of counts also goes to zero, so it is difficult to carry out experiments in this regime, but this work suggests that it is important.
## Acknowledgments
The authors would like to thank Howard J. Carmichael, Luis Orozco, and Greg Foster for many useful conversations.
no-problem/9907/cond-mat9907455.html
ar5iv
text
# Computer investigation of the energy landscape of amorphous silica
## Abstract
The multidimensional topography of the collective potential energy function of a so-called strong glass former (silica) is analyzed by means of classical molecular dynamics calculations. Features qualitatively similar to those of fragile glasses are recovered at high temperatures: in particular, an intrinsic characteristic temperature $`T_c\simeq 3500`$ K is evidenced, above which the system starts to investigate anharmonic potential energy basins. It is shown that the anharmonicities are essentially characterized by a roughness appearing in the potential energy valleys explored by the system at temperatures above $`T_c`$. Even though the manufacturing of glasses by a quench from the high temperature liquid phase is standard practice, a precise understanding of what happens at the molecular level in these materials, when the temperature is lowered through the "glass transition", remains an important theoretical challenge. Several approaches have emerged in the literature depending on whether the structural or the dynamical aspects are emphasized. In particular, the use of the concept of topological frustration and the use of the mode-coupling theory proceed from completely different points of view. A promising way to reconcile these approaches is the analysis of the shape of the potential energy function $`\mathrm{\Phi }(\stackrel{}{r_1},\stackrel{}{r_2},\mathrm{})`$ in the multi-dimensional space of the coordinates of the interacting particles . It has recently been shown that the topological analysis of the energy landscape of a fragile glass-forming liquid described by a two-body Lennard-Jones (LJ) potential makes it possible to explain the distinct dynamical regimes observed experimentally. In this letter we present for the first time such an energy landscape analysis for a so-called strong glass former, namely vitreous silica, by means of classical molecular-dynamics (MD) calculations. As in the case of fragile glasses, from a quantitative analysis of the inherent potential energy basins explored at various temperatures one can define an intrinsic characteristic temperature, here equal to $`T_c\simeq 3500`$ K, above which the system experiences anharmonicities. These findings are in agreement with a recent simulation study in which a change in the dynamical properties of a similar liquid silica system was observed around this temperature , suggesting that supercooled liquid silica behaves like a fragile glass former at temperatures above $`T_c`$. Here we show that the anharmonicities exhibited by the energy basins explored at temperatures above $`T_c`$ are essentially due to a roughness in the shape of the basins, as previously observed in simpler liquid systems. Using an original procedure, we perform a quantitative analysis of this roughness and show that it is characterized by a typical length (in the multidimensional configuration space) of about 0.2 Å, which is the signature of the annihilation of structural defects along the path down the potential energy valleys. Our silica system consists of 216 silicon and 432 oxygen atoms confined in a cubic box of edge length $`L=21.48`$ Å, which corresponds to a mass density very close to the experimental value for vitreous silica. The constant energy, constant volume classical MD calculations have been performed using the sophisticated potential first introduced by van Beest et al. and justified by ab initio calculations.
This potential has recently been shown to describe quite well the structural and vibrational, as well as the relaxational and thermal, properties of both supercooled viscous liquid and glassy silica. We have used the same input parameters and the same modus operandi as in our previous structural study . After a full equilibration of the liquid (about 28 ps) the system has been cooled down to zero temperature at a quench rate of $`2.3\times 10^{14}`$ K s<sup>-1</sup>, obtained by removing the corresponding amount of energy from the total energy of the system at each iteration. At several temperatures during the quenching process the configurations (positions and velocities) have been saved. Using these configurations as inputs to the MD calculations, the system has then been allowed to relax for a maximum duration of 84 ps after the quench (during this constant-volume relaxation period the temperature of the system increases only slightly). The same procedure has been repeated for ten different initial liquid configurations, and consequently all the physical quantities reported below result from an average over these ten independent samples. For each collected temperature, immediately after the quench or after a given relaxation time (42 or 84 ps), the typical potential energy basin sampled by the system has been investigated using a procedure described by Della Valle and Andersen, which allows one to perform a steepest descent from the initial multicomponent-space configuration down to the closest underlying potential energy minimum, often called the "inherent structure" in the literature . For that purpose a modified version of the MD algorithm is adopted. At each MD step, the scalar product of the velocity with the force is calculated for each particle. If the product is positive, the velocity of the particle is replaced by its projection on the force; otherwise the velocity is set to zero. During the descent, both the distance and the potential energy $`\mathrm{\Phi }`$ are calculated, allowing a precise determination of the shape of the energy basin. There are different ways to define the distance per particle in the multiparticle space. We have chosen to calculate either the direct distance $`x`$ from the initial configuration, defined by: $$x=\sqrt{\frac{\sum _im_i(\vec{r}_i-\vec{r}_i^{(0)})^2}{\sum _im_i}}$$ where the sum runs over all the particles located at $`\vec{r}_i`$ with mass $`m_i`$, or the cumulated distance $`X`$, calculated along the steepest-descent path by: $$X=\sum _n\sqrt{\frac{\sum _im_i(\vec{r}_i^{(n)}-\vec{r}_i^{(n-1)})^2}{\sum _im_i}}$$ where $`n`$ labels the MD steps. The process is stopped when $`\mathrm{\Phi }`$, which is a monotonically decreasing function of $`x`$ (resp. $`X`$), reaches a minimum $`\mathrm{\Phi }_m`$ at $`x=x_m`$ (resp. $`X=X_m`$). Practically, we have chosen to stop the descent when $`\mathrm{\Phi }^{(n)}-\mathrm{\Phi }^{(n+1)}<10^{-6}`$ eV.
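A minimal sketch of one step of this quenched dynamics is given below (our illustration); the force routine is a placeholder standing in for the BKS force field used here, and the array layout is our assumption.

```python
import numpy as np

# One step of the Della Valle-Andersen steepest descent: keep only the
# velocity component along the force when v.f > 0, zero it otherwise.
# r, v: (N, 3) positions/velocities; forces(r) -> (N, 3); masses: (N,).
def quench_step(r, v, forces, masses, dt):
    f = forces(r)
    vdotf = np.sum(v * f, axis=1)
    ffsq = np.sum(f * f, axis=1)
    along = (vdotf / np.where(ffsq > 0.0, ffsq, 1.0))[:, None] * f
    v = np.where((vdotf > 0.0)[:, None], along, 0.0)
    r = r + v * dt + 0.5 * (f / masses[:, None]) * dt**2
    return r, v
```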
The slowing down of the decrease of $`\mathrm{\Phi }_m`$ with decreasing temperature, occurring between 4000 K and 3000 K, is the signature of the glass transition, as the system gets trapped after the quench in energy basins with almost the same minimum below a given temperature $`T_g`$. This temperature is consistent with the estimate $`T_g\simeq 3500`$ K obtained from a structural analysis done for the same system with the same quenching rate, and also with the extrapolated value obtained in a different study concerning the influence of the quenching rate on the properties of the system. Moreover, in agreement with the study done on the LJ glass, one observes in Fig. 1 that the inherent structure depends on the duration of the relaxation process after the quench. The relaxed curves exhibit a minimum around $`T_g`$ which becomes more and more marked as the aging time increases. This can be easily understood, since for temperatures close to $`T_g`$ the system takes advantage of the relaxation process in order to find lower energy minima, because it has enough kinetic energy to have a chance to cross the energy barriers between minima. Of course this chance becomes considerably smaller at lower temperatures, in agreement with what is known from the thermal evolution of the relaxation time in the glassy phase of silica (note that in earlier work the energy per particle at $`T=0`$ K was found to depend on the cooling rate, while we do not observe significant relaxation effects at $`T=0`$ K). These results are consistent with a previous analysis of history effects in the same system and also with the fact that $`T_g`$ should be smaller for a lower quenching rate. If we take into account both the 15 ps necessary to reach 3500 K from 7000 K and the 84 ps of further relaxation at 3500 K, we obtain an effective quenching rate about six times smaller than the one we have used. According to the dependence of $`T_g`$ on the quenching rate proposed by Vollmayr et al., this would correspond to a decrease of $`T_g`$ by more than 500 K. In Fig. 2 we have plotted, as a function of temperature, $`x_m^2`$, the square of the distance between a given initial configuration and the corresponding position of the inherent structure. This curve is remarkably similar to what has been previously obtained for Lennard-Jones glasses and can be interpreted in the same manner. The squared distance $`x_m^2`$ is linear in temperature up to a characteristic temperature $`T_c\simeq 3500`$ K (here very close to $`T_g`$) above which anharmonicities appear. The same analysis as done previously shows that $`T_c`$ corresponds to a change in the nature of the relaxation process, i.e. from diffusion above $`T_c`$ to hopping below $`T_c`$. This is consistent with the recent work of Horbach and Kob on the same system, who found a breakdown of the Arrhenius behavior of the transport coefficients above a temperature close to 3300 K and suggested that this characteristic temperature corresponds to the $`T_c`$ predicted by the mode coupling theory. A more striking result is that the curves in Fig. 2, in contrast with those in Fig. 1, are not very sensitive to the duration of the relaxation process after the quench. This is also true for another characteristic quantity of the energy basins leading to the inherent structures, namely $`\mathrm{\Delta }\mathrm{\Phi }=\mathrm{\Phi }^{(0)}-\mathrm{\Phi }_m`$, the energy difference between the initial structure and the inherent structure, which has been plotted versus $`T`$ in the inset of Fig. 2.
It exhibits the same departure from the expected harmonic linear regime ($`\mathrm{\Delta }\mathrm{\Phi }=\frac{3}{2}k_BT`$, represented by the dashed line in the inset) at $`T_c`$, and also the same remarkable stability against relaxation. This shows that, as the aging time increases, the system explores deeper and deeper basins (especially around $`T_g`$), as shown by the variation of $`\mathrm{\Phi }_m`$ in Fig. 1, but the intrinsic characteristics of these basins, the mean width and the mean height (associated with $`x_m^2`$ and $`\mathrm{\Phi }^{(0)}-\mathrm{\Phi }_m`$ respectively), do not depend on their absolute position $`\mathrm{\Phi }_m`$ in the energy scale. Therefore the characteristic temperature that we can define here, $`T_c\simeq 3500`$ K, is intrinsic and does not depend on the quenching rate. The fact that we obtain $`T_g`$ close to $`T_c`$ is simply due to our very large quenching rate. It is worth noticing that the same qualitative behavior as in Fig. 2 is also obtained by plotting $`X_m^2`$, the square of the cumulated distance, instead of $`x_m^2`$. In that case the departure from a linear regime above $`T_c`$ is even more pronounced, as we find that the ratio $`X_m/x_m`$ remains constant (of order 1.2) for $`T<T_c`$ and increases markedly with temperature for $`T>T_c`$. To investigate further the nature of the anharmonicities of the energy basins explored by the system for temperatures above $`T_c`$, we propose a quantitative analysis of the curves $`F(X)=-d\mathrm{\Phi }/dX`$, where $`X`$ is the cumulated distance defined above (we have used the derivative with respect to $`X`$ instead of $`x`$ because it corresponds better to a local characteristic of the shape of the basins). Typical examples of such curves are given in Fig. 3 for samples relaxed during 84 ps after the quench (as already shown in Fig. 2, the aging time does not significantly influence the numerical results). It turns out that for $`T>T_c`$ the $`\mathrm{\Phi }(X)`$ curves, while always decreasing, exhibit step-like singularities evidenced by clearly visible peaks in the derivative. Note that another manifestation of what can be called a roughness is the increase with temperature of the ratio $`X_m/x_m`$ above $`T_c`$, mentioned earlier (the same observation has already been made for LJ glasses). To analyze quantitatively the roughness of the $`F(X)`$ curves, we have first eliminated the overall mean evolution by calculating the difference $`f(X)=F(X)-<F(X)>`$, where $`<F(X)>`$ is a local average of the data between $`X-\delta X`$ and $`X+\delta X`$ (for convenience we have chosen $`\delta X=X_m/20`$ and limited the range of $`X`$ values between $`4\delta X`$ and $`X_m-\delta X`$). The $`f(X)`$ curves corresponding to the $`F(X)`$ curves depicted in Fig. 3 are shown in the inset of the figure (they have been artificially shifted in the vertical direction for clarity). Subsequently the roughness of the curves $`f(X)`$ has been analyzed by following a standard method which consists in calculating the power spectrum $`S(k)`$, defined as the Fourier transform of the autocorrelation product $`g(\xi )`$ = $`<f(X+\xi )f(X)>`$, where the average is performed not only over the $`X`$ values but also over the ten independent samples. The results of this analysis are reported in Fig. 4. In this figure we have reported, as a function of temperature, the mean intensity of the peaks measured by the standard deviation $`\sigma _f`$ of the $`f(X)`$ curves, which is equal to the square root of $`g(0)`$.
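The detrending and spectral analysis just described can be sketched as follows, assuming F(X) has been resampled on a uniform grid in X; the window δX = X_m/20 and the restricted X range follow the text, while the FFT shortcut to S(k) (via the Wiener-Khinchin theorem) is our implementation choice.

```python
import numpy as np

def roughness_spectrum(X, F):
    """Return (sigma_f, k, S): the rms of the detrended f(X) (= sqrt(g(0)))
    and the power spectrum S(k) of f, for F sampled uniformly in X."""
    X_m = X[-1]
    dX = X_m / 20.0
    # local average <F(X)> over [X - dX, X + dX]
    mean = np.array([F[(X >= xi - dX) & (X <= xi + dX)].mean() for xi in X])
    f = F - mean
    keep = (X >= 4.0 * dX) & (X <= X_m - dX)       # range used in the text
    f = f[keep] - f[keep].mean()
    sigma_f = f.std()
    S = np.abs(np.fft.rfft(f))**2 / f.size          # Wiener-Khinchin shortcut
    k = np.fft.rfftfreq(f.size, d=X[1] - X[0])
    return sigma_f, k, S
```

The typical length X_r is then read off as the inverse of the k at which S(k) peaks; averaging S over the ten samples before locating the maximum mirrors the procedure in the text.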
Despite the error bars, the curve exhibits a characteristic sigmoidal shape, indicating that the roughness only exists in the higher energy basins explored for $`T>T_c`$ and that the mean intensity of the peaks seems to saturate at very high temperatures. Furthermore, for $`T>T_c`$, we observe that the power spectrum goes through a maximum and decays like $`k^{-1}`$ after this maximum. This means that there exists a typical length (the inverse of the location of the maximum), $`X_r`$, characteristic of the mean distance between successive peaks, while the curve between two peaks can practically be considered as “smooth”. For $`T<T_c`$ there is no visible maximum in the power spectrum, which behaves roughly as $`k^{-1}`$ over the whole $`k`$ range. This is a further indication that the very weak roughness in the basins explored below $`T_c`$ has no significance and that the low-temperature basins can be considered as smooth. The estimated values for $`X_r`$ above $`T_c`$ have been reported in Fig. 4. This typical length increases slightly with $`T`$, from about 0.15 Å at 4000 K to about 0.23 Å at 7000 K. It is interesting to relate the typical distance $`X_r`$ in the multidimensional configuration space to specific structural rearrangements occurring during the downhill potential energy minimization process. On some specific high-temperature samples we have compared the configurations between two successive peaks in $`F(X)`$, and we have in each case observed the elimination of a single specific defect such as a triconnected silicon atom or edge-sharing tetrahedra. In such rearrangements a “perturbed” cluster of about 30 to 50 connected atoms is observed. Generally the largest displacement, of about 0.5 to 0.7 Å, is observed for an oxygen atom at (or very close to) the defect. Therefore the typical value of $`X_r`$ results from an average between the largest displacement near the defect and the “screening cloud” of displaced atoms connected to it. One can understand that $`X_r`$ becomes insignificant below $`T_c`$, because the defects become rare in the low-temperature basins, as shown previously. In conclusion, we have numerically investigated the potential energy landscape of supercooled liquid silica described by the BKS potential using a steepest-descent molecular-dynamics scheme. We have shown that the inherent structures sampled depend strongly on the effective cooling rate, especially around $`T_g`$, similarly to what was found in a Lennard-Jones glass. Nevertheless, at a given temperature the characteristics of the energy basins (mean height and mean width) seem to be insensitive to the history of the system. From a quantitative analysis of the potential energy valleys explored at various temperatures we have identified a characteristic temperature $`T_c`$ above which non-harmonic effects become dominant. This is consistent with another recent study and indicates that strong and fragile glass formers are quite similar when studied near $`T_c`$, where a change in the nature of the relaxation process takes place in the liquid phase. We think that the distinct characteristics of a strong glass former appear mainly in the supercooled liquid and glassy phases below $`T_c`$. Unfortunately the high value of $`T_g`$ that we obtain in our MD simulations does not allow us to study the temperature range between $`T_g`$ and $`T_c`$.
Furthermore, using an original quantitative analysis, we have shown that the anharmonic character of the higher energy valleys explored by the system above $`T_c`$ is due to a roughness occurring in the shape of the potential energy basins. The existence of such roughness has already been invoked in the case of simpler systems but had not been quantitatively analyzed. In the case of silica we have shown that this roughness is characterized by a typical length of about 0.2 Å in the multi-dimensional configuration space, and can be associated with a sequential elimination of defects when following the path leading down to the inherent structures. Part of the numerical calculations was done at CNUSC (Centre National Universitaire Sud de Calcul), Montpellier.
no-problem/9907/astro-ph9907417.html
ar5iv
text
# Formation of Elliptical and S0 Galaxies by Close Encounters ## 1 Introduction Most massive elliptical galaxies can be divided into two groups with different physical properties. Bright ellipticals are slow anisotropic rotators, have boxy distorted isophotes, are radio-loud, and are surrounded by gaseous X-ray halos (Bender et al., 1989). Faint elliptical galaxies are preferentially oblate isotropic rotators with disky isophotes. In contrast to boxy ellipticals they are radio-quiet and show no X-ray emission in excess of their discrete-source contribution. Gravitational N-body simulations, starting with the work by Toomre & Toomre (1972, 1977) using a restricted three-body approximation, and continued by others (e.g. Barnes, 1988) using a hierarchical tree algorithm, have shown that mergers of equal-mass disk galaxies can produce slowly-rotating remnants with a de Vaucouleurs-like surface-brightness profile and disky or boxy isophotes depending on the viewing angle (Heyl et al., 1994). Merger remnants are generally believed to have properties resembling those of observed boxy elliptical galaxies, such as slow figure rotation and kinematically distinct cores, so the question of how disky and fast-rotating ellipticals formed is still unanswered. Here we study a possible formation mechanism for disky elliptical galaxies in an encounter scenario: a massive galaxy, represented by an approximate model, perturbs an equal-mass disk galaxy and flies away. We analyze the triaxial shape, the rotation properties and the isophotal shape of the remnant. ### 1.1 The model The initial conditions for our simulations were derived following Hernquist (1993), as described briefly below. The N-body model consists of an exponential disk surrounded by a dark halo. The thickness of the disk is determined by the velocity dispersion, which is a function of radius. The halo is initially spherical and has an isothermal density profile with a core and a cutoff radius to reduce the computational costs. Velocities are initialized by taking moments of the collisionless Boltzmann equation and approximating the distribution function in phase space by Gaussians. This produces stable models that are nearly in equilibrium. The Toomre Q-parameter is normalized to the value of 1.5 at a radius corresponding to the solar neighborhood. The mass and size of the disk are scaled to physical values of the Milky Way, i.e. scale length $`h=3.5`$ kpc and disk mass $`M_d=5.6\times 10^{10}\mathrm{M}_{\odot }`$. For the perturbing galaxy, we used an equal-mass particle realization of a profile approximating a dark halo. At the beginning of the simulation the perturber is on a prograde hyperbolic orbit with an impact parameter of 63 kpc and a relative velocity of $`343`$ km/s. The simulation was performed with a direct N-body code using the special hardware device GRAPE (Sugimoto et al., 1990). We followed the simulation for 6 Gyr. The simulation presented here has 50,000 disk particles, 100,000 halo particles, and 20,000 particles representing the perturber. ### 1.2 Results Figure 1 shows a sequence of snapshots from our simulation. The encounter induces a strong bar in the disk, although the disk is stable against bar formation if simulated in isolation. This bar becomes unstable to bending oscillations (Raha et al. 1991; Merritt & Sellwood, 1994) due to an increase of the velocity dispersion in the radial direction. As a result of this instability the initially disklike system becomes spheroidal.
To estimate the three-dimensional shape of the remnant we computed the axis ratios of the distribution of disk particles from the eigenvalues of the moment-of-inertia tensor. For the 60% most tightly bound particles the triaxiality parameter $`T\equiv (a^2-b^2)/(a^2-c^2)`$ is 0.88. The effective Hubble types for projections along the three principal axes are E5.7, E7.3 and E4, respectively. The surface density follows a de Vaucouleurs-like profile over a large radial interval and is comparable to the remnants of merger simulations. After the strong rotating bar has formed, its dynamical friction with the live halo component leads to an effective transport of angular momentum to the halo. The rotation velocity in the inner parts decreases rapidly. Figure 2 shows the rotation properties of the system after 6 Gyr. We have plotted $`v_m/\sigma `$ ($`v_m`$: maximum rotation velocity, $`\sigma `$: central velocity dispersion) against the ellipticity. The points show the values for the same remnant in different projections. One can see that some projections follow the line for oblate rotators, but we also have anisotropic remnants and those with very high rotation velocities at low ellipticities. This effect can be influenced by the determination of the ellipticity; some observers see the same effect in their data (see Nieto et al., 1988). We also investigated the deviations of the isophotes from pure ellipses, applying the method described by Bender et al. (1988) after binning the particle distribution and convolving it with a Gaussian with a FWHM comparable to the seeing conditions of the observations. We find that the remnant has disky isophotes (positive $`a4/a`$) for almost all projections. The value of $`a4`$ is higher for projections with higher ellipticity (Figure 2). With increasing radius the $`a4`$ coefficient shows basically two global features: either a disky inner part changing to boxy in the outer parts, or a continuously rising positive $`a4`$. Those features could be explained, as suggested before (Nieto et al., 1991), by a faint disk surrounded by a spheroidal component or, in the case of rising $`a4`$, by tidal extensions of a round inner part. In our simulation the shape of the profile depends only on the projection angle. Future simulations will show how sensitive our results are to the resolution and to different initial conditions.
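The shape measurement described above amounts to diagonalising the second-moment (inertia) tensor of the selected particles. A minimal sketch, assuming the 60% most tightly bound particles have already been selected:

```python
import numpy as np

def triaxiality(pos, masses):
    """Axis ratios and T = (a^2 - b^2)/(a^2 - c^2) from the second-moment
    tensor of a particle distribution (eigenvalues ~ a^2 >= b^2 >= c^2)."""
    r = pos - np.average(pos, axis=0, weights=masses)
    I = np.einsum('i,ij,ik->jk', masses, r, r)      # I_jk = sum_i m_i r_ij r_ik
    a2, b2, c2 = np.sort(np.linalg.eigvalsh(I))[::-1]
    return (a2 - b2) / (a2 - c2), np.sqrt([1.0, b2 / a2, c2 / a2])
```

T near 1 indicates a prolate figure and T near 0 an oblate one; the value 0.88 quoted above therefore corresponds to a strongly prolate remnant.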
no-problem/9907/nucl-th9907025.html
ar5iv
text
# Hydrodynamic simulation of elliptic flow (This work was supported by BMBF, DFG and GSI.) ## 1 Hydrodynamic model with longitudinal boost invariance The transverse expansion dynamics in non-central heavy-ion collisions at SPS energies has recently attracted much attention. We here study it within the hydrodynamic model. In order to reduce the complexity of the numerical task we follow earlier work and implement analytically Bjorken scaling flow with $`v_z=z/t`$ in the longitudinal direction, solving only the transverse dynamics numerically. The Bjorken ansatz holds exactly at infinite beam energy, but properly restricted to a finite rapidity interval it is phenomenologically successful also at SPS and AGS energies. It breaks down, however, near target and projectile rapidities; using it we can therefore reliably compute the transverse expansion only near midrapidity. The system of hydrodynamic equations is closed by an equation of state (EOS) $`p(e,n)`$ giving the pressure as a function of energy and baryon density. Hydrodynamics thus provides a direct relation between the EOS and the dynamical evolution of the system. To study the dynamical effects of a softening of the EOS in the neighborhood of a phase transition to quark-gluon plasma we use three different equations of state. EOS I is the hard equation of an ideal ultrarelativistic gas, $`p=e/3`$. EOS H is the much softer EOS for a gas of interacting hadron resonances; for $`n=0`$ it satisfies $`p\approx 0.15e`$. A Maxwell construction between these two EOS, adding a bag pressure $`B^{1/4}=230`$ MeV, results in EOS Q, which has a phase transition at $`T_{\mathrm{cr}}(n=0)=164`$ MeV with a latent heat of 1.15 GeV/fm<sup>3</sup>. The system is frozen out at a fixed decoupling temperature $`T_{\mathrm{dec}}`$, and all unstable resonances are allowed to decay before we compare with experimental data. ## 2 Space-time evolution of the reaction zone We initialize the reaction zone with transverse energy and baryon density profiles which are taken to be proportional to the transverse density of wounded nucleons calculated from the Glauber model. The initial configuration is thus parametrized by the equilibration time $`\tau _0`$ and the maximum energy and baryon densities $`e_0`$ and $`n_0`$ in central collisions. [Figure 1: Time evolution of the spatial eccentricity $`ϵ_x`$ (top), the momentum anisotropy $`ϵ_p`$ (middle), and the radial flow $`v_{\perp }`$ (bottom).] For each EOS these parameters and the decoupling temperature are fixed by a fit to the negative hadron and net proton $`m_t`$-spectra at midrapidity from central Pb + Pb collisions at 158 $`A`$GeV. The spectra for non-central collisions are then predicted without extra parameters. The hydrodynamic evolution provides the time-dependence of the matter in coordinate and momentum space. In non-central collisions, the initial spatial deformation of the reaction zone in the transverse plane, characterized by its spatial eccentricity $`ϵ_x=\frac{\langle y^2\rangle -\langle x^2\rangle }{\langle y^2\rangle +\langle x^2\rangle }`$, leads to anisotropic pressure gradients and a preferred buildup of transverse flow in the shorter $`x`$-direction. ($`x`$ lies in the collision plane, $`y`$ points orthogonal to it; $`\langle \mathrm{\dots }\rangle `$ denotes the energy density weighted spatial average at fixed time.) This leads to a growing flow anisotropy, characterized by $`ϵ_p=\frac{\langle T^{xx}\rangle -\langle T^{yy}\rangle }{\langle T^{xx}\rangle +\langle T^{yy}\rangle }`$.
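On a discretised transverse-plane snapshot both anisotropy measures reduce to weighted sums; a sketch, with the cell arrays (coordinates, energy density and stress-tensor components on a uniform grid) assumed to be outputs of the hydrodynamic code, and the ⟨…⟩ weighting taken from the definition above:

```python
import numpy as np

def anisotropies(x, y, e, Txx, Tyy):
    """Spatial eccentricity eps_x and momentum anisotropy eps_p at one time,
    with <...> the energy-density weighted average over the transverse plane."""
    w = e / e.sum()
    x2, y2 = np.sum(w * x**2), np.sum(w * y**2)
    eps_x = (y2 - x2) / (y2 + x2)
    Txx_m, Tyy_m = np.sum(w * Txx), np.sum(w * Tyy)
    eps_p = (Txx_m - Tyy_m) / (Txx_m + Tyy_m)
    return eps_x, eps_p
```

Evaluating this on successive snapshots yields the time evolution shown in Fig. 1.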
At freeze-out this hydrodynamic quantity is directly related to the elliptic flow $`v_2=\langle \mathrm{cos}(2\phi )\rangle `$, defined by an average over the final particle momentum spectrum; for pions $`ϵ_p\approx 2v_2`$. In contrast to $`v_2`$, $`ϵ_p`$ can be studied as a function of time and gives access to the buildup of elliptic flow. The developing stronger flow into the collision plane leads to a decrease of $`ϵ_x`$ with time; the buildup of elliptic flow thus slows down and eventually shuts itself off. This is clearly seen in the upper two panels of Fig. 1: $`ϵ_p`$ saturates when $`ϵ_x`$ passes through zero. For a hard EOS this happens faster than for a soft one; also, the total amount of elliptic flow which can be generated by a given EOS increases with its hardness $`c_s^2=\frac{\partial p}{\partial e}`$. Fig. 1 was computed for 158 $`A`$ GeV Pb+Pb collisions at $`b=8`$ fm. For EOS Q the vertical lines indicate when the center of the reaction zone goes from plasma to mixed phase and from mixed to hadron phase, respectively. One sees that 1/6 of the final elliptic flow is generated before the pure QGP phase disappears, 1/2 in the mixed phase, and about 1/3 in the hadronic phase. The stars indicate the freeze-out point ($`T_{\mathrm{dec}}=120`$ MeV). For EOS I the system freezes out before the elliptic flow is fully developed; for EOS H and EOS Q the opposite is true. The radial flow, characterized by $`v_{\perp }=\langle \gamma \sqrt{v_x^2+v_y^2}\rangle /\langle \gamma \rangle `$ (where $`\gamma `$ is the Lorentz factor), does not saturate: Fig. 1 shows that it continues to grow monotonously until freeze-out, even after the elliptic flow has saturated, due to the continued presence of essentially azimuthally symmetric radial pressure gradients. Note the important role of the EOS: a softer EOS, especially its softening near a phase transition, delays the buildup of both radial and elliptic flow. It also reduces the maximally achievable value of the latter. At low energies, freeze-out (driven by cooling and expansion due to radial flow) happens before the elliptic flow has fully developed. To achieve fully developed elliptic flow, lower beam energies are required for softer equations of state. This reflects both the lower saturation value of $`ϵ_p`$ for the softer EOS and the slower buildup of radial flow, resulting in more available time until freeze-out. ## 3 Transverse mass spectra and applicability of hydrodynamics Collective flow affects the measurable momentum spectra of the produced particles. We showed in earlier work that our model is able to describe the measured asymmetries of the particle spectra; the calculated elliptic flow $`v_2=\langle \mathrm{cos}(2\phi )\rangle `$ at midrapidity agrees well with the published data. Here we discuss the impact parameter dependence of the azimuthally integrated transverse mass spectra and show that, once tuned to central collision data, hydrodynamics successfully reproduces the magnitude and shape of the spectra up to impact parameters of about 8-10 fm. Figs. 2 and 3 show preliminary data on transverse mass spectra of neutral pions (WA98 Collaboration) and net protons (CERES Collaboration) from 158 $`A`$ GeV Pb+Pb collisions of varying centrality. The lines indicate our hydrodynamical results at midrapidity, obtained with EOS Q and initial conditions tuned to central collisions as described above. The WA98 data extend to very peripheral collisions: the lowest spectrum in Fig. 2 corresponds to $`b`$=13 fm, where hydrodynamics certainly loses its applicability.
At such large impact parameters one does not observe the collision of two nuclei, but rather two dilute nucleon clouds penetrating each other. At smaller impact parameters the model fails in the high-$`m_t`$ region; here hard scattering processes, which cannot be modeled hydrodynamically, begin to dominate. Up to $`b\simeq 10`$ fm (bin 6 represents impact parameters up to 11 fm) and transverse masses of about 2 GeV, however, hydrodynamics works very well, both for the pion and the net proton spectra. (For the latter the CERES data in Fig. 3 do not extend to very peripheral collisions, the largest measured impact parameters corresponding to about 8.4 fm.) ## 4 Summary We have demonstrated the interplay between spatial eccentricity as the driving force for generating momentum-space asymmetries and the back-reaction of the latter on the former. Comparison with measured spectra from Pb+Pb collisions with varying impact parameter showed that the hydrodynamical model successfully reproduces the data up to $`b`$=8-10 fm. The good quantitative agreement between data and model suggests rather rapid thermalization in the reaction zone. If final data confirm that the elliptic flow is essentially saturated in Pb+Pb collisions at the SPS, this would provide strong evidence for very early pressure in the system. In our calculations a large fraction of the finally observed elliptic flow is generated while the energy density exceeds the critical value $`e_{\mathrm{cr}}=1`$ GeV/fm<sup>3</sup> for deconfinement. This confirms the suggestion that elliptic flow is a probe for the early collision stage. We thank Th. Peitzmann (WA98) and F. Ceretto (CERES) for sending us their preliminary data prior to publication. P.K. wishes to express his gratitude to the CERN Summer Student Programme and thanks T. Peeter's group for their warm hospitality.
no-problem/9907/astro-ph9907360.html
ar5iv
text
# Kilohertz QPOs in Neutron Star Binaries modeled as Keplerian Oscillations in a Rotating Frame of Reference ## 1 Introduction Kilohertz quasi-periodic oscillations (KQPO) have been discovered by the Rossi X-ray Timing Explorer (RXTE) in a number of low-mass X-ray binaries (Strohmayer et al. 1996). The existence of two observed peaks with frequencies $`\omega _K`$ and $`\omega _h`$ in the upper part of the QPO spectrum became a natural starting point in modeling the phenomenon. Attempts have been made to relate $`\omega _K`$, $`\omega _h`$ and the peak difference frequency $`\mathrm{\Delta }\omega =\omega _h-\omega _K`$ with the neutron star spin and possible Keplerian motion of the hot matter surrounding the star \[see the origin of Keplerian motion discussed in Titarchuk, Lapidus & Muslimov 1998 (hereafter TLM) and Titarchuk & Osherovich 1999\]. In the beat frequency model, the burst oscillations and the kHz peak separation $`\mathrm{\Delta }\omega `$ are both considered to be close to the neutron star spin frequency, and thus $`\mathrm{\Delta }\omega `$ is predicted to be constant (see, e.g., the review by van der Klis 1998). However, recent observations of kHz QPOs in Sco X-1, 4U 1608-52 and 4U 1702-429 showed that $`\mathrm{\Delta }\omega `$ decreases systematically when $`\omega _K`$ and $`\omega _h`$ increase (van der Klis et al. 1997; Mendez et al. 1998; Markwardt, Strohmayer & Swank 1999). Indications of such a decrease were found by Wijnands et al. (1998) and Ford et al. (1998) for 4U 1735-44 as well. Psaltis et al. (1998) showed that the measured $`\mathrm{\Delta }\omega `$ in all other known sources is consistent with the values found in Sco X-1 and 4U 1608-52. These sources have become standards for comparison between the kHz QPO phenomena in different sources because of the high accuracy of the measurements and the wide range of values of $`\omega _K`$ and $`\omega _h`$ available. For Sco X-1 and 4U 1608-52, $`\mathrm{\Delta }\omega /2\pi `$ changes from $`\sim 320`$ Hz to $`\sim 220`$ Hz when $`\omega _K/2\pi `$ changes from $`\sim 500`$ Hz to $`\sim 850`$ Hz. In the lower part of the Sco X-1 spectra, van der Klis et al. (1997) found two frequencies, $`\sim 45`$ Hz (referred to below as $`\omega _L/2\pi `$) and $`\sim 90`$ Hz, which also slowly increase with the increase of $`\omega _K`$ and $`\omega _h`$; presumably 90 Hz is the second harmonic of $`\omega _L/2\pi `$. Any consistent model faces the challenging task of describing the dependence of $`\mathrm{\Delta }\omega `$ and $`\omega _L`$ on $`\omega _K`$ and $`\omega _h`$ for the available sources. In the beat-frequency model (BFM) considered by Miller, Lamb & Psaltis (1998), the kHz QPOs are identified with the orbital frequency at the sonic radius and its beat with the neutron star spin, and the lower frequency of horizontal branch oscillations (HBO) in Z-sources with the beat of the orbital motion at the magnetospheric radius with the neutron star spin. In the general-relativistic (GR) precession/apsidal motion model discussed by Stella and Vietri (1998, 1999), the kHz peaks are due to orbital motion and GR apsidal motion of a slightly eccentric orbit, and the low-frequency QPO to the Lense-Thirring precession of the same orbit. Titarchuk and Muslimov (1997) suggested that a set of peaks seen in the kHz QPOs can be naturally explained in terms of the effect of rotational splitting of the main oscillation frequency in the disk. The type of peak (eigenmode) depends on the mode numbers and a parameter describing the disk structure.
In TLM the authors argue that this effect of rotational splitting originates in a disk region identified as the centrifugal barrier (CB), which oscillates in the vertical and radial directions. The size oscillations of the CB disk region and the reprocessing of the disk photons there (due to Comptonization) lead to an oscillating signal. In this Letter, we take a different approach and suggest that a kHz QPO can be explained as oscillations of large-scale inhomogeneities (hot blobs) thrown into the neutron star's magnetosphere. Participating in the radial oscillations with Keplerian frequency $`\omega _K`$, such blobs are simultaneously under the influence of the Coriolis force. The present model is very different from the TLM one, where the site of the rotational splitting was associated with the CB disk region. Thus, studying the Keplerian oscillator in the rotating frame of reference (the magnetosphere), we attempt to relate $`\mathrm{\Delta }\omega `$ and $`\omega _L`$ to $`\omega _K`$ and $`\omega _h`$. In our model, we compare the results of linear theory with the observations for three sources. We stress the fundamental role of the Keplerian frequency (which we identify with $`\omega _K`$) and the upper hybrid frequency relation between $`\omega _K`$ and $`\omega _h`$. In this first Letter, we depict the main features of the kHz QPO phenomena. The discussion identifies some limitations of our approach and suggests further directions of the research we shall pursue. ## 2 Keplerian Oscillator Under the Influence of the Coriolis Force We assume that the interplay between the centrifugal, gravitational and magnetic forces maintains radial oscillations (along the $`x`$ axis) with the Keplerian frequency $$\omega _K=\left(\frac{GM}{R^3}\right)^{1/2}$$ (1) where G is the gravitational constant, M is the mass of the compact object and R is the radius of an orbit. Self-similar spheromak-type models of a magnetic atmosphere can serve as an example of exact time-dependent MHD solutions in which non-radial equilibrium is maintained by the magnetic force, while all motion is strictly radial (Farrugia et al. 1995 and references therein). For magnetospheric rotation with angular velocity $`𝛀`$ (not perpendicular to the plane of the equatorial disk), the small-amplitude oscillations of such a blob thrown into the magnetosphere are described by the equations (Landau and Lifshitz, 1960) $$\ddot{x}+\omega _K^2x-2\dot{y}\mathrm{\Omega }\mathrm{cos}\delta =0$$ (2) $$\ddot{y}+2\dot{x}\mathrm{\Omega }\mathrm{cos}\delta -2\dot{z}\mathrm{\Omega }\mathrm{sin}\delta =0$$ (3) $$\ddot{z}+2\dot{y}\mathrm{\Omega }\mathrm{sin}\delta =0$$ (4) where $`\delta `$ is the angle between $`𝛀`$ and the vector normal to the plane of radial oscillations, y is the azimuthal component of the displacement vector and z is the vertical component, perpendicular to the plane of Keplerian oscillations. The dispersion relation for the frequency $`\omega `$, $$\omega ^2[\omega ^4-(\omega _K^2+4\mathrm{\Omega }^2)\omega ^2+4\mathrm{\Omega }^2\omega _K^2\mathrm{sin}^2\delta ]=0$$ (5) besides the non-oscillating mode ($`\omega =0`$), describes two spectral branches (eigenmodes), high and low: $$(\omega ^2)_{1,2}=\frac{\omega _h^2\pm (\omega _h^4-16\mathrm{\Omega }^2\omega _K^2\mathrm{sin}^2\delta )^{1/2}}{2}$$ (6) where $$\omega _h=(\omega _K^2+4\mathrm{\Omega }^2)^{1/2}$$ (7) is the analog of the upper hybrid frequency $`f_{uh}`$ in plasma physics (Akhiezer et al., 1975; Benson 1977).
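The two oscillating branches of eqs (5)-(6) are easy to evaluate; in the sketch below the numerical values of ν_K, Ω/2π and δ are illustrative choices of ours, not fitted results, and the printed comparison anticipates the small-angle limits derived in the next passage.

```python
import numpy as np

def branch_frequencies(nu_K, nu_rot, delta):
    """Exact roots of eq. (6); inputs and outputs are ordinary frequencies."""
    wK2, W2 = (2 * np.pi * nu_K)**2, (2 * np.pi * nu_rot)**2
    wh2 = wK2 + 4 * W2                                    # eq. (7) squared
    disc = np.sqrt(wh2**2 - 16 * W2 * wK2 * np.sin(delta)**2)
    return (np.sqrt((wh2 + disc) / 2) / (2 * np.pi),
            np.sqrt((wh2 - disc) / 2) / (2 * np.pi))

nu_K, nu_rot, delta = 700.0, 345.0, np.radians(5.5)       # assumed values
nu_1, nu_2 = branch_frequencies(nu_K, nu_rot, delta)
nu_h = np.hypot(nu_K, 2 * nu_rot)                         # eq. (7)
print(nu_1, nu_h)   # high branch: close to the hybrid frequency
print(nu_2, 2 * nu_rot * (nu_K / nu_h) * np.sin(delta))   # low branch
```

For these assumed numbers the low branch comes out near 47 Hz, of the order of the ∼45 Hz feature discussed below.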
When the angle $`\delta `$ is small (as we believe is indeed the case here): $$\omega _1=\omega _h$$ (8) is related to the radial eigenmode and $$\omega _2\equiv \omega _L=2\mathrm{\Omega }(\omega _K/\omega _h)\mathrm{sin}\delta $$ (9) is related to the vertical eigenmode. For $`\delta \ll 1`$, corrections to $`\omega _1`$ are of second order in $`\delta `$, but for $`\omega _2`$ the angle-dependent term is the main term. ## 3 Comparison of the Model with Observations According to formula (7), $$\mathrm{\Omega }=\frac{(\omega _h^2-\omega _K^2)^{1/2}}{2}$$ (10) is the rotational frequency, which, for the highly conductive plasma in the case of corotation, should be approximately constant ($`\mathrm{\Omega }=\mathrm{\Omega }_0`$). Interpreting the two frequencies in the upper part of the kHz spectrum as $`\omega _K`$ and $`\omega _h`$ \[where $`\omega _K<\omega _h`$\] for Sco X-1, we plotted in Figure 1 $`\nu =\mathrm{\Omega }/2\pi `$ versus $`\nu _K=\omega _K/2\pi `$ (solid circles). Indeed, one may notice that to a first approximation $`\mathrm{\Omega }=\mathrm{\Omega }_0=const.`$ The slow (but systematic) variation of $`\mathrm{\Omega }/2\pi `$ is between 330 and 350 Hz, which should be compared with the variation by a factor of 1.5 of $`\mathrm{\Delta }\omega =\omega _h-\omega _K`$ for the same range of $`\omega _K`$. If the magnetosphere corotated with the neutron star (solid-body rotation), then our procedure would determine the spin rotation of the star. In fact, there is a differential rotation profile which depends on the magnetic field $`𝐁`$, which can be presented in terms of the Chandrasekhar potential $`A`$ (Chandrasekhar, 1956) $$𝐁=\frac{1}{R}\left(\frac{1}{R}\frac{\partial A}{\partial \mu },-\frac{1}{(1-\mu ^2)^{1/2}}\frac{\partial A}{\partial R},\frac{B^{}(A)}{(1-\mu ^2)^{1/2}}\right)$$ (11) where $`\mu =\mathrm{cos}\theta `$, $`\theta `$ is the colatitude in the spherical system of coordinates and $`B^{}(A)`$ is a function of $`A`$ only. For a combination of dipole, quadrupole and octupole fields $$A=A_0(1-\mu ^2)\left[\frac{1}{R}+\frac{q}{R^2}\mu -\frac{b}{R^3}(5\mu ^2+2)\right]$$ (12) where $`A_0`$, $`q`$ and $`b`$ are constants. For a highly conductive plasma, the MHD induction equation leads to the iso-rotation theorem of Ferraro (see Alfvén and Falthammar, 1963), which in our notation means that $`\mathrm{\Omega }`$ is a function of $`A`$ only. Following a recent suggestion of Osherovich and Gliner (1999), we assume that in the vicinity of a magnetic star $`\mathrm{\Omega }(A)`$ has a maximum and therefore can be presented as $$\mathrm{\Omega }(A)=\mathrm{\Omega }_0-a^2A^2$$ (13) where $`\mathrm{\Omega }_0`$ and $`a`$ are constants. Since in the equatorial plane ($`\mu =0`$), according to (12), the quadrupole term does not contribute to $`A`$, from formula (13) we find $$\mathrm{\Omega }(A)=\mathrm{\Omega }_0-\left(\frac{\alpha ^{}}{R}-\frac{\beta ^{}}{R^3}\right)^2$$ (14) where $`\alpha ^{}`$ and $`\beta ^{}`$ are positive constants. Expressing $`R`$ through the Keplerian frequency $`\nu _K=\omega _K/(2\pi )`$ from equation (1), we find that formula (14) leads to the following presentation for $`\mathrm{\Omega }`$: $$\mathrm{\Omega }(\nu _K)/2\pi =C_0-C_1\nu _K^{4/3}+C_2\nu _K^{8/3}-C_3\nu _K^4$$ (15) where $`C_2=2\sqrt{C_1C_3}`$ (not a free parameter, because the radial part of the expansion is the full square of a combination of $`\nu _K^{2/3}`$ and $`\nu _K^2`$ with new positive constants $`\alpha `$ and $`\beta `$). The fit for Sco X-1 (solid line in Figure 1) by formula (15) demonstrates the agreement of the proposed model with the data.
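A sketch of how such a constrained fit can be set up: parametrising the profile by (C_0, α, β) builds the constraint C_2 = 2√(C_1C_3) in automatically, since the radial part is then a full square by construction. The data arrays below are placeholders for the measured (ν_K, Ω/2π) points, not the published values.

```python
import numpy as np
from scipy.optimize import curve_fit

def omega_profile(nu_K, C0, alpha, beta):
    """Eq. (15) with the full-square constraint built in:
    Omega/2pi = C0 - (alpha*nu^(2/3) - beta*nu^2)^2, i.e.
    C1 = alpha^2, C2 = 2*alpha*beta, C3 = beta^2."""
    return C0 - (alpha * nu_K**(2.0 / 3.0) - beta * nu_K**2)**2

# placeholder data, as would be derived via eq. (10) from a kHz pair list
nu_K_data = np.array([550.0, 650.0, 750.0, 850.0])
Om_data = np.array([335.0, 341.0, 338.0, 331.0])
p0 = (345.0, 0.18, 2.8e-5)            # of order sqrt(C1), sqrt(C3) quoted below
(C0, alpha, beta), _ = curve_fit(omega_profile, nu_K_data, Om_data, p0=p0)
print(C0, alpha**2, 2 * alpha * beta, beta**2)   # -> C0, C1, C2, C3
```

An unconstrained four-parameter fit, as used in the text, simply replaces `omega_profile` by the plain expansion in ν_K^{4/3}, ν_K^{8/3} and ν_K^4; comparing the resulting C_2 with 2√(C_1C_3) is the internal-consistency test quoted below.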
Indeed, for Sco X-1, we find $`C_0=\mathrm{\Omega }_0/2\pi =345`$ Hz, $`C_1=\alpha ^2=3.29\times 10^{-2}`$ Hz<sup>-1/3</sup>, $`C_2=2\alpha \beta =1.017\times 10^{-5}`$ Hz<sup>-5/3</sup> and $`C_3=\beta ^2=7.76\times 10^{-10}`$ Hz<sup>-3</sup>, with $`\chi ^2=37.6`$ for 39 degrees of freedom. In our fitting procedure, $`C_0`$, $`C_1`$, $`C_2`$ and $`C_3`$ are treated as four free parameters, i.e., the relation between $`C_2`$ and the two constants $`C_1`$ and $`C_3`$ is not enforced. The fact that the derived $`C_2`$ indeed satisfies the relation $`C_2=2\sqrt{C_1C_3}`$ shows the internal consistency of the model. Three available data points for the 4U 1702-429 source are consistent with a constant value of the rotation frequency, $`\mathrm{\Omega }_0/2\pi =380`$ Hz, but the small number of data points does not allow us to recover the profile of $`\mathrm{\Omega }(\nu _K)`$. However, the fit for 4U 1608-52, shown in Figure 2, demonstrates that the excellent agreement between our model and observations for Sco X-1 is not accidental. For 4U 1608-52, we found $`C_0=\mathrm{\Omega }_0/2\pi =345`$ Hz, $`C_1=3.91\times 10^{-2}`$ Hz<sup>-1/3</sup>, $`C_2=1.23\times 10^{-5}`$ Hz<sup>-5/3</sup> and $`C_3=9.45\times 10^{-10}`$ Hz<sup>-3</sup>, with $`\chi ^2=12.02`$ for 13 degrees of freedom. Sco X-1 allows us to check the prediction of the model for $`\omega _L`$ \[formula (9)\]. Taking the profile $`\mathrm{\Omega }(\nu _K)`$ presented in Figure 1, we plot $`\nu _L=\omega _L/2\pi `$ in Figure 3. Choosing $`\delta =5.5^o`$, we found that our curves for $`\nu _L`$ and $`2\nu _L`$ fit the data for the observed frequencies of $`\sim 45`$ Hz and $`\sim 90`$ Hz, respectively. From these results, we conclude that the rotational axis of Sco X-1 and the normal to the plane of radial oscillations do not coincide but are separated by $`5.5^o`$. ## 4 Summary and Discussion The main thrust of this work is based on the hypothesis that two of the observed frequencies in the upper part of the kHz QPO spectrum are the Keplerian frequency ($`\omega _K`$) and the upper hybrid frequency ($`\omega _h`$) of the Keplerian oscillator in the frame of reference rotating with the angular frequency $`\mathrm{\Omega }`$. For three sources, we showed that $`\mathrm{\Omega }`$ calculated according to the formula $`\mathrm{\Omega }=\sqrt{\omega _h^2-\omega _K^2}/2`$ changes with $`\omega _K`$ significantly less than $`\mathrm{\Delta }\omega =\omega _h-\omega _K`$, suggesting approximately constant rotation of the star's magnetosphere. The detailed profile of $`\mathrm{\Omega }`$ is successfully modeled as a function of $`\nu _K`$ within the dipole-quadrupole-octupole approximation of the magnetic field for Sco X-1 and for 4U 1608-52. For Sco X-1, the derived $`\mathrm{\Omega }(\nu _K)`$ is used to compare predictions of the model for the lower branch frequency $`\omega _L/2\pi `$ with observations of $`\sim 45`$ Hz and $`\sim 90`$ Hz (presumably $`2\omega _L/2\pi `$). The last fit is done with only one unknown parameter, $`\delta `$ (the angle between $`𝛀`$ and the normal vector to the plane of Keplerian oscillations). A good fit for $`\omega _L`$ and $`2\omega _L`$ confirmed the suggested model and revealed the expectedly small angle $`\delta =5.5^o`$. The profile for $`\mathrm{\Omega }(\nu _K)`$ has been modeled strictly in the equatorial plane ($`\mu =0`$) for the ideally aligned rotator ($`\delta =0`$). With $`\delta \ne 0`$, the next approximation will reveal the asymmetry of the QPO spectrum due to the quadrupole term.
It has been shown (Osherovich, Tzur and Gliner, 1984) for the solar corona that the quadrupole term introduces North-South asymmetry in the makeup of the magnetic atmosphere surrounding a gravitating object. We shall pursue a similar study for the neutron star, with the expectation of finding a signature of such asymmetry in the kHz QPO. Recent measurements show that the octupole field is comparable to the global dipole field near the Sun, while the quadrupole field contributes 20–30% of the total field (Wang et al., 1997; Osherovich et al., 1999). In this Letter, we restrict the scope of the work to a model for the linear Keplerian oscillator. The presence of the second harmonic ($`\sim 90`$ Hz) of the $`\sim 45`$ Hz mode, noticed by van der Klis et al. (1997), strongly suggests that a weakly nonlinear model is desirable. Some oscillations below $`\omega _L`$ (for Sco X-1, oscillations with frequencies of 10–20 Hz) we attribute to the innermost edge of the Keplerian disk, which adjusts itself to the rotating central object (i.e., the neutron star). The physics of these oscillations, related to accreting matter with a large angular momentum, is outside the scope of this Letter and has been described in a separate paper (Titarchuk & Osherovich 1999). We are grateful to J. Fainberg and R.F. Benson for help in the preparation of this paper and for discussions, and to the referee for fruitful suggestions. L.T. thanks NASA for support under grant NAS-5-32484 and the RXTE Guest Observing Program, and Jean Swank for support of this work.
no-problem/9907/astro-ph9907023.html
ar5iv
text
# Thermal Stability of Cold Clouds in Galaxy Halos ## 1. Introduction Walker & Wardle (1998) showed that a population of neutral, AU-sized clouds in the Galactic halo could be responsible for the “Extreme Scattering Events” (ESEs) observed in the radio flux towards several quasars (Fiedler et al. 1987, 1994). In this model the cloud surfaces are exposed to UV radiation from hot stars in the Galactic disk, producing a photo-ionised wind. When one of these clouds crosses the line of sight to a compact radio source, the flux varies as a result of refraction by the ionised gas (cf. Henriksen & Widrow 1995). This model explains the observed flux variations quite naturally; but if the clouds are self-gravitating, then the ESE event rate implies that the cloud population comprises a significant fraction of the Galaxy's mass. This halo cloud population cannot contain much dust mixed with the gas, as this would lead to optical extinction events of distant stars: either the clouds have extremely low metallicity, or any dust grains have sedimented to the cloud centre. Given this, several factors make the clouds difficult to detect (Pfenniger, Combes & Martinet 1994): cold molecular hydrogen is, by and large, invisible; the clouds are small; they are transparent in most regions of the electromagnetic spectrum; and they cover a small fraction of the sky. The clouds are not sufficiently compact to cause gravitational lensing towards the LMC, although Draine (1998) has shown that there is substantial optical refraction by the neutral gas, so that microlensing experiments (Paczyński 1996) already place useful constraints on the properties of low-mass halo clouds. Given that this hypothesised cloud population does not violate observational constraints, the primary issues that need to be addressed are theoretical: (i) how and when did these clouds form? and (ii) how do they resist gravitational collapse? The second of these is addressed in this *Letter*. We begin by writing down equations describing a simple “one-zone” model of a cloud, characterised by a single temperature and pressure (§2), and show that particles of solid H<sub>2</sub> may exist in the clouds (cf. Pfenniger & Combes 1994). At temperatures above the microwave background temperature, these particles cool the cloud by thermal continuum radiation, admitting equilibria in which this cooling balances heating by cosmic rays. In §3, we show that (for optically thin emission) these equilibria are thermally stable: if the cloud contracts, the coolant is destroyed by the increase in temperature, and the power deposited by cosmic rays causes the cloud to expand and the temperature to return to its original value. We conclude that, within the context of our one-zone model, the viable mass range for Galactic clouds is $`10^{-6}`$–$`10^{-1.7}\mathrm{M}_{\odot }`$. ## 2. Cloud model Virial equilibrium implies that for a self-gravitating cloud characterised by mass $`M`$, temperature $`T`$, and radius $`R`$, $$R\simeq GM\mu /kT$$ (1) where $`\mu `$ is the mean molecular weight. The pressure in the cloud can be related to the temperature upon noting that $`P\sim GM^2/R^4`$, yielding $$P=\frac{q}{G^3M^2}\left(\frac{kT}{\mu }\right)^4,$$ (2) where $`q`$ depends on the cloud's structure. For polytropes $`q`$ rises monotonically from 9.2 to 40 as the polytropic index runs from 3/2 to 9/2, so we adopt $`q=20`$. At sufficiently high pressures a fraction $`x`$ of the molecular hydrogen assumes solid (or liquid) form.
Then $`\mu =(1-x+2y)m/(1-x+y)`$, where $`m`$ is the mass of an H<sub>2</sub> molecule, and $`y\simeq 1/6`$ is the abundance ratio He:H<sub>2</sub> by number. Neglecting the temperature difference between the phases, in equilibrium the partial pressure of H<sub>2</sub> equals the saturated vapour pressure, i.e. $$\frac{1-x}{1-x+y}P=\left(\frac{2\pi m}{h^2}\right)^{3/2}(kT)^{5/2}e^{-T_v/T},$$ (3) (valid for $`0<x<1`$), where $`kT_v`$ is the heat of vapourisation for H<sub>2</sub> (Phinney 1985). With $`T_v=91.5\mathrm{K}`$, the vapour pressure given by the RHS of eq. (3) is within 20% of the available experimental data (Souers 1986). Hydrogen grains can cool the gas in a manner similar to dust grains in molecular clouds: the gas cools via collisions with slightly colder solid particles, which in turn cool by thermal continuum emission. To calculate the cooling by solid H<sub>2</sub>, first consider the net power radiated by a single particle (we employ an ‘escape probability’ formulation of radiative transfer): $$L_s=4\sigma \left(\frac{C(T)T^4}{1+\tau }-\frac{C(T_b)T_b^4}{1+\tau _b}\right),$$ (4) where $`C(T)`$ is the Planck-mean absorption cross-section, $`T_b`$ is the cosmic microwave background temperature, and $`\tau `$ and $`\tau _b=\tau C(T_b)/C(T)`$ are the Planck-mean optical depths of the cloud to thermal radiation characterized by $`T`$ and $`T_b`$ respectively. Assuming that the particle size is much smaller than $`\lambda `$ ($`\sim 0.1\mathrm{cm}`$ at the temperatures of relevance here), we may write $`C(T)=C_m(T)m_s`$, $`m_s`$ being the particle mass, and for spherical grains we have (Draine & Lee 1984) $$C_m(T)\simeq \frac{15(4\pi )^3}{28\rho _s}\frac{\lambda _2}{(\epsilon _1+2)^2}\left(\frac{kT}{hc}\right)^2,$$ (5) where $`\rho _s=0.087\mathrm{g}\mathrm{cm}^{-3}`$ is the density of solid H<sub>2</sub> (Souers 1986), the complex dielectric function of the solid is $`\epsilon _1+i\epsilon _2`$, and we have assumed that $`\epsilon _2=\lambda _2/\lambda `$, as expected at low frequencies. The net cooling rate per unit mass of cloud material (gas and solid) is then $$\mathrm{\Lambda }=\frac{4\pi R^2\sigma }{M}\left(\frac{\tau T^4}{1+\tau }-\frac{\tau _bT_b^4}{1+\tau _b}\right),$$ (6) where $`\tau _b=\tau (T_b/T)^2`$, and $$\tau =C_m\frac{x}{1+2y}\frac{M}{\pi R^2}.$$ (7) To evaluate $`\mathrm{\Lambda }`$, we require optical constants for solid H<sub>2</sub> in the microwave. The particles are expected to be almost pure para-hydrogen, as an ortho-para mixture of the solid relaxes to para ($`J=0`$) form in a few days (Souers 1986). The low-frequency value of $`\epsilon _1`$ for para-hydrogen has been measured (Souers 1986) as $`\epsilon _1\simeq 1.25`$; the low-frequency limit of $`\epsilon _2`$ is less certain. Jochemsen et al. (1978) measured the extinction coefficient of a single crystal of solid para-hydrogen in the region of interest ($`\lambda \sim 0.1\mathrm{cm}`$), but could not determine whether this continuum extinction was due to absorption or scattering within the crystal. Because these measurements do not conform to the anticipated low-frequency behaviour ($`\propto 1/\lambda `$), and absorption bands are not expected below the S(0) line, it is likely that the absorption of pure crystalline H<sub>2</sub> is much smaller than the measured extinction, and we can only infer a limit: $`\epsilon _2\lesssim 1.8\times 10^{-3}`$. However, the low-frequency absorption of solid H<sub>2</sub> grains could be strongly enhanced by impurity species and lattice defects.
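Combining the virial pressure (2) with the vapour-pressure condition (3) gives the solid fraction at any (T, M) by one-dimensional root finding. A sketch in cgs units; the constants, the bracketing interval and the example mass are our choices.

```python
import numpy as np
from scipy.optimize import brentq

G, k, h, c = 6.674e-8, 1.381e-16, 6.626e-27, 2.998e10   # cgs constants
m_H2, Msun = 3.35e-24, 1.989e33                          # H2 mass, solar mass [g]
y, q, T_v = 1.0 / 6.0, 20.0, 91.5

def mu(x):
    """Gas-phase mean molecular weight, mu = (1 - x + 2y) m / (1 - x + y)."""
    return (1 - x + 2 * y) * m_H2 / (1 - x + y)

def cloud_pressure(T, M, x):
    """Virial pressure, eq. (2)."""
    return q * (k * T / mu(x))**4 / (G**3 * M**2)

def vapour_pressure(T):
    """Saturated vapour pressure of H2, RHS of eq. (3)."""
    return (2 * np.pi * m_H2 / h**2)**1.5 * (k * T)**2.5 * np.exp(-T_v / T)

def solid_fraction(T, M):
    """Solve eq. (3) for x in (0, 1); return 0 if the H2 partial pressure
    is below saturation (no condensed phase)."""
    def residual(x):
        return ((1 - x) / (1 - x + y)) * cloud_pressure(T, M, x) - vapour_pressure(T)
    if residual(0.0) <= 0.0:
        return 0.0
    return brentq(residual, 0.0, 1.0 - 1e-12)

print(solid_fraction(4.0, 1e-4 * Msun))   # example: T = 4 K, M = 1e-4 Msun
```

Scanning `solid_fraction` over T at fixed M locates the critical temperature T_c as the highest temperature with a nonzero root.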
For the purposes of this paper we adopt $`\epsilon _2=\lambda _2/\lambda `$ and $`\lambda _2=10^{-4}\mathrm{cm}`$. Within the confines of the model, this assumption represents one of our main areas of uncertainty. For a given cloud mass, the fraction of H<sub>2</sub> in the solid phase can be determined from $`T`$ using eqs (2) and (3). This allows the cooling by solid particles to be calculated as a function of $`T`$. The upper panel of Fig. 1 illustrates this for a cloud of mass $`10^{-3}\mathrm{M}_{\odot }`$. For comparison, the cooling contributed by rotational lines of gas-phase H<sub>2</sub>, HD and LiH is also plotted; energies and A-values are from Turner, Kirby-Docken & Dalgarno (1977), Abgrall, Roueff & Viala (1982), and Gianturco et al. (1996). The adopted deuterium and LiH abundances are $`3\times 10^{-5}`$ and $`1.2\times 10^{-10}`$ respectively (Schramm & Turner 1998). We employ an escape-probability formulation of radiative transfer, in which the optically-thin cooling rates are divided by $`(1+\tau )`$, where $`\tau `$ is the optical depth at line centre (rotational transitions) or the Planck mean (continuum emission). There is a critical temperature $`T_c`$ at which H<sub>2</sub> in the cloud lies on the border between the solid and gaseous phases, i.e. the partial pressure of H<sub>2</sub> is equal to its saturated vapour pressure. From this point $`x`$ and $`\mathrm{\Lambda }`$ increase precipitously as $`T`$ is reduced, until $`\tau \sim 1`$ and the cooling is then roughly black-body. $`\mathrm{\Lambda }`$ then increases more slowly, peaking when $`x\simeq 1/2`$ and subsequently dropping to zero as the temperature approaches that of the microwave background. At lower temperatures the cloud is heated by the background radiation. The solid H<sub>2</sub> cooling curves for cloud masses of $`10^{-3}`$, $`10^{-5}`$ and $`10^{-7}\mathrm{M}_{\odot }`$ are compared in the lower panel of figure 1. Decreasing the cloud mass has two consequences: the critical temperature $`T_c`$ increases, whereas the maximum value of $`\mathrm{\Lambda }`$ decreases (because in this circumstance the emission is optically thick, and cloud surface area is proportional to $`M^2`$ at a given temperature). For cloud masses $`\lesssim 10^{-7}\mathrm{M}_{\odot }`$, $`T_c`$ is above the H<sub>2</sub> triple point (13.8 K) and liquid droplets of H<sub>2</sub> form instead. This does not qualitatively affect the calculations, as the density and saturated vapour pressure of the liquid are within 50% of those of the solid for $`T\lesssim 20`$ K (Souers 1986), and the optical properties are similar (in the sense that $`\epsilon _2`$ is small and uncertain); thus in Fig. 1 we continue the cooling curve for a $`10^{-7}\mathrm{M}_{\odot }`$ cloud above the triple point as a dotted curve. Thermally stable solutions do not exist for masses $`\lesssim 10^{-7.5}\mathrm{M}_{\odot }`$, as the partial pressure of H<sub>2</sub> exceeds the saturated vapour pressure unless $`x>0.5`$. On the other hand, for $`M\gtrsim 10^{-1.7}\mathrm{M}_{\odot }`$, $`T_c`$ is below the CMB temperature and the solid phase warms the cloud rather than cooling it.
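Given the solid fraction, the net cooling rate follows directly from eqs (5)-(7); the sketch below continues the previous snippet (reusing its constants and `solid_fraction`) with the optical constants adopted in the text.

```python
sigma_SB = 5.670e-5                       # Stefan-Boltzmann constant [cgs]
eps1, lam2, rho_s = 1.25, 1.0e-4, 0.087   # adopted solid-H2 properties

def C_m(T):
    """Planck-mean mass absorption coefficient of the grains, eq. (5)."""
    return (15 * (4 * np.pi)**3 / (28 * rho_s)
            * lam2 / (eps1 + 2)**2 * (k * T / (h * c))**2)

def cooling_rate(T, M, T_b=2.73):
    """Net cooling per unit cloud mass, eqs (6)-(7); negative values mean
    net heating of the cloud by the microwave background."""
    x = solid_fraction(T, M)
    R = G * M * mu(x) / (k * T)                             # eq. (1)
    tau = C_m(T) * (x / (1 + 2 * y)) * M / (np.pi * R**2)   # eq. (7)
    tau_b = tau * (T_b / T)**2
    return (4 * np.pi * R**2 * sigma_SB / M
            * (tau * T**4 / (1 + tau) - tau_b * T_b**4 / (1 + tau_b)))
```

Plotting `cooling_rate` against T at fixed mass reproduces the qualitative shape of the solid-H<sub>2</sub> curves in Fig. 1: zero above T_c, a steep rise just below it, and a decline to zero as T approaches T_b.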
The local interstellar cosmic-ray ionisation rate in the Galactic disc, $`3\times 10^{-17}\mathrm{s}^{-1}\mathrm{H}^{-1}`$ (Webber 1998), implies a heating rate $`\mathrm{\Gamma }\simeq 3\times 10^{-4}\mathrm{erg}\mathrm{g}^{-1}\mathrm{s}^{-1}`$ (Cravens & Dalgarno 1978). The cosmic-ray heating in the halo is uncertain but should be somewhat lower, say $`10^{-5}\mathrm{erg}\mathrm{g}^{-1}\mathrm{s}^{-1}`$. In Fig. 2, we show the cooling rates for clouds of masses $`10^{-7}`$–$`10^{-1.7}\mathrm{M}_{\odot }`$. The solid curves are contours of constant optical depth; the dashed curve shows the optically-thick limit and represents the maximum cooling rate for each cloud mass. It appears that solid hydrogen can provide the necessary cooling for planetary-mass gas clouds at the cosmic-ray heating rates expected in the Galactic disk and halo. The upper panel of Fig. 1 shows that there are typically three equilibrium temperatures available for cloud masses between $`10^{-7.5}`$ and $`10^{-1.7}\mathrm{M}_{\odot }`$ at the expected heating rates: solid H<sub>2</sub> provides one barely above $`T_b`$ and one a few degrees higher; the gas-phase coolants provide an equilibrium above 30 K. We now show that thermal stability requires $`\mathrm{\Lambda }`$ to be a decreasing function of $`T`$, and therefore only the second of these three equilibria is stable. In virial equilibrium the total energy per unit mass is approximately $`-\frac{3}{2}kT/\mu `$ (the internal excitation of the gas is negligible at the low temperatures of interest here), so the thermal evolution of the cloud is determined by $$\frac{3k}{2\mu }\frac{\mathrm{d}T}{\mathrm{d}t}=\mathrm{\Lambda }-\mathrm{\Gamma }.$$ (8) In the absence of heating the cloud contracts on the Kelvin-Helmholtz time-scale $`t_{\mathrm{KH}}=\frac{3}{2}kT/(\mu \mathrm{\Lambda })`$. Note that this time-scale can be a substantial fraction of the Hubble time for temperatures of a few Kelvin: the dashed curves in figure 1 show the cooling rate that yields $`t_{\mathrm{KH}}=10\mathrm{Gyr}`$. In thermal equilibrium cosmic-ray heating replaces the energy radiated away by the cloud, implying, for example, $`t_{\mathrm{KH}}\simeq 2\times 10^6`$ yr for $`\mathrm{\Gamma }\simeq 10^{-5}\mathrm{erg}\mathrm{g}^{-1}\mathrm{s}^{-1}`$ (at $`T\simeq 10`$ K). This is much greater than the sound crossing time ($`\sim 10^2`$ yr), so the response of a cloud to dynamical perturbations is adiabatic, to a good approximation, and dynamical stability is assured. However, eq. (8) shows that perturbations to the cloud temperature grow or decay as $`e^{\alpha t}`$ where $$\alpha =t_{\mathrm{KH}}^{-1}\frac{T}{\mathrm{\Lambda }}\frac{\mathrm{d}}{\mathrm{d}T}(\mathrm{\Lambda }-\mathrm{\Gamma }),$$ (9) and the right-hand side of this equation is evaluated at the equilibrium temperature. Thus a cloud is thermally stable only if a decrease (increase) in cloud temperature leads to cooling outstripping (lagging) heating. For cosmic-ray heating, $`\mathrm{\Gamma }`$ is independent of $`T`$ if the column through the cloud is insufficient to cause significant attenuation of cosmic rays (changes in temperature affect the cloud's column density through the virial relationship $`R\propto 1/T`$). Thermal stability then requires that $`\mathrm{\Lambda }`$ be a decreasing function of $`T`$, and we conclude that only the equilibrium on the high-temperature shoulder of the solid hydrogen cooling curve is stable.
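The stability classification can be automated by bracketing the roots of Λ(T) = Γ on a grid and checking the sign in eq. (9); a generic sketch, with the cooling curve passed in as a callable (e.g. the `cooling_rate` above with M fixed):

```python
import numpy as np

def classify_equilibria(Lam, Gamma, T_grid):
    """Return [(T_eq, stable)] for roots of Lam(T) = Gamma; a root is stable
    when d(Lam - Gamma)/dT < 0 there, i.e. when alpha in eq. (9) is negative."""
    net = np.array([Lam(T) for T in T_grid]) - Gamma
    out = []
    for i in range(len(net) - 1):
        if net[i] * net[i + 1] < 0.0:                 # sign change brackets a root
            slope = (net[i + 1] - net[i]) / (T_grid[i + 1] - T_grid[i])
            out.append((0.5 * (T_grid[i] + T_grid[i + 1]), slope < 0.0))
    return out
```

Of the three equilibria described above, only the one on the high-temperature shoulder is flagged stable by this test, in line with the argument in the text.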
In fact the column density of each cloud ($`\sim 10^2\mathrm{g}\mathrm{cm}^{-2}`$: Walker 1999) is sufficient to stop sub-GeV cosmic-ray protons (and all electrons), leading to a dependence of $`\mathrm{\Gamma }`$ on $`T`$; this dependence is too weak to affect our conclusions concerning stability. ## 4. Discussion The suggestion that cold gas could comprise a significant fraction of the Galaxy's dark matter is not new; previous proposals include: a fractal medium in the outer reaches of the Galactic disk (Pfenniger et al. 1994); isolated halo clouds (Gerhard & Silk 1996); and mini clusters of clouds in the halo (de Paolis et al. 1995; Gerhard & Silk 1996). However, to date there has been no compelling reason to believe that isolated, cold gas clouds, as inferred by Walker & Wardle (1998), could support themselves for long periods against gravitational collapse. We have shown that such clouds can be stabilised by the precipitation/sublimation of particles of solid hydrogen (or by the condensation/evaporation of droplets of liquid hydrogen) if these particles dominate the radiative cooling of the cloud. The key feature which confers thermal stability is that these particles are destroyed, hence cooling becomes less efficient, as the cloud temperature increases. This feature will be present in any model where condensed hydrogen is the principal coolant, and consequently we expect that more sophisticated structural treatments will also admit stable solutions. The masses of thermally stable clouds lie in the approximate range $`10^{-7.5}`$–$`10^{-1.7}\mathrm{M}_{\odot }`$. The lower limit is increased to $`10^{-6}\mathrm{M}_{\odot }`$ if the clouds are subject to cosmic-ray heating similar to that in the Galactic disc for an interval $`kT/\mu \mathrm{\Gamma }\sim 10^5\mathrm{yr}`$. As halo clouds take much longer than this to pass through the cosmic-ray disc, this limit is appropriate even for a halo cloud population. For cloud masses in the range $`10^{-6}`$–$`10^{-1.7}\mathrm{M}_{\odot }`$ the radiative cooling simply readjusts, on the timescale $`t_{\mathrm{KH}}`$, to maintain equilibrium as $`\mathrm{\Gamma }`$ varies through the orbit ($`\sim 10^8`$ yr) of a cloud around the Galaxy. The typical particle radius $`a`$ is constrained by the requirement that the clouds not produce significant extinction at optical wavelengths: the geometrical optical depth of the particles, $`\tau _g\sim \tau /(C_ma\rho _s)`$, should be less than 1. Adopting $`T=5`$ K and $`\tau =0.01`$, this translates to $`a\gtrsim \mathrm{mm}\times (10^{-4}\mathrm{cm}/\lambda _2)`$. Millimeter-size particles settle to the centre of the cloud in $`10^4\mathrm{yr}`$; this time is shortened if $`\lambda _2`$ is significantly less than $`10^{-4}\mathrm{cm}`$ and the particles are required to be larger. Settling may be counteracted by convective motions or by sublimation resulting from the higher temperatures deeper within the cloud; these issues must await a more sophisticated treatment of the cloud structure. If the hypothesised population of clouds exists, their thermal microwave emission may be detectable as a Galactic continuum background at temperatures just above the Cosmic Microwave Background; a Galactic component of this kind has in fact been isolated in the COBE FIRAS data (Reach et al. 1995). One would like to compare these data with the theory presented here, but it is difficult to predict the total microwave intensity for our model because the distribution of cosmic-ray density away from the Galactic plane is only loosely constrained (see Webber et al. 1994).
A similar uncertainty afflicts the modelling of $`\gamma `$-ray production from baryonic material in the Galactic halo (cf. de Paolis et al. 1995; Salati et al. 1996; de Paolis et al. 1999; Kalberla, Shchekinov & Dettmar 1999). Nevertheless, microwave and $`\gamma `$-ray emissivities are each proportional to the local cosmic-ray flux (assuming the cosmic-ray spectrum does not vary greatly), so we can write $`I_\mu I_\gamma j_\mu /j_\gamma `$, for emissivities $`j`$ and intensities $`I`$. At high latititudes the Galactic $`\gamma `$-ray background is $`I_\gamma 10^6\mathrm{ph}\mathrm{cm}^2\mathrm{s}^1\mathrm{sr}^1`$, above 1 GeV (Dixon et al. 1998); local to the Sun the corresponding (optically thin) emissivity is $`1.1\times 10^3\mathrm{ph}\mathrm{s}^1\mathrm{g}^1\mathrm{sr}^1`$ (Bertsch et al. 1993). Thus for a cosmic-ray heating rate (again, local to the Sun) of $`\mathrm{\Gamma }=4\pi j_\mu 3\times 10^4\mathrm{erg}\mathrm{s}^1\mathrm{g}^1`$ (Cravens & Dalgarno 1978), we expect $`I_\mu 2\times 10^8\mathrm{erg}\mathrm{cm}^2\mathrm{s}^1\mathrm{sr}^1`$. This is roughly 1% of that observed in the FIRAS cold component at high latitude (Reach et al. 1985), so the microwave data do not exclude the possibility of a cold cloud population heated by cosmic rays. Sciama (1999) proposed that all of the cold excess may be accounted for by cosmic-ray heating of cold clouds, but this appears to be based on an overestimate of the gamma-ray flux, and an underestimate of the high-latitude FIRAS flux. ## 5. Conclusions We have demonstrated that, by virtue of the solid/gas phase transition of hydrogen, cold, planetary-mass Galactic gas clouds can be thermally stable even when they are heated at a temperature-independent rate. Our analysis applies to the present epoch, with the microwave background temperature at $`T_b<3`$ K; for background temperatures $`T_b6`$ K, our model admits no stable mass range. Consequently the longevity of the clouds at redshifts $`z1`$ is problematic. We cannot, however, hope to address this issue until a firm theoretical basis for the formation of such clouds has been established. We thank Sterl Phinney for making available a copy of an unpublished preprint and Bruce Draine for thoughtful comments on the manuscript. The Special Research Centre for Theoretical Astrophysics is funded by the Australian Research Council under its Special Research Centres programme.
no-problem/9907/hep-th9907001.html
ar5iv
text
# Hawking Radiation As Tunneling ## I Introduction Several derivations of Hawking radiation exist in the literature . None of them, however, correspond very directly to one of the heuristic pictures most commonly proposed to visualize the source of the radiation, as tunneling. According to this picture, the radiation arises by a process similar to electron-positron pair creation in a constant electric field. The idea is that the energy of a particle changes sign as it crosses the horizon, so that a pair created just inside or just outside the horizon can materialize with zero total energy, after one member of the pair has tunneled to the opposite side. Here we shall show that this schematic can be used to provide a short, direct semi-classical derivation of black hole radiance. In what follows, energy conservation plays a fundamental role: one must make a transition between states with the same total energy, and the mass of the residual hole must go down as it radiates. Indeed, it is precisely the possibility of lowering the black hole mass which ultimately drives the dynamics. This supports the idea that, in quantum gravity, black holes are properly regarded as highly excited states. Broadly speaking, there are two standard approaches to Hawking radiation. In the first, one considers a collapse geometry. The response of external fields to this can be done explicitly or implicitly by abstracting appropriate boundary conditions. In the second, one treats the black hole immersed in a thermal bath. In this approach, one shows that (in general, metastable) equilibrium is possible. By detailed balance, this implies emission from the hole. In both the standard calculations, the background geometry is considered fixed, and energy conservation is not enforced during the emission process. Here we will consider a hole in empty Schwarzschild space, but with a dynamical geometry so as to enforce energy conservation. (Despite appearances, the geometry is not truly static, since there is no global Killing vector.) Because we are treating this aspect more realistically, we must – and do – find corrections to the standard results. These become quantitively significant when the quantum of radiation carries a substantial fraction of the mass of the hole. ## II Tunneling To describe across-horizon phenomena, it is necessary to choose coordinates which, unlike Schwarzschild coordinates, are not singular at the horizon. A particularly suitable choice is obtained by introducing a time coordinate, $$t=t_s+2\sqrt{2Mr}+2M\mathrm{ln}\frac{\sqrt{r}\sqrt{2M}}{\sqrt{r}+\sqrt{2M}},$$ (1) where $`t_s`$ is Schwarzschild time. With this choice, the line element reads $$ds^2=\left(1\frac{2M}{r}\right)dt^2+2\sqrt{\frac{2M}{r}}dtdr+dr^2+r^2d\mathrm{\Omega }^2.$$ (2) There is now no singularity at $`r=2M`$, and the true character of the spacetime, as being stationary but not static, is manifest. These coordinates were first introduced by Painlevé (who used them to criticize general relativity, for allowing singularities to come and go!). Their utility for studies of black hole quantum mechanics was emphasized more recently in . For our purposes, the crucial features of these coordinates are that they are stationary and nonsingular through the horizon. 
Thus it is possible to define an effective “vacuum” state of a quantum field by requiring that it annihilate modes which carry negative frequency with respect to $`t`$; such a state will look essentially empty (in any case, nonsingular) to a freely-falling observer as he or she passes through the horizon. This vacuum differs strictly from the standard Unruh vacuum, defined by requiring positive frequency with respect to the Kruskal coordinate $`U=\sqrt{r2M}\mathrm{exp}\left(\frac{t_sr}{4M}\right)`$ . The difference, however, shows up only in transients, and does not affect the late-time radiation. The radial null geodesics are given by $$\dot{r}\frac{dr}{dt}=\pm 1\sqrt{\frac{2M}{r}},$$ (3) with the upper (lower) sign in Eq. 3 corresponding to outgoing (ingoing) geodesics, under the implicit assumption that $`t`$ increases towards the future. These equations are modified when the particle’s self-gravitation is taken into account. Self-gravitating shells in Hamiltonian gravity were studied by Kraus and Wilczek . They found that, when the black hole mass is held fixed and the total ADM mass allowed to vary, a shell of energy $`\omega `$ moves in the geodesics of a spacetime with $`M`$ replaced by $`M+\omega `$. If instead we fix the total mass and allow the hole mass to fluctuate, then the shell of energy $`\omega `$ travels on the geodesics given by the line element $$ds^2=\left(1\frac{2(M\omega )}{r}\right)dt^2+2\sqrt{\frac{2(M\omega )}{r}}dtdr+dr^2+r^2d\mathrm{\Omega }^2,$$ (4) so we should use Eq. 3 with $`MM\omega `$. Since the typical wavelength of the radiation is of the order of the size of the black hole, one might doubt whether a point particle description is appropriate. However, when the outgoing wave is traced back towards the horizon, its wavelength, as measured by local fiducial observers, is ever-increasingly blue-shifted. Near the horizon, the radial wavenumber approaches infinity and the point particle, or WKB, approximation is justified. The imaginary part of the action for an s-wave outgoing positive energy particle which crosses the horizon outwards from $`r_{\mathrm{in}}`$ to $`r_{\mathrm{out}}`$ can be expressed as $$\mathrm{Im}S=\mathrm{Im}_{r_{\mathrm{in}}}^{r_{\mathrm{out}}}p_r𝑑r=\mathrm{Im}_{r_{\mathrm{in}}}^{r_{\mathrm{out}}}_0^{p_r}𝑑p_r^{}𝑑r.$$ (5) Remarkably, this can be evaluated without entering into the details of the solution, as follows. We multiply and divide the integrand by the two sides of Hamilton’s equation $`\dot{r}=+\frac{dH}{dp_r}|_r`$, change variable from momentum to energy, and switch the order of integration to obtain $$\mathrm{Im}S=\mathrm{Im}_M^{M\omega }_{r_{\mathrm{in}}}^{r_{\mathrm{out}}}\frac{dr}{\dot{r}}𝑑H=\mathrm{Im}_0^{+\omega }_{r_{\mathrm{in}}}^{r_{\mathrm{out}}}\frac{dr}{1\sqrt{\frac{2\left(M\omega ^{}\right)}{r}}}\left(d\omega ^{}\right),$$ (6) where we have used the modified Eq. 3, and the minus sign appears because $`H=M\omega ^{}`$. But now the integral can be done by deforming the contour, so as to ensure that positive energy solutions decay in time (that is, into the lower half $`\omega ^{}`$ plane). In this way we obtain $$\mathrm{Im}S=+4\pi \omega \left(M\frac{\omega }{2}\right),$$ (7) provided $`r_{\mathrm{in}}>r_{\mathrm{out}}`$. To understand this ordering – which supplies the correct sign – we observe that when the integrals in Eq. 
5 are not interchanged, and with the contour evaluated via the prescription $`\omega \omega iϵ`$, we have $$\mathrm{Im}S=+\mathrm{Im}_{r_{\mathrm{in}}}^{r_{\mathrm{out}}}_M^{M\omega }\frac{dM^{}}{1\sqrt{\frac{2M^{}}{r}}}𝑑r=\mathrm{Im}_{r_{\mathrm{in}}}^{r_{\mathrm{out}}}\pi rdr.$$ (8) Hence $`r_{\mathrm{in}}=2M`$ and $`r_{\mathrm{out}}=2\left(M\omega \right)`$. (Incidentally, comparing the above equation with Eq. 5, we also find that $`\mathrm{Im}p_r=\pi r`$.) Although this radially inward motion appears at first sight to be classically allowed, it is nevertheless a classically forbidden trajectory because the apparent horizon is itself contracting. Thus, the limits on the integral indicate that, over the course of the classically forbidden trajectory, the outgoing particle starts from $`r=2Mϵ`$, just inside the initial position of the horizon, and traverses the contracting horizon to materialize at $`r=2(M\omega )+ϵ`$, just outside the final position of the horizon. Alternatively, and along the same lines, Hawking radiation can also be regarded as pair creation outside the horizon, with the negative energy particle tunneling into the black hole. Since such a particle propagates backwards in time, we have to reverse time in the equations of motion. From the line element, Eq. 2, we see that time-reversal corresponds to $`\sqrt{\frac{2M}{r}}\sqrt{\frac{2M}{r}}`$. Also, since the anti-particle sees a geometry of fixed black hole mass, the upshot of self-gravitation is to replace $`M`$ by $`M+\omega `$, rather than $`M\omega `$. Thus an ingoing negative energy particle has $$\mathrm{Im}S=\mathrm{Im}_0^\omega _{r_{out}}^{r_{in}}\frac{dr}{1+\sqrt{\frac{2\left(M+\omega ^{}\right)}{r}}}𝑑\omega ^{}=+4\pi \omega \left(M\frac{\omega }{2}\right),$$ (9) where to obtain the last equation we have used Feynman’s “hole theory” deformation of the contour: $`\omega ^{}\omega ^{}+iϵ`$. Both channels – particle or anti-particle tunneling – contribute to the rate for the Hawking process so, in a more detailed calculation, one would have to add their amplitudes before squaring in order to obtain the semi-classical tunneling rate. Such considerations, however, only concern the pre-factor. In either treatment, the exponential part of the semi-classical emission rate, in agreement with , is $$\mathrm{\Gamma }e^{2\mathrm{Im}S}=e^{8\pi \omega \left(M\frac{\omega }{2}\right)}=e^{+\mathrm{\Delta }S_{\mathrm{B}\mathrm{H}}},$$ (10) where we have expressed the result more naturally in terms of the change in the hole’s Bekenstein-Hawking entropy, $`S_{\mathrm{B}\mathrm{H}}`$. When the quadratic term is neglected, Eq. 10 reduces to a Boltzmann factor for a particle with energy $`\omega `$ at the inverse Hawking temperature $`8\pi M`$. The $`\omega ^2`$ correction arises from the physics of energy conservation, which (roughly speaking) self-consistently raises the effective temperature of the hole as it radiates. That the exact result must be correct can be seen on physical grounds by considering the limit in which the emitted particle carries away the entire mass and charge of the black hole (corresponding to the transmutation of the black hole into an outgoing shell). There can be only one such outgoing state. On the other hand, there are $`\mathrm{exp}\left(S_{\mathrm{B}\mathrm{H}}\right)`$ states in total. Statistical mechanics then asserts that the probability of finding a shell containing all the mass of the black hole is proportional to $`\mathrm{exp}\left(S_{\mathrm{B}\mathrm{H}}\right)`$, as above. 
Following standard arguments, Eq. 10 with the quadratic term neglected implies the Planck spectral flux appropriate to an inverse temperature of $`8\pi M`$: $$\rho \left(\omega \right)=\frac{d\omega }{2\pi }\frac{|T\left(\omega \right)|^2}{e^{+8\pi M\omega }1},$$ (11) where $`|T\left(\omega \right)|^2`$ is the frequency-dependent (greybody) transmission co-efficient for the outgoing particle to reach future infinity without back-scattering. It arises from a more complete treatment of the modes, whose semi-classical behavior near the turning point we have been discussing. The preceding techniques can also be applied to emission from a charged black hole. However, when the outgoing radiation carries away the black hole’s charge, the calculations are complicated by the fact that the trajectories are now also subject to electromagnetic forces. Here we restrict ourselves to uncharged radiation coming from a Reissner-Nordström black hole. The derivation then proceeds in a similar fashion to that above. The charged counterpart to the Painlevé line element is $$ds^2=\left(1\frac{2M}{r}+\frac{Q^2}{r^2}\right)dt^2+2\sqrt{\frac{2M}{r}\frac{Q^2}{r^2}}dtdr+dr^2+r^2d\mathrm{\Omega }^2,$$ (12) which is obtained from the standard Reissner-Nordström line element by the coordinate transformation, $`t`$ $`=`$ $`t_r+2\sqrt{2MrQ^2}+M\mathrm{ln}\left({\displaystyle \frac{r\sqrt{2MrQ^2}}{r+\sqrt{2MrQ^2}}}\right)`$ (14) $`+{\displaystyle \frac{Q^2M^2}{\sqrt{M^2Q^2}}}\mathrm{arctanh}\left({\displaystyle \frac{\sqrt{M^2Q^2}\sqrt{2MrQ^2}}{Mr}}\right),`$ where $`t_r`$ is the Reissner time coordinate. The line element now manifestly displays the stationary, nonstatic, and nonsingular nature of the spacetime. The equation of motion for an outgoing massless particle is $$\dot{r}\frac{dr}{dt}=+1\sqrt{\frac{2M}{r}\frac{Q^2}{r^2}},$$ (15) with $`MM\omega `$ when self-gravitation is included . The imaginary part of the action for a positive energy outgoing particle is $$\mathrm{Im}S=\mathrm{Im}_0^{+\omega }_{r_{\mathrm{in}}}^{r_{\mathrm{out}}}\frac{dr}{1\sqrt{\frac{2\left(M\omega ^{}\right)}{r}\frac{Q^2}{r^2}}}\left(d\omega ^{}\right),$$ (16) which is again evaluated by deforming the contour in accordance with Feynman’s $`w^{}w^{}iϵ`$ prescription. The residue at the pole can be read off by substituting $`u\sqrt{2\left(M\omega ^{}\right)rQ^2}`$. This yields an emission rate of $$\mathrm{\Gamma }e^{2\mathrm{Im}S}=e^{4\pi \left(2\omega \left(M\frac{\omega }{2}\right)(M\omega )\sqrt{(M\omega )^2Q^2}+M\sqrt{M^2Q^2}\right)}=e^{+\mathrm{\Delta }S_{\mathrm{B}\mathrm{H}}}.$$ (17) To first order in $`\omega `$, Eq. (17) is consistent with Hawking’s result of thermal emission at the Hawking temperature, $`T_H`$, for a charged black hole: $$T_H=\frac{1}{2\pi }\frac{\sqrt{M^2Q^2}}{\left(M+\sqrt{M^2Q^2}\right)^2}.$$ (18) Again, energy conservation implies that the exact result has corrections of higher order in $`\omega `$; these can all be collected to express the emission rate as the exponent of the change in entropy. Moreover, since the emission rate has to be real, the presence of the first square root in Eq. (17) ensures that radiation past extremality is not possible. Unlike in the traditional formulas, the third law of black hole thermodynamics is here manifestly enforced. Note that only local physics has gone into our derivations. There was neither an appeal to Euclideanization nor any need to invoke an explicit collapse phase. 
The time asymmetry leading to outgoing radiation arose instead from use of the “normal” local contour deformation prescription in terms of the nonstatic coordinate $`t`$. ## III Relation to Electric Discharge The calculation presented above is formally self-contained, but some additional discussion is in order, to elucidate its physical meaning and to dispel a puzzle it poses. When considered at the very broadest level, radiation of mass from a black hole resembles tunneling of electric charge off a charged conducting sphere. Upon a moment’s reflection, however, the difference in the physics of the two situations appears so striking as to pose a puzzle. For while the electric force between like charges is repulsive, the gravitational force is always attractive. Related to this, the field energy of electric fields is positive, while (heuristically) the field energy of gravitational fields is negative. On this basis one might think that the electric process should proceed spontaneously, and need not require tunneling, while the gravitational process has no evident reason to proceed at all. Consider a conducting sphere of radius $`R`$ carrying charge $`Q`$. The electric field energy can be lowered by emitting a charge $`q`$ so we expect this process to occur spontaneously. If we neglect back-reaction of the charge $`q`$ on the conducting sphere, the force is repulsive at all distances, and there is no barrier to emission. In a more accurate treatment, however, we must take into account the induced non-uniformity of the charge on the sphere, which is easily done using the method of images. The effective potential is $$V(r)=q\left(\frac{Qq}{r}\frac{qR}{r^2R^2}\right),$$ (19) where we consider configurations of image charge which leave the potential on the sphere constant and the field at infinity fixed. In the formal limit $`Qq`$ the first term dominates, and the potential decreases monotonically with $`r`$, indicating no barrier. However the second term increases monotonically with $`r`$, and always dominates for $`rR`$, producing a barrier. In the gravitational problem, the situation is just the reverse. With back-reaction neglected, there is nothing but barrier. Yet our calculation including back-reaction indicates the possibility of redistributing mass-energy of the gravitating sphere (black hole) into kinetic energy of emitted radiation. Since the intrinsic energy of the gravitational field is negative, it is disadvantageous to reduce it, point by point. However, since in general the spacetime containing a black hole is not globally static, there exist freely propagating negative energy modes inside the horizon which cause the black hole to shrink. As a consequence, the black hole’s radius decreases and the external volume of space, over which one integrates the field, increases. This, kinematically, is why the radiation process is allowed. Were the hole geometry to be regarded as fixed, there would be no possible source for the kinetic energy of the radiation, and a genuine tunneling interpretation of Hawking radiation would appear to be precluded. ## IV Conclusion We have derived Hawking radiation from the heuristically familiar perspective of tunneling. Our derivation is in consonance with intuitive notions of black hole radiance but, by taking into account global conservation laws, we are led to a modification of the emission spectrum away from thermality. 
The resulting corrected formula has physically reasonable limiting cases and, by virtue of nonthermality, suggests the possibility of information-carrying correlations in the radiation. Acknowledgement F.W. is supported in part by DOE grant DE-FG02-90ER-40542.
no-problem/9907/cond-mat9907194.html
ar5iv
text
# Interface resistance in ferromagnet/superconductor junctions ## I Introduction The rapidly emerging field of spin polarized transport is based on the ability of a ferromagnetic metal to conduct and accumulate spin-polarized currents . Spin polarized transport between ferromagnets (F) and superconductors (S) received considerable attention recently because of new physical phenomena and potential device applications. An introduction of the hybrid structures based on a combination of ferromagnetic and superconducting materials are not only interesting from a fundamental point of view but can bring further advantages for devices . In particular, spin accumulation effects in superconductors may play an important role because of a number of reasons. First, due to the gap in the excitation spectrum, the spin diffusion length in a superconductor can become quite long at low temperatures . Second, spin accumulation can take place at a FS interface since spin-polarized current in a ferromagnet has to be transformed into spinless supercurrent in a superconductor . An important step in the quantitative analysis of spin accumulation and spin injection in superconductors is the knowledge of the dependence of the resistance of a FS interface on the spin polarization in a ferromagnet. Furthermore, it was recently demonstrated that a combination of F and S metals can be used advantageously for measuring spin polarization in metallic ferromagnets either by measuring $`T_c`$ of FS multilayers , or by directly measuring the resistance of FS point contacts . So far, theoretical studies of the FS contact resistance were limited by calculations for ballistic FS contacts . It was argued in that the effects of impurity scattering are quite important in spin-polarized tunneling since the degree of spin polarization is defined differently in ballistic and diffusive contacts. However, no quantitative calculations on the effects of disorder have been published up to now. Moreover, the contribution of the contact resistance is most easily measured only in a point contact geometry, while for larger area contacts the contribution of an interface becomes rather small. The purpose of the present paper is twofold. First, we extend the theory in order to include the effects of impurity scattering in a contact. Second, we argue that an additional sensitive probe for spin polarization is the excess resistance $`R_{ex}`$ of a FS contact. This resistance is due to penetration of an electric field into a superconductor over macroscopically large charge-imbalance relaxation length $`\lambda _Q`$ and may exceed the direct interface resistance. We show that the magnitude of $`R_{ex}`$ is sensitive to spin polarization in a ferromagnet and provide an estimate for this effect. ## II Ballistic FS contact We start from the derivation of the general expression for the conductance of a FS contact in the absence of impurity scattering (ballistic case). We consider the atomically sharp interface barrier at $`x=0`$ separating F metal ($`x<0`$) and S metal ($`x>0`$), modeled by a potential $`U(r)=H\delta (x)`$ and arbitrary relation between Fermi velocities in F and S, $`v_F,`$ $`v_F`$ and $`v_{Fs}`$. 
Here $`H`$ is the barrier strength parameter, $`v_{F,}=\sqrt{2E_{F,}/m}\sqrt{2E_F(1\pm h)/m}`$, $`v_{Fs}=\sqrt{2E_{Fs}/m}`$, where $`E_{ex},E_F`$ and $`E_{Fs}`$ are respectively exchange energy in a ferromagnet and Fermi energies in F and S metals, the indices $`,`$ refer to the spin subbands and $`h=E_{ex}/E_F`$ denotes the dimensionless spin polarization in a ferromagnet. We assume that the effective electronic masses $`m_F`$, $`m_S`$ are equal to the free electron mass $`m_e`$, the mean free path is larger than the size of the contact and the pair potential is approximated by the step function $`\mathrm{\Delta }(x)=\mathrm{\Delta }(T)\theta (x)`$, $`\mathrm{\Delta }(T)`$ being the bulk pair potential in a superconductor. Charge and spin currents can be calculated within the framework of the BTK approach , i.e. considering explicitly Andreev and normal reflections at the FS interface and taking into account that an incoming electron and an Andreev reflected hole occupy opposite spin subbands . The electron- and hole–like excitations are represented by two-component wave functions, which obey the Bogoliubov de-Gennes equations. An electron, incoming from the ferromagnet F into the superconductor S, is described by a plane wave with a wave vector $`k_{}^+`$ $$\psi _{inc}=\left(\begin{array}{c}1\\ 0\end{array}\right)e^{ik_x^+x}.$$ (1) Due to the four-fold degeneracy of an excitation in a superconducting state, the electron is partially reflected into F as an electron with the opposite wave vector $`k_x^+`$ or as a hole $`k_x^{}`$ $$\psi _{refl}=a\left(\begin{array}{c}0\\ 1\end{array}\right)e^{ik_x^{}x}+b\left(\begin{array}{c}1\\ 0\end{array}\right)e^{ik_x^+x}$$ (2) and partially transmitted into S without branch crossing $`k_{sx}^+`$, or with branch crossing $`k_{sx}^{}`$ $$\psi _{trans}=c\left(\begin{array}{c}u\\ v\end{array}\right)e^{ik_s^+x}+d\left(\begin{array}{c}v\\ u\end{array}\right)e^{ik_s^{}x}.$$ (3) Here $`k_x^\pm ,`$ $`k_x^\pm `$ and $`k_s^\pm `$ are the projections of the Fermi wave vectors in two spin sibbands and in a superconductor on the direction $`x`$ normal to the contact plane, index $`+`$ ($``$) refers to electron- or hole-like quasiparticles. The amplitudes $`a,b,c`$ and $`d`$ have to be determined from the matching conditions for $`\mathrm{\Psi }_F=\psi _{inc}+\psi _{refl}`$ and $`\mathrm{\Psi }_S=\psi _{trans}`$ at the interface, $`x=0`$ $`\mathrm{\Psi }_F(0)`$ $`=`$ $`\mathrm{\Psi }_S(0),`$ (4) $`{\displaystyle \frac{d\mathrm{\Psi }_F(0)}{dx}}{\displaystyle \frac{d\mathrm{\Psi }_S(0)}{dx}}`$ $`=`$ $`2m_eH\mathrm{\Psi }_F(0)/\mathrm{}.`$ (5) Using these conditions we find the Andreev and normal reflection coefficients, $`A=\left|a\right|^2`$and $`B=\left|b\right|^2`$, and the transmission coefficients with or without branch crossing, $`C=\left|c\right|^2`$ and $`D=\left|d\right|^2`$, respectively, which determine charge and spin currents in a ballistic FS contact. Charge current in a FS contact is given by $$I=\frac{e}{2\pi \mathrm{}}\underset{,}{}P_,\frac{d^2k_{}}{(2\pi )^2}_{\mathrm{}}^+\mathrm{}\left[1+\frac{k_{F,}}{k_{F,}}A_,(E)B_,(E)\right]\left[f(E+eV)f(E)\right]𝑑E,$$ (6) where $`P_{}=(E_F\pm E_{ex})/2E_F=(1\pm h)/2`$, $`v_{F_x,}`$ is the projection of the Fermi velocity on the direction $`x`$, $`f(E)`$ is the Fermi distribution function, and $`k_{}`$ is the component of the Fermi momentum parallel to the junction plane, which is conserved for each individual scattering process. 
The ratio $`k_{F,}/k_{F,}`$ provides the normalization of the total probability current, taking into account that Andreev scattering involves different spin subbands. In the limit of low temperatures $`TT_c`$, we arrive the following expression for the charge conductance of a ballistic FS contact at the subgap bias voltage $`eV<\mathrm{\Delta }(T)`$ $$G_{FS}=2G_0T_{}T_{}\frac{(1+\alpha ^2)P_{}(v_F+v_F)/v_{Fs}}{(1r_{}r_{})^2+\alpha ^2(1+r_{}r_{})^2}.$$ (7) Here $`G_0=e^2k_F^2S/4\pi ^2\mathrm{}`$ is the normal state (Sharvin) conductance of the contact, $`S`$ is the contact area, $`T_{}`$ and $`T_{}`$ are the transmission probabilities for scattering from the spin up(down) subband into a superconductor $$T_{}=\frac{4v_Fv_{Fs}}{4Z^2+(v_F+v_{Fs})^2},\text{ }T_{}=\frac{4v_Fv_{Fs}}{4Z^2+(v_F+v_{Fs})^2},\text{ }$$ (8) $$r_,=\sqrt{1T_,},\text{ }Z=H/\mathrm{}v_{Fs},\text{ }\alpha =\sqrt{\mathrm{\Delta }^2(T)(eV)^2}/eV.$$ (9) In relevant limits the expressions (7)-(9) agree with the results derived in , while the advantage of the representation (7)-(9) is, that the charge conductance is directly expressed in terms of the individual probabilities $`T_,`$ and therefore is particularly suitable for consideration of the impurity scattering, as explained in the next section. It follows from eq.(7) that in a NS contact ($`T_{}=T_{}=T`$ ) at zero bias the BTK result $`G_{NS}=2G_0T^2/(2T)^2`$ is recovered, which yields the conductance doubling $`G_{NS}=2G_0`$ for a fully transmissive contact, $`T=1`$. This conductance enhancement is suppressed in a FS contact when $`T_{}T_{}`$ due to spin polarization in a ferromagnet. It is straightforward to extend the expressions (7)-(9) to the regime of high bias voltage, $`eV>\mathrm{\Delta }(T)`$. The results of numerical calculations of the dependence of charge current and zero-bias conductance are presented in Figs.1-3 for various values of spin polarization. It is seen that the zero-bias charge conductance is quite sensitive to the spin polarization $`h=E_{ex}/E_F`$. Fig.3 shows that for small values of $`h`$ this dependence is linear, which reflects the simple fact that the number of transmitted electronic modes scales like $`1h`$ due to spin reversal by the Andreev reflection. At $`h`$ close to 1 a nonlinearity appears in the $`G_{FS}`$ vs $`h`$ dependence due to the dependence of the transmission coefficients $`T_,`$ on $`h`$ (8). Spin current in a FS contact is given by $$I=\frac{e}{2\pi \mathrm{}}\underset{,}{}P_,\frac{d^2k_{}}{(2\pi )^2}_{\mathrm{}}^+\mathrm{}\left[1\frac{k_{F,}}{k_{F,}}A_,(E)B_,(E)\right]\left[f(E+eV)f(E)\right]𝑑E$$ (10) Difference in the sign of the contributions of the Andreev coefficients $`A_{}(E)`$ in eqs. (6) and (10) reflects the fact that, while an Andreev reflected hole carries charge in the same direction as an incoming electron, it carries spin in the opposite direction. The low temperature spin current vanishes at subgap voltages, while at $`eV>\mathrm{\Delta }(T)`$ the spin conductance $`G_{FS}^{(s)}`$ is given by the following expression $`G_{FS}^{(s)}`$ $`=`$ $`{\displaystyle \frac{4G_0\beta }{\left[1+\beta r_{}r_{}(1\beta )\right]^2}}\left[{\displaystyle \frac{T_{}v_F}{2v_{Fs}}}+{\displaystyle \frac{T_{}P_{}v_F}{2v_{Fs}}}\right],`$ (11) $`\beta `$ $`=`$ $`\sqrt{(eV)^2\mathrm{\Delta }^2(T)}/eV.`$ (12) From eq.(11) one can calculate numerically the spin conductance as a function of bias voltage and spin polarization. 
Figs.1, 2 show the results of numerical calculations of the dependences of the low temperature spin conductance on the polarization in a ferromagnet for two different values of the barrier strength parameter, $`Z=0`$ and $`Z=1`$. ## III Diffusive FS contact In the previous section the case of a ballistic FS contact was considered, when the contact size is smaller than the electronic mean free path. However the latter condition is not always fulfilled in experiments, and it is therefore of interest to evaluate the effect of impurity scattering in a contact. As a model for a FS contact we consider two bulk reservoirs (S and F), which in addition to the interface potential $`U(r)=H\delta (x)`$ are separated by the scattering region (a diffusive conductor) with a size smaller than the electronic mean free path. The expressions derived above are particularly suitable for the introduction of impurity scattering, since they allow straightforward application of the scattering theory. According to this theory (see and references therein) any diffusive conductor having size larger than the electronic mean free path is characterized by universal distribution of transmission eigenvalues $`t`$ over different channels. An average conductance of a diffusive metal is then given as a sum of contributions of those channels, each having the conductance $`G_0=e^2/2\pi \mathrm{}`$ $$G_\sigma =\frac{G_{N\sigma }}{G_0}_0^1g_\sigma (t)\rho (t)𝑑t,$$ (13) where $`\sigma =,`$ is the spin index, $`g_\sigma `$ is the conductance of a channel with transmission coefficient $`t`$, $`G_{N\sigma }=e^2N_\sigma (0)D_\sigma `$ is the normal state conductance per spin direction, $`N_\sigma `$ is the density of states at the Fermi level. The expression (13) is valid when the impurity scattering does not mix different spin directions. Function $`\rho (t)`$ is the universal distribution function of transmission eigenvalues for different channels given by $$\rho (t)=\frac{1}{2t\sqrt{1t}}$$ (14) and does not depend on microscopic parameters of a diffusive conductor. Eq.(14) shows that the transmission eigenvalues have a bimodal distribution with a peak at unit transmission and a peak at exponentially small transmission. As a model for the diffusive SF contact we consider two scattering regions in series: an incoming electron is first transmitted through the diffusive region with probability $`t`$, then crosses the FS interface with probability $`T_{}`$. In turn, an Andreev-reflected hole is first scattered by the interface (probability $`T_{}`$), then by the diffusive region. The probabilities of these two-step processes $`T_{1,2}`$ are given by the expression $$T_{1,2}=\frac{tT_,}{t+T_,tT_,},$$ (15) which follows from averaging over transmission resonances between two scattering regions, assuming that all relevant distances exceed the electronic wave-length. The charge conductance in a diffusive FS contact is given by the expression eq.(7) in which the probabilities $`T_,`$ should be substituted by the probabilities $`T_{1,2}`$ of the two-step scattering processes described above. Here we present the result for low temperatures and $`eV<\mathrm{\Delta }(T)`$ $$G_{FS}=G_N_0^1\frac{T_1T_2(1+\alpha ^2)P_{}(v_F^2/v_{Fs}^2+1)}{(1r_1r_2)^2+\alpha ^2(1+r_1r_2)^2}\frac{dt}{t\sqrt{1t}},$$ (16) where the probabilities $`T_{1,2}`$ are given by eq.(15), $`r_{1,2}=\sqrt{1T_{1,2}}`$, $`\alpha =\sqrt{\mathrm{\Delta }^2(T)(eV)^2}/eV`$ and $`G_N=2e^2N(E_F)D(E_F)`$ is the conductance of a contact in the unpolarized state. 
In the NS case with a transparent interface (Z=0) and $`v_F=v_F=v_{Fs}`$ the above expression at $`V=0`$ yields $$G_{NS}(V=0)=G_N_0^1\frac{tdt}{(2t)^2\sqrt{1t}}G_N,$$ (17) i.e. we reproduce the well known result that the zero-bias conductance of the diffusive contact $`G_{NS}=`$ $`G_N`$, in contrast to the ballistic case when $`G_{NS}=2G_N`$, first obtained by Artemenko,Volkov and Zaitsev by a different method. Fig.4 shows the results of numerical calculations of the dependence of the zero-bias conductance of a disordered FS contact vs spin polarization. It is seen by comparison of Figs.3 and 4, that assuming ballistic transport in a FS contact one can overestimate the spin polarization in a ferromagnet. The results presented here correspond however to the strong scattering regime. For a more quantitative comparison with experiments the model should be further extended to the regime of arbitrary scattering strength. ## IV Excess resistance So far we have taken into account both the interface and impurity scattering in the contact, but neglected the contribution of an electric field penetrating a superconductor. The latter can be indeed neglected in a point contact geometry, while it becomes important in planar contacts, in particular close to $`T_c`$, when an electric field penetrates into a superconductor over the macroscopically large charge-imbalance relaxation length $`\lambda _Q`$ . The corresponding contribution to the boundary resistance of a FS contact can be calculated by the generalization of the approach of valid for a clean superconductor. Excess resistance $`R_{ex}`$ is given $$R_{ex}=F\lambda _Q\rho _s/S.$$ (18) Here $`\rho _s`$ is the normal state resistivity of a superconductor and $`F=Y^{}/Y`$, where $`Y^{}`$ represents the charge current in FS contact $$Y^{}=\underset{}{}P_,\frac{d^2k_{}}{(2\pi )^2}_{\mathrm{}}^+\mathrm{}\left(\frac{f}{E}\right)\left[1C_,(E)D_,(E)\right]N_s^1(E)𝑑E$$ (19) and $`Y`$ represents the total current $$Y=\underset{}{}P_,\frac{d^2k_{}}{(2\pi )^2}_{\mathrm{}}^+\mathrm{}\left(\frac{f}{E}\right)\left[1+\frac{k_{F,}}{k_{F,}}A_,(E)B_,(E)\right]𝑑E.$$ (20) Here $`N_s(E)=E/\sqrt{E^2\mathrm{\Delta }^2(T)}`$ is the density of states in a superconductor. At $`E<\mathrm{\Delta }(T)`$ the coefficients $`C,D`$ vanish, while $`A,B`$ are given by $$\frac{k_{F,}}{k_{F,}}A_,(E)=1B_,(E)=\frac{T_{}T_{}(1+\alpha ^2)}{(1r_{}r_{})^2+\alpha ^2(1+r_{}r_{})^2},$$ (21) where $`\alpha =\sqrt{\mathrm{\Delta }^2(T)E^2}/E`$. At $`E>\mathrm{\Delta }(T)`$ $$\frac{k_{F,}}{k_{F,}}A_,(E)=\frac{T_{}T_{}(1\beta ^2)}{\left[1+\beta r_{}r_{}(1\beta )\right]^2};\text{ }1B_,(E)=\frac{T_{}T_{}(1\beta )^2+4\beta (T_{}+T_{})}{\left[1+\beta r_{}r_{}(1\beta )\right]^2};$$ (22) $$C_,(E)=\frac{2(1+\beta )(T_{}+T_{})}{\left[1+\beta r_{}r_{}(1\beta )\right]^2};\text{ }\frac{k_{F,}}{k_{F,}}A_,(E)+B_,(E)+C_,(E)+D_,(E)=1,$$ (23) where $`\beta =\sqrt{E^2\mathrm{\Delta }^2(T)}/E`$. The results of the calculations of the excess resistance factor $`F=Y^{}/Y`$ for a FS contact are shown in Fig.5. It is seen that $`F`$ increases strongly at temperatures close to $`T_c`$. Given the fact that the charge-imbalance relaxation length $`\lambda _Q`$ becomes macroscopically large near $`T_c`$ we conclude that measuring the excess resistance in a FS contact can provide a sensitive probe for measuring spin polarization. In conclusion, we have presented the results of a theoretical study of interface resistance in ferromagnet/superconductor junctions. 
The Andreev reflection theory is extended in order to take into account the impurity scattering within the contact in the regime of strong disorder. The model is applied to the calculation of the excess resistance of a FS contact caused by penetration of an electric field into a superconductor. The latter contribution could be important in contacts with planar geometry and provides an additional method for measuring spin polarization in ferromagnets. Acknowledgments. This work was supported in part by NWO program for Dutch-Russian research cooperation. Stimulating discussions with J. Aarts, G. Gerritsma, M.Yu. Kupriyanov, I.I. Mazin, B. Nadgorny and V.V. Ryazanov are gratefully acknowledged.
no-problem/9907/quant-ph9907073.html
ar5iv
text
# Continuous Variable Quantum Cryptography ## Abstract We propose a quantum cryptographic scheme in which small phase and amplitude modulations of CW light beams carry the key information. The presence of EPR type correlations provides the quantum protection. Quantum cryptographic schemes use fundamental properties of quantum mechanics to ensure the protection of random number keys . In particular the act of measurement in quantum mechanics inevitably disturbs the system. Further more, for single quanta such as a photon, simultaneous measurements of non-commuting variables are forbidden. By randomly encoding the information between non-commuting observables of a stream of single photons any eavesdropper (Eve) is forced to guess which observable to measure for each photon. On average, half the time Eve will guess wrong, revealing her self through the back action of the measurement to the sender (Alice) and receiver (Bob). There are some disadvantages in working with single photons, particularly in free-space where scattered light levels can be high. Also it is of fundamental interest to quantum information research to investigate links between discrete variable, single photon phenomena and continuous variable, multi-photon effects. This motivates a consideration of quantum cryptography using multi-photon light modes. In particular we consider encoding key information as small signals carried on the amplitude and and phase quadrature amplitudes of the beam. These are the analogues of position and momentum for a light mode and hence are continuous, conjugate variables. Although simultaneous measurements of these non-commuting observables can be made in various ways, for example splitting the beam on a 50:50 beamsplitter and then making homodyne measurements on each beam, the information that can be obtained is strictly limited by the generalized uncertainty principle for simultaneous measurements . If an ideal measurement of one quadrature amplitude produces a result with a signal to noise of $$(S/N)^\pm =\frac{V_s^\pm }{V_n^\pm }$$ (1) then a simultaneous measurement of both quadratures cannot give a signal to noise result in excess of $$(S/N)_{sim}^\pm =(\frac{\eta ^\pm V_s^\pm }{\eta ^\pm V_n^\pm +\eta ^{}V_m^\pm })S/N^\pm $$ (2) Here $`V_s^\pm `$ and $`V_n^\pm `$ are, respectively, the signal and noise power of the amplitude ($`+`$) or phase ($``$) quadrature at a particular rf frequency with respect to the optical carrier. The quantum noise which is inevitably added when dividing the mode is $`V_m^\pm `$. The splitting ratio is $`\eta ^\pm `$ and $`\eta ^+=1\eta ^{}`$ (e.g a 50:50 beamsplitter has $`\eta ^+=\eta ^{}=0.5`$). The spectral powers are normalized to the quantum noise limit (QNL) such that a coherent beam has $`V_n^\pm =1`$. Normally the partition noise will also be at this limit ($`V_m^\pm =1`$). For a classical light field, i.e. where $`V_n^\pm >>1`$ the penalty will be negligible. However for a coherent beam a halving of the signal to noise for both quadratures is unavoidable when the splitting ratio is a half. The Hartley-Shannon law applies to Gaussian, additive noise, communication channels such as we will consider here. It shows, in general, that if information of a fixed bandwidth is being sent down a communication channel at a rate corresponding to the channel capacity and the signal to noise is reduced, then errors will inevitably appear at the receiver. 
Thus, under such conditions, any attempt by an eavesdropper to make simultaneous measurements will introduce errors into the transmission. In the following we will first examine what level of security is guaranteed by this uncertainty principle if a coherent state mode is used. We will then show that the level of security can in principle be made as strong as for the single quanta case by using a special type of two-mode squeezed state. The question of optimum protocols and eavesdropper strategies is complex and has been studied in detail for the single quanta case . Here we only examine the most obvious strategies and do not attempt to prove equal security for all possible strategies. Consider the set up depicted in Fig.1. A possible protocol is as follows. Alice generates two independent random strings of numbers and encodes one on the phase quadrature, and the other on the amplitude quadrature of a bright coherent beam. Bob uses homodyne detection to detect either the amplitude or phase quadrature of the beam when he receives it. He swaps randomly which quadrature he detects. On a public line Bob then tells Alice at which quadrature he was looking, at any particular time. They pick one quadrature to be the test and the other to be the key. For example, they may pick the amplitude quadrature as the test signal. They would then compare results for the times that Bob was looking at the amplitude quadrature. If Bob’s results agreed with what Alice sent, to within some acceptable error rate, they would consider the transmission secure. They would then use the undisclosed phase quadrature signals, sent whilst Bob was observing the phase quadrature, as their key. By randomly swapping which quadrature is key and which is test throughout the data comparison an increased error rate on either quadrature will immediately be obvious. To quantify our results we will consider the specific encoding scheme of binary pulse code modulation, in which the data is encoded as a train of 1 and 0 electrical pulses which are impressed on the optical beam at some RF frequency using electro-optic modulators. The amplitude and phase signals are imposed at the same frequency with equal power. Let us now consider what strategies Eve could adopt (see Fig.2). Eve could guess which quadrature Bob is going to measure and measure it herself (Fig.2(a)). She could then reproduce the digital signal of that quadrature and impress it on another coherent beam which she would send on to Bob. She would learn nothing about the other quadrature through her measurement and would have to guess her own random string of numbers to place on it. When Eve guesses the right quadrature to measure Bob and Alice will be none the wiser, however, on average 50% of the time Eve will guess wrong. Then Bob will receive a random string from Eve unrelated to the one sent by Alice. These will agree only 50% of the time. Thus Bob and Alice would see a 25% bit error rate in the test transmission if Eve was using this strategy. This is analogous to the result for single quanta schemes in which this type of strategy is the only available. However for bright beams it is possible to make simultaneous measurements of the quadratures, with the caveat that there will be some loss of information. So a second strategy that Eve could follow would be to split the beam in half, measure both quadratures and impose the information obtained on the respective quadratures of another coherent beam which she sends to Bob (Fig.2(b)). How well will this strategy work? 
Suppose Alice wishes to send the data to Bob with a bit error rate (BER) of about 1%. For bandwidth limited transmission of binary pulse code modulation $$BER=\frac{1}{2}erfc\frac{1}{2}\sqrt{\frac{1}{2}S/N}$$ (3) Thus Alice must impose her data with a S/N of about 13dB. For simultaneous measurements of a coherent state the signal to noise obtained is halved (see Eq.2). As a result, using Eq.3, we find the information Eve intercepts and subsequently passes on to Bob will only have a BER of 6%. This is clearly a superior strategy and would be less easily detected. Further more Eve could adopt a third strategy of only intercepting a small amount of the beam and doing simultaneous detection on it (Fig.2(c)). For example, by intercepting 16% of the beam, Eve could gain information about both quadratures with a BER of 25% whilst Bob and Alice would observe only a small increase of their BER to 1.7%. In other words Eve could obtain about the same amount of information about the key that she could obtain using the “guessing” strategy, whilst being very difficult to detect, especially in the presence of losses. The preceding discussion has shown that a cryptographic scheme based on coherent light provides much less security than single quanta schemes . We now consider whether squeezed light can offer improved security. For example amplitude squeezed beams have the property $`V_n^+<1<V_n^{}`$. Because the amplitude quadrature is sub-QNL greater degradation of S/N than the coherent case occurs in simultaneous measurements of amplitude signals (see Eq.2). Unfortunately the phase quadrature must be super-QNL, thus there is less degradation of S/N for phase signals. As a result the total security is in fact less than for a coherent beam. However in the following we will show that by using two squeezed light beams, security comparable to that achieved with single quanta can be obtained. The set-up is shown in Fig.3. Once again Alice encodes her number strings digitally, but now she impresses them on the amplitude quadratures of two, phase locked, amplitude squeezed beams, $`a`$ and $`b`$, one on each. A $`\pi /2`$ phase shift is imposed on beam $`b`$ and then they are mixed on a 50:50 beamsplitter. The resulting output modes, $`c`$ and $`d`$, are given by $`c`$ $`=`$ $`\sqrt{{\displaystyle \frac{1}{2}}}(a+ib)`$ (4) $`d`$ $`=`$ $`\sqrt{{\displaystyle \frac{1}{2}}}(aib)`$ (5) These beams are now in an entangled state which will exhibit Einstein, Podolsky, Rosen (EPR) type correlations . Local oscillator beams (LO’s) of the same power as, and with their polarizations rotated to be orthogonal to $`c`$ and $`d`$ are then mixed with the beams on polarizing beamsplitters. A rapidly varying random time delay is imposed on one of the beams. Both mixed beams are then transmitted to Bob who uses polarizing beamsplitters to extract the local oscillator from each beam. Bob cannot remix the signal beams ($`c`$ and $`d`$) to separate $`a`$ and $`b`$ because the random time delay introduced between the beams has destroyed their coherence at the signal frequency. However, because each beam has a corresponding local oscillator which has suffered the same time delays, Bob can make individual, phase sensitive measurements on each of the beams and extract either the information on $`a`$ or the information on $`b`$ by amplifying the local oscillators and using balanced homodyne detection. Note that the noise of the LO’s is increased by amplification but balanced homodyne detection is insensitive to LO noise. 
He randomly chooses to either: (i) measure the amplitude quadratures of each beam and add them together, in which case he obtains the power spectrum $`V^+`$ $`=`$ $`<|(\stackrel{~}{c}^{}+\stackrel{~}{c})+(\stackrel{~}{d}^{}+\stackrel{~}{d})|^2>`$ (6) $`=`$ $`V_{s,a}+V_{n,a}^+`$ (7) where the tildes indicate Fourier transforms. Thus he obtains the data string impressed on beam $`a`$, $`V_{s,a}`$, imposed on the sub-QNL noise floor of beam $`a`$, $`V_{n,a}^+`$, or; (ii) measure the phase quadratures of each beam and subtract them, in which case he obtains the power spectrum $`V^{}`$ $`=`$ $`<|(\stackrel{~}{c}^{}\stackrel{~}{c})(\stackrel{~}{d}^{}\stackrel{~}{d})|^2>`$ (8) $`=`$ $`V_{s,b}+V_{n,b}^+`$ (9) i.e. he obtains the data string impressed on beam $`b`$, $`V_{s,b}`$, imposed on the sub-QNL noise floor of beam $`b`$, $`V_{n,b}^+`$. Thus the signals lie on conjugate quadratures but both have sub-QNL noise floors. This is the hallmark of the EPR correlation . Consider now eavesdropper strategies. Firstly, like Bob, Eve cannot remix $`c`$ and $`d`$ optically to obtain $`a`$ and $`b`$ due to the randomly varying phase shift ($`\varphi (t)`$) introduced by the time delay. For small phase shifts beam $`c`$ becomes $`c^{}=(a+ib)(1+i\varphi )`$. Mixing $`c^{}`$ and $`d`$ on a beamsplitter will produce outputs with amplitude power spectra $`V_{c^{}+d}`$ $`=`$ $`V_{s,a}+V_{n,a}^++\alpha ^2V_\varphi `$ (10) $`V_{c^{}d}`$ $`=`$ $`V_{s,b}+V_{n,b}^++\alpha ^2V_\varphi `$ (11) where $`\alpha ^2`$ is proportional to the intensity of beams $`a`$ and $`b`$ and $`V_\varphi `$ is the power spectrum of the phase fluctuations. If $`\varphi (t)`$ has a white power spectrum over frequencies from well below to well above the signal frequency the signals will be obscured. It is not possible to directly control the phase shifts with out similarly suppressing the signals. However the phase shifts are also present on the LO co-propagating with $`c^{}`$. Mixing the two LO’s will produce an output with amplitude power spectra $$V_{+LO}=1+E^2V_\varphi $$ (12) where $`E^2`$ is proportional to the intensity of the LO’s and the “one” is from the quantum noise of the LO’s. It is possible to use this output to control the phase noise on the mixed signal beams giving (ideally) the amplitude power spectra $`V_{c^{}+d}^C`$ $`=`$ $`V_{s,a}+V_{n,a}^++{\displaystyle \frac{\alpha ^2}{E^2}}`$ (13) $`V_{c^{}d}^C`$ $`=`$ $`V_{s,b}+V_{n,b}^++{\displaystyle \frac{\alpha ^2}{E^2}}`$ (14) where the remaining penalty arises from the quantum noise of the LO’s. If $`E^2>>\alpha ^2`$ (as is normally the case for a LO) then this penalty can be made negligible, thus retrieving the signals. This is why it is essential that the LO’s have the same power as the signal beams at the point where the phase fluctuations are imposed. This makes the ratio of the correlated phase noise to the independent quantum noise the same for the LO and the signal beam. This cannot be changed by Eve. With $`E^2=\alpha ^2`$ the penalty is at the quantum limit. As we shall see in a moment this is sufficient to reveal Eve. Eve can still adopt the guessing strategy by detecting a particular quadrature of both beams and then using a similar apparatus to Alice’s to re-send the beams. As before she will only guess right half the time thus introducing a BER of 25%. Suppose instead she tries the second strategy of simultaneous detection of both quadratures on each beam. 
She will obtain the following power spectra for the summed amplitude quadratures and the differenced phase quadratures. $`V^+`$ $`=`$ $`{\displaystyle \frac{1}{2}}(V_{s,a}+V_{n,a}^++1)`$ (15) $`V^{}`$ $`=`$ $`{\displaystyle \frac{1}{2}}(V_{s,b}+V_{n,b}^++1)`$ (16) The signal to noise is reduced as predicted by Eq.2 but where the noise power for both quadrature measurements is sub-QNL . This leads to improved security. For example with 10 dB squeezing ($`V_{n,a}=V_{n,b}=0.1`$) the signal to noise in a simultaneous measurement will be reduced by a factor of .09. As a result,assuming initial S/N of 13dB and using Eq.3, we find the information Eve intercepts and subsequently passes on to Bob will now have a BER of about 24%. In other words, the security against an eavesdropper using simultaneous measurements is now on a par with the guessing strategy. The third strategy is also now of no use to Eve as small samples of the fields carry virtually no information. For example, with 10 dB squeezing, intercepting 16% of the field will give Eve virtually no information (a BER of 49.5%) whilst already producing a 5% BER in Bob and Alice’s shared information. In any realistic situation losses will be present. Losses tend in general to reduce security in quantum cryptographic schemes . The problem for our system is that losses force Alice to increase her initial S/N in order to pass the information to Bob with a low BER. Eve can take advantage of this by setting up very close to Alice. Never-the-less reasonable security can be maintained with sufficiently high levels of squeezing. For example with 10 dB squeezing and 10% loss, strategy two will result in a 15% BER in the shared information. Also Eve must intercept 29% of the light to obtain a 25% BER using the third strategy which will cause a 20% BER in Alice and Bob’s information. With 6 dB squeezing and 20% loss the second strategy penalty is reduced to a BER of 7.5%, similar to that of the coherent state scheme. However, for the third strategy, Eve must still intercept 29% of the light to obtain a BER of 25% and this will cause an 11% BER in Alice and Bob’s shared information, much larger than for the coherent case. Although these results demonstrate some tolerance to loss for our continuous variable system it should be noted that single quanta schemes can tolerate much higher losses making them more practical from this point of view. In summary we have examined the quantum cryptographic security of two continuous variable schemes, one based on coherent light, the other based on 2-mode squeezed light. Whilst the coherent light scheme is clearly inferior to single quanta schemes, the squeezed light scheme offers, in principle, equivalent security. The quantum security is provided by the generalized uncertainty relation. It is also essential that the coherence between the two squeezed modes is destroyed. More generally this system is an example of a new quantum information technology based on continuous variable, multi-photon manipulations. Such technologies may herald a new approach to quantum information.
no-problem/9907/math9907064.html
ar5iv
text
# Permutations and Primes ## Abstract The problem of digital sets (DS) all permutations of which generate primes is discussed. Such sets may include only four types of digits: 1, 3, 7 and 9. The direct computations show that for N-digit (ND) integers such DS’s occur at N = 1, 2, 3, and are absent in the 4 - 10 interval of N. On the other hand in N = 19, 23, 317 and 1031 cases (as well as in N = 2 case) the formal ”total” permutation is provided by ”repunits”, integers with all digits being 1. The existence/nonexistence of other (not repunits) full-permutation DS’s for arbitrary large N is an open question with probable negative answer. Maximal-permutation DS’s with maximal number of primes are given for various numbers of digits. Keywords: Number theory, Permutations, Prime numbers The primes are real pearls among all integers and are subject of long-standing (several thousand year) researches. The many amazing results are established on primes . The number of primes are infinite (by the way the largest known prime with 2 098 960 digits, $`2^{\mathrm{6\hspace{0.17em}972\hspace{0.17em}593}}1`$, was found quite recently ) and their general behavior with $`n`$ is at present well known and is described, e.g., by some built-in functions of MATHEMATICA : PrimeQ\[n\] gives True if n is prime and False otherwise, Prime\[n\] gives nth prime (with assumption Prime = 2), and PrimePi\[n\] gives number of all primes $`n`$. Using mainly MATHEMATICA’s means I discuss here a problem of the permutations of given digital set all giving prime. Consider some examples first. Already for two-digit primes one may note pairs 13/31, 17/71, 37/73, 79/97 - all primes, (one may add here fifth ”pair” 11/11 as well), while 19 ”loses” its partner, 91 which is not a prime, see Table 2. One may wonder: is it possible, in the more-digit cases, that full permutations of some DS all give primes? In 3D case (Table 2, case B) the answer is positive, there are 3 full permutation DS’s: 113, 199 and 337, that is, for instance, all three possible permutations 113, 131 and 331 give primes. Other two interesting DS’s are 179 and 379 each giving even more primes, 4, which is however less than a corresponding total number of possible permutations - 6. Another DS with 6 permutations, 137, gives only 3 primes. Notice that in this paper the ”basic” DS’s are written as first pemutations in lexicografic order and they themselves are not necessarily primes. Two smallest such sets are 119 and 133 which are not primes while two other members of each permutation family (191, 911 and 313, 331, respectively) are primes. (And the largest such DS considered in this note is $`7_29_8=\mathrm{\hspace{0.17em}7799999999}`$, see Table 8.) Here is the place to clarify the point of terminology: speaking about, e.g. 137 as DS, I consider it as what in MATHEMATICA is $`List\{1,3,7\}`$ with 3 elements; at another hand, 137 is also used as 3-digit integer in our usual 10-base arithmetic system. To avoid cumbersome notations I assume this not-too-rigorous approach. It is evident that full permutation may give all primes for DS’s comprising only of digits 1, 3, 7 and 9, and none of 0, 2, 4, 5, 6, 8 digits. This greatly reduces the field of search, removing all integers containing at least one of the digits 0, 2, 4, 5, 6 or 8 . The program was written in MATHEMATICA for searching such full permutations among ”1-3-7-9” primes with the negative answer for 4 to 10-digit integers. 
In all cases the number of primes provided by any given DS is less than the number of all possible permutations; see the attached Figures and Tables. This is the main (and negative) result, but some other remarks may be made on the by-products of the calculations.

1. Primes among all "1-3-7-9" integers may be briefly described by an "N-P-D" code as follows: 1 - 2 - 2, 2 - 10 - 6, 3 - 30 - 12, 4 - 63 - 14, 5 - 249 - 35, 6 - 757 - 54, 7 - 2 709 - 74, 8 - 9 177 - 101, 9 - 33 191 - 142, 10 - 118 912 - 184, where N is the number of digits, P the number of primes and D the number of DS's. For all primes (with arbitrary digits) the "N-P-D" figures are: 1 - 4 - 4, 2 - 21 - 17, 3 - 143 - 86, 4 - 1 061 - 336, 5 - 8 363 - 1 109, 6 - 68 906 - 2 967, 7 - 586 081 - 7 041, 8 - 5 096 876 - 15 259, 9 - 45 086 079 - ?, 10 - 404 204 977 - ?. It is of some interest that for smaller N the "mean productivity" (P/D) of "1-3-7-9" DS's is larger than that of all DS's, while starting from N = 5 the number of primes per DS is larger for all DS's:
```
Table 1. Number of N digit primes per basic DS as function of N
for "1-3-7-9" DS's and all DS's
========================
 N          P/D
========================
     "1379"       all
========================
 1    1.0000    1.0000
 2    1.6667    1.2353
 3    2.5000    1.6628
 4    4.5000    3.1577
 5    7.1143    7.5410
 6   14.0185   23.2241
 7   36.6081   83.2383
 8   90.8614  334.0242
 9  233.7394        ?
10  646.2609        ?
```
2. I do not present the values of D and the distribution of the number of primes per given DS in the general case, for two reasons: first, because the calculations would be too time-consuming, and second, because of an ambiguity in the problem, e.g. what to do with primes containing zeros, since in this case some permutations of the corresponding DS, with zeros at the beginning, give integers (and possibly primes) with fewer digits. The first three such primes are 101, 103, 107 and the last three such 10-digit primes are $`9_7701`$, $`9_7703`$ and $`9_7707`$. Another observation is that while there is a very strong correlation between p and c, a richer permutation family does not necessarily give more primes; cf. 139 vs 113, or 1139 vs 1379.

3. As there is no single full permutation among primes with 4 to 10 digits, it is interesting to look for maximal permutations. Quite surprisingly, these "maximal" DS's are not at all among the "1-3-7-9" DS's. From Table 2 one may note that the maximal "1-3-7-9" DS's with 4 digits are 1379 with 7 primes and 1139 with 8 primes. At the same time, there are 4 "ordinary" DS's (1349, 1457, 3479, 3679) with 9 primes, two (1579 and 1789) with 10, and two "super-sets", 1237 and 1279, with 11 primes. To be able to give more primes, a DS should thus comprise digits of the "0-2-4-5-6-8" family as well, not only those of the "1-3-7-9" family. This fact is really very surprising, because the distribution of digits in, for example, 4D primes is far from random and greatly favors the digits 1, 3, 7, 9: Count\[n\] = 217, 603, 359, 602, 326, 327, 336, 574, 321, 579 at n = 0 to 9, i.e., each digit of the "1-3-7-9" family is roughly two times more common than any digit of the other family "0-2-4-5-6-8". In spite of this, DS's of "mixed races" are more productive in making primes; this is a very interesting observation which may find applications in fields very far from primes and integers.
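A quick check of the 4-digit counts quoted in item 3 can be done with the same permutation machinery as in the sketch above (again a hedged Python re-implementation, not the original MATHEMATICA program):

```
from itertools import permutations
from sympy import isprime

def prime_count(ds):
    # Number of distinct primes obtained by permuting the digit string ds.
    return sum(isprime(int("".join(t))) for t in set(permutations(ds)))

print(prime_count("1139"))   # 8  -- the best 4-digit "1379" set
print(prime_count("1279"))   # 11 -- a mixed-digit "super-set"
```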
4. Some maximal-permutation "1379" DS's for larger numbers of digits are listed below (a short sketch after the tables computes c and p for any given set):
```
   b            c      p     c/p
================================
5D
 11339         15     30   .5000
 13379         18     60   .3000
 13799         29     60   .4833
6D
 113779        60    180   .3333
 133379        35    120   .2917
 133799        55    180   .3056
7D
 1113799      113    420   .2690
 1133779      182    630   .2889
 1137799      169    630   .2683
8D
 11333779     419   1680   .2494
 11337779     403   1680   .2399
 11377999     397   1680   .2363
9D
 113337799   1388   7560   .1836
 111337799   1525   7560   .2017
 113377999   1550   7560   .2050
10D
 1113337799  4555  25200   .1808
 1133777999  4606  25200   .1828
 1133377799  4384  25200   .1740
```
In all cases c\[i\] $`\ll `$ p\[i\], and the ratio c/p tends to decrease with increasing N; that is, in some sense, the "probability" of a full permutation appearing among "1379" primes diminishes with an increasing number of digits. One may consider this a hint that full-permutation "1379" DS's are absent for arbitrarily large N. I conclude with the guess that the existence/nonexistence of (non-repunit) full-permutation DS's for arbitrarily large N is an open question with a probable negative answer.
```
Table 2. A: 2-10-6 case. b - basic digit sets, c - counts,
p - permutations. Total number 10 = Sum[c[i]].
B: 3-30-12 case. c[i] = p[i] in 3 cases, for b = 113, 199 and 337.
C: 4-63-14 case. c[i] < p[i] in all cases.
Note that basic digit sets are written as the first ones in
lexicographic order and they are not necessarily primes themselves.
The two smallest such sets are 119 and 133, which are not primes,
while the two other members of each permutation family (191, 911
and 313, 331, respectively) are primes. Another observation is that
while there is a very strong correlation between p and c, a richer
permutation family does not necessarily give more primes;
cf. 139 vs 113, or 1139 vs 1379.
===============================================
     A                B                C
================================================
  b  c  p        b   c  p        b    c   p
================================================
 11  1  1       113  3  3      1117   2   4
 13  2  2       119  2  3      1139   8  12
 17  2  2       133  2  3      1333   2   4
 19  1  2       137  3  6      1337   5  12
 37  2  2       139  2  6      1339   5  12
 79  2  2       179  4  6      1379   7  24
----------      199  3  3      1399   6  12
 10             337  3  3      1777   3   4
                377  1  3      1799   5  12
                379  4  6      1999   2   4
                779  2  3      3337   3   4
                799  1  3      3379   6  12
               -----------     3779   5  12
                 30            3799   4  12
                              -----------
                                  63
```
```
Table 3. 5-249-35 case. b - 35 basic digit sets, c - counts,
p - permutations. Total number 249 = Sum[c[i]]. c[i] < p[i] at all i.
    b   c   p       b   c   p       b   c   p
==============================================
11113   3   5   11117   2   5   11119   1   5
11137   5  20   11177   5  10   11179   9  20
11333   4  10   11339  15  30   11377  10  30
11399  13  30   11777   3  10   11779  12  30
11999   5  10   13333   2   5   13337   9  20
13339   9  20   13379  18  60   13399   8  30
13777   8  20   13799  29  60   13999   7  20
17777   1   5   17779   5  20   17999   7  20
19999   1   5   33377   3  10   33379   7  20
33779  10  30   33799   9  30   37777   2   5
37799  10  30   37999   9  20   77779   4   5
77999   3  10   79999   1   5
-----------------------------------------------
                                          249
```
```
Table 4.
6-757-54 Case ==================== b c p ==================== 111113 3 6 111119 3 6 111133 3 15 111137 9 30 111139 4 30 111179 8 30 111199 5 15 111337 28 60 111377 25 60 111379 27 120 111779 15 60 111799 15 60 113333 3 15 113339 19 60 113377 16 90 113399 16 90 113777 16 60 113779 60 180 113999 13 60 117779 20 60 117799 23 90 119999 1 15 133333 3 6 133337 11 30 133339 7 30 133379 35 120 133399 15 60 133777 18 60 133799 55 180 133999 23 60 137777 10 30 137779 32 120 137999 28 120 139999 9 30 177779 3 30 177799 20 60 179999 9 30 199999 3 6 333337 2 6 333377 2 15 333379 13 30 333779 25 60 333799 12 60 337777 4 15 337799 20 90 337999 21 60 377777 2 6 377779 9 30 377999 21 60 379999 6 30 777779 1 6 777799 3 15 779999 1 15 799999 2 6 ``` ``` Table 5. 7-2709-74 Case ======================= b c p ======================= 1111117 3 7 1111133 6 21 1111139 6 42 1111177 5 21 1111199 5 21 1111333 11 35 1111337 22 105 1111339 29 105 1111379 50 210 1111399 25 105 1111777 8 35 1111799 25 105 1111999 9 35 1113337 38 140 1113377 51 210 1113379 89 420 1113779 95 420 1113799 113 420 1117777 9 35 1117799 49 210 1117999 37 140 1133333 10 21 1133339 38 105 1133377 47 210 1133399 52 210 1133777 65 210 1133779 182 630 1133999 68 210 1137779 106 420 1137799 169 630 1139999 25 105 1177777 6 21 1177799 37 210 1177999 47 210 1199999 6 21 1333333 4 7 1333337 10 42 1333339 11 42 1333379 51 210 1333399 31 105 1333777 32 140 1333799 103 420 1333999 28 140 1337777 38 105 1337779 101 420 1337999 90 420 1339999 19 105 1377779 52 210 1377799 98 420 1379999 55 210 1399999 8 42 1777799 26 105 1777999 37 140 1799999 12 42 1999999 1 7 3333337 3 7 3333377 2 21 3333379 14 42 3333779 28 105 3333799 23 105 3337777 8 35 3337799 50 210 3337999 48 140 3377777 3 21 3377779 24 105 3377999 48 210 3379999 19 105 3777779 10 42 3777799 23 105 3779999 19 105 3799999 10 42 7777799 6 21 7777999 16 35 7799999 5 21 ``` ``` Table 6. 
8-9177-101 Case ========================== b c p ========================== 11111113 4 8 11111117 3 8 11111119 3 8 11111137 13 56 11111179 12 56 11111333 13 56 11111339 42 168 11111377 46 168 11111399 41 168 11111777 15 56 11111779 42 168 11111999 15 56 11113333 9 70 11113337 70 280 11113339 79 280 11113379 197 840 11113399 49 420 11113777 66 280 11113799 209 840 11113999 70 280 11117777 13 70 11117779 63 280 11117999 62 280 11119999 11 70 11133337 63 280 11133377 147 560 11133379 173 1120 11133779 412 1680 11133799 383 1680 11137777 51 280 11137799 392 1680 11137999 178 1120 11177777 11 56 11177779 65 280 11177999 138 560 11179999 73 280 11333339 41 168 11333377 67 420 11333399 45 420 11333777 115 560 11333779 419 1680 11333999 123 560 11337779 403 1680 11337799 392 2520 11339999 53 420 11377777 41 168 11377799 376 1680 11377999 397 1680 11399999 36 168 11777779 36 168 11777999 138 560 11779999 52 420 13333333 4 8 13333337 13 56 13333339 14 56 13333379 64 336 13333399 43 168 13333777 75 280 13333799 228 840 13333999 64 280 13337777 62 280 13337779 173 1120 13337999 191 1120 13339999 60 280 13377779 190 840 13377799 382 1680 13379999 192 840 13399999 41 168 13777777 9 56 13777799 201 840 13777999 172 1120 13799999 52 336 13999999 10 56 17777777 1 8 17777779 14 56 17777999 61 280 17779999 53 280 17999999 10 56 19999999 2 8 33333337 2 8 33333379 9 56 33333779 37 168 33333799 44 168 33337777 10 70 33337799 52 420 33337999 44 280 33377777 17 56 33377779 65 280 33377999 127 560 33379999 68 280 33777779 38 168 33777799 54 420 33779999 62 420 33799999 39 168 37777777 1 8 37777799 32 168 37777999 60 280 37799999 35 168 37999999 10 56 77777999 17 56 77779999 6 70 ``` ``` Table 7. 9-33191-142 Case =============================== b c p =============================== 111111113 4 9 111111119 1 9 111111133 11 36 111111137 14 72 111111139 14 72 111111179 14 72 111111199 11 36 111111337 48 252 111111377 56 252 111111379 90 504 111111779 41 252 111111799 47 252 111113333 28 126 111113339 89 504 111113377 169 756 111113399 155 756 111113777 108 504 111113779 265 1512 111113999 69 504 111117779 92 504 111117799 159 756 111119999 24 126 111133333 22 126 111133337 131 630 111133339 135 630 111133379 503 2520 111133399 286 1260 111133777 231 1260 111133799 634 3780 111133999 237 1260 111137777 113 630 111137779 504 2520 111137999 470 2520 111139999 136 630 111177779 126 630 111177799 218 1260 111179999 117 630 111199999 21 126 111333337 119 504 111333377 248 1260 111333379 495 2520 111333779 833 5040 111333799 954 5040 111337777 272 1260 111337799 1525 7560 111337999 1012 5040 111377777 116 504 111377779 470 2520 111377999 886 5040 111379999 528 2520 111777779 84 504 111777799 227 1260 111779999 237 1260 111799999 89 504 113333333 8 36 113333339 59 252 113333377 137 756 113333399 161 756 113333777 240 1260 113333779 776 3780 113333999 242 1260 113337779 931 5040 113337799 1388 7560 113339999 255 1260 113377777 136 756 113377799 1376 7560 113377999 1550 7560 113399999 170 756 113777777 56 252 113777779 308 1512 113777999 918 5040 113779999 642 3780 113999999 39 252 117777779 48 252 117777799 127 756 117779999 183 1260 117799999 157 756 119999999 12 36 133333333 2 9 133333337 16 72 133333339 6 72 133333379 95 504 133333399 48 252 133333777 117 504 133333799 300 1512 133333999 66 504 133337777 161 630 133337779 428 2520 133337999 487 2520 133339999 116 630 133377779 440 2520 133377799 1047 5040 133379999 522 2520 133399999 81 504 133777777 58 252 133777799 719 3780 133777999 885 5040 133799999 254 1512 133999999 
42 252 137777777 11 72 137777779 89 504 137777999 403 2520 137779999 513 2520 137999999 100 504 139999999 13 72 177777779 8 72 177777799 54 252 177779999 135 630 177799999 94 504 179999999 10 72 333333337 3 9 333333377 5 36 333333379 11 72 333333779 65 252 333333799 38 252 333337777 14 126 333337799 135 756 333337999 110 504 333377777 23 126 333377779 126 630 333377999 257 1260 333379999 113 630 333777779 90 504 333777799 185 1260 333779999 230 1260 333799999 93 504 337777777 7 36 337777799 128 756 337777999 290 1260 337799999 150 756 337999999 47 252 377777777 1 9 377777779 14 72 377777999 121 504 377779999 96 630 377999999 38 252 379999999 14 72 777777799 5 36 777779999 13 126 777799999 32 126 779999999 9 36 799999999 1 9 ``` ``` Table 8. 10-118912-184 Case ============================== b c p ============================== 1111111117 2 10 1111111133 5 45 1111111139 9 90 1111111177 4 45 1111111199 5 45 1111111333 22 120 1111111337 79 360 1111111339 64 360 1111111379 97 720 1111111399 62 360 1111111777 23 120 1111111799 54 360 1111111999 29 120 1111113337 147 840 1111113377 157 1260 1111113379 492 2520 1111113779 496 2520 1111113799 482 2520 1111117777 20 210 1111117799 180 1260 1111117999 147 840 1111133333 45 252 1111133339 241 1260 1111133377 455 2520 1111133399 437 2520 1111133777 491 2520 1111133779 1379 7560 1111133999 479 2520 1111137779 709 5040 1111137799 1379 7560 1111139999 246 1260 1111177777 41 252 1111177799 433 2520 1111177999 490 2520 1111199999 45 252 1111333333 20 210 1111333337 210 1260 1111333339 259 1260 1111333379 1184 6300 1111333399 388 3150 1111333777 754 4200 1111333799 2242 12600 1111333999 673 4200 1111337777 449 3150 1111337779 2320 12600 1111337999 2500 12600 1111339999 451 3150 1111377779 1096 6300 1111377799 2288 12600 1111379999 1077 6300 1111399999 222 1260 1111777777 20 210 1111777799 414 3150 1111777999 799 4200 1111799999 235 1260 1111999999 24 210 1113333337 174 840 1113333377 435 2520 1113333379 702 5040 1113333779 2132 12600 1113333799 2225 12600 1113337777 763 4200 1113337799 4555 25200 1113337999 2473 16800 1113377777 421 2520 1113377779 2171 12600 1113377999 4252 25200 1113379999 2131 12600 1113777779 713 5040 1113777799 2343 12600 1113779999 2426 12600 1113799999 721 5040 1117777777 24 120 1117777799 428 2520 1117777999 806 4200 1117799999 503 2520 1117999999 127 840 1133333333 2 45 1133333339 76 360 1133333377 151 1260 1133333399 145 1260 1133333777 410 2520 1133333779 1392 7560 1133333999 453 2520 1133337779 2311 12600 1133337799 2823 18900 1133339999 376 3150 1133377777 431 2520 1133377799 4166 25200 1133377999 4384 25200 1133399999 400 2520 1133777777 194 1260 1133777779 1346 7560 1133777999 4606 25200 1133779999 2755 18900 1133999999 148 1260 1137777779 505 2520 1137777799 1317 7560 1137779999 2240 12600 1137799999 1317 7560 1139999999 56 360 1177777777 2 45 1177777799 153 1260 1177777999 442 2520 1177799999 459 2520 1177999999 142 1260 1199999999 4 45 1333333333 1 10 1333333337 18 90 1333333339 16 90 1333333379 101 720 1333333399 67 360 1333333777 174 840 1333333799 502 2520 1333333999 163 840 1333337777 250 1260 1333337779 697 5040 1333337999 729 5040 1333339999 219 1260 1333377779 993 6300 1333377799 2332 12600 1333379999 1066 6300 1333399999 218 1260 1333777777 137 840 1333777799 2279 12600 1333777999 2419 16800 1333799999 733 5040 1333999999 172 840 1337777777 69 360 1337777779 455 2520 1337777999 2107 12600 1337779999 2310 12600 1337999999 473 2520 1339999999 67 360 1377777779 87 720 1377777799 454 2520 1377779999 1118 6300 1377799999 
659 5040 1379999999 86 720 1399999999 12 90 1777777777 2 10 1777777799 70 360 1777777999 130 840 1777799999 247 1260 1777999999 166 840 1799999999 11 90 1999999999 1 10 3333333377 2 45 3333333379 16 90 3333333779 58 360 3333333799 68 360 3333337777 25 210 3333337799 181 1260 3333337999 145 840 3333377777 38 252 3333377779 233 1260 3333377999 405 2520 3333379999 226 1260 3333777779 233 1260 3333777799 415 3150 3333779999 419 3150 3333799999 221 1260 3337777777 21 120 3337777799 395 2520 3337777999 765 4200 3337799999 486 2520 3337999999 147 840 3377777777 1 45 3377777779 70 360 3377777999 409 2520 3377779999 430 3150 3377999999 144 1260 3379999999 65 360 3777777779 12 90 3777777799 62 360 3777779999 182 1260 3777799999 256 1260 3779999999 62 360 3799999999 20 90 7777777799 2 45 7777777999 19 120 7777799999 41 252 7777999999 21 210 7799999999 5 45 ```
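As promised above, here is a short sketch for reproducing a row of the 5D-10D table of maximal-permutation sets: p is the multinomial coefficient counting the distinct permutations, and c the number of primes among them (a Python illustration; `sympy.isprime` stands in for the paper's MATHEMATICA PrimeQ):

```
from math import factorial
from collections import Counter
from itertools import permutations
from sympy import isprime

def counts(ds):
    # Return (c, p) for the digit string ds: c primes among its
    # p = len(ds)! / prod(multiplicities!) distinct permutations.
    p = factorial(len(ds))
    for mult in Counter(ds).values():
        p //= factorial(mult)
    c = sum(isprime(int("".join(t))) for t in set(permutations(ds)))
    return c, p

print(counts("11339"))   # (15, 30), as in the 5D row of the table above
```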
# Bogoliubov’s Theory: A Paradigm of Quantum Phase Transitions

This short essay discusses the application of Bogoliubov's theory of superfluidity in the context of quantum phase transitions. The importance of N. N. Bogoliubov's ground-breaking paper On the Theory of Superfluidity in the development of an understanding of superfluidity cannot be overestimated. More than 50 years after the publication of this seminal work, it continues to play a dominant role in contemporary condensed matter physics. It therefore seems appropriate, on the occasion of commemorating Bogoliubov's 90th birthday, to submit a short essay discussing a modern application of his theory in the context of quantum phase transitions. Some of the material presented here is more extensively discussed in the review; other recent reviews can be found in the references. Bogoliubov's theory of superfluidity starts with the Lagrangian

$$\mathcal{L}=\varphi^{\ast}\left[i\partial_0-\epsilon(-i\nabla)+\mu_0\right]\varphi-\lambda_0|\varphi|^4,$$ (1)

where the complex scalar field $`\varphi (x)`$ describes the atoms of mass $`m`$ constituting the liquid, $`i\partial_0`$ is the total energy operator, while $`\epsilon(-i\nabla)=-\nabla^2/2m`$ is the kinetic energy operator, and $`\mu _0`$ the chemical potential. The last term with a positive coupling constant, $`\lambda _0>0`$, represents a weak repulsive contact interaction. The theory features a global U(1) symmetry, under which the matter field acquires an extra phase factor, $`\varphi (x)\to \mathrm{e}^{i\alpha }\varphi (x)`$, with $`\alpha `$ the transformation parameter. Depending on the ground state, which is determined by the minimum of the potential energy, the symmetry can be realized in two different ways. When $`\mu _0<0`$, the ground state is at $`\varphi =0`$, and the system is in the symmetrical state. As the chemical potential tends to zero, the theory becomes critical, and when $`\mu _0>0`$, the global U(1) symmetry is spontaneously broken by a nontrivial ground state, given by $`|\overline{\varphi }|^2=\mu _0/2\lambda _0`$. This quantity physically denotes the number density $`\overline{n}_0`$ of particles residing in the Bose-Einstein condensate. The spectrum of the single-particle excitations in this state is given by the celebrated Bogoliubov form,

$$E(\mathbf{k})=\sqrt{\epsilon^2(\mathbf{k})+2\mu_0\epsilon(\mathbf{k})},$$ (2)

whose most important signature is that at low momentum it takes the phonon form $`E(\mathbf{k})\simeq \sqrt{\mu _0/m}\,|\mathbf{k}|`$ predicted by Landau. The spectrum was shown by Beliaev to remain gapless when one-loop quantum corrections are included, and this was subsequently proven to hold to all orders in perturbation theory by Hugenholtz and Pines, meaning that the Bogoliubov theory describes a gapless mode. This mode is nothing but the Goldstone mode accompanying the spontaneous symmetry breakdown of the global U(1) symmetry, and is the only degree of freedom present in this state. In other words, the Bogoliubov theory is a phase-only theory. At zero temperature and in the absence of impurities, the phase field is governed by the effective Lagrangian

$$\mathcal{L}_{\mathrm{eff}}=-\overline{n}\left[\partial_0\phi+\frac{1}{2m}(\nabla\phi)^2\right]+\frac{\overline{n}}{2mc^2}\left[\partial_0\phi+\frac{1}{2m}(\nabla\phi)^2\right]^2,$$ (3)

where $`\overline{n}`$ is the average particle number density of the system at rest, characterized by a constant phase field $`\phi (x)=\mathrm{const}`$, and $`c`$ is the sound velocity, which to a first approximation equals $`c=\sqrt{\mu _0/m}`$.
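As a quick numerical illustration of the spectrum (2) and of the sound velocity just quoted, here is a minimal Python sketch (units with ħ = 1; the parameter values are arbitrary illustrative choices):

```
import numpy as np

m, mu = 1.0, 1.0
k = np.logspace(-3, 1, 5)                # a few momenta
eps = k**2 / (2 * m)                     # free-particle kinetic energy
E = np.sqrt(eps**2 + 2 * mu * eps)       # Bogoliubov spectrum, Eq. (2)
c = np.sqrt(mu / m)                      # sound velocity, first approximation
print(E / (c * k))   # -> 1 as k -> 0 (phonon regime); grows at large k
```

The printed ratio equals sqrt(1 + k²/4mμ), so the spectrum interpolates between Landau's phonon form c|k| at small momenta and the free-particle form k²/2m at large momenta.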
The phase rigidity in the spatial directions, i.e., the coefficient of $`\frac{1}{2}(\nabla\phi)^2`$, is seen to be given by $`\overline{n}/m`$, while that in the temporal direction is given by the compressibility $`\kappa `$ because

$$\frac{\overline{n}}{mc^2}=\overline{n}^2\kappa .$$ (4)

Both these rigidities are response functions. Since the chemical potential $`\mu `$ is represented in the effective theory (3) by

$$\mu(x)=-\partial_0\phi(x),$$ (5)

a single differentiation of the effective Lagrangian with respect to $`\mu `$ yields the particle number density $`n(x)=\overline{n}-(\overline{n}/mc^2)[\partial_0\phi+(\nabla\phi)^2/2m]`$ of the system slowly varying in space and time,

$$\frac{\partial\mathcal{L}_{\mathrm{eff}}}{\partial\mu(x)}=n(x),$$ (6)

while a second differentiation yields the compressibility,

$$\frac{\partial^2\mathcal{L}_{\mathrm{eff}}}{\partial\mu^2}=\overline{n}^2\kappa,$$ (7)

as required. It also follows from Eqs. (5) and (6) that $`n`$ and $`\phi `$ are canonically conjugate variables. The form of the effective theory (3), especially the combination $`\partial_0\phi+(\nabla\phi)^2/2m`$ in square brackets, is dictated by Galilei invariance. In cases where this symmetry is explicitly broken, as in the presence of impurities and at finite temperature, we expect changes in the relative weights of the coefficients (see below). Another observation made by Bogoliubov, momentous for the further development of the theory of superfluidity, is the so-called depletion of the condensate. He showed that even at the absolute zero of temperature not all the particles reside in the ground state, but

$$\frac{\overline{n}}{\overline{n}_0}\simeq 1+\frac{8}{3}\left(\frac{\overline{n}a^3}{\pi}\right)^{1/2},$$ (8)

where we replaced the coupling constant with the s-channel scattering length $`a=m\lambda /2\pi `$. (Recall that $`\overline{n}_0`$ denotes the density of particles in the condensate.) Due to the interparticle repulsion, particles are removed from the condensate and put in states of finite momentum. In a strongly interacting system like superfluid <sup>4</sup>He, the depletion is such that no more than about 8% of the particles condense in the zero-momentum state. Despite the depletion of the condensate, the phase rigidity in the spatial directions was found in Eq. (3) to be given, at the absolute zero of temperature and in the absence of impurities, by the total average particle number density $`\overline{n}/m`$. Since this coefficient denotes the superfluid particle number density $`\rho _\mathrm{s}`$ (divided by $`m^2`$), all the particles—not just those residing in the condensate—participate in the superfluid motion. This changes at finite temperature and also when impurities are included: Galilei invariance is then broken and $`\rho _\mathrm{s}`$ no longer equals $`m\overline{n}`$. On the other hand, the phase rigidity in the temporal direction as well as the first term in the effective Lagrangian (3) stay the same. This is because relation (5) remains true. In general we thus have as effective theory

$$\mathcal{L}_{\mathrm{eff}}=-\overline{n}\,\partial_0\phi-\frac{\rho_\mathrm{s}}{2m^2}(\nabla\phi)^2+\frac{1}{2}\overline{n}^2\kappa(\partial_0\phi)^2+\cdots.$$ (9)

Up to this point we have not specified the external parameter which must be varied to tune the chemical potential to its critical value where the system undergoes a phase transition. In the conventional application of the Bogoliubov theory, the control parameter is the temperature $`T`$.
The critical temperature $`T_\mathrm{c}`$ can be determined within the theory by calculating the finite-temperature effective potential and identifying the temperature at which the minimum starts to shift away from the origin. At the one-loop level, one finds:

$$T_\mathrm{c}=\pi\left[\sqrt{2}\,\zeta(\tfrac{3}{2})\right]^{-2/3}\frac{1}{m}\left(\frac{\mu}{\lambda}\right)^{2/3}-\frac{2}{3}\frac{\zeta(\frac{1}{2})}{\zeta(\frac{3}{2})}\,\mu,$$ (10)

where in obtaining this result a high-temperature expansion has been used. This is justified because the leading term is of the order $`\lambda ^{-2/3}`$, which is large for weak coupling. Equation (10) expresses the critical temperature in terms of the chemical potential. From the experimental point of view, however, it is more realistic to have the particle number density as the independent variable. One then finds instead:

$$\frac{T_\mathrm{c}-T_0}{T_0}=c_0\left(\overline{n}a^3\right)^\gamma,$$ (11)

where we again replaced $`\lambda `$ with the scattering length $`a`$, $`c_0=-\frac{8}{3}\zeta(\frac{1}{2})/\zeta^{1/3}(\frac{3}{2})\approx 2.82`$, $`\gamma =\frac{1}{3}`$, and $`T_0=(2\pi /m)\left[\overline{n}/\zeta (\frac{3}{2})\right]^{2/3}`$ is the critical temperature of a free Bose gas $`(\lambda =0)`$. It follows that the critical temperature is increased by the weak repulsive interaction. This is qualitatively different from the strongly interacting <sup>4</sup>He system. A free gas with <sup>4</sup>He parameters at vapor pressure would have a critical temperature of about 3.1 K, whereas liquid <sup>4</sup>He becomes superfluid at the lower temperature of 2.2 K. A similar picture emerges from the path-integral Monte Carlo simulations carried out by Grüter, Ceperley, and Laloë. They found that at low densities, corresponding to small $`a`$, the critical temperature is increased by the repulsive interaction, while at higher densities it is decreased. In the weak-coupling limit, they found numerically the same exponent $`\gamma =0.34\pm 0.03`$ as in Eq. (11), while the value of $`c_0`$ was found to be an order of magnitude smaller: $`c_0=0.34\pm 0.06`$. As argued by these authors, a moderate repulsive interaction suppresses density fluctuations, resulting in a more homogeneous system. This facilitates the formation of the large so-called exchange rings necessary to form a Bose-Einstein condensate. These exchange rings, as they appear in Feynman's theory of Bose-Einstein condensation, consist of bosons which are cyclically permuted in imaginary time (see Ref. for a recent account). At higher densities, the exchange is obstructed because, due to the strong repulsive interaction, it is more difficult for the particles to move. This leads to a lower critical temperature. We now turn to the main subject of this essay, and consider the quantum critical behavior of the Bogoliubov theory, first studied by Uzunov. The critical behavior of a system close to a quantum phase transition is dominated not by thermal fluctuations, as in a classical phase transition at finite temperature, but by quantum fluctuations. In this context, the Bogoliubov theory is considered to be a phenomenological theory similar to the Landau theory of classical phase transitions. The system undergoes a quantum transition at the absolute zero of temperature when the chemical potential approaches a critical value $`\mu _\mathrm{c}`$, which is not necessarily zero as in the case of the finite-temperature classical transition.
The fine tuning of the chemical potential can be achieved by varying a number of external parameters, such as the charge carrier density, the applied magnetic field, or the impurity strength. For values of the renormalized parameter larger than the critical value, $`\mu >\mu _\mathrm{c}`$, the global U(1) symmetry is spontaneously broken and the system is superfluid, with a single-particle spectrum given by the gapless Bogoliubov spectrum, implying that the system is compressible. On lowering $`\mu `$, this state is destroyed and replaced by an insulating state. In the absence of impurities, the insulating state is a so-called Mott insulator, characterized by the absence of phase rigidity in both spatial and temporal directions, and by an energy gap in the single-particle spectrum. This insulating state, which arises solely due to the repulsive interaction, is consequently incompressible. On the other hand, in the presence of impurities, the bosons become trapped by the impurities, i.e., Anderson localized. The resulting insulating state is a so-called Bose glass, characterized by a single-particle spectrum that is—like in the superfluid state—gapless. This state is therefore also compressible, so that the compressibility remains finite at the transition. To account for (quenched) impurities, the following term is added to the Bogoliubov theory:

$$\mathcal{L}_\mathrm{\Delta}=\psi(\mathbf{x})|\varphi(x)|^2,$$ (12)

with $`\psi (\mathbf{x})`$ a real random field whose distribution is assumed to be Gaussian,

$$P(\psi)=\exp\left[-\frac{1}{\mathrm{\Delta}_0}\int\mathrm{d}^dx\,\psi^2(\mathbf{x})\right],$$ (13)

and characterized by the impurity strength $`\mathrm{\Delta }_0`$. Physically, $`\psi `$ describes impurities randomly distributed in space. These impurities lead to an additional depletion of the condensate, given in $`d`$ space dimensions by

$$\overline{n}_\mathrm{\Delta}=2^{d/2-5}\pi^{-d/2}\mathrm{\Gamma}(2-d/2)\,m^{d/2}\lambda^{d/2-2}\overline{n}_0^{d/2-1}\mathrm{\Delta}.$$ (14)

The superfluid and normal mass densities $`\rho _\mathrm{s}`$ and $`\rho _\mathrm{n}`$, respectively, now become at the absolute zero of temperature

$$\rho_\mathrm{s}=m\left(\overline{n}-\frac{4}{d}\overline{n}_\mathrm{\Delta}\right),\qquad\rho_\mathrm{n}=\frac{4}{d}m\overline{n}_\mathrm{\Delta}.$$ (15)

It follows that the normal density is a factor $`4/d`$ larger than the mass density $`m\overline{n}_\mathrm{\Delta }`$ knocked out of the condensate by the impurities. (For $`d=3`$ this gives the factor $`\frac{4}{3}`$ first found in Ref. .) As argued by Huang and Meng, this implies that part of the zero-momentum states belongs (for $`d<4`$) not to the condensate, but to the normal fluid. Being trapped by the impurities, this fraction of the zero-momentum states is localized. In other words, the phenomenon of Anderson localization can be accounted for in the Bogoliubov theory of superfluidity by including a random field. The universality class defined by the zero-temperature Bogoliubov theory is relevant not only for describing the critical behavior of superfluid films (either with or without impurities), but also for describing that of other systems, including Josephson junction arrays and superconducting films. In the so-called composite-boson limit, where Cooper pairs form tightly bound states, the BCS theory directly maps onto the Bogoliubov theory, which, as we argued, is a phase-only theory. But even a weakly interacting BCS system was argued to be in the same universality class.
The reason is that the amplitude fluctuations of the order parameter are not critical at the transition (not even in the classical superconductor-to-normal transition in $`d=3`$); only the phase fluctuations are. The phase of the order parameter therefore constitutes the relevant degree of freedom, which is precisely the one described by the Bogoliubov theory. (See, however, Ref., where it is argued that the amplitude fluctuations cannot be neglected when considering quantum phase transitions in impure superconducting films.) The Bogoliubov theory presumably also forms the basis for the description of the critical behavior of fractional quantized Hall systems. To investigate the role of quantum fluctuations in the Bogoliubov theory we start with a dimensional analysis. Since, as far as the quantum critical behavior of this theory is concerned, the mass $`m`$ is an irrelevant parameter, it can be scaled away by introducing $`t^{\prime}=t/m`$, $`\mu_0^{\prime}=m\mu_0`$, $`\lambda_0^{\prime}=\lambda_0 m`$. The engineering dimensions of the various variables are then easily determined as:

$$[\mathbf{x}]=-1,\quad[t]=-2,\quad[\mu_0]=2,\quad[\lambda_0]=2-d,\quad[\varphi]=\tfrac{1}{2}d,$$ (16)

with $`d`$ the number of space dimensions, and where we dropped the primes again. Note that the time dimension counts double as compared to the space dimensions. This is typical for nonrelativistic theories, where the time derivative is accompanied by two space derivatives [see Eq. (1)]. In two space dimensions, the coupling constant $`\lambda _0`$ has zero engineering dimension, showing that the $`|\varphi |^4`$-term is a marginal operator, and that $`d_\mathrm{c}=2`$ is the upper critical space dimension above which the quantum critical behavior of the Bogoliubov theory becomes Gaussian. For $`d>d_\mathrm{c}`$ quantum fluctuations are irrelevant, while for $`d<d_\mathrm{c}`$ these fluctuations become crucial. Let us next compute the one-loop effective potential

$$\mathcal{V}_{\mathrm{eff}}=-\frac{\mu_0^2}{4\lambda_0}+\frac{1}{2}\int\frac{\mathrm{d}^dk}{(2\pi)^d}\,E(\mathbf{k}),$$ (17)

with $`E(\mathbf{k})`$ the gapless Bogoliubov spectrum (2). The integral over the loop momentum yields, close to the upper critical dimension $`d=2`$:

$$\mathcal{V}_{\mathrm{eff}}=-\frac{\mu_0^2}{4\lambda_0}-\frac{1}{4\pi\epsilon}\frac{m\mu_0^2}{\kappa^\epsilon}+\mathcal{O}(\epsilon^0),$$ (18)

where $`\epsilon=2-d`$, and $`\kappa `$ is an arbitrary renormalization-group scale parameter with the dimension of an inverse length. The right-hand side of Eq. (18) is seen to diverge when the upper critical dimension is approached. The theory can be rendered ultraviolet finite by introducing a renormalized coupling constant $`\lambda `$,

$$\frac{1}{\widehat{\lambda}}=\frac{\kappa^\epsilon}{\lambda_0}+\frac{m}{\pi\epsilon},$$ (19)

where $`\widehat{\lambda }=\lambda /\kappa ^\epsilon`$. Its definition is such that for arbitrary $`d`$, $`\widehat{\lambda }`$ has the same engineering dimension as $`\lambda _0`$ in the upper critical dimension $`d=2`$. As renormalization prescription we used the modified minimal subtraction. The beta function $`\beta (\widehat{\lambda })`$ follows as

$$\beta(\widehat{\lambda})=\kappa\frac{\partial\widehat{\lambda}}{\partial\kappa}\bigg|_{\lambda_0}=-\epsilon\widehat{\lambda}+\frac{m}{\pi}\widehat{\lambda}^2.$$ (20)

In the upper critical dimension, this yields only one fixed point, viz. the infrared-stable (IR) fixed point $`\widehat{\lambda }^{}=0`$. Below $`d=2`$, this fixed point is shifted to $`\widehat{\lambda }^{}=\epsilon\pi /m`$, implying that the system undergoes a 2nd-order quantum phase transition.
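The approach to this fixed point can be illustrated by integrating the flow equation (20) toward the infrared; here is a minimal Euler-step sketch in Python (step size and initial coupling are arbitrary illustrative choices):

```
import numpy as np

eps, m = 0.5, 1.0             # epsilon = 2 - d, here d = 3/2
lam = 2.0                     # initial value of the running coupling
dlog = -1e-3                  # d(log kappa) < 0: flow to the infrared
for _ in range(100_000):
    beta = -eps * lam + (m / np.pi) * lam**2   # Eq. (20)
    lam += beta * dlog
print(lam, eps * np.pi / m)   # both ~1.5708: lam flows to eps*pi/m
```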
Above the upper critical dimension, there is no (nontrivial) renormalization of the coupling constant, which explains why we omitted the subscript $`0`$ on $`\mu `$ and $`\lambda `$ in Eq. (10). Since Eq. (18) could be rendered finite solely by a renormalization of the coupling constant, it follows that the chemical potential is not renormalized to this order. As shown by Uzunov, these results remain true to all orders in perturbation theory. The reason for this behavior is the special analytic structure of the nonrelativistic propagator at criticality, representing only particles propagating forward in time. As a result, the self-energy (and consequently $`\mu `$) is not renormalized, and the full 4-point vertex function is given by a geometric series, leading to the same beta function (20) found at the one-loop order. Closely connected to this is that, despite the nontriviality of the IR fixed point in $`d<2`$, the critical indices characterizing it are Gaussian. This conclusion was confirmed by numerical simulations in $`d=1`$. This changes when impurities are included. A direct application of the renormalization group led to the conclusion that the IR fixed point becomes unstable. A more careful analysis, using a so-called double epsilon expansion, shows that the fixed point remains stable upon including impurities. The double epsilon expansion was originally introduced in statistical mechanics by Dorogovtsev to treat impurities of finite extent in a classical system. To consistently account for these in perturbation theory, one must assume their dimensionality $`\epsilon_\mathrm{d}`$ to be small, and perform, in addition to the usual epsilon expansion, an expansion in $`\epsilon_\mathrm{d}`$. The impurities described by Eq. (12) are static grains which trace out straight worldlines when time is included. In other words, the impurities are line-like in spacetime, and also have to be treated in a double epsilon expansion, assuming that their dimensionality $`\epsilon_\mathrm{d}`$ is not $`1`$, but small instead. The quantum critical behavior of the Bogoliubov theory in $`d`$ space dimensions with randomly distributed static impurities tracing out “worldlines” of dimensionality $`\epsilon_\mathrm{d}`$ falls in the universality class of a $`d`$-dimensional classical system with randomly distributed extended impurities of dimensionality $`2\epsilon_\mathrm{d}`$—at least to the one-loop order. Besides having a diverging correlation length $`\xi `$, 2nd-order quantum phase transitions also have a diverging correlation time $`\xi _t`$, indicating the time period over which the system fluctuates coherently. The way the diverging correlation time scales with the diverging correlation length,

$$\xi_t\sim\xi^z,$$ (21)

defines the so-called dynamic exponent $`z`$. The traditional scaling theory of classical 2nd-order phase transitions is easily extended so as to include the time dimension. Let $`\delta\propto K-K_\mathrm{c}`$, with $`K`$ the external control parameter, denote the distance from the phase transition, so that $`\xi\sim|\delta|^{-\nu}`$, with $`\nu `$ the correlation length exponent. At the absolute zero of temperature, a physical observable $`O(k_0,|\mathbf{k}|,K)`$ at finite energy $`k_0`$ and momentum $`\mathbf{k}`$ can in the critical region be written as

$$O(k_0,|\mathbf{k}|,K)=\xi^{d_O}\mathcal{O}(\xi_t k_0,\xi|\mathbf{k}|),\qquad(T=0),$$ (22)

where $`d_O`$ is the scaling dimension of the observable $`O`$.
The right-hand side does not depend explicitly on $`K`$, but only implicitly through $`\xi `$ and $`\xi _t`$. Since a physical system is always at some finite temperature, we have to investigate how the scaling law (22) changes when the temperature becomes nonzero. The easiest way to include temperature in a quantum field theory is to go over to imaginary time $`\tau =it`$, with $`\tau `$ restricted to the interval $`0\le\tau\le\beta `$, where $`\beta =1/T`$ is the inverse temperature. The time dimension thus becomes compactified. The critical behavior of a phase transition at finite temperature is still controlled by the quantum critical point provided $`\xi _t<\beta `$, so that the system does not notice the finite extent of the time dimension. Instead of the zero-temperature scaling (22), we now have the finite-size scaling

$$O(k_0,|\mathbf{k}|,K,\beta)=\beta^{d_O/z}\mathcal{O}(\beta k_0,\beta^{1/z}|\mathbf{k}|,\beta/\xi_t),\qquad(T\ne 0).$$ (23)

The distance to the quantum critical point is measured by the ratio $`\beta/\xi_t\sim|\delta|^{z\nu}/T`$. Let us apply these general considerations to the effective theory (9). The singular part of the free energy density $`f_{\mathrm{sing}}`$, which scales near the transition as

$$f_{\mathrm{sing}}\sim\xi^{-(d+z)},$$ (24)

arises from the low-energy, long-wavelength fluctuations of the Goldstone field. The ensemble averages give

$$\langle(\nabla\phi)^2\rangle\sim\xi^{-2},\qquad\langle(\partial_0\phi)^2\rangle\sim\xi_t^{-2}\sim\xi^{-2z}.$$ (25)

Combined, these hyperscaling arguments yield the following scaling of the rigidity constants:

$$\rho_\mathrm{s}\sim\xi^{-(d+z-2)},\qquad\overline{n}^2\kappa\sim\xi^{-(d-z)}\sim|\delta|^{(d-z)\nu}.$$ (26)

The first conclusion is consistent with the universal jump in the superfluid density predicted by Nelson and Kosterlitz for a Kosterlitz-Thouless phase transition, which corresponds to taking $`z=0`$ and $`d=2`$. In an impure system undergoing an Anderson transition, the compressibility $`\overline{n}^2\kappa `$ is nonsingular at the critical point, and hence $`z=d`$ for repulsively interacting bosons in an impure medium. Surprisingly, the same conclusion holds for an impure fermionic system. For $`d=1`$ it follows that space and time appear symmetric, as in a relativistic theory. In a clean system, on the other hand, with a density-driven Mott transition, i.e., $`\delta\propto\mu-\mu_\mathrm{c}`$, $`f_{\mathrm{sing}}`$ can also be directly differentiated with respect to the chemical potential to yield, for the singular part of the compressibility,

$$\overline{n}^2\kappa_{\mathrm{sing}}\sim|\delta|^{(d+z)\nu-2}.$$ (27)

In this case $`\overline{n}^2\kappa\sim\overline{n}^2\kappa_{\mathrm{sing}}`$, so that $`z\nu =1`$, in accord with the Gaussian values $`\nu =\frac{1}{2},z=2`$ found by Uzunov for the pure case in $`d<2`$. The above hyperscaling arguments have been extended by Fisher, Grinstein, and Girvin to include a $`1/|\mathbf{x}|`$-Coulomb potential. This potential is important for the quantum phase transitions in charged systems because the Coulomb repulsion suppresses fluctuations in the charge density and simultaneously enhances those in the canonically conjugate variable $`\phi `$, thereby disordering the ordered state. When the $`1/|\mathbf{x}|`$-Coulomb potential is included, the quadratic terms of the effective theory in Fourier space become

$$\mathcal{L}_{\mathrm{eff}}^{(2)}=-\frac{1}{2}\left(\rho_\mathrm{s}\mathbf{k}^2-\frac{1}{\widehat{e}^2}k_0^2|\mathbf{k}|^{d-1}\right)|\phi(k_0,\mathbf{k})|^2,$$ (28)

where $`\widehat{e}`$ is the renormalized charge.
Using similar hyperscaling arguments as before, one finds that this charge scales as

$$\widehat{e}^2\sim\xi^{1-z}.$$ (29)

Arguing that in the presence of random impurities the charge is nonsingular at the transition, the authors of Ref. concluded that

$$z=1.$$ (30)

This again is an exact result, which replaces the value $`z=d`$ of the neutral system in an impure medium. Most experiments on quantum phase transitions in charged systems measure the conductivity $`\sigma `$. To describe such systems, we minimally couple the Bogoliubov theory to an electromagnetic vector potential $`(A_0,\mathbf{A})`$. The conductivity turns out to be related to the superfluid mass density via

$$\sigma(k)=i\left(\frac{e}{m}\right)^2\frac{\rho_\mathrm{s}(k)}{k_0}.$$ (31)

On account of the scaling relation (26), it then follows that

$$\sigma\sim\xi^{-(d-2)},$$ (32)

implying that the conductivity, and therefore the resistivity, is a marginal operator in two space dimensions. The magnetic field $`H`$ scales with $`\xi `$ as $`H\sim\mathrm{\Phi}_0/\xi^2`$, where $`\mathrm{\Phi }_0=2\pi /e`$ is the magnetic flux quantum. This implies that the scaling dimension $`d_\mathbf{A}`$ of $`\mathbf{A}`$ is unity,

$$d_\mathbf{A}=1,$$ (33)

so that $`|\mathbf{A}|\sim\xi^{-1}`$. From this it in turn follows that the electric field $`E=|\mathbf{E}|`$ scales as $`E\sim\xi_t^{-1}\xi^{-1}\sim\xi^{-(z+1)}`$, and that the scaling dimension $`d_{A_0}`$ of $`A_0`$ is $`z`$,

$$d_{A_0}=z,$$ (34)

so that $`A_0\sim\xi_t^{-1}\sim\xi^{-z}`$. Let us now be specific and consider quantum phase transitions triggered by changing either the applied magnetic field, i.e., $`\delta\propto H-H_\mathrm{c}`$, or the charge carrier density, i.e., $`\delta\propto n-n_\mathrm{c}`$. For DC ($`k_0=0`$) conductivities in the presence of an external electric field $`E`$ we have, on account of the general finite-size scaling form (23) with $`k_0=|\mathbf{k}|=0`$:

$$\sigma(K,T,E)=\varsigma(|\delta|^{\nu z}/T,|\delta|^{\nu(z+1)}/E).$$ (35)

This shows that conductivity measurements close to a quantum critical point of the kind discussed here should in general collapse onto two branches when plotted as a function of the dimensionless combinations $`|\delta |^{\nu z}/T`$ and $`|\delta |^{\nu (z+1)}/E`$: a lower branch bending down for the insulating state and an upper branch tending to infinity for the other state. The best collapse of the data determines the values of $`\nu z`$ and $`\nu (z+1)`$. In other words, the temperature and electric-field dependence determine the critical exponents $`\nu `$ and $`z`$ independently. The table below shows experimental data for the critical exponents $`z`$ and $`\nu `$ of the superconductor-to-insulator transition in thin films, the Hall-liquid-to-insulator transition in fractional quantized Hall systems, and the conductor-to-insulator transition in silicon MOSFET's at extremely low electron number densities.

| Transition | $`z`$ | $`\nu `$ |
| --- | --- | --- |
| Superconductor-to-Insulator | $`1.0\pm 0.1`$ | $`1.36\pm 0.05`$ |
| Hall-Liquid-to-Insulator | $`1.0`$ | $`2.3`$ |
| Conductor-to-Insulator | $`0.8\pm 0.1`$ | $`1.5\pm 0.1`$ |

A few remarks are in order. First, the values for the dynamic exponent $`z`$ found in these systems are in accordance with the prediction $`z=1`$ recorded in Eq. (30), which was obtained using general hyperscaling arguments for an impure system with a $`1/|\mathbf{x}|`$-Coulomb potential. Second, the values of the critical exponents characterizing the Hall-liquid-to-insulator transition are universal and independent of the filling factor—whether an integer or a fraction.
Third, earlier experiments on silicon MOSFET's at lower densities seemed to confirm the general belief, based on the work by Abrahams et al., that such two-dimensional electron systems do not undergo a quantum phase transition. In that paper, where electron-electron interactions were ignored, it was demonstrated that impurities always localize the electrons at the absolute zero of temperature, thus excluding conducting behavior. Apparently, the situation changes drastically at low electron number densities, where the $`1/|\mathbf{x}|`$-Coulomb interaction becomes important. The values of the critical exponents found for this transition are surprisingly close to those found for the superconductor-to-insulator transition. Since further experiments in an applied magnetic field also revealed a behavior closely resembling that near the superconductor-to-insulator transition, it is speculated that the conducting state in silicon MOSFET's is in fact superconducting.

Acknowledgments. I'm grateful to NCTS and B. Rosenstein for the financial support and the hospitality at the Center in Hsinchu, Taiwan, where this work was completed.
# Scaling of the distribution of price fluctuations of individual companies

## I Introduction

The study of financial markets poses many challenging questions. For example, how can one understand a strongly fluctuating system that is constantly driven by external information? And how can one account for the role of the feedback between the markets and the outside world, or of the complex interactions between traders and assets? An advantage for the researcher trying to answer these questions is the availability of huge amounts of data for analysis. Indeed, the activities at financial markets result in several observables, such as the values of different market indices, the prices of the different stocks, trading volumes, etc. Some of the most widely studied market observables are the values of market indices. Previous empirical studies show that the distribution of fluctuations—measured by the returns—of market indices has slowly decaying tails and that the distributions apparently retain the same functional form for a range of time scales. Fluctuations in market indices reflect the average behavior of the price fluctuations of the companies comprising them. For example, the S&P 500 is defined as the sum of the market capitalizations (stock price multiplied by the number of outstanding shares) of 500 companies representative of the US economy. Here, we focus on a more “microscopic” quantity: individual companies. We analyze the tick-by-tick data for the 1000 publicly-traded US companies with the largest market capitalizations and systematically study the statistical properties of their stock price fluctuations. A preliminary study reported that the distribution of the 5 min returns for 1000 individual companies and the S&P 500 index decays as a power law with an exponent $`\alpha\approx 3`$—well outside the stable Lévy regime ($`\alpha <2`$). Earlier independent studies of individual stock returns on longer time scales yielded similar results. These findings raise the following questions: First, how does the nature of the distribution of individual stock returns change with increasing time scale $`\mathrm{\Delta }t`$? In other words, does the distribution retain its power-law functional form for longer time scales, or does it converge to a Gaussian, as found for market indices? If the distribution indeed converges to Gaussian behavior, how fast does this convergence occur? For the S&P 500 index, for example, one finds the distribution of returns to be consistent with a non-stable power-law functional form ($`\alpha\approx 3`$) until approximately 4 days, after which an onset of convergence to Gaussian behavior is found. Second, why is it that the distribution of returns for individual companies and for the S&P 500 index have the same asymptotic form? This finding is unexpected, since the returns of the S&P 500 are the weighted sums of the returns of 500 companies. Hence, we would expect the S&P 500 returns to be distributed approximately as a Gaussian, unless there were significant dependencies between the returns of different companies which prevent the central limit theorem from applying. To answer the first question, we extend previous work on the distribution of 5 min returns by performing an empirical analysis of individual company returns for time scales up to 46 months. Our analysis uses two distinct databases, detailed below.
We find that the cumulative distribution of individual-company returns is consistent with a power-law asymptotic behavior with exponent $`\alpha\approx 3`$, which is outside the stable Lévy regime. We also find that these distributions appear to retain the same functional form for time scales up to approximately 16 days. For longer time scales, we observe results consistent with a slow convergence to Gaussian behavior. To answer the second question, we randomize each of the 500 time series of returns for the constituent 500 stocks of the S&P 500 index. A surrogate “index return” constructed from the randomized time series shows fast convergence to a Gaussian. Further, we find that the functional form of the distribution of returns remains unchanged for different system sizes (measured by the market capitalization), while the standard deviation decays as a power law of market capitalization. The organization of this paper is as follows. Section II describes the databases studied and the data analyzed. Sections III, IV, and V present results for the distribution of returns for individual companies for a wide range of time scales. Section VI discusses the role of cross-correlations between companies and possible reasons why market indices have statistical properties very similar to those of individual companies. Section VII contains some concluding remarks.

## II The Data analyzed

We analyze two different databases covering securities from the three major US stock markets, namely (i) the New York Stock Exchange (NYSE), (ii) the American Stock Exchange (AMEX), and (iii) the National Association of Securities Dealers Automated Quotation (Nasdaq) stock market. The NYSE is the oldest stock exchange, tracing its origin to the Buttonwood Agreement of 1792. The NYSE is an agency auction market, that is, trading at the NYSE takes place by open bids and offers by Exchange members, acting as agents for institutions or individual investors. Buy and sell orders are brought to the trading floor, and prices are determined by the interplay of supply and demand. As of the end of November 1998, the NYSE lists over 3,100 companies. These companies have over $`2\times 10^{11}`$ shares, worth approximately USD $`10^{13}`$, available for trading on the Exchange. In contrast to the NYSE, Nasdaq uses computers and telecommunications networks which create an electronic trading system wherein the market participants meet over the computer rather than face to face. Nasdaq's share volume reached $`1.6\times 10^{11}`$ shares in 1997 and its dollar volume reached USD $`4.4\times 10^{12}`$. As of December 1998, the Nasdaq Stock Market listed over 5,400 US and non-US companies. Nasdaq and AMEX merged in October 1998, after the end of the period studied in this work. The first database we consider is the trades and quotes (TAQ) database, for which we analyze the 2-year period January 1994 to December 1995. The TAQ database, which has been published by the NYSE since 1993, covers all trades at the three major US stock markets. This huge database is available in the form of CD-ROMs. The rate of publication was 1 CD-ROM per month for the period studied, but has recently increased to 2–3 CD-ROMs per month. The total number of transactions for the largest 1000 stocks is of the order of $`10^9`$ in the 2-year period studied. The second database we analyze is the Center for Research in Security Prices (CRSP) database.
The CRSP Stock Files cover common stocks listed on the NYSE beginning in 1925, the AMEX beginning in 1962, and the Nasdaq Stock Market beginning in 1972. The files provide complete historical descriptive information and market data including comprehensive distribution information, high, low and closing prices, trading volumes, shares outstanding, and total returns. The CRSP Stock Files provide monthly data for the NYSE beginning December 1925 and daily data beginning July 1962. For the AMEX, both monthly and daily data begin in July 1962. For the Nasdaq Stock Market, both monthly and daily data begin in July 1972. We also analyze the S&P 500 index, which comprises 500 companies chosen for market size, liquidity, and industry group representation in the US. In our study, we analyze data recorded at intervals of less than 1 min that cover the 13 years from January 1984 to December 1996. The total number of data points in this 13-year period exceeds $`4.5\times 10^6`$.

## III The distribution of returns for $`\mathrm{\Delta }t<1`$ day

The basic quantity studied for individual companies — $`i=1,2,\mathrm{},1000`$ — is the market capitalization $`S_i(t)`$, defined as the share price multiplied by the number of outstanding shares. The time $`t`$ runs over the working hours of the stock exchange—removing nights, weekends and holidays. For each company, we analyze the return

$$G_i\equiv G_i(t,\mathrm{\Delta}t)\equiv\mathrm{ln}\,S_i(t+\mathrm{\Delta}t)-\mathrm{ln}\,S_i(t).$$ (1)

For small changes in $`S_i(t)`$, the return $`G_i(t,\mathrm{\Delta }t)`$ is approximately the forward relative change,

$$G_i(t,\mathrm{\Delta}t)\simeq\frac{S_i(t+\mathrm{\Delta}t)-S_i(t)}{S_i(t)}.$$ (2)

For time scales shorter than 1 day, we analyze the data from the TAQ database. We consider the largest 1000 companies, in decreasing order of their market capitalization on the first trading day, 3 January 1994. We sample the price of these 1000 companies at 5 min intervals. In order to obtain time series for the market capitalization, we multiply the stock price of each company by the number of outstanding shares for that company at each sampling time. We thereby generate a time series, sampled at 5 min intervals, for the market capitalization of each of the largest 1000 companies. Each of the 1000 time series has approximately 40,000 data points—corresponding to the number of 5 min intervals in the 2-year period—or about 40 million data points in total. For each time series of market capitalizations, we compute the 5 min returns using Eq. (1). We filter the data to remove spurious events, such as those due to inevitable recording errors.

### A The distribution of returns for $`\mathrm{\Delta }t=5`$ min

Figure 1(a) shows the cumulative distributions of returns $`G_i`$ for $`\mathrm{\Delta }t=5`$ min — the probability of a return larger than or equal to a threshold — for 10 individual companies randomly selected from the 1000 companies that we analyze. For each company $`i`$, the asymptotic behavior of the functional form of the cumulative distribution is “visually” consistent with a power law,

$$P(G_i>x)\sim\frac{1}{x^{\alpha_i}},$$ (3)

where $`\alpha _i`$ is the exponent characterizing the power-law decay. In Fig. 1(b) we show the histogram of $`\alpha _i`$, obtained from power-law regression fits to the positive tails of the individual cumulative distributions of all 1000 companies studied. The histogram has most probable value $`\alpha _{MP}=3`$.
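The computation of returns and tail-exponent estimates described here can be summarized in a short sketch (Python with numpy; the function names are illustrative, and this is a simplified stand-in for the paper's actual filtering and fitting pipeline):

```
import numpy as np

def log_returns(prices, dt=1):
    # Eq. (1): G(t) = ln S(t + dt) - ln S(t) for an evenly sampled series.
    s = np.asarray(prices, dtype=float)
    return np.log(s[dt:]) - np.log(s[:-dt])

def alpha_regression(g, x_min):
    # Exponent of P(G > x) ~ x**(-alpha) from a log-log fit of the
    # empirical cumulative distribution above x_min (positive tail).
    x = np.sort(g[g > x_min])
    p = 1.0 - np.arange(len(x)) / len(x)      # empirical P(G > x)
    slope, _ = np.polyfit(np.log(x), np.log(p), 1)
    return -slope

def alpha_hill(g, x_min):
    # Hill estimator of the same exponent (the alternative estimate
    # discussed further below).
    x = g[g > x_min]
    return len(x) / np.sum(np.log(x / x_min))
```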
Next, we compute the time-averaged volatility $`v_i\equiv v_i(\mathrm{\Delta}t)`$ of company $`i`$ as the standard deviation of the returns over the 2-year period,

$$v_i^2\equiv\langle G_i^2\rangle_T-\langle G_i\rangle_T^2,$$ (4)

where $`\langle\cdots\rangle_T`$ denotes a time average over the 40,000 data points of each time series for the 2-year period studied. Figure 1(a) suggests that the widths of the individual distributions differ for different companies; indeed, companies with small values of market capitalization are likely to fluctuate more. In order to compare the returns of different companies with different volatilities, we define the normalized return $`g_i\equiv g_i(t,\mathrm{\Delta}t)`$ as

$$g_i\equiv\frac{G_i-\langle G_i\rangle_T}{v_i}.$$ (5)

Figure 1(c) shows the ten cumulative distributions of the normalized returns $`g_i`$ for the same ten companies as in Fig. 1(a). The distributions for all 1000 normalized returns $`g_i`$ have functional forms similar to these ten. Hence, to obtain better statistics, we compute a single distribution of all the normalized returns. The cumulative distribution $`P(g>x)`$ shows a power-law decay [Fig. 2(a)],

$$P(g>x)\sim\frac{1}{x^\alpha}.$$ (6)

Regression fits in the region $`2\le g\le 80`$ yield

$$\alpha =\{\begin{array}{cc}3.10\pm 0.03\hfill & \text{(positive tail)}\hfill \\ 2.84\pm 0.12\hfill & \text{(negative tail)}\hfill \end{array}.$$ (7)

These estimates of the exponent $`\alpha `$ are well outside the stable Lévy range, which requires $`0<\alpha <2`$. In order to obtain an alternative estimate for $`\alpha `$, we use the methods of Hill. We first calculate the inverse of the local logarithmic slope of $`P(g)`$, $`\zeta^{-1}(g)\equiv \mathrm{d}\mathrm{log}P(g)/\mathrm{d}\mathrm{log}g`$, where $`g`$ is rank-ordered. We then estimate the asymptotic slope $`\alpha `$ by extrapolating $`\zeta `$ as a function of $`1/g\to 0`$. Figure 3 shows the results for the negative and positive tails for the 5 min returns of individual companies, each using all returns larger than 5 standard deviations. Extrapolation of the linear regression lines yields:

$$\alpha =\{\begin{array}{cc}2.84\pm 0.12\hfill & \text{(positive tail)}\hfill \\ 2.73\pm 0.13\hfill & \text{(negative tail)}\hfill \end{array}.$$ (8)

### B Scaling of the distribution of returns for $`\mathrm{\Delta }t\le 1`$ day

The next logical step would be to extend the previous procedure to time scales longer than 5 min. However, this approach leads to unreliable results, the reason being that the estimate of the time-averaged volatility—used to define the normalized returns of Eq. (5)—has estimation errors that increase with $`\mathrm{\Delta }t`$. For the distribution of 5 min returns, the previous procedure relies on 40,000 data points per company for the estimation of the time-averaged volatility. For 500 min returns the number of data points available is reduced to 400 per company, which leads to a much larger error in the estimate of $`v_i(\mathrm{\Delta }t)`$. To circumvent the difficulty arising from the large uncertainty in $`v_i(\mathrm{\Delta }t)`$, we use an alternative procedure for estimating the volatility, which relies on two observations. The first is that volatility decreases with market capitalization [Fig. 4]. The second is that companies with similar market capitalization typically have similar volatilities.
### B Scaling of the distribution of returns for $`\mathrm{\Delta }t\le 1`$ day The next logical step would be to extend the previous procedure to time scales longer than 5 min. However, this approach leads to unreliable results, the reason being that the estimate of the time-averaged volatility—used to define the normalized returns of Eq. (5)—has estimation errors that increase with $`\mathrm{\Delta }t`$. For the distribution of 5 min returns, the previous procedure relies on 40,000 data points per company for the estimation of the time-averaged volatility. For 500 min returns the number of data points available is reduced to 400 per company, which leads to a much larger error in the estimate of $`v_i(\mathrm{\Delta }t)`$. To circumvent the difficulty arising from the large uncertainty in $`v_i(\mathrm{\Delta }t)`$, we use an alternative procedure for estimating the volatility which relies on two observations. The first is that volatility decreases with market capitalization \[Fig. 4\]. The second is that companies with similar market capitalization typically have similar volatilities. Based on these observations, we make the hypothesis that the market capitalization is the most influential factor in determining the volatility, $$v_i=v_i(S,\mathrm{\Delta }t).$$ (9) Hence, we group the returns of all the companies into “bins” according to the market capitalization of each company at the beginning of the interval for which the return is computed. We then compute the conditional probability of the $`\mathrm{\Delta }t`$ returns for each of the bins of market capitalization. We define $`G_S\equiv G_S(t,\mathrm{\Delta }t)`$ as the $`\mathrm{\Delta }t`$ returns of the subset of all companies with market capitalization $`S`$, and we then calculate the cumulative conditional probability $`P(G_S>x|S)`$. Figure 5(a) shows $`P(G_S>x|S)`$ for 30 min returns for four different bins of $`S`$. The functional form of each of the four distributions is consistent with a power-law. We define a normalized return $$g_S\equiv g_S(t,\mathrm{\Delta }t)\equiv \frac{G_S(\mathrm{\Delta }t)-\langle G_S(\mathrm{\Delta }t)\rangle _S}{v_S(\mathrm{\Delta }t)},$$ (10) where $`\langle \mathrm{\cdots }\rangle _S`$ denotes an average over all returns of all companies with market capitalization $`S`$. The average volatility $`v_S\equiv v_S(\mathrm{\Delta }t)`$ is defined through the relation $$v_S^2\equiv \langle G_S^2\rangle _S-\langle G_S\rangle _S^2.$$ (11) We show in Fig. 5(b) the cumulative conditional probability of the normalized 30 min returns $`P(g_S>x|S)`$ for the same four bins shown in Fig. 5(a). Visually, it seems clear that these distributions have power-law functional forms with similar values of $`\alpha `$. Hence, to obtain better statistics, we consider the normalized returns for all values of $`S`$ and compute a single cumulative distribution. Figure 6(a) shows the distribution of normalized 30 min returns. We test whether our alternative procedure of normalizing the returns by the time-averaged volatility for each bin of market capitalization $`S`$ is consistent with the previous procedure of normalizing by the time-averaged volatility of each company through Eq. (5). To this end, we also show in Fig. 6(a) the distribution of normalized 30 min returns using the normalization of Eq. (5). The distributions of returns obtained by both procedures are consistent with a power-law decay of the same form as Eq. (6). Power-law regression fits to the positive tail yield estimates of $`\alpha =3.21\pm 0.08`$ for the former method and $`\alpha =3.23\pm 0.05`$ for the latter, confirming the consistency of the two procedures. The values of the exponent for 30 min time scales, $`\alpha =3.21\pm 0.08`$ (positive tail) and $`\alpha =3.01\pm 0.12`$ (negative tail), are also consistent with the estimates, Eq. (7), for 5 min normalized returns. Next, we compute the distribution of returns for longer time scales $`\mathrm{\Delta }t`$. Figure 6(b) shows the cumulative distribution of the normalized returns for time scales from 5 min up to 1 day. We observe good “data collapse” with consistent values of $`\alpha `$, which suggests that the distribution of returns retains its functional form for larger $`\mathrm{\Delta }t`$. The scaling of the distribution of returns for individual companies is consistent with previous results for the distribution of the S&P 500 index returns . The estimates of the exponent $`\alpha `$ from power-law regression fits to the cumulative distribution and from the Hill estimator are listed in Table I. 
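The binning-and-normalization procedure of Eqs. (9)–(11) amounts to a few lines of array code. The sketch below uses hypothetical capitalization and return arrays and a log-uniform bin grid; the bin edges and the synthetic size dependence of the volatility are our own choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical inputs: market cap S at the start of each interval and the
# corresponding return G; smaller caps are given larger volatility.
S = 10 ** rng.uniform(7, 11, size=200_000)                  # dollars
G = 0.01 * (S / 1e9) ** -0.2 * rng.standard_t(df=3, size=200_000)

# Bins of uniform length in log S, as described in the text.
edges = np.logspace(7, 11, num=9)
bin_id = np.digitize(S, edges)

# Eqs. (10)-(11): normalize the returns within each bin by v_S.
g_S = np.empty_like(G)
for b in np.unique(bin_id):
    sel = bin_id == b
    g_S[sel] = (G[sel] - G[sel].mean()) / G[sel].std()

# The pooled normalized returns form the single distribution P(g_S > x)
# whose tail can be fitted exactly as before.
print(f"pooled sample: mean {g_S.mean():.3f}, std {g_S.std():.3f}")
```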
### C Scaling of the moments for $`\mathrm{\Delta }t<1`$ day In the preceding subsection we reported that the distribution of returns retains the same functional form for 5 min $`<\mathrm{\Delta }t<`$ 1 day. We can further test this scaling behavior by analyzing the moments of the distribution of normalized returns $`g`$, $$\mu _k\equiv \langle |g|^k\rangle ,$$ (12) where $`\langle \mathrm{\cdots }\rangle `$ denotes an average over all the normalized returns for all the bins. Since $`\alpha \approx 3`$, we expect $`\mu _k`$ to diverge for $`k\ge 3`$, and hence we compute $`\mu _k`$ for $`k<3`$. Figure 6(c) shows the moments of the normalized returns $`g`$ for different time scales from 5 min up to 1 day. The moments do not vary significantly for the above time scales, thus confirming the scaling behavior of the distribution observed in Fig. 6(b). ## IV The distribution of returns for 1 day $`\le \mathrm{\Delta }t\le `$ 16 days For time scales of 1 day or longer, we analyze data from the CRSP database. We analyze approximately $`3.5\times 10^7`$ daily records for about 16,000 companies for the 35-year period 1962–96. We expect the market capitalization of a company to change dramatically over such a long period of time. Further, we expect small companies to be more volatile than large companies. Hence, large changes that might occur in the market capitalization of a company will lead to large changes in its average volatility. To control for these changes in market capitalization, we adopt the method that was used in the previous subsection for $`\mathrm{\Delta }t>`$ 5 min. Thus, we compute the cumulative conditional probability $`P(G_S>x|S)`$ that the return $`G_S\equiv G_S(t,\mathrm{\Delta }t)`$ is greater than $`x`$, for a given bin of average market capitalization $`S`$. We first divide the entire range of $`S`$ into bins of uniform length on a logarithmic scale. We then compute a separate probability distribution for the returns $`G_S`$ which belong to a bin of average market capitalization $`S`$. Figure 7(a) shows the cumulative distribution of daily returns $`P(G_S>x|S)`$ for different values of $`S`$. Since the widths of these distributions are different for different $`S`$, we analyze the normalized returns $`g_S`$, which were defined in Eq. (10). Figure 7(b) shows the cumulative distribution $`P(g_S>x)`$ of the normalized daily returns $`g_S`$. These distributions appear to have similar functional forms for different values of $`S`$. In order to improve statistics, we compute a single cumulative distribution $`P(g_S>x)`$ of the normalized returns for all $`S`$. We observe a power-law behavior of the same form as Eq. (6). Regression fits yield estimates for the exponent, $`\alpha =2.96\pm 0.09`$ for the positive tail and $`\alpha =2.70\pm 0.10`$ for the negative tail. Figure 8(a) compares the cumulative distributions of the normalized 1 day returns obtained from the CRSP and TAQ databases. The estimates of the power-law exponents obtained from regression fits are in good agreement for these two databases. Figures 8(b,c) show the distributions of normalized returns for $`\mathrm{\Delta }t=1,4,16`$ days. The estimates of the exponent $`\alpha `$ increase slightly in value for the positive tail, while for the negative tail the estimates of $`\alpha `$ are approximately constant. The increase in $`\alpha `$ for the positive tail is also reflected in the moments \[Fig. 8(d)\]. 
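Comparing the moments of Eq. (12) across time scales is a direct numerical test of scaling. A minimal sketch, using two independent synthetic samples in place of the normalized 5 min and 1 day returns:

```python
import numpy as np

rng = np.random.default_rng(3)

def moments(g, ks):
    """Eq. (12): mu_k = <|g|^k>, computed only for k < 3."""
    return np.array([np.mean(np.abs(g) ** k) for k in ks])

ks = np.arange(0.5, 3.0, 0.5)

# Stand-ins for normalized returns at two time scales; under scaling the
# moment curves mu_k vs k should coincide within statistical error.
for label in ("5 min", "1 day"):
    g = rng.standard_t(df=3, size=500_000)
    g = (g - g.mean()) / g.std()
    print(label, np.round(moments(g, ks), 2))
```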
## V The distribution of returns for $`\mathrm{\Delta }t\ge 16`$ days The scaling behavior of the distributions of returns appears to break down for $`\mathrm{\Delta }t\ge 16`$ days, and we observe indications of slow convergence to Gaussian behavior. In Figs. 9(a,b) we show the cumulative distributions of the normalized returns for $`\mathrm{\Delta }t\ge 16`$ days. For the positive tail, we find indications of convergence to a Gaussian, while the negative tail appears not to converge. The convergence to Gaussian behavior is also apparent from the behavior of the moments for these time scales \[Fig. 9(c)\]. To summarize our results for the distribution of individual company returns, we find that (i) the distribution of normalized returns for individual companies is consistent with a power-law behavior characterized by an exponent $`\alpha \approx 3`$, (ii) the distributions of returns retain the same functional form for a wide range of time scales $`\mathrm{\Delta }t`$, varying over 3 orders of magnitude, 5 min $`\le \mathrm{\Delta }t\le 6240`$ min = 16 days, and (iii) for $`\mathrm{\Delta }t>16`$ days, the distribution of returns appears to slowly converge to a Gaussian \[Fig. 10\]. ## VI Cross-Correlations In this section we address the second question that we posed initially: why do the distributions of returns for individual companies and for the S&P 500 index have the same asymptotic form? In the previous sections, we presented evidence that the distribution of returns scales for a wide range of time intervals. In a previous study , we demonstrated that this scaling behavior is possibly due to time dependencies, in particular volatility correlations. Next, we will show that, just as time correlations lead to the time scaling of the distributions of returns, cross-correlations among different companies lead to a functional form of the distribution of index returns similar to that for single companies. A direct way of analyzing the cross-correlations is by computing the cross-correlation matrix . Here, we take a different approach, by analyzing the distribution of returns as a function of market capitalization. First, we compare the distributions of the S&P 500 index and that of individual companies. Figures 11(a,b) show the cumulative distribution $`P(g>x)`$ for individual companies and for the S&P 500 index. The distributions show the same power-law behavior for $`2\le g\le 80`$. This is surprising, because the distribution of index returns $`G_{SP500}(t,\mathrm{\Delta }t)`$ does not show convergence to Gaussian behavior—even though the 500 distributions of individual returns $`G_i(t,\mathrm{\Delta }t)`$ are not stable, so that the central limit theorem would seem to apply. Consider the family of index returns defined as the partial sum $$G_{(N)}(t,\mathrm{\Delta }t)\equiv \underset{i=1}{\overset{N}{\sum }}w_iG_i(t,\mathrm{\Delta }t),$$ (13) where the weights $`w_i\equiv S_i/\sum _{j=1}^NS_j`$ have weak time dependencies. From the central limit theorem for random variables with finite variance, we expect that the probability distribution of $`G_{(N)}`$ would change systematically with $`N`$ and approach a Gaussian for large $`N`$, provided there are no significant dependencies among the returns $`G_i`$ for different $`i`$. Instead, we find that the distribution of $`G_{(N)}`$ has the same asymptotic behavior as that for individual companies. In order to show that the scaling behavior may be due to cross-correlations between companies, we first destroy any existing dependencies among the returns of different companies by randomizing each of the 1000 time series $`G_i(t)`$. By adding up the shuffled series, we construct a shuffled index return $`G_{(N)}^{sh}(t)`$ out of statistically independent companies with the same distribution of returns. 
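The shuffling test just described is easy to sketch numerically. Below, each synthetic company series is independently permuted in time, which preserves every marginal distribution while destroying all cross-correlations and time correlations; the weights, sizes, and the kurtosis diagnostic are our own choices.

```python
import numpy as np

rng = np.random.default_rng(4)

N, T = 500, 20_000
G = 0.01 * rng.standard_t(df=5, size=(N, T))   # stand-in company returns
w = np.full(N, 1.0 / N)                        # equal weights for simplicity

# Eq. (13): index return as a weighted sum over companies.
G_index = w @ G

# Shuffled index: permute each series independently in time.
G_shuffled = np.vstack([rng.permutation(row) for row in G])
G_sh_index = w @ G_shuffled

def excess_kurtosis(x):
    x = (x - x.mean()) / x.std()
    return np.mean(x ** 4) - 3.0               # 0 for a Gaussian

# For these independent synthetic series both sums are already close to
# Gaussian, as the central limit theorem predicts; with real (correlated)
# data the original index retains the fat tail while the shuffled one
# does not.
print("index:   ", excess_kurtosis(G_index))
print("shuffled:", excess_kurtosis(G_sh_index))
```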
Fig. 11(c) shows the cumulative distribution of the shuffled index returns $`G_{(N)}^{sh}(t,\mathrm{\Delta }t)`$ for increasing $`N`$ and $`\mathrm{\Delta }t=5`$ min. The distribution changes with $`N`$, and approaches a Gaussian shape for large $`N`$, which indicates that the scaling in Fig. 11(a) is caused by non-trivial dependencies between different companies. ## VII Discussion We have presented a systematic analysis, on two different databases, of the distribution of returns for individual companies for time scales $`\mathrm{\Delta }t`$ ranging from 5 min up to $`\approx 4`$ years. We find that the distribution of returns is consistent with a power-law asymptotic behavior, characterized by an exponent $`\alpha \approx 3`$—well outside the stable Lévy regime $`0<\alpha <2`$—for time scales up to approximately 16 days. For longer time scales, the scaling behavior appears to break down and we observe “slow” convergence to Gaussian behavior. We also find that the distributions of returns of individual companies and of the S&P 500 index have the same asymptotic behavior. This scaling behavior does not hold when the cross-correlations between companies are destroyed, suggesting the existence of correlations between companies—as occurs in strongly interacting physical systems, where power-law correlations at the critical point result in scale-invariant properties. Recent studies of the cross-correlation matrix using methods of random matrix theory also show the existence of correlations that are present through a wide range of time scales from 30 min up to 1 day . These studies show that the largest eigenvalue of the cross-correlation matrix corresponds to correlations that pervade the entire market, while a few other large eigenvalues correspond to clusters of companies that are correlated with each other. ## VIII Acknowledgments We thank J.-P. Bouchaud, M. Barthélemy, S. V. Buldyrev, P. Cizeau, X. Gabaix, I. Grosse, S. Havlin, K. Illinski, C. King, C.-K. Peng, B. Rosenow, D. Sornette, D. Stauffer, S. Solomon, J. Voit, and especially R. N. Mantegna for stimulating discussions and helpful suggestions. We thank X. Gabaix, C. King, J. Stein, and especially T. Lim for help with obtaining the data. We are also very grateful to L. Giannitrapani of the SCV at Boston University for her generous help in allocating the necessary computer resources, and to R. Tompolski for his help throughout this work. MM thanks DFG and LANA thanks FCT/Portugal for financial support. The Center for Polymer Studies is supported by NSF. ## A Dependence of volatility on size We find that the average volatility for each bin, $`v_S(\mathrm{\Delta }t)`$, shows an interesting dependence on the market capitalization. In Fig. 4, we plot the standard deviation as a function of size on a log-log scale for $`\mathrm{\Delta }t=1`$ day. We find a power-law dependence of the standard deviation of the returns on the market capitalization, with exponent $`\beta \approx 0.2`$ — very similar to the values reported for the annual sales of firms, the GDP of countries, and university research budgets. For larger time scales the exponent gradually decreases, approaching the value $`\beta \approx 0.09`$ for $`\mathrm{\Delta }t=1000`$ days.
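The $`\beta `$ of Fig. 4 comes from a straight-line fit on a log-log scale. A minimal sketch with synthetic (capitalization, volatility) pairs, built to follow $`v_S\propto S^\beta `$ with $`\beta =0.2`$ plus noise; the numbers are illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical (cap, volatility) pairs mimicking Fig. 4: v ~ S^(-beta).
S = 10 ** rng.uniform(7, 11, size=5_000)
v = 0.02 * (S / 1e9) ** -0.2 * np.exp(0.1 * rng.standard_normal(S.size))

# log v = -beta * log S + const; the slope of the fit estimates -beta.
slope, _ = np.polyfit(np.log(S), np.log(v), 1)
print(f"fitted beta ~ {-slope:.2f}")   # should recover ~0.2
```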
# Black Holes, Bandwidths and Beethoven ## 1 Introduction Functions which contain only frequencies up to a certain maximum frequency occur in various contexts from theoretical physics to the applied sciences. For example, in quantum field theory the method of “ultraviolet” regularization by energy-momentum cut-off yields fields which are frequency limited. Frequency limited functions also occur for example as so-called “bandlimited signals” in communication engineering. Intuitively, one may expect that a frequency limited function, $`\varphi (t)`$, can vary at most as fast as its highest frequency component, $`\omega _{max}`$. In fact, this is not the case. Aharonov et al and Berry gave explicit examples - which they named superoscillations - which drastically demonstrate that frequency limited functions are able to oscillate for arbitrarily long finite intervals arbitrarily faster than the highest frequency component which they contain. It has even been conjectured, in , that for example 5000 seconds of a 20 KHz bandwidth recording of a symphony of Beethoven can be part of a 1Hz bandlimited signal. Before we begin our investigation of superoscillations a few remarks on our use of terminology may be in order. Frequency limited functions and superoscillations not only occur in ultraviolet cut-off quantum field theory and in information theory but also in a whole variety of other physical contexts. Superoscillations are known to occur for example with evanescent waves and quantum billiards, see , and with effects of apparent superluminal propagation in unstable media, such as media with inverted level populations, see e.g. . We will here mostly be concerned with the general properties of superoscillations and we could therefore use as our terminology the language of any one of these contexts where superoscillations occur. Our choice of terminology here will be to use both the language of quantum field theory and the language of information theory. We make this choice because, on the one hand, our main interest is in the implications of superoscillations in ultraviolet regular quantum field theories. On the other hand, we will often find it advantageous to use the concrete and intuitive terminology of information theory. We will for example often use terms such as “signal” and “bandwidth” where we mean “field” and “ultraviolet cut-off”. For our purposes, the main advantage of the language of information theory will be that this language contains several useful terms which describe properties of bandlimited signals - and which by correspondence also describe properties of ultraviolet cut-off fields - for which there does not seem to exist an established corresponding terminology in the language of quantum field theory. These will be terms such as “data transmission rate”, “noise”, or “signal reconstruction from samples”. We will introduce these terms as needed, and we will discuss their corresponding meaning in quantum field theory. Concerning the physics of superoscillations in ultraviolet regularized quantum field theories, an interesting possibility has been discussed in . These authors suggest that the existence of superoscillations may resolve the transplanckian energies paradox of black hole radiation: Let us recall how this paradox arises for black hole radiation as derived in the standard free field theory formalism. One considers a Hawking photon which is observed at asymptotic distance from the black hole and which has some typical energy $`E`$. 
The calculation of the redshift shows that the same photon, when it was still close to the horizon, say at a Planckian distance, should have had a proper energy of the order of $`Ee^{\alpha M^2}`$ where $`\alpha `$ is of order one and where for a macroscopic black hole, in Planckian units, $`M\approx 10^{40}`$. The paradox is that the assumptions which went into the derivation of the existence of Hawking radiation (no interactions or backreactions) likely do not hold true at those far transplanckian energies. See for example -. This, therefore, raises the question whether the phenomenon of Hawking radiation is dependent on assumptions about the physics at transplanckian energies. In particular, a fundamental ultraviolet cut-off may well exist at a Planck- or string scale, and the question arises whether Hawking radiation is compatible with the existence of a natural ultraviolet cut-off. A number of studies have therefore investigated the problem of Hawking radiation in the presence of various kinds of ultraviolet cut-off. For reviews, see e.g. or . An intuitive example is Unruh’s consideration of the dumb hole , which is an acoustic analog of the black hole, where a “horizon” forms where a stationarily flowing fluid’s velocity exceeds the velocity of the sound waves. The consensus in the literature appears to be that Hawking radiation is indeed a robust phenomenon - if only for thermodynamical reasons, for example. However, there does not seem to exist a consensus about how precisely the transplanckian frequencies paradox is to be resolved. The recent work by Rosu and Reznik which we mentioned above aims at resolving the transplanckian energies paradox by employing superoscillations. Their main argument is that even fields with a strict ultraviolet cut-off at some maximum frequency can still display arbitrarily high frequency oscillations, indeed any transplanckian frequencies, in some finite region, e.g. close to the horizon, namely if the field superoscillates. In this context, as in all contexts where an argument is based on the phenomenon of superoscillations, the argument can only be as good as our understanding of the properties of superoscillations. We will therefore address here three points concerning the general properties of superoscillations: Firstly, we will apply methods recently developed in to obtain exact results about the extent to which frequency limited functions can superoscillate. Namely, we will show that among the functions with frequency cut-off $`\omega _{max}`$ there always exist functions which pass through any finite number of arbitrarily prespecified points. We will also show that superoscillations cannot be prespecified on any continuous interval. We can translate this result into the language of information theory: The implication is that a 20KHz recording of a Beethoven symphony cannot occur as part of a 1Hz bandlimited signal - but that 1 Hz bandlimited signals can indeed always be found which coincide with the Beethoven symphony at arbitrarily many discrete points in time. Our result on the extent to which frequency limited functions can superoscillate shows, in particular, that frequency limited functions cannot be reliably characterized as varying no faster than their highest Fourier component. This raises the problem of finding a reliable characterization of the effect of frequency limitation on the “behavior” of functions. 
Therefore, secondly, we will show that a reliable characterization of the effect of frequency limitation on the behavior of functions is in terms of an uncertainty relation: If a strictly frequency limited function, $`\varphi (t)`$, superoscillating or not, is sampled at the so-called Nyquist rate, then the standard deviation $`\mathrm{\Delta }T`$ of its samples $`\varphi (t_n)`$ is bounded from below by $`\mathrm{\Delta }T>1/4\omega _{max}`$. We will therefore conclude that a frequency limit is not a limit to how quickly a function can vary, but that instead a frequency limit is a limit to how much a function’s Nyquist rate samples can be peaked. Thirdly, we will explain how this characterization of frequency limited functions generalizes to time-varying frequency limits $`\omega _{max}(t)`$. We will apply these results to a recently developed generalized Shannon sampling theorem . Translated into the language of quantum field theory, our results will show, for example, that a frequency cut-off can possess many of the advantages of lattice regularizations without needing to break translation invariance. For example, the Shannon sampling theorem and its generalization imply that the number of degrees of freedom per unit volume (we assume a euclidean formulation) is literally finite for ultraviolet cut-off fields: the fields are fully determined if they are specified on any one of a family of equivalent lattices whose spacing is determined by the ultraviolet cut-off. The family of lattices is covering the entire continuous space so that translational invariance is not broken. We will cover the general case where the density of degrees of freedom is spatially varying. ## 2 Examples of Superoscillations Let us consider functions, $`\varphi (t)`$, which are frequency limited with a maximum frequency $`\omega _{max}`$, i.e. which contain only plane waves up to this frequency. We can write such functions in the form $$\varphi (t)=\int _{-\infty }^{+\infty }𝑑u\,r(u)\,e^{it\omega (u)}$$ (1) where $`\omega (u)`$ is a real-valued function which obeys $$|\omega (u)|\le \omega _{max}\text{for all}u\in \mathbb{R},$$ (2) and where $`r(u)`$ is a complex-valued function. Berry gives the explicit example (among other examples) $$\omega (u):=\frac{\omega _{max}}{1+u^2}$$ (3) and $$r(u):=\frac{1}{\sqrt{2\pi ϵ}}e^{-\frac{(u-ic)^2}{2ϵ}},$$ (4) where $`ϵ`$ and $`c`$ are positive constants. The claim is that for suitable choices of $`ϵ`$ and $`c`$ the resulting function $`\varphi (t)`$ displays superoscillations, i.e. that in some interval it oscillates faster than $`\omega _{max}`$. There is a simple argument for why this should be true. Berry reports this argument to be due to Aharonov: Namely, for sufficiently small $`ϵ`$ the function $`r(u)`$ should effectively become a Gaussian approximation to a Dirac $`\delta `$-function which is peaked around the imaginary value $`u=ic`$. Therefore, the factor $`r(u)`$ in Eq.1 should effectively project out the value of the integrand at $`u=ic`$. Due to Eq.3, this value of $`u`$ corresponds to the frequency: $$\omega (ic)=\frac{\omega _{max}}{1-c^2}.$$ (5) Clearly, for suitable choices of the parameter $`c`$, this frequency can be made arbitrarily larger than the bandwidth $`\omega _{max}`$. Thus, the situation is that on the one hand, $`\varphi (t)`$ certainly contains only frequencies up to $`\omega _{max}`$ because $`\omega (u)\le \omega _{max}`$ for all real values of $`u`$, and the integration in Eq.1 is over real $`u`$ only. On the other hand, for imaginary values of $`u`$ the value of $`\omega (u)`$ can become much larger than $`\omega _{max}`$. Indeed, the behavior of $`r(u)`$ indicates that the integral should effectively be peaked around the imaginary value $`u=ic`$. This suggests that, for choices of $`c`$ close enough to $`1`$, in some interval the function $`\varphi (t)`$ could display superoscillations with frequencies around $`\omega _{so}\approx \omega _{max}/(1-c^2)>\omega _{max}`$. This intuitive argument for superoscillations has been confirmed, in , both by asymptotic analysis and by numerical calculations. Berry also explains in that the price for a function to have this type of a superoscillating period is that the function also possesses a period with exponentially large amplitudes - nevertheless, the whole function is square integrable. We remark that another method for constructing examples of superoscillations has been found in .
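The saddle-point argument above is straightforward to probe numerically. The sketch below evaluates the integral of Eq. (1) for Berry's choice of $`\omega (u)`$ and $`r(u)`$ by quadrature on the real axis and reads off the local oscillation frequency from the phase of $`\varphi (t)`$. The parameter values, grid sizes, and the phase-derivative diagnostic are our own choices, so this is an illustration of the mechanism rather than a reproduction of Berry's asymptotic analysis; the local frequency only approaches $`\omega _{so}`$ as $`ϵ\to 0`$.

```python
import numpy as np

# Berry's example, Eqs. (1)-(5): w(u) = w_max/(1+u^2) never exceeds w_max
# on the real axis, while r(u) is a Gaussian peaked at the imaginary
# point u = ic. Parameter values below are our own choice.
w_max, eps, c = 1.0, 0.02, 0.6

u = np.linspace(-30.0, 30.0, 60_001)           # real integration contour
du = u[1] - u[0]
r = np.exp(-(u - 1j * c) ** 2 / (2.0 * eps)) / np.sqrt(2.0 * np.pi * eps)
w = w_max / (1.0 + u ** 2)

t = np.linspace(-5.0, 5.0, 1_001)
phi = np.array([np.sum(r * np.exp(1j * tt * w)) * du for tt in t])

# Local frequency from the phase of phi(t); near t = 0 it should lie above
# the nominal bandlimit w_max, approaching w_so = w_max/(1-c^2) for small eps.
local_freq = np.gradient(np.unwrap(np.angle(phi)), t)
print("local frequency at t = 0:", round(local_freq[t.size // 2], 3))
print("predicted w_so:          ", round(w_max / (1.0 - c ** 2), 3))
```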
## 3 To which extent are frequency limited functions able to superoscillate? ### 3.1 Definitions Let us in the following refer to frequency limited functions $`\varphi (t)`$ as “signals” and to the variable $`t`$ as “time”. More precisely, we define the class of signals $`\varphi `$ with bandwidth $`\omega _{max}`$ as the Hilbert space of square integrable functions on the interval $`[-\omega _{max},\omega _{max}]`$ in frequency space $$H_{\omega _{max}}=L^2(-\omega _{max},\omega _{max})$$ (6) with the usual scalar product: $$(\varphi _1,\varphi _2)=\int _{-\omega _{max}}^{\omega _{max}}𝑑\omega \,\stackrel{~}{\varphi }_1(\omega )^{*}\stackrel{~}{\varphi }_2(\omega )$$ (7) We then define the set $`B_{\omega _{max}}`$ of strictly bandlimited signals with bandwidth $`\omega _{max}`$ as the set of all functions $`\stackrel{~}{\varphi }(\omega )`$ on frequency space for which there exists a $`c(\varphi )<\omega _{max}`$ such that $$\stackrel{~}{\varphi }(\omega )=0\text{if}|\omega |>c(\varphi )$$ (8) and whose derivatives $`d^n\stackrel{~}{\varphi }(\omega )/d\omega ^n`$ are square integrable for all $`n\in \mathbb{N}`$. Clearly, the strictly bandlimited signals are dense in the Hilbert space of bandlimited signals $`H_{\omega _{max}}`$: $$H_{\omega _{max}}=\overline{B_{\omega _{max}}}$$ (9) ### 3.2 Proposition We claim that each Hilbert space of bandlimited signals $`H_{\omega _{max}}`$ contains signals such that the Fourier transform of $`\stackrel{~}{\varphi }(\omega )`$, i.e. the signal $`\varphi (t)`$, passes through any finite number of arbitrarily prespecified points. Explicitly, we can fix a value for the bandwidth, $`\omega _{max}`$. Then, we choose $`N`$ arbitrary times $`\{t_i\}_{i=1}^N`$ and $`N`$ arbitrary amplitudes $`\{a_i\}_{i=1}^N`$. The claim is that there always exist signals of bandwidth $`\omega _{max}`$ which obey: $$\varphi (t_i)=a_i\text{for all}i=1,2,\mathrm{\dots },N$$ (10) In field theory language, we are claiming that for any choice of an ultraviolet cut-off frequency there are fields which obey the cut-off and which at an arbitrary finite number of points in space take arbitrary prespecified values. In particular, since these points and the amplitude of the field at these points can be chosen arbitrarily, we claim that even fields which obey a cut-off can vary arbitrarily wildly over any finite interval. ### 3.3 Proof Let us first outline the proof. We will begin by considering the simple symmetric operator $`T:\varphi (t)\mapsto t\varphi (t)`$ on $`B_{\omega _{max}}`$. 
Its self-adjoint extensions, $`T(\alpha )`$, then yield a set of Hilbert bases $`\{𝐭_n(\alpha )\}`$ of $`H_{\omega _{max}}`$ as their eigenbases. The amplitudes of bandlimited signals $`\varphi (t)`$ can be written as scalar products with these eigenvectors: $`\varphi (t)=(𝐭,\varphi )`$. The proof of the proposition will consist in showing that any finite set $`\{𝐭_i\}_{i=1}^N`$ of basis vectors among all eigenvectors of the self-adjoint extensions is linearly independent. #### The “time operator” $`T`$ We define the operator $`T`$ on the domain $`D_T:=B_{\omega _{max}}`$ as the operator which acts on strictly bandlimited signals $`\varphi (t)`$ by multiplication with the time variable: $$T:\varphi (t)\mapsto T\varphi (t)=t\varphi (t)$$ (11) The operator $`T`$ maps strictly bandlimited functions into strictly bandlimited functions: $$T:B_{\omega _{max}}\to B_{\omega _{max}}$$ (12) This is because $`T`$ acts in the Fourier representation as $$T:\stackrel{~}{\varphi }(\omega )\mapsto T\stackrel{~}{\varphi }(\omega )=i\frac{d}{d\omega }\stackrel{~}{\varphi }(\omega )$$ (13) and, clearly, if $`\stackrel{~}{\varphi }(\omega )`$ obeys the bandwidth condition, Eq.8, so does its derivative $`\partial _\omega \stackrel{~}{\varphi }(\omega )`$. The elements $`\varphi \in D_T`$ are strictly bandlimited and they therefore obey, in particular: $$\stackrel{~}{\varphi }(\omega _{max})=0=\stackrel{~}{\varphi }(-\omega _{max})$$ (14) Thus, for all $`\varphi _1,\varphi _2\in B_{\omega _{max}}`$: $$\int _{-\omega _{max}}^{\omega _{max}}𝑑\omega \,\stackrel{~}{\varphi }_1(\omega )^{*}(i\partial _\omega )\stackrel{~}{\varphi }_2(\omega )=\int _{-\omega _{max}}^{\omega _{max}}𝑑\omega \left((i\partial _\omega )\stackrel{~}{\varphi }_1\right)^{*}\stackrel{~}{\varphi }_2(\omega )$$ (15) Consequently, $$(\varphi ,T\varphi )=(T\varphi ,\varphi )=(\varphi ,T\varphi )^{*}\phantom{\rule{1em}{0ex}}\forall \varphi \in B_{\omega _{max}}$$ (16) and therefore, $$(\varphi ,T\varphi )\in \mathbb{R}\phantom{\rule{1em}{0ex}}\forall \varphi \in B_{\omega _{max}}$$ (17) which means that $`T`$ is a symmetric operator. Nevertheless, $`T`$ is not self-adjoint. Indeed, $`T`$ possesses no (normalizable nor nonnormalizable) eigenvectors. This is because the only candidates for eigenvectors, namely the plane waves $`e^{-2\pi it\omega }`$, do not obey Eqs.8,14. Thus, the plane waves are not strictly bandlimited and therefore they are not in the domain $`D_T=B_{\omega _{max}}`$ of $`T`$. On the other hand, while the plane waves are not strictly bandlimited, they are nevertheless bandlimited, i.e. they are elements of the Hilbert space $`H_{\omega _{max}}`$. Indeed, the domain of $`T`$ can be suitably enlarged to yield a whole family of self-adjoint extensions of $`T`$, each with a discrete subset of the plane waves as an eigenbasis. We will derive these self-adjoint extensions below. For a standard reference on the functional analysis of self-adjoint extensions, see e.g. . #### The self-adjoint extensions $`T(\alpha )`$ of $`T`$, and their eigenbases There exists a $`U(1)`$- family of self-adjoint extensions $`T(\alpha )`$ of $`T`$: The self-adjoint operator $`T(\alpha )`$ is obtained by enlarging the domain of $`T`$ by signals, $`\varphi `$, which obey the boundary condition: $$\stackrel{~}{\varphi }(\omega _{max})=e^{i\alpha }\stackrel{~}{\varphi }(-\omega _{max})$$ (18) To be precise: We first close the operator $`T`$. Then, the domain $`D_{T^{*}}`$ of $`T^{*}`$ consists of all those signals $`\varphi \in H`$ for which also $`i\partial _\omega \stackrel{~}{\varphi }(\omega )\in H_{\omega _{max}}`$. The signals $`\varphi \in D_{T^{*}}`$ are not required to obey any boundary conditions. Thus, all plane waves are eigenvectors of $`T^{*}`$. 
Note that while some plane waves are orthogonal, most are not. This is consistent because $`T^{*}`$ is not a symmetric operator: due to the lack of boundary conditions in its domain, $`T^{*}`$ also has complex expectation values. Any self-adjoint extension $`T(\alpha )`$ of $`T`$ is a restriction of $`T^{*}`$ by imposing a boundary condition of the form of Eq.18: $$D_{T(\alpha )}=\{\varphi \in D_{T^{*}}\,|\,\stackrel{~}{\varphi }(\omega _{max})=e^{i\alpha }\stackrel{~}{\varphi }(-\omega _{max})\}.$$ (19) For each choice of a phase $`e^{i\alpha }`$ we obtain an operator $`T(\alpha )`$ which is self-adjoint and diagonalizable. Its orthonormal eigenvectors, $`\{𝐭_n^{(\alpha )}\}_{n=-\infty }^{+\infty }`$, obeying $$T(\alpha )𝐭_n(\alpha )=t_n(\alpha )𝐭_n(\alpha ),$$ (20) form a Hilbert basis for $`H_{\omega _{max}}`$. In frequency space, they are the plane waves $$\stackrel{~}{𝐭}_n^{(\alpha )}(\omega )=\frac{e^{-2\pi it_n(\alpha )\omega }}{\sqrt{2\omega _{max}}}$$ (21) which correspond to the $`T(\alpha )`$-eigenvalues: $$t_n(\alpha )=\frac{n}{2\omega _{max}}-\frac{\alpha }{4\pi \omega _{max}},\phantom{\rule{1em}{0ex}}n\in \mathbb{Z}$$ (22) As mentioned before, each eigenvector of a self-adjoint extension is also an eigenvector of $`T^{*}`$, the adjoint of $`T`$: $$T^{*}𝐭_n(\alpha )=t_n(\alpha )𝐭_n(\alpha )\phantom{\rule{1em}{0ex}}\forall n,\alpha $$ (23) The eigenvalues of $`T^{*}`$, i.e. the eigenvalues of all the extensions $`T(\alpha )`$, together, cover the real line exactly once, i.e. for each $`t\in \mathbb{R}`$ there exists exactly one $`e^{i\alpha }`$ and one $`n`$ such that $`t=t_n(\alpha )`$. We will therefore occasionally write simply $`𝐭`$ for $`𝐭_n(\alpha )`$. In this notation, Eq.23 reads: $$T^{*}𝐭=t𝐭$$ (24) Using the scalar product, Eq.7, the signal $`\varphi (t)`$, i.e. the Fourier transform of the function $`\stackrel{~}{\varphi }(\omega )`$, can then be written simply as: $$\varphi (t)=(𝐭,\varphi )$$ (25) Thus, the signal as a time-dependent function $`\varphi (t)`$ is the expansion of the abstract signal $`\varphi `$ in an overcomplete set of vectors, namely in all the eigenbases of the family of operators $`T(\alpha )`$. As an immediate consequence we recover the Shannon sampling theorem: #### The Shannon sampling theorem, and its translation into field theory terminology The Shannon sampling theorem states that if the amplitudes of a strictly bandlimited signal $`\varphi (t)`$ are known at discrete points in time with spacing $$t_{n+1}-t_n=1/2\omega _{max},$$ (26) which is the so-called Nyquist rate, then the signal $`\varphi (t)`$ can already be calculated for all $`t`$: Namely, let us fix one $`\alpha `$. Then, to know the values $`\varphi (t_n(\alpha ))`$ of the function $`\varphi (t)`$ at the discrete set of eigenvalues $`t_n(\alpha )`$ (whose spacing, from Eq.22, is $`1/2\omega _{max}`$), is to know the coefficients of the vector $`\varphi `$ in the Hilbert basis $`\{𝐭_n(\alpha )\}`$. Thus, $`\varphi `$ is fully determined as a vector in the Hilbert space $`H_{\omega _{max}}`$. Therefore, its coefficients can be calculated in any arbitrary Hilbert basis. Thus, in particular, the values of $`\varphi (t)=(𝐭,\varphi )`$ can be calculated for all $`t`$: $$\varphi (t)=\underset{n=-\infty }{\overset{\infty }{\sum }}(𝐭,𝐭_n(\alpha ))\varphi (t_n(\alpha ))$$ (27) Clearly, Eq.27 is obtained simply by inserting the resolution of the identity $`1=\sum _{n=-\infty }^{\infty }𝐭_n(\alpha )𝐭_n^{\dagger }(\alpha )`$ on the RHS of Eq.25. 
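Anticipating the explicit sinc-shaped kernel $`(𝐭,𝐭^{})`$ computed just below (Eq. (29)), the reconstruction formula Eq. (27) can be tried out numerically. In this sketch the bandlimited test signal, the truncation of the sampling series, and the choice $`\alpha =0`$ are our own; the error is nonzero only because the series is truncated.

```python
import numpy as np

w_max = 1.0                              # bandlimit
n = np.arange(-200, 201)
t_n = n / (2.0 * w_max)                  # Nyquist lattice, Eq. (26), alpha = 0

def signal(t):
    # A few plane waves with |frequency| < w_max: bandlimited by construction.
    return np.cos(2 * np.pi * 0.3 * t) + 0.5 * np.sin(2 * np.pi * 0.71 * t)

def reconstruct(t):
    # Eq. (27) with kernel (t,t') = sin(2 pi (t-t') w_max)/(2 pi (t-t') w_max);
    # numpy's sinc(x) = sin(pi x)/(pi x), so the argument is 2 w_max (t - t').
    kernel = np.sinc(2.0 * w_max * (t[:, None] - t_n[None, :]))
    return kernel @ signal(t_n)

t = np.linspace(-5.0, 5.0, 1_001)
print("max reconstruction error:", np.max(np.abs(reconstruct(t) - signal(t))))
```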
We note that while for each fixed $`\alpha `$ the set of vectors $`\{𝐭_n(\alpha )\}`$ forms an orthonormal Hilbert basis in $`H`$, the basis vectors belonging to different self-adjoint extensions are not orthogonal: $$(𝐭_n(\alpha ),𝐭_m(\alpha ^{}))\ne 0\text{ for}\alpha \ne \alpha ^{}.$$ (28) In the sampling formula Eq.27 we need this scalar product, i.e. $`(𝐭,𝐭_n(\alpha ))`$, and it is easily calculated for all values of the arguments: $$(𝐭,𝐭^{})=\int _{-\omega _{max}}^{\omega _{max}}𝑑\omega \frac{e^{2\pi i(t-t^{})\omega }}{2\omega _{max}}=\frac{\mathrm{sin}(2\pi (t-t^{})\omega _{max})}{2\pi (t-t^{})\omega _{max}}$$ (29) Note that the sampling kernel $`(𝐭,𝐭^{})`$ is real and continuous, which means that we describe real, continuous (in fact, entire) signals $`\varphi (t)^{*}=\varphi (t)`$, which would not be the case for other choices of the phases of the eigenvectors $`𝐭`$. The Shannon sampling theorem has an interesting translation into the language of field theory: Consider first fields, say scalar fields, without an ultraviolet cut-off. These fields possess at each point in space one degree of freedom: the amplitude. Thus, the field possesses an infinite number of degrees of freedom per unit volume. The Shannon sampling theorem shows that an ultraviolet cut-off field is already determined everywhere if it is known only on any one of a family of discrete lattices. In other words, fields which are ultraviolet cut-off, in the original sense of a frequency cut-off, are continuous fields, which can however be represented without loss of information on certain discrete lattices. This also means that for ultraviolet cut-off fields the number of degrees of freedom per unit volume is literally finite: it is given by the number of sampling points needed per unit volume in order to be able to reconstruct the field everywhere. The field theoretic meaning of the information theory term “Nyquist rate” is the spatial density of the degrees of freedom of fields. We will later discuss a generalization of the Shannon sampling theorem for classes of signals whose Nyquist rate is time-varying. This theorem will translate into the statement that these signals with time-varying bandwidth correspond to fields whose density of degrees of freedom is spatially varying. These are continuous fields which are representable without loss of information on families of lattices whose minimum spacing is spatially varying. #### Superoscillations We can now prove that for every bandwidth $`\omega _{max}`$ there always exist bandlimited signals $`\varphi \in H_{\omega _{max}}`$ which pass through any finite number of prespecified points. To this end we choose $`N`$ arbitrary distinct times $`t_1,\mathrm{\dots },t_N`$ and $`N`$ amplitudes $`a_1,\mathrm{\dots },a_N`$. 
We must show that for each such choice and for each bandwidth $`\omega _{max}`$ there exist bandlimited signals $`\varphi \in H_{\omega _{max}}`$ which pass at the times $`t_i`$ through the values $`a_i`$: $$\varphi (t_i)=(𝐭_i,\varphi )=a_i\text{for all}i=1,\mathrm{\dots },N$$ (30) We recall that the eigenbases of the self-adjoint extensions $`T(\alpha )`$ of $`T`$ each yield a resolution of the identity: $$1=\underset{n=-\infty }{\overset{+\infty }{\sum }}𝐭_n(\alpha )𝐭_n^{\dagger }(\alpha )$$ (31) Inserting one of these resolutions of the identity into Eq.30 we obtain an explicit inhomogeneous system of linear equations: $$\underset{n=-\infty }{\overset{+\infty }{\sum }}(𝐭_i,𝐭_n(\alpha ))(𝐭_n(\alpha ),\varphi )=a_i\phantom{\rule{1em}{0ex}}i=1,\mathrm{\dots },N.$$ (32) Solutions to Eq.32 exist, i.e. there are bandlimited signals which go through all the specified points, exactly if the matrix $`(𝐭_i,𝐭_n(\alpha ))`$ is of full rank $$\text{rank}\left((𝐭_i,𝐭_n(\alpha ))_{i=1,n=-\infty }^{i=N,n=+\infty }\right)=N,$$ (33) which is the case exactly if the set of vectors $`\{𝐭_i\}`$ is linearly independent. In order to prove that indeed every finite set of distinct eigenvectors $`𝐭_i`$ of $`T^{*}`$ is linearly independent, let us now assume the opposite. Namely, let us assume that there does exist a set of $`N`$ eigenvectors $`𝐭_i`$ of $`T^{*}`$, and complex coefficients $`\lambda _i`$ which are not all zero, such that: $$\underset{i=1}{\overset{N}{\sum }}\lambda _i𝐭_i=0$$ (34) Since the sum is a finite sum, we can repeatedly apply $`T^{*}`$ to Eq.34, to obtain: $$\underset{i=1}{\overset{N}{\sum }}\lambda _it_i^n𝐭_i=0\phantom{\rule{1em}{0ex}}\forall n\in \mathbb{N}$$ (35) The first $`N`$ equations yield: $$\left(\begin{array}{ccccc}1& 1& 1& \mathrm{\cdots }& 1\\ t_1& t_2& t_3& \mathrm{\cdots }& t_N\\ \mathrm{\vdots }& & & & \mathrm{\vdots }\\ t_1^{N-1}& t_2^{N-1}& t_3^{N-1}& \mathrm{\cdots }& t_N^{N-1}\end{array}\right)\left(\begin{array}{c}\lambda _1𝐭_1\\ \lambda _2𝐭_2\\ \lambda _3𝐭_3\\ \mathrm{\vdots }\\ \lambda _N𝐭_N\end{array}\right)=0$$ (36) This $`N\times N`$ matrix is a Vandermonde matrix and its determinant is known to take the form: $$\left|\begin{array}{ccccc}1& 1& 1& \mathrm{\cdots }& 1\\ t_1& t_2& t_3& \mathrm{\cdots }& t_N\\ \mathrm{\vdots }& & & & \mathrm{\vdots }\\ t_1^{N-1}& t_2^{N-1}& t_3^{N-1}& \mathrm{\cdots }& t_N^{N-1}\end{array}\right|=\underset{1\le j<k\le N}{\prod }(t_k-t_j)$$ (37) In particular, the determinant does not vanish, since the $`𝐭_i`$ are by assumption distinct, i.e. $`t_k\ne t_j`$ for all $`k\ne j`$. Thus, the Vandermonde matrix has an inverse. Multiplying this inverse from the left onto Eq.36 we obtain that $`\lambda _i𝐭_i=0`$ for $`i=1,\mathrm{\dots },N`$, i.e. we can conclude that $`\lambda _i=0`$ for all $`i=1,\mathrm{\dots },N`$. Therefore, any finite set of distinct eigenvectors $`𝐭`$ of $`T^{*}`$ is indeed linearly independent and consequently Eq.33 is obeyed. Thus, for any arbitrarily chosen bandwidth $`\omega _{max}`$, there are indeed signals $`\varphi \in H_{\omega _{max}}`$ which pass through any finite number of arbitrarily prespecified points. 
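The existence argument can be made concrete with a small numerical experiment: truncate the sampling expansion of Eq. (27) to finitely many lattice points and solve the resulting finite version of the linear system Eq. (32) for a minimum-norm coefficient vector. The truncation, the prescribed points (spaced far more closely than the Nyquist spacing), and the least-squares solver are our own choices; the large coefficients that come out illustrate the exponentially large amplitudes mentioned in Section 2.

```python
import numpy as np

w_max = 1.0                                    # deliberately small bandwidth
t_pts = np.array([0.00, 0.05, 0.10, 0.15])     # spacing << 1/(2 w_max)
a_pts = np.array([1.0, -1.0, 1.0, -1.0])       # forces superoscillation

n = np.arange(-80, 81)
t_n = n / (2.0 * w_max)                        # Nyquist lattice, alpha = 0

# Truncated matrix of Eq. (32): kernel (t_i, t_n) = sinc(2 w_max (t_i - t_n)).
K = np.sinc(2.0 * w_max * (t_pts[:, None] - t_n[None, :]))
coeffs, *_ = np.linalg.lstsq(K, a_pts, rcond=None)   # minimum-norm solution

def phi(t):
    # The resulting bandlimited signal, via the sampling expansion Eq. (27).
    return np.sinc(2.0 * w_max * (t[:, None] - t_n[None, :])) @ coeffs

print(phi(t_pts))                    # reproduces a_pts to numerical precision
print("largest sample:", np.abs(coeffs).max())   # the price: huge amplitudes
```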
### 3.4 Beethoven at 1Hz? Let us now address the question whether a recording of a Beethoven symphony could indeed appear as part of a 1Hz bandlimited signal. Correspondingly, the question is whether fields on a space with this particular ultraviolet cut-off are free to take prespecified values on a finite interval. More explicitly, let us ask for example whether it is possible to take say 5000 seconds of a 20 KHz recording of a Beethoven symphony and to append a suitable function before and a suitable function after the symphony, so that the whole signal ranging from time $`t=-\infty `$ to $`t=+\infty `$ is a 1Hz bandlimited signal. If the question is posed in this form, the answer is no. To see this, we recall that bandlimited functions are always entire functions. Entire functions are Taylor expandable everywhere, and with infinite radius of convergence. Thus, if an entire function $`\varphi (t)`$ is known on even a tiny interval $`[t_i,t_f]`$ of the time axis, then we can calculate at a point $`t_0\in [t_i,t_f]`$ in that interval all derivatives $`d^n/dt^n\varphi (t_0)`$. This yields a Taylor series expansion of $`\varphi (t)`$ around the time $`t_0`$ with infinite radius of convergence. Thus, if a bandlimited function is known on any finite interval then it is already determined everywhere. One consequence is that a bandlimited signal cannot vanish on any finite interval, since this would mean that it vanishes everywhere. Thus, for example, if the original signal of the Beethoven recording is truly 20KHz bandlimited, then it is an entire function and therefore it does not vanish on any finite interval between $`t=-\infty `$ and $`t=+\infty `$. On the other hand, we are only interested in an interval of length about $`5000s`$. Now the question is whether these 5000 seconds of the 20KHz bandlimited recording can occur as a superoscillating period of a signal which is bandlimited, say by 1Hz. The answer is negative, because this 1Hz bandlimited signal, if it existed, would also be entire - but clearly two entire functions which coincide on a finite interval coincide everywhere. It is therefore not possible to arbitrarily prespecify the exact values of a 1Hz bandlimited signal on any finite interval. We are left with the question whether there are topologies with respect to which approximations may converge. On the other hand, it is clear that if we wish to prespecify precise values, then the most that may be possible is to arbitrarily prespecify the values of a 1Hz bandlimited signal at arbitrary discrete times. This would mean, for example, that one can find 1Hz bandlimited signals which coincide with the 20KHz Beethoven recording at arbitrarily many discrete points in time. That it is indeed possible to prespecify the signals’ values at an arbitrary finite number of discrete points in time is what we proved in the previous section. ### 3.5 Superoscillations for data compression? As is well known, the bandwidth of a communication channel limits its maximal data transmission rate. We have just seen, however, that signals with fixed bandwidth can superoscillate and exhibit for example arbitrarily fine ripples and arbitrarily sharp spikes. This suggests that it should be possible to encode and transmit an arbitrarily large amount of information in an arbitrarily short time interval of a 1Hz bandlimited signal - because there is always a 1Hz bandlimited signal which passes through any number of arbitrarily prespecified points. Thus, this raises the question whether superoscillations are able to circumvent the bandwidth limitations of communication channels - and whether, as Berry suggested, superoscillations may for example be used for data compression. Here, we need to recall that the bandwidth alone does not fix the maximal data transmission rate. It is known that, in the absence of noise, every channel - with any arbitrary bandwidth - can carry an infinite amount of information in any arbitrarily short amount of time. In practice, every channel has noise and this prevents us from measuring the signal to ideal precision. 
Essentially, the effect of the noise is that only a finite number of amplitude levels can be resolved. Now if the information is encoded in $`V`$ different amplitude levels (i.e. binary would be two levels, $`V=2`$), then the maximum baud rate $`b`$ in bits$`/`$second is $$b=2\omega _{max}\mathrm{log}_2V.$$ (38) This follows immediately from the Shannon sampling theorem: Each amplitude measurement yields one out of $`V`$ possible outcomes, i.e. each measurement yields $`\mathrm{log}_2V`$ bits of information. This yields Eq.38 because, by the Shannon theorem, we need to measure only $`2\omega _{max}`$ samples per second to capture all of the signal. We only remark here that for the example of white noise the maximal data transmission rate can be expressed directly in terms of the signal to noise ratio $`S/N`$: $$b_{noise}=\omega _{max}\mathrm{log}_2\left(1+\frac{S}{N}\right)$$ (39) For the precise definitions and the proof, see e.g. the classic text by Shannon, . For us, what is interesting here is that Eqs.38,39 show that indeed even in the presence of noise the data transmission rate can be made arbitrarily large, for any fixed bandwidth - though at a cost. The price to be paid is that in order to increase the baud rate to bandwidth ratio, the maximal signal amplitude must be improved exponentially as compared to the resolution of the amplitude, or more precisely as compared to the noise level.
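A quick back-of-the-envelope computation makes the exponential cost explicit; the numerical values below are illustrative choices of ours, not figures from the text.

```python
import math

w_max = 1.0                     # channel bandwidth in Hz
V = 2                           # distinguishable amplitude levels
print("Eq. (38), noiseless:", 2 * w_max * math.log2(V), "bit/s")

# Pushing 20 kbit/s through a 1 Hz channel would require, by Eq. (38),
# V = 2**(20000 / 2) = 2**10000 distinguishable amplitude levels -
# an exponentially large dynamic range.
print("levels needed for 20 kbit/s at 1 Hz: 2**10000")

snr = 10 ** (40 / 10)           # 40 dB signal-to-noise ratio
print("Eq. (39), white noise:", round(w_max * math.log2(1 + snr), 1), "bit/s")
```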
Let us consider the implications for superoscillations. Superoscillations, in spite of their peculiar behavior, do obey the bandlimit, $`\omega _{max}`$. Therefore, superoscillations cannot violate the limits on the baud rate in Eqs.38, 39. Indeed, conversely, from the validity of the limits on the baud rate we can deduce properties of superoscillations: If large amounts of information are to be sent over a low bandwidth channel, e.g. by employing superoscillations, this necessitates an exponentially large dynamical range of the superoscillating signal. Indeed, Berry conjectured that superoscillations necessarily occur with exponentially large dynamical ranges. This is essentially the same as saying that it is difficult to stabilize superoscillations under perturbations: We showed that it is not possible to prespecify superoscillations on any continuous time interval. For example, there is no 1Hz bandlimited function which coincides with a symphony’s recording on a continuous interval of say 5000 seconds. On the other hand, we showed that it is possible to prespecify superoscillations at any number of discrete points in time. For example, there do exist 1Hz bandlimited functions which coincide with the 20KHz bandlimited Beethoven recording at $`10^{1000}`$ points in time during the 5000 seconds of the recording. Thus, a 1Hz bandlimited function which coincides with a symphony’s recording at $`10^{1000}`$ points on a 5000s interval can only be 1Hz bandlimited because of very fine-tuned cancellations in the calculation of its Fourier spectrum - cancellations which depend on small details of the function. We therefore conclude that tiny perturbations of such a 1Hz bandlimited superoscillating function are able to induce very high frequency components. Thus, superoscillations are in this sense unstable, and they are therefore likely to be difficult to make practical use of in imperfect communication channels. On the other hand, as the reverse side of the coin, important phenomena in signal processing are instabilities in the reconstruction procedures of signals which are oversampled, i.e. which are sampled at a rate higher than the Nyquist rate. The instabilities in the reconstruction arise because small imprecisions in the measurement of the then overdetermined samples of an ordinary (i.e. in general non-superoscillating) signal can lead to the reconstruction of a deviant signal which, in our terminology, possesses superoscillations. This connection was pointed out already by Berry , quoting I. Daubechies. For a general reference on oversampling see e.g. . In terms of models of fields at the Planck scale, the instabilities of superoscillations suggest that in ultraviolet cut-off quantum field theories the interaction of particles whose fields superoscillate could easily destroy their superoscillations. In concrete cases this effect is likely to depend on the types of interactions of the field theory that one considers. Studies in this direction could be worth pursuing since these instabilities could have implications for example for the viability of the Rosu-Reznik approach to superoscillations in black hole radiation when treated within a framework of interacting fields. ## 4 A strict bandlimit is a lower bound to how much the samples of a signal can be peaked - an uncertainty relation for ultraviolet cut-off fields ### 4.1 The minimum standard deviation The existence of superoscillations shows that bandlimited functions cannot be characterized reliably as varying at most as fast as their highest Fourier component. Indeed, we have just proved that for any fixed bandwidth there always exist functions which possess arbitrarily fine ripples and arbitrarily sharp spikes. Let us therefore look for a better, i.e. for a reliable, characterization of the effect of bandlimitation on the behavior of functions. Our proposition is that, while a bandlimit does not imply a bound on how much bandlimited signals can locally be peaked, a bandlimit does imply a bound on how much strictly bandlimited signals can be peaked globally. Equivalently, our proposition is that while an ultraviolet cut-off does not imply a bound on how much the fields can be peaked locally in space, the cut-off does imply a lower bound on how much the fields can be peaked globally. Our motivation derives from the Heisenberg uncertainty principle: If we read $`T`$ as the momentum operator of a particle in a (one-dimensional) box then, because the position uncertainty is bounded from above by the size of the box, we expect the momentum uncertainty (here $`\mathrm{\Delta }T(\varphi )`$) to be bounded from below. To be precise, consider a normalized, strictly bandlimited signal $`\varphi \in B_{\omega _{max}}`$. Then, $$\overline{T}(\varphi ):=(\varphi ,T\varphi )$$ (40) is the $`T`$-expectation value, or the time-mean or the “center of mass” of the signal $`\varphi `$ on the time axis. A measure of how much the signal is overall peaked around this time is the formal standard deviation: $$\mathrm{\Delta }T(\varphi ):=\sqrt{(\varphi ,\left(T-\overline{T}(\varphi )\right)^2\varphi )}$$ (41) We note that both $`\overline{T}(\varphi )`$ and $`\mathrm{\Delta }T(\varphi )`$ are not sensitive to local features of $`\varphi (t)`$, such as fine ripples and sharp spikes. Instead, being the first and second moment of $`T`$, the time $`\overline{T}(\varphi )`$ is simply the signal’s global average position on the time axis and $`\mathrm{\Delta }T(\varphi )`$ is the global spread of the signal around that position. 
Our claim is that strictly bandlimited signals, $`\varphi \in B_{\omega _{max}}`$, are always globally spread by at least a certain minimum amount: $$\mathrm{\Delta }T(\varphi )>\frac{1}{4\omega _{max}}\text{for all}\varphi \in B_{\omega _{max}}$$ (42) In field theory language, our claim is that there exists a formal finite minimum uncertainty in position for ultraviolet cut-off fields. ### 4.2 The minimum standard deviation as a property of the Nyquist rate samples Let us now rewrite $`\overline{T}(\varphi )`$ and $`\mathrm{\Delta }T(\varphi )`$ as explicit expressions in the signals $`\varphi (t)`$ as functions of time. To this end, we can use any one of the resolutions of the identity $`1=\sum _{n=-\infty }^{+\infty }𝐭_n(\alpha )𝐭_n^{\dagger }(\alpha )`$ which are induced by the self-adjoint extensions $`T(\alpha )`$ of $`T`$. Inserting one of the resolutions of the identity into Eq.40 we obtain, restricting attention to signals $`\varphi \in B_{\omega _{max}}`$ which are real, $`\varphi (t)^{*}=\varphi (t)`$: $$\overline{T}(\varphi )=\underset{n=-\infty }{\overset{\infty }{\sum }}\varphi (t_n(\alpha ))^2t_n(\alpha ),\phantom{\rule{1em}{0ex}}(\text{independently of }\alpha )$$ (43) Thus, $`\overline{T}(\varphi )`$ is the “mean” of the discrete set of samples of the signal, when sampled on one of the time-lattices of Eq.22, i.e., $`\overline{T}(\varphi )`$ is the time around which the discrete samples of the signal $`\varphi `$ are centered. Indeed, for each set of samples taken at the Nyquist rate (i.e. for each time lattice corresponding to some fixed $`\alpha `$), the time $`\overline{T}(\varphi )`$ around which the samples are centered is the same. This is because in order to calculate $`\overline{T}(\varphi )`$ from Eq.40 we can equivalently use any one of the resolutions of the identity $`1=\sum _{n=-\infty }^{+\infty }𝐭_n(\alpha )𝐭_n^{\dagger }(\alpha )`$. Similarly, we obtain an explicit expression for how much the samples are spread around the value $`\overline{T}(\varphi )`$ by inserting a resolution of the identity into the expression for the standard deviation, Eq.41: $$\mathrm{\Delta }T(\varphi )=\sqrt{\underset{n=-\infty }{\overset{\infty }{\sum }}\varphi (t_n(\alpha ))^2\left(t_n(\alpha )-\overline{T}(\varphi )\right)^2}$$ (44) Again, also the standard deviation does not depend on which sampling lattice $`\{t_n(\alpha )\}`$ has been chosen. We remark that, clearly, not only the mean and standard deviation, but indeed also all higher moments of a bandlimited signal’s Nyquist rate samples are independent of the choice of the lattice of sampling times. We can therefore refer to the mean, the standard deviation and to the higher moments of a signal $`\varphi `$ without needing to specify the choice of a sampling lattice. On the other hand, let us emphasize that the values of $`\overline{T}(\varphi )`$ and $`\mathrm{\Delta }T(\varphi )`$ are not the usual mean and standard deviation of a continuous curve as conventionally calculated in terms of integrals rather than sums. Instead, while the strictly bandlimited signals are of course continuous, $`\overline{T}(\varphi )`$ and $`\mathrm{\Delta }T(\varphi )`$ are the mean and the standard deviation of their discrete Nyquist rate samples. Our proposition of above, i.e. Eq.42, if expressed explicitly in terms of the strictly bandlimited signal’s Nyquist rate samples, is therefore that the standard deviation $`\mathrm{\Delta }T(\varphi )`$ of these samples is bounded from below by $`1/4\omega _{max}`$. 
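This sample-based form of the uncertainty relation can be checked numerically. The closed-form samples used below are our own evaluation of the Fourier integral for the minimum-uncertainty profile that will be derived in the next subsection (Eq. (47)), taken on the $`\alpha =0`$ lattice with $`\omega _{max}=1`$; its spread comes out at the boundary value $`1/4\omega _{max}`$ (up to truncation error), which strictly bandlimited signals must exceed.

```python
import numpy as np

# Samples phi(t_n) on the Nyquist lattice t_n = n/(2 w_max), w_max = 1, of
# the minimum-uncertainty signal of Eq. (47) (closed form: our evaluation,
# up to normalization, which cancels in the mean and spread below).
n = np.arange(-50_000, 50_001)
t_n = n / 2.0
samples = (-1.0) ** n / (1.0 - 4.0 * n ** 2)

p = samples ** 2 / np.sum(samples ** 2)            # weights phi(t_n)^2
mean = np.sum(p * t_n)                             # Eq. (43)
spread = np.sqrt(np.sum(p * (t_n - mean) ** 2))    # Eq. (44)
print(f"Delta T = {spread:.6f}, bound 1/(4 w_max) = 0.25")
```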
### 4.3 Calculation of the maximally peaked signals$`/`$fields In order to prove the lower bound on the standard deviation expressed in Eq.42, let us now explicitly solve the variational problem of finding signals $`\varphi `$ which minimize $`\mathrm{\Delta }T(\varphi )`$. To this end, we minimize $`(\varphi ,T^2\varphi )`$ while enforcing the constraints $`(\varphi ,T\varphi )=t`$ and $`(\varphi ,\varphi )=1`$. We work in frequency space, where $`T`$ acts on the strictly bandlimited signals as the symmetric operator $`T=id/d\omega `$. Introducing Lagrange multipliers $`k_1,k_2`$, the functional to be minimized reads: $$S[\varphi ]:=\int _{-\omega _{max}}^{\omega _{max}}𝑑\omega \left\{(\partial _\omega \stackrel{~}{\varphi }^{*})(\partial _\omega \stackrel{~}{\varphi })+k_1(\stackrel{~}{\varphi }^{*}\stackrel{~}{\varphi }-c_1)+k_2(i\stackrel{~}{\varphi }^{*}\partial _\omega \stackrel{~}{\varphi }-c_2)\right\},$$ (45) Setting $`\delta S[\varphi ]/\delta \varphi =0`$ yields the Euler-Lagrange equation: $$-\partial _\omega ^2\stackrel{~}{\varphi }+k_1\stackrel{~}{\varphi }+ik_2\partial _\omega \stackrel{~}{\varphi }=0$$ (46) Imposing the boundary condition, Eq.14, which is obeyed by all strictly bandlimited signals, we obtain exactly one (up to phase) normalized solution $`\mathrm{\Phi }_{\overline{T}}`$ for each value of the mean $`\overline{T}`$: $$\stackrel{~}{\mathrm{\Phi }}_{\overline{T}}(\omega )=\frac{1}{\sqrt{\omega _{max}}}\mathrm{cos}\left(\frac{\pi \omega }{2\omega _{max}}\right)e^{-2\pi i\overline{T}\omega }$$ (47) The standard deviations, $`\mathrm{\Delta }T(\mathrm{\Phi }_{\overline{T}})`$, of these solutions are straightforward to calculate in Fourier space, to obtain: $$\mathrm{\Delta }T(\mathrm{\Phi }_{\overline{T}})=\frac{1}{4\omega _{max}}\text{for all }\overline{T}$$ (48) Since the signals $`\stackrel{~}{\mathrm{\Phi }}_{\overline{T}}(\omega )`$ which minimize $`\mathrm{\Delta }T`$ are not themselves strictly bandlimited - they do not obey Eq.8 - we can conclude that all strictly bandlimited signals, or ultraviolet cut-off fields, obey the strict bound given in Eq.42. ## 5 Generalization to time-varying bandwidths - or spatially varying ultraviolet cut-offs ### 5.1 Superoscillations and the concept of time-varying bandwidth Intuitively, it is clear that the bandwidths of signals can vary with time. One might therefore expect to be able to define the time-varying bandwidth of signals for example in terms of the highest frequency components which they contain in intervals centered around different times. This approach encounters difficulties, however, due to the existence of superoscillations: We recall that a signal $`\varphi (t)`$ obeys a constant bandlimit $`\omega _{max}`$ if its Fourier transform $$\stackrel{~}{\varphi }(\omega )=(2\pi )^{-1/2}\int _{-\infty }^{+\infty }𝑑t\,\varphi (t)\mathrm{exp}(-2\pi i\omega t)$$ (49) has support only in the interval $`[-\omega _{max},\omega _{max}]`$. The integration in Eq.49 ranges over the entire time axis. This means that the bandlimit is a global property of the signal. If it were true that bandlimited signals could nowhere vary faster than their highest frequency component then this would mean that the bandwidth is also a local property of the signal. Namely, one might then expect that if we consider the same signal on some finite interval, $`[t_i,t_f]`$, and if we calculate its Fourier expansion on that interval, then we will find that its Fourier coefficients are nonzero only for frequencies smaller than or equal to $`\omega _{max}`$. 
If so, we could indeed define time-varying bandwidths as time-varying upper limits on the local frequency content, as indicated above. Indeed, in practice, windowed Fourier transforms, and in particular the more sophisticated Wigner transforms or wavelet decompositions, are generally very useful . However, the existence of superoscillations shows that any local definition of a time-varying bandwidth must contain counterintuitive features: This is because whatever the overall bandwidth $`\omega _{max}`$, there are always signals with this bandwidth which superoscillate in any given interval $`[t_i,t_f]`$. In practice, of course, strongly superoscillating signals will rarely occur because they are very fine-tuned. But their existence shows that there do exist low-bandwidth signals which locally possess arbitrarily high frequency components - where “local frequency components” are defined e.g. by windowed Fourier transforms - in any finite-length interval. In field theory terminology this means that even if a field is varying wildly in some spatial region, this does not imply that the field necessarily possesses a large cut-off frequency or, equivalently, that it possesses a high density of degrees of freedom. Instead, even at small cut-off frequencies there are fields which locally display fast oscillations. Even these superoscillating fields are fully determined everywhere (by the sampling theorem) if known only on any one of the family of lattices whose lattice spacing is as large as is consistent with the ultraviolet cut-off. ### 5.2 The time-varying bandwidth as a limit to how much the samples of signals can be peaked around different times We saw that a finite bandwidth does not impose a limit to how much signals can be locally peaked around, say, a time $`t`$. However, we also saw that a finite bandwidth does impose a limit $`\mathrm{\Delta }T_{min}`$ to how much the signals can be globally peaked, around any time $`t`$. Indeed, this characterization of the effect of bandlimitation naturally generalizes to time-varying bandwidths: Namely, the limit to how much signals can be peaked may in general depend on the time $`t`$ around which they are peaked: We found that a constant bandwidth can be understood as a minimum standard deviation of the signals’ Nyquist rate samples: If a strictly bandlimited signal $`\varphi B_{\omega _{max}}`$ is centered around a time $`t=\overline{T}(\varphi )`$, then its standard deviation around the time $`t`$ is always bounded from below by the uncertainty relation $`\mathrm{\Delta }T(\varphi )>1/4\omega _{max}`$. There, we were discussing only the case of constant bandwidth. Accordingly, we found that the standard deviation of signals $`\varphi B_{\omega _{max}}`$ which are centered around a time $`t_1`$ obeys the same lower bound $`1/4\omega _{max}`$ as do signals $`\varphi ^{}B_{\omega _{max}}`$ which are centered around some other time $`t_2`$. This suggests defining the notion of time-varying bandwidth in such a way that a class of strictly bandlimited signals with a time-varying bandwidth is simply a class of signals for which the minimum standard deviation $`\mathrm{\Delta }T_{min}`$ depends on the time $`t`$ around which the signals are centered. This would mean that the uncertainty relation Eq.42 becomes time dependent: $$\mathrm{\Delta }T(\varphi )>\mathrm{\Delta }T_{min}(\overline{T}(\varphi ))$$ (50) Correspondingly, we would expect the Nyquist rate to be time-varying. (A concrete numerical illustration of the superoscillating signals invoked above is sketched below.)
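Before constructing such a definition, it may help to make the superoscillations of Sec.5.1 concrete. The following minimal sketch uses a standard Aharonov–Berry-type construction, which is not taken from this paper: the signal $`f(t)=(\mathrm{cos}\omega _0t+ia\mathrm{sin}\omega _0t)^N`$ is a trigonometric polynomial whose spectrum is supported on harmonics $`|k|N`$, i.e. it is bandlimited by $`N\omega _0`$, yet near $`t=0`$ it behaves like $`e^{iaN\omega _0t}`$, locally oscillating a factor $`a`$ faster than its bandlimit.

```python
import numpy as np

N, a, w0 = 20, 4.0, 1.0                      # bandlimit N*w0; superoscillation factor a > 1
t = np.linspace(-np.pi / w0, np.pi / w0, 2 ** 14, endpoint=False)
f = (np.cos(w0 * t) + 1j * a * np.sin(w0 * t)) ** N   # trig polynomial of degree N

# local frequency = d(phase)/dt; near t = 0 it approaches a*N*w0
loc = np.gradient(np.unwrap(np.angle(f)), t)
print(f"local frequency at t = 0: {loc[len(t) // 2]:.2f}  (bandlimit N*w0 = {N * w0:.2f})")

# yet the discrete spectrum terminates at harmonic +-N
c = np.fft.fft(f) / len(t)
k = np.fft.fftfreq(len(t)) * len(t)          # integer harmonic numbers
big = np.abs(c) > 1e-10 * np.abs(c).max()
print(f"nonzero Fourier coefficients: harmonics {int(k[big].min())} .. {int(k[big].max())}")
```

With the parameters above, the local frequency at $`t=0`$ comes out as 80, four times the bandlimit of 20, while the discrete spectrum indeed stops at harmonics $`\pm 20`$.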
To construct such a definition of a time-varying bandwidth, let us recall the functional-analytic structure of the Hilbert space of bandlimited signals which we discussed in Sec.3.3: The operator $`T`$ is a simple symmetric operator with deficiency indices $`(1,1)`$, whose self-adjoint extensions have purely discrete and equidistant spectra. Indeed, the theory of simple symmetric operators with deficiency indices $`(1,1)`$, whose self-adjoint extensions have discrete but not necessarily equidistant spectra, has been shown to yield a generalized Shannon sampling theorem in , and it is indeed exactly the theory of time-varying bandwidths in the sense which we just indicated. For example, the nonequidistant spectra yield time-varying Nyquist rates. The time-varying Nyquist rate can be calculated from the time-varying minimum standard deviation $`\mathrm{\Delta }T_{min}(t)`$ and vice versa. This is worked out in detail in . In terms of field theory, the time-varying bandwidth means, as we already mentioned, a spatially varying ultraviolet cut-off. We remark that this is a nontrivial generalization of the concept of frequency (or energy-momentum) cut-off in field theories. An ordinary energy-momentum cut-off affects fields globally, i.e. the cut-off scale is the same everywhere in space. But we may ask: how could an energy-momentum cut-off be implemented such that the cut-off scale is spatially varying? For example, the cut-off scale may be dynamically generated, e.g. through an interplay of gravity and ordinary forces. In such a scenario, the actual cut-off scale may be dynamic and spatially varying, e.g. determined by the value of some field. In our approach to defining spatially varying cut-offs, the cut-off is understood as a formal minimum uncertainty or standard deviation in position. For constant bandwidths this definition is equivalent to the usual one in terms of a frequency cut-off. We then found that the notion of formal minimum position uncertainty generalizes ‘naturally’ to the situation where the value of the formal minimum position uncertainty depends on the position, i.e. on the formal position expectation value of the field. As we will discuss in the last section, there is in fact very little arbitrariness in the definition of these short-distance structures. ## 6 Outlook Functions with a bounded Fourier spectrum appear in numerous contexts from theoretical physics to the experimental sciences and engineering applications. A priori, the phenomenon of superoscillations can play a role in each of these contexts. Our aim here has been to investigate the general properties of superoscillations. In particular, we found precise results about the extent to which frequency limited functions can superoscillate. Further, we gave a reliable characterization of the effect of frequency limitation on the behavior of functions, in terms of uncertainty relations. We formulated much of our discussion in the concrete and intuitive language of information theory but, of course, our results can easily be translated into all those physical contexts where frequency limited functions occur. Here, we chose to always translate our results into the context of ultraviolet cut-off fields, where the ultraviolet cut-off is understood in the original sense of a high frequency cut-off. We mentioned that superoscillations in field theory have been suggested, by Rosu and Reznik, to resolve the transplanckian frequencies paradox of black hole radiation.
In this context, our results showed that while generic superoscillations of arbitrarily high frequencies do exist, they may be too unstable under perturbations by interactions. This problem is worth pursuing further. We also obtained the general result that strictly bandlimited signals obey a lower bound $`\mathrm{\Delta }T_{min}`$ on the standard deviation of their Nyquist rate samples, and we generalized to time-varying bandwidths. In terms of ultraviolet cut-off quantum field theory these results mean that the fields in ultraviolet cut-off field theories obey a formal minimum spatial uncertainty $`\mathrm{\Delta }X_{min}`$, where the minimum value of this formal position uncertainty can be spatially varying. In particular, we found that in ultraviolet cut-off quantum field theories the Nyquist rate for signals corresponds exactly to the density of local degrees of freedom, which is in general spatially varying. So far in our discussion we assumed this short-distance cut-off to arise from the crude ultraviolet cut-off obtained by cutting off high momenta. However, interestingly, the same short-distance structure can also arise for example in theories with effective Heisenberg uncertainty relations which contain correction terms of the form: $$\mathrm{\Delta }X\mathrm{\Delta }P\ge \frac{\mathrm{\hbar }}{2}\left(1+k(\mathrm{\Delta }P)^2+\mathrm{\dots }\right),$$ (51) As is easy to verify, for a suitable small positive constant $`k`$, Eq.51 indeed yields a lower bound $$\mathrm{\Delta }X_{min}=\mathrm{\hbar }\sqrt{k}$$ (52) which could be at a Planck or at a string scale. (To verify this, divide Eq.51 by $`\mathrm{\Delta }P`$ to obtain $`\mathrm{\Delta }X\ge (\mathrm{\hbar }/2)(1/\mathrm{\Delta }P+k\mathrm{\Delta }P)`$; the right hand side is minimized at $`\mathrm{\Delta }P=1/\sqrt{k}`$, where it takes the value $`\mathrm{\hbar }\sqrt{k}`$.) This type of uncertainty relation implies that the momentum stays unbounded. This means that the short-distance structure which we have here considered - a formal finite minimum uncertainty in position - is not tied to putting an upper bound on momentum. Indeed, correction terms to the uncertainty relations of the type of Eq.51 have appeared in various studies in the context of quantum gravity and string theory. For reviews, see e.g. . For recent discussion of potential physical origins of this type of uncertainty relation see e.g. . Quantum mechanical and quantum field theoretical models which display such uncertainty relations have been investigated in detail. For example, the ultraviolet regularity of loop graphs in such field theories has been shown. See . In work by Brout et al. , it has been shown that this type of short-distance cut-off without energy-momentum cut-off, when built into quantum theory, could resolve the transplanckian energies paradox of black hole radiation - without invoking superoscillations. Since, as we now see, the approaches of Rosu and Reznik and of Brout et al. in fact assume the same short-distance structure, it should be very interesting to investigate their relationship. A recent reference in this context is . Finally, we remark that it is not necessarily surprising that various different studies in quantum gravity and in string theory have led to the same short-distance structure, namely the short-distance structure that arises for example from the uncertainty relation Eq.51. In a certain sense it is not even surprising that the same type of minimum uncertainty structure also appears in communication engineering: This is because, as has been shown in , in any theory, any real degree of freedom which is described by an operator which is linear can only display very few types of short-distance structures.
The basic possibilities are continua, lattices and two basic types of unsharp short-distance structures, which have been named “fuzzy-A” and “fuzzy-B”. All others are mixtures of these. Technically, the unsharp real degrees of freedom are those described by simple symmetric operators with nonzero (and, for the types fuzzy-A and fuzzy-B, either equal or unequal) deficiency indices. The “time” degree of freedom of electronic signals is real; the corresponding time operator $`T`$ therefore had to fall into this classification, and among the few possibilities it happened to be of the type fuzzy-A. But equally, we can consider, for example, the coordinates of D0-branes in the matrix model of string theory. These are encoded in (the diagonal of) self-adjoint matrices $`X_i`$. The quantization and the limit for the matrix size $`N\mathrm{}`$ are difficult, but it is clear that the $`X_i`$ will eventually be operators which are at least symmetric, i.e. their formal expectation values are real. The short-distance structure which these $`X_i`$ display will therefore fall into the classification which we mentioned. Since there are only these few basic possibilities - continuous, discrete or “fuzzy” - they may well be found to be of one of the fuzzy types. In the present paper we have been concerned with short-distance structures which are characterized by a formal finite minimum uncertainty. The classification given in shows that all such degrees of freedom are of the type fuzzy-A. We can therefore also view our present results on superoscillations as clarifying aspects of one of these very general classes of short-distance structures of real degrees of freedom. Our results on superoscillations translate into any theory which contains unsharp degrees of freedom of this type. Acknowledgements: The author is very grateful to Haret Rosu for bringing the issue of superoscillations to his attention, and the author is happy to thank John Klauder and Pierre Sikivie for their very valuable criticisms.
# Proximity effects and Andreev reflection in mesoscopic SNS junction with perfect NS interfaces \[ ## Abstract Low temperature transport measurements on superconducting film–normal wire–superconducting film (SNS) junctions fabricated on the basis of thin superconducting polycrystalline PtSi films are reported. Owing to the perfect SN interfaces in the junctions, a zero-bias resistance dip related to the pair-current proximity effect and a subharmonic energy gap structure originating from phase-coherent multiple Andreev reflections have been observed and studied. \] Research activity in the study of the properties of NS and SNS junctions (where N is a normal metal and S is a superconductor) has increased significantly over the past few years, mainly owing to technological advances in the fabrication of mesoscopic hybrid systems. Until now, all such systems have been made by a combination of different materials, for example, the superconductor–normal metal pairs Al-Ag , Nb-Al and Nb-Ag , superconductor–heavily doped semiconductor pairs Nb-$`n^+`$InGaAs , Nb-$`n^+`$InAs , Nb-$`p^+`$Si , Al-$`n^+`$GaAs and so on. One of the important topics in the physics of NS and SNS junctions is the behavior of the current–voltage characteristics. Two interesting phenomena can be observed in the dependence of the SNS differential resistance versus bias voltage ($`dV/dI`$-$`V`$): an anomalous resistance dip at zero bias , the so-called zero bias anomaly (ZBA); and symmetrical dips at nonzero biases . In some cases the existence of tunnel barriers at the NS interfaces precludes a definite interpretation of experimental results. In this paper we report the fabrication and study of SNS junctions with perfect SN interfaces. The best way to obtain a perfect junction is to have both the superconducting and normal metal parts of an SNS junction made of the same material. It is known that wires made from some thin superconducting metallic films can have a transition temperature $`T_c`$ less than that of the film itself. We have used this property to fabricate ideal SNS junctions. We started with the fabrication of ultrathin PtSi films. Smooth, continuous and uniform PtSi films with a thickness of 6 nm were formed on Si substrates by deposition of a thin Pt layer followed by a rapid thermal annealing step at 450°C for 60 s to convert the as-deposited Pt into PtSi. The films formed were polycrystalline, with an average grain size of roughly 20 nm. The samples used in the experiments are three Hall bridges, $`50\mu `$m wide and $`100\mu `$m long. The main parameters of the films obtained from Hall measurements at $`T>T_c=0.54`$ K are presented in Table I. The diffusion constant is estimated assuming the simple free electron model. As is seen from Table I, our PtSi films are metal films with a small mean free path. It should be noted that they have hole-type conductivity. PtSi wires of length $`L=1.5\mu `$m and $`6\mu `$m and width $`W=0.3\mu `$m were fabricated by means of electron lithography and subsequent plasma etching and placed in one of the Hall bridges. For a schematic picture of the samples, and a scanning electron micrograph of one of the structures, see Fig. 1. At $`T>T_c`$ the resistances of the wires are $`R_N\approx 610\mathrm{\Omega }`$ for $`L=1.5\mu `$m and $`R_N\approx 2600\mathrm{\Omega }`$ for $`L=6\mu `$m. These values correspond to a wire sheet resistance of $`R_{\mathrm{}}=120`$–$`130\mathrm{\Omega }`$, which is slightly larger than $`R_{\mathrm{}}`$ of the film itself.
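As a quick consistency check of the quoted numbers (our own arithmetic, using only the geometry and resistances given above), the sheet resistance follows from $`R_{\mathrm{}}=R_NW/L`$:

```python
# consistency check: sheet resistance from wire geometry, R_sq = R_N * W / L
W_um = 0.3                                  # wire width, micrometers
for L_um, R_N in ((1.5, 610.0), (6.0, 2600.0)):
    print(f"L = {L_um} um:  R_sq = {R_N * W_um / L_um:.0f} Ohm/sq")
```

This returns 122 and 130 $`\mathrm{\Omega }`$ per square for the short and long wires, respectively, matching the quoted range.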
About ten samples were investigated. None of the wires were superconducting down to $`T=35`$ mK. The reason for this is not completely understood. The suppression of the superconductivity possibly results from intrinsic stresses in the PtSi films, which increase in constrictions. This suggestion is supported by the enhancement of the PtSi sheet resistance. Alternatively, there may be another reason for the loss of superconductivity in the wires. Nevertheless, after the processing we have, at $`T<T_c`$, SNS junctions which consist of two superconducting seas connected by a normal metal PtSi wire. The measurements were carried out with the use of a phase-sensitive detection technique at a frequency of 10 Hz, which allowed us to measure the differential resistance ($`R_{\mathrm{SNS}}=dV/dI`$) as a function of the dc voltage ($`V`$). The ac current was equal to 10 nA. Figure 2 shows typical dependences of $`dV/dI`$-$`V`$ for the structures with (a) short ($`L=1.5\mu `$m) and (b) long ($`L=6.0\mu `$m) wires at $`T=35`$ mK. A clear zero-bias resistance dip with respect to the resistance $`R_N`$ at $`T>T_c`$ is observed, with a rapid initial increase of the differential resistance followed by a less steep increase above some dc bias. For SNS junctions with the short wires ($`L=1.5\mu `$m) a number of symmetrical features (marked as $`V_1`$ and $`V_2`$ in Fig. 2a) can be seen. It is necessary to note that the resistance value in this bias range exceeds the wire resistance $`R_N`$ at $`T>T_c`$. The dependences of $`dV/dI`$ versus $`V`$ at different temperatures are shown in Fig. 3. Both the deep minimum at zero bias and the sharp minima at nonzero biases are temperature dependent. Above $`T_c`$ the features disappear. The symmetrical sharp dips at nonzero biases observed in $`R_{\mathrm{SNS}}`$ versus $`V`$ (Fig. 2a and Fig. 3) are the so-called subharmonic energy gap structure (SGS) originating from multiple Andreev reflections . The positions of these dips are determined by the condition $`V=\pm 2\mathrm{\Delta }/en`$ ($`n=1,2,3,\mathrm{}`$), where $`\mathrm{\Delta }`$ is the superconducting energy gap. The dips corresponding to $`V=\pm 2\mathrm{\Delta }/e`$ and $`\pm \mathrm{\Delta }/e`$ are manifest in our case. The inset to Fig. 3 shows the temperature dependence of the positions of these dips. It actually reflects the temperature dependence of the superconducting gap $`\mathrm{\Delta }(T)`$ and strongly supports the SGS nature of the dips. The results presented above show that SGS is clearly observed even in the case of diffusive transport with a very small mean free path (we have $`L/l\approx 1.3\times 10^3`$ for the short wire) owing to a large phase coherence length. This shows that in the SNS junctions under study the SGS is determined by phase-coherent transport of retroreflected holes in the normal wire between the superconducting seas. The importance of the phase coherence for observing SGS is supported by the experiment with the long wires (Fig. 2b), where no SGS is seen. Similar results have been reported recently by Kutchinsky et al. , where SGS was studied in Al-$`n^+`$GaAs-Al mesostructures.
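For scale, the expected SGS dip positions can be estimated from the weak-coupling BCS relation $`\mathrm{\Delta }(0)\approx 1.76k_BT_c`$ with the film's $`T_c=0.54`$ K quoted above. Note that the BCS estimate is an assumption on our part - the actual gap of the PtSi film may differ:

```python
kB = 8.617e-5              # Boltzmann constant, eV/K
Tc = 0.54                  # K, from the film measurements quoted above
Delta0 = 1.76 * kB * Tc    # weak-coupling BCS gap estimate, eV (assumption)
for n in (1, 2, 3):
    print(f"n = {n}:  V_n = 2*Delta/(e*n) = {2 * Delta0 / n * 1e6:.0f} uV")
```

Under this assumption the $`n=1`$ and $`n=2`$ dips would fall near 160 and 80 $`\mu `$V.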
Figure 4 shows: (a) the superconducting transitions of the PtSi film as functions of the magnetic field at several temperatures, providing the upper critical fields at various $`T`$, and (b) the zero-bias resistance for the structure with the short wire under the same conditions. Three distinct regions, most pronounced for the curve at the lowest temperature, can be observed in Fig. 4b: (1) at magnetic fields $`B<20`$ mT, the SNS junctions exhibit a linear dependence of $`R(B)`$, with this feature surviving up to $`T=400`$ mK; (2) there is a range of magnetic fields where $`R=R_N`$; (3) a sharp rise of the resistance results from the transition of the superconducting seas into the normal state. This can be seen by comparing the results presented in Fig. 4a and Fig. 4b. The issue to be addressed now is the behavior of $`R_{\mathrm{SNS}}`$ at zero magnetic field and zero bias. As one can see from Fig. 2, at the lowest temperatures the value of $`\mathrm{\Delta }R_{\mathrm{SNS}}/R_N`$ for all samples is approximately 10%, with $`R_{\mathrm{SNS}}`$ reaching $`R_N`$ at roughly the same dc bias. A similarly large value of the extraconductance $`\mathrm{\Delta }G`$ has been observed earlier in mesoscopic SNS and SN junctions . There are different theoretical models explaining the extraconductance, ranging from the pair-current proximity effect to weak localization effects. These approaches are not necessarily opposed to each other; which one applies is determined by the object under study. The analysis of all these mechanisms showed that our experimental data are best explained by the pair-current proximity effect. If weak localization were the dominant mechanism , we should expect values of $`\mathrm{\Delta }G`$ less than $`10^{-6}\mathrm{\Omega }^{-1}`$ because of the very small mean free path. However, in our experiments we have observed $`\mathrm{\Delta }G\approx 2.2\times 10^{-4}\mathrm{\Omega }^{-1}`$ for the junctions with short wires and $`\mathrm{\Delta }G\approx 3.6\times 10^{-5}\mathrm{\Omega }^{-1}`$ for those with long ones. The conventional proximity effect is known to imply that the Cooper pair amplitude decays exponentially with distance into a normal metal over the characteristic length $`\xi _N=\sqrt{hD/2\pi kT}`$. This effectively decreases the wire length, and hence its resistance. In our case the estimated length is $`\xi _N\approx 150`$ nm at 35 mK, and we get an extraconductance $`\mathrm{\Delta }G=4\times 10^{-4}\mathrm{\Omega }^{-1}`$ at $`L=1.5\mu `$m and $`\mathrm{\Delta }G=2\times 10^{-5}\mathrm{\Omega }^{-1}`$ at $`L=6.0\mu `$m, which is close to the $`\mathrm{\Delta }G`$ experimentally observed at this temperature. We suppose it is the suppression of the proximity effect that is responsible for $`R_{\mathrm{SNS}}`$ reaching $`R_N`$ in the dependences of $`dV/dI`$-$`V`$ (Fig. 2) and for the weak plateau in the magnetic field dependences (Fig. 4). Similar plateaus on the $`dV/dI`$-$`B`$ curves were observed in Ref. . The following increase of the differential resistance may be connected with the penetration of the normal state into the superconducting region. This phenomenon, which was theoretically predicted by Geshkenbein et al. , implies that if the transition temperature changes slowly with distance near an SN interface, then at some value of the current the penetration of the electric field into the superconductor occurs. This is likely to be our case, as the reasons leading to the suppression of the superconductivity in the wires may produce a transitional region with a varying order parameter.
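The numbers in the preceding paragraph can be reproduced as follows (a sketch of our reading of the estimate; the text quotes $`\xi _N`$ and $`\mathrm{\Delta }G`$ but not the intermediate steps). From $`\xi _N=\sqrt{hD/2\pi kT}=\sqrt{\mathrm{\hbar }D/kT}`$, the quoted $`\xi _N\approx 150`$ nm at 35 mK implies $`D\approx 1`$ cm<sup>2</sup>/s, and treating the proximity effect as shortening the normal wire by $`\xi _N`$ at each end reproduces the quoted extraconductances:

```python
hbar, kB = 1.0546e-34, 1.3807e-23          # SI units
T, xi = 0.035, 150e-9                      # K; m (xi_N as quoted in the text)
D = xi ** 2 * kB * T / hbar                # from xi_N = sqrt(hbar*D/(kB*T))
print(f"implied diffusion constant D = {D * 1e4:.2f} cm^2/s")

R_sq, W = 125.0, 0.3e-6                    # Ohm/sq (from above), wire width in m
for L in (1.5e-6, 6.0e-6):                 # wire lengths
    dG = (W / R_sq) * (1.0 / (L - 2 * xi) - 1.0 / L)   # wire shortened by xi at each end
    print(f"L = {L * 1e6:.1f} um:  Delta_G ~ {dG:.1e} Ohm^-1")
```

This returns $`\mathrm{\Delta }G\approx 4\times 10^{-4}\mathrm{\Omega }^{-1}`$ and $`2\times 10^{-5}\mathrm{\Omega }^{-1}`$ for the short and long wires, in agreement with the values quoted above.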
In summary, we have observed for the first time a large ZBA and SGS in perfect SNS junctions in the regime of diffusive transport. For comparison, ZBA was observed without SGS in Ref. . On the other hand, van Huffelen et al. saw SGS in their samples, but no ZBA. There is only one paper where ZBA and SGS occur in the same junction, but that ZBA is quite different and more complicated than the one observed by us. At the lowest temperature a weak ZBA (1%–5%) occurred at low bias ($`V<100\mu `$V); it disappeared together with the SGS when the temperature increased, and after that a new, wider ZBA arose. The difference in the behavior of the junctions can be related to the quality of the NS interfaces. There is no guarantee that any kind of barrier is absent between the superconductor and the normal metal or the semiconductor in the above references. The results of our paper strongly support the fact that ZBA and SGS can be observed in SNS junctions with a perfect NS interface. We are grateful to S. N. Artemenko, E. G. Batyiev and M. V. Entin for helpful discussions. This work has been supported by the EU program PHANTOMS and by grant No. 5-4 of the program “Physics of quantum and wave processes” of the Russian Ministry of Science and Technology.
# Entanglement Teleportation via Werner States \[ ## Abstract Transfer of entanglement and information is studied for quantum teleportation of an unknown entangled state through noisy quantum channels. We find that the quantum entanglement of the unknown state can be lost during the teleportation even when the channel is quantum correlated. We introduce a fundamental parameter of correlation information which dissipates linearly during the teleportation through the noisy channel. Analyzing the transfer of correlation information, we show that the purity of the initial state is important in determining the entanglement of the replica state. \] The nonlocal property of quantum mechanics enables a striking phenomenon called quantum teleportation. By quantum teleportation an unknown quantum state is destroyed at a sending place while its perfect replica appears at a remote place via dual quantum and classical channels . For perfect quantum teleportation, a maximally entangled state, e.g. a singlet state, is required for the quantum channel. However, decoherence effects due to the environment turn the pure entangled state into a statistical mixture and degrade quantum entanglement in the real world. Popescu studied quantum teleportation with a mixed quantum channel and found that even when the channel is not maximally entangled, it yields a fidelity better than any classical communication procedure. For practical purposes, a purification scheme may be applied to the noisy channel state before teleportation . Earlier studies have been confined to the teleportation of single-body quantum states: quantum teleportation of two-level states , $`N`$-dimensional states , and continuous variables . In this Letter, we are interested in the teleportation of two-body entangled quantum states, especially regarding the effects of the noisy environment. Direct transmission of an entangled state was considered in a noisy environment . A possibility to copy pure entangled states was studied . Extending the argument of single-body teleportation, one can easily show that an entangled $`N`$-body state can be perfectly teleported using $`N`$ maximally entangled pairs for the quantum channel. However, for a noisy channel, it is important and nontrivial to know how much of the entanglement is transferred to the replica state and how close the replica state is to the original unknown state, depending on the entanglement of the unknown state and of the channel state. Bennett et al. argued that teleportation is a linear operation for the perfect quantum channel, so that it would also work with mixed states and could be extended to what is now called entanglement swapping . We rigorously found that teleportation is linear even for a mixed channel, considering the maximization of the average fidelity . With this property of linearity, one may conjecture that quantum teleportation preserves the nature of quantum correlation in the unknown entangled state if the channel is quantum-mechanically correlated. We investigate this conjecture. In this Letter, the original unknown state is assumed to be an entangled two-body pure spin-1/2 state and the noisy quantum channel to be represented by a Werner state . We define a measure of entanglement for the two spin-1/2 system and study the transfer of entanglement in the teleportation. We find that for the quantum channel there is a critical value of minimum entanglement required to teleport quantum entanglement.
This minimum entanglement is understood by considering the transfer of entanglement and correlation information. The newly-defined correlation information, which dissipates linearly during the teleportation through the noisy channel, is related to quantum entanglement for a pure state, and may also be due to classical correlation for a mixed state. Analyzing the transfer of correlation information, it is shown that the purity of the initial state is important in determining the entanglement of the replica state. Before considering the entanglement teleportation procedure, we define a measure of entanglement. Consider a density matrix $`\widehat{\rho }`$ and its partial transposition $`\widehat{\sigma }=\widehat{\rho }^{T_2}`$ for a two spin-1/2 system. The density matrix $`\widehat{\rho }`$ is inseparable if and only if $`\widehat{\sigma }`$ has any negative eigenvalues . The measure of entanglement $`(\widehat{\rho })`$ is then defined by $$(\widehat{\rho })=-2\underset{i}{}\lambda _i^{}$$ (1) where $`\lambda _i^{}`$ is a negative eigenvalue of $`\widehat{\sigma }`$. It is straightforward to prove that $`(\widehat{\rho })`$ satisfies the necessary conditions required for every measure of entanglement . The entanglement teleportation is schematically plotted in Fig. 1. The sender’s unknown state $`\widehat{\rho }_{12}`$ is prepared by the source $`S`$. Two independent EPR pairs are generated from $`E`$ (one pair numbered 3 and 5, the other pair 4 and 6 in Fig. 1). When a noisy environment is considered, its effects are attributed to the quantum channels and the perfect EPR pair becomes mixed. By applying random $`SU(2)`$ operations locally to both members of a pair, a general mixed two-body state becomes a highly symmetric Werner state which is $`SU(2)SU(2)`$ invariant . For example, the quantum channel $`Q_1`$ is represented by the density matrix $`\widehat{w}_{35}`$ of purity $`(\mathrm{\Phi }_{35}+1)/2`$ : $$\widehat{w}_{35}=\frac{1}{4}\left(II-\frac{2\mathrm{\Phi }_{35}+1}{3}\underset{n}{}\sigma _n\sigma _n\right)$$ (2) where $`\sigma _n`$ is a Pauli matrix. The parameter $`\mathrm{\Phi }_{35}`$ is related to the measure of entanglement $`_{35}`$, i.e., $`_{35}(\widehat{w}_{35})=\text{max}(0,\mathrm{\Phi }_{35})`$.
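The entanglement measure of Eq.(1) and the relation $`(\widehat{w})=\text{max}(0,\mathrm{\Phi })`$ for the Werner state of Eq.(2) are easy to verify numerically. A minimal sketch (our own illustration, not code from the paper):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def werner(phi):
    """Density matrix of Eq. (2) with parameter Phi."""
    rho = np.eye(4, dtype=complex)
    for s in (sx, sy, sz):
        rho -= (2 * phi + 1) / 3.0 * np.kron(s, s)
    return rho / 4.0

def entanglement(rho):
    """Eq. (1): E(rho) = -2 * (sum of negative eigenvalues of the
    partial transpose with respect to the second spin)."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    lam = np.linalg.eigvalsh(pt)
    return -2.0 * lam[lam < 0].sum()

for phi in (-0.2, 0.0, 0.3, 0.7, 1.0):
    print(f"Phi = {phi:+.1f}:  E = {entanglement(werner(phi)):.4f}"
          f"  max(0, Phi) = {max(0.0, phi):.4f}")
```

For $`\mathrm{\Phi }0`$ the partial transpose has no negative eigenvalue and the state is separable; for $`\mathrm{\Phi }>0`$ the single negative eigenvalue is $`-\mathrm{\Phi }/2`$, giving $`=\mathrm{\Phi }`$.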
To make our discussion simpler, we assume that the two independent quantum channels are equally entangled, i.e., $`_{35}=_{46}_w`$. This assumption can be justified as the two quantum channels are influenced by the same environment. At $`A_i`$, a Bell-state measurement is performed on the particle $`i`$ from $`S`$ and one of the pair, $`i+2`$, in the quantum channel $`Q_i`$. The Bell-state measurement at $`A_i`$ is then represented by a family of projectors $`\widehat{P}_i^\alpha =|\mathrm{\Psi }_i^\alpha \mathrm{\Psi }_i^\alpha |`$ with $`\alpha =1,2,3,4`$, where $`|\mathrm{\Psi }_i^\alpha `$ are the four possible Bell states. The joint measurements at $`A_1`$ and $`A_2`$ project the total density matrix $`\widehat{\rho }`$ onto the Bell states $`|\mathrm{\Psi }_1^\alpha `$ and $`|\mathrm{\Psi }_2^\beta `$, respectively, with the probability $`P_{\alpha \beta }=\mathrm{Tr}\widehat{P}_1^\alpha \widehat{P}_2^\beta \widehat{\rho }`$. The probability $`P_{\alpha \beta }`$ is 1/16, which is a characteristic of the Werner state. After receiving the two-bit information on the measurements through the classical channels $`C_1`$ and $`C_2`$, the unitary transformations $`\widehat{U}_1^\alpha `$ and $`\widehat{U}_2^\beta `$ are performed on particles 5 and 6 accordingly. By the unitary transformations, we reproduce the unknown state at $`B_1`$ and $`B_2`$ if the channel is maximally entangled. In choosing $`\widehat{U}_i^\alpha `$, an important parameter to consider is the fidelity $``$, defined as the distance between the unknown pure state $`\widehat{\rho }_{12}`$ and the replica state $`\widehat{\rho }_{78}`$: $`=\mathrm{Tr}\widehat{\rho }_{12}\widehat{\rho }_{78}`$. If $`\widehat{\rho }_{78}=\widehat{\rho }_{12}`$ then $`=1`$, showing that the replica is exactly the same as the unknown state and the teleportation has been perfect. The four unitary operations are given by the Pauli spin operators for the singlet-state channel: $`\widehat{U}_i^1=\widehat{1},\widehat{U}_i^2=\widehat{\sigma }_x,\widehat{U}_i^3=\widehat{\sigma }_y,\widehat{U}_i^4=\widehat{\sigma }_z`$. For the Werner-state channel, we found that the same set of unitary operations $`\widehat{U}_i^\alpha `$ maximizes the fidelity . The density matrices of both the original unknown state and the replica state can be written in the same form: $$\widehat{\rho }=\frac{1}{4}\left(II+\stackrel{}{a}\stackrel{}{\sigma }I+I\stackrel{}{b}\stackrel{}{\sigma }+\underset{nm}{}c_{nm}\sigma _n\sigma _m\right).$$ (3) The real vectors $`\stackrel{}{a}`$, $`\stackrel{}{b}`$ and the real matrix $`c_{nm}`$ of the replica state $`\widehat{\rho }_{78}`$ are related to $`\stackrel{}{a}_0`$, $`\stackrel{}{b}_0`$, and $`c_{nm}^0`$ of the original state: $`\stackrel{}{a}=(2\mathrm{\Phi }_w+1)\stackrel{}{a}_0/3`$, $`\stackrel{}{b}=(2\mathrm{\Phi }_w+1)\stackrel{}{b}_0/3`$, and $`c_{nm}=(2\mathrm{\Phi }_w+1)(2\mathrm{\Phi }_w+1)c_{nm}^0/9`$. The maximum fidelity $``$ depends on the initial entanglement $`_{12}=(\widehat{\rho }_{12})`$: $$=^c+^q_{12}^2$$ (4) where $`^c=(_w+2)^2/9`$ and $`^q=(2_w+1)(_w-1)/9`$. When the unknown pure state is not entangled, i.e. $`_{12}=0`$, the fidelity is just $`^c`$, which is the maximum fidelity for double teleportation of two independent particles . For a given channel entanglement, the fidelity $``$ decreases monotonically as the initial entanglement $`_{12}`$ increases, because $`^q\le 0`$. To obtain the same fidelity, a more entangled channel is required for a more entangled initial state. This implies that entanglement is fragile to teleport. The measure of entanglement $`_{78}`$ for the replica state $`\widehat{\rho }_{78}`$ is found using its definition (1) as $$_{78}=\text{max}\{0,\frac{1}{9}\left[(2_w^2+2_w-4)+(1+2_w)^2_{12}\right]\}.$$ (5) In Fig. 2, the entanglement $`_{78}`$ is plotted with respect to the entanglement $`_{12}`$ of the unknown state and $`_w`$ of the quantum channel. We find that $`_{78}`$ is nonzero, showing entanglement in the replica state, only when $`_w`$ is larger than a critical value $`_w^c=(3-\sqrt{2_{12}+1})/(2\sqrt{2_{12}+1})`$. If the unknown state is maximally entangled, with $`_{12}=1`$, the quantum channel is required to have entanglement larger than $`_w^c\approx 0.3660`$. It is remarkable that the entanglement teleportation has a nonzero critical value of minimum entanglement $`_w^c`$ which the quantum channel must exceed to transfer any entanglement.
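Eqs.(4) and (5) and the critical channel entanglement can be tabulated directly; the sketch below (our own illustration) confirms numerically that the closed-form $`_w^c`$ is exactly the root of Eq.(5):

```python
import numpy as np

def fidelity(E12, Ew):
    """Eq. (4): F = F^c + F^q * E12^2."""
    Fc = (Ew + 2.0) ** 2 / 9.0
    Fq = (2.0 * Ew + 1.0) * (Ew - 1.0) / 9.0
    return Fc + Fq * E12 ** 2

def E78(E12, Ew):
    """Eq. (5): entanglement of the replica state."""
    return max(0.0, ((2 * Ew ** 2 + 2 * Ew - 4) + (1 + 2 * Ew) ** 2 * E12) / 9.0)

for E12 in (0.25, 0.5, 1.0):
    # closed-form critical channel entanglement quoted above
    Ewc = (3.0 - np.sqrt(2 * E12 + 1)) / (2.0 * np.sqrt(2 * E12 + 1))
    print(f"E12 = {E12}:  Ew_c = {Ewc:.4f},  E78(Ew_c) = {E78(E12, Ewc):.2e},"
          f"  F(Ew_c) = {fidelity(E12, Ewc):.4f}")
```

As expected, $`_{78}`$ vanishes at $`_w=_w^c`$ (up to rounding), and $`_w^c`$ grows as the input state becomes less entangled.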
The measure of total information for the density matrix $`\widehat{\rho }`$ of the two spin-1/2 particles is $`(\widehat{\rho })=\frac{2}{3}\left(4\mathrm{T}\mathrm{r}\widehat{\rho }^2-1\right)`$, which may be decomposed into three parts. Each particle has its own information corresponding to its marginal density matrix, which we call the individual information. The two particles can also share correlation information, which depends on how much they are correlated. The measure of individual information $`^a(\widehat{\rho })`$ for particle $`a`$ is $$^a(\widehat{\rho })=2\mathrm{T}\mathrm{r}_a\left(\widehat{\rho }_a\right)^2-1$$ (6) where $`\widehat{\rho }_a=\mathrm{Tr}_b\widehat{\rho }`$ is the marginal density matrix for particle $`a`$. The measure of individual information $`^b(\widehat{\rho })`$ for particle $`b`$ can be obtained analogously. If the total density matrix $`\widehat{\rho }`$ is represented by $`\widehat{\rho }=\widehat{\rho }_a\widehat{\rho }_b`$, the total system has no correlation. We define the measure of correlation information as $$^c(\widehat{\rho })=(\widehat{\rho })-(\widehat{\rho }_a\widehat{\rho }_b)$$ (7) If there is no correlation between the two particles, the measure of total information is a mere sum of the individual information. On the other hand, the total information is carried only by the correlation information, $`=^c`$, if there is no individual information, as for the singlet state. For a two-body spin-1/2 system, 1 bit is the maximum degree of each individual information, while the correlation information can have a maximum of 2 bits. The correlation information is in general contributed by both quantum entanglement and classical correlation. When a pure entangled state is considered, its entanglement contributes the whole of the correlation information. For a mixed state, on the other hand, the correlation information may also be due to classical correlation. For example, the Werner state with entanglement $``$ has the correlation information $`^c=\alpha +\beta +\gamma ^2`$ with constants $`\alpha `$, $`\beta `$, and $`\gamma `$. The entanglement teleportation transfers the correlation information $`_{12}^c^c(\widehat{\rho }_{12})`$ of the unknown state $`\widehat{\rho }_{12}`$ to the replica state $`\widehat{\rho }_{78}`$. After straightforward algebra, we find that the transferred correlation information $`_{78}^c`$ is given by $$_{78}^c=\kappa ^4_{12}^c,\kappa =\frac{2_w+1}{3}$$ (8) which shows that the correlation information dissipates linearly during the teleportation via the noisy quantum channel. As long as the channel is entangled ($`\frac{1}{3}<\kappa \le 1`$), some correlation information remains in the replica state. Substituting Eq. (3) into Eq. (7), we find that the replica state can have both classical and quantum correlation. Further, if the channel is entangled less than $`_w^c`$, $`_{78}^c`$ is totally determined by classical correlation. The reason why the teleportation does not necessarily transfer the entanglement to the replica state is that the correlation information of the replica state can be determined not only by quantum entanglement but also by classical correlation.
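The information measures of Eqs.(6) and (7) are also straightforward to evaluate numerically. The sketch below (our own illustration, reusing the werner() definition from the previous snippet and writing $`E`$ for $`\mathrm{\Phi }`$) computes the correlation information of Werner states; for these states we find that the quadratic form quoted above evaluates to $`^c=2(2E+1)^2/9`$, i.e. $`(\alpha ,\beta ,\gamma )=(2/9,8/9,8/9)`$ — our own evaluation, given here only as a consistency check:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def werner(E):
    """Werner state of Eq. (2), writing E for Phi (valid for E >= 0)."""
    rho = np.eye(4, dtype=complex)
    for s in (sx, sy, sz):
        rho -= (2 * E + 1) / 3.0 * np.kron(s, s)
    return rho / 4.0

def total_info(rho):
    """I(rho) = (2/3)(4 Tr rho^2 - 1) for two spin-1/2 particles."""
    return (2.0 / 3.0) * (4.0 * np.trace(rho @ rho).real - 1.0)

def corr_info(rho):
    """Eq. (7): I^c = I(rho) - I(rho_a x rho_b)."""
    r = rho.reshape(2, 2, 2, 2)
    rho_a = np.trace(r, axis1=1, axis2=3)   # reduced state of particle a
    rho_b = np.trace(r, axis1=0, axis2=2)   # reduced state of particle b
    return total_info(rho) - total_info(np.kron(rho_a, rho_b))

for E in (0.0, 0.5, 1.0):
    print(f"E = {E}:  I_c = {corr_info(werner(E)):.4f}"
          f"   2(2E+1)^2/9 = {2 * (2 * E + 1) ** 2 / 9:.4f}")
```

Note that the separable $`E=0`$ Werner state still carries $`^c=2/9`$ bits; this is precisely the classical-correlation contribution discussed above.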
We analyze this further by separating the full teleportation into two partial teleportations of entanglement. Consider a series of two partial teleportations of entanglement . After the teleportation of particle 1 of the state $`\widehat{\rho }_{12}`$, particle 2 of $`\widehat{\rho }_{72}`$ is teleported, and the final replica state is $`\widehat{\rho }_{78}`$ in Fig. 1. We calculate the transfer of correlation information for the two teleportations: $$_{72}^c=\kappa ^2_{12}^c,_{78}^c=\kappa ^2_{72}^c.$$ (9) From these linear equations, we can easily recover Eq. (8). Now we investigate the dependence of the correlation information on entanglement and classical correlation. For an entangled channel, $`_w>0`$, the correlation information $`_{72}^c`$ can be written in terms of the entanglement $`_{72}`$ of $`\widehat{\rho }_{72}`$: $$_{72}^c=2\kappa ^2\left(4-3\frac{_{72}+(1-_w)}{_w(2+_w)}_{72}\right)\frac{_{72}+(1-_w)}{_w(2+_w)}_{72}$$ (10) which shows that for $`_w>0`$ the correlation information of the state $`\widehat{\rho }_{72}`$ is due only to entanglement. The partial teleportation $`\widehat{\rho }_{12}\widehat{\rho }_{72}`$ transfers at least some of the initial entanglement as long as the channel is entangled. However, we have already seen that the final replica state $`\widehat{\rho }_{78}`$ may include some classical correlation. The partial teleportation $`\widehat{\rho }_{72}\widehat{\rho }_{78}`$ may bring about no entanglement transfer. Why? The only difference between the two procedures is the purity of their initial states, as $`\widehat{\rho }_{12}`$ is pure while $`\widehat{\rho }_{72}`$ may be mixed. The purity of $`\widehat{\rho }_{72}`$ is determined by the entanglement of the channel $`Q_1`$. To analyze the importance of the initial purity for the entanglement transfer in partial teleportation, we relax, for the moment, the assumption made above that both quantum channels have the same measure of entanglement. The entanglement $`_{78}`$ of the replica state then depends on the entanglement $`_{46}`$ of the quantum channel $`Q_2`$, and on the entanglement $`_{72}`$ and purity $`𝒫_{72}`$ of the state $`\widehat{\rho }_{72}`$. The more $`Q_1`$ is entangled, the purer $`\widehat{\rho }_{72}`$ is. We numerically calculate the dependence of the entanglement $`_{78}`$ on the purity $`𝒫_{72}`$ of the intermediate state $`\widehat{\rho }_{72}`$, as shown in Fig. 3. It clearly shows that the purity of the initial state determines the possibility of the entanglement transfer. This analysis can be applied analogously to the other sequence of partial teleportations, $`\widehat{\rho }_{12}\widehat{\rho }_{18}`$ and $`\widehat{\rho }_{18}\widehat{\rho }_{78}`$. In conclusion, we investigated the effects of the noisy environment on the entanglement and information transfer in entanglement teleportation. The introduction of the measures of entanglement and correlation information enables us to analyze intrinsic properties of the entanglement teleportation. We found that the teleportation always transfers the correlation information, which dissipates linearly through the impure quantum channel. On the other hand, the entanglement transfer is not always possible. The analysis of partial teleportation shows that the purity of an initial state determines the possibility of the entanglement transfer. We explained this nontrivial feature by showing that a mixed state can have quantum and classical correlations simultaneously. Our studies on the entanglement transfer in the noisy environment will contribute to entanglement manipulation, one of the basic schemes in quantum information theory. JL thanks Inbo Kim and Dong-Uck Hwang for discussions.
This work is supported by the Brain Korea 21 project of the Korean Ministry of Education.
# Two new quasars at 𝑧 = 1.90 and 𝑧 = 0.15 from the Calán–Tololo Survey Based on observations collected at the European Southern Observatory, La Silla, Chile ## 1 Introduction Although large surveys aiming at the detection of bright QSOs have been undertaken, such as the photographic Palomar-Green survey (Green et al. Gr86 (1986)), which was carried out in the early seventies, and the more recent Hamburg/ESO survey (e.g., Wisotzki et al. Wi (1996)) and the Calán–Tololo survey (e.g., Maza et al. Ma96 (1996)), serendipitous discoveries are still being reported from time to time. The recent discovery of a bright and nearby QSO less than $`1^{}`$ away from 3C273, one of the best studied quasars (Read et al. ReMi (1998)), is such a case. Here we present the spectra of three new quasars from the Calán–Tololo Survey. Somewhat ironically, these objects were initially selected as candidate cataclysmic variables on the basis of their visual appearance on the objective prism plates (Augusteijn et al. Au99 (1999)). Their blue fluxes and a strong emission line near the H$`\beta `$ rest wavelength led to such misidentifications. The follow-up observations, however, revealed their true nature. ## 2 Observations and Data Reduction The spectroscopic and photometric observations were carried out with various telescopes at ESO, La Silla, Chile (see Tab. 1). The data reduction was performed in the usual manner, including bias level subtraction and flatfielding, using the various IRAF packages. ## 3 Results The inspection of our spectra revealed broad emission lines typical of non-stellar objects (Figs. 1–3). However, an analysis of the FWHMs, in both the DSS frames and our photometric data, yielded point-source characteristics for all three targets, leading us to suspect a QSO nature. Also the measured color indices (Tab. 2) show values typical for quasars. The redshifts have been measured from several emission lines after identifying one reference emission line in each spectrum. The remaining lines were identified afterwards, based on the comparison with the known rest wavelengths, for the one low- and the two intermediate-redshift objects (see Tab. 2). For the low-redshift QSO, CTCV J1322–2101, only an R magnitude could be determined due to the limited spectral range covered. However, assuming an upper color limit of $`V-R`$ ≤ 0.4, this indicates that the object is probably brighter than $`M_V=-23`$. Motivated by the low redshift value of CTCV J1322–2101, additional near-infrared data were obtained in the K band in order to check whether the host galaxy could be detected. No evidence for any extended emission around the object was found at the level of sensitivity reached. Furthermore, the measured FWHM from the photometry was consistent with stellar values and much smaller than the FWHM of the faint galaxy that is located $`14^{\prime \prime }`$ SE of the QSO. Of all the quasars discovered by the Calán–Tololo objective prism survey, only very few with redshifts $`z`$ ≤ 0.3 have been found. Our finding would be the only high-luminosity quasar ($`M_B<-23`$) with $`z`$ ≤ 0.2 in this survey (Maza et al. Ma96 (1996), and references therein). ###### Acknowledgements. Some of the data were obtained, and reduced, during a research stay of CT at the Universidad Católica, Santiago, Chile. This was financially supported by the Deutscher Akademischer Austauschdienst (DAAD) under grant D/94/14720. We would also like to thank the referee, Dr. Lutz Wisotzki, for helpful comments.
# The structural and electronic properties of germanium clathrates ## I Introduction The Si and Ge clathrates can be viewed as covalent fullerene solids, which are composed of three-dimensional networks of fullerene cages connected by face sharing. The silicon clathrate compounds M<sub>x</sub>Si<sub>46</sub> and M<sub>x</sub>Si<sub>136</sub> (M$`=`$Na, K, Rb, and Cs) were first synthesized in 1965 . The structures of semiconductor clathrates can be classified into two cubic structural types with 46 atoms or 136 atoms per unit cell . As shown in Fig.1, the type I clathrate (Clathrate-46) is formed by two smaller 12-face pentagonal dodecahedra (12 five-fold rings, $`I_h`$) and six larger 14-face tetrakaidecahedra (12 five-fold rings and 2 six-fold rings, D<sub>6d</sub>). The structure of the type II clathrate (Clathrate-136) consists of sixteen smaller pentagonal dodecahedra (12 five-fold rings, $`I_h`$) and eight larger hexakaidecahedra (12 five-fold rings and 4 six-fold rings, $`T_d`$). Electronic structure calculations based on different methods have shown that these two open network structures have similar electronic properties . The research interest in semiconductor clathrates stems from several aspects: (1) the possible alteration of the electronic structure and energy gap relative to the standard diamond form , (2) the metal–insulator transition in M<sub>x</sub>Si<sub>136</sub> with different concentrations of metallic impurity , (3) the finding of superconducting behavior in Na<sub>x</sub>Ba<sub>y</sub>Si<sub>46</sub> , (4) candidates for thermoelectric applications , (5) templates to form three-dimensional arrays of nanosized clusters , (6) the similarity in structural and electronic properties between semiconductor clathrates and nanoclusters . There have been many experimental and theoretical studies of Si<sub>46</sub> and Si<sub>136</sub> clathrates and their compounds . The structural, electronic and vibrational properties of Si clathrates have been investigated at various theoretical levels ranging from ab initio to tight-binding and empirical potential . The most interesting result from those calculations is that the band gap is about 0.7 eV higher than that of the diamond phase. Experimental works on Si clathrates include resistivity and magnetization , transport properties , photoemission spectroscopy , NMR , Raman , ESR , neutron scattering etc. In contrast to the intensive studies on silicon clathrates, our current knowledge of germanium clathrates is rather limited. Recently, germanium clathrate compounds such as K<sub>8</sub>Ge<sub>46</sub>, Rb<sub>x</sub>Ge<sub>46</sub>, Na<sub>x</sub>Ge<sub>136</sub>, Cs<sub>8</sub>Na<sub>16</sub> have been synthesized and their structures analyzed with X-ray diffraction . An empirical potential calculation has been performed on pure germanium clathrates . However, there has been no first principles electronic structure calculation, and their fundamental electronic properties are still unclear theoretically. In this work, we report results of a first principles study of the structures and electronic properties of Ge<sub>46</sub> and K<sub>8</sub>Ge<sub>46</sub> clathrates. The equilibrium structures, electronic bands and band gaps, electronic density of states and electron density distribution are obtained and discussed.
## II Computational methods A first principles SCF pseudopotential method is used to perform static calculations of the electronic structures and total energy of Ge<sub>46</sub> and K<sub>8</sub>Ge<sub>46</sub> in ideal clathrate structures with different lattice constants. The ion-electron interaction is modeled by a numerical BHS norm-conserving nonlocal pseudopotential in the Kleinman-Bylander form . The Ceperley-Alder exchange-correlation functional parameterized by Perdew and Zunger is used for the LDA in our program . The kinetic energy cutoff for the plane-wave basis is chosen as 12 Ryd. Ten symmetric k points generated in the Monkhorst-Pack scheme are employed to sample the Brillouin zone. From the static calculations, the equations of state for the ideal crystal structures and the equilibrium lattice constants are obtained. These ideal structures are further optimized by structural minimization via the conjugate gradient technique (CASTEP ). The CASTEP program is based on the plane-wave pseudopotential technique. It relaxes the atomic positions by computing the forces acting on the atoms from the electronic calculation and moving the atoms efficiently in a numerical way . ## III The structures and band gaps of Ge<sub>46</sub> In Fig.2, we present the equations of state for both the Ge<sub>46</sub> clathrate and the diamond phase, obtained from first principles static calculations. We find that the Ge<sub>46</sub> clathrate is a locally stable structure and its energy is only about 0.08 eV per atom higher than that of the diamond phase. For comparison, the $`\beta `$-tin phase of germanium is about 0.25 eV higher in energy than the diamond phase . The low energy of the clathrate phase is a natural consequence of its fourfold coordination and may be partially attributed to the softness of the bond-bending distortion modes . As compared to the diamond structure, the volume per atom in the clathrate phase is increased by about 14.8$`\%`$. All of these results are very close to the previous empirical potential simulation of Ge<sub>46</sub>, in which the change of atomic volume in the type I clathrate is 15.3 $`\%`$ and its energy is 0.071 eV per atom higher than diamond . In a previous theoretical study of bulk silicon and germanium in various phases , considerable similarity in the bonding behavior and phase diagram was found between silicon and germanium. In Table I, we summarize our results on the structural properties and band gaps of the perfect and relaxed Ge<sub>46</sub> and K<sub>8</sub>Ge<sub>46</sub> clathrates and compare them with previous LDA and tight-binding calculations on the Si<sub>46</sub> clathrate . We find that the differences in volume and energy between the clathrate and diamond phases are comparable for germanium and silicon. The ideal clathrate structure of Ge<sub>46</sub> at the equilibrium lattice constant is further optimized with a CASTEP plane-wave pseudopotential calculation. Both the atomic positions and the unit cell parameters are allowed to relax. The SCF pseudopotential code used in the static calculation has also been used to test the total energy of the initial and final structures, and the results agree with the CASTEP calculations. After optimization, the lattice constant of the simple cubic unit cell decreases from 10.43 $`\AA `$ to 10.37 $`\AA `$. The relative atomic positions are only slightly relaxed from the initial configuration. The range of the bond length distribution has also narrowed, from 2.375 $`\AA `$–2.540 $`\AA `$ in the ideal clathrate structure to 2.38 $`\AA `$–2.433 $`\AA `$.
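The step from static total energies to an equilibrium lattice constant is a standard equation-of-state fit. The following is a minimal sketch of how this is typically done; the (volume, energy) data points below are hypothetical placeholders standing in for the outputs of a series of static calculations, not the values computed in this work, and near the minimum a simple parabola suffices for illustration (a Murnaghan or Birch–Murnaghan form is the more common choice in practice):

```python
import numpy as np

# hypothetical (volume, energy) pairs per atom (assumed values for illustration)
V = np.array([21.0, 22.0, 23.0, 24.0, 25.0, 26.0])              # Angstrom^3 / atom
E = np.array([-4.920, -4.960, -4.980, -4.985, -4.975, -4.950])  # eV / atom

c2, c1, c0 = np.polyfit(V, E, 2)       # parabolic fit near the minimum
V0 = -c1 / (2.0 * c2)                  # equilibrium volume per atom
B0 = 2.0 * c2 * V0 * 160.2             # bulk modulus V0*E''(V0), eV/A^3 -> GPa
a0 = (46.0 * V0) ** (1.0 / 3.0)        # clathrate-46: 46 atoms per cubic cell
print(f"V0 = {V0:.2f} A^3/atom,  a0 = {a0:.2f} A,  B0 = {B0:.0f} GPa")
```

The cube-root step uses the fact that the type I clathrate has 46 atoms in its simple cubic cell, so the equilibrium lattice constant follows directly from the fitted volume per atom.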
The electronic band structure calculations are carried out for the Ge<sub>46</sub> clathrate at the equilibrium structure. The near-band-gap band structures for the minimum energy configuration of the Ge<sub>46</sub> clathrate are plotted in Fig.3(a). Both the valence band maximum and the conduction band minimum are located on the $`\mathrm{\Gamma }`$ to $`X`$ line, and they are very close in k space. An indirect gap of 1.46 eV is found. For bulk germanium in the diamond phase, our calculations yield an indirect band gap of 0.40 eV between the $`L`$ and $`\mathrm{\Gamma }`$ points. The underestimation of the band gap is a common feature of LDA calculations. Nevertheless, our current results suggest that the band gap of the Ge<sub>46</sub> phase is about 1 eV higher than that of the diamond phase, which is similar to the 0.7 eV increment in band gap found for the Si<sub>46</sub> and Si<sub>34</sub> clathrates . Furthermore, the electronic band structure of the Ge<sub>46</sub> clathrate is calculated independently via a simple tight-binding model. The tight-binding hopping parameters for germanium are taken from Ref. , and the parameters for nearest-neighboring atoms in the clathrate structure are assumed to be the same as those in diamond. According to the tight-binding calculation, the band gap of diamond is 1.13 eV and the gap of the relaxed clathrate Ge<sub>46</sub> is 2.93 eV. Although the tight-binding method usually overestimates the band gap of a system, the increment of 1.80 eV in the band gap from the tight-binding model is in reasonable agreement with the ab initio calculation. Therefore, we expect that the true increase of the band gap $`\mathrm{\Delta }`$ from the diamond to the Ge<sub>46</sub> clathrate phase might lie between the LDA and TB predictions, i.e., 1.06 eV ≤ $`\mathrm{\Delta }`$ ≤ 1.80 eV. This result suggests that the clathrate materials might be useful for new electronic and optical applications in the future. Since the Ge<sub>46</sub> clathrate structure is essentially a 3D network composed of Ge<sub>20</sub> and Ge<sub>24</sub> cages with face sharing, it is interesting to compare the electronic properties of the individual Ge<sub>20</sub> and Ge<sub>24</sub> cages with those of the clathrate. SCF pseudopotential electronic structure calculations on isolated Ge<sub>20</sub> and Ge<sub>24</sub> clusters in fullerene cages are performed. The clusters are placed in a large simple cubic supercell with a length of 28 a.u. In Fig.4, we present the calculated electronic density of states (DOS) for the Ge<sub>20</sub> and Ge<sub>24</sub> cages along with the DOS of diamond and Ge<sub>46</sub>. A detailed analysis is given in the following. Firstly, we find that most of the peaks in the DOS of Ge<sub>46</sub> can be assigned to Ge<sub>24</sub>, while the DOS of Ge<sub>20</sub> also shows some similarity with that of Ge<sub>46</sub>. The DOS of Ge<sub>24</sub> is closer to that of the clathrate because most atoms in the clathrate are associated with the Ge<sub>24</sub> cage. The presence of the s-p gap is due to the large number of five-fold rings in the cage and clathrate structures. This remarkable similarity implies that the Ge<sub>46</sub> clathrate can be viewed as a solid assembled from small Ge fullerenes, with both geometrical and electronic hierarchy. However, neither the Ge<sub>24</sub> nor the Ge<sub>20</sub> cage is a semiconductor system, and they do not have open band gaps like the Ge<sub>46</sub> clathrate.
The band splitting in Ge<sub>46</sub> can be attributed to the sharing of face atoms by Ge<sub>20</sub> and Ge<sub>24</sub> and to the interaction between neighboring fullerene cages. On the other hand, we can compare the DOS of the clathrate and diamond in Fig.4. Several significant differences can be found. Besides the enlargement of the band gap in the clathrate relative to the diamond phase, the total width of the valence band of Ge<sub>46</sub> (about 12 eV) is narrower than that of diamond (about 13 eV). This phenomenon has also been predicted for the Si<sub>46</sub> clathrate and observed experimentally . We also find a gap opening between the s-like and p-like states in the case of the Ge<sub>46</sub> clathrate as well as the Ge<sub>20</sub> and Ge<sub>24</sub> clusters, while the $`s`$ and $`p`$ states overlap in the DOS of diamond. These remarkable differences in the electronic structure of the clathrate and diamond can be understood from the large proportion of five-fold rings in the clathrate structure. In the diamond lattice, consisting of $`100\%`$ six-fold rings, the $`4s`$ orbitals can form complete antibonding states, which extend to high energy and overlap with the lower $`4p`$-like states. In contrast to diamond, the Ge<sub>46</sub> clathrate is composed of $`87\%`$ five-fold rings and $`13\%`$ six-fold rings. In a five-fold ring, the $`4s`$ orbitals cannot form complete antibonding states, so that the top of the $`4s`$-like states is still lower in energy than the bottom of the $`4p`$-like states, and a gap between the $`s`$ and $`p`$ states opens inside the valence band. The similar effect of the five-fold rings on the $`4p`$ bonding orbitals induces the incompleteness of the $`4p`$-like states. As a consequence, the valence band top of Ge<sub>46</sub> is lower than that of diamond, which corresponds to the narrowing of the valence band width and the broadening of the fundamental gap between the valence and conduction bands. (This incomplete-antibonding argument is illustrated with a one-orbital ring model below.) The novel electronic properties caused by the five-fold rings and the similarity between clathrate and fullerene cages may be explored for future electronic and optical applications.
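The incomplete-antibonding argument referenced above can be made quantitative with a one-orbital nearest-neighbor ring model (a schematic illustration of the ring-parity effect, not the sp<sup>3</sup> tight-binding model used in this work): on an $`N`$-site ring the levels are $`2t\mathrm{cos}(2\pi m/N)`$, so a six-fold ring reaches the full antibonding energy $`2|t|`$ while a five-fold ring only reaches $`2|t|\mathrm{cos}(\pi /5)1.618|t|`$.

```python
import numpy as np

def ring_levels(N, t=-1.0):
    """Single-orbital nearest-neighbor tight-binding ring of N sites;
    eigenvalues are 2*t*cos(2*pi*m/N)."""
    H = np.zeros((N, N))
    for i in range(N):
        H[i, (i + 1) % N] = H[(i + 1) % N, i] = t
    return np.sort(np.linalg.eigvalsh(H))

for N in (5, 6):
    E = ring_levels(N)
    print(f"{N}-fold ring: bonding bottom {E[0]:+.3f}|t|, antibonding top {E[-1]:+.3f}|t|")
```

An odd ring cannot host a fully antibonding (sign-alternating) combination, which is the mechanism invoked above for the lowered top of the $`4s`$-like band and the opening of the s-p gap.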
During the minimization, the geometry of K<sub>8</sub>Ge<sub>46</sub> relaxes away from the perfect clathrate network somewhat more than that of Ge<sub>46</sub>. A first-principles SCF pseudopotential electronic structure calculation has been performed on the relaxed K<sub>8</sub>Ge<sub>46</sub> clathrate. In Fig.3(b), we present the electronic band structure of K<sub>8</sub>Ge<sub>46</sub> near the Fermi level. The system is found to be metallic due to the K dopants. We can further examine in detail the highest valence bands and lowest conduction bands of Ge<sub>46</sub> and K<sub>8</sub>Ge<sub>46</sub>, shown in Fig.3(a) and (b). The valence bands of K<sub>8</sub>Ge<sub>46</sub> are very close to the original bands of Ge<sub>46</sub>, while the conduction band structure is slightly modified upon the inclusion of the K atoms. We also study the difference in electronic properties between the pure and doped systems by comparing their electronic densities of states (DOS). The densities of states for the Ge<sub>46</sub> and K<sub>8</sub>Ge<sub>46</sub> clathrates are compared in Fig.5. The DOS of the valence electrons in K<sub>8</sub>Ge<sub>46</sub> is very close to that in Ge<sub>46</sub>, while the DOS of the conduction electrons of K<sub>8</sub>Ge<sub>46</sub> is somewhat different from that in Ge<sub>46</sub>. On the other hand, the gap between the valence and conduction bands is 0.23 eV narrower in the case of K<sub>8</sub>Ge<sub>46</sub>. The charge transfer and chemical bonding effects are analyzed in the contour plots of the electron density of K<sub>8</sub>Ge<sub>46</sub> on the (100) plane in Fig.6. We have also calculated the charge density distribution of Ge<sub>46</sub> and find it very close to that of K<sub>8</sub>Ge<sub>46</sub>. As shown in Fig.6, there is almost no charge sitting on the K sites. This result is consistent with a previous calculation on Na<sub>2</sub>Ba<sub>6</sub>Si<sub>46</sub> , which found a rather simple charge transfer from Na to the Si skeleton. In that calculation, some hybridization is found between Ba and Si, since the Ba atom has low-lying $`5d`$ orbitals . This difference is also reflected in the DOS and other electronic properties. A high DOS peak at the Fermi energy is found for Na<sub>2</sub>Ba<sub>6</sub>Si<sub>46</sub>, and that material is superconducting . In comparison, the DOS at the Fermi level of K<sub>8</sub>Ge<sub>46</sub> is moderate and it is not superconducting.
## V Conclusions
We have used a first-principles SCF pseudopotential method to investigate the structural and electronic properties of the Ge<sub>46</sub> and K<sub>8</sub>Ge<sub>46</sub> clathrates. The main conclusions of this work are as follows: (1) The germanium clathrate Ge<sub>46</sub> is found to be a locally stable structure, with its energy only slightly higher than that of the diamond phase and its atomic volume about 13$`\%`$ larger than in the diamond phase. (2) The Ge<sub>46</sub> clathrate shows an indirect band gap along the $`\mathrm{\Gamma }`$-$`X`$ direction that is about 1 eV larger than the band gap of the diamond phase. The pentagonal rings in the clathrate structure cause the valence band structure of the Ge<sub>46</sub> clathrate to be similar to that of the Ge<sub>24</sub> fullerene cage. The open covalent network structure contributes to the large band gap. (3) The K<sub>8</sub>Ge<sub>46</sub> clathrate is metallic with a moderate density of states.
The valence band structure and DOS are similar to those of pure Ge<sub>46</sub>, while the conduction bands are modified by the K dopants. An almost complete charge transfer from the K sites to the Ge framework is found in the K<sub>8</sub>Ge<sub>46</sub> clathrate. ###### Acknowledgements. This work is supported by the U.S. Army Research Office (Grant DAAG55-98-1-0298) and the Department of Energy (Grant DEFG02-96ER45560). The authors thank O. Zhou for helpful discussions. We acknowledge computational support from the North Carolina Supercomputer Center.
# The Charge Form Factor of the Neutron from the Reaction $`{}^{2}\vec{\mathrm{H}}(\vec{e},e^{\prime }n)p`$
## Abstract
We report on the first measurement of spin-correlation parameters in quasifree electron scattering from vector-polarized deuterium. Polarized electrons were injected into an electron storage ring at a beam energy of 720 MeV. A Siberian snake was employed to preserve longitudinal polarization at the interaction point. Vector-polarized deuterium was produced by an atomic beam source and injected into an open-ended cylindrical cell, internal to the electron storage ring. The spin-correlation parameter $`A_{ed}^V`$ was measured for the reaction $`{}^{2}\vec{\mathrm{H}}(\vec{e},e^{\prime }n)p`$ at a four-momentum transfer squared of 0.21 (GeV/$`c`$)<sup>2</sup>, from which a value for the charge form factor of the neutron was extracted. Although the neutron has no net electric charge, it does have a charge distribution. Precise measurements, in which thermal neutrons from a nuclear reactor are scattered from atomic electrons, indicate that the neutron has a positive core surrounded by a region of negative charge. The actual distribution is described by the charge form factor $`G_E^n`$, which enters the cross section for elastic electron scattering. It is related to the Fourier transform of the charge distribution and is generally expressed as a function of $`Q^2`$, the square of the four-momentum transfer. Data on $`G_E^n`$ are important for our understanding of the nucleon and are essential for the interpretation of the electromagnetic multipoles of nuclei, e.g. the deuteron. Since a practical target of free neutrons is not available, experimentalists have mostly resorted to (quasi)elastic scattering of electrons from unpolarized deuterium to determine this form factor. The shape of $`G_E^n`$ as a function of $`Q^2`$ is relatively well known from high-precision elastic electron-deuteron scattering. However, in this case the cross section is dominated by scattering from the proton and, moreover, is sensitive to nuclear-structure uncertainties and reaction-mechanism effects. Consequently, the absolute scale of $`G_E^n`$ still carries a systematic uncertainty of about 50%. Many of the aforementioned uncertainties can be significantly reduced through the measurement of electronuclear spin observables. The scattering cross section with both longitudinally polarized electrons and a polarized target for the $`{}^{2}\vec{\mathrm{H}}(\vec{e},e^{\prime }N)`$ reaction can be written as $$S=S_0\left\{1+P_1^dA_d^V+P_2^dA_d^T+h(A_e+P_1^dA_{ed}^V+P_2^dA_{ed}^T)\right\}$$ (1) where $`S_0`$ is the unpolarized cross section, $`h`$ the polarization of the electrons, and $`P_1^d`$ ($`P_2^d`$) the vector (tensor) polarization of the target. $`A_e`$ is the beam analyzing power, $`A_d^{V/T}`$ are the vector and tensor analyzing powers, and $`A_{ed}^{V/T}`$ the vector and tensor spin-correlation parameters. The target analyzing powers and spin-correlation parameters depend on the orientation of the target spin. The polarization direction of the deuteron is defined by the angles $`\mathrm{\Theta }_d`$ and $`\mathrm{\Phi }_d`$ in the frame where the $`z`$-axis is along the direction of the three-momentum transfer ($`\mathbf{q}`$) and the $`y`$-axis is defined by the vector product of the incoming and outgoing electron momenta. $`A_{ed}^V(\mathrm{\Theta }_d=90^{\circ },\mathrm{\Phi }_d=0^{\circ })`$ contains an interference term in which the effect of the small charge form factor is amplified by the dominant magnetic form factor .
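To make the role of the terms in Eq. (1) concrete, the sketch below evaluates $`S`$ for flipped target polarizations; the numerical values of the polarizations and analyzing powers are arbitrary placeholders, not results of this experiment:

```python
# Eq. (1): spin-dependent cross section for 2H(e,e'N), schematic evaluation.
def polarized_cross_section(S0, h, P1, P2, Ae, AdV, AdT, AedV, AedT):
    return S0 * (1.0 + P1 * AdV + P2 * AdT
                 + h * (Ae + P1 * AedV + P2 * AedT))

# With pure vector polarization (P2 = 0) and the other observables switched
# off, flipping the sign of P1 isolates the h * P1 * AedV interference term.
common = dict(S0=1.0, h=1.0, P2=0.0, Ae=0.0, AdV=0.0, AdT=0.0, AedT=0.0,
              AedV=0.1)                       # placeholder value of A_ed^V
S_plus  = polarized_cross_section(P1=+0.8, **common)
S_minus = polarized_cross_section(P1=-0.8, **common)

asym = (S_plus - S_minus) / (S_plus + S_minus)
print(f"constructed asymmetry = {asym:.3f}")  # equals h * |P1| * AedV here
```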
At present, there is a worldwide effort to measure the neutron charge form factor by scattering polarized electrons from neutrons bound in deuterium and <sup>3</sup>He nuclei, where either the target is polarized or the polarization of the ejected neutron is measured. Experiments with external beams have been carried out at Mainz and MIT. In the present paper we describe a measurement performed at NIKHEF (Amsterdam), which uses a stored polarized electron beam and a vector-polarized deuterium target. The experiment was performed with a polarized gas target internal to the AmPS electron storage ring. An atomic beam source (ABS) was used to inject a flux of $`3\times 10^{16}`$ deuterium atoms/s (in two hyperfine states) into the feed tube of a cylindrical storage cell cooled to 75 K. The cell had a diameter of 15 mm and was 60 cm long. An electromagnet was used to provide a guide field of 40 mT over the storage cell, which oriented the deuteron polarization axis perpendicular to $`\mathbf{q}`$ in the scattering plane. A doublet of steering magnets around the target region compensated for the deflection of the electron beam by the guide field. In addition, two sets of four beam scrapers preceding the internal target were used to reduce events originating from beam-halo scattering off the cell. By alternating two high-frequency transitions in the ABS, the vector polarization of the target ($`P_1^d=\sqrt{\frac{3}{2}}(n_+-n_-)`$, with $`n_\pm `$ the fraction of deuterons with spin projection $`\pm 1`$) was varied every 10 seconds. Compared to our previous experiments with tensor-polarized deuterium , this target setup increased the figure of merit by more than one order of magnitude, with a typical target thickness of $`1\times 10^{14}`$ deuterons/cm<sup>2</sup>. Polarized electrons were produced by photo-emission from a strained-layer semiconductor cathode (InGaAsP) prepared to the negative-electron-affinity surface state with cesium and oxygen. The transverse polarization of the electrons was measured by Mott scattering at 100 keV. After linear acceleration to 720 MeV, the electrons were injected and stacked in the AmPS storage ring. In this way, beam currents of more than 100 mA could be stored, with a lifetime in excess of 15 minutes. Every 5 minutes the remaining electrons were dumped and the ring was refilled, after reversal of the electron polarization at the source. The polarization of the stored electrons was maintained by setting the spin tune to 0.5 with a strong solenoidal field (using the well-known Siberian snake principle). Optimization of the longitudinal polarization at the interaction point was achieved by varying the orientation of the spin axis at the source and by measuring the polarization of the stored electrons with a polarimeter based on spin-dependent Compton backscattering. Scattered electrons were detected in the large-acceptance magnetic spectrometer Bigbite, which has a momentum acceptance from 250 to 720 MeV/$`c`$ and a solid angle of 96 msr (see Fig. 1). Kinematics were chosen such that $`G_E^n`$ was probed near its maximum (as determined from Ref. ), resulting in the most sensitive measurement of $`G_E^n`$ for a given statistical accuracy. Consequently, the electron detector was positioned at a central angle of $`40^{\circ }`$, with an acceptance for the electron scattering angle of $`35^{\circ }\le \theta _e\le 45^{\circ }`$, resulting in a central value of $`Q^2=0.21(\mathrm{GeV}/c)^2`$.
Neutrons and protons were detected in a time-of-flight (TOF) system made of two subsequent and identical scintillator arrays. Each array consisted of four 160 cm long, 20 cm high, and 20 cm thick plastic scintillator bars stacked vertically. Each bar was preceded by two ($`\delta E`$ and $`\mathrm{\Delta }E`$) plastic scintillators (3 and 10 mm thick, respectively) of equal length and height, used to identify and/or veto charged particles. Each of the 24 scintillators was read out at both ends to obtain position information along the bars (resolution $`\sim 4`$ cm) and good coincidence timing resolution ($`\sim 0.5`$ ns). The TOF detector was positioned at a central angle of $`58^{\circ }`$ and covered a solid angle of about 250 msr. Protons with kinetic energies in excess of 40 MeV were detected with an energy resolution of about 10%. The $`e^{\prime }N`$ trigger was formed by a coincidence between the electron-arm trigger and a hit in any one of the eight TOF bars. By simultaneously detecting protons and neutrons in the same detector, one can construct asymmetry ratios for the two reaction channels $`{}^{2}\vec{\mathrm{H}}(\vec{e},e^{\prime }p)n`$ and $`{}^{2}\vec{\mathrm{H}}(\vec{e},e^{\prime }n)p`$, in this way minimizing systematic uncertainties associated with the deuteron ground-state wave function, the absolute beam and target polarizations, and possible dilution by cell-wall background events. An experimental asymmetry ($`A_{exp}`$) can be constructed via $$A_{exp}=\frac{N_+-N_-}{N_++N_-}$$ (2) where $`N_\pm `$ is the number of events that pass the selection criteria, with $`hP_1^d`$ either positive or negative. $`A_{exp}`$ for the $`{}^{2}\vec{\mathrm{H}}(\vec{e},e^{\prime }p)n`$ channel, integrated up to a missing momentum of 200 MeV/$`c`$, is shown in Fig. 2 as a function of time for part of the run. These data were used to determine the effective product of beam and target polarization by comparison with the predictions of the model of Arenhövel *et al.* . This advanced, non-relativistic model includes the effects of final-state interactions, meson-exchange currents, isobar configurations, and relativistic corrections, and has been shown to provide a good description of quasifree proton knockout from tensor-polarized deuterium. Finite-acceptance effects were taken into account with a Monte Carlo code that interpolated the model predictions over a dense grid of calculations spanning the full kinematical range and detector acceptance. In this way, the effective product of beam and target polarization (i.e. including the effect of background events and electron depolarization) was determined to be $`0.42`$, with a statistical precision of better than 1% and a systematic uncertainty of 3%, mainly limited by the knowledge of the proton form factors. Neutrons were identified by a valid hit in one $`E`$-scintillator or in two neighboring $`E`$-scintillators (to allow for events that deposit energy in two neighboring bars) and no hits in the preceding ($`\delta E`$ and $`\mathrm{\Delta }E`$) scintillators, which resulted in an 8- to 12-fold veto requirement. Minimum-ionizing particles and photons were rejected by a cut on the time of flight, resulting in a clean sample of neutrons with only a small proton contamination.
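The statistical treatment of the counting asymmetry of Eq. (2) is standard; a minimal sketch (the event counts are invented, for illustration only):

```python
import math

# Eq. (2) with its Poisson error: sigma_A = sqrt((1 - A^2) / N_total).
def asymmetry(n_plus, n_minus):
    n_total = n_plus + n_minus
    a = (n_plus - n_minus) / n_total
    return a, math.sqrt((1.0 - a * a) / n_total)

a, sigma = asymmetry(n_plus=5300, n_minus=4700)
print(f"A_exp = {a:.3f} +/- {sigma:.3f}")
```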
The spin-correlation parameter was obtained from the experimental asymmetry by correcting for the contribution of protons misidentified as neutrons (less than 1%, as determined from a calibration with the reaction <sup>1</sup>H($`e,e^{\prime }p`$)) and for the product of beam and target polarization, as determined from the $`{}^{2}\vec{\mathrm{H}}(\vec{e},e^{\prime }p)n`$ channel. The main effect of cell-wall events is a reduction of the effective target polarization; the effects therefore largely cancel in the asymmetry ratio. We studied the cell-wall contribution by measuring with an empty storage cell. The background contribution to the $`(e,e^{\prime }p)n`$ and $`(e,e^{\prime }n)p`$ channels amounted to 5 $`\pm `$ 1%, stable over the entire run. A possible dependence on the target density was investigated by injecting various fluxes of unpolarized hydrogen into the cell and measuring quasifree nucleon knockout events. The target-density dependence was found to be negligible at ABS operating conditions. Figure 3 shows the spin-correlation parameter for the $`{}^{2}\vec{\mathrm{H}}(\vec{e},e^{\prime }n)p`$ channel as a function of missing momentum. The data are compared to the predictions of the full model of Arenhövel *et al.* , assuming the dipole parameterization for the magnetic form factor of the neutron and the Paris nucleon-nucleon ($`NN`$) potential, folded over the detector acceptance with our Monte Carlo code for various values of $`G_E^n`$. Full model calculations are required for a reliable extraction of $`G_E^n`$. This can be seen from Fig. 3, as a Plane Wave Impulse Approximation (PWIA) calculation for $`G_E^n=0`$ would result in $`A_{ed}^V(90^{\circ },0^{\circ })=0`$, independent of $`p_m`$. We extract $`G_E^n(Q^2=0.21(\mathrm{GeV}/c)^2)=0.066\pm 0.015\pm 0.004`$, where the first (second) error indicates the statistical (systematic) uncertainty. The systematic error is mainly due to the uncertainty in the correction for misidentified protons and in the orientation of the holding field (and thus the contribution of the spin-correlation parameter $`A_{ed}^V(0^{\circ },0^{\circ })`$ to our experimental asymmetry). We have investigated the influence of the $`NN`$ potential on the calculated spin-correlation parameters using Arenhövel's full treatment. The results for $`A_{ed}^V(90^{\circ },0^{\circ })`$ using the Paris, Bonn, Nijmegen, and Argonne V<sub>14</sub> $`NN`$ potentials differ by less than 5% for missing momenta below 200 MeV/$`c`$. In Fig. 4 we compare our experimental result to other data obtained with spin-dependent electron scattering. Note that all other data have been obtained from a comparison to PWIA predictions, and thus without taking reaction-mechanism effects into account. The figure also shows the results from Ref. , where the upper and lower boundaries of the ‘shaded’ area correspond to their results obtained with the Nijmegen and Reid Soft Core potentials, respectively. Our result favors their extraction of $`G_E^n`$ obtained with the Nijmegen potential. By comparison to the predictions of the QCD-VM model of Gari and Krümpelmann, with and without the coupling of the $`\varphi `$-meson to the nucleon included (which these authors identify with the effect of strangeness in the neutron), our datum favors the prediction without strangeness in the neutron.
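Schematically, the extraction step amounts to inverting a one-parameter family of acceptance-averaged model curves; a minimal sketch, in which the model grid is an invented monotonic placeholder standing in for the actual calculations of Arenhövel *et al.*:

```python
import numpy as np

# Trial G_E^n values and the corresponding acceptance-averaged model
# asymmetries (placeholder numbers; a real analysis would use the full
# model folded with the Monte Carlo acceptance).
gen_grid = np.array([0.00, 0.03, 0.06, 0.09, 0.12])
a_model  = np.array([0.000, -0.012, -0.024, -0.036, -0.048])

a_meas, a_err = -0.026, 0.006    # placeholder measured asymmetry

# Invert the monotonically decreasing model curve by linear interpolation.
best = np.interp(a_meas, a_model[::-1], gen_grid[::-1])
lo   = np.interp(a_meas + a_err, a_model[::-1], gen_grid[::-1])
hi   = np.interp(a_meas - a_err, a_model[::-1], gen_grid[::-1])
print(f"G_E^n = {best:.3f}  (1-sigma range {lo:.3f} .. {hi:.3f})")
```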
In summary, we have presented the first measurement of the sideways spin-correlation parameter $`A_{ed}^V(90^{\circ },0^{\circ })`$ in quasifree electron-deuteron scattering, from which we extract the neutron charge form factor at $`Q^2=0.21`$ (GeV/$`c`$)<sup>2</sup>. When combined with the known value and slope of $`G_E^n`$ at $`Q^2=0`$ and the elastic electron-deuteron scattering data from Ref. , this result puts strong constraints on $`G_E^n`$ up to $`Q^2=0.7`$ (GeV/$`c`$)<sup>2</sup>. We would like to thank the NIKHEF and Vrije Universiteit technical groups for their outstanding support and Prof. H. Arenhövel for providing the calculations. This work was supported in part by the Stichting voor Fundamenteel Onderzoek der Materie (FOM), which is financially supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO), the National Science Foundation under Grant No. PHY-9504847 (Arizona State Univ.), the US Department of Energy under Grant No. DE-FG02-97ER41025 (Univ. of Virginia), and the Swiss National Foundation.
# Topological Pattern Formation
## Abstract
We provide an informal discussion of pattern formation in a finite universe. The global size and shape of the universe is revealed in the pattern of hot and cold spots in the cosmic microwave background. Topological pattern formation can be used to reconstruct the geometry of space, just as gravitational lensing is used to reconstruct the geometry of a lens. Contribution to the conference proceedings for the "Cosmological Topology in Paris" (CTP98) workshop. We have all come to accept that spacetime is curved. Yet the idea that space is topologically connected still meets with resistance. One is no more exotic than the other. In the true spirit of Einstein's revolution, gravity is a theory of geometry, and geometry has two facets: curvature and topology. The big bang paradigm forces us to consider the topology of the universe. As best as we can ascertain, when the universe was created both gravity and quantum mechanics were at work. Any theory which incorporates gravity and quantum mechanics must assign a topology to the universe. String theory is currently the most powerful model which naturally hosts gravity in a unified framework. It should not be overlooked that in string theory there are six extra dimensions, all of which must be topologically compact. In order to create a viable low-energy theory, the internal dimensions are taken to be finite Calabi-Yau manifolds. We naturally wonder why a universe would be created with six compact dimensions and four infinite ones. A more equitable beginning might create all spatial dimensions compact and of comparable size. Six dynamically squeeze down while the other three inflate. In fact, it is dynamically possible for inflation of $`3`$-space to be kinetically driven by the contraction of internal dimensions . Whatever mechanism stabilizes the internal dimensions at a small size would likewise stabilize the external dimensions at an inversely large size. Topology need not be at odds with inflation. Another interesting possibility is that the topology itself naturally selects the expansion of three dimensions and the contraction of the other six. The topology can create boundary contributions to an effective cosmological constant. The sign and magnitude of the vacuum energy depend on the topology, and it is conceivable that it selects three dimensions for expansion and six for contraction in a kind of inside/out inflation. In the wake of the recent observational evidence that there is a cosmological constant today, the pursuit of these calculations is worthwhile. Perhaps we are still inflating as the vacuum energy tracks the topology scale. Our quest to measure the large-scale curvature of the universe may also produce a measurement of the topology. (For a review and a collection of papers see .) Topological lensing of the cosmic microwave background (CMB) results in multiple images of the same points in different directions. Pattern formation in the universe's hot and cold spots reveals the global topology . Just as with gravitational lensing, the location, number and distribution of repeated points will allow the reconstruction of the geometry. The circles of Ref. are specific collections of topologically lensed points. We demonstrate topological pattern formation with the Thurston space, named in homage to Thurston . The space corresponds to $`m003(-2,3)`$ in the SnapPea census . A CMB map of the sky does not immediately reveal the geometry.
If we scan the sky for correlations between points we can draw out the hidden pattern. There are an infinite number of possible correlated spheres. The sphere of fig. 1 is antipody: the correlation of every point on the sky with its opposite point, $$A(\widehat{n})=\frac{\delta T(\widehat{n})}{T}\frac{\delta T(-\widehat{n})}{T}.$$ (1) In an infinite universe, light originating from opposite directions would be totally uncorrelated. The ensemble-average antipodal correlation would produce a monopole with no structure. In a finite universe, by contrast, light received from opposite directions may in fact have originated from the same location, having simply taken different paths around the finite cosmos. The antipody map would then show structure as it caught the recurrence of near or identical sources. Again, the analogy with gravitational lensing is apparent. We estimate antipody following the method of Ref. . We take the correlation between two points to be the correlation they would have in an unconnected, infinite space given their minimum separation. The curvature is everywhere negative, and the spectrum of fluctuations is taken to be flat and Gaussian, even in the absence of inflation. This is justified on a compact, hyperbolic space since, according to the tenets of quantum chaos, the amplitudes of quantum fluctuations are drawn from a Gaussian random ensemble with a flat spectrum, consistent with random matrix theory. To find the minimum distance we move the points under comparison back into the fundamental domain using the generators of the compact manifold. The result for the Thurston space with $`\mathrm{\Omega }_o=0.3`$ is shown in fig. 1. Notice the interesting arcs of correlated points. Clearly there is topological lensing at work. Arcs were also found under antipody for the Weeks space in Ref. . If antipody were a symmetry of the space, then at least some circles of correlated points, representing the intersection of copies of the surface of last scatter with itself, would have been located , as were found for the Best space . Antipody is by definition symmetric under a rotation by $`\pi `$, and so the back of the sphere is identical to the front. There are an infinite number of correlated spheres which can be used to systematically reconstruct the geometry of the fundamental domain. Another example is the correlation of one point in the sky with the rest of the sphere, $$C_P(\widehat{n})=\frac{\delta T(\widehat{n}_P)}{T}\frac{\delta T(\widehat{n})}{T}.$$ (2) This selects out recurrent images of the one point. In an unconnected, infinite space, the sphere would show only one spot, namely the correlation of the point with itself. In fig. 2 we have a kaleidoscope of images providing detailed information on the underlying space. There is a trifold symmetry in fig. 2. Notice that there is a band of points moving from the middle upward vertically which then bends over to the left, and that this band repeats twice, making an overall three-pronged swirl emanating from the middle of the figure. Since this correlated sphere is not symmetric under $`\pi `$, we also show the back of the sphere in fig. 3. A different pattern emerges, but still with the trifold symmetry. There is a three-leaf arrangement of spots in the center of the figure. We need the improved resolution and signal-to-noise of the future satellite missions MAP and Planck Surveyor to observe topological pattern formation. High-resolution information will be critical in distinguishing fictitious correlations from real spots.
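A toy version of the antipody statistic of Eq. (1) is easy to prototype; the sketch below uses a simple latitude-longitude grid and an uncorrelated Gaussian sky as stand-ins for a real pixelization and a real map:

```python
import numpy as np

# Toy antipody map: correlate each pixel with its antipode,
# (theta, phi) -> (pi - theta, phi + pi).
n_theta, n_phi = 64, 128
rng = np.random.default_rng(0)
dT = rng.standard_normal((n_theta, n_phi))        # placeholder delta T / T

i_anti = n_theta - 1 - np.arange(n_theta)         # reflect in theta
j_anti = (np.arange(n_phi) + n_phi // 2) % n_phi  # rotate phi by pi

A = dT * dT[np.ix_(i_anti, j_anti)]               # Eq. (1), pixel by pixel

# For this simply connected (uncorrelated) toy sky the statistic averages
# to ~0; a multiply connected space would leave structured arcs in A.
print(f"mean antipodal correlation: {A.mean():+.4f}")
```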
Beyond the CMB, a finite universe would sculpt the distribution of structure on the largest scales. Even if we never see repeated images of galaxies or clusters of galaxies, the physical distribution of matter could be shaped by the shape of space. The topological identifications select discrete modes and the modes themselves can in turn trace the identifications. The result is an overall web of primordial fluctuations in the gravitational potential specific to the finite space. A web-like distribution of matter would then be inherent in the initial primordial spectrum . This is different from the structureless distribution of points one would expect in an infinite cosmos. We close with the more fanciful possibility that even time is compact. If time is compact, every event would repeat precisely as set by the age of the universe. Only a universe which is able to naturally return to its own infancy could be consistent with a closed time loop. A big crunch which feeds another big bang could allow our entire history to repeat. The same galaxies form and the same stars and planets and people. Even a proponent of free will can see that at the very least we would be limited in the choices we are or are not free to make. We would live out the same lives, make the same choices, make the same mistakes. Of course, in a quantum creation of the universe, different galaxies would form in different locations composed of different stars and new planets. We would not be here but chances are, someone would. Even if our CMB sky does not look like the Thurston pattern, perhaps someone’s does. JL thanks the participants and organizers of CTP98.
# Beta-Skeletons Have Unbounded Dilation
## 1 Introduction
A number of authors have studied questions of the dilation of various geometric graphs, defined as the maximum ratio between shortest path length and Euclidean distance. For instance, Chew showed that the rectilinear Delaunay triangulation has dilation at most $`\sqrt{10}`$ and that, by placing points around the unit circle, one can find examples for which the Euclidean Delaunay triangulation has dilation arbitrarily close to $`\pi /2`$. In the journal version of his paper , Chew added a further result: the graph obtained by Delaunay triangulation for a convex distance function based on an equilateral triangle has dilation at most $`2`$. Chew's conjecture that the Euclidean Delaunay dilation is constant was proved by Dobkin et al. , who showed that the Delaunay triangulation has dilation at most $`\phi \pi `$ where $`\phi `$ is the golden ratio $`(1+\sqrt{5})/2`$. Keil and Gutwin further improved this bound to $`\frac{2\pi }{3\mathrm{cos}(\pi /6)}\approx 2.42`$. Das and Joseph showed that these constant dilation bounds hold for a wide variety of planar graph construction algorithms satisfying the following two simple conditions: * Diamond property. There is some angle $`\alpha <\pi `$, such that for any edge $`e`$ in a graph constructed by the algorithm, one of the two isosceles triangles with $`e`$ as a base and with apex angle $`\alpha `$ contains no other site. This property gets its name because the two triangles together form a diamond shape, depicted in Figure 1(a). * Good polygon property. There is some constant $`d`$ such that for each face $`f`$ of a graph constructed by the algorithm, and any two sites $`u`$, $`v`$ that are visible to each other across the face, one of the two paths around $`f`$ from $`u`$ to $`v`$ has dilation at most $`d`$. Figure 1(b) depicts a graph violating the good polygon property. Intuitively, if one tries to connect two vertices by a path in a graph that stays near the straight line segment between them, there are two natural types of obstacle one encounters: the line segment one is following may cross an edge of the graph, or a face of the graph; in either case the path must go around the obstacle. The two properties above imply that neither type of detour can force the dilation of the pair of vertices to be high. For a survey of further results on dilation, see . Our interest here is in another family of geometric graphs, the $`\beta `$-skeletons , which have been of recent interest for their use in finding edges guaranteed to take part in the minimum weight triangulation . As a special case, $`\beta =1`$ gives the Gabriel graph, a subgraph of the Delaunay triangulation, and $`\beta =2`$ gives the relative neighborhood graph, a supergraph of the minimum spanning tree. The value $`\beta `$ is a parameter that can be taken arbitrarily close to zero; for any point set, as $`\beta `$ approaches zero, more and more edges are added to the $`\beta `$-skeleton until eventually one forms the complete graph. Therefore it seems reasonable to guess that, for sufficiently small $`\beta `$, the $`\beta `$-skeleton should have bounded dilation.
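Since dilation is the central quantity throughout, it may help to pin it down in code; a brute-force sketch (Dijkstra for graph distances, then a maximum over all vertex pairs):

```python
import heapq, math

def dilation(points, edges):
    """Maximum over vertex pairs of graph distance / Euclidean distance,
    with edges weighted by their Euclidean lengths."""
    n = len(points)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        w = math.dist(points[u], points[v])
        adj[u].append((v, w))
        adj[v].append((u, w))

    def dijkstra(src):
        dist = [math.inf] * n
        dist[src] = 0.0
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    best = 1.0
    for s in range(n):
        d = dijkstra(s)
        for t in range(s + 1, n):
            best = max(best, d[t] / math.dist(points[s], points[t]))
    return best

# Unit square with one diagonal: the worst pair is the missing diagonal.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(dilation(pts, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))  # sqrt(2)
```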
Such a result would also fit well with Kirkpatrick and Radke's motivation for introducing $`\beta `$-skeletons in the study of "empirical networks": problems such as modeling the probability of the existence of a road between cities . In this paper, we show that this is surprisingly not the case. For any $`\beta `$, we find point sets for which the $`\beta `$-skeleton has arbitrarily high dilation. Our construction uses fractal curves closely related to the Koch snowflake. We show that the point set can be chosen in such a way that the $`\beta `$-skeleton forms a path with this fractal shape; the fact that the curve has fractal dimension greater than one then implies that the graph shortest path between its endpoints has unbounded length. ## 2 Beta-skeletons The $`\beta `$-skeleton of a set of points is a graph, defined to contain exactly those edges $`ab`$ such that no point $`c`$ forms an angle $`acb`$ greater than $`\mathrm{sin}^{-1}(1/\beta )`$ (if $`\beta >1`$) or $`\pi -\mathrm{sin}^{-1}\beta `$ (if $`\beta <1`$). Equivalently, if $`\beta >1`$, the $`\beta `$-skeleton can be defined in terms of the union $`U`$ of two circles, each having $`ab`$ as a chord and having diameter $`\beta d(a,b)`$. Edge $`ab`$ is included in this graph exactly when $`U`$ contains no points other than $`a`$ and $`b`$. If $`\beta =1`$, an edge $`ab`$ is included in the $`\beta `$-skeleton exactly when the circle having $`ab`$ as diameter contains no points other than $`a`$ and $`b`$. The 1-skeleton is also known as the Gabriel graph . If $`0<\beta <1`$, there is a similar definition in terms of the intersection $`I`$ of two circles, each having $`ab`$ as a chord and having diameter $`d(a,b)/\beta `$. Edge $`ab`$ is included in the $`\beta `$-skeleton exactly when $`I`$ contains no points other than $`a`$ and $`b`$. Figure 2 depicts these regions for $`\beta =\sqrt{2}`$ (union of circles), $`\beta =1`$ (single circle), and $`\beta =1/\sqrt{2}`$ (intersection of circles). As noted above, $`\beta `$-skeletons were originally introduced for analyzing empirical networks. Gabriel graphs and $`\beta `$-skeletons have many other applications in computational morphology (combinatorial methods of representing shapes). Gabriel graphs can also be used to construct minimum spanning trees, since the Gabriel graph contains the MST as a subgraph. More recently, various researchers have shown that $`\beta `$-skeletons (for certain values of $`\beta >1`$) form subgraphs of the minimum weight triangulation . Su and Chang have described a generalization of Gabriel graphs, the $`k`$-Gabriel graphs, in which an edge is present if its diameter circle contains at most $`k-1`$ other points. One can similarly generalize $`\beta `$-skeletons to $`k`$-$`\beta `$-skeletons. Our results can be made to hold as well for these generalizations as for the original graph classes. ## 3 Fractals and dilation Our construction showing that beta-skeletons have unbounded dilation consists of a fractal curve with a recursive definition similar to that of a Koch snowflake. For a given angle $`\theta `$, define the polygonal path $`P(\theta ,1)`$ by following a path of five equal-length line segments: one horizontal, one at angle $`\theta `$, a second horizontal, a segment at angle $`-\theta `$, and a third horizontal.
We then more generally define the graph $`P(\theta ,k)`$ to be a path of $`5^k`$ line segments, formed by replacing the five segments of $`P(\theta ,1)`$ with congruent copies of $`P(\theta ,k-1)`$, scaled so that the two endpoints of the path are at distance one from each other. Figure 3 shows three levels of this construction. In the drawing of Figure 3, the orientations of the five copies of $`P(\theta ,k-1)`$ alternate along the overall path, so that the horizontal copies are in the same orientation as the overall path and the other two copies are close to upside-down, but this choice of orientation is not essential to our construction. Note that, if we denote the length of $`P(\theta ,k)`$ by $`\ell _k=\ell _k(\theta )`$, then $`\ell _1>1`$ and $`\ell _k=\ell _1^k`$. ###### Lemma 1 $`P(\theta ,k)`$ is contained within a diamond shape having the endpoints of the path as its diagonal, and with angle $`\theta `$ at those two corners of the diamond. Proof: This follows by induction, as shown in Figure 4, since the five such diamonds containing the five copies of $`P(\theta ,k-1)`$ fit within the larger diamond defined by the Lemma. ###### Lemma 2 If $`\theta <(\pi -\mathrm{sin}^{-1}\beta )/2`$, $`P(\theta ,k)`$ is the $`\beta `$-skeleton of its vertices. Proof: We show that, if $`a`$ and $`b`$ are non-adjacent vertices in the path, then there is some $`c`$ forming an angle of at least $`\pi -\mathrm{sin}^{-1}\beta `$. We can assume that $`a`$ and $`b`$ are in different copies of $`P(\theta ,k-1)`$, since otherwise the result would hold by induction. But no matter where one places two points in different copies of the small diamonds containing the copies of $`P(\theta ,k-1)`$ (depicted in Figure 4), we can choose one of the three interior vertices of $`P(\theta ,1)`$ as the third point $`c`$ forming an angle $`acb\ge \pi -2\theta `$. The result follows from the assumed inequality relating $`\theta `$ to $`\beta `$. For instance, the graphs $`P(\pi /4,k)`$ depicted in Figure 3 are Gabriel graphs of their vertices. A more careful analysis shows that larger values of $`\theta `$ still result in a $`\beta `$-skeleton: if the orientations of the copies of $`P(\theta ,k-1)`$ that form $`P(\theta ,k)`$ are chosen carefully, $`P(\theta ,k)`$ is contained in only half the diamond of Lemma 1, and angle $`acb`$ in the proof above can be shown to be $`\pi -3\theta /2`$. ###### Theorem 1 For any $`\beta >0`$ there is a $`c>0`$ such that $`\beta `$-skeletons of $`n`$-point sets have dilation $`\mathrm{\Omega }(n^c)`$. Proof: We have seen that we can choose a $`\theta `$ such that the graphs $`P(\theta ,k)`$ are $`\beta `$-skeletons. Since the endpoints of the path are at distance one from each other, the dilation of $`P(\theta ,k)`$ is $`\ell _k=\ell _1^k`$. Each such graph has $`n=5^k+1`$ vertices and dilation $`\ell _1^k=n^{\mathrm{log}_5\ell _1-o(1)}`$. Since $`\ell _1>1`$, $`\mathrm{log}_5\ell _1>0`$. ## 4 Upper Bounds We have shown a lower bound of $`\mathrm{\Omega }(n^c)`$ for the dilation of $`\beta `$-skeletons, where $`c`$ is a constant depending on $`\beta `$ and approaching zero as $`\beta `$ approaches zero.
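The lower-bound construction is short enough to verify directly; a sketch that builds $`P(\theta ,k)`$, confirms by brute force that it is its own $`\beta `$-skeleton (as Lemma 2 above guarantees), and reports the realized dilation $`\ell _1^k`$ and exponent $`\mathrm{log}_5\ell _1`$ (the parameter choices are illustrative, with $`\theta `$ chosen safely inside the hypothesis of Lemma 2):

```python
import math
from itertools import combinations

def fractal_path(theta, k):
    """Vertices of P(theta, k) with its endpoints at distance 1."""
    dirs = [0.0]                       # one segment = a straight path
    for _ in range(k):                 # replace each segment by the generator
        dirs = [a + d for a in dirs for d in (0.0, theta, 0.0, -theta, 0.0)]
    step = (3.0 + 2.0 * math.cos(theta)) ** (-k)    # segment length
    pts, x, y = [(0.0, 0.0)], 0.0, 0.0
    for a in dirs:
        x, y = x + step * math.cos(a), y + step * math.sin(a)
        pts.append((x, y))
    return pts

def skeleton_edge(pts, i, j, beta):
    """Angle criterion for beta <= 1: no witness angle above the threshold."""
    thresh = math.pi - math.asin(beta)
    dij = math.dist(pts[i], pts[j])
    for m in range(len(pts)):
        if m in (i, j):
            continue
        a, b = math.dist(pts[m], pts[i]), math.dist(pts[m], pts[j])
        cos_c = (a * a + b * b - dij * dij) / (2.0 * a * b)
        if math.acos(max(-1.0, min(1.0, cos_c))) > thresh:
            return False
    return True

beta, theta, k = 1.0, math.pi / 5, 2    # theta < (pi - asin(beta)) / 2
pts = fractal_path(theta, k)
edges = {(i, j) for i, j in combinations(range(len(pts)), 2)
         if skeleton_edge(pts, i, j, beta)}
print("skeleton == path:", edges == {(i, i + 1) for i in range(len(pts) - 1)})
ell1 = 5.0 / (3.0 + 2.0 * math.cos(theta))
print(f"dilation = {ell1**k:.4f},  exponent log5(l1) = {math.log(ell1, 5):.4f}")
```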
This behavior of having length a fractional power of $`n`$ is characteristic of fractal curves; is it inherent in $`\beta `$-skeletons or an artifact of our fractal construction? We now show the former by proving an upper bound on dilation of the same form. To do this, we define an algorithm for finding short paths in $`\beta `$-skeletons. As a first step towards such an algorithm, we use the following simple recursion: to find a path from $`s`$ to $`t`$, test whether edge $`st`$ exists in the $`\beta `$-skeleton. If so, use that edge as the path. If not, some $`r`$ forms a large angle $`srt`$; concatenate the results of recursively finding paths from $`s`$ to $`r`$ and from $`r`$ to $`t`$. For $`\beta \le 1`$, $`sr`$ and $`rt`$ are shorter than $`st`$, so this algorithm always terminates; we assume throughout the rest of the section that $`\beta \le 1`$. We can represent the path it finds as a tree of triangles, all having an angle of at least $`\pi -\mathrm{sin}^{-1}\beta `$, rooted at triangle $`srt`$ (Figure 5). The hypotenuse of each triangle in this tree is equal to one of the two shorter sides of its parent. Note that the triangles may overlap geometrically, or even coincide; we avoid complications arising from this possibility by only using the figure's combinatorial tree structure. We will bound the length of the path found by this algorithm by manipulating trees of this form. For any similarly defined tree of triangles, we define the boundary length of the tree by the following formula: $$|T|=\mathrm{dist}(s,t)+\sum _{\mathrm{\Delta }\in T}\left(\mathrm{perim}(\mathrm{\Delta })-2\,\mathrm{hypotenuse}(\mathrm{\Delta })\right).$$ In other words, we sum the lengths of all the short sides of the triangles, and subtract the lengths of all non-root hypotenuses. If the tree forms a non-self-intersecting polygon, such as the one shown in the figure, this is the distance from $`s`$ to $`t`$ "the long way" around the polygon's perimeter. ###### Lemma 3 For the tree defined by the algorithm above, $`|T|`$ is the length of the path constructed by the algorithm. Proof: This can be shown by induction using the fact that the path from $`s`$ to $`t`$ is formed by concatenating the paths from $`s`$ to $`r`$ and from $`r`$ to $`t`$. Our bound will depend on the number of leaves in the tree produced above. However, this number may be very large, larger than $`n`$, because the same vertex of our input point set may be involved in triangles in many unrelated parts of the tree. Our first step is to prune the tree to produce one that still corresponds in a sense to a path in the $`\beta `$-skeleton, but with a good bound on the number of leaves. ###### Lemma 4 For any $`\beta \le 1`$, we can find a tree like the one described above, with at most $`2n`$ leaves, for which $`|T|`$ is the length of some path in the $`\beta `$-skeleton from $`s`$ to $`t`$. Proof: Define a "leaf vertex" to be the vertex opposite the hypotenuse of a leaf triangle in $`T`$. We prune the tree one step at a time until each vertex appears at most twice as a leaf vertex. At each step, the path corresponding to $`T`$ (and with length at most $`|T|`$) will visit all the leaf vertices in tree order (as well as possibly visiting some other vertices coming from interior nodes of the tree). Suppose some vertex $`v`$ appears three or more times.
Then we prune $`T`$ by removing all subtrees descending from the path between the first and last appearance of $`v`$ (those occurring between the two appearances in tree order), and we shorten the corresponding path by removing the portion of it between these two appearances of $`v`$. At each step, the change to $`|T|`$ comes from subtracting some triangle short-side lengths corresponding to the subtrees removed from $`T`$, as well as adding some hypotenuses of triangles from the same subtrees. Each subtracted side length that is not cancelled by an added hypotenuse corresponds to one of the edges removed from the path, so the total reduction in $`|T|`$ is at most as great as the total reduction in the length of the path, and the invariant that $`|T|`$ bounds the path length is maintained. After this pruning, there will be no leaves between the two appearances of $`v`$, and no new leaves are created elsewhere in the tree, so the invariant that the path visits the leaf vertices in order is also maintained. This pruning process removes at least one appearance of $`v`$, and so can be repeated at most finitely many times before terminating. We use induction on the number of leaves to prove bounds on $`|T|`$. The following lemma forms the base case: ###### Lemma 5 Let $`T`$ be a tree of triangles, all having an angle of at least $`\theta >\pi /2`$ opposite the edge connecting to the parent in the tree, with exactly one leaf triangle, and scaled so that the hypotenuse of the root triangle has length $`1`$. Then $`|T|\le -1/\mathrm{cos}\theta `$. Proof: Since $`|T|`$ does not depend on the ordering of tree nodes, we can assume without loss of generality that each node's child is on the left. For any such tree, we can increase $`|T|`$ by performing a sequence of the following steps: (1) If any triangle has an angle greater than $`\theta `$, change it to one having an angle exactly equal to $`\theta `$, without changing any other triangle shapes. (2) If any triangle has a ratio of left to right side lengths less than some value $`C`$, split it into two triangles by adding a vertex on the right side. (3) Add a child to the leaf of $`T`$. These steps are depicted in Figure 6. The result of this sequence of transformations is the concatenation of many triangles with angles equal to $`\theta `$, very short left sides, and right sides with length close to that of the hypotenuse. In the limit we get a curve from $`s`$ to $`t`$ formed by moving in a direction forming an angle $`\pi -\theta `$ with the direction to $`t`$, namely the logarithmic spiral (Figure 7). Integrating the distance traveled on this spiral against the amount by which the distance to $`t`$ is reduced shows that it has the length bound claimed in the lemma. Since we reach this limit by a monotonically increasing sequence of tree lengths, starting with any finite one-leaf tree, any finite tree must have length less than this limit. More generally, we have the following result. ###### Lemma 6 Let $`T`$ be a tree of triangles, all having an angle of at least $`\theta >\pi /2`$ opposite the edge connecting to the parent in the tree, with $`k`$ leaf triangles, and scaled so that the hypotenuse of the root triangle has length $`1`$. Then $`|T|\le (-1/\mathrm{cos}\theta )^{1+\mathrm{log}_2k}`$. Proof: We prove the result by induction on $`k`$; Lemma 5 forms the base case.
If there is more than one leaf in $`T`$, form a smaller tree $`T^{\prime }`$ by removing from $`T`$ each path from a leaf to the nearest ancestor with more than one child. These paths are disjoint, and each such removal replaces a subtree with one leaf by the edge at the root of the subtree, so using Lemma 5 again shows that $`|T|\le -|T^{\prime }|/\mathrm{cos}\theta `$. Each leaf in $`T^{\prime }`$ has two leaf descendants in $`T`$, so the number of leaves in $`T^{\prime }`$ is at most $`k/2`$ and the result follows. This, finally, provides a bound on $`\beta `$-skeleton dilation. ###### Theorem 2 For $`\beta <\sqrt{3}/2\approx 0.866025`$, any $`\beta `$-skeleton has dilation $`O(n^c)`$, where $`c<1`$ is a constant depending on $`\beta `$ and going to zero in the limit as $`\beta `$ goes to zero. Proof: We have seen (Lemma 4) that we can connect any pair of vertices in the skeleton by a path with length bounded by $`|T|`$, where $`T`$ is a tree of triangles in which all angles are at least $`\pi -\mathrm{sin}^{-1}\beta `$, and where $`T`$ has at most $`2n`$ leaves. By Lemma 6, the length of such a tree is at most $$\left(\frac{-1}{\mathrm{cos}(\pi -\mathrm{sin}^{-1}\beta )}\right)^{1+\mathrm{log}_22n}=O\left(n^{\mathrm{log}_2\frac{-1}{\mathrm{cos}\left(\pi -\mathrm{sin}^{-1}\beta \right)}}\right)=O\left(n^{-\frac{1}{2}\mathrm{log}_2(1-\beta ^2)}\right)$$ which has the form specified in the statement of the theorem. Figure 8 shows the growth of the exponent $`c`$ as a function of $`\beta `$. For $`\sqrt{3}/2\le \beta \le 1`$, the theorem does not give the best bounds; a bound of $`n-1`$ on dilation can be proven using the fact that the skeleton contains the minimum spanning tree. ## Acknowledgements Work supported in part by NSF grant CCR-9258355 and by matching funds from Xerox Corp. Thanks to Marshall Bern for suggesting the problem of $`\beta `$-skeleton dilation.
# Tiling with almost-BPS junctions.
## Abstract
In the light of recent studies of BPS triple junctions in the Wess-Zumino model we describe techniques to construct infinite lattices using similar junctions. It is shown that whilst these states are only approximately locally BPS they are nevertheless stable to small perturbations, giving a stationary tiling of the plane. Domain walls have found their way into many areas of physics, ranging from the small scales of solid state laboratories, where they can appear as crystal dislocations, to the large scales of cosmology. Such objects can come about when there are vacuum states which are disconnected yet degenerate in energy; one can also envision walls appearing when the distinct vacua have different energies, but then the walls are not static. The case of interest here is supersymmetric field theory, where the distinct vacua come about from a polynomial superpotential. As we shall see, one is able to construct states where two or more walls meet at a junction to create a configuration which saturates a Bogomol'nyi bound. These junctions have been investigated recently in the context of the Wess-Zumino model by two groups , pointing out that these junctions preserve $`\frac{1}{4}`$ of the $`N=1`$ supercharges. One may also motivate this study from the viewpoint of supersymmetric QCD. There it is found that gluino condensates can form, leading to effective degrees of freedom satisfying a Wess-Zumino model . In this case distinct vacua are present, giving domain walls which are BPS states . An intriguing possibility then arises due to the existence of junctions: one may make a network of domain walls in a similar manner to string networks . Here, however, we shall see that these networks of domain walls are only locally approximately BPS states rather than full BPS states. In doing this we shall come across a wonderful variety of patterns, including some of the Euclidean tilings . Although a connection is not clear, such arrangements are familiar in fluid mechanics and the physics of granular layers . In these Faraday experiments the fluids are driven by an external force which can generate instabilities, such as convection instabilities. A consequence of this is the generation of diverse cell patterns, from regular to quasi-patterns. We follow the approach of Gibbons and Townsend in our choice of model ; specifically, the model under investigation is the bosonic sector of the Wess-Zumino model, reduced to 2+1 dimensions. The Lagrangian is defined by $$\mathcal{L}=\frac{1}{4}\partial _\mu \overline{\varphi }\partial ^\mu \varphi -|W^{\prime }(\varphi )|^2.$$ (1) For static configurations we may use the energy density to derive a Bogomol'nyi equation.
This is facilitated by the introduction of the complex coordinate $`z=x+iy`$, whereupon the energy density becomes $$\mathcal{E}=\left|\frac{\partial \varphi }{\partial z}-e^{i\alpha }\overline{W^{\prime }}\right|^2+2\mathrm{Re}\left(e^{-i\alpha }\frac{\partial W}{\partial z}\right)+\frac{1}{2}J(z,\overline{z}).$$ (2) Here we have introduced an arbitrary phase $`\alpha `$, and $`J(z,\overline{z})`$ is defined by $$J(z,\overline{z})=\left(\frac{\partial \varphi }{\partial \overline{z}}\frac{\partial \overline{\varphi }}{\partial z}-\frac{\partial \varphi }{\partial z}\frac{\partial \overline{\varphi }}{\partial \overline{z}}\right).$$ (3) We may derive a Bogomol'nyi bound by defining the quantities $$Q=\frac{1}{2}\int dx\,dy\,J(z,\overline{z}),$$ (4) $$T=2\int dx\,dy\,\frac{\partial W}{\partial z},$$ (5) leading to $$E=\int dx\,dy\,\mathcal{E}\ge Q+|T|,$$ (6) which is saturated by solutions of the first-order equation $$\frac{\partial \varphi }{\partial z}=e^{i\alpha }\overline{W^{\prime }}.$$ (7) We now focus on models where the scalar field potential energy density, $`|W^{\prime }(\varphi )|^2`$, contains isolated, degenerate minima. This allows for the presence of domain walls; in particular, if there are more than two minima there will be more than one type of domain wall, giving the possibility of a wall junction. The question of what types of walls exist was investigated in , showing that not all 4D field theories with more than two disconnected vacua admit junction solutions, in contradiction to a statement made in . We note here that an existence proof for a class of junction solutions of the second order equations has been provided in . Junctions may be thought of as the meeting point of a number of domain walls. Each domain wall interpolates between two vacua, and one may associate a complex topological charge with it , $$T_{ab}=2e^{i\,\mathrm{arg}(W(\varphi _b)-W(\varphi _a))}|W(\varphi _b)-W(\varphi _a)|,$$ (8) with BPS walls having a tension $`\mu _{ab}=|T_{ab}|`$. This formula for the tensions will be of great use later, as the wall tensions are needed to describe the form of the junctions. To be specific we now choose a superpotential which leads to three distinct vacua, placed where $`\varphi `$ is a cube root of unity (1, $`\omega `$, $`\omega ^2`$), $$W(\varphi )=\varphi -\frac{1}{4}\varphi ^4.$$ (9) In this case the tensions are all equal, leading to a triple junction with $`120^{\circ }`$ separating the sectors. One possible lattice using these junctions consists of hexagonal domains, pictured in Fig. 1. One may ask whether such a network could be a BPS state ; that this is not the case may be seen from two perspectives. Firstly, we note that the lattice can at best be perturbatively stable, meaning it is not the lowest energy state for the given boundary data, and so is not BPS. If one of the domains were to tunnel to a different vacuum, liberating an energy of approximately $`3l\mu `$, the network would not recover its original form; the newly tunnelled phase would propagate through the entire lattice. Secondly, one may try to create such a network by solving the BPS equations in the plaquettes of the dual lattice (the dotted triangles in Fig. 1), then gluing the plaquettes together. This would clearly lead to a network of BPS junctions.
However, we note that each junction has a winding associated to it, and that the three junctions it connects to have the opposite winding, with $`\varphi `$ being proportional to $`z`$ or $`\overline{z}`$ ($`z`$ here is measured from the centre of the junction). We see then that while one junction may satisfy (7), its three neighbours satisfy an anti-BPS relation, $$\frac{\partial \varphi }{\partial \overline{z}}=e^{i\beta }\overline{W^{\prime }}.$$ (10) A BPS junction would therefore be connected to three anti-BPS junctions, so no global coordinate system exists that could make the whole lattice BPS. We also note that such a construction would lead to discontinuities in the derivatives of $`\varphi `$ where the dual plaquettes meet, because one is solving different equations in each of them. To establish whether a network could exist we performed a numerical simulation of the second order Lagrange equations and searched for a hexagonal structure, using a lattice with periodic boundary conditions. Once a tiling had been found we tested its stability against local fluctuations by randomly perturbing the field, with the expected result: static lattices exist so long as the domain sizes are greater than the width of the walls. An example is given below in Fig. 2. There remains, however, the possibility of non-local instabilities other than tunnelling. In the thin wall limit the walls approach the BPS limit, and then one may expand or shrink a plaquette without changing the angles of the junctions, so keeping the energy the same. These non-local zero modes do not survive outside the thin wall limit: as a plaquette is shrunk, the walls which make up its boundary deviate more from the BPS limit, increasing their tension and causing the plaquette to collapse further. This has been tested numerically; such modes were not excited by the local random fluctuations, but did occur when the initial data contained one plaquette smaller than the rest. An interesting property of the domain walls in this model is that a wall interpolating between two vacua is affected by the other vacua, making the field trace out a curve in $`\varphi `$ space which is not straight. An even more remarkable property of the BPS domain walls is that these curves are straight when plotted in the superpotential, $`W`$, plane . To see this, consider a domain wall which is independent of $`y`$, so that derivatives with respect to $`z`$ reduce to derivatives with respect to $`x`$ in (7). Then multiplying both sides of (7) by $`\frac{\partial \overline{\varphi }}{\partial x}`$ yields $$\left|\frac{\partial \varphi }{\partial x}\right|^2=e^{-i\alpha }\frac{\partial W}{\partial x},$$ (11) where $`\alpha `$ is now found to be the argument of the topological charge on the wall . This shows that the imaginary component of $`e^{-i\alpha }W(\varphi )`$ is a constant, leading to BPS domain walls tracing out straight lines in the $`W`$ plane. This is illustrated below in Fig. 3, where the hexagonal array of Fig. 2 has been mapped to the $`\varphi `$ and $`W`$ planes. The density of dots in this figure represents the volume of physical space occupying that region of field space; the cusps (vacua) are the most dense, as we would expect given that most of the field sits in a vacuum. The straight lines joining the vacua in Fig. 3(b) confirm the almost-BPS nature of the domain walls. Using what we have learned above we may now be more adventurous in the choice of superpotential. Using the quintic superpotential, $`W(\varphi )=\varphi -\frac{1}{5}\varphi ^5`$, we find that the vacua for $`\varphi `$ are the fourth roots of unity.
In this case there are six domain walls. Using (8), the four walls connecting neighbouring vacua have tension $`\frac{8\sqrt{2}}{5}`$ and the two walls connecting opposite vacua have tension $`\frac{16}{5}`$, heavier by a factor of $`\sqrt{2}`$. The allowed junctions are found by considering the ways in which the vacua can be joined by domain walls. Here there are essentially only two types of junction, one three-junction and one four-junction, with the angles involved being found by drawing the vacuum connectivity in the $`W`$ plane. We know that the tensions of the walls are proportional to the lengths of the connections in $`W`$ space between the vacua (8); in fact these connections may be used as a vector diagram of forces. Together the domain walls making up a junction form a closed polygon when mapped to the $`W`$ plane, which is precisely what is required of a vector diagram of tensions if there is to be no net force. As an example we consider the junctions of the quintic superpotential in Fig. 4. A triple junction is calculated by connecting up three vacua in the $`W`$ plane and translating this into a closed vector diagram. These vectors then make up the tensions in the junction, allowing the angles to be found trivially. One may initially expect the four-junction to be unstable to splitting into two triple junctions. This is not the case, as it would require a heavy wall to interpolate between them, which is disfavoured energetically. We undertook a simulation of this potential, looking for regular tessellations and testing their stability as before. One pattern which can be formed uses only the triple junction, leading to a familiar 'bathroom tiling' consisting of octagons and squares . This can be made more intricate by including the four-junction, a result pictured in Fig. 5. We end this catalogue by considering an order-seven superpotential, $`W(\varphi )=\varphi -\frac{1}{7}\varphi ^7`$. This offers the possibility of a rich variety of patterns, as there are 15 domain walls connecting the vacua, which are the sixth roots of unity. The walls have three different tensions, occurring in the ratio 2:$`\sqrt{3}`$:1. The triple vertex which has all its walls at different tensions forms a junction with angles of $`120^{\circ }`$, $`150^{\circ }`$ and $`90^{\circ }`$, which is easily found using the aforementioned method. One may use this to generate a tiling consisting of dodecagons, hexagons and squares, as illustrated in Fig. 6. Here we may also generate an attractive tiling using three of the possible vertices: the triple, quadruple and sextuple junctions. The result is shown in Fig. 7.
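The tension and angle bookkeeping above follows directly from (8); a minimal sketch for the quintic and order-seven superpotentials (the function names are ours, and the angle routine assumes three tensions that satisfy the triangle inequality, as here):

```python
import cmath, math
from itertools import combinations

def vacua(p):
    """Vacua of W(phi) = phi - phi^p / p: the (p-1)-th roots of unity."""
    return [cmath.exp(2j * math.pi * k / (p - 1)) for k in range(p - 1)]

def tension(p, va, vb):
    """Eq. (8): mu_ab = 2 |W(phi_b) - W(phi_a)|."""
    W = lambda v: v - v ** p / p
    return 2.0 * abs(W(vb) - W(va))

def junction_angles(t1, t2, t3):
    """Opening angles of a triple junction from the closed force triangle."""
    def ang(ti, tj, tk):   # angle between the walls with tensions ti, tj
        return math.degrees(math.acos((tk * tk - ti * ti - tj * tj)
                                      / (2.0 * ti * tj)))
    return ang(t1, t2, t3), ang(t2, t3, t1), ang(t3, t1, t2)

for p in (5, 7):
    ts = sorted({round(tension(p, a, b), 6)
                 for a, b in combinations(vacua(p), 2)})
    print(f"p = {p}: distinct wall tensions {ts}")

t1, t2, t3 = sorted({round(tension(7, a, b), 6)
                     for a, b in combinations(vacua(7), 2)})
print("order-seven mixed triple junction:", junction_angles(t1, t2, t3))
# tensions in ratio 1 : sqrt(3) : 2  ->  angles 90, 150 and 120 degrees
```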
The networks of domain walls have been shown not to be BPS states; nevertheless, junctions do locally satisfy the BPS (or anti-BPS) equation approximately, with the violation becoming smaller as the domain size increases. Acknowledgements: I would like to thank Gary Gibbons for suggesting this project and for his invaluable comments. Conversations with Nick Manton, Jesus Moreno and Paul Townsend are also gratefully acknowledged. This work was supported by PPARC.
## 1 Introduction In recent years, much attention was centred on the possibility that dark matter, and in particular its baryonic component, consists of astrophysical objects, generically termed ”Massive Astrophysical Compact Halo Objects” (MACHOs), with mass $`10^{-8}M_{}<M<10^2M_{}`$ . Direct searches for these objects can, at best, reach the solar neighborhood. In order to detect them further out, it was proposed by Paczynski (1986) to search for dark objects by gravitational microlensing . Microlensing is an application of the General Relativity effect of gravitational lensing in which the separation between the produced images is too small to be resolved ($`\delta \theta 10^{-3}`$ arcsec); nevertheless – owing to the motion of the lens – it produces a time-dependent light amplification of the source which is observable. In fact, when a compact object passes near the line of sight to a background star, the luminosity of this star increases, giving rise to a characteristic luminosity curve (see Fig. 1). The galactic structure is not well understood, owing to uncertainty about the actual content of dark matter and its distribution, so the first goal is to map the MACHO dark matter distribution both in the galactic disk, through microlensing observations towards the galactic bulge (of course, observations towards the galactic bulge are not easy from the Northern Hemisphere, but possible) and the spiral arms (Sagittarius, $`\gamma `$ Scuti, $`\beta `$ Scuti), and in the galactic haloes of the Milky Way, M31, M33 and dwarf galaxies. To date, many microlensing events have been detected towards the galactic bulge and the LMC . These results have allowed a better understanding of the galactic structure. For example, it has recently been suggested that the bulge does not have simple spherical symmetry but also has a barred structure . Microlensing searches by the MACHO and EROS groups, looking for the magnification of LMC stars by MACHOs, have now been underway for several years. The very low microlensing probability requires several million stars to be monitored daily in order to observe significant luminosity increases. Some events have been reported, but fewer than expected in the standard halo model. Other experiments (DUO and OGLE), as well as MACHO itself, are monitoring stars of the Galactic bulge in order to look for microlensing by stars in the Galactic disk and in the bulge itself. These appear to find more events than expected . The AGAPE and Columbia-Vatican (VATT) collaborations look for microlensing in the direction of the M31 galaxy. This could yield very useful information on the haloes of both our own Galaxy and M31. A pilot observation at Pic-du-Midi Observatory by AGAPE has given promising results . Our project is to extend observations towards all targets visible from the Potenza Toppo Observatory in order to detect a larger number of microlensing events. Another goal is to search for planets or binary lensing events, for which very accurate photometry is required. The Toppo Telescope has the right features to contribute substantially to this aim. This will also allow us to investigate in detail the initial stellar mass function as well as the presence of planets (both Jupiter-like and Earth-like) .
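The characteristic luminosity curve mentioned above is the standard point-lens (Paczynski) amplification. Below is a minimal sketch of generating one; the formula is the standard one, while the parameter values ($`t_0`$, $`t_E`$, $`u_0`$) are illustrative choices of ours.

```python
import numpy as np

def paczynski_amplification(t, t0, tE, u0):
    """Standard point-lens amplification A(u), with u(t) the projected
    lens-source separation in units of the Einstein radius."""
    u = np.sqrt(u0**2 + ((t - t0) / tE)**2)
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

t = np.linspace(-60.0, 60.0, 601)                     # days
A = paczynski_amplification(t, t0=0.0, tE=20.0, u0=0.1)
print(A.max())   # ~10 for u0 = 0.1: a symmetric, achromatic bump
```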
## 2 Gravitational lensing The light deflection due to a gravitational field (in the weak-field approximation and the geometrical-optics approximation) is described by the lens equation $$\eta =\frac{D_s}{D_d}\xi -D_{ds}\widehat{\alpha }(\xi ),$$ (1) with $$\widehat{\alpha }(\xi )=d^2\xi ^{}\frac{4G\mathrm{\Sigma }(\xi ^{})}{c^2}\frac{\xi \xi ^{}}{\left|\xi \xi ^{}\right|^2},$$ (2) where $`D`$ denotes Euclidean distance on galactic scales and angular diameter distance on extragalactic scales. The Schwarzschild lens is the simplest lens system, consisting of a point-like deflector. In this case, the deflection angle is: $$\widehat{\alpha }=\frac{4GM}{c^2b},$$ (3) where $`M`$ is the deflector mass and $`b`$ the impact parameter. ## 3 Pixel Lensing In a dense field, many stars contribute to any pixel of the CCD camera at the focal point of the telescope. If an unresolved star is sufficiently magnified, the increase of the light flux can be measured on the pixel. Therefore, instead of monitoring individual stars, we follow the luminous intensity of the pixels. Then all stars in the field, and not only the few resolved ones, are candidates for a microlensing event, so the event rate is potentially much larger. Of course, only the brightest stars will be amplified enough to become detectable above the fluctuations of the background, unless the amplification is very high, which occurs very seldom. In a galaxy like M31, however, this is compensated by the very high density of stars. The first step in the analysis of light curves is to define the baseline, or background flux $`\varphi _{background}`$. This is done by taking the minimum of a sliding average over 10 consecutive points. One can define the beginning of a ”bump” (a significant variation of luminosity on a suitable group of pixels, connected with the size of the average PSF) if 3 consecutive points lie 3$`\sigma `$ above $`\varphi _{background}`$, and the end when 2 consecutive points fall below 3$`\sigma `$. The second step is to select microlensing candidates as light curves with one bump and only one. The third step is to fit a high-amplification degenerate Paczynski curve to the mono-bumps. The amplification is then well approximated by $$A(t)\frac{1}{u(t)}\mathrm{with}u(t)=\sqrt{\left(\frac{t-t_0}{t_e}\right)^2+u_0^2},$$ (4) where the Einstein time $`t_e=R_E/v_{}`$ is the ratio of the Einstein radius to the transverse velocity of the lens. In Fig. 2 we show a typical light curve obtained with the AGAPE method. ## 4 Conclusion The achievement could allow us to obtain: a) the first large microlensing survey performed in the Northern Hemisphere for spiral-arm observations and, marginally, for the bulge, taking advantage of the geographic position and the new-generation optics and detectors at the Toppo telescope; b) a detailed survey of other galaxies besides our own (first of all M31); c) the capability of detecting planets (both Jupiter-like and Earth-like); d) participation in follow-up observations of microlensing events announced by programs like the Global Microlensing Alert Network (GMAN) or the PLANET collaboration; e) the possibility of using larger or space telescopes such as HST to resolve interesting pixels, obtaining more astrophysical information on the amplified objects. An on-line selection and a quasi-on-line analysis could be performed for the last point .
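The bump-finding rule of section 3 can be sketched directly from the thresholds stated in the text (baseline from a 10-point sliding average, 3 consecutive points above $`3\sigma `$ to open a bump, 2 consecutive points below to close it). The uniform error $`\sigma `$ and the array handling below are our simplifications, not part of the original pipeline.

```python
import numpy as np

def detect_bumps(flux, sigma):
    """Baseline = minimum of a 10-point sliding average; a bump starts
    when 3 consecutive points lie more than 3*sigma above the baseline
    and ends when 2 consecutive points fall back below that level."""
    sliding = np.convolve(flux, np.ones(10) / 10.0, mode='valid')
    baseline = sliding.min()
    above = flux > baseline + 3.0 * sigma
    bumps, start, n_hi, n_lo = [], None, 0, 0
    for i, up in enumerate(above):
        if start is None:
            n_hi = n_hi + 1 if up else 0
            if n_hi == 3:                      # 3 consecutive high points: open
                start, n_hi = i - 2, 0
        else:
            n_lo = 0 if up else n_lo + 1
            if n_lo == 2:                      # 2 consecutive low points: close
                bumps.append((start, i - 1))
                start, n_lo = None, 0
    return baseline, bumps
# mono-bump candidates are the light curves for which len(bumps) == 1
```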
# Entropy and Time ## I Introduction The second law of thermodynamics is usually stated as the inequality $`\mathrm{\Delta }S\geq 0`$ for an isolated system, where $`S`$ is the entropy. Implicit in thermodynamics is thus a direction of time determined by the evolution toward equilibrium of a macroscopic system with no external influences. The extent to which this notion emerges from statistical mechanics based on an underlying time-reversal-invariant dynamics is the topic of this paper. The point of view taken here is not controversial; it has been accepted since the work of Boltzmann was understood. Including the topic in this special issue may thus seem unnecessary. However, our impression is that many undergraduate (and even many graduate) courses do not cover this thought provoking topic adequately. The reason may be that the message can be lost in subtleties described by words such as Stosszahlansatz, Umkehreinwand, and Wiederkehreinwand, and by discussing the topic in the context of Boltzmann’s equation, Liouville’s theorem, and coarse graining. An alternative is to discuss simple concrete models. We will consider one that is easy to simulate and has a long history: the 1907 double-urn model of Paul and Tatiana Ehrenfest. Our treatment is based in part on Chapter 10 of Ref. VI. The outline of the paper is as follows. After a brief general discussion, the way in which a direction of time follows from the statistics of large numbers is illustrated using the Ehrenfest model (in its dog-flea version). The difference between time symmetric fluctuations and the time evolution of a macroscopically identifiable non-equilibrium initial condition is emphasized. Calculations are done for a single system and for an ensemble of systems, the latter being described by a Markovian equation. With the passage of time (no pun intended), the model has become more topical than its conceivers could have imagined. Here it is used to describe the approach to equilibrium of two-level quantum systems such as spins. Temperature is introduced into the model via a Metropolis algorithm, and the approach to equilibrium at constant temperature is discussed, including a population-inversion (negative temperature) initial condition. To our knowledge, the Ehrenfest model has not been used in this way, especially as regards the introduction of temperature. ## II Background From the point of view of thermal physics, the state of an isolated physical system with many degrees of freedom is specified by its energy and other macroscopic parameters such as volume and magnetization. The assumption of many degrees of freedom implies a dense spectrum of excitations, and thus a very large number of microscopic states in a small energy interval consistent with the given macroscopic parameters. The starting point of a statistical analysis of a mechanical system is the enumeration (allowed in principle by both quantum and classical mechanics) of these microscopic states. The fundamental postulate of equilibrium statistical mechanics is that each of them is equally probable in equilibrium; the logarithm of their number is the entropy associated with the thermodynamic equilibrium of the isolated system. We are interested in how equilibrium is reached. A reasonable, but incorrect, expectation is that if the microscopic states are not equally likely at some instant, the evolution will be toward a situation in which they are.
If the number of microstates explored by the system increased with time, its logarithm or entropy would also increase, giving a statistical underpinning to the rule that an increase of entropy characterizes spontaneous processes in isolated systems. The trouble with this too simple idea is that not every one of the microscopic configurations consistent with a given macroscopic non-equilibrium state tends, under the action of the laws of mechanics, toward equilibrium, although most of them do. As a result the strict inequality in the second law has to be replaced by a statement of overwhelming likelihood in statistical mechanics, thereby allowing the latter to be consistent with the time-symmetric equations of motion. A tiny loophole now opens, with the consequence that there is no longer a strict logical connection between the direction of time and the increase of entropy: one can never rule out the overwhelmingly unlikely possibility that a low entropy initial condition is a time-symmetric giant fluctuation caught midway. To close the loophole, we have to require that macroscopic deviations from equilibrium are due to externally imposed initial conditions. The word “overwhelming” is not being used lightly. To illustrate its meaning, consider the ratio of the number of microscopic configurations for a gas filling all or 99.99% of a container. If we treat the $`N`$ molecules of the gas as very weakly interacting, an estimate for this ratio is $`0.9999^{-N}`$, because each molecule has 0.01% fewer available states in the smaller volume. For a liter, $`N`$ is order $`10^{22}`$, so that the logarithm of the ratio is $`10^{18}`$. The reciprocal of the ratio is thus very small indeed. Yet, this unimaginably tiny number is the probability that a gas in equilibrium in the entire container would be found to be occupying 99.99% of its volume, and thus to have undergone a small but macroscopic entropy-reducing spontaneous fluctuation. ## III THE DOG-FLEA MODEL Treating the evolution of a reasonably realistic statistical system is technically difficult. Even for a weakly interacting gas, collisions cannot be ignored, because they are the mechanism that shuffles a particular molecule between its states of motion. The approach to equilibrium thus depends on the details of the motion, and is a less general phenomenon than equilibrium itself. In order not to lose the woods for the trees, it is useful to look for simple but illustrative examples. One such example is a collection of “two-level” systems, that is, systems described by two possible outcomes. As a physical realization of this model, one could consider a collection of weakly interacting quantum spin $`\frac{1}{2}`$ systems each of which has up and down states. A more whimsical illustration is based on the model proposed in Ref. VI: consider a system consisting of a subsystem of 50 fleas whose “states” are residence on dog A or dog B, which we shall call Anik and Burnside, sleeping side by side. To simulate molecular agitation, we suppose that the fleas jump back and forth between the dogs. Now we need something that plays the role of the rest of the system. Let us suppose that the fleas are each equipped with a number, and have been trained to jump when their number is called. The “environment” agitating the fleas, which is like a heat reservoir, will be something that calls out numbers at random, and our closed system will be the fleas and the reservoir.
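The agitation just described is easy to reproduce. The following is a minimal sketch (our illustration, not the paper's code) of a single run: 50 numbered fleas, all starting on Burnside, with a random number called at each step; only the macrostate, the head count on Anik, is recorded.

```python
import numpy as np

rng = np.random.default_rng(1907)           # seeded for reproducibility
n_fleas, n_steps = 50, 5000
on_anik = np.zeros(n_fleas, dtype=bool)     # start: Anik has no fleas at all

history = np.empty(n_steps, dtype=int)
for t in range(n_steps):
    flea = rng.integers(n_fleas)            # "call out" a random flea number
    on_anik[flea] = not on_anik[flea]       # that flea jumps to the other dog
    history[t] = on_anik.sum()              # record only the macrostate

print(history[:10])                                  # the early steady march
print(history[100:].mean(), history[100:].std())     # ~25 and ~3.5 in equilibrium
```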
To simulate the approach to equilibrium, it is necessary to start from a configuration that almost never occurs in the maximally random state of affairs. Suppose that in the beginning Anik has no fleas at all. We agitate the fleas by having a computer generate random numbers between 1 and 50 and transferring the flea with this number from one dog to the other. At every step we record the total number (between 0 and 50) of fleas on Anik, but not their labels. This way has the practical advantage that we do not have to keep track of the $`2^{50}`$ ways of assigning 50 labelled fleas to the two dogs. It also means that we are following only the “macrostate.” A typical sequence is shown in Figs. 1 and 2. In the first step it is certain that the number called will belong to a flea on Burnside. In the second step the probability of this happening again is 49/50. Thus, there initially seems to be a steady march towards an equal partition of the fleas between the dogs. The early time development is shown in Fig. 1. After about 50 steps we reach a situation where sometimes Anik and sometimes Burnside has more fleas. In this region we would expect that every one of the $`2^{50}`$ configurations mentioned above would be equally likely. This expectation translates into a binomial distribution corresponding to the repeatable random event of tossing 50 fair coins, namely $$P(m)=\frac{1}{2^{50}}\binom{50}{m},$$ (1) where $`P(m)`$ is the probability of the macrostate with $`m`$ fleas on Anik, and $`\binom{50}{m}`$ is the combinatorial coefficient. An examination of Fig. 2 confirms this expectation. The horizontal lines have been drawn to include two standard deviations on either side of the mean, which corresponds approximately to the $`95\%`$ range for the distribution Eq. (1). (The standard deviation is $`\sqrt{50}/2\approx 3.5`$.) Our eye tells us that, except for the initial transient, fluctuations outside this range are indeed rare. We can be more quantitative. Fig. 3 is a histogram of the relative durations of the possible outcomes, constructed from Fig. 2 with the first 100 steps omitted. Superimposed on the histogram is the binomial distribution. The agreement is very good. The dance of the fleas in Fig. 2 has thus very quickly forgotten its unusual starting point and become the endless jitterbug of “equilibrium,” in which an event as unlikely as a flea-less dog simply never happens again without outside intervention. Figure 2 also illustrates the role of motion reversal. The model is time reversal invariant because a string of random numbers in reverse order is just as random. After the first 100 or so steps, the evolution has no sense of time. If we were to expand the region near one of the reasonably large fluctuations away from the mean, we would find that there is no characteristic feature of the build-up preceding the maximum deviation to distinguish it from the time reverse of the decay following the maximum. There is also no conclusive argument to rule out the possibility that the start of the trace shown in Fig. 1 has captured a truly giant fluctuation midway. Of course, we know that the figure was not produced in this way. The point of the previous paragraph is worth reemphasizing. The time asymmetry expressed in the second law is not simply the consequence of applying time-symmetric microscopic dynamics to systems having many degrees of freedom. Time-reversal symmetry is indeed preserved in our system.
If at some instant the system is in a highly improbable state (that is, no fleas on Anik), it is overwhelmingly likely that it will be in a more evenly-distributed state at some later time. However, if the improbable state were due to a giant fluctuation, precisely the same argument could be made regarding the prior history of the system – it would then be overwhelmingly likely that at earlier times the system was also in a more evenly-distributed state. In this sense, there is perfect symmetry between past and future. The notion of a statistical “arrow of time” thus depends on the added ingredient of imposed initial conditions. When we see a system in a highly unlikely state, we justifiably assume that this state is the result of a prepared starting condition, and not of an overwhelmingly improbable fluctuation from equilibrium. As has been particularly emphasized by Peierls, this setting of initial conditions at some specified time breaks the symmetry between past and future. In fact it is virtually impossible to wait long enough for the initial configuration in Figs. 1 and 2 to occur as a fluctuation in equilibrium, where it has a probability of $`2^{-50}`$. To have a reasonable chance of witnessing such a fluctuation, we would have to allow a number of time steps approximately equal to the reciprocal of this probability — about $`10^{15}`$ — to elapse. Thus, to recover the unlikely configuration of a totally clean Anik by random shuffling of fleas between equally dirty dogs, even for this very small system of 50 fleas, we would need a plot roughly two hundred thousand million times as long as Fig. 2, which extends only for 5000 time steps. Because Fig. 2 is about 5 cm wide, the length of the required trace would be about ten million kilometers. In comparison, the distance to the moon is only about four hundred thousand kilometers. The law of large numbers is at work, here making an unlikely event overwhelmingly unlikely. Though not logically certain, it is roughly 99.999999999999999% probable that the time in Fig. 1 is running in the direction of increasing disorder. Ehrenfests’ dogs bring into focus the essential characteristics of time in statistical mechanics. (i) A starting point macroscopically distinct from equilibrium is overwhelmingly likely to evolve to greater disorder, that is, towards equilibrium. (ii) In equilibrium, fluctuations have no sense of time. (iii) Giant fluctuations from equilibrium to extremely unlikely states are extremely rare. (iv) A statistically determined direction of time follows from the assumption that a system in a highly unlikely ordered state has been so prepared by external influences. Even for the rather small system we considered, these uses of the words “overwhelmingly” and “extremely” are very conservative. The word entropy has not appeared in this section. As a matter of fact, there is more than one way to introduce that notion here, as will be seen in more detail in Section IV. The essential point can be made by noting that the states with $`n`$ fleas on Anik are “macrostates,” each allowing for $`\binom{50}{n}`$ assignments of distinct fleas or “microstates.” We could simply call the logarithm of the latter number the entropy, that is, $`S(n)=\mathrm{ln}\binom{50}{n}`$. The combinatorial coefficients have a maximum halfway, at $`n=25`$, and become smaller in either direction. In Fig. 4, the values of $`n`$ plotted in Fig. 2
have been converted to a plot of entropy versus time steps using this rule. As in Fig. 2, $`S`$ starts from zero, because $`S(0)=\mathrm{ln}\binom{50}{0}=\mathrm{ln}1=0`$, and has fluctuations. This entropy, sometimes called the Boltzmann entropy, can be associated with a single time trace such as in Fig. 2. Although it fluctuates in equilibrium, the fluctuations diminish as the size of the system increases. In a sufficiently large system the Boltzmann entropy increases steadily as equilibrium is approached. Note that we have ignored any contribution to the entropy of the closed system from the reservoir which is responsible for the hopping of the fleas. This assumption is justified here because energy has not entered into our considerations, making the model slightly artificial. It is probably best to think of the reservoir as having a very high temperature (in energy units) compared to the characteristic energies of the subsystem. As a result, heat exchanges with the subsystem occur with no change in the entropy of the reservoir, and only the flea entropy changes with time. The introduction of energy and temperature in Section V will lead to an interesting difference. ## IV GIBBS ENTROPY The dog-flea model is simple enough to allow the solution of several other interesting problems in time-dependent statistical mechanics. We first reexamine the assignment of entropy to our subsystem of fleas. The usual expression for the entropy in statistical mechanics is $$S=-\underset{i}{}P_i\mathrm{ln}P_i,$$ (2) where the sum is over microstates labeled by the index $`i`$ and $`P_i`$ is the probability of $`i`$. This entropy is associated with a distribution describing an ensemble of systems, whereas the Boltzmann entropy introduced earlier is defined for the macroscopic time development of a single system. If there are $`M`$ equally likely microstates, each of the $`P`$s would equal $`1/M`$, and Eq. (2) reduces to $`\mathrm{ln}M`$. The Boltzmann entropy has exactly this form if the macrostate with $`n`$ fleas has $`\binom{50}{n}`$ equally likely microstates. As we saw, the equilibrium Boltzmann entropy fluctuates for the subsystem plus reservoir. It is possible to assign a constant entropy to equilibrium. A system of 50 two-level systems at a temperature much higher than the level spacing is commonly assigned an entropy of $`50\mathrm{ln}2=34.657`$. The expression (2) gives this result if each of the $`2^{50}`$ microstates is taken to be equally probable. As we saw from Fig. 3, the fluctuations in Figs. 1 and 2 are an expression of equal likelihood of all these microstates. Even when the probabilities $`P(m)`$ of the macrostates corresponding to $`m`$ fleas on Anik are not given by Eq. (1), the probabilities of the equally likely $`\binom{50}{m}`$ microstates $`i(m)`$ associated with $`m`$ are $$P_{i(m)}=P(m)/\binom{50}{m}.$$ (3) If we substitute Eq. (3) into Eq. (2) and do the sum over $`i(m)`$, we obtain $$S=-\underset{m=0}{\overset{50}{}}P(m)\mathrm{ln}P(m)+\underset{m=0}{\overset{50}{}}P(m)\mathrm{ln}\binom{50}{m}.$$ (4) The second term on the right-hand side of Eq. (4) arises, as was just shown, from the fact that a macrostate having $`m`$ fleas on Anik has an additional contribution to the entropy coming from the equally likely microstates which make up the macrostate. If the expression (1) is substituted into Eq. (4),
the result is the previously mentioned $`50\mathrm{ln}2`$. We shall call this new entropy the Gibbs entropy, because it is analogous to the entropy in Gibbs’s canonical ensemble. To associate a Gibbs entropy with the early, and consequently non-equilibrium, part of the time development we have been discussing, we need more information than a single time trace provides. Because this entropy is a property of a distribution, we need to assign probabilities to every time step of the process, which means that we have to contemplate an ensemble of subsystems and define probabilities in terms of occurrences in the ensemble. One way to proceed would be to create a very large number of traces such as the one in Fig. 1, all of them starting with the same configuration. Because the sequence of random numbers would be different in each run, these traces would differ from one another. At any given time, we could calculate a histogram like Fig. 3. Obtaining reliable distributions by this method would require a very large number of runs. Fortunately, there is a much simpler way of implementing the idea, which does not require a random number generator. Let us calculate a distribution function $`P(m)`$, with $`m`$ running from 0 to 50, which changes from step to step, and reflects the random transfer of fleas from dog to dog. At the start of the process we know with certainty that there are no fleas on Anik. In the language of probability, the distribution at $`t=0`$ is $`P_0(0)=1,P_0(1)=P_0(2)=\mathrm{}=P_0(50)=0`$. (To indicate the time we need another label, which we shall write as a subscript.) Now we argue that the probability distribution at time $`t`$ determines the probability distribution at time $`t+1`$. The assumption that the fleas are being called at random implies that: $$P_{t+1}(m)=\frac{m+1}{50}P_t(m+1)+\frac{50-(m-1)}{50}P_t(m-1).$$ (5) Equation (5) can be understood by saying it in words. Anik can have $`m`$ fleas at time $`t+1`$ either because she had $`m+1`$ at time $`t`$ and one jumped off, which has a probability proportional to $`m+1`$, or because she had $`m-1`$ and one jumped on, which has a probability proportional to the $`50-(m-1)=51-m`$ fleas that were on Burnside at time $`t`$. We can write a program to develop the distribution corresponding to the initial certainty forward in time using Eq. (5). However, there is one artificiality in this time evolution: at odd (even) times only odd (even) numbers of fleas can be on Anik. This artificiality can be remedied by averaging Eq. (5) over two forward steps. The resulting evolution is shown in Fig. 5. The three-dimensional plot in Fig. 5 is obtained by stacking together the distributions at successive times. It very clearly shows the initial certainty evolving to a distribution — which, not surprisingly, can be shown to be the binomial in Eq. (1) — with the outer regions in the range of possibilities being extremely unlikely. At each step we can calculate an entropy using Eq. (4). The result, shown in Fig. 6, shows that the entropy rises steadily from zero to $`50\mathrm{ln}2`$. By imagining that the system can be restarted at will, we have as in Section III insisted on the possibility of the external imposition of an initial condition that is overwhelmingly unlikely in equilibrium. Time-symmetric dynamics applied to such an initial condition is overwhelmingly likely to evolve towards equilibrium.
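For readers who want to reproduce Figs. 5 and 6, here is a minimal sketch of the ensemble evolution: Eq. (5) is applied as an update of $`P_t(m)`$, and the Gibbs entropy of Eq. (4) is evaluated at each step. The two-step average below is one reading of the parity fix mentioned above, and the 200-step horizon is arbitrary; this is our illustration, not the authors' program.

```python
import numpy as np
from math import lgamma

NF = 50
m = np.arange(NF + 1)
log_binom = np.array([lgamma(NF + 1) - lgamma(k + 1) - lgamma(NF - k + 1) for k in m])

def step(P):
    """One application of Eq. (5)."""
    Q = np.zeros_like(P)
    Q[:-1] += (m[1:] / NF) * P[1:]            # she had m+1 fleas and one jumped off
    Q[1:]  += ((NF - m[:-1]) / NF) * P[:-1]   # she had m-1 and one of 51-m jumped on
    return Q

def gibbs_entropy(P):
    """Eq. (4): -sum P ln P + sum P ln C(50, m)."""
    nz = P > 0
    return float(-(P[nz] * np.log(P[nz])).sum() + (P * log_binom).sum())

P = np.zeros(NF + 1); P[0] = 1.0              # t = 0: certainly no fleas on Anik
for t in range(200):
    P = 0.5 * (step(P) + step(step(P)))       # two-step average removes the parity artifact
print(gibbs_entropy(P), NF * np.log(2))       # entropy rises toward 50 ln 2 = 34.657
```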
If it were possible to repeatedly arrange for such a state to be a final condition, we could use a backwards-in-time evolution equation relating the distribution function at time $`t-1`$ to the distribution function at time $`t`$. This equation would be identical to Eq. (5), except for the replacement of $`t+1`$ by $`t-1`$: $$P_{t-1}(m)=\frac{m+1}{50}P_t(m+1)+\frac{51-m}{50}P_t(m-1).$$ (6) Equation (6) would predict the opposite of what is shown in Fig. 5 — as one moved back in time through the history of the system, the entropy would increase monotonically. But such repeated occurrences of low entropy states as fluctuations in equilibrium are unimaginable. Such states do not typically arise in this fashion, nor can we arrange for them to do so. Time asymmetry in this context thus originates through our use of Eq. (5) and rejection of Eq. (6). ## V ENERGY AND TEMPERATURE Up to now, the word temperature has only appeared at the end of Section III where it was argued that the standard Ehrenfest model describes equilibration at high temperature. We have confirmed this argument by showing that the entropy evolves to the situation in which all microscopic configurations are equally likely. It is, however, not difficult to introduce temperature in this context. Suppose that Anik is cleaner than Burnside, providing a less friendly environment for fleas. We may model this environment by assuming an energy cost $`ϵ`$ to be paid by a flea jumping from Burnside to Anik. Let the fleas be at an effective temperature $`T`$ (in energy units), and define $`\mathrm{\Delta }=ϵ/T`$. We argue that Eq. (5) should be changed to $$P_{t+1}(m)=\frac{m+1}{50}P_t(m+1)+\frac{50-m}{50}\left[1-e^{-\mathrm{\Delta }}\right]P_t(m)+\frac{50-(m-1)}{50}e^{-\mathrm{\Delta }}P_t(m-1).$$ (7) For the new conditions, we expect that in equilibrium any particular flea will spend more time on (dirty) Burnside than on (clean) Anik. This expectation is implemented in Eq. (7), which implies that when the number of one of the fleas on Anik is called, it jumps to Burnside with probability unity (term 1), but if one of the fleas on Burnside is called, it either stays put with probability $`1-e^{-\mathrm{\Delta }}`$ (term 2), or jumps with probability $`e^{-\mathrm{\Delta }}`$ (term 3). Making the jump-probability from Burnside to Anik smaller than the reverse process by the factor $`e^{-\mathrm{\Delta }}`$ does in fact achieve equilibrium in the steady state. In equilibrium, at temperature $`T`$, the probability $`p`$ of a flea being on Anik, and the probability $`1-p`$ of one being on Burnside should be given by the Gibbs distribution $$p=\frac{e^{-\mathrm{\Delta }}}{1+e^{-\mathrm{\Delta }}}\text{and}1-p=\frac{1}{1+e^{-\mathrm{\Delta }}}.$$ (8) For 50 fleas the equilibrium probability distribution should be the binomial corresponding to 50 tosses of an unfair coin with outcome probabilities $`p`$ and $`1-p`$: $$P_{\mathrm{eq}}(m)=\binom{50}{m}p^m(1-p)^{50-m}.$$ (9) It can be verified that Eqs. (8) and (9) are a stationary solution of Eq. (7), namely that substituting this form on the right reproduces it on the left. In short, the effective temperature of the fleas determines how many are willing to put up with Anik’s cleanliness. In the high temperature limit, $`\mathrm{\Delta }1`$, Eq. (5) is recovered. At very low temperatures, $`\mathrm{\Delta }1`$ and few fleas leave the snug comfort of Burnside. Several interesting and informative computations can be performed with the evolution equation Eq. (7).
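The stationarity claim is easy to verify numerically. The following sketch (ours, with an arbitrary choice $`\mathrm{\Delta }=2`$) applies Eq. (7) once to the Gibbs-weighted binomial of Eqs. (8)–(9) and confirms that it is reproduced.

```python
import numpy as np
from math import comb

NF = 50
m = np.arange(NF + 1)

def step_T(P, Delta):
    """One application of Eq. (7); Delta = epsilon/T."""
    w = np.exp(-Delta)
    Q = (1 - m / NF) * (1 - w) * P                 # term 2: called Burnside flea stays put
    Q[:-1] += (m[1:] / NF) * P[1:]                 # term 1: called Anik flea jumps off
    Q[1:]  += ((NF - m[:-1]) / NF) * w * P[:-1]    # term 3: called Burnside flea jumps on
    return Q

Delta = 2.0
p = np.exp(-Delta) / (1 + np.exp(-Delta))                              # Eq. (8)
P_eq = np.array([comb(NF, int(k)) * p**k * (1 - p)**(NF - k) for k in m])  # Eq. (9)
print(np.abs(step_T(P_eq, Delta) - P_eq).max())    # ~1e-17: (8)-(9) are stationary
```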
We will focus on one which we find particularly illuminating. Shown in Fig. 7 are three entropy versus time traces, each at a different temperature but with the same initial condition of all 50 fleas on the “clean” (energetically unfavorable) dog Anik. The curves have been generated by computing the Gibbs entropy (4) at successive time steps. We observe that at low temperatures, the entropy of the system does not increase monotonically in time — after a certain critical time, it actually starts to decrease. Have we managed to violate the second law? A little thought shows that there is no violation. The second law requires only that the total entropy of the dog-flea system plus reservoir increase with time. The entropy of the reservoir is insensitive to changes in the configuration of the fleas only at temperatures much greater than the energy cost $`ϵ`$. In general, changes in entropy and energy of the reservoir at temperature $`T`$ are related by $$dS_{\mathrm{res}}=\frac{dU_{\mathrm{res}}}{T}=-\frac{dU}{T},$$ (10) where the last equality is the result of conservation of energy, and $`U`$ is $`ϵ`$ times the number of fleas on Anik. (Note that the energy transfer is at fixed $`ϵ`$, which implies that no work is done.) Using Eq. (10), an increase of the total entropy translates as usual into a decrease of the Helmholtz free energy $`F`$ of the flea subsystem, defined by $`F=U-TS`$, where $`T`$ is the temperature of the reservoir and $`S`$ is the entropy defined in Eq. (4). In Fig. 8, we plot the time evolution of the free energy for the same initial conditions and temperatures used in Fig. 7. We see that in all cases, the free energy decreases monotonically with time. Note that the initial condition for Figs. 7 and 8 (all fleas on Anik) corresponds to having a negative temperature for the dog-flea system. Consequently, reducing the internal energy (moving fleas from Anik to Burnside) initially increases the entropy above its equilibrium low temperature value. ## VI CONCLUSION We have demonstrated that an understanding of time in statistical mechanics can be obtained by carefully examining the simple Ehrenfest dog-flea model. The model has the virtues of offering qualitative insights and yielding easily to quantitative analysis. Our study has emphasized the manner in which time reversal invariance is maintained in the model, and the role of initial conditions in establishing a direction of time. We have also shown that the model can be extended to finite temperatures, where it may be used to explore interesting issues. Finally, we list some suggestions for further reading. Excellent elementary discussions are to be found in Ref. 7. The subject is also treated in many textbooks accessible to advanced undergraduates. Whereas the topic is often underplayed in courses on thermal physics, the opposite may be true in specialized books. Several thought provoking articles as well as discussions of the deep implications of the ideas presented here are to be found in Ref. VI. Added Note: Our colleague Ben Widom remarks that there are ‘purists’—among whom he does not include himself—who think that the Ehrenfest model is not a first-principles explanation of irreversibility because there is a ‘stochastic element’ in the model, which makes it ‘not deterministic, as real dynamics is . . .’ To any such purists among our readers, we point out that our implementation of the model uses computer generated pseudo-random numbers which are completely deterministic.
More generally, quite simple mechanical systems can generate pseudo-random numbers, so that the Ehrenfest heat reservoir can be thought of in a completely mechanical way. (See Chapter 11 of Ref. 3 for an elementary introduction to deterministic chaos.) ACKNOWLEDGEMENTS This work has been partially supported by the National Science Foundation under grant DMR-9805613.
# Exotic Structures On Magnetic Multilayers ## Abstract To characterize the possible magnetic structures created on magnetic multilayers a model has been formulated and studied. The interlayer inhomogeneous structures found indicate either (i) a regular periodic change in the magnetization, (ii) a quasiperiodic change, or (iii) spatially chaotic glass states. The magnetic structures created depend mainly on the ratio of the magnetic anisotropy constant to the exchange constant. With the increase of this ratio the periodic structures first transform into the quasiperiodic and then into the chaotic glass states. The same tendency arises with the depolarization of the magnetic moments of the first layer deposited on the substrate. ¹ Department of Physics, Loughborough University, LE11 3TU, UK ² Landau Institute, Moscow, Russia PACS: 75.70.-i, 75.40.Mg, 75.10.Nr Keywords: magnetic multilayers, spin density waves, spin vortices Correspondence to: Professor F Kusmartsev¹, Tel: +44 (0) 1509 223316, Fax: +44 (0) 1509 223986; email: F.KUSMARTSEV@lboro.ac.uk Modern growth techniques, such as molecular beam epitaxy (MBE) or laser ablation, allow magnetic mono- or multilayers to be built up. The magnetic layers produced from $`Fe`$, $`Ni`$ or $`Co`$ may be separated by non-magnetic layers, produced, for example, from $`Cu`$. In these structures the magnetic films are grown one layer at a time. In many cases the single layer appears as a single domain, i.e. all magnetic moments having a single orientation . When a second layer is grown on top of the first layer the orientation of magnetic moments in the second layer is not necessarily the same as in the first layer. Following addition of a third layer, the magnetic moments of this layer may have yet another orientation. The orientation of the magnetic moments is usually dictated by a competition between non-uniformity of exchange energy and anisotropy energy. Therefore, the questions arise: what kind of magnetic structures (analogous to the Bloch domain wall in bulk magnetic samples) may be created by the interaction between the monolayers in the film; how many types of structures can be created; what are the energy costs to create such a structure? The estimation of such energies and their hierarchy will indicate the possible temperatures and other conditions of the substrate needed for the creation of such structures. To describe the magnetic structure in a (uniaxially symmetric) magnetic multilayer the exchange energy and the anisotropy energy have been taken into consideration. The exchange energy, $`E_{ex}`$, favours the alignment of the magnetic moments of atoms and the magnetic anisotropy energy, $`E_{an}`$, promotes the alignment of the magnetic moments along the ‘easy’ axis or ‘easy’ plane depending on what kind of (uniaxial) magnetic anisotropy is dominant in the system. A multilayer film has different exchange constants depending on whether the exchange is developed in-plane or inter-plane. The nonmagnetic layers separating the magnetic layers may also contribute to the exchange constants between magnetic layers. As exchange coupling is a short-range effect the exchange constant inside each layer is much larger than the constant associated with the exchange interaction between the layers. Increasing the space between layers decreases the inter-layer exchange coupling but not the constant of magnetic anisotropy energy which is usually related to a long-range spin-spin interaction.
There are some observations in $`CoCu`$ films that with an increase of the interlayer spacing the exchange constant strongly decreases while the constant in the anisotropy term fluctuates slightly . In general the orientation of the magnetic moments depends on two angles $`(\theta ,\varphi )`$ associated with the in-plane and inter-plane rotation of the moments. However, as the in-plane exchange interaction is much stronger than the inter-plane exchange interaction, it is easier to cause a defect in the alignment of the magnetic moments of different layers than to create a defect inside a single plane, and so the in-layer moments may be considered as a single domain whereas the interlayer moments should not be. Therefore, as a first step, only inter-plane inhomogeneous magnetic structures are considered, assuming that all magnetic moments in the same layer align homogeneously. With this assumption the relevant terms of the Hamiltonian associated with the interlayer magnetic structure are an interlayer exchange energy and the anisotropy energy. The competition between these two terms determines the interlayer structure of the magnetic multilayer film. The Euler-Lagrange equation for this model $$-x_{n-1}+2x_n-x_{n+1}+\beta \mathrm{sin}x_n=0$$ (1) is a discrete version of the Sine-Gordon equation, where $`\beta `$ is the ratio of the constant of anisotropy energy to the constant of exchange energy and $`x_n`$ is the orientation of the magnetic moments of the $`n^{th}`$ layer from the ‘easy’ axis. To solve this equation we apply the methods of Chaotic Dynamics. With such an approach, instead of solving the Sine-Gordon equation directly, we derive a 2D discrete map and investigate the trajectories of this map. The simple ferromagnetic alignment of the moments is a fixed point of (1) and does not depend on the value of $`\beta `$. However, there are always fluctuations associated with a finite number of layers that destroy any such alignment, making it impossible for this structure to exist. We find three characteristically different types of trajectories which may be classified as periodic, quasiperiodic and chaotic. These trajectories will correspond to the creation of three types of magnetic structures: periodic, quasiperiodic and chaotic, respectively. It is found that the structures created depend both on the orientation of the magnetic moments in the first layer, $`x_0`$, and on the value of $`\beta `$. Two types of periodic magnetic structures are created at small values of $`\beta `$. The first is spin density waves frozen in space. In the second type of periodic structure the orientation of the magnetic moments performs a rotation as we move up through the layers. These rotations may have either a positive or a negative sign. In analogy with vortices in superfluid systems one may refer to this structure as a periodic structure of spin vortices. With the increase of the parameter $`\beta `$ from zero there initially arise frozen spin density waves, the period of which decreases as the value of $`\beta `$ increases. Then, at some value of $`\beta `$, there appear spin vortices which are periodically separated throughout the multilayer film, creating a lattice; a sketch of generating such layer profiles is given below.
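Equation (1) is equivalent to iterating the 2D map $`x_{n+1}=2x_n-x_{n-1}+\beta \mathrm{sin}x_n`$ layer by layer. The following minimal sketch (our illustration; the tilt of the first layers and the value of $`\beta `$ are arbitrary) generates a layer profile of the kind discussed here.

```python
import numpy as np

def layer_map(x0, x1, beta, n_layers):
    """Iterate x_{n+1} = 2*x_n - x_{n-1} + beta*sin(x_n), the 2D map
    equivalent to Eq. (1): the moment orientation layer by layer."""
    xs = [x0, x1]
    for _ in range(n_layers - 2):
        xs.append(2.0 * xs[-1] - xs[-2] + beta * np.sin(xs[-1]))
    return np.array(xs)

# a tiny tilt of the first layers at small beta: slow winding of the moments
profile = layer_map(1e-5, 2e-5, beta=0.05, n_layers=200)
print(profile[::40])   # orientation sampled every 40 layers
```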
These spin vortices (see Fig 1) occur over a relatively small number of layers while their separation is very large. The first 35 layers have approximately the same orientation and are nearly perpendicular to the substrate, $`x_010^{-5}`$. This region is schematically indicated by the three rows of disks immediately above the substrate in Fig 1. The deviation from the vertical axis progressively increases for the next 20 layers of the multilayer and spin rotation occurs. After this rotation there are about seventy-five layers with approximately vertically oriented magnetic moments and then the next spin vortex occurs. This structure is periodically repeated. With further increase of the parameter $`\beta `$ the distance between the spin vortices decreases and also begins to fluctuate. When this distance becomes equal to the size of the spin vortex the quasiperiodicity is broken and some exotic, chaotic structures arise. In such structures, together with spin vortices, there are also incomplete spin rotations. Such structures arise only at large values of $`\beta `$ and may be equivalent to spin glasses arising in bulk magnetic samples. In conclusion, we have found that in thick magnetic films associated with magnetic multilayers there may arise spin density waves, spin vortex lattices and possibly chaotic magnetic structures. Figure Caption Fig 1 The cross section of the magnetic multilayer film displaying a spin vortex arising in the 35th - 51st layers of a quasiperiodic magnetic structure with an approximate period of 90 magnetic layers.
## 1 Introduction The two-photon Lie algebra $`h_6`$ is generated by the operators $`\{N,A_+,A_-,B_+,B_-,M\}`$ endowed with the following commutation rules : $$\begin{array}{ccc}[N,A_+]=A_+& [N,A_-]=-A_-& [A_-,A_+]=M\\ {}[N,B_+]=2B_+& [N,B_-]=-2B_-& [B_-,B_+]=4N+2M\\ {}[A_+,B_-]=-2A_-& [A_+,B_+]=0& [M,\cdot ]=0\\ {}[A_-,B_+]=2A_+& [A_-,B_-]=0.& \end{array}$$ (1) The Lie algebra $`h_6`$ contains several remarkable Lie subalgebras: the Heisenberg–Weyl algebra $`h_3`$ spanned by $`\{N,A_+,A_-\}`$, the harmonic oscillator algebra $`h_4`$ with generators $`\{N,A_+,A_-,M\}`$, and the $`gl(2)`$ algebra generated by $`\{N,B_+,B_-,M\}`$. Note that $`gl(2)`$ is isomorphic to a trivially extended $`sl(2,\mathbb{R})`$ algebra (the central extension $`M`$ can be absorbed by redefining $`N\rightarrow N+M/2`$). Hence we have the following embeddings $$h_3h_4h_6\mathrm{\hspace{1em}}sl(2,\mathbb{R})gl(2)h_6.$$ (2) Representations of the two-photon algebra can be used to generate a large zoo of squeezed and coherent states for (single mode) one- and two-photon processes which have been analysed in . In particular, if the generators $`\widehat{a}_-`$, $`\widehat{a}_+`$ close a boson algebra $$[\widehat{a}_-,\widehat{a}_+]=1,$$ (3) then a one-boson representation of $`h_6`$ reads $$\begin{array}{ccc}N=\widehat{a}_+\widehat{a}_-& A_+=\widehat{a}_+& A_-=\widehat{a}_-\\ M=1& B_+=\widehat{a}_+^2& B_-=\widehat{a}_-^2.\end{array}$$ (4) This realization shows that one-photon processes are algebraically encoded within the subalgebra $`h_4`$, while $`gl(2)`$ contains the information concerning two-photon dynamics. When the operators $`\widehat{a}_-`$, $`\widehat{a}_+`$ act in the usual way on the number states Hilbert space spanned by $`\{|m\mathrm{}\}_{m=0}^{\mathrm{}}`$, i.e., $$\widehat{a}_-|m\mathrm{}=\sqrt{m}|m-1\mathrm{}\mathrm{\hspace{1em}}\widehat{a}_+|m\mathrm{}=\sqrt{m+1}|m+1\mathrm{},$$ (5) the action of $`h_6`$ on these states becomes $$\begin{array}{cc}N|m\mathrm{}=m|m\mathrm{}& M|m\mathrm{}=|m\mathrm{}\\ A_+|m\mathrm{}=\sqrt{m+1}|m+1\mathrm{}& B_+|m\mathrm{}=\sqrt{(m+1)(m+2)}|m+2\mathrm{}\\ A_-|m\mathrm{}=\sqrt{m}|m-1\mathrm{}& B_-|m\mathrm{}=\sqrt{m(m-1)}|m-2\mathrm{}.\end{array}$$ (6) The one-boson realization (4) can be translated into a Fock–Bargmann representation by setting $`\widehat{a}_+\rightarrow \alpha `$ and $`\widehat{a}_-\rightarrow \frac{d}{d\alpha }`$. Thus the $`h_6`$ generators act in the Hilbert space of entire analytic functions $`f(\alpha )`$ as linear differential operators: $$N=\alpha {\displaystyle \frac{d}{d\alpha }}\mathrm{\hspace{1em}}A_+=\alpha \mathrm{\hspace{1em}}A_-={\displaystyle \frac{d}{d\alpha }}\mathrm{\hspace{1em}}M=1\mathrm{\hspace{1em}}B_+=\alpha ^2\mathrm{\hspace{1em}}B_-={\displaystyle \frac{d^2}{d\alpha ^2}}.$$ (7) The two-photon algebra eigenstates are given by the analytic eigenfunctions that fulfil $$(\beta _1N+\beta _2B_-+\beta _3B_++\beta _4A_-+\beta _5A_+)f(\alpha )=\lambda f(\alpha ).$$ (8) In the Fock–Bargmann representation (7), the following differential equation is deduced from (8): $$\beta _2\frac{d^2f}{d\alpha ^2}+(\beta _1\alpha +\beta _4)\frac{df}{d\alpha }+(\beta _3\alpha ^2+\beta _5\alpha -\lambda )f=0$$ (9) where $`\beta _i`$ are arbitrary complex coefficients and $`\lambda `$ is a complex eigenvalue. The solutions of this equation (provided a suitable normalization is imposed) give rise to the two-photon coherent/squeezed states . One- and two-photon coherent and squeezed states corresponding to the subalgebras $`h_4`$ and $`gl(2)`$ can be derived from equation (9) by setting $`\beta _2=\beta _3=0`$ and $`\beta _4=\beta _5=0`$, respectively.
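The relations (1) and the one-boson realization (4) can be checked numerically with truncated matrices in the number basis; this sketch is our illustration, and the brackets close exactly only away from the truncation boundary.

```python
import numpy as np

D = 12                                     # truncated number-state space |0>,...,|11>
n = np.arange(D)
ap = np.diag(np.sqrt(n[1:]), -1)           # a_+ : |m> -> sqrt(m+1)|m+1>
am = np.diag(np.sqrt(n[1:]), +1)           # a_- : |m> -> sqrt(m)|m-1>

# one-boson realization (4)
N, Ap, Am, M = ap @ am, ap, am, np.eye(D)
Bp, Bm = ap @ ap, am @ am

def comm(X, Y):
    return X @ Y - Y @ X

# e.g. [B_-, B_+] = 4N + 2M on the interior of the truncated space
err = comm(Bm, Bp) - (4 * N + 2 * M)
print(np.abs(err[:D - 2, :D - 2]).max())   # ~0 away from the cutoff
```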
The prominent role that the two-photon Lie algebra plays in relation with squeezed and coherent states motivates the extension of the Lie bialgebra classifications already performed for its subalgebras $`h_3`$ , $`h_4`$ and $`gl(2)`$ , since the two-photon bialgebras would constitute the underlying structures of any further quantum deformation whose representations could be physically interesting in the field of quantum optics. Thus in the next section we present such a classification for the two-photon bialgebras. The remaining sections are devoted to show how quantum two-photon deformations provide a starting point in the analysis of ‘deformed’ states of light. ## 2 The two-photon Lie bialgebras The essential point in this contribution is the fact that any quantum deformation of a given Lie algebra can be characterized (and sometimes obtained) through the associated Lie bialgebra. Let us first recall that a Lie bialgebra $`(g,\delta )`$ is a Lie algebra $`g`$ endowed with a linear map $`\delta :ggg`$ called the cocommutator that fulfils two conditions : i) $`\delta `$ is a 1-cocycle, i.e., $$\delta ([X,Y])=[\delta (X),\mathrm{\hspace{0.17em}}1Y+Y1]+[1X+X1,\delta (Y)]\mathrm{\hspace{1em}}\mathrm{}X,Yg.$$ (10) ii) The dual map $`\delta ^{}:g^{}g^{}g^{}`$ is a Lie bracket on $`g^{}`$. A Lie bialgebra $`(g,\delta )`$ is called a coboundary Lie bialgebra if there exists an element $`rgg`$ called the classical $`r`$-matrix such that $$\delta (X)=[1X+X1,r]\mathrm{\hspace{1em}}\mathrm{}Xg.$$ (11) Otherwise the Lie bialgebra is a non-coboundary one. There are two types of coboundary Lie bialgebras $`(g,\delta (r))`$: i) Non-standard (or triangular): The $`r`$-matrix is a skewsymmetric solution of the classical Yang–Baxter equation (YBE): $$[[r,r]]=0,$$ (12) where $`[[r,r]]`$ is the Schouten bracket defined by $$[[r,r]]:=[r_{12},r_{13}]+[r_{12},r_{23}]+[r_{13},r_{23}].$$ (13) If $`r=r^{ij}X_iX_j`$, we have denoted $`r_{12}=r^{ij}X_iX_j1`$, $`r_{13}=r^{ij}X_i1X_j`$ and $`r_{23}=r^{ij}1X_iX_j`$. ii) Standard (or quasitriangular): The $`r`$-matrix is a skewsymmetric solution of the modified classical YBE: $$[X11+1X1+11X,[[r,r]]]=0\mathrm{\hspace{1em}}\mathrm{}Xg.$$ (14) ### 2.1 The general solution Now we proceed to introduce all the Lie bialgebras associated to $`h_6`$. Recently a classification of all Schrödinger bialgebras has been obtained showing that all of them have a coboundary character. Therefore we can make use of the isomorphism between the Schrödinger and two-photon algebras in order to ‘translate’ the results of the former in terms of the latter.
The most general two-photon classical $`r`$-matrix, $`rh_6h_6`$, depends on 15 (real) coefficients: $$\begin{array}{l}r=a_1NA_++a_2NB_++a_3A_+M+a_4B_+M+a_5A_+B_++a_6A_+B_-\\ \mathrm{\hspace{1em}}+b_1NA_-+b_2NB_-+b_3A_-M+b_4B_-M+b_5A_-B_-+b_6A_-B_+\\ \mathrm{\hspace{1em}}+c_1NM+c_2A_+A_-+c_3B_+B_-\end{array}$$ (19) where all the products of generators are wedge (skewsymmetric) products. The coefficients are subjected to 19 equations that we group into three sets: $$\begin{array}{c}2a_6^2-a_6b_1+3a_1b_5+2b_5b_6=0\\ a_2a_3-2a_1a_4+2a_4b_6-3a_5c_1-a_5c_2-2a_5c_3=0\\ a_1a_2-2a_2b_6-4a_5c_3=0\\ a_5b_1-a_1b_6+2a_2c_1+2a_2c_3+4a_4c_3=0\\ 2a_2a_6+4a_4a_6-2a_4b_1-2a_5b_2+2a_2b_3-4a_5b_4+a_1c_1+a_1c_2=0\\ 3a_1b_2+2a_2b_5+4a_6c_3-2b_1c_3=0\\ a_3b_2+2a_1b_4+2a_4b_5+a_6c_1-a_6c_2-2a_6c_3-2b_3c_3=0\\ 3a_2b_5+b_2b_6+2a_6c_3=0\end{array}$$ (20) $$\begin{array}{c}2b_6^2-b_6a_1+3b_1a_5+2a_5a_6=0\\ b_2b_3-2b_1b_4+2b_4a_6-3b_5c_1+b_5c_2+2b_5c_3=0\\ b_1b_2-2b_2a_6+4b_5c_3=0\\ b_5a_1-b_1a_6+2b_2c_1-2b_2c_3-4b_4c_3=0\\ 2b_2b_6+4b_4b_6-2b_4a_1-2b_5a_2+2b_2a_3-4b_5a_4+b_1c_1-b_1c_2=0\\ 3b_1a_2+2b_2a_5-4b_6c_3+2a_1c_3=0\\ b_3a_2+2b_1a_4+2b_4a_5+b_6c_1+b_6c_2+2b_6c_3+2a_3c_3=0\\ 3b_2a_5+a_2a_6-2b_6c_3=0\end{array}$$ (21) $$\begin{array}{c}a_2b_2+c_3^2=0\\ 2a_2b_4+2a_4b_2-a_5b_5+a_6b_6-2c_3^2=0\\ a_1b_1+a_1a_6+b_1b_6+2a_5b_5-2a_6b_6=0.\end{array}$$ (22)
As all the two-photon Lie bialgebras are coboundary ones, that is, they come from the classical $`r`$-matrix (19), the corresponding general cocommutator can be derived from (11): $$\begin{array}{c}\delta (N)=a_1NA_++2a_2NB_++a_3A_+M+2a_4B_+M+3a_5A_+B_+\hfill \\ b_1NA_{}2b_2NB_{}b_3A_{}M2b_4B_{}M3b_5A_{}B_{}\hfill \\ a_6A_+B_{}+b_6A_{}B_+\hfill \\ \delta (A_+)=(2a_6+b_1)A_{}A_++a_2B_+A_++b_2(B_{}A_++2A_{}N)\hfill \\ b_1NM2b_4A_{}M+b_5B_{}M+b_6B_+M\hfill \\ (c_1+c_2)A_+M+2c_3A_{}B_+\hfill \\ \delta (A_{})=(2b_6+a_1)A_+A_{}b_2B_{}A_{}a_2(B_+A_{}+2A_+N)\hfill \\ +a_1NM+2a_4A_+Ma_5B_+Ma_6B_{}M\hfill \\ +(c_1c_2)A_{}M+2c_3A_+B_{}\hfill \\ \delta (B_+)=4c_3NB_++2(a_1b_6)A_+B_++2b_1A_{}B_++2b_2B_{}B_+\hfill \\ +2(2a_6b_1)NA_++2b_5(2NA_{}A_+B_{}A_{}M)\hfill \\ 2(b_2+2b_4)NM2(a_6+b_3)A_+M2(c_1+c_3)B_+M\hfill \\ \delta (B_{})=4c_3NB_{}2(b_1a_6)A_{}B_{}2a_1A_+B_{}2a_2B_+B_{}\hfill \\ 2(2b_6a_1)NA_{}2a_5(2NA_+A_{}B_+A_+M)\hfill \\ +2(a_2+2a_4)NM+2(b_6+a_3)A_{}M+2(c_1c_3)B_{}M\hfill \\ \delta (M)=0.\hfill \end{array}$$ The bialgebra automorphism defined by the maps (25) and (27) also interchanges the cocommutators $`\delta (A_+)\delta (A_{})`$ and $`\delta (B_+)\delta (B_{})`$ leaving $`\delta (N)`$ and $`\delta (M)`$ invariant. ### 2.2 The two-photon Lie bialgebras with two primitive generators We have just shown that in all two-photon bialgebras the central generator $`M`$ is always primitive, that is, its cocommutator vanishes. In general, primitive generators determine the physical properties of the corresponding quantum deformations. Therefore we study now those particular two-photon bialgebras with one additional primitive generator $`X`$ (besides $`M`$). Furthermore, the restrictions implied by the condition $`\delta (X)=0`$ rather simplify the equations (20)–(22). Due to the equivalence $`+`$ defined by the maps (25) and (27) it suffices to restrict our study to three types of bialgebras: those with either $`N`$, $`A_+`$ or $`B_+`$ primitive. $``$ Type I: $`N`$ primitive. The condition $`\delta (N)=0`$ leaves three free parameters $`c_1`$, $`c_2`$ and $`c_3`$, all others being equal to zero. The equations (20)–(22) imply that $`c_3=0`$. The Schouten bracket reduces to $`[[r,r]]=c_2^2A_+A_{}M`$, then we have a standard subfamily with two-parameters $`\{c_1,c_20\}`$ and a non-standard subfamily with $`c_1`$ as the only free parameter; they read $`\text{Standard subfamily:}c_1,c_20`$ $`r=c_1NM+c_2A_+A_{}`$ $`\delta (N)=0\delta (M)=0`$ (28) $`\delta (A_+)=(c_1+c_2)A_+M\delta (A_{})=(c_1c_2)A_{}M`$ $`\delta (B_+)=2c_1B_+M\delta (B_{})=2c_1B_{}M.`$ $`\text{Non-standard subfamily:}c_1`$ $`r=c_1NM`$ $`\delta (N)=0\delta (M)=0`$ (29) $`\delta (A_+)=c_1A_+M\delta (A_{})=c_1A_{}M`$ $`\delta (B_+)=2c_1B_+M\delta (B_{})=2c_1B_{}M.`$ $``$ Type II: $`A_+`$ primitive. If we set $`\delta (A_+)=0`$ the initial free parameters are: $`a_1`$, $`a_3`$, $`a_4`$, $`a_5`$, $`b_3`$, $`c_1`$ and $`c_2=c_1`$. The relations (20)–(22) reduce to a single equation $`a_1a_4+a_5c_1=0`$, and the Schouten bracket characterizes the standard and non-standard subfamilies by means of the term $`a_1b_3c_1^2`$: Standard subfamily: $`a_1`$, $`a_3`$, $`a_4`$, $`a_5`$, $`b_3`$, $`c_1`$ with $`a_1a_4+a_5c_1=0`$, $`a_1b_3c_1^20`$. Non-standard subfamily: $`a_1`$, $`a_3`$, $`a_4`$, $`a_5`$, $`b_3`$, $`c_1`$ with $`a_1a_4+a_5c_1=0`$, $`a_1b_3c_1^2=0`$. 
The structure for both subfamilies of bialgebras turns out to be: $`r=a_1NA_++a_3A_+M+a_4B_+M+a_5A_+B_+`$ $`\mathrm{\hspace{1em}}+b_3A_-M+c_1(NM-A_+A_-)`$ (30) $`\delta (N)=a_1NA_++a_3A_+M+2a_4B_+M+3a_5A_+B_+-b_3A_-M`$ $`\delta (A_+)=0\mathrm{\hspace{1em}}\delta (M)=0`$ (31) $`\delta (A_-)=a_1(NM-A_+A_-)+2a_4A_+M-a_5B_+M+2c_1A_-M`$ $`\delta (B_+)=2a_1A_+B_+-2b_3A_+M-2c_1B_+M`$ $`\delta (B_-)=2a_1(NA_--A_+B_-)+2a_3A_-M+4a_4NM`$ (32) $`\mathrm{\hspace{1em}}-2a_5(2NA_+-A_-B_+-A_+M)+2c_1B_-M.`$ $``$ Type III: $`B_+`$ primitive. Finally, the condition $`\delta (B_+)=0`$ implies that the initial free parameters are: $`a_1`$, $`a_2`$, $`a_3`$, $`a_4`$, $`a_5`$, $`c_2`$ and $`b_6=a_1`$. The equations (20)–(22) lead to $`a_1=0`$ and $`a_2a_3-a_5c_2=0`$. Hence from (24) we find that standard solutions correspond to considering the set of parameters $`\{a_2,a_3,a_4,a_5=\frac{a_2a_3}{c_2},c_20\}`$: $`\text{Standard subfamily:}a_2,a_3,a_4,c_20`$ $`r=a_2NB_++a_3A_+M+a_4B_+M+{\displaystyle \frac{a_2a_3}{c_2}}A_+B_++c_2A_+A_-`$ $`\delta (N)=2a_2NB_++a_3A_+M+2a_4B_+M+3{\displaystyle \frac{a_2a_3}{c_2}}A_+B_+`$ (33) $`\delta (A_+)=a_2B_+A_+-c_2A_+M`$ (34) $`\delta (A_-)=-a_2(B_+A_-+2A_+N)+2a_4A_+M-{\displaystyle \frac{a_2a_3}{c_2}}B_+M`$ (35) $`\mathrm{\hspace{1em}}-c_2A_-M`$ $`\delta (B_+)=0\mathrm{\hspace{1em}}\delta (M)=0`$ $`\delta (B_-)=-2a_2B_+B_-+2a_3A_-M+2(a_2+2a_4)NM`$ (36) $`\mathrm{\hspace{1em}}-2{\displaystyle \frac{a_2a_3}{c_2}}(2NA_+-A_-B_+-A_+M).`$ The non-standard subfamily corresponds to taking $`\{a_2,a_3,a_4,a_5\}`$ together with the relation $`a_2a_3=0`$ which implies that either $`a_2`$ or $`a_3`$ is equal to zero. However if we set $`a_2=0`$ then $`\delta (A_+)=0`$ and we are within the above non-standard type II; therefore we discard it and only consider the case $`a_3=0`$. $`\text{Non-standard subfamily:}a_2,a_4,a_5`$ $`r=a_2NB_++a_4B_+M+a_5A_+B_+`$ $`\delta (N)=2a_2NB_++2a_4B_+M+3a_5A_+B_+`$ (37) $`\delta (A_+)=a_2B_+A_+`$ $`\delta (A_-)=-a_2(B_+A_-+2A_+N)+2a_4A_+M-a_5B_+M`$ $`\delta (B_+)=0\mathrm{\hspace{1em}}\delta (M)=0`$ $`\delta (B_-)=-2a_2B_+B_-+2(a_2+2a_4)NM`$ (38) $`\mathrm{\hspace{1em}}-2a_5(2NA_+-A_-B_+-A_+M).`$ In what follows we will study the quantum deformations of two specific bialgebras of non-standard type with either $`A_+`$ or $`B_+`$ as primitive generators. The former contains a quantum harmonic oscillator $`h_4`$ subalgebra while the latter includes a quantum $`gl(2)`$ subalgebra.
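The coboundary recipe (11) is easy to mechanize: encoding the structure constants of (1), one can compute $`\delta (X)=[1X+X1,r]`$ as a matrix of coefficients in the basis $`X_pX_q`$ and recover, for instance, the Type I cocommutators (28). The sketch below is our illustration; the basis ordering and names are conventions we chose.

```python
import numpy as np

# basis order: N, A+, A-, B+, B-, M; brackets of (1) as structure constants
N_, AP, AM, BP, BM, M_ = range(6)
c = np.zeros((6, 6, 6))
def setbr(i, j, pairs):               # [X_i, X_j] = sum_k v X_k, antisymmetrized
    for k, v in pairs:
        c[i, j, k], c[j, i, k] = v, -v

setbr(N_, AP, [(AP, 1)]); setbr(N_, AM, [(AM, -1)]); setbr(AM, AP, [(M_, 1)])
setbr(N_, BP, [(BP, 2)]); setbr(N_, BM, [(BM, -2)]); setbr(BM, BP, [(N_, 4), (M_, 2)])
setbr(AP, BM, [(AM, -2)]); setbr(AM, BP, [(AP, 2)])

def cocomm(i, r):                     # delta(X_i)^{pq} from [X_i x 1 + 1 x X_i, r]
    d = np.zeros((6, 6))
    for a in range(6):
        d += np.outer(c[i, a], r[a])      # [X_i, X_a] tensor X_b piece
        d += np.outer(r[:, a], c[i, a])   # X_a tensor [X_i, X_b] piece
    return d

# Type I standard r-matrix (28): r = c1 N^M + c2 A+^A-
c1, c2 = 1.3, 0.7
r = np.zeros((6, 6))
r[N_, M_], r[M_, N_] = c1, -c1
r[AP, AM], r[AM, AP] = c2, -c2

print(cocomm(N_, r).any(), cocomm(M_, r).any())   # False False: N and M primitive
d = cocomm(AP, r)
print(d[AP, M_], d[M_, AP])   # -(c1+c2), +(c1+c2): delta(A+) = -(c1+c2) A+ ^ M
```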
It turns out to be : $$\begin{array}{c}\mathrm{\Delta }(A_+)=1A_++A_+1\mathrm{\Delta }(M)=1M+M1\hfill \\ \mathrm{\Delta }(N)=1N+Ne^{a_1A_+}\mathrm{\Delta }(B_+)=1B_++B_+e^{2a_1A_+}\hfill \\ \mathrm{\Delta }(A_{})=1A_{}+A_{}e^{a_1A_+}+a_1Ne^{a_1A_+}M\hfill \\ \mathrm{\Delta }(B_{})=1B_{}+B_{}e^{2a_1A_+}a_1A_{}e^{a_1A_+}N\hfill \\ +a_1Ne^{a_1A_+}(A_{}a_1MN).\hfill \end{array}$$ (40) The compatible deformed commutation rules are obtained by imposing $`\mathrm{\Delta }`$ to be a homomorphism of $`U_{a_1}(h_6)`$: $`\mathrm{\Delta }([X,Y])=[\mathrm{\Delta }(X),\mathrm{\Delta }(Y)]`$; they are $`[N,A_+]={\displaystyle \frac{e^{a_1A_+}1}{a_1}}[N,A_{}]=A_{}[A_{},A_+]=Me^{a_1A_+}`$ (41) $`[N,B_+]=2B_+[N,B_{}]=2B_{}a_1A_{}N[M,]=0`$ $`[B_{},B_+]=2(1+e^{a_1A_+})N+2M2a_1A_{}B_+`$ (42) $`[A_+,B_{}]=(1+e^{a_1A_+})A_{}+a_1e^{a_1A_+}MN[A_+,B_+]=0`$ (43) $`[A_{},B_+]=2{\displaystyle \frac{1e^{a_1A_+}}{a_1}}[A_{},B_{}]=a_1A_{}^2.`$ The associated universal quantum R-matrix, which satisfies the quatum YBE, reads $$=\mathrm{exp}\{a_1A_+N\}\mathrm{exp}\{a_1NA_+\}.$$ (44) Note that the underlying cocommutator is related to the first order in $`a_1`$ of the coproduct by means of $`\delta =(\mathrm{\Delta }\sigma \mathrm{\Delta })`$ where $`\sigma (XY)=YX`$; the limit $`a_10`$ of (42) leads to the $`h_6`$ Lie brackets (1); and the first order in $`a_1`$ of $``$ corresponds to the classical $`r`$-matrix (39). We remark that the generators $`\{N,A_+,A_{},M\}`$ give rise to a non-standard quantum harmonic oscillator algebra $`U_{a_1}(h_4)U_{a_1}(h_6)`$ . On the other hand, a deformed one-boson realization of $`U_{a_1}(h_6)`$ is given by $`N={\displaystyle \frac{e^{a_1\widehat{a}_+}1}{a_1}}\widehat{a}_{}A_+=\widehat{a}_+A_{}=e^{a_1\widehat{a}_+}\widehat{a}_{}`$ (45) $`B_+=\left({\displaystyle \frac{1e^{a_1\widehat{a}_+}}{a_1}}\right)^2B_{}=e^{a_1\widehat{a}_+}\widehat{a}_{}^2M=1.`$ (46) Notice that the classical identifications $`B_+=A_+^2`$ and $`B_{}=A_{}^2`$ are no longer valid in the quantum case. 
The action of the generators of $`U_{a_1}(h_6)`$ on the number states $`\{|m\}_{m=0}^{\mathrm{}}`$ is $`A_+|m=\sqrt{m+1}|m+1M|m=|m`$ (47) $`A_{}|m=\sqrt{m}|m1+m{\displaystyle \underset{k=0}{\overset{\mathrm{}}{}}}{\displaystyle \frac{a_{1}^{}{}_{}{}^{k+1}}{(k+1)!}}\sqrt{{\displaystyle \frac{(m+k)!}{m!}}}|m+k`$ (48) $`N|m=m|m+m{\displaystyle \underset{k=1}{\overset{\mathrm{}}{}}}{\displaystyle \frac{a_{1}^{}{}_{}{}^{k}}{(k+1)!}}\sqrt{{\displaystyle \frac{(m+k)!}{m!}}}|m+k`$ (49) $`B_+|m=\sqrt{(m+1)(m+2)}|m+2`$ (50) $`+{\displaystyle \underset{k=1}{\overset{\mathrm{}}{}}}(2+2^{k+2}){\displaystyle \frac{(a_1)^k}{(k+2)!}}\sqrt{{\displaystyle \frac{(m+k+2)!}{m!}}}|m+k+2`$ (51) $`B_{}|m=\sqrt{m(m1)}|m2+a_1\sqrt{m}(m1)|m1`$ (52) $`+m(m1){\displaystyle \underset{k=0}{\overset{\mathrm{}}{}}}{\displaystyle \frac{a_{1}^{}{}_{}{}^{k+2}}{(k+2)!}}\sqrt{{\displaystyle \frac{(m+k)!}{m!}}}|m+k.`$ (53) The deformed boson realization (46) can be translated into differential operators acting on the the space of entire analytic functions $`f(\alpha )`$, that is, a deformed Fock–Bargmann representation which is given by $`N={\displaystyle \frac{e^{a_1\alpha }1}{a_1}}{\displaystyle \frac{d}{d\alpha }}A_+=\alpha A_{}=e^{a_1\alpha }{\displaystyle \frac{d}{d\alpha }}`$ (54) $`B_+=\left({\displaystyle \frac{1e^{a_1\alpha }}{a_1}}\right)^2B_{}=e^{a_1\alpha }{\displaystyle \frac{d^2}{d\alpha ^2}}M=1.`$ (55) Hence the relation (8) provides the following differential equation that characterizes the deformed two-photon algebra eigenstates : $`\beta _2e^{a_1\alpha }{\displaystyle \frac{d^2f}{d\alpha ^2}}+\left(\beta _1{\displaystyle \frac{e^{a_1\alpha }1}{a_1}}+\beta _4e^{a_1\alpha }\right){\displaystyle \frac{df}{d\alpha }}`$ (56) $`+\left(\beta _3\left({\displaystyle \frac{1e^{a_1\alpha }}{a_1}}\right)^2+\beta _5\alpha \lambda \right)f=0.`$ (57) The particular equation with $`\beta _2=\beta _3=0`$ is associated to the quantum oscillator subalgebra $`U_{a_1}(h_4)`$ and it would give deformed one-photon coherent states; the case $`\beta _4=\beta _5=0`$ corresponds to the $`gl(2)`$ sector which is not a quantum subalgebra (see the coproduct (40)). We stress the relevance of the coproduct in order to construct tensor product representations of the two-photon generators (55). Finally, we remark that the limit $`a_10`$ of all the above expressions gives rise to their classical version presented in the Introduction. 
## 4 The quantum two-photon algebra $`U_{a_2}(h_6)`$ Let us consider now the non-standard bialgebra of type III with $`a_4=a_5=0`$ and $`a_2`$ as a free parameter: $$\begin{array}{c}r=a_2NB_+\hfill \\ \delta (B_+)=0\delta (M)=0\hfill \\ \delta (N)=2a_2NB_+\delta (A_+)=a_2A_+B_+\hfill \\ \delta (A_{})=a_2(A_{}B_++2NA_+)\hfill \\ \delta (B_{})=2a_2(B_{}B_++NM).\hfill \end{array}$$ (58) The resulting coproduct, commutation rules and universal $`R`$-matrix of the quantum algebra $`U_{a_2}(h_6)`$ read : $$\begin{array}{c}\mathrm{\Delta }(B_+)=1B_++B_+1\mathrm{\Delta }(M)=1M+M1\hfill \\ \mathrm{\Delta }(N)=1N+Ne^{2a_2B_+}\mathrm{\Delta }(A_+)=1A_++A_+e^{a_2B_+}\hfill \\ \mathrm{\Delta }(A_{})=1A_{}+A_{}e^{a_2B_+}+2a_2Ne^{2a_2B_+}A_+\hfill \\ \mathrm{\Delta }(B_{})=1B_{}+B_{}e^{2a_2B_+}+2a_2Ne^{2a_2B_+}M\hfill \end{array}$$ (59) $`[N,A_+]=A_+[N,A_{}]=A_{}[A_{},A_+]=M`$ (60) $`[N,B_+]={\displaystyle \frac{e^{2a_2B_+}1}{a_2}}[N,B_{}]=2B_{}4a_2N^2`$ (61) $`[B_{},B_+]=4N+2Me^{2a_2B_+}[M,]=0`$ (62) $`[A_+,B_{}]=2A_{}+2a_2(NA_++A_+N)[A_+,B_+]=0`$ (63) $`[A_{},B_+]=2e^{2a_2B_+}A_+[A_{},B_{}]=2a_2(NA_{}+A_{}N)`$ $$=\mathrm{exp}\{a_2B_+N\}\mathrm{exp}\{a_2NB_+\}.$$ (64) Notice that the generators $`\{N,B_+,B_{},M\}`$ close a non-standard quantum $`gl(2)`$ algebra such that $`U_{a_2}(gl(2))U_{a_2}(h_6)`$, while the oscillator algebra $`h_4`$ is preserved as an undeformed subalgebra only at the level of commutation relations. A one-boson representation of $`U_{a_2}(h_6)`$ is given by: $`B_+=\widehat{a}_+^2M=1N={\displaystyle \frac{e^{2a_2\widehat{a}_+^2}1}{2a_2\widehat{a}_+}}\widehat{a}_{}`$ (65) $`A_+=\left({\displaystyle \frac{1e^{2a_2\widehat{a}_+^2}}{2a_2}}\right)^{1/2}A_{}={\displaystyle \frac{e^{2a_2\widehat{a}_+^2}}{\widehat{a}_+}}\left({\displaystyle \frac{1e^{2a_2\widehat{a}_+^2}}{2a_2}}\right)^{1/2}\widehat{a}_{}`$ (66) $`B_{}=\left({\displaystyle \frac{e^{2a_2\widehat{a}_+^2}1}{2a_2\widehat{a}_+^2}}\right)\widehat{a}_{}^2+\left({\displaystyle \frac{e^{2a_2\widehat{a}_+^2}}{\widehat{a}_+}}+{\displaystyle \frac{1e^{2a_2\widehat{a}_+^2}}{2a_2\widehat{a}_+^3}}\right)\widehat{a}_{}.`$ (67) We remark that, although (62) presents a non-deformed oscillator subalgebra, the representation (67) includes strong deformations in terms of the boson operators. 
The corresponding Fock–Bargmann representation adopts the following form: $`B_+=\alpha ^2M=1N={\displaystyle \frac{e^{2a_2\alpha ^2}1}{2a_2\alpha }}{\displaystyle \frac{d}{d\alpha }}`$ (68) $`A_+=\left({\displaystyle \frac{1e^{2a_2\alpha ^2}}{2a_2}}\right)^{1/2}A_{}={\displaystyle \frac{e^{2a_2\alpha ^2}}{\alpha }}\left({\displaystyle \frac{1e^{2a_2\alpha ^2}}{2a_2}}\right)^{1/2}{\displaystyle \frac{d}{d\alpha }}`$ (69) $`B_{}=\left({\displaystyle \frac{e^{2a_2\alpha ^2}1}{2a_2\alpha ^2}}\right){\displaystyle \frac{d^2}{d\alpha ^2}}+\left({\displaystyle \frac{e^{2a_2\alpha ^2}}{\alpha }}+{\displaystyle \frac{1e^{2a_2\alpha ^2}}{2a_2\alpha ^3}}\right){\displaystyle \frac{d}{d\alpha }}.`$ (70) Therefore if we substitute these operators in the equation of the two-photon algebra eigenstates (8) we obtain the differential equation: $`\beta _2({\displaystyle \frac{e^{2a_2\alpha ^2}1}{2a_2\alpha ^2}}){\displaystyle \frac{d^2f}{d\alpha ^2}}+(\beta _1{\displaystyle \frac{e^{2a_2\alpha ^2}1}{2a_2\alpha }}+\beta _4{\displaystyle \frac{e^{2a_2\alpha ^2}}{\alpha }}\left({\displaystyle \frac{1e^{2a_2\alpha ^2}}{2a_2}}\right)^{1/2}){\displaystyle \frac{df}{d\alpha }}`$ (71) $`+\beta _2({\displaystyle \frac{e^{2a_2\alpha ^2}}{\alpha }}+{\displaystyle \frac{1e^{2a_2\alpha ^2}}{2a_2\alpha ^3}}){\displaystyle \frac{df}{d\alpha }}+(\beta _3\alpha ^2+\beta _5\left({\displaystyle \frac{1e^{2a_2\alpha ^2}}{2a_2}}\right)^{1/2}\lambda )f=0.`$ (72) If we set $`\beta _4=\beta _5=0`$, then we obtain an equation associated to the quantum subalgebra $`U_{a_2}(gl(2))`$, while the case $`\beta _2=\beta _3=0`$ corresponds to the harmonic oscillator sector. Note that in the limit $`a_20`$ we recover the classical two-photon structure. To end with, it is remarkable that we can make use of the two-photon bialgebra automorphism defined by (25) and (27) in order to obtain from $`U_{a_1}(h_6)`$ and $`U_{a_2}(h_6)`$ two other (algebraically equivalent) quantum deformations of $`h_6`$, namely $`U_{b_1}(h_6)`$ and $`U_{b_2}(h_6)`$, but now with $`A_{}`$ and $`B_{}`$ as the primitive generators, respectively. However at a representation level, the physical implications are rather different. If, for instance, $`A_{}`$ is a primitive generator instead of $`A_+`$ we would obtain a deformed Fock–Bargmann representation with terms as $`\mathrm{exp}(b_1\frac{d}{d\alpha })`$ (instead of $`e^{a_1\alpha }`$), giving rise to a differential-difference realization . Therefore, quantum $`h_6`$ algebras with either $`A_+`$ or $`B_+`$ primitive would originate a class of smooth deformed states, while those with either $`A_{}`$ or $`B_{}`$ primitive will be linked to a set of states including some intrinsic discretization. ## Acknowledgments A.B. and F.J.H. have been partially supported by DGICYT (Project PB94–1115) from the Ministerio de Educación y Cultura de España and by Junta de Castilla y León (Projects CO1/396 and CO2/297). P.P. has been supported by a fellowship from AECI, Spain.
no-problem/9907/astro-ph9907336.html
ar5iv
text
# The Hubble Deep Fields: North vs. South ## 1. Introduction The Hubble Deep Field North (Williams et al. 1996) resulted in dozens of papers on galaxy evolution. All these papers treat the HDFN as a typical field; the conclusions that are drawn are assumed to hold for all fields. With the advent of the HDF South (Williams et al. 1999) it is possible to test this hypothesis. The two fields do show some differences. The number counts of Ferguson (1999, this volume) show a that the HDFN holds 15% more galaxies. This excess is more visible when the differential counts are plotted as shown in Figure 1. Only a fraction of the galaxies in the HDFN and virtually none of the galaxies in the HDFS have spectroscopic redshifts. Thus the question “where do these excess galaxies in the HDFN lie?” must be addressed with photometric redshifts. ## 2. Catalogs and Photometry The galaxy catalogs and photometry are generated using SExtractor (Bertin & Arnouts, 1996) and additional software written by the author. First, SExtractor is run on the I band image of the HDFN (version 2) and the HDFS (version 1) mosaics. SExtractor does an excellent job of deblending objects. The few errors it makes take the form of splitting single large, bright galaxies into fragments. These are easily corrected by hand. When determining photometric redshifts, one should measure the colours of galaxies through the smallest feasible aperture. Using a small aperture decreases the random error in the colours, at the expense of introducing systematic shifts if there is a colour gradient in the galaxy. These systematic effects are actually desirable since they generally take form a reddening towards the centre. Since reddening implies an increase in the amplitude of the 4000Å break, it is then easier to determine a photometric redshift. Using small apertures also minimises the chance of contamination by other nearby galaxies. However, it is desirable to construct a catalog using a larger aperture to avoid the systematic errors. For these reasons, the final catalog contained galaxies with with $`I_{ST}<`$28<sup>1</sup><sup>1</sup>1 All magnitudes in this article are given on the ST magnitude system unless otherwise specified. The ST magnitude system is defined such that zero magnitude corresponds to $`F_\lambda =3.63\times 10^{12}W\mu ^1`$cm<sup>-2</sup> in all band passes. This is similar to the AB system which is defined in terms of a constant value of $`F_\nu `$ but is convenient for comparing magnitudes to template spectra which are usual given in $`F_\lambda (\lambda )`$. ($`I_{AB}<`$27.2) as measured through the 1.0 arcsecond aperture. The colours measured through the smaller, 0.5 arcsecond, aperture are used to determine photometric redshifts. In both cases, pixels that lie within the isophotes of other nearby galaxies (as determined using the segmentation image generated by SExtractor) are excluded from the aperture. Because the HDF frames in the different bands are registered to within a fraction of a pixel, the same pixels can be excluded on each frame. This prevents colour contamination which could affect the photometric redshifts. Such contamination has particularly undesirable effects when faint U-band or B-band dropout galaxies lie near bright foreground galaxies. ## 3. 
Photometric Redshifts The galaxies in the the Hubble Deep Fields span a large range in redshifts: The available spectroscopic redshifts in the Hubble Deep Fields, although numerous below $`z=1`$ and in the range $`2<z<3`$, are spotty in the range $`1<z<2`$ and almost non-existent above $`z=3`$. The various linear regression photometric redshift techniques (e.g. Connolly et al., this volume) rely on a training set of spectroscopic redshifts. These techniques, although effective at low redshifts where such a training set exists, are unreliable where the spectroscopic coverage is sparse. Therefore, the photometric redshifts in this article are calculated using the template fitting technique. The templates are constructed from the observed spectral energy distributions of local galaxies. The four spectra of Coleman, Wu & Weedman (1980, CWW) were used initially. It was found however, that many of the blue galaxies in the Hubble Deep Fields are not well fit by even the bluest CWW spectrum. This caused moderate discrepancies when the photometric redshifts were compared to the spectroscopic redshifts. Therefore the CWW spectra are supplemented with the SB3 and SB2 spectra from Kinney et al. (1996) to form the basis of the template set. From this basis set of six spectra, intermediate templates are constructed by interpolation for a total of 51 templates. These templates are redshifted at intervals of 0.02 in $`\mathrm{log}_{10}(z)`$. Spacing the templates in $`\mathrm{log}z`$ is an improvement over the more usual linear spacing. It allows, for the same total number of templates, tighter coverage at low redshift (where it is most needed) at the small sacrifice of sparse coverage at high redshift (where it is not needed). The spectra are corrected for intergalactic absorption as prescribed by Madau (1995) After redshifting, the templates are multiplied by the response function of the UBRI filters to produce fluxes at the central wavelength of each filter. These fluxes are converted to magnitudes to form the final templates. Each template is compared to the observed galaxy magnitudes in turn and a $`\chi ^2`$ is determined: $$\chi ^2=\underset{i=1}{\overset{N_{filters}}{}}\frac{(M_iT_i\alpha )^2}{\sigma _{M_i}^2},$$ (1) Where $`M_i`$ is the observed magnitude of the galaxy, $`\sigma _{M_i}`$ is the magnitude uncertainty, $`T_i`$ is the template magnitude, and $`\alpha `$ is a normalisation factor that corrects the templates to the apparent magnitude of the galaxy. The optimal normalisation factor is determined by minimising equation (1) with respect to $`\alpha `$. In many cases the galaxy is undetected in one or more of the band passes. This can occur when the galaxy when a galaxy is at high redshift and its UV flux has been absorbed by the IGM (the U-band dropouts). However, this can also occur with intrinsically faint, low redshift galaxies. The situation is handled by replacing the relevant term of the sum in equation (1). If the magnitude predicted by template is less than the magnitude limit in that band, the term is replaced with zero. If this isn’t case, on the other hand, the term is replaced with $$\frac{M_{\mathrm{limit}}T_i\alpha }{\sigma _{T_i}^2}$$ (2) where $`M_{limit}`$ is the magnitude limit of the image in question. The weighting factor, $`\sigma _{T_i}`$, is the uncertainty that the galaxy’s magnitude would have if it was visible in that bandpass. 
As a check on the accuracy of the technique, photometric redshifts were calculated by the above method for galaxies in both Hubble Deep Fields which have spectroscopic redshifts. The comparison is shown in Figure 2. The redshift uncertainties scale with z. The typical relative error in the photometric redshifts is $`\sigma _z/z=11\%`$. ## 4. The Comparison The photometric redshift technique described in section 3 was applied to the photometric catalogs described in section 2. The resulting redshift distributions for the HDF North and South are shown in Figure 3. The two redshift distributions are not the same. The Kolmogorov-Smirnov test gives the probability of the two distributions being the same as $`1.2\times 10^6`$. The redshift distributions are most different in the redshift range $`0.4<z<1.2`$. It is tempting to ascribe the differences in the Hubble Deep Fields to a structure present in the North but not in the South. Indeed, there is a pronounced spike in the spectroscopic redshift distribution of the HDFN at $`z=0.475`$ (Cohen et al. 1996). Figure 4 shows the I band images of the Hubble Deep Fields. Only light from galaxies with photometric redshifts in the range $`0.4<z<0.8`$ is shown; the other galaxies been masked out. The images have been convolved with Gaussian profile ($`\sigma =6`$ arcseconds). The left image shows a large concentration of light in the HDFN that is not present in the South. The two-point angular correlation function was computed for various redshift slices. Note that it is impracticable to calculate the angular correlation function for slices much narrower than about $`\mathrm{\Delta }z=.4`$ without running into problems with small number statistics. Also, because of the uncertainties on the redshifts, it would be difficult to compute a reliable spatial (as opposed to angular) correlation function. The correlation functions for both fields were compared for each slice. For most redshift slices, they showed no difference within the errors. The only exception was the in the $`0.4<z<0.8`$ redshift slice, where galaxies in the HDFN were significantly more clustered than in the HDFS. The spatial scale of the structure (the HDF is $``$ 1 Mpc across at that redshift) and the number of galaxies involved ($`50`$ more galaxies in the North than in the South) suggest a very poor cluster or a very rich group. More generally, the differences in the redshift distributions could be due to cosmic variance in the large scale galaxy distribution. This hypothesis was tested empirically in the following manner: The William Herschel Deep Field (WHDF, Metcalfe et al. 1999 in press) extends to $`B=28`$ and has good coverage in the UBRIHK bands. It covers roughly 40 square arcminutes. The WHDF was divided into 9 separate areas, each the same size as the Hubble Deep Fields. The field to field variance was found to be 10% (rms), smaller than, but not inconsistent with, the difference between the HDFN and HDFS. N-body simulations computed by Stadel (private communication) indicate the variance in the mass distribution along lines of sight comparable the HDF are about 20% out to $`z=1`$. Assuming that galaxies are linearly biased (Kauffman, 1998), this should translate into a similar variance in the redshift distributions in the Hubble Deep Fields. Again this is slightly smaller than, not but not inconsistent with, the difference between the two redshift distributions below $`z=1`$ as seen in Figure 3. ## References Bertin, E., & Arnouts, S. 1996, A&A, 117, 393 Cohen, J. 
G., Cowie, L. L., Hogg, D. W., Songaila, A., Blandford, R., & Hu, E. M. 1996, ApJ, 471, L5 Coleman, G. D., Wu C-C. & Weedman, D. W. 1980, 43, 393 (CWW) Kauffmann, G., Colberg, J. M., Diaferio, A., White, S. D. M. 1998, astro-ph/98091678 Kinney, A. L., Calzetti, D., Bohlin, R. C., McQuade K., Storchi-Bergmann, T., & Schmidt, H. R. 1996, ApJ, 467 38 Madau, P. 1995, ApJ, 441, 18 Williams, R. E. et al. 1996, AJ, 112, 1335 Williams, R. E. et al. 1999, in preparation
no-problem/9907/hep-ph9907312.html
ar5iv
text
# QCD critical point and event-by-event fluctuations in heavy ion collisions ## 1 Introduction The goal of this work is to motivate a program of heavy ion collision experiments aimed at discovering an important qualitative feature of the QCD phase diagram — the critical point at which a line of first order phase transitions separating quark-gluon plasma from hadronic matter ends (see Fig. 1). The possible existence of such an endpoint E has recently been emphasized and its universal critical properties have been described . The point E can be thought of as a descendant of a tricritical point in the phase diagram for 2-flavor QCD with massless quarks. As pointed out in , observation of the signatures of freezeout near E would confirm that heavy ion collisions are probing above the chiral transition region in the phase diagram. Furthermore, we would learn much about the qualitative landscape of the QCD phase diagram. The basic ideas for observing the critical endpoint proposed in are based on the fact that such a point is a genuine thermodynamic singularity at which susceptibilities diverge and the order parameter fluctuates on long wavelengths. The resulting signatures all share one common property: they are nonmonotonic as a function of an experimentally varied parameter such as the collision energy, centrality, rapidity or ion size. The goal of is to develop a set of tools which will allow heavy ion collision experiments to discover the critical endpoint through the analysis of the variation of event-by-event fluctuations as control parameters are varied. ## 2 Non-critical fluctuations and comparison with data. Before we can achieve our goal we must develop sufficient understanding of non-critical event-by-event fluctuations. Large acceptance detectors, such as NA49 and WA98 at CERN, have made it possible to measure important average quantities in single heavy ion collision events, such as, for example, the mean transverse momentum of the charged particles in a single event. The most remarkable property of the data is that the event-by-event distributions of such observables are as perfect Gaussians as the data statistics allow. Our first step is to analyze the NA49 data and compare it with thermodynamic predictions for non-critical fluctuations. We find that the data is broadly consistent with the hypotheses that most of the fluctuations are thermodynamic in origin, and that PbPb collisions at 160 AGeV do not freeze out near the critical point. This allows us to establish the background, on top of which the effects of critical fluctuations should be sought as the control parameters are varied. Most of our analysis is applied to the fluctuations of the observables characterizing the multiplicity and momenta of the charged pions in the final state of a heavy ion collision. We model the hadronic matter at freeze-out by a resonance gas in thermal equilibrium. Our simulation shows that more than half of all observed pions come from resonance decays. The resonances also have a dramatic effect on the size of the multiplicity, $`N`$, fluctuations. We find: $$\frac{(\mathrm{\Delta }N)^2}{N}1.5,$$ (1) which is larger than the ideal gas value of 1. The contribution of resonances is important to bring this number up. The experimental value from NA49 of this ratio is 2.0. There is clearly room for non-thermodynamic fluctuations, such as fluctuations of impact parameter. Their effect can be studied and separated by varying the centrality cut using the zero degree calorimeter. 
Fluctuations of intensive observables, such as mean $`p_T`$ are less sensitive to impact parameter fluctuations. However, the effects of the flow on $`p_T`$ are large and complicate the analysis. The quantity we compare with the data is the ratio: $`v_{\mathrm{inc}}(p_T)/p_T`$, of the variance of the inclusive distribution to all-event mean $`p_T`$. The effects of the flow, which we do not calculate, should largely cancel in this ratio. We find: $$\frac{v_{\mathrm{inc}}(p_T)}{p_T}=0.68.$$ (2) The experimental value obtained from NA49 data is 0.75. We see that the major part of the observed fluctuation in $`p_T`$ is accounted for by the thermodynamic fluctuations. A large potential source of the discrepancy is the “blue shift” approximation we used and could be remedied by a better treatment of flow. A very important feature in the data is the value of the ratio of the scaled event-by-event variation to the variance of the inclusive distribution: $$F=\frac{Nv_{\mathrm{ebe}}^2(p_T)}{v_{\mathrm{inc}}^2(p_T)}=1.004\pm 0.004.$$ (3) This is a remarkable fact, since the contribution of the Bose enhancement to this ratio is almost an order of magnitude larger than the statistical uncertainty. Some mechanism must compensate for the Bose enhancement. In the next section we find a possible origin of this effect: anti-correlations due to energy conservation and thermal contact between the observed pions and the rest of the system at freeze-out. ## 3 Energy Conservation and Thermal Contact We consider the effect of the energy conservation and thermal contact between the subsystem we observe, which we call B and which consists mainly of charged pions, and the remaining unobserved part of the system, which we call A and which includes the neutral pions, the resonances, the pions not in the experimental acceptance and, if the freeze-out occurs near critical point, the order parameter or sigma field. We quantify the effect by calculating the “master correlator”: $$\mathrm{\Delta }n_p\mathrm{\Delta }n_k=v_p^2\delta _{pk}\frac{v_p^2ϵ_pv_k^2ϵ_k}{T^2(C_A+C_B)},$$ (4) where $`n_p`$ are the pion momentum mode occupation numbers, $`v_p^2=n_p(1+n_p)`$, and $`C_{A,B}`$ are the heat capacities of the two systems A and B. Using this expression for the correlator we can now calculate the effect of thermal contact and energy conservation on fluctuations of various observables, such as mean $`p_T`$, for example. In particular, we find that the anti-correlation introduced by this effect reduces the value of the ratio $`F`$ defined in (3) by and amount comparable to the Bose enhancement effect, and thus can compensate it. This effect can be distinguished from other effects, e.g., finite two-track resolution, also countering the Bose enhancement, by the specific form of the microscopic correlator (4). The effect of energy conservation and thermal contact introduces an off-diagonal (in $`pk`$ space, and also in isospin space) anti-correlation. Some amount of such anti-correlation is indeed observed in the NA49 data. Another important point of (4) is that as the freeze-out approaches the critical point and $`C_A`$ becomes very large the anti-correlation due to energy conservation disappears. ## 4 Pions Near the Critical Point: Interaction with the Sigma Field Finally, in this section, unlike the previous sections, we consider the situation in which the freeze-out occurs very close to the critical point. This point is characterized by large long-wavelength fluctuations of the sigma field (chiral condensate). 
We must take into account the effect of the $`G\sigma \pi \pi `$ interaction between the pions and such a fluctuating field. We do this by calculating the contribution of this effect to the “master correlator”. We find: $$\mathrm{\Delta }n_p\mathrm{\Delta }n_k=v_p^2\delta _{pk}+\frac{1}{m_\sigma ^2}\frac{G^2}{T}\frac{v_p^2v_k^2}{\omega _p\omega _k}.$$ (5) We see that the exchange of quanta of the soft sigma field (see Fig. 2) leads to a dramatic off-diagonal correlation, the size of which grows as we approach the critical point and $`m_\sigma `$ decreases. This correlation takes over the off-diagonal anti-correlation discussed in the previous section. To quantify the effect of this correlation we computed the contribution to the ratio $`F`$ (3) from (5). We find: $$\mathrm{\Delta }F_\sigma =0.14\left(\frac{G_{\mathrm{freeze}\mathrm{out}}}{300\mathrm{MeV}}\right)^2\left(\frac{\xi _{\mathrm{freeze}\mathrm{out}}}{6\mathrm{fm}}\right)^2\mathrm{for}\mu _\pi =0,$$ (6) This effect, similarly to the Bose enhancement, is sensitive to over-population of the pion phase space characterized by $`\mu _\pi `$ and increases by a factor 2.5 for $`\mu _\pi =60`$ MeV. We estimate the size of the coupling $`G`$ to be around 300 MeV near point E, and the mass $`m_\sigma `$, bound by finite size effects, to be less than 6 fm. The effect (6) can thus easily exceed the present statistical uncertainty in the data (3) by 1-2 orders of magnitude. It is also important to note that we have calculated the effect of critical fluctuations on $`F`$ because this ratio is being measured in experiments, such as NA49. This observable is not optimized for detection of critical fluctuations. Observables which are more sensitive to small $`p_T`$ than $`F`$ (e.g., “soft $`F`$”), and/or observables which are sensitive to off-diagonal correlations in $`pk`$ space would show even larger effect as the critical point is approached.
no-problem/9907/cond-mat9907143.html
ar5iv
text
# Entropy production, energy dissipation and violation of Onsager relations in the steady glassy state. ## Abstract In a glassy system different degrees of freedom have well-separated characteristic times, and are described by different temperatures. The stationary state is essentially non-equilibrium. A generalized statistical thermodynamics is constructed and a universal variational principle is proposed. Entropy production and energy dissipation occur at a constant rate; there exists a universal relation between them, valid to leading order in the small ratio of the characteristic times. Energy dissipation (unlike entropy production) is closely connected to the fluctuations of the slow degree. Corrections due to a finite ratio of the times are obtained. Onsager relations in the context of heat transfer are also considered. They are always broken in glassy states, except close to equilibrium. Statistical thermodynamics is a universal and powerful theory for describing equilibrium states. It was generalized to weakly non-equilibrium states, in an approach first started by Onsager, and further developed extensively, see e.g. . It was recognized long time ago that concepts and methods of statistical thermodynamics can also be applied to glassy non-equilibrium states . In such systems the relaxation times depend strongly on temperature. When cooling at a proper rate (varying from $`10^2`$ K/s for window glass to $`10^5`$ K/s for metallic glasses) the equilibrium relaxation time becomes very large near the experimentally defined glassy temperature $`T_g`$. The thus reached metastable state is not in equilibrium but, nevertheless, can be described by a generalized thermodynamics, assigning different temperatures (so-called effective or fictive temperatures) to processes with well-separated characteristic times -. In spite of much progress in this area many important questions are still not fully understood. In particular, it concerns dissipative effects. However, as the steady glassy state is non-equilibrium, there exist a constant-rate entropy production, an energy dissipation, and a transfer of heat. Although the physical importance of entropy production was stressed in the fundamental review , its systematic investigation has been continued only very recently . We shall consider the steady non-equilibrium glassy state of systems in which the subsystems are coupled to baths at different temperatures. The glassiness here is solely a consequence of the assumed large separation of time-scales of the subsystems. Our purposes are the following. (i) Derive the glassy stationary statistical distribution and the corresponding thermodynamics. (ii) Propose a general variational principle for glassy thermodynamics. (iii) Investigate the entropy production and energy dissipation in the stationary non-equilibrium glassy state. (iv) Show the breakdown of the Onsager relations for heat transfer. To adstruct our conclusions, let us introduce the simplest glassy system. It consist of a pair of coupled stochastic variables $`x_1`$, $`x_2`$ with Hamiltonian $`H(x_1,x_2)`$, which interact with different thermal baths and have different characteristic time-scales. Such an approach pretends to establish all important, necessary ingredients of glassy behavior. A theory of statistical systems interacting with different thermal baths was investigated in . The essentially new points of our approach are the large separation between characteristic times, and arbitrary difference between temperatures. 
The (overdamped) Langevin equations for the dynamics read: $`\mathrm{\Gamma }_i\dot{x}_i=`$ $`_iH+\eta _i(t),\eta _i(t)\eta _j(t^{})=2\mathrm{\Gamma }_iT_i\delta _{ij}\delta (tt^{}),`$ (1) $`i,j=`$ $`1,2`$ (2) where $`\mathrm{\Gamma }_1`$, $`\mathrm{\Gamma }_2`$ are the damping constants, and $`_i=/x_i`$. The Einstein relation between the strength of noise and the damping constant holds in Eq. (1) because the thermal baths themselves are in equilibrium . It is assumed that the relaxation time toward the total equilibrium (where $`T_2=T_1`$) is much larger than all considered times; thus for our purposes $`T_2`$ and $`T_1`$ are constants. Hereafter we shall assume that $`x_2`$ is changing much more slowly than $`x_1`$; this condition is ensured by the condition $`\gamma =\mathrm{\Gamma }_1/\mathrm{\Gamma }_21`$. Let us first indicate how the stationary distribution can be obtained to order $`\gamma ^0`$, which will give us the basic formulation of the generalized glassy thermodynamics. Eqs. (1) can be investigated by the method of adiabatic elimination (Born-Oppenheimer method). First Eq. (1) for $`x_1`$ is solved keeping the $`x_2`$ fixed, valid on relatively short time-scales where only Eq. (1) for $`x_1`$ is relevant. In this case the Langevin equation has the equilibrium distribution $$P_0(x_1|x_2)=\frac{1}{Z(x_2)}\mathrm{exp}(\beta _1H(x_1,x_2)),$$ (3) where $`Z(x_2)`$ is the partition sum for a fixed value of $`x_2`$. $`x_1`$ can be carried out. At quasi-equilibrium of the $`x_2`$-subsystem this average should be performed using the distribution (3). In this way we get from Eqs. (1) a related dynamics for the slow variable, in which the two particle Hamiltonian $`H(x_1,x_2)`$ is replaced by the effective one-particle Hamiltonian $`T_1\mathrm{ln}Z(x_2)`$. We thus have the effective equation of motion $$\mathrm{\Gamma }_2\dot{x}_2=\frac{}{x_2}T_1\mathrm{ln}Z(x_2)+\eta _2(t)$$ (4) As the noise is due to a bath at temperature $`T_2`$, see Eq. (1), the equilibrium distribution of this equation reads $$P_0(x_2)=\frac{Z^{T_1/T_2}(x_2)}{𝒵},𝒵=dx_2Z^{T_1/T_2}(x_2).$$ (5) The joint distribution of $`x_1`$ and $`x_2`$ can now be written as $`P_0(x_1,x_2)=P_0(x_1|x_2)P_0(x_2)`$. A similar approach is applied in spin-glasses and other disordered systems where $`n=T_1/T_2`$ is considered as“dynamically generated” replica number . Keeping this in mind we now consider the general situation. (i) If the state of a system is described by a distribution $`P(x_1,x_2)=P(x_1|x_2)P(x_2)`$ then there exist the general definition for the mean energy and entropy : $`U=H`$, $`S=\mathrm{ln}P`$. This latter Boltzmann-Gibbs-Shannon formula corresponds to the general statistical definition of entropy, relevant also outside of equilibrium . The total entropy decomposes as $`S=S_1+S_2`$, where $`S_1={\displaystyle dx_2P(x_2)[dx_1P(x_1|x_2)\mathrm{ln}P(x_1|x_2)]},`$ (6) $`S_2={\displaystyle dx_2P(x_2)\mathrm{ln}P(x_2)}.`$ (7) $`S_1`$ is the entropy of the fast variable $`x_1`$, averaged over the quenched slow variable $`x_2`$, and $`S_2`$ is the entropy of the slow variable itself. This general result of the statistical thermodynamics can be again applied in our case when $`P(x_1,x_2)=P_0(x_1,x_2)`$. From Eqs. (3), (5), (6) an important relation can be derived which generalizes the usual thermodynamical relation for the free energy $`F=T_2\mathrm{ln}𝒵`$: $$F=UT_1S_1T_2S_2$$ (8) This agrees with the expression of the free energy for a glassy system put forward previously by one of us . 
In that approach the equivalent of $`T_2`$ is the dynamically generated effective temperature, while here it is the temperature of a bath. The first and second laws of thermodynamics take the form $$\mathrm{d}U=\mathrm{¯}\mathrm{d}Q+\mathrm{¯}\mathrm{d}WT_1\mathrm{d}S_1+T_2\mathrm{d}S_2+\mathrm{¯}\mathrm{d}W,$$ (9) where $`\mathrm{¯}\mathrm{d}W`$ is the work which is done on the system by external forces, and the equality in Eq. (9) is realized for a reversible process. Eqs. (8-9) are the manifestation of the glassy thermodynamics which generalize the usual one to the case of non-equilibrium systems with well-separated time-scales. Here they have been obtained from Langevin equations under the sole assumption of a separation of time scales. (ii) Let us indicate how a variational principle can be obtained from a more general consideration. The usual Gibbs distribution for homogeneous equilibrium states can be obtained either from maximizing the entropy, keeping energy fixed, or from minimizing the energy, keeping the entropy fixed. For the glassy state, which is non-homogeneous and out of equilibrium, one can minimize the mean energy, keeping both entropies $`S_1`$ and $`S_2`$ fixed (somewhat similar to the microcanonical approach). Following the standard method we should minimize, with respect to $`P(x_2)`$ and $`P(x_1|x_2)`$, the Lagrange function $`={\displaystyle dx_1dx_2P(x_1,x_2)H}+T_2{\displaystyle dx_2P(x_2)\mathrm{ln}P(x_2)}`$ (10) $`+T_1{\displaystyle dx_2P(x_2)dx_1P(x_1|x_2)\mathrm{ln}P(x_1|x_2)},`$ (11) where $`T_1`$ and $`T_2`$ are Lagrange multipliers, and normalize the solutions. We then recover Eqs. (3), (5) but now on the basis of more general variational principle. This clearly demonstrates the conceptual difference compared to the usual (local-equillibrium) thermodynamics. (iii) Due to a difference between $`T_1`$ and $`T_2`$ there is constant current of heat through the system. This implies a constant production of entropy and dissipation of energy. We investigate these effects taking into account possible $`\gamma ^2`$ corrections. The Fokker-Planck equation which corresponds to Eqs. (1) reads $`_tP(x_1,x_2;t)+{\displaystyle \underset{i=1}{\overset{2}{}}}_iJ_i=0,`$ (12) $`\mathrm{\Gamma }_iJ_i=P(x_1,x_2;t)_iH+T_i_iP(x_1,x_2;t)`$ (13) where $`J_1`$, $`J_2`$ are the currents of probability. The stationary probability distribution can be expressed as $$P_1(x_1,x_2)=P_0(x_1,x_2)(1\gamma A(x_1,x_2))+𝒪(\gamma ^2),$$ (14) The boundary conditions are, as usual, that $`P(x_1,x_2)`$ and its derivatives vanish at infinity. $`A`$ is obtained from the stationarity condition $`_tP=0`$, taking into account the orthogonality condition $`𝑑x_1𝑑x_2AP_0=0`$, and consistency with $`𝒪(\gamma ^2)`$ terms. The general expression for $`A`$ is rather lengthy, but for a concrete model it is given in Eq. (25). In the first order of $`\gamma `$ the steady currents are given by $$J_1=\gamma \frac{T_1}{\mathrm{\Gamma }_1}P_0_1A,J_2=\gamma \frac{T_1T_2}{T_1\mathrm{\Gamma }_1}P_0\delta F_2,$$ (15) Notice that for $`J_2`$ the object $`A`$ is not needed, but only $$\delta F_2=_2H+dyP_0(y|x_2)_2H(y,x_2),$$ (16) being the difference between the force acting on the second subsystem and its conventional mean value obtained by averaging over the fast degree of freedom. Therefore some further results can be derived without knowledge of $`A`$, though it is needed for consistency checks and $`\gamma ^2`$-corrections. 
The change of total entropy reads $$\mathrm{¯}\mathrm{d}S_{tot}=\mathrm{d}S+\mathrm{¯}\mathrm{d}S_{b,1}+\mathrm{¯}\mathrm{d}S_{b,2}=\mathrm{d}S\beta _1\mathrm{¯}\mathrm{d}_1Q\beta _2\mathrm{¯}\mathrm{d}_2Q$$ (17) where $`S=S_1+S_2`$ is the entropy of the system defined by (6), $`S_{b,1}`$, $`S_{b,2}`$ are the entropies of the corresponding thermal baths, and $`\mathrm{¯}\mathrm{d}_1Q`$, $`\mathrm{¯}\mathrm{d}_2Q`$ are the amounts of heat obtained by the system from the thermal baths. Of course, from the conservation of energy we have $`\mathrm{¯}\mathrm{d}_iQ+\mathrm{¯}\mathrm{d}Q_{bath,i}=0`$, while $`\mathrm{¯}\mathrm{d}Q_{b,i}=T_i\mathrm{¯}\mathrm{d}S_{b,i}`$ holds because the baths are in equilibrium. Further, the expression $$\dot{Q}_i\frac{\mathrm{¯}\mathrm{d}_iQ}{dt}=dx_1dx_2H(x_1,x_2)_iJ_i$$ (18) can be obtained from (12). The entropy and the mean energy of the stationary state are constant: $`\dot{S}=0`$, $`\dot{Q}_1+\dot{Q}_2=0`$. Nevertheless, there exists a constant-rate transfer of entropy to the outside world (the thermal baths), and a stationary flux of heat through the system: $$\dot{S}_{tot}=(\beta _1\beta _2)\dot{Q}_2.$$ (19) If the system is in a non-equilibrium steady state then work should be done to keep it there. It is just the work needed for creating macroscopic currents inside the system. To illustrate this thesis, we can employ relation (9) for constant $`T_1`$, $`T_2`$, divide it by $`\mathrm{d}t`$ and write it as $`\dot{F}=\dot{W}\dot{\mathrm{\Pi }}`$. The positive quantity $`\dot{\mathrm{\Pi }}`$ is the energy dissipated per unit of time. Using Eq. (12) we get $`\dot{F}=(T_1T_2){\displaystyle dx_1dx_2J_2_2\mathrm{ln}P(x_1|x_2)}`$ (20) $`{\displaystyle dx_1dx_2P(x_1,x_2)\underset{i=1}{\overset{2}{}}\frac{1}{\mathrm{\Gamma }_i}(_iH+T_i_i\mathrm{ln}P)^2}`$ (21) The last term in the right-hand side is nothing else but the energy dissipated per unit of time; it can be written alternatively as the sum of energy dissipation driven by the corresponding thermal baths: $`\dot{\mathrm{\Pi }}=_{i=1}^2(T_i\mathrm{¯}\mathrm{d}_iS\mathrm{¯}\mathrm{d}_iQ)/\mathrm{d}t`$ (recall that $`\mathrm{¯}\mathrm{d}_iQ`$ is the heat obtained from the thermal bath $`i`$, and $`\mathrm{¯}\mathrm{d}_iS`$ is the change of system’s entropy induced by this thermal bath). The first term in the right-hand side of Eq. (20) should be associated with the performed work. In the stationary state the free energy is constant, and the dissipated energy and the performed work are equal. Using Eqs. (15,16) we get $`\dot{S}_{tot}=\gamma {\displaystyle \frac{\kappa }{T_2\mathrm{\Gamma }_1}}[\delta F_2]^2_1\gamma ^2{\displaystyle \frac{\kappa }{\mathrm{\Gamma }_1}}\delta F_2_2A_0,`$ (22) $`\dot{\mathrm{\Pi }}=\gamma {\displaystyle \frac{\kappa }{\mathrm{\Gamma }_1}}[\delta F_2]^2_1\gamma ^2{\displaystyle \frac{\kappa (2T_2T_1)}{\mathrm{\Gamma }_1}}\delta F_2_2A_0,`$ (23) where $`\kappa =(T_1T_2)/T_1`$, and $`\mathrm{}_{0(1)}`$ means averaging by the distribution $`P_{0(1)}`$. We observe the following deceivingly simple relation, valid to leading order in $`\gamma `$, $$\dot{\mathrm{\Pi }}=T_2\dot{S}_{tot}+𝒪(\gamma ^2)$$ (24) For a usual non-stationary system tending to equilibrium we have the following relation between entropy production and energy dissipation: $`\dot{\mathrm{\Pi }}=T\dot{S}_{tot}`$, where $`T`$ is the temperature of the unique thermal bath. On the other hand, Eq. (24) reflects degradation of the energy in the stationary state. 
The distinguished role of $`T_2`$ is connected with conservation of detailed balance for small time-scales (see Eq.(3)). Indeed, both entropy production and energy dissipation are small on the characteristic times of the $`x_1`$-variable (22). This equation also shows that when $`T_2`$ is close to zero, the energy dissipation (but not the entropy production) looses its leading term. At least in this limit the $`\gamma ^2`$-correction to $`\dot{S}_{tot}`$ is negative. Let us apply the obtained general results to a simple toy model. We consider a pair of weakly-interacting oscillators with coordinates $`x_1`$, $`x_2`$ and Hamiltonian: $`H=ax_1^2/2+ax_2^2/2+gx_1^2x_2^2`$, where $`a>0`$, $`g>0`$. Very similar models are applied to describe an oscillator with random frequency or some electrical circuits . For simplicity we shall discuss the model keeping only the first non-vanishing order in the small parameter $`g`$. The stationary distribution has the form (14), with $$A=\frac{g(T_1T_2)}{a^2}(1a\beta _2x_2^2)(1a\beta _1x_1^2)$$ (25) After some calculations we get from (15): $$\dot{S}_{tot}=\frac{8g^2}{\mathrm{\Gamma }_1}\frac{(T_1T_2)^2}{a^3}(\gamma \gamma ^2),$$ (26) $$\dot{\mathrm{\Pi }}=\frac{8g^2}{\mathrm{\Gamma }_1}\frac{(T_1T_2)^2}{a^3}(\gamma T_2+\gamma ^2(T_12T_2))$$ (27) We see that in this model the $`\gamma ^2`$ correction to $`\dot{S}_{tot}`$ is always negative. For $`\dot{\mathrm{\Pi }}`$ it is only the case if $`T_1<2T_2`$. We have thus provided a concrete example of our general results. (iv) Let us now discuss the Onsager relations concerning heat transfer in the glassy state. These fundamental and experimentally testable relations were proposed by Onsager to describe transport in weakly non-equilibrium systems (the linear case) . Later they were generalized to the non-linear regime. Following standard arguments the Onsager relation reads in our case $$_{\beta _1}\dot{Q}_2=_{\beta _2}\dot{Q}_1,$$ (28) where $`\dot{Q}_i`$, given by Eq. (18), but see also Eq. (19), is the heat flux from the thermal bath $`i`$. In the stationary case one has $`\dot{Q}_1+\dot{Q}_2=0`$. For our toy model we get from Eqs. (15,25) $$\dot{Q}_2=\gamma \frac{g^2}{\mathrm{\Gamma }_1a^3}\frac{\beta _1\beta _2}{\beta _1^2\beta _2^2}+𝒪(\gamma ^2).$$ (29) The linear case corresponds to Eq. (28) with $`\beta _1\beta _2`$. Indeed, then the fluxes can be written in more familiar form: $`\dot{Q}_i=_jL_{ij}\mathrm{\Delta }\beta _j`$, where $`\mathrm{\Delta }\beta _i=\beta _i\beta _0`$ is a small deviation of the inverse temperature $`\beta _i`$ from its equilibrium value $`\beta _0`$, and the $`L_{ij}`$ depend only on $`\beta _0`$ but not on $`\beta _{1,2}`$ separately. In that case the relation (28) takes the form: $`L_{12}=L_{21}`$ ($`=8g^2T_0^4/\mathrm{\Gamma }_2a^3`$ in our toy model). Let us stress that this form of Onsager relations is applicable only for the linear case , in contrast to the more general relation (28). In fact the general validity of Eq. (28) in the linear regime is a fundamental theorem supported by very general arguments. It means that any breaking of Eq. (28) can be connected only with $`T_1T_2`$. The converse is not true: there are physically important cases when the Onsager relations hold for $`T_1T_2`$ . Thus checking these relations for our concrete class of non-equilibrium systems seems very important. From Eqs. 
(15), (16), (18) we obtain to leading order in $`\gamma `$ $`_{\beta _1}\dot{Q}_2_{\beta _2}\dot{Q}_1=`$ (30) $`\gamma {\displaystyle \frac{\beta _1\beta _2}{\mathrm{\Gamma }_1}}\{_{\beta _1}(T_2(\delta F_2)^2)+_{\beta _2}(T_2(\delta F_2)^2)\}`$ (31) In the linear regime with $`\beta _1\beta _2`$ the Onsager relation is satisfied automatically. However, for the considered glassy system it is the exceptional case, and (28) cannot be true in general. Indeed, for our toy model we get $$_{\beta _1}\dot{Q}_2_{\beta _2}\dot{Q}_1=\gamma \frac{16g^2}{\mathrm{\Gamma }_1a^3}T_1T_2(T_1^2T_2^2).$$ (32) implying a violation of the Onsager relation for any $`T_1T_2`$, because, due to (29), the right hand side of (32) has the same order of magnitude as the individual terms in the left hand side. In this sense the violation is strong. One can give also general, model-independent arguments supporting breakdown of (28). Indeed, if it were to be valid, Eq. (30) says that we should have $`(\delta F_2)^2=\beta _2f(\beta _1\beta _2)`$ for all $`\beta _1`$, $`\beta _2`$, where $`f`$ is some positive function. Such a form cannot hold for a trivial reason: taking the limit $`\beta _20`$ one obtains zero in the right-hand side, while the left-hand side typically diverges, or at least stays finite and non-zero including non-typical cases. Our prediction for the breakdown of the Onsager relations in the glassy state should be testable experimentally. In glasses $`T_2`$ will correspond to effectively generated temperature -, and $`T_1`$ is the temperature of the environment. By changing the cooling rate, also for these systems Eq. (28) constitutes a relation between measurable quantities. A breaking of the Onsager relations can be investigated also in this more realistic setup. The details will be reported elsewhere. In conclusion, we have considered a stochastic model which contains all necessary ingredients of steady glassy behavior. In spite of the fact that such a system can be very far from equilibrium, it allows a thermodynamic description. Generalizing the usual equilibrium thermodynamics, we show that the glassy one can be obtained by minimization of the energy, keeping all entropies fixed. Universal relations, Eqs. (22, 24), are obtained between entropy production and energy dissipation. Energy dissipation (in contrast to entropy production) is closely connected to the fluctuations of slow degrees of freedom, and it looses its leading term when the corresponding temperature is close to zero. We discuss cases where the corrections arising from a finite but small ratio of characteristic times lead to a decrease of the entropy production and/or energy dissipation. We show that the nonlinear Onsager relation for heat transfer in the steady glassy state is always broken, reflecting its strongly non-equilibrium character. As the effect is of order unity, this breaking should be testable experimentally. It is also reminiscent of the breakdown of the Maxwell and Ehrenfest relations in the glassy state . A.E. A. is grateful to FOM (The Netherlands) for financial support.
no-problem/9907/quant-ph9907038.html
ar5iv
text
# 1 Introduction ## 1 Introduction Although many convincing EPR (Einstein–Podolsky–Rosen) experiments violating the local hidden variable models and various forms of Bell inequalities were performed in the past thirty years, an experiment involving no supplementary assumptions—usually called a loophole–free experiment—is still waiting to be carried out. Until recently loophole–free experiments were not considered because they require very high detection efficiency and all experiments carried out till now have had an efficiency under 10% . On the other hand, the most important supplementary assumption, the no enhancement assumption and the corresponding postselection method were considered to be very plausible. Then Santos devised \[22–25\] local hidden–variable models which violate not only the low detection loophole but also the no enhancement assumption as well as post–selection loophole, and these models, as well as considerable improvements in techniques, in particular, detector efficiencies, resulted in an interest into loophole–free experiments. In the past two years several sophisticated proposals appeared which rely on the recent improvement in the detection technology and meticulous elaborations of all experimental details. The first three use maximal superpostions and require detection efficiency of at least 83% and the other two use nonmaximal superpositions relying on recent results which require only 67% detection efficiency for them. All proposals are very demanding and at the same time all but the last proposal invoke a postselection which is also a supplementary assumption. In this paper we analyze several supplementary assumptions and propose a feasible method of doing a loophole–free Bell experiment which requires only 67% detection efficiency, can work with a realistic visibility, and uses a preselection method for preparing non–maximally entangled photon pairs. The preselection method is particularly attractive for its ability to employ solid angles of signal and idler photons (in a downconversion process in a nonlinear crystal) which differ up to five times from each other. This enables a tremendous increase in detection efficiency—from 10% to over 80%—as elaborated below. ## 2 Bell inequalities and their supplementary assumptions As we mentioned in the introduction the recent revival of the Bell issue has been partly triggered by new types of local hidden variables devised by Santos which made all experiments carried out so far inconclusive. However, from the very first Bell experiments it was clear that one day a conclusive loophole–free experiment must be carried out. At the time, such experiments were far from being feasible and as a consequence all experiments so far relied on coincidental detections and on an assumption that a subset of a total set of events would give the same statistics as the set itself. In other words no real experiment so far dealt with proper probabilities, i.e., with ratios of detected events to copies of the physical system initially prepared. Let us see this point in some more details, first, for the Clauser–Horne form of the Bell inequality, and then for Hardy’s equality. We consider a composite system containing two subsytems in a (non)maximal superposition. 
When a property is being measured on subsystem $`i`$ by detector D<sub>i</sub>, which has got an adjustable parameter $`a_i`$ corresponding to the property, the probability of an independent firing of one of the two detectors is $`p(a_i)=N(a_i)/N`$ ($`i=1,2`$) and of simultaneous triggering of both detectors is $`p(a_1,a_2)=N(a_1,a_2)/N`$, where $`N(a_i)`$ is the number of counts at D<sub>i</sub>, $`N(a_1,a_2)`$ is the number of coincident counts, and $`N`$ is the total number of the systems the source emits. Let a classical hidden state $`\lambda `$ determine the individual counts and the probabilities of individual subsystems triggering the detectors: $`p(\lambda ,a_i)`$ and $`p(\lambda ,a_1,a_2)`$. These probabilities are connected with the above introduced long run probabilities by means of: $`p(a_i)=_\mathrm{\Gamma }\rho (\lambda )p(\lambda ,a_i)𝑑\lambda `$ ($`i=1,2`$) and $`p(a_1,a_2)=_\mathrm{\Gamma }\rho (\lambda )p(\lambda ,a_1,a_2)𝑑\lambda `$, where $`\mathrm{\Gamma }`$ is the space of states $`\lambda `$ and $`\rho (\lambda )`$ is the normalized probability density over states $`\lambda `$. The locality condition—which assumes that the probability of one of the detectors being triggered does not depend on whether the other one has been triggered or not—can be formalized as $`p(\lambda ,a_1,a_2)=p(\lambda ,a_1)p(\lambda ,a_2)`$. Clauser–Horne’s form of the Bell inequality reads: $`A_1A_2p(\lambda ,a_1)p(\lambda ,a_2)p(\lambda ,a_1)`$ $`p(\lambda ,a_2^{})+p(\lambda ,a_1^{})p(\lambda ,a_2^{})+p(\lambda ,a_1^{})p(\lambda ,a_2)`$ (1) $`A_2p(\lambda ,a_1^{})A_1p(\lambda ,a_2)0,`$ where $`0p(\lambda ,a_i)A_i`$. The experiments carried out so far invoked the no–enhancement assumption $`A_i=p(\lambda ,\mathrm{})`$ (where $`\mathrm{}`$ means that a filter for a property corresponding to parameters $`a_i`$ is switched off), wherewith Eq. (1) after multiplication by $`\rho (\lambda )`$ and integration over $`\lambda `$ yields $$1\frac{p(a_1,a_2)}{p(\mathrm{},\mathrm{})}\frac{p(a_1,a_2^{})}{p(\mathrm{},\mathrm{})}+\frac{p(a_1^{},a_2^{})}{p(\mathrm{},\mathrm{})}+\frac{p(a_1^{},a_2)}{p(\mathrm{},\mathrm{})}\frac{p(a_1^{},\mathrm{})}{p(\mathrm{},\mathrm{})}\frac{p(\mathrm{},a_2)}{p(\mathrm{},\mathrm{})}0$$ (2) Thus—because of the low detection efficiency—all the experiments performed till now measured nothing but the above ratios. Then Santos devised hidden variables based on $`p(\lambda ,a_i)>p(\lambda ,\mathrm{})`$ and left us only with the loophole–free option $`A_1=A_2=1`$ wherewith Eq. (1) yields $`1p(a_1,a_2)p(a_1,a_2^{})+p(a_1^{},a_2^{})+p(a_1^{},a_2)p(a_1^{})p(a_2)0.`$ (3) The above cited loophole–free proposals used the right inequality which requires 83% detection efficiency for maximal superpositions and 67% detection efficiency for nonmaximal ones. The left inequality always requires 83% detection efficiency but it makes clear that if we want a loophole–free experiment we must always either register or preselect practically all the systems the source emits in order to obtain proper probabilities, i.e., ratios of detected events to the number of emitted systems. An excellent test which immediately shows whether a particular experiment can be loophole–free is to see whether we can obtain $`p(a_1)p(a_1,\pm )p(a_1,\mathrm{})`$, where ‘$`\pm `$’ means that a two–channel filter (corresponding to a property $`a`$ and property non–$`a`$), e.g., a birefringent prism, is used; ‘$`\mathrm{}`$’ means that the filter has been taken out altogether. 
Unfortunately all experiments carried out so far have $`p(a_1)>10p(a_1,\mathrm{})`$. This applies to other approaches as well. E.g., Ardehali’s additional assumptions are weaker than the no enhancement assumption but that does not help us in obtaining the proper probabilities. The latter is also true for the Hardy’s equality experiment recently carried out by Torgerson, Branning, Monken, and Mandel although they misleadingly claim that their “method does not depend on the use of detectors with high or even known quantum efficiencies.” Let us look at the experiment in some detail. Torgerson, Branning, Monken, and Mandel argue, in effect, as follows. In a two–photon polarization coincidence experiment at an asymmetric beam splitter one can—assuming 100% efficiency—pick up the orientation angles of the polarizers so as to have $`P(\theta _1,\theta _2^{})/P(\theta _1)=1`$ and $`P(\theta _1^{},\theta _2)/P(\theta _2)=1`$, i.e., polarization $`\theta _1`$ must occur together with $`\theta _2^{}`$ and $`\theta _2`$ with $`\theta _1^{}`$. Classically, if $`\theta _1`$ and $`\theta _2`$ sometimes occur together, then $`\theta _2^{}`$ and $`\theta _1^{}`$ should also sometimes occur together. In a quantum measurement though, for a particular reflectivity of the beam splitter one can ideally obtain $`P(\theta _1,\theta _2)>0`$ together with $`P(\theta _1^{},\theta _2^{})=0`$ which is a contradiction for a classical reasoning. When detection efficiency is far bellow 100% one can assume that only coincidence data are relevant and substitute $`P(\theta _1,\theta _2^{})+P(\theta _1,\theta _2)`$ for $`P(\theta _1)`$. If we define $`P(\theta _1,\theta _2)=N(\theta _1,\theta _2)/[N(\theta _1,\theta _2^{})+N(\theta _1,\theta _2^{})+N(\theta _1^{},\theta _2^{})+N(\theta _1^{},\theta _2^{})]`$, where $`N`$’s are two–photon coincidence detections, for the considered experiment we arrive at 98% efficiency. But, in doing so, we disregard first, that $`2R(1R)`$ percent (44% for the chosen $`R`$) of photons emerge from the same sides of the beam splitter, and secondly, that for the chosen source (LiIO<sub>3</sub> type–I downconverter) one has $`P(\theta _1)>20[P(\theta _1,\theta _2^{})+P(\theta _1,\theta _2)]`$. Thus, we end up not with $`P(\theta _1,\theta _2^{})/P(\theta _1)1`$ but with $`P(\theta _1,\theta _2^{})/P(\theta _1)=0.02`$. In other words, the experiment is not a candidate for the loophole–free type of Bell experiments although it is one of the most convincing coincidence counts experiments carried out so far. ## 3 Experiment A schematic representation of the experiment is shown in Fig. 1. Two independent type–II crystals (BBO) act as two independent sources of two independent singlet pairs. Two photons from each pair interfere at an asymmetrical beam splitter, BS and whenever they emerge from its opposite sides, pass through polarizers P1’ and P2’, and fire the detectors D1’ and D2’, they open the gate (activate the Pockels cells) which preselects the other two photons into a nonmaximal singlet state. We achieve the high efficiency (over 80%) by choosing optimally narrow solid angles determined by the openings of D1’ and D2’ and five times wider solid angles determined by D1 and D2. 
An ultrashort (subpicosecond) laser beam of frequency $\omega_0$, split by a beam splitter, simultaneously pumps the two nonlinear type-II crystals, producing in each of them intersecting cones of mutually perpendicularly polarized signal and idler photons of frequency $\omega_0/2$, as shown in Fig. 2. The idler and signal photon pairs coming out of the crystals do not have definite phases and therefore cannot exhibit second-order interference. However, they do appear entangled along the cone intersection lines, because one cannot know from which cone each photon comes. By an appropriate preparation one can entangle them in a singlet-like state. Their state is therefore

$$|\Psi\rangle=\frac{1}{\sqrt{2}}\left(|1_x\rangle_1|1_y\rangle_{1'}-|1_y\rangle_1|1_x\rangle_{1'}\right)\otimes\frac{1}{\sqrt{2}}\left(|1_x\rangle_2|1_y\rangle_{2'}-|1_y\rangle_2|1_x\rangle_{2'}\right). \qquad (4)$$

The outgoing electric-field operators describing photons which pass through beam splitter BS and through polarizers P1' and P2' (oriented at angles $\theta_1'$ and $\theta_2'$, respectively) and are detected by detectors D1' and D2' thus read (see Fig. 3)

$$\hat{E}_{1'}=\left(\hat{a}_{1'x}t_x\cos\theta_1'+\hat{a}_{1'y}t_y\sin\theta_1'\right)e^{i\mathbf{k}_{1'}\cdot\mathbf{r}_{1'}-i\omega_{1'}(t-t_{1'}-\tau_{1'})}+i\left(\hat{a}_{2'x}r_x\cos\theta_1'+\hat{a}_{2'y}r_y\sin\theta_1'\right)e^{i\tilde{\mathbf{k}}_{2'}\cdot\mathbf{r}_{1'}-i\omega_{2'}(t-t_{2'}-\tau_{1'})}, \qquad (5)$$

where $t_x^2$, $t_y^2$ are the transmittances, $r_x^2$, $r_y^2$ are the reflectances, $t_j$ is the time delay after which photon $j$ reaches BS, $\tau_{1'}$ is the time delay between BS and D1', and $\omega_j$ is the frequency of photon $j$. The annihilation operators act as follows: $\hat{a}_{1'x}|1_x\rangle_{1'}=|0_x\rangle_{1'}$, $\hat{a}_{1'x}|0_x\rangle_{1'}=0$. $\hat{E}_{2'}$ is defined analogously. The operators describing photons which pass through polarizers P1 and P2 (oriented at angles $\theta_1$ and $\theta_2$, respectively) and through the Pockels cells, and are detected by detectors D1 and D2, read

$$\hat{E}_1=(\hat{a}_{1x}\cos\theta_1+\hat{a}_{1y}\sin\theta_1)\,e^{-i\omega_1 t_1}. \qquad (6)$$

$\hat{E}_2$ is defined analogously. The probability of detecting all four photons by detectors D1, D2, D1', and D2' is thus

$$P(\theta_1',\theta_2',\theta_1,\theta_2)=\eta^2\,\langle\Psi|\hat{E}_{2'}^{\dagger}\hat{E}_{1'}^{\dagger}\hat{E}_2^{\dagger}\hat{E}_1^{\dagger}\hat{E}_1\hat{E}_2\hat{E}_{1'}\hat{E}_{2'}|\Psi\rangle=\frac{\eta^2}{4}\left(A^2+B^2-2AB\cos\varphi\right), \qquad (7)$$

where $\eta$ is the detection efficiency; $A=Q(t)_{11'}Q(t)_{22'}$ and $B=Q(r)_{12'}Q(r)_{21'}$; here $Q(q)_{ij}=q_x\sin\theta_i\cos\theta_j-q_y\cos\theta_i\sin\theta_j$; $\varphi=(\tilde{\mathbf{k}}_2-\mathbf{k}_1)\cdot\mathbf{r}_1+(\tilde{\mathbf{k}}_1-\mathbf{k}_2)\cdot\mathbf{r}_2=2\pi(z_2-z_1)/L$, where $L$ is the spacing of the interference fringes (see Fig. 3). $\varphi$ can be changed by moving the detectors transversally to the incident beams. Data for this expression are collected by detectors D1' and D2', whose openings are not points but have a certain width $\Delta z$. Therefore, in order to obtain a realistic probability we integrate Eq.
(7) over $z_1$ and $z_2$ across $\Delta z$ to obtain

$$P(\theta_1',\theta_2',\theta_1,\theta_2)=\frac{\eta^2}{4(\Delta z)^2}\int_{z_1-\frac{\Delta z}{2}}^{z_1+\frac{\Delta z}{2}}\int_{z_2-\frac{\Delta z}{2}}^{z_2+\frac{\Delta z}{2}}\left[A^2+B^2-2AB\cos\frac{2\pi(z_2-z_1)}{L}\right]dz_1\,dz_2=\frac{\eta^2}{4}\left(A^2+B^2-2vAB\cos\varphi\right), \qquad (8)$$

where $v=\left[\sin(\pi\Delta z/L)/(\pi\Delta z/L)\right]^2$ is the visibility of the coincidence counting. We assume near-normal incidence at BS, so as to have $r_x^2=r_y^2=R$ and $t_x^2=t_y^2=T=1-R$. Next we assume a symmetric position of detectors D1' and D2' with respect to BS and the photon paths from the middle of the crystals, so as to obtain $\varphi=0$. Representing photons by a Gaussian amplitude distribution of energies, we have shown in Ref. that the visibility is reduced when the condition $\omega_{1'}=\omega_{2'}$ is not perfectly matched and when the coincidence detection time is not much smaller than the coherence time. We meet the latter demand by using a subpicosecond laser pump beam and the former by reducing the size of the detector (D1' and D2') pinholes. By reducing the size of the detector pinholes we reduce the number of events detected by D1 and D2, but, on the other hand, this enables us to increase the visibility of the Bell pairs at D1 and D2 by sizing the pinholes $ph$ (see Fig. 2) so as to make the solid angles five times wider than those of the pinholes of D1' and D2'. (Cf. Joobeur, Saleh, and Teich.) Alternatively, we can put $\omega_0/2$ filters ($\omega_0$ is the frequency of the pumping beam) in front of detectors D1 and D2 and drop the pinholes $ph$ altogether. Let us now see in which way, and when, all the photons are entangled. For $R=T=1/2$ and $v=1$ the probability of Eq. (8) reads

$$P(\theta_1',\theta_2',\theta_1,\theta_2)=\frac{1}{4}(A-B)^2=\frac{1}{16}\sin^2(\theta_1'-\theta_2')\sin^2(\theta_1-\theta_2), \qquad (9)$$

and if we take away polarizers P1' and P2' the following maximal entanglement survives: $P(\infty,\infty,\theta_1,\theta_2)=\frac{1}{8}\sin^2(\theta_1-\theta_2)$. For an asymmetrical BS, however, if we take away polarizers P1' and P2' we obtain only a partially entangled state:

$$P(\infty,\infty,\theta_1,\theta_2)=\frac{1}{4}\left[(T-R)^2+2TR\sin^2(\theta_1-\theta_2)\right]. \qquad (10)$$

Thus, in order to obtain a (non)maximally entangled state for an asymmetrical beam splitter it is necessary to orient polarizers P1' and P2' so as to obtain a corresponding "entangled" probability given by Eq. (7). For example, for $\varphi=0^{\circ}$, $\theta_1'=90^{\circ}$, and $\theta_2'=0^{\circ}$, Eq. (7) projects out the following (non)maximal singlet-like probability:

$$P(\theta_1,\theta_2)=\eta^2 s\left(\cos^2\theta_1\sin^2\theta_2-2v\rho\cos\theta_1\sin\theta_1\cos\theta_2\sin\theta_2+\rho^2\cos^2\theta_2\sin^2\theta_1\right)\equiv\eta^2\,p(\theta_1,\theta_2), \qquad (11)$$

where $s=T^2/(R^2+T^2)$ and $\rho=R/T$, and where we multiplied Eq.
(8) by 4 to account for the other three possible coincidence detections at BS [$(\theta_1',\theta_2'^{\perp})$, $(\theta_1'^{\perp},\theta_2')$, and $(\theta_1'^{\perp},\theta_2'^{\perp})$] and by $(R^2+T^2)^{-1}$ to account for the photons emerging from the same side of BS. The singles probability of detecting a photon by D1 is

$$P(\theta_1)=\eta s\,(\cos^2\theta_1+\rho^2\sin^2\theta_1)\equiv\eta\,p(\theta_1). \qquad (12)$$

Analogously, the singles probability of detecting a photon by D2 is

$$P(\theta_2)=\eta s\,(\sin^2\theta_2+\rho^2\cos^2\theta_2)\equiv\eta\,p(\theta_2). \qquad (13)$$

Introducing the probabilities obtained above into the Clauser–Horne inequality (3), we obtain the following minimal efficiency for its violation:

$$\eta=\frac{p(\theta_1')+p(\theta_2)}{p(\theta_1,\theta_2)-p(\theta_1,\theta_2')+p(\theta_1',\theta_2')+p(\theta_1',\theta_2)}. \qquad (14)$$

This efficiency is a function of the visibility $v$, and by looking at Eqs. (11), (12), and (13) we see that for each particular $v$ a different set of angles minimizes it. A computer optimization of the angles—presented in Fig. 4—shows that the lower the reflectivity, the lower the minimal detection efficiency. Also, we see the rather unexpected property that a low visibility does not have a significant impact on the violation of the Bell inequality. For example, with 70% visibility and 0.2 reflectivity of the beam splitter we obtain a violation with a lower detection efficiency than with 100% visibility and 0.5 reflectivity ($\rho=1$). A similar calculation can be carried out for the Hardy equalities given at the end of Sec. 2. It can be shown that the lowest possible $R$, giving only 5–10 standard deviations, should be taken, and not the one which gives the greatest $P(\theta_1,\theta_2)>0$, again because the impact of a low visibility is lowest when the beam splitter is most asymmetrical. Thus our preselection scheme can be used for a loophole-free "Hardy experiment" as well.

## 4 Conclusion

Our elaboration shows that the recently found four-photon entanglement can be used for a realization of loophole-free Bell experiments. We propose a set-up which uses two simultaneous type-II downconversions into two singlet-like photon pairs. By combining two photons, one from each such singlet-like pair, at an asymmetrical beam splitter and detecting them in coincidence, we preselect the other two completely independent photons into another singlet-like state—let us call them a 'Bell pair'. (See Figs. 1 and 3.) Our calculations show that no time or space windows are imposed on the Bell pairs by the preselection procedure, and this means that we can collect the photons within an optimal solid angle. If we take their solid angles five times wider than the angles of the preselector photons (determined by the openings of detectors D1' and D2'—see Fig. 1), then we can collect all Bell pairs and at the same time keep the probability of "third-party" counts negligible. For our set-up we can use the result presented in Fig. 4, which enables a conclusive violation of Bell's inequalities with a detection efficiency lower than 80% even when the visibility is at the same time under 70%. If we, however, agree that it is physically plausible to take into account only those Bell pairs which are preselected by actually recorded detections at the beam splitter (firing of D1' and D2'), then we can eliminate the low-visibility impact altogether.
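As a concrete check of the Fig. 4 behaviour invoked above, the following sketch (Python; a random multi-start optimization is used here in place of whatever routine produced Fig. 4) implements Eqs. (11)–(14) and minimizes the critical efficiency over the four analyzer angles for given reflectivity $R$ and visibility $v$; at $R=0.5$, $v=1$ it recovers the familiar threshold $\approx 0.83$, and lowering $R$ lowers the required efficiency.

```python
import numpy as np
from scipy.optimize import minimize

def eta_min(R, v, trials=300, rng=np.random.default_rng(0)):
    """Minimal detection efficiency from Eq. (14), minimized over the
    analyzer angles, using the model probabilities of Eqs. (11)-(13)."""
    T = 1.0 - R
    s, rho = T**2 / (R**2 + T**2), R / T

    def pair(t1, t2):   # p(theta_1, theta_2) of Eq. (11), eta stripped off
        return s * (np.cos(t1)**2 * np.sin(t2)**2
                    - 2 * v * rho * np.cos(t1) * np.sin(t1)
                          * np.cos(t2) * np.sin(t2)
                    + rho**2 * np.cos(t2)**2 * np.sin(t1)**2)

    def single1(t1):    # p(theta_1) of Eq. (12)
        return s * (np.cos(t1)**2 + rho**2 * np.sin(t1)**2)

    def single2(t2):    # p(theta_2) of Eq. (13)
        return s * (np.sin(t2)**2 + rho**2 * np.cos(t2)**2)

    def eta(angles):    # Eq. (14)
        t1, t2, t1p, t2p = angles
        denom = pair(t1, t2) - pair(t1, t2p) + pair(t1p, t2p) + pair(t1p, t2)
        if denom <= 1e-9:           # these angles give no violation
            return 10.0
        return (single1(t1p) + single2(t2)) / denom

    best = 10.0
    for _ in range(trials):
        res = minimize(eta, rng.uniform(0, np.pi, 4), method="Nelder-Mead")
        best = min(best, res.fun)
    return best

for R, v in [(0.5, 1.0), (0.2, 1.0), (0.2, 0.7), (0.1, 0.7)]:
    print(f"R = {R:.1f}, v = {v:.1f}:  minimal eta = {eta_min(R, v):.3f}")
```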
In that case, counting only the preselected pairs, we can set $v=1$ and, for a different set of angles, obtain a conclusive violation of Bell's inequalities and Hardy's equalities with a still lower (under 70%) detection efficiency. In the end, we stress that the whole device can also be used for delivering ready-made Bell pairs in quantum cryptography and in quantum computation and communication.

## Acknowledgments

I acknowledge the support of the Alexander von Humboldt Foundation and the Ministry of Science of Croatia.

## References

Ardehali, M., Phys. Rev. A 49, R3143 (1994).
Clauser, J. F. and M. A. Horne, Phys. Rev. D 10, 526 (1974).
Clauser, J. F. and A. Shimony, Rep. Prog. Phys. 41, 1881 (1978).
Eberhard, P. H., Phys. Rev. A 47, R747 (1993).
Fry, E. S., T. Walther, and S. Li, Phys. Rev. A 52, 4381 (1995).
Garg, A. and N. D. Mermin, Phys. Rev. D 35, 3831 (1987).
Garuccio, A., Ann. N. Y. Acad. Sci. 755, 632 (1995).
Hardy, L., Phys. Rev. Lett. 71, 1665 (1993).
Home, D. and F. Selleri, Riv. Nuovo Cim. 14, No. 9 (1991).
Huelga, S. F., M. Ferrero, and E. Santos, Phys. Rev. A 51, 5008 (1995).
Jones, R. T. and E. G. Adelberger, Phys. Rev. Lett. 72, 267 (1994).
Joobeur, A., B. E. A. Saleh, and M. C. Teich, Phys. Rev. A 50, 3349 (1994).
Kwiat, P. G., P. H. Eberhard, A. M. Steinberg, and R. Y. Chiao, Phys. Rev. A 49, 3209 (1994).
Kwiat, P. G., K. Mattle, H. Weinfurter, and A. Zeilinger, Phys. Rev. Lett. 75, 4337 (1995).
Ou, Z. Y. and L. Mandel, Phys. Rev. Lett. 61, 50 (1988).
Pavičić, M., Phys. Rev. A 50, 3486 (1994).
Pavičić, M., J. Opt. Soc. Am. B 12, 821 (1995).
Pavičić, M., Phys. Lett. A 209, 255 (1995).
Pavičić, M., in Fourth International Conference on Squeezed States and Uncertainty Relations (NASA CP 3322, Washington, 1996), p. 325.
Pavičić, M. and J. Summhammer, Phys. Rev. Lett. 73, 3191 (1994).
Santos, E., Phys. Rev. Lett. 66, 1388 (1991).
Santos, E., Phys. Rev. Lett. 68, 2702 (1992).
Santos, E., Phys. Rev. A 46, 3646 (1992).
Santos, E., Phys. Lett. A 212, 10 (1996).
Torgerson, J., D. Branning, and L. Mandel, Appl. Phys. 60, 267 (1995).
Torgerson, J., D. Branning, C. H. Monken, and L. Mandel, Phys. Lett. A 204, 323 (1995).
# Neutrino Mass: theory, data and interpretation

## 1 Introduction

Since the early radiochemical experiments of Davis and collaborators, underground experiments have by now provided solid evidence for the solar and the atmospheric neutrino problems, the two milestones in the search for physics beyond the Standard Model (SM). Of particular importance has been the recent confirmation by the Super-Kamiokande collaboration of the atmospheric neutrino zenith-angle-dependent deficit, which has marked a turning point in our understanding of neutrinos, providing strong evidence for $\nu_\mu$ conversions. In addition to the neutrino data from underground experiments there is also some indication for neutrino oscillations from the LSND experiment. Neutrino conversions are naturally expected to take place if neutrinos are massive, as expected in most extensions of the Standard Model. The preferred theoretical origin of neutrino mass is lepton number violation, which typically also leads to lepton-number-violating processes such as neutrino-less double beta decay, so far unobserved. However, lepton flavour violating transitions may arise without neutrino masses in models with extra heavy leptons and in supergravity. Indeed the atmospheric neutrino anomaly can be explained in terms of flavour changing neutrino interactions, with no need for neutrino mass or mixing. Whether or not this mechanism resists the test of time, it may still remain as one of the ingredients of the final solution, although at the moment it is not required by the data. A possible signature of theories leading to FC interactions would be the existence of sizeable flavour non-conservation effects, such as $\mu \to e\gamma$ and $\mu$–$e$ conversion in nuclei, unaccompanied by neutrino-less double beta decay. In contrast to the intimate relationship between the latter and the non-zero Majorana mass of neutrinos due to the Black-Box theorem, there is no fundamental link between lepton flavour violation and neutrino mass. Barring such exotic mechanisms, reconciling LSND (and possibly hot dark matter; see below) with the data on solar and atmospheric neutrinos requires three mass scales. The simplest way is to invoke the existence of a light sterile neutrino. Out of the four neutrinos, two lie at the solar neutrino scale and the other two maximally-mixed neutrinos are at the HDM/LSND scale. The prototype models proposed in ref. enlarge the $SU(2)\otimes U(1)$ Higgs sector in such a way that neutrinos acquire mass radiatively, without unification nor seesaw. The LSND scale arises at one loop, while the solar and atmospheric scales come in at the two-loop level, thus accounting for the hierarchy. The lightness of the sterile neutrino, the nearly maximal atmospheric neutrino mixing, and the generation of the solar and atmospheric neutrino scales all result naturally from the assumed lepton-number symmetry and its breaking. Either $\nu_e$–$\nu_\tau$ conversions explain the solar data, with $\nu_\mu$–$\nu_s$ oscillations accounting for the atmospheric deficit, or else the rôles of $\nu_\tau$ and $\nu_s$ are reversed. These two basic schemes have distinct implications at future solar & atmospheric neutrino experiments with good sensitivity to neutral current neutrino interactions. Cosmology can also place restrictions on these four-neutrino schemes.

## 2 Mechanisms for Neutrino Mass

Why are neutrino masses so small compared to those of the charged fermions?
Most likely because neutrinos, being the only electrically neutral elementary fermions, are Majorana particles, the most fundamental kind of fermion. In this case the suppression of their mass could be associated with the breaking of lepton number symmetry at a very large energy scale within a unification approach, which can be implemented in many extensions of the SM. Alternatively, neutrino masses could arise from garden-variety weak-scale physics characterized by a scale $\sigma = \mathcal{O}(m_Z)$, where $\sigma$ denotes an $SU(2)\otimes U(1)$ singlet vacuum expectation value which owes its smallness to the symmetry enhancement which would result if $\sigma \to 0$ and $m_\nu \to 0$. One should realize, however, that the physics of neutrinos can be rather different in various gauge theories of neutrino mass, and that there is hardly any predictive power on masses and mixings, which should not come as a surprise, since the problem of mass in general is probably one of the deepest mysteries in present-day physics.

### 2.1 Unification or Seesaw Neutrino Masses

The observed violation of parity in the weak interaction may be a reflection of the spontaneous breaking of B-L symmetry in the context of left-right symmetric extensions such as the $SU(2)_L\otimes SU(2)_R\otimes U(1)$, $SU(4)\otimes SU(2)\otimes SU(2)$ or $SO(10)$ gauge groups. In this case the masses of the light neutrinos are obtained by diagonalizing the following mass matrix in the basis $\nu,\nu^c$

$$\left[\begin{array}{cc}M_L& D\\ D^T& M_R\end{array}\right] \qquad (1)$$

where $D$ is the standard $SU(2)\otimes U(1)$ breaking Dirac mass term and $M_R=M_R^T$ is the isosinglet Majorana mass that may arise from a 126 vacuum expectation value in $SO(10)$. The magnitude of the $M_L\,\nu\nu$ term is also suppressed by the left-right breaking scale, $M_L \propto 1/M_R$. In the seesaw approximation, one finds

$$M_{\nu\,\mathrm{eff}}=M_L-DM_R^{-1}D^T. \qquad (2)$$

As a result one is able to explain naturally the relative smallness of neutrino masses, since $m_\nu \propto 1/M_R$ (for instance, a Dirac mass $m_D \sim 100$ GeV combined with $M_R \sim 10^{15}$ GeV gives $m_\nu \sim m_D^2/M_R \sim 10^{-2}$ eV). Although $M_R$ is expected to be large, its magnitude heavily depends on the model and it may have different possible structures in flavour space (so-called textures). In general one cannot predict the corresponding light neutrino masses and mixings. In fact this freedom has been exploited in model building in order to account for an almost degenerate seesaw-induced neutrino mass spectrum. One virtue of the unification approach is that it may allow one to gain a deeper insight into the flavour problem. There have been interesting attempts at formulating supersymmetric unified schemes with flavour symmetries and texture zeros in the Yukawa couplings. In this context a challenge is to obtain the large lepton mixing now indicated by the atmospheric neutrino data.

### 2.2 Weak-Scale Neutrino Masses

Neutrinos may acquire mass from extra particles with masses $\mathcal{O}(m_Z)$ and therefore accessible to present experiments. There is a variety of such mechanisms, in which neutrinos acquire mass either at the tree level or radiatively. Let us look at some examples, starting with the tree-level case.

#### 2.2.1 Tree-level Neutrino Masses

Consider the following extension of the lepton sector of the $SU(2)\otimes U(1)$ theory: let us add a set of two 2-component isosinglet neutral fermions, denoted $\nu^c_i$ and $S_i$, $i=e,\mu$ or $\tau$, in each generation.
In this case one can consider the mass matrix (in the basis $\nu,\nu^c,S$)

$$\left[\begin{array}{ccc}0& D& 0\\ D^T& 0& M\\ 0& M^T& \mu \end{array}\right] \qquad (3)$$

The Majorana masses for the neutrinos are determined from

$$M_L=DM^{-1}\mu\,(M^T)^{-1}D^T \qquad (4)$$

In the limit $\mu \to 0$ the exact lepton number symmetry is recovered and will keep neutrinos strictly massless to all orders in perturbation theory, as in the SM. The corresponding texture of the mass matrix has been suggested in various theoretical models, such as superstring-inspired models. In the latter the zeros arise due to the lack of Higgs fields to provide the usual Majorana mass terms. The smallness of neutrino mass then follows from the smallness of $\mu$. The scale characterizing $M$, unlike $M_R$ in the seesaw scheme, can be low. As a result, in contrast to the heavy neutral leptons of the seesaw scheme, those of the present model can be light enough to be produced at high energy colliders such as LEP or at a future Linear Collider. The smallness of $\mu$ is in turn natural, in 't Hooft's sense, as the symmetry increases when $\mu \to 0$, i.e. total lepton number is restored. This scheme is a good alternative explanation of the smallness of neutrino mass, as it bypasses the need for a large mass scale, present in the seesaw unification approach. One can show that, since the matrices $D$ and $M$ are not simultaneously diagonal, the leptonic charged current exhibits a non-trivial structure that cannot be rotated away, even if we set $\mu = 0$. The phenomenological implication of this otherwise innocuous twist on the SM is that there is neutrino mixing despite the fact that the light neutrinos are strictly massless. It follows that flavour and CP are violated in the leptonic currents, despite the masslessness of neutrinos. The loop-induced lepton flavour and CP non-conservation effects, such as $\mu \to e\gamma$, or CP asymmetries in lepton-flavour-violating processes such as $Z \to e\bar{\tau}$ or $Z \to \tau\bar{e}$, are precisely calculable. The resulting rates may be of experimental interest, since they are not constrained by the bounds on neutrino mass, only by those on universality, which are relatively poor. In short, this is a conceptually simple and phenomenologically rich scheme. Another remarkable implication of this model is a new type of resonant neutrino conversion mechanism, which was the first resonant mechanism to be proposed after the MSW effect, in an unsuccessful attempt to bypass the need for neutrino mass in the resolution of the solar neutrino problem. According to the mechanism, massless neutrinos and anti-neutrinos may undergo resonant flavour conversion, under certain conditions. Though these do not occur in the Sun, they can be realized in the chemical environment of supernovae. Recently it has been pointed out how they may provide an elegant approach for explaining the observed velocity of pulsars.

#### 2.2.2 Radiative Neutrino Masses

The prototype one-loop scheme is the one proposed by Zee. Supersymmetry with explicitly broken R-parity also provides alternative one-loop mechanisms to generate neutrino mass, arising from scalar quark or scalar lepton exchanges, as shown in Fig. (1). An interesting two-loop scheme to induce neutrino masses was suggested by Babu, based on the diagram shown in Fig. (2). Note that I have used here a slight variant of the original model which incorporates the idea of spontaneous, rather than explicit, lepton number violation.
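Returning for a moment to the tree-level formula, a quick numerical sketch of Eq. (4) (Python, with invented one-generation values chosen only for illustration): the light Majorana mass scales linearly with $\mu$ and stays tiny even when the singlet scale $M$ is near the weak scale.

```python
import numpy as np

def m_light(D, M, mu):
    """Light-neutrino Majorana mass matrix of Eq. (4):
    M_L = D M^{-1} mu (M^T)^{-1} D^T  (all blocks in GeV)."""
    Minv = np.linalg.inv(M)
    return D @ Minv @ mu @ Minv.T @ D.T

# One-generation toy numbers (assumptions, for illustration only):
D = np.array([[10.0]])       # Dirac entry ~ 10 GeV
M = np.array([[1000.0]])     # singlet scale ~ 1 TeV: accessible, not GUT-like
for mu_val in (1e-5, 1e-6):  # tiny lepton-number-violating entry, in GeV
    mu = np.array([[mu_val]])
    m_nu = m_light(D, M, mu)[0, 0] * 1e9   # GeV -> eV
    print(f"mu = {mu_val:.0e} GeV  ->  m_nu = {m_nu:.2e} eV")
```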
Finally, note also that one can combine these mechanisms as building blocks in order to provide schemes for massive neutrinos, in particular those in which there are not only the three active neutrinos but also one or more light sterile neutrinos. In fact this brings in novel Feynman graph topologies.

### 2.3 Supersymmetry: R-parity Violation as the Origin of Neutrino Mass

This is an interesting mechanism of neutrino mass generation which combines seesaw and radiative mechanisms. It invokes supersymmetry with broken R-parity as the origin of neutrino mass and mixings. The simplest way to illustrate the idea is to use the bilinear breaking of R-parity in a unified minimal supergravity scheme with universal soft breaking parameters (MSUGRA). Contrary to a popular misconception, the bilinear violation of R-parity implied by the $\epsilon_3$ term in the superpotential is physical, and cannot be rotated away. It also leads, via a minimization condition, to a non-zero sneutrino vev, $v_3$. It is well known that in such models of broken R-parity the tau neutrino $\nu_\tau$ acquires a mass, due to the mixing between neutrinos and neutralinos. It comes from the matrix

$$\left[\begin{array}{ccccc}M_1& 0& \frac{1}{2}g'v_d& \frac{1}{2}g'v_u& \frac{1}{2}g'v_3\\ 0& M_2& \frac{1}{2}gv_d& \frac{1}{2}gv_u& \frac{1}{2}gv_3\\ \frac{1}{2}g'v_d& \frac{1}{2}gv_d& 0& \mu& 0\\ \frac{1}{2}g'v_u& \frac{1}{2}gv_u& \mu& 0& \epsilon_3\\ \frac{1}{2}g'v_3& \frac{1}{2}gv_3& 0& \epsilon_3& 0\end{array}\right] \qquad (5)$$

where the first two rows correspond to the gauginos, the next two to the Higgsinos, and the last one denotes the tau neutrino. Here $v_u$ and $v_d$ are the standard vevs, $g$ and $g'$ are the gauge couplings and $M_{1,2}$ are the gaugino mass parameters. Since $\epsilon_3$ and $v_3$ are related, the simplest (one-generation) version of this model contains only one extra free parameter in addition to those of the MSUGRA model. The universal soft supersymmetry-breaking parameters at the unification scale $m_X$ are evolved via renormalization group equations down to the weak scale $\mathcal{O}(m_Z)$. This induces an effective non-universality of the soft terms at the weak scale, which in turn implies a non-zero sneutrino vev $v_3'$ given as

$$v_3' \simeq \frac{\epsilon_3\mu}{m_Z^4}\left(v_d\,\Delta M^2+\mu v_u\,\Delta B\right) \qquad (6)$$

where the primed quantities refer to a basis in which we eliminate the $\epsilon_3$ term from the superpotential (but reintroduce it, of course, in other sectors of the theory). The scalar soft masses and bilinear mass parameters obey $\Delta M^2=0$ and $\Delta B=0$ at $m_X$. However, at the weak scale they are calculable from radiative corrections as

$$\Delta M^2 \simeq \frac{3h_b^2}{8\pi^2}\,m_Z^2\,\ln\frac{M_{GUT}}{m_Z} \qquad (7)$$

Note that eq. (6) implies that the R-parity-violating effects induced by $v_3'$ are calculable in terms of the primordial R-parity-violating parameter $\epsilon_3$. It is clear that the universality of the soft terms plays a crucial rôle in the calculability of $v_3'$ and hence of the resulting neutrino mass. Thus eq. (5) represents a new kind of see-saw scheme in which the $M_R$ of eq. (1) is the neutralino mass, while the rôle of the Dirac entry $D$ is played by $v_3'$, which is induced radiatively as the parameters evolve from $m_X$ to the weak scale.
Thus we have a hybrid see-saw mechanism, with a naturally suppressed Majorana $\nu_\tau$ mass induced by the mixing between the weak-eigenstate tau neutrino and the zino. In order to estimate the expected $\nu_\tau$ mass, let me first determine the tau neutrino mass in the most general supersymmetric model with bilinear breaking of R-parity, without imposing universality of the soft SUSY breaking terms. The $\nu_\tau$ mass depends quadratically on an effective parameter $\xi \equiv (\epsilon_3 v_d+\mu v_3)^2 \propto v_3'^2$ characterizing the violation of R-parity. The expected $m_{\nu_\tau}$ values are illustrated in Fig. (3). The band shown in the figure is obtained through a scan over the parameter space requiring that the supersymmetric particles are not too light. This should be compared with the cosmologically allowed values of the tau neutrino mass, $m_\nu < 92\,\Omega h^2$ eV (see below). Note that this only holds if neutrinos are stable. In the present model the $\nu_\tau$ is expected to decay into 3 neutrinos, via the neutral current, or by slepton exchanges. This decay will reduce the relic $\nu_\tau$ abundance to the required level, as long as $\nu_\tau$ is heavier than about 200 keV or so. Since, on the other hand, primordial Big-Bang nucleosynthesis implies that $\nu_\tau$ is lighter than about an MeV or so, there is a forbidden gap in this model if the majoron is not introduced. In the full version of the model the presence of the majoron allows all neutrino masses to be viable cosmologically. Back to the simplest model with explicit bilinear breaking of R-parity, let me note that in this model the $\nu_\tau$ mass can be very large. A way to obtain a model with a small and calculable $\nu_\tau$ mass, as indicated by the simplest interpretation of the atmospheric neutrino anomaly in terms of $\nu_\mu$ to $\nu_\tau$ oscillations, is to assume a SUGRA scheme with universality of the soft supersymmetry breaking terms at $m_X$. In this case the $\nu_\tau$ mass is theoretically predicted in terms of $h_b$ and can be small due to a natural cancellation between the two terms in the parameter $\xi$, which follows from the assumed universality of the soft terms at $m_X$. One can verify that $m_{\nu_\tau}$ may easily lie in the ten electron-volt range. Lower masses require a cancellation about two orders of magnitude beyond that which is dictated by the RGE evolution, which is certainly not unreasonable. Moreover, the solution of the atmospheric neutrino anomaly may involve some exotic mechanism, such as the FC interactions. As a last remark I note that $\nu_e$ and $\nu_\mu$ remain massless in this approximation. They get masses either from the scalar loop contributions in Fig. (1) or by mixing with singlets in models with spontaneous breaking of R-parity. A detailed study of the loop contributions to $\nu_\mu$ and $\nu_e$ is now underway in Valencia. It is important to notice that even when $m_{\nu_\tau}$ is small, many of the corresponding R-parity violating effects can be sizeable. An obvious example is the fact that the lightest neutralino will typically decay inside the detector, unlike in standard R-parity-conserving supersymmetry. This leads to a vastly unexplored plethora of phenomenological possibilities in supersymmetric physics.
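A minimal numerical sketch of this hybrid see-saw follows (Python; the parameter values are invented, MSUGRA-like illustrations, and one common sign convention is imposed on the entries of Eq. (5), whose signs are convention-dependent; the fit of Fig. 3 is not reproduced here). Diagonalizing the 5×5 neutralino–neutrino matrix shows the light eigenvalue collapsing quadratically as the R-parity-violating entries shrink.

```python
import numpy as np

# Illustrative weak-scale inputs in GeV (assumptions, not fitted values)
M1, M2, mu = 120.0, 240.0, 500.0
g, gp = 0.65, 0.36                 # SU(2) and U(1) gauge couplings
vd, vu = 40.0, 170.0               # Higgs vevs

def nu_tau_mass(eps3, v3):
    """Smallest |eigenvalue| of the matrix of Eq. (5), in the basis
    (B-ino, W-ino, H_d, H_u, nu_tau); one assumed sign convention."""
    M = np.array([
        [M1,        0.0,      -gp*vd/2,  gp*vu/2, -gp*v3/2],
        [0.0,       M2,        g*vd/2,  -g*vu/2,   g*v3/2],
        [-gp*vd/2,  g*vd/2,    0.0,     -mu,       0.0],
        [ gp*vu/2, -g*vu/2,   -mu,       0.0,      eps3],
        [-gp*v3/2,  g*v3/2,    0.0,      eps3,     0.0]])
    return np.min(np.abs(np.linalg.eigvalsh(M)))

for eps3, v3 in [(1.0, 1e-4), (0.1, 1e-5)]:   # GeV
    m = nu_tau_mass(eps3, v3) * 1e9           # GeV -> eV
    print(f"eps3 = {eps3} GeV, v3 = {v3} GeV  ->  m_nu_tau ~ {m:.3g} eV")
```

Scaling both R-parity-violating entries down by a factor of ten reduces the printed mass by roughly a hundred, the quadratic dependence on $\xi$ noted above.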
In conclusion one can see that, of the various attractive schemes for giving neutrinos a mass, only the seesaw scheme requires a large mass scale. It gives a grand connection between the very light (the neutrinos) and the very heavy (some unknown particles). At this stage it is premature to bet on any mechanism, and from this point of view neutrinos open the door to a potentially rich phenomenology, since the extra particles required have masses at scales that could be accessible to present experiments. In the simplest versions of these models the neutrino mass arises from the explicit violation of lepton number. Their phenomenological potential gets even richer if one generalizes the models so as to implement a spontaneous violation scheme. This brings me to the next section.

### 2.4 Majorons at the Weak Scale

The generation of neutrino masses will be accompanied by the existence of a physical Goldstone boson, which we generically call the majoron, in any model where lepton number (or B-L) is an ungauged symmetry which is arranged to break spontaneously. Except for the left-right symmetric unification approach, in which B-L is a gauge symmetry, in all of the above schemes one can implement the spontaneous violation of lepton number. One can also introduce it in a seesaw framework, both with $SU(2)\otimes U(1)$ and with left-right symmetry. While in the $SU(2)\otimes U(1)$ case it is rather simple, in the case of left-right-symmetric models one needs to implement a spontaneously broken global U(1) symmetry similar to lepton number. One interesting aspect that emerges in the latter case is that the left-right scale is then allowed to be relatively low. Here I do not consider the seesaw-type majorons; for a discussion see ref. . I will mainly concentrate on weak-scale physics. In all the models I consider, lepton number breaks at a scale given by a vacuum expectation value $\sigma \sim m_{weak}$. In all of these models the weak scale arises as the most natural one and, as already mentioned, the neutrino masses vanish when $\sigma \to 0$, i.e. when the lepton-number-breaking scale vanishes. In any phenomenologically acceptable model one must arrange for the majoron to be mainly an $SU(2)\otimes U(1)$ singlet, ensuring that it does not affect the invisible Z decay width, well measured at LEP. In models where the majoron has L=2 the neutrino mass is proportional to an insertion of $\sigma$, as indicated in Fig. (2). In the supersymmetric model with broken R-parity the majoron is mainly a singlet sneutrino, which has lepton number L=1, so that $m_\nu \propto \sigma^2$, where $\sigma \equiv \langle\tilde{\nu}^c\rangle$, with $\tilde{\nu}^c$ denoting the singlet sneutrino. The presence of the square, just as in the parameter $\xi$ in Fig. (3), reflects the fact that the neutrino gets a Majorana mass, which has lepton number L=2. The sneutrino gets a vev at the effective supersymmetry breaking scale $m_{susy} \sim m_{weak}$. The weak-scale majorons may have other phenomenological implications. One is the possibility of invisibly decaying Higgs bosons, which I have no time to discuss here. Finally, note that if the majoron acquires a keV mass (natural in weak-scale models) from gravitational effects at the Planck scale, it may obey the main requirements to play a rôle in cosmology as dark matter. In what follows I will just focus on two examples of how the underlying physics of weak-scale majoron models can affect neutrino cosmology in an important way.
#### 2.4.1 Heavy Neutrinos and the Universe Mass

Neutrinos of mass less than $\mathcal{O}$(100 keV) or so are cosmologically stable if they have only SM interactions. Their contribution to the present density of the universe implies

$$\sum_i m_{\nu_i} < 92\,\Omega_\nu h^2\ \mathrm{eV}, \qquad (8)$$

where the sum is over all isodoublet neutrino species with mass less than $\mathcal{O}$(1 MeV). The parameter $\Omega_\nu h^2 \le 1$, where $h$ measures the uncertainty in the present value of the Hubble parameter, $0.4 < h < 1$, while $\Omega_\nu = \rho_\nu/\rho_c$ measures the fraction of the critical density $\rho_c$ in neutrinos. For $\nu_\mu$ and $\nu_\tau$ this bound is much more stringent than the laboratory limits. In weak-scale majoron models the generation of neutrino mass is accompanied by the existence of a physical majoron, which leads to potentially fast majoron-emitting decay channels such as

$$\nu' \to \nu + J, \qquad (9)$$

as well as new annihilations to majorons,

$$\nu' + \nu' \to J + J. \qquad (10)$$

These could eliminate relic neutrinos and therefore allow neutrinos of higher mass, as long as the rates are large enough to allow for an adequate red-shift of the heavy neutrino decay and/or annihilation products. While the annihilation involves a diagonal majoron-neutrino coupling $g$, the decays proceed only via the non-diagonal part of the coupling in the physical mass basis. A careful diagonalization of both the mass matrix and the coupling matrix is essential in order to avoid wild over-estimates of the heavy neutrino decay rates, such as that in ref. . The point is that, once the neutrino mass matrix is diagonalized, there is a danger of simultaneously diagonalizing the majoron couplings to neutrinos. That would be analogous to the GIM mechanism present in the SM for the couplings of the Higgs to fermions. Models that avoid this GIM mechanism in the majoron-neutrino couplings have been proposed; many of them are weak-scale majoron models. A general method to determine the majoron couplings to neutrinos, and hence the neutrino decay rates, in any majoron model was first given in ref. ; for an estimate in the model with spontaneously broken R-parity see ref. . One can summarize that, since neutrinos can be short-lived, their masses can only really be constrained by laboratory experiments based on direct searches. The cosmological and other bounds are important but require additional theoretical elements in their interpretation.

#### 2.4.2 Heavy Neutrinos and Cosmological Nucleosynthesis

The number of light neutrino species is restricted by cosmological Big Bang Nucleosynthesis (BBN). Due to its large mass, an MeV stable (lifetime longer than $\sim 100$ sec) tau neutrino would be equivalent to several massless SM neutrino species and would therefore substantially increase the abundance of primordially produced elements, e.g. $^4$He and deuterium. This can be converted into restrictions on the $\nu_\tau$ mass. If the bound on the effective number of massless neutrino species is taken as $N_\nu < 3.4$–$3.6$, one can rule out $\nu_\tau$ masses above 0.5 MeV. If we take $N_\nu < 4.5$ the $m_{\nu_\tau}$ limit loosens accordingly, as seen from Fig. (4), and allows a $\nu_\tau$ of about an MeV or so. In the presence of $\nu_\tau$ annihilations the BBN $m_{\nu_\tau}$ bound is substantially weakened or eliminated. In Fig.
(4) we also give the expected $N_\nu$ value for different values of the coupling $g$ between the $\nu_\tau$'s and the $J$'s, expressed in units of $10^{-5}$. Comparing with the SM $g=0$ case, one sees that for a fixed $N_\nu^{max}$ a wide range of tau neutrino masses is allowed for large enough values of $g$. No $\nu_\tau$ masses below the LEP limit can be ruled out, as long as $g$ exceeds a few times $10^{-4}$. One can also see from the figure that $N_\nu$ can be lowered below the canonical SM value $N_\nu = 3$ due to the effect of the heavy $\nu_\tau$ annihilations to majorons. These results may be re-expressed in the $m_{\nu_\tau}$–$g$ plane, as shown in figure 5. We note that the required values of $g(m_{\nu_\tau})$ fit well with the theoretical expectations of many weak-scale majoron models. As we have seen, $\nu_\tau$ annihilations to majorons may weaken or even eliminate the BBN constraint on the tau neutrino mass. Similarly, in some weak-scale majoron models the decays in eq. (9) may lead to $\nu_\tau$ lifetimes short enough that they also play an important rôle in BBN, again with the possibility of substantially weakening or eliminating the BBN constraint on the tau neutrino mass.

## 3 Indications for New Physics

The most solid hints in favour of new physics in the neutrino sector come from underground experiments on solar and atmospheric neutrinos. The published data correspond to a 504-day solar neutrino data sample and a 535-day atmospheric neutrino data sample, respectively. These were the data first presented at the Neutrino 98 conference in Japan. Here we also include some results from the more recent 708-day data sample.

### 3.1 Solar Neutrinos

The data collected by Kamiokande and by the radiochemical Homestake, Gallex and Sage experiments have no Standard Model explanation. The event rates are summarized as: $2.56\pm 0.23$ SNU (chlorine), $72.2\pm 5.6$ SNU (the Gallex and Sage gallium experiments, sensitive to the $pp$ neutrinos), and $(2.44\pm 0.10)\times 10^6\ \mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ ($^8$B flux from Super-Kamiokande). In Fig. (6) one can see the predictions of various standard solar models in the plane defined by the $^7$Be and $^8$B neutrino fluxes, normalized to the predictions of the BP98 solar model. Abbreviations such as BP95 identify different solar models. The rectangular error box gives the $3\sigma$ error range of the BP98 fluxes. The values of these fluxes indicated by present data on neutrino event rates are also shown by the contours in the figure. The best-fit $^7$Be neutrino flux is negative! Possible non-standard astrophysical solutions are strongly constrained by helioseismology studies. Within the standard solar model approach, the theoretical predictions clearly lie well away from the $3\sigma$ contour, strongly suggesting the need for new particle physics in order to account for the data. The most likely possibility is to assume the existence of neutrino conversions, such as could be induced by very small neutrino masses. Possibilities include the MSW effect, vacuum neutrino oscillations, the Resonant Spin-Flavour Precession mechanism and, possibly, flavour changing neutrino interactions. The recent 708-day data sample presents no major surprises, except that the recoil energy spectrum produced by solar neutrino interactions shows more events in the highest bins.
Barring the possibility of poorly understood energy resolution effects, Bahcall and Krastev have noted that if the flux of neutrinos coming from $^3\mathrm{He}+p \to {}^4\mathrm{He}+e^+ +\nu_e$, the so-called $hep$ reaction, is well above the (uncertain) SSM predictions, then this could significantly influence the electron energy spectrum produced by solar neutrino interactions in the high recoil region, with hardly any effect at lower energies. Fig. 7 shows the expected normalized recoil electron energy spectrum compared with the most recent experimental data. The solid line represents the prediction for the best-fit SMA solution with free $^8$B and $hep$ normalizations (0.69 and 12, respectively), while the dotted line gives the corresponding prediction for the best-fit LMA solution (1.15 and 34, respectively). Finally, the dashed line represents the prediction for the best no-oscillation scheme with free $^8$B and $hep$ normalizations (0.44 and 14, respectively). Clearly the spectra with enhanced $hep$ neutrinos provide better fits to the data. However, Fiorentini et al. have argued that the required $hep$ amount is too large to accept on theoretical grounds. We look forward to the improvement of the situation in the next round of data. The increasing rôle played by rate-independent observables such as the spectrum, as well as seasonal and day-night asymmetries, marks a turning point in solar neutrino research, which will eventually select the mechanism responsible for the explanation of the solar neutrino problem. The required solar neutrino parameters are determined through a $\chi^2$ fit of the experimental data. In Fig. (8) we show the allowed regions in $\Delta m^2$ and $\sin^2\theta$ from the measurements of the total event rates at the chlorine, gallium and Super-Kamiokande (708-day data sample) experiments, combined with the zenith angle distribution observed in Super-Kamiokande, the recoil energy spectrum and the seasonal dependence of the event rates, for active-active oscillations (a) and active-sterile oscillations (b). The darker (lighter) areas indicate the 90% (99%) CL regions. The best-fit points in each region are indicated by a star. The analysis uses free $^8$B and $hep$ normalizations. One notices from the analysis that rate-independent observables, such as the electron recoil energy spectrum and the day-night asymmetry (zenith angle distribution), are playing an increasing rôle in ruling out large regions of parameters. Another example of an observable which has been neglected in most analyses of the MSW effect, and which could be sizeable for the large mixing angle (LMA) region, is the seasonal dependence of the solar neutrino flux which would result from the regeneration effect at the Earth. This should play a more significant rôle in future investigations. A theoretical issue which has raised some interest recently is the study of the possible effect of random fluctuations in the solar matter density. The possible existence of noise fluctuations at the few percent level is not excluded by present helioseismology studies. In Fig. (9) we show the averaged solar neutrino survival probability as a function of $E/\Delta m^2$, for $\sin^2 2\theta = 0.01$. This figure was obtained via a numerical integration of the MSW evolution equation in the presence of noise, using the solar density profile from BP95
, and assuming that the correlation length $L_0$ (which corresponds to the scale of the fluctuation) is $L_0 = 0.1\,\lambda_m$, where $\lambda_m$ is the neutrino oscillation length in matter. An important assumption in the analysis is that $l_{free} \ll L_0 \ll \lambda_m$, where $l_{free} \sim 10$ cm is the mean free path of the electrons in the solar medium. The fluctuations may strongly affect the $^7$Be neutrino component of the solar neutrino spectrum, so that the Borexino experiment should provide an ideal test, if sufficiently small errors can be achieved. The potential of Borexino in probing the level of solar matter density fluctuations provides an additional motivation for the experiment. The most popular alternative solution to the solar neutrino problem is the vacuum oscillation solution, which clearly requires large neutrino mixing and an oscillation length adjusted so as to coincide roughly with the Earth-Sun distance. This solution fits well with some theoretical models. Fig. 10 shows the regions of just-so oscillation parameters at the 95% CL obtained in a recent fit of the data, including the rates, the recoil energy spectrum and the seasonal effects which are expected in this scenario and could potentially help in discriminating it from the MSW scenario.

### 3.2 Atmospheric Neutrinos

There has been a long-standing discrepancy between the predicted and measured $\nu_\mu/\nu_e$ ratio of the fluxes of atmospheric neutrinos. The anomaly was found both in water Cerenkov experiments (Kamiokande, Super-Kamiokande and IMB) and in the iron calorimeter Soudan2 experiment. Negative experiments, such as Frejus and Nusex, have much larger errors. Although the individual $\nu_\mu$ or $\nu_e$ fluxes are only known to within $30\%$ accuracy, the $\nu_\mu/\nu_e$ ratio is known to $5\%$. The most important feature of the atmospheric neutrino 535-day data sample is that it exhibits a zenith-angle-dependent deficit of muon neutrinos which is inconsistent with theoretical expectations. For recent analyses see ref. . Experimental biases and uncertainties in the prediction of neutrino fluxes and cross sections are unable to explain the data. Fig. 11 shows the measured zenith angle distribution of electron-like and muon-like sub-GeV and multi-GeV events, as well as the one predicted in the absence of oscillation. It also gives the expected distribution in various neutrino oscillation schemes. The thick solid histogram is the theoretically expected distribution in the absence of oscillation, while the predictions for the best-fit points of the various oscillation channels are indicated as follows: $\nu_\mu \to \nu_s$ (solid line), $\nu_\mu \to \nu_e$ (dashed line) and $\nu_\mu \to \nu_\tau$ (dotted line). The error displayed in the experimental points is only statistical. The analysis used the latest improved calculations of the atmospheric neutrino fluxes as a function of zenith angle, including the muon polarization effect, and took into account a variable neutrino production point. Clearly the data are not reproduced by the no-oscillation hypothesis. The most popular way to account for this anomaly is in terms of neutrino oscillations. In Fig. (12) I show the allowed parameters obtained in a global fit of the sub-GeV and multi-GeV (vertex-contained) atmospheric neutrino data, including the 535-day SK data as well as all other experiments, combined at 90% (thick solid line) and 99% CL (thin solid line) for each oscillation channel considered.
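As a reminder of the quantity being fitted here, a short sketch of the standard two-flavour vacuum survival probability, $P(\nu_\mu \to \nu_\mu) = 1 - \sin^2 2\theta\,\sin^2(1.27\,\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]/E[\mathrm{GeV}])$ (a generic textbook formula; the parameter values below are illustrative round numbers, not the best-fit values of the analysis):

```python
import numpy as np

def p_survival(L_km, E_GeV, dm2_eV2, sin2_2theta):
    """Two-flavour survival probability in vacuum,
    P = 1 - sin^2(2 theta) sin^2(1.27 dm^2 L / E)."""
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV)**2

# Atmospheric-like parameters (assumed): dm^2 ~ 3e-3 eV^2, maximal mixing.
dm2, s2t, E = 3e-3, 1.0, 1.0   # eV^2, dimensionless, GeV
for L in (15.0, 500.0, 13000.0):   # down-going, horizontal, up-going (km)
    print(f"L = {L:7.0f} km:  P(nu_mu -> nu_mu) = "
          f"{p_survival(L, E, dm2, s2t):.2f}")
```

Down-going neutrinos are essentially unaffected while up-going ones are strongly depleted, which is the zenith-angle signature discussed above.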
Returning to Fig. (12): the two lower panels differ in the sign of $\Delta m^2$ assumed in the analysis of the matter effects in the Earth for the $\nu_\mu \to \nu_s$ oscillations. Though $\nu_\mu \to \nu_\tau$ oscillations give a slightly better fit than $\nu_\mu \to \nu_s$ oscillations, at present the atmospheric neutrino data cannot distinguish between these channels. It is well known that neutral-to-charged current ratios are important observables in neutrino oscillation phenomenology, being especially sensitive to the existence of singlet neutrinos, light or heavy. The atmospheric neutrinos produce isolated neutral pions ($\pi^0$-events) mainly in neutral current interactions. One may therefore study the ratios of $\pi^0$-events to the events induced mainly by the charged currents, as recently advocated in ref. . This minimizes uncertainties related to the original atmospheric neutrino fluxes. In fact the Super-Kamiokande collaboration has estimated the double ratio of $\pi^0$ over e-like events in their sample and found $R = 0.93\pm 0.07\pm 0.19$. This is consistent with both the $\nu_\mu \to \nu_\tau$ and the $\nu_\mu \to \nu_s$ channels, with a slight preference for the former. The situation should improve in the future. We also display in Fig. (12) the sensitivity of present accelerator and reactor experiments, as well as that expected at future long-baseline (LBL) experiments. The first point to note is that the Chooz reactor data exclude the region indicated for the $\nu_\mu \to \nu_e$ channel when all experiments are combined at 90% CL. From the upper-left panel in Fig. (12) one sees that the regions of $\nu_\mu \to \nu_\tau$ oscillation parameters obtained from the atmospheric neutrino data analysis cannot be fully tested by the LBL experiments, as presently designed. One might expect that, due to the upward shift of the $\Delta m^2$ indicated by the fit for the sterile case (due to the effects of matter in the Earth), it would be possible to completely cover the corresponding region of oscillation parameters. Although this is the case for the MINOS disappearance test, in general most of the LBL experiments cannot completely probe the region of oscillation parameters indicated by the $\nu_\mu \to \nu_s$ atmospheric neutrino analysis, irrespective of the sign of $\Delta m^2$ assumed. For a discussion of the various potential tests that can be performed at the future LBL experiments in order to unravel the presence of oscillations into sterile channels, see ref. . However appealing it may be, the neutrino oscillation interpretation of the atmospheric neutrino anomaly is at the moment by no means unique. Indeed, the anomaly can be well accounted for in terms of flavour changing neutrino interactions, with no need for neutrino mass or mixing. Investigations involving upward through-going muons by Super-Kamiokande as well as other experiments will play an important rôle in discriminating between oscillations and alternative mechanisms to explain the sub- and multi-GeV atmospheric neutrino data.

### 3.3 Other Hints

#### 3.3.1 LSND

The Los Alamos Meson Physics Facility looked for $\bar{\nu}_\mu \to \bar{\nu}_e$ oscillations using $\bar{\nu}_\mu$ from $\mu^+$ decay at rest. The $\bar{\nu}_e$'s are detected via the reaction $\bar{\nu}_e\,p \to e^+ n$, correlated with a $\gamma$ from $np \to d\gamma$ ($2.2\ \mathrm{MeV}$).
The results indicate $\bar{\nu}_\mu \to \bar{\nu}_e$ oscillations, with an oscillation probability of $(0.31^{+0.11}_{-0.10}\pm 0.05)\%$, leading to the oscillation parameters shown in Fig. (13). The shaded regions are the favoured likelihood regions given in ref. . The curves show the 90% and 99% likelihood allowed ranges from LSND, and the limits from BNL776, KARMEN1, Bugey, CCFR, and NOMAD. A search for $\nu_\mu \to \nu_e$ oscillations has also been conducted by the LSND collaboration. Using $\nu_\mu$ from $\pi^+$ decay in flight, the $\nu_e$ appearance is detected via the charged-current reaction $C(\nu_e,e^-)X$. Two independent analyses are consistent with the above signature, after taking into account the events expected from the $\nu_e$ contamination in the beam and the beam-off background. If interpreted as an oscillation signal, the observed oscillation probability is $(2.6\pm 1.0\pm 0.5)\times 10^{-3}$, consistent with the evidence for oscillation in the $\bar{\nu}_\mu \to \bar{\nu}_e$ channel described above. Fig. 14 compares the LSND region with the expected sensitivity of MiniBooNE, which was recently approved to run at Fermilab. A possible confirmation of the LSND anomaly would be a discovery of far-reaching implications.

#### 3.3.2 Dark Matter

Galaxies as well as the large scale structure in the Universe should arise from the gravitational collapse of fluctuations in the expanding universe. They are sensitive to the nature of the cosmological dark matter. The data on cosmic background temperature anisotropies on large scales from the COBE satellite, combined with cluster-cluster correlation data, e.g. from IRAS, cannot be reconciled with the simplest COBE-normalized $\Omega_m=1$ cold dark matter (CDM) model, since it leads to too much power on small scales. Adding to CDM neutrinos with a mass of a few eV (a scale similar to the one indicated by the LSND experiment), corresponding to $\Omega_\nu \approx 0.2$, results in an improved fit to the data on the nearby galaxy and cluster distribution. The resulting Cold + Hot Dark Matter (CHDM) cosmological model is the most successful $\Omega_m=1$ model for structure formation, preferred by inflation. However, other recent data have begun to indicate a lower value of $\Omega_m$, thus weakening the cosmological evidence favouring a neutrino mass of a few eV in flat models with cosmological constant $\Omega_\Lambda = 1-\Omega_m$. Future high-precision sky maps of the cosmic microwave background radiation (CMBR) from the MAP and PLANCK missions should shed more light on the nature of the dark matter and the possible rôle of neutrinos. Another possibility is to consider unstable dark matter scenarios. For example, an MeV-range tau neutrino may provide a viable unstable dark matter scenario if the $\nu_\tau$ decays before the matter dominance epoch. Its decay products would add energy to the radiation, thereby delaying the time at which the matter and radiation contributions to the energy density of the universe become equal. Such a delay would allow one to reduce the density fluctuations on the smaller scales purely within the standard cold dark matter scenario. The upcoming MAP and PLANCK missions may place limits on neutrino stability and rule out such schemes.
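For orientation, the numbers quoted above hang together as follows, via the relic-density relation of Eq. (8) (a sketch; the values of $h$ are assumed for illustration):

```python
# Eq. (8): sum(m_nu) = 92 * Omega_nu * h^2 eV. For the CHDM value
# Omega_nu = 0.2 and illustrative Hubble parameters:
omega_nu = 0.2
for h in (0.5, 0.65):
    total_m = 92.0 * omega_nu * h**2      # eV
    print(f"h = {h}: sum(m_nu) = {total_m:.1f} eV "
          f"(~{total_m/2:.1f} eV each for two degenerate HDM neutrinos)")
```

This is why "a few eV" of neutrino mass corresponds to $\Omega_\nu \approx 0.2$ in the CHDM fits mentioned above.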
#### 3.3.3 Pulsar Velocities

One of the most challenging problems in modern astrophysics is to find a consistent explanation for the high velocities of pulsars. Observations show that these velocities range from zero up to 900 km/s, with a mean value of $450\pm 50$ km/s. An attractive possibility is that pulsar motion arises from asymmetric neutrino emission during the supernova explosion. In fact, neutrinos carry more than $99\%$ of the new-born proto-neutron star's gravitational binding energy, so that even a $1\%$ asymmetry in the neutrino emission could generate the observed pulsar velocities. This could in principle arise from the interplay between the parity violation present in weak interactions and the strong magnetic fields which are expected during a SN explosion. However, it has recently been noted that no asymmetry in neutrino emission can be generated in thermal equilibrium, even in the presence of parity violation. This suggests that an alternative mechanism is at work. Several neutrino conversion mechanisms in matter have been invoked as a possible engine for powering pulsar motion. They all rely on the polarization of the SN medium induced by the strong magnetic fields of $\sim 10^{15}$ Gauss present during a SN explosion. This would affect the neutrino propagation properties, giving rise to an angular dependence of the matter-induced neutrino potentials. This would lead in turn to a deformation of the "neutrino-sphere" for, say, tau neutrinos, and thus to an anisotropic neutrino emission. As a consequence, in the presence of non-vanishing $\nu_\tau$ mass and mixing, the resonance sphere for the $\nu_e \to \nu_\tau$ conversions is distorted. If the resonance surface lies between the $\nu_\tau$ and $\nu_e$ neutrino spheres, such a distortion would induce a temperature anisotropy in the flux of the escaping tau neutrinos produced by the conversions, and hence a recoil kick of the proto-neutron star. This mechanism was realized in ref. invoking MSW conversions with $m_{\nu_\tau} \gtrsim 100$ eV or so, assuming a negligible $\nu_e$ mass. It should be noted, however, that such a requirement is at odds with cosmological bounds on neutrino masses unless the $\tau$-neutrino is unstable. On the other hand, in ref. a realization was proposed in the resonant spin-flavour precession scheme (RSFP). The magnetic field would not only affect the medium properties, but would also induce the spin-flavour precession through its coupling to the neutrino transition magnetic moment. Perhaps the simplest suggestion was proposed in ref. , where the required pulsar velocities would arise from anisotropic neutrino emission induced by resonant conversions of massless neutrinos (hence no magnetic moment). Raffelt and Janka have argued, however, that the asymmetric neutrino emission effect was overestimated, since the temperature variation over the deformed neutrino-sphere is not an adequate measure of the anisotropy of the neutrino emission. This would invalidate all neutrino conversion mechanisms, leaving the pulsar velocity problem without any known viable solution. One potential way out would invoke conversions into sterile neutrinos, since these conversions would take place deeper in the star. However, it is too early to tell whether or not this works.

## 4 Fitting the Puzzles Together

Physics beyond the Standard Model is required in order to explain the solar and atmospheric neutrino data.
## 4 Fitting the Puzzles Together

Physics beyond the Standard Model is required in order to explain the solar and atmospheric neutrino data. While neutrino oscillations provide an excellent fit, alternative mechanisms are still viable. Thus it is still too early to tell for sure whether neutrino masses and angles are really being determined experimentally. Here we assume the standard neutrino oscillation interpretation of the data. While it can easily be accommodated in theories of neutrino mass, in general the angles involved are not predicted, in particular the maximal mixing indicated by the atmospheric data. It is suggestive to consider a theory with bi-maximal mixing of neutrinos if the solar neutrino data are explained in terms of the just-so solution. This is not easy to reconcile with a predictive quark-lepton unification scheme that relates lepton and quark mixing angles, since the latter are known to be small. For recent attempts to reconcile solar and atmospheric data in unified models with specific texture ansätze, see ref. .

The story gets more complicated if one wishes to account also for the LSND anomaly and for the hot dark matter. As we have seen, the atmospheric neutrino data require a $`\mathrm{\Delta }m_{atm}^2`$ much larger than the scale $`\mathrm{\Delta }m_{\odot }^2`$ indicated by the solar neutrino data. This implies that with just the three known neutrinos there is no room for all three mass scales, unless some of the experimental data are discarded.

### 4.1 Almost Degenerate Neutrinos

The only possibility to fit the solar, atmospheric and HDM scales in a world with just the three known neutrinos is if all of them have nearly the same mass, of about 1.5 eV or so, in order to provide the right amount of HDM (all three active neutrinos contribute to HDM). This can be arranged in the unification approach discussed in sec. 2 using the $`M_L`$ term present in general in seesaw models. With this in mind one can construct, e.g., unified $`SO(10)`$ seesaw models where all neutrinos lie at the above HDM mass scale ($`\sim `$ 1.5 eV), due to a suitable horizontal symmetry, while the parameters $`\mathrm{\Delta }m_{\odot }^2`$ and $`\mathrm{\Delta }m_{atm}^2`$ appear as symmetry breaking effects. An interesting fact is that the ratio $`\mathrm{\Delta }m_{\odot }^2/\mathrm{\Delta }m_{atm}^2`$ appears as $`m_c^2/m_t^2`$ . There is no room in this case to accommodate the LSND anomaly. To what extent this solution is theoretically natural has been discussed recently in ref. .

### 4.2 Four-Neutrino Models

The simplest way to incorporate the LSND scale is to invoke a fourth neutrino. It must be an $`SU(2)\otimes U(1)`$ singlet, ensuring that it does not affect the invisible Z decay width, well measured at LEP. The sterile neutrino $`\nu _s`$ must also be light enough in order to participate in the oscillations together with the three active neutrinos. The theoretical challenges we face are:

* to understand what keeps the sterile neutrino light, since the $`SU(2)\otimes U(1)`$ gauge symmetry would allow it to have a large bare mass;
* to account for the maximal neutrino mixing indicated by the atmospheric data, and possibly by the solar data;
* to account from first principles for the scales $`\mathrm{\Delta }m_{atm}^2`$, $`\mathrm{\Delta }m_{\odot }^2`$ and $`\mathrm{\Delta }m_{LSND/HDM}^2`$.

With this in mind we have formulated the simplest maximally symmetric schemes, denoted as $`(e\tau )(\mu s)`$ and $`(es)(\mu \tau )`$ , respectively. One should realize that a given scheme (mainly the structure of the leptonic charged current) may be realized in more than one theoretical model.
For example, an alternative to the model in ref. was suggested in ref. . There have been many attempts to derive the above phenomenological scenarios from different theoretical assumptions, as has been discussed here. Although many of the phenomenological features arise also in other models, here I concentrate the discussion mainly on the theories developed in ref. . These are characterized by a very symmetric mass spectrum in which there are two ultra-light neutrinos at the solar neutrino scale and two maximally mixed, almost degenerate eV-mass neutrinos (the LSND/HDM scale), split by the atmospheric neutrino scale. The HDM problem requires the heaviest neutrinos to have a mass of about 2 eV. These scales are generated radiatively, due to the additional Higgs bosons which are postulated, as follows: $`\mathrm{\Delta }m_{LSND/HDM}^2`$ arises at one loop, while $`\mathrm{\Delta }m_{atm}^2`$ and $`\mathrm{\Delta }m_{\odot }^2`$ are two-loop effects. Since these models pre-dated the LSND results, they naturally focussed on accounting for the HDM problem, rather than LSND. However, in the meantime the evidence for hot dark matter has weakened, whereas LSND came into play. In contrast to the HDM problem, the LSND anomaly, if confirmed, would be a more convincing indication for the existence of a fourth light neutrino species, considering that the HDM may be accounted for in a three-neutrino degenerate scenario.

The models in are based only on weak-scale physics. They explain the lightness of the sterile neutrino, the large lepton mixing required by the atmospheric neutrino data, as well as the generation of the mass splittings responsible for solar and atmospheric neutrino conversions, as natural consequences of the underlying lepton-number-like symmetry and its breaking. They are minimal in the sense that they add a single $`SU(2)\otimes U(1)`$ singlet lepton to the SM. Before the symmetry breaks, the heaviest neutrinos are exactly degenerate, while the other two are still massless. After the global U(1) lepton symmetry breaks, the heavier neutrinos split and the lighter ones get mass. The models differ according to whether the $`\nu _s`$ lies at the dark matter scale or at the solar neutrino scale. In the $`(e\tau )(\mu s)`$ scheme the $`\nu _s`$ lies at the LSND/HDM scale, as illustrated in Fig. 15, while in the alternative $`(es)(\mu \tau )`$ model, $`\nu _s`$ is at the solar neutrino scale, as shown in Fig. 16. In the $`(e\tau )(\mu s)`$ case the atmospheric neutrino puzzle is explained by $`\nu _\mu `$ to $`\nu _s`$ oscillations, while in $`(es)(\mu \tau )`$ it is due to $`\nu _\mu `$ to $`\nu _\tau `$ oscillations. Correspondingly, the deficit of solar neutrinos is explained in the first case by $`\nu _e`$ to $`\nu _\tau `$ conversions, while in the second the relevant channel is $`\nu _e`$ to $`\nu _s`$.

The presence of additional weakly interacting light particles, such as our light sterile neutrino, is constrained by BBN, since the $`\nu _s`$ would enter into equilibrium with the active neutrinos in the early Universe (and therefore would contribute to $`N_\nu ^{max}`$) via neutrino oscillations, unless

$$\mathrm{\Delta }m^2\mathrm{sin}^42\theta <3\times 10^{-6}\text{ eV}^2.$$

Here $`\mathrm{\Delta }m^2`$ denotes the mass-squared difference of the active and sterile species and $`\theta `$ is the vacuum mixing angle.
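To see what this bound implies (the atmospheric mass scale used below is a typical value, quoted only for illustration), note that maximal $`\nu _\mu `$–$`\nu _s`$ mixing, $`\mathrm{sin}^22\theta \simeq 1`$, at the atmospheric scale $`\mathrm{\Delta }m^2\sim 10^{-3}`$ eV² gives

$$\mathrm{\Delta }m^2\mathrm{sin}^42\theta \sim 10^{-3}\text{ eV}^2\gg 3\times 10^{-6}\text{ eV}^2,$$

about three orders of magnitude above the limit; this is why a tight $`N_\nu ^{max}`$ would put the $`(e\tau )(\mu s)`$ scheme, whose atmospheric channel is $`\nu _\mu `$ to $`\nu _s`$, under pressure, as discussed next.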
However, systematic uncertainties in the BBN bounds still caution us not to take them too literally. For example, it has been argued that present observations of primordial helium and deuterium abundances may allow up to $`N_\nu =4.5`$ neutrino species if the baryon-to-photon ratio is small. Adopting this as a limit, both models described above are clearly consistent. Should the BBN constraints get tighter, e.g. $`N_\nu ^{max}<3.5`$, they could rule out the $`(e\tau )(\mu s)`$ model, leaving only the competing scheme as a viable alternative. However, the possible rôle of a primordial lepton asymmetry might invalidate this conclusion; for recent work on this see ref. .

The two models would be distinguishable by future solar as well as atmospheric neutrino data. For example, they may be tested in the SNO experiment once it measures the solar neutrino flux ($`\mathrm{\Phi }_\nu ^{NC}`$) in its neutral-current data and compares it with the corresponding CC value ($`\mathrm{\Phi }_\nu ^{CC}`$). If the solar neutrinos convert to active neutrinos, as in the $`(e\tau )(\mu s)`$ model, then one expects $`\mathrm{\Phi }_\nu ^{CC}/\mathrm{\Phi }_\nu ^{NC}`$ around 0.5, whereas in the $`(es)(\mu \tau )`$ scheme ($`\nu _e`$ conversion to $`\nu _s`$), the above ratio would be nearly $`1`$. Looking at pion production via the neutral-current reaction $`\nu _\tau +N\to \nu _\tau +\pi ^0+N`$ in atmospheric data might also help in distinguishing between these two possibilities, since this reaction is absent in the case of sterile neutrinos, but would exist in the $`(es)(\mu \tau )`$ scheme. If light sterile neutrinos indeed exist, one can show that they might contribute to a cosmic hot dark matter component and to an increased radiation content at the epoch of matter-radiation equality. These effects leave their imprint in sky maps of the cosmic microwave background radiation (CMBR) and may thus be detectable with the very high precision measurements expected from the upcoming MAP and PLANCK missions, as noted recently in ref. .

### 4.3 MeV Tau Neutrino

In ref. a model was presented where an unstable MeV Majorana tau neutrino naturally reconciles the cosmological observations of large- and small-scale density fluctuations with the cold dark matter picture. The model assumes the spontaneous violation of a global lepton number symmetry at the weak scale. The breaking of this symmetry generates the cosmologically required decay of the $`\nu _\tau `$ with lifetime $`\tau _{\nu _\tau }\sim 10^2`$–$`10^4`$ sec, as well as the masses and oscillations of the three light neutrinos $`\nu _e`$, $`\nu _\mu `$ and $`\nu _s`$, which may account for the present solar and atmospheric data, though this will have to be checked. One can also verify that the BBN constraints can be satisfied.

## 5 In conclusion

The confirmation of an angle-dependent atmospheric neutrino deficit provides, together with the solar neutrino data, strong evidence for physics beyond the Standard Model. Small neutrino masses provide the simplest, but not unique, explanation of the data. If the LSND result stands the test of time, this would be a puzzling indication for the existence of a light sterile neutrino. The two most attractive schemes to reconcile underground observations with LSND invoke either $`\nu _e`$–$`\nu _\tau `$ conversions to explain the solar data, with $`\nu _\mu `$–$`\nu _s`$ oscillations accounting for the atmospheric deficit, or the opposite. These two basic schemes have distinct implications for future solar and atmospheric neutrino experiments.
SNO and Super-Kamiokande have the potential to distinguish them through their neutral-current sensitivity. Allowing for alternative explanations of the data from underground experiments, one can still live with massless non-standard neutrinos or even very heavy neutrinos, which may naturally arise in many models. Although cosmological bounds are a fundamental tool to restrict neutrino masses, in many theories heavy neutrinos will either decay or annihilate very fast, thereby loosening the cosmological bounds. From this point of view, neutrinos can have any mass presently allowed by laboratory experiments, and it is therefore important to search for manifestations of heavy neutrinos in the laboratory in an unbiased way. Last but not least, though most of the recent excitement comes from underground experiments, one should note that models of neutrino mass may lead to a plethora of new signatures which may be accessible also at accelerators, thus illustrating the complementarity between the two approaches in unravelling the properties of neutrinos and probing for signals beyond the Standard Model.

I am grateful to the Organizers for the kind hospitality at Corfu. This work was supported by DGICYT grant PB95-1077 and by the EEC under the TMR contract ERBFMRX-CT96-0090.
# Effects of nonmagnetic impurities on optical conductivity in strongly correlated systems

## I Introduction

Optical spectroscopy is an important tool in probing electronic states of strongly correlated systems, including the high-$`T_c`$ cuprates. Infrared properties of the optical conductivity of $`\mathrm{YBa}_2\mathrm{Cu}_3\mathrm{O}_{7-\delta }`$ crystals have been intensively investigated to explore the low-energy dynamics of charge carriers. The optical conductivity of $`\mathrm{La}_{2-x}\mathrm{Sr}_x\mathrm{CuO}_4`$ for several doping rates between $`x=0`$ and $`x=0.34`$ at room temperature exhibited the appearance of a mid-infrared band near 0.5 eV with increasing hole doping concentration. The mid-infrared band was also observed for the nonstoichiometric cuprates $`\mathrm{Nd}_2\mathrm{CuO}_{4-y}`$ and $`\mathrm{La}_2\mathrm{CuO}_{4+y}`$ with some vacancies on oxygen sites. Exact diagonalization calculations on small clusters verified the existence of the mid-infrared band upon doping holes away from half filling.

Nonmagnetic impurities embedded in the high-$`T_c`$ cuprates have been used to investigate transport and magnetic properties of the cuprates. A small amount of Zn substituted for Cu is known to appreciably reduce the superconducting transition temperature. Magnetic susceptibility data for $`\mathrm{La}_{1.85}\mathrm{Sr}_{0.15}\mathrm{Cu}_{1-x}\mathrm{Zn}_x\mathrm{O}_4`$ and NMR data for $`\mathrm{YBa}_2(\mathrm{Cu}_{1-x}\mathrm{Zn}_x)_3\mathrm{O}_{7-\delta }`$ provided evidence that Zn induces magnetic moments in the $`\mathrm{CuO}_2`$ plane. However, not much attention has been paid to the effect of nonmagnetic impurities on the optical conductivity of the cuprates. In the present paper we report an exact diagonalization study of the optical conductivity when a nonmagnetic impurity is introduced into a system of antiferromagnetically correlated electrons.

## II Optical Conductivity

We consider the following model Hamiltonian for the study of optical conductivity in strongly correlated electron systems,

$$H=-t\underset{\langle ij\rangle \sigma }{{\sum }^{}}(\stackrel{~}{c}_{i\sigma }^{\dagger }\stackrel{~}{c}_{j\sigma }+\text{H.c.})+J\underset{\langle ij\rangle }{{\sum }^{}}\left(𝐒_i\cdot 𝐒_j-\frac{1}{4}n_in_j\right)+V_{\mathrm{imp}}\underset{\langle \mathrm{}\rangle }{\sum }(1-n_{\mathrm{}}).$$ (1)

Here $`\stackrel{~}{c}_{i\sigma }`$ is the electron annihilation operator at site $`i`$ with no double occupancy. $`𝐒_i=\frac{1}{2}c_{i\alpha }^{\dagger }𝝈_{\alpha \beta }c_{i\beta }`$ is the electron spin operator and $`n_i=\sum _\sigma c_{i\sigma }^{\dagger }c_{i\sigma }`$ is the number operator. $`t`$ is the hopping energy and $`J`$ the Heisenberg exchange energy. The prime in $`{\sum }_{\langle ij\rangle }^{}`$ denotes the sum over nearest-neighbor links $`\langle ij\rangle `$ between copper sites only, thus excluding the impurity site. Nonmagnetic Zn impurities substituted for Cu atoms have the closed-shell configuration of $`\mathrm{Zn}^{2+}`$ $`(3d^{10})`$ and are inert to electron hopping. The one-body Coulomb potential of the impurity, $`V_{\mathrm{imp}}`$, represents the repulsive interaction between the positive charge of the nonmagnetic impurity (the $`\mathrm{Zn}^{2+}`$ ion) and a doped charge carrier (hole). $`\sum _{\langle \mathrm{}\rangle }`$ runs over the nearest-neighbor links involving the impurity site.
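The exact diagonalization used in this study works by building the Hamiltonian of Eq. (1) as a large sparse matrix in the occupation basis and reducing it, by the Lanczos iteration, to a small tridiagonal matrix whose lowest eigenvalue converges rapidly to the ground-state energy $`E_0`$. The following is a minimal, generic sketch of that iteration (plain NumPy on a dense test matrix; it is not the actual code used in this work, and the construction of $`H`$ for the $`t`$–$`J`$ model with the impurity term is assumed to be done elsewhere):

```python
import numpy as np

def lanczos_ground_energy(H, v0, nsteps=100):
    """Estimate the lowest eigenvalue of a Hermitian matrix H from an
    nsteps-dimensional Lanczos tridiagonalization started from v0.
    No reorthogonalization is done, which is adequate for the extreme
    eigenvalue at moderate nsteps."""
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros_like(v)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(nsteps):
        w = H @ v - beta * v_prev        # apply H and subtract the previous direction
        alpha = np.vdot(v, w).real       # diagonal element <v|H|v>
        w = w - alpha * v                # orthogonalize against the current vector
        alphas.append(alpha)
        beta = np.linalg.norm(w)
        if beta < 1e-12:                 # an invariant subspace has been found
            break
        betas.append(beta)
        v_prev, v = v, w / beta
    k = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:k - 1], 1) + np.diag(betas[:k - 1], -1)
    return np.linalg.eigvalsh(T)[0]

# Quick self-test on a random symmetric matrix: the two printed
# numbers should agree closely.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
H = (A + A.T) / 2
print(lanczos_ground_energy(H, rng.standard_normal(200)))
print(np.linalg.eigvalsh(H)[0])
```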
The optical conductivity is obtained from

$$\sigma (\omega )=-\frac{1}{\omega \pi }\mathrm{Im}\left\langle \psi _0\left|j_x\frac{1}{\omega -H+E_0+iϵ}j_x\right|\psi _0\right\rangle ,$$ (2)

where $`j_x`$ is the current operator in the $`x`$-direction,

$$j_x=it\underset{𝐢,\sigma }{\sum }(c_{𝐢+\widehat{𝐱}\sigma }^{\dagger }c_{𝐢\sigma }-c_{𝐢\sigma }^{\dagger }c_{𝐢+\widehat{𝐱}\sigma })$$ (3)

and $`|\psi _0\rangle `$ is the ground state, of energy $`E_0`$.

For the study of optical conductivity in the presence of an impurity in a hole-doped system, we introduce a nonmagnetic impurity atom and one mobile hole into $`4\times 4`$ and $`\sqrt{20}\times \sqrt{20}`$ square lattices with periodic boundary conditions, small enough to allow Lanczos exact diagonalization calculations. In Fig. 1(a) the predicted optical conductivity is shown for various values of $`J`$ and $`V_{\mathrm{imp}}=0`$ on the $`4\times 4`$ square lattice. A Drude peak is seen to appear in the zero-frequency limit $`\omega \to 0`$ for all chosen values of $`J`$. For $`J=0.1t`$ a broad Drude peak is predicted with no special feature, as is shown in Fig. 1(a1). As the Heisenberg interaction strength $`J`$ increases further, interestingly enough, an additional small peak (denoted as $`E_B`$ and indicated by an upward arrow in the figure) is predicted to occur at a low frequency, as is shown in Figs. 1(a2)–(a4). In addition, a large peak (denoted as $`E_J`$ and indicated by a downward arrow) is seen to appear at a higher frequency. This peak becomes increasingly separated from both the Drude peak and the small peak with the increase of $`J`$. The larger peak at the higher frequency may be directly associated with the Heisenberg exchange correlation, but not with the nonmagnetic impurity. For further study of the low-energy peak, we choose $`t\simeq 0.44\text{ eV}`$ for the hopping energy and $`J\simeq 0.128\text{ eV}`$ for the Heisenberg exchange energy (i.e., $`J\simeq 0.3t`$), as obtained from a local-density-functional study. Similar features, with the low-energy peak persisting, appear for the larger cluster of $`\sqrt{20}\times \sqrt{20}`$ size, as is shown in Fig. 1(b). The presence of the low-energy peak $`E_B`$ is thus unlikely to be a finite-size artifact, although quantitative differences may exist.

In Fig. 2(a1) the predicted optical conductivity is shown for $`J=0.3t`$ and $`V_{\mathrm{imp}}=0`$. The low-energy peak occurs at $`E_B\simeq 0.16t`$. As the strength of the impurity potential $`V_{\mathrm{imp}}`$ increases, the intensity of the predicted peak becomes larger and its position shifts to a higher frequency value of $`E_B\simeq 0.20t`$. This feature is shown in Figs. 2(a2)–(a4). For comparison, the optical conductivity in the absence of the impurity is displayed in Fig. 2(a5). In the absence of the impurity the low-energy peak disappears, while the position of the high-energy peak $`E_J`$ remains unchanged for the same value of $`J=0.3t`$, as is shown in Fig. 2(a4) and Fig. 2(a5). The occurrence of the new small peak in the low-frequency region is attributed to the presence of the nonmagnetic impurity in the system of antiferromagnetically correlated electrons, while the large peak in the high-frequency region arises from the Heisenberg interaction between electrons. Recent experimental data on the optical conductivity of $`\mathrm{YBa}_2(\mathrm{Cu}_{1-x}\mathrm{Zn}_x)_3\mathrm{O}_{7-\delta }`$ crystals exhibited a similar trend (see the two figures in the second row of Fig. 2 in Ref. 11).
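For the comparison with the measured spectra made below, it helps to convert the model scales into wave numbers, taking $`t\simeq 0.44`$ eV as above and the standard conversion 1 eV $`\simeq `$ 8065.5 cm<sup>-1</sup> (a textbook constant, not from this paper):

$$E_B\simeq 0.20t\simeq 0.088\text{ eV}\simeq 710\text{ cm}^{-1},E_J\simeq 0.54t\simeq 0.24\text{ eV}.$$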
The observed optical conductivity for the $`4\%`$ Zn-doped samples measured at room temperature showed a small hump near 750 cm<sup>-1</sup>. For the choice of $`t\simeq 0.44\text{ eV}`$ \[Ref. 13\] the predicted peak position at $`E_B\simeq 0.2t`$ corresponds to a wave number of 710 cm<sup>-1</sup>, which is consistent with the measurement. Based on the conjecture of Poilblanc et al., this small low-energy peak may reflect the formation of a quasi-bound state as a result of resonant scattering off the nonmagnetic impurity. As mentioned above, the large peak which occurs at $`E_J\simeq 0.54t`$ \[Fig. 2(a5)\] with $`J=0.3t`$ is attributed to the Heisenberg interaction between electrons. This can be clearly understood from Fig. 2(a6): in the absence of the impurity the peak position is shifted to a higher frequency, $`E_J\simeq 1.1t`$, for an increased value of $`J`$, say, $`J=0.6t`$. Experimental studies of cuprate materials revealed that broad bands associated with the Heisenberg interaction appear near 0.2 eV (see Fig. 3 in Ref. 5). Our predicted value of $`E_J\simeq 0.54t\simeq 0.24`$ eV is in good agreement with the experimental observation.

## III Conclusion

We have investigated the effect of nonmagnetic impurities on the optical conductivity of systems of antiferromagnetically correlated electrons by using the Lanczos exact diagonalization scheme. It is found that a low-frequency peak in the optical conductivity appears near 710 cm<sup>-1</sup> in the presence of nonmagnetic impurities, in good agreement with experimental results. The predicted low-frequency peak in the optical conductivity may be due to resonant scattering of the hole off the nonmagnetic impurity, allowing the formation of a quasi-bound state. In addition, a relatively high and broad peak is found to occur in the high-frequency region as a consequence of the Heisenberg interaction between electrons, in general agreement with the observed peak position.

FIGURE CAPTIONS

* Optical conductivity versus frequency for various values of $`J`$ and $`V_{\mathrm{imp}}=0`$. Clusters of size (a) $`4\times 4`$ and (b) $`\sqrt{20}\times \sqrt{20}`$ with one doped hole in the presence of a single nonmagnetic impurity atom are considered. The $`\delta `$-functions are given a width $`ϵ=0.05t`$. $`E_B`$ indicates the low-energy peak owing to the presence of a nonmagnetic impurity with a positive charge and $`E_J`$ the peak associated with the Heisenberg exchange interaction.
* Optical conductivity versus frequency for $`J=0.3t`$ and various values of $`V_{\mathrm{imp}}`$. Clusters of size (a) $`4\times 4`$ and (b) $`\sqrt{20}\times \sqrt{20}`$ with one doped hole in the presence of one nonmagnetic impurity are considered. The $`\delta `$-functions are given a width $`ϵ=0.05t`$. $`E_B`$ indicates the low-energy peak owing to the impurity and $`E_J`$ is associated with the Heisenberg exchange correlation.
# High Resolution VSOP Imaging of a Southern Blazar PKS 1921–293 at 1.6 GHz

## 1. Introduction

The successful launch of the VLBI Space Observatory Programme (VSOP) satellite HALCA marks a great step forward in increasing the resolution over that possible with ground-based radio telescopes at 1.6 and 5.0 GHz (Hirabayashi et al. 1998 and references therein). HALCA's 8-meter-diameter antenna is in an elliptical orbit with an apogee of 21,400 km, a perigee of 560 km and an orbital period of 6.3 hours. VSOP observations, with a factor of $`\sim `$3 improvement in resolution compared to ground observations at the same frequencies, enable a close look at the compact cores of active galactic nuclei; as a result, it becomes possible to resolve individual components within the compact core and jets, and to study the bent jet in the vicinity of the core observed at higher frequencies with ground telescopes. In particular, the addition of HALCA significantly improves the north–south resolution for equatorial and southern radio sources, as illustrated in this paper. VSOP also provides almost an order of magnitude increase in the detectable brightness temperature (from 10<sup>11</sup>–10<sup>12</sup> K to 10<sup>12</sup>–10<sup>13</sup> K for bright sources).

As a highly polarized (cf. Worrall, Wilkes 1990) and optically violently variable quasar (Wills, Wills 1981) with m<sub>v</sub> = 17.5, PKS 1921–293 (OV–236) is classified as one of the brightest radio-loud blazars known. It shows dramatic variability from radio to X-ray. Curiously, no $`\gamma `$-ray emission has been detected by EGRET (Fichtel et al. 1994; Mukherjee et al. 1997). At a redshift of 0.352 (Wills, Wills 1981), it has an angular-to-linear scale conversion of 3$`h^{-1}`$ pc mas<sup>-1</sup> with H<sub>0</sub> = 100 $`h`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and q<sub>0</sub> = 0.5. The existing ground VLBI observations reveal a core–jet structure (cf. Kellermann et al. 1998). Its core is very compact (only a fraction of the beamwidth in diameter), with a brightness temperature (T<sub>b</sub>) in the rest frame of the source greater than 10<sup>12</sup> K. There is evidence that on a scale of 1–2 $`h^{-1}`$ pc from the core the jet moves along a curved trajectory superluminally (Shen et al. 1999) and then appears to end up in a diffuse component about 15 $`h^{-1}`$ pc from the core (cf. Tingay et al. 1998).

In this letter, we report on the results of 1.6 GHz VSOP imaging of PKS 1921–293. We describe the observations and data reduction and present a 1.6 GHz VSOP image of PKS 1921–293 in section 2. The evolution of its fine structure and the implication of the high T<sub>b</sub> will be discussed in section 3. A brief summary is given in section 4. Throughout this paper the spectral index, $`\alpha `$, is defined as S<sub>ν</sub> $`\propto `$ $`\nu ^\alpha `$.

## 2. VSOP Observations and Data Reduction

The 1.6 GHz VSOP observations of PKS 1921–293 were carried out as part of HALCA's in-orbit checkout on July 18, 1997, for a total of about 1.5 hours. The HALCA data acquisition was successfully done with the satellite tracking stations located at Goldstone (CA, USA) and NRAO Green Bank (WV, USA). (The NRAO is operated by Associated Universities Inc., under cooperative agreement with the National Science Foundation.) The ground radio telescopes consisted of 10 VLBA antennas and the phased VLA of NRAO. The left-circular polarization (LCP) data were recorded in the standard VLBA format with an intermediate frequency (IF) band of 16 MHz.
The cross-correlation of the data was carried out on the VLBA correlator at Socorro (NM, USA), with an output preaveraging time of 0.524 and 1.966 seconds for the space–ground and ground–ground baselines, respectively, and 256 spectral channels per IF band. The post-correlation data reduction was performed in NRAO AIPS and DIFMAP (Shepherd 1997). A priori visibility amplitude calibrations were applied using the antenna gain curves and the system temperatures measured at each antenna, including HALCA. In the fringe-fitting run, a solution interval of 1 minute and a point source model were employed. The VLBA antenna at Los Alamos (LA) served as the reference telescope throughout. Strong fringes were consistently detected on the space baselines to HALCA as well as on all the ground baselines. Following this, the data were averaged over all frequency channels, and then phase self-calibrated with a 10-second solution interval and a point source model for the purpose of further time averaging. Finally, the visibility data were exported to DIFMAP for imaging. The data were integrated over 30 seconds to reconcile the different preaveraging times from the correlator output, as mentioned above. The uncertainties in the averaged visibilities were computed from the scatter of data points within the averaging interval. Some obviously bad data were inspected and removed. Several iterations of cleaning and self-calibration of phases (and of amplitudes in the later stages) were performed. To ensure a better angular resolution with the HALCA data, uniform weighting of the data was adopted, with gridding weights scaled by the amplitude errors raised to the power of –1. The resulting image is shown in figure 1. The FWHM beam size is 4.1 mas $`\times `$ 1.1 mas at a position angle of 46°. (For comparison, the synthesized beam of the ground-only observation is 21.7 mas $`\times `$ 5.7 mas along –4°.) The peak flux density and the rms noise level are 4.61 Jy/beam and 7.0 mJy/beam, respectively. Thus, a peak-to-rms dynamic range of 650 is obtained in our short 1.6 GHz VSOP image.

## 3. Discussion

### 3.1. Structural Evolution

PKS 1921–293 was unresolved on arcsecond scales in VLA observations (Perley 1982; de Pater et al. 1985). Ground VLBI images at centimeter wavelengths showed a typical core–jet structure, with a diffuse jet feature located at a position angle $`\sim `$ 30° with respect to the compact, strong core (cf. Fey et al. 1996; Tingay et al. 1998; Kellermann et al. 1998). At 43 GHz, three-epoch VLBA images provide evidence for a superluminal jet ($`\beta _{app}`$ = 2.1 $`h^{-1}`$) within 1–2 $`h^{-1}`$ pc, which has a sharp bend in position angle compared to the jet seen on a scale of ten parsecs (Shen et al. 1999).

The core–jet morphology of our VSOP image is in good agreement with the ground VLBI images made at other centimeter wavelengths. However, the addition of the space VLBI antenna greatly improved the resolution (as can be seen from the comparison of the beams with and without HALCA), and thus enables us to clearly identify the compact core. To yield a quantitative description of the source structure, we applied a model consisting of three elliptical Gaussian components to fit both the amplitudes and phases of the calibrated visibility data. The resulting model parameters and their 1$`\sigma `$ errors are listed in Table 1. The model reveals that the data are consistent with an inner jet (component 2) at 1.5 mas north of the core (component 1), as well as a large jet feature (component 3) at a position angle of
30.4° and a separation of 5.6 mas from the core. We note that the orientation angles of the two central components are practically identical, and both are, within the errors, the same as the position angle of the synthesized beam for this observation. We do not regard these coincidences as being physically meaningful, although the model-fitting procedure gives the best results with these values. We are confident, however, that the relative separation and position angle of the components are correct.

We show in Figure 2 the distribution of visibility amplitude versus u-v distance, along with the visibilities generated from our model fit (solid curves). The plot shows a dramatic drop of the correlated flux density within the ground baselines ($`<`$ 30 M$`\lambda `$), associated with the resolved diffuse jet (component 3) seen in all centimeter ground VLBI images. On the space baselines ($`>`$ 50 M$`\lambda `$), there is still a clear but gradual decrease in amplitude (to $`\sim `$ 1.0 Jy on the longest baselines of 140 M$`\lambda `$), which indicates that the compact central part is no longer unresolved. The measured non-zero closure phases on the HALCA baselines also confirm this. The introduction of component 2 is required to fit the visibility distribution in the u-v distance range of (50–70) M$`\lambda `$, which is essentially composed of space baselines in the north–south direction. We also tried additional model fitting to the visibilities on the space baselines only. This should fit the compact core structure well, since the extended structure about 5.6 mas away is totally resolved. The fit gives a separation of the two components of about 1.6 mas along 3.3°, consistent with the results in Table 1. We note that component 2 is spatially coincident with a weak extended feature of $`\sim `$18 mJy observed at 43 GHz (Shen et al. 1999). If these are the same component at the two frequencies, then the spectral index between 1.6 and 43 GHz would be steeper than –1.6, which is very common in optically thin jets. However, since PKS 1921–293 is highly variable, this estimate of the spectral index, made from measurements taken at different epochs (1.5 years apart), may not be accurate, and further observations are definitely required to clarify this. We find that the position of this inner northern jet component from our 1.6 GHz VSOP experiment, which has 7 times better resolution in the north–south direction than ground-only VLBI observations (see the u-v coverage inset in Figure 2), lies on a common curved path connecting the jet within 1–2 $`h^{-1}`$ pc to the 10 pc–scale jet seen in the ground VLBI images.

### 3.2. Brightness Temperature T<sub>b</sub>

PKS 1921–293 has one of the highest brightness temperatures measured in the rest frame of the source. A 22 GHz VLBI survey (Moellenbrock et al. 1996) gave a lower limit of T<sub>b</sub> $`>`$ $`7.0_{-2.1}^{+4.0}`$ $`\times `$ 10<sup>12</sup> K for PKS 1921–293. A previous VLBI experiment, using a telescope in Earth orbit, estimated a core T<sub>b</sub> of 3.8 $`\times `$ 10<sup>12</sup> K at 2.3 GHz (Linfield et al. 1989), the highest in the sample for sources with known redshifts. VLBI images made at 5.0 GHz also found T<sub>b</sub> significantly greater than 10<sup>12</sup> K (Shen et al. 1997; Tingay et al. 1998). The derived core T<sub>b</sub> from our 1.6 GHz VSOP image is (2.55$`\pm `$0.66) $`\times `$ 10<sup>12</sup> K, which is consistent with those earlier estimates.
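For reference, the rest-frame brightness temperature quoted here follows from the standard expression for an elliptical Gaussian component (the numerical coefficient is the standard one, not derived in this paper), with $`S_\nu `$ the component flux density in Jy, $`\nu `$ the observing frequency in GHz, and $`\theta _{maj}`$, $`\theta _{min}`$ the fitted FWHM axes in mas:

$$T_b\simeq 1.22\times 10^{12}\frac{S_\nu }{\nu ^2\theta _{maj}\theta _{min}}(1+z)\text{ K};$$

this is the relation underlying the value quoted above, evaluated with the fitted core parameters of Table 1.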
It has been shown that there is a limit to T<sub>b</sub> for incoherent synchrotron radiation, and a brightness temperature in excess of this limit is ascribed to the effect of Doppler boosting in a relativistic jet beamed toward the observer with a Doppler factor $`\delta `$ = \[$`\gamma `$(1 – $`\beta `$cos$`\theta `$)\]<sup>-1</sup> (cf. Readhead 1994). Here $`\gamma `$ = (1 – $`\beta ^2`$)<sup>-1/2</sup> is the Lorentz factor, $`\beta `$ is the jet velocity in units of the speed of light, and $`\theta `$ is the angle between the line of sight and the radio jet axis. A commonly accepted explanation is that the observed upper limit to $`T_b`$ ($`\sim `$ 10<sup>12</sup> K) is caused by the “inverse Compton catastrophe” (Kellermann, Pauliny-Toth 1969). Using formulae (1a) and (1b) rederived by Readhead (1994), we can calculate this inverse Compton scattering limit as T<sub>b,ic</sub> = 1.2 $`\times `$ 10<sup>11</sup> K for PKS 1921–293. Here we have applied a peak frequency of 8.0 GHz and assumed a high-frequency cutoff of 100 GHz. The synchrotron self-absorption turn-over frequency of 8.0 GHz was claimed by Brown et al. (1989) and is confirmed by single-dish measurements from the University of Michigan Radio Astronomy Observatory (UMRAO) made around our VSOP observational epoch, from which we also obtained an optically thin spectral index of –0.15 as well as a total flux density of 17.9 Jy at 8.0 GHz. In order to reconcile this limit with T<sub>b</sub> = (1.71$`\pm `$0.44) $`\times `$ 10<sup>12</sup> K from our 1.6 GHz VSOP results (here, we have multiplied by a factor of 0.67 to convert a brightness temperature derived assuming a Gaussian component to that of an optically thin uniform sphere), a Doppler boosting factor $`D_{ic}`$ = 14.3$`\pm `$3.7 is required to avoid the inverse Compton catastrophe. This agrees very well with the lower limit to the Doppler factor ($`\delta _{ssc}`$) of 14 derived from the argument that the observed X-ray emission is produced primarily by the inverse Compton scattering of synchrotron radiation (Güijosa, Daly 1996 and references therein).

The inhomogeneous relativistic jet model (Blandford, Königl 1979; Königl 1981) also sets an upper limit to the measured brightness temperature. This limit is independent of frequency and depends very weakly on the observables, with the approximate expression T<sub>b,j</sub> $`\sim `$ 3.0 $`\times `$ 10<sup>11</sup> $`D_j`$<sup>5/6</sup> K, where $`D_j`$ is the Doppler factor associated with this jet model. This results in $`D_j`$ = 12.7$`\pm `$4.0, which is very similar to $`D_{ic}`$ from the inverse Compton catastrophe. Combining these with the detected superluminal jet motion $`\beta _{app}`$ = 3.0 (Shen et al. 1999; choosing $`h`$ = 0.7 here), we can derive the bulk Lorentz factor ($`\gamma `$) and the jet angle with the line of sight ($`\theta `$) as follows: $`\gamma _{ic}`$ = 7.5 and $`\theta _{ic}`$ = 1.6° from $`D_{ic}`$ = 14.3, and $`\gamma _i`$ = 6.7 and $`\theta _i`$ = 2.0° from $`D_j`$ = 12.7, respectively. Both models require about the same relativistic beaming factor to explain the high T<sub>b</sub> of PKS 1921–293, and we cannot distinguish between them.

Readhead (1994) introduced the “equipartition brightness temperature” cutoff ($`\sim `$ 10<sup>11</sup> K) from a statistical analysis. In the case of PKS 1921–293, it gives a limit of T<sub>b,eq</sub> = 9.8 $`\times `$ 10<sup>10</sup> $`\delta ^{0.78}`$ $`h^{2/17}`$ K.
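The Lorentz factors and viewing angles derived above follow from the standard kinematic relations between $`\beta _{app}`$, $`\delta `$, $`\gamma `$ and $`\theta `$ (these relations are standard, reproduced here only as a consistency check):

$$\gamma =\frac{\beta _{app}^2+\delta ^2+1}{2\delta },\mathrm{tan}\theta =\frac{2\beta _{app}}{\beta _{app}^2+\delta ^2-1};$$

with $`\beta _{app}`$ = 3.0 these give $`\gamma `$ = 7.5, $`\theta `$ = 1.6° for $`\delta `$ = 14.3 and $`\gamma `$ = 6.7, $`\theta `$ = 2.0° for $`\delta `$ = 12.7, matching the values quoted above.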
The 1.6 GHz VSOP core significantly exceeds this equipartition limit, and therefore an equipartition Doppler factor as large as (39.1$`\pm `$12.9) $`h^{0.15}`$ is needed. This is about 3 times the values of $`D_{ic}`$ and $`D_j`$ and suggests that PKS 1921–293 may not be in equipartition. If PKS 1921–293 is not in equipartition, we can use the ratio $`D_{eq}`$/$`\delta `$ = T<sub>b</sub>/T<sub>b,eq</sub>, with the assumption that $`\delta `$ is about 13 (a value close to $`D_{ic}`$, $`D_j`$ and $`\delta _{ssc}`$), to derive an equipartition Doppler factor $`D_{eq}`$ = (30.6$`\pm `$7.9) $`h^{2/17}`$, and then $`\gamma _{eq}`$ = 14.8 and $`\theta _{eq}`$ = 0.4°. We can further measure how far the source is from equipartition by calculating the ratio $`D_{eq}`$/$`\delta `$, which ranges from 2.2 $`h^{2/17}`$ to 2.4 $`h^{2/17}`$ as $`\delta `$ changes from 14.3 ($`D_{ic}`$) to 12.7 ($`D_j`$). Güijosa and Daly (1996) obtained a ratio of 2.0 with $`D_{eq}`$ = 29 (assuming $`h=1`$). This leads to the conclusion that the core of PKS 1921–293 is strongly particle dominated, since the particle energy density ($`u_p`$) exceeds the magnetic field energy density ($`u_B`$) by a factor of ($`D_{eq}`$/$`\delta `$)<sup>8.5</sup>, which is as large as $`\sim `$ 1000. Such a departure from equipartition has also been reported for the superluminal blazar 3C 345 (Unwin et al. 1994).

We note that neither PKS 1921–293 nor 3C 345 has been detected at $`>`$100 MeV $`\gamma `$-ray energies, in spite of the fact that they are among the strongest blazars at radio wavelengths. Bower and Backer (1998), in their study of the $`\gamma `$-ray blazar NRAO 530, favor the inhomogeneous jet model, which produces a reasonable Doppler factor while maintaining energy equipartition. They also speculate that EGRET-detected blazars are those in which the equipartition limit is briefly superseded by the inverse Compton catastrophe limit. It is also possible that blazars not detected by EGRET may not have equipartition between the particle and magnetic field energy densities, as in the case of PKS 1921–293, while one of the limits imposed by the inverse Compton catastrophe or the inhomogeneous relativistic jet model applies. In any case, a Doppler factor of 12 is required for PKS 1921–293. This results in a narrow viewing angle $`\theta `$ = 1.9°, at which any small bending angle could be enlarged when projected on the sky. Such a phenomenon might be responsible for the observed jet curvature and be further related to the non-detection by EGRET (cf. Hong et al. 1998; Tingay et al. 1998).

## 4. Conclusions

We have carried out a VSOP observation of the southern blazar PKS 1921–293. The overall source morphology is consistent with previous ground VLBI results. As is clear from Figure 2, the space VLBI observations are critical for isolating the core of the source and permit images to be made with much finer spatial resolution than is possible with ground VLBI at the same frequency. In the case of PKS 1921–293, the high resolution provided by the VSOP data, especially along the north–south direction, plays an irreplaceable role in our resolving an inner jet component at about 1.5 mas north of the compact core. When compared with the ground VLBI images, this feature is believed to be related to the emission along the curved trajectory from the bent jet within 1–2 $`h^{-1}`$ pc to the 10 pc–scale elongated jets.
By model fitting the VSOP calibrated data, we obtain a core brightness temperature of 2.6 $`\times `$ 10<sup>12</sup> K in the source rest frame, under the assumption that the source has a Gaussian brightness distribution. This is in excess of 10<sup>12</sup> K and implies relativistic beaming in the core. We analyzed the source in terms of three models, involving the inverse Compton catastrophe, an inhomogeneous relativistic jet, and the equipartition of energy between the radiating particles and the magnetic field. We found no significant difference in the Doppler factors for the first two models, though the inhomogeneous jet model is more realistic than the homogeneous sphere model for compact radio sources. Both models, however, will eventually lead to a particle-dominated departure from the equipartition state according to the equipartition argument. Otherwise, a relatively large Doppler factor is needed in order to maintain equipartition of energy in the source. Thus, our analysis of the high $`T_b`$ in this $`\gamma `$-ray-quiet blazar PKS 1921–293 does not favor any particular model. More VSOP imaging studies of strong blazars with high brightness temperatures will be necessary to improve our understanding of the physical processes within.

We gratefully acknowledge the VSOP Project, which is led by the Japanese Institute of Space and Astronautical Science in cooperation with many organizations and radio telescopes around the world. This research has made use of data from the University of Michigan Radio Astronomy Observatory, which is supported by the National Science Foundation and by funds from the University of Michigan. Research at the ASIAA is funded by the Academia Sinica.

## References

Blandford R. D., Königl A. 1979, ApJ 232, 34
Bower G. C., Backer D. C. 1998, ApJ 507, L117
Brown L. M. J., Robson E. I., Gear W. K., Hughes D. H., Griffin M. J., Geldzahler B. J., Schwartz P. R., Smith M. G. et al. 1989, ApJ 340, 129
de Pater I., Schloerb F. P., Johnson A. H. 1985, AJ 90, 846
Güijosa A., Daly R. A. 1996, ApJ 461, 600
Fey A. L., Clegg A. W., Fomalont E. B. 1996, ApJS 105, 299
Fichtel C. E., Bertsch D. L., Chiang J., Dingus B. L., Esposito J. A., Fierro J. M., Hartman R. C., Hunter S. D. et al. 1994, ApJS 94, 551
Hirabayashi H., Hirosawa H., Kobayashi H., Murata Y., Edwards P. G., Fomalont E. B., Fujisawa K., Ichikawa T. et al. 1998, Science 281, 1825 and erratum 282, 1995
Hong X. Y., Jiang D. R., Shen Z.-Q. 1998, A&A 330, L45
Kellermann K. I., Pauliny-Toth I. I. K. 1969, ApJ 193, 43
Kellermann K. I., Vermeulen R. C., Zensus J. A., Cohen M. H. 1998, AJ 115, 1295
Königl A. 1981, ApJ 243, 700
Linfield R. P., Levy G. S., Ulvestad J. S., Edwards C. D., DiNardo S. J., Stavert L. R., Ottenhoff C. H., Whitney A. R. et al. 1989, ApJ 336, 1105
Moellenbrock G. A., Fujisawa K., Preston R. A., Gurvits L. I., Dewey R. J., Hirabayashi H., Inoue M., Kameno S. et al. 1996, AJ 111, 2174
Mukherjee R., Bertsch D. L., Bloom S. D., Dingus B. L., Esposito J. A., Fichtel C. E., Hartman R. C., Hunter S. D. et al. 1997, ApJ 490, 116
Perley R. A. 1982, AJ 87, 859
Readhead A. C. S. 1994, ApJ 426, 51
Shen Z.-Q., Moran J. M., Kellermann K. I. 1999, in preparation
Shen Z.-Q., Wan T.-S., Moran J. M., Jauncey D. L., Reynolds J. E., Tzioumis A. K., Gough R. G., Ferris R. H. et al. 1997, AJ 114, 1999
Shepherd M. C. 1997, in Astronomical Data Analysis Software and Systems VI, ASP Conf. Series 125, eds. G. Hunt & H. E. Payne (San Francisco: ASP), 77
Tingay S. J., Murphy D. W., Lovell J. E. J., Costa M. E., McCulloch P., Edwards P. G., Jauncey D. L., Reynolds J. E. et al. 1998, ApJ 497, 594
Tingay S. J., Murphy D. W., Edwards P. G. 1998, ApJ 500, 673
Unwin S. C., Wehrle A. E., Urry C. M., Gilmore D. M., Barton E. J., Kjerulf B. C., Zensus J. A., Rabaca C. R. 1994, ApJ 432, 103
Wills D., Wills B. J. 1981, Nature 289, 384
Worrall D. M., Wilkes B. J. 1990, ApJ 360, 396
# GAUGE TRANSFORMATIONS ON MASSLESS SPIN-1/2 PARTICLES AND NEUTRINO POLARIZATION AS A CONSEQUENCE OF GAUGE INVARIANCE

## 1 Introduction

The purpose of this paper is to discuss the internal space-time symmetries of massless particles, particularly their gauge degrees of freedom. The internal space-time symmetries of massive and massless particles are dictated by the little groups of the Poincaré group, which are isomorphic to the three-dimensional rotation group and the two-dimensional Euclidean group, respectively. The little group is the maximal subgroup of the Lorentz group whose transformations leave the four-momentum of the particle invariant. Using the properties of these groups, we would like to address the following questions. For massless particles, there are still questions for which answers are not readily available. Why do spins have to be parallel or anti-parallel to the momentum? While photons have a gauge degree of freedom with two possible spin directions, why do massless neutrinos have only one spin direction, without gauge degrees of freedom? The purpose of this note is to address these questions within the framework of Wigner's little groups of the Poincaré group.

The group of Lorentz transformations is generated by three rotation generators $`J_i`$ and three boost generators $`K_i`$. They satisfy the commutation relations:

$$[J_i,J_j]=iϵ_{ijk}J_k,[J_i,K_j]=iϵ_{ijk}K_k,[K_i,K_j]=-iϵ_{ijk}J_k.$$ (1)

In studying the space-time symmetries dictated by Wigner's little group, it is important to choose a particular value of the four-momentum. For a massive point particle, there is a Lorentz frame in which the particle is at rest. In this frame, the little group is the three-dimensional rotation group. This is the fundamental symmetry associated with the concept of spin. For a massless particle, there is no Lorentz frame in which its momentum is zero. Thus we have to settle for a non-zero value of the momentum along one convenient direction. The three-parameter little group in this case is isomorphic to the $`E(2)`$ group. The rotational degree of freedom corresponds to the helicity of the massless particle, while the translational degrees of freedom are gauge degrees of freedom of the massless particle.

In this report, we discuss first the $`O(3)`$-like little group for a massive particle. We then study the $`E(2)`$-like little group for massless particles. The $`O(3)`$-like little group is applicable to a particle at rest. If the system is boosted along the $`z`$ axis, the little group becomes a “Lorentz-boosted” rotation group, whose generators still satisfy the commutation relations for the rotation group. However, in the infinite-momentum/zero-mass limit, the commutation relations should become those for massless particles. This process can be carried out for both spin-1 and spin-1/2 particles. This “group-contraction” process can in principle be extended to all higher-spin particles. In this report, we are particularly interested in spin-1/2 particles.

In Sec. 2, we study the contraction of the $`O(3)`$-like little group to the $`E(2)`$-like little group. In Sec. 3, we discuss in detail the $`E(2)`$-like symmetry of massless particles. Secs. 4 and 5 are devoted to the question of neutrino polarization and gauge transformations. Recently, the Lorentz group has established its prominence in optics. For instance, from the mathematical point of view, “squeezed states” are infinite-dimensional unitary representations of the Lorentz group.
As for finite spinor representations, it is possible to formulate optical filters in terms of the two-by-two matrices of the $`SL(2,c)`$ group. In Sec. 6, we give a brief review of the recent development in this field and its relevance to the physics of neutrinos and photons.

## 2 Massless Particle as a Limiting Case of Massive Particle

The $`O(3)`$-like little group for a particle at rest is generated by $`J_1,J_2`$, and $`J_3`$. If the particle is boosted along the $`z`$ direction with the boost operator $`B(\eta )=\mathrm{exp}\left(-i\eta K_3\right)`$, the little group is generated by $`J_i^{}=B(\eta )J_iB(\eta )^{-1}`$. Because $`J_3`$ commutes with $`K_3`$, $`J_3`$ remains invariant under this boost. $`J_1^{}`$ and $`J_2^{}`$ take the form

$$J_1^{}=(\mathrm{cosh}\eta )J_1+(\mathrm{sinh}\eta )K_2,J_2^{}=(\mathrm{cosh}\eta )J_2-(\mathrm{sinh}\eta )K_1.$$ (2)

The boost parameter $`\eta `$ becomes infinite if the mass of the particle becomes vanishingly small. For large values of $`\eta `$, we can consider $`N_1`$ and $`N_2`$ defined as $`N_1=-(\mathrm{cosh}\eta )^{-1}J_2^{}`$ and $`N_2=(\mathrm{cosh}\eta )^{-1}J_1^{}`$, respectively. Then, in the infinite-$`\eta `$ limit,

$$N_1=K_1-J_2,N_2=K_2+J_1.$$ (3)

These operators satisfy the commutation relations

$$[J_3,N_1]=iN_2,[J_3,N_2]=-iN_1,[N_1,N_2]=0.$$ (4)

$`J_3,N_1`$, and $`N_2`$ are the generators of the $`E(2)`$-like little group for massless particles. In order to relate the above little group to transformations more familiar to us, let us consider the two-dimensional $`xy`$ coordinate system. We can make rotations around the origin and translations along the $`x`$ and $`y`$ axes. The rotation generator $`L_z`$ takes the form

$$L_z=-i\left\{x\frac{\partial }{\partial y}-y\frac{\partial }{\partial x}\right\}.$$ (5)

The translation generators are

$$P_x=-i\frac{\partial }{\partial x},P_y=-i\frac{\partial }{\partial y}.$$ (6)

These generators satisfy the commutation relations:

$$[L_z,P_x]=iP_y,[L_z,P_y]=-iP_x,[P_x,P_y]=0.$$ (7)

These commutation relations are like those given in Eq. (4). They become identical if $`L_z`$, $`P_x`$ and $`P_y`$ are replaced by $`J_3`$, $`N_1`$ and $`N_2`$, respectively. This is the reason why the little group for massless particles is like $`E(2)`$.

## 3 Symmetry of Massless Particles

The internal space-time symmetry of massless particles is governed by the cylindrical group, which is locally isomorphic to the two-dimensional Euclidean group. In this case, we can visualize a circular cylinder whose axis is parallel to the momentum. On the surface of this cylinder, we can rotate a point around the axis or translate it along the direction of the axis. The rotational degree of freedom is associated with the helicity, while the translation corresponds to a gauge transformation in the case of photons. The question then is whether this translational degree of freedom is shared by all massless particles, including neutrinos and gravitons. We shall see in this paper that the requirement of gauge invariance leads to the polarization of neutrinos. Since this translational degree of freedom is a gauge degree of freedom for photons, we can extend the concept of gauge transformations to all massless particles, including neutrinos.

If we use the four-vector convention $`x^\mu =(x,y,z,t)`$, the generators of rotations around and boosts along the $`z`$ axis take the form

$$J_3=\left(\begin{array}{cccc}0& -i& 0& 0\\ i& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right),K_3=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& i\\ 0& 0& i& 0\end{array}\right),$$ (8)

respectively.
There are four other generators, but they are readily available in the literature. They are applicable also to the four-potential of the electromagnetic field or to a massive vector meson. The role of $`J_3`$ is well known. It is the helicity operator and generates rotations around the momentum. The $`N_1`$ and $`N_2`$ matrices take the form

$$N_1=\left(\begin{array}{cccc}0& 0& -i& i\\ 0& 0& 0& 0\\ i& 0& 0& 0\\ i& 0& 0& 0\end{array}\right),N_2=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& -i& i\\ 0& i& 0& 0\\ 0& i& 0& 0\end{array}\right).$$ (9)

The transformation matrix is

$$D(u,v)=\mathrm{exp}\left\{-i\left(uN_1+vN_2\right)\right\}=\left(\begin{array}{cccc}1& 0& -u& u\\ 0& 1& -v& v\\ u& v& 1-(u^2+v^2)/2& (u^2+v^2)/2\\ u& v& -(u^2+v^2)/2& 1+(u^2+v^2)/2\end{array}\right).$$ (10)

If this matrix is applied to the electromagnetic wave propagating along the $`z`$ direction

$$A^\mu (z,t)=(A_1,A_2,A_3,A_0)e^{i\omega (z-t)},$$ (11)

which satisfies the Lorentz condition $`A_3=A_0`$, the $`D(u,v)`$ matrix can be reduced to

$$D(u,v)=\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& 1& 0& 0\\ u& v& 1& 0\\ u& v& 0& 1\end{array}\right).$$ (12)

If $`A_3=A_0`$, the four-vector $`(A_1,A_2,A_3,A_3)`$ can be written as

$$(A_1,A_2,A_3,A_0)=(A_1,A_2,0,0)+\lambda (0,0,\omega ,\omega ),$$ (13)

with $`A_3=\lambda \omega `$. The four-vector $`(0,0,\omega ,\omega )`$ represents the four-momentum. If the $`D`$ matrix of Eq. (12) is applied to the above four-vector, the result is

$$(A_1,A_2,A_3,A_0)=(A_1,A_2,0,0)+\lambda ^{}(0,0,\omega ,\omega ),$$ (14)

with $`\lambda ^{}=\lambda +(1/\omega )\left(uA_1+vA_2\right)`$. Thus the $`D`$ matrix performs a gauge transformation when applied to the electromagnetic wave propagating along the $`z`$ direction.

With the simplified form of the $`D`$ matrix in Eq. (12), it is possible to give a geometrical interpretation of the little group. If we take into account the rotation around the $`z`$ axis, the most general form of the little group transformations is $`R(\varphi )D(u,v)`$, where $`R(\varphi )`$ is the rotation matrix. The transformation matrix is

$$R(\varphi )D(u,v)=\left(\begin{array}{cccc}\mathrm{cos}\varphi & -\mathrm{sin}\varphi & 0& 0\\ \mathrm{sin}\varphi & \mathrm{cos}\varphi & 0& 0\\ u& v& 1& 0\\ u& v& 0& 1\end{array}\right).$$ (15)

Since the third and fourth rows act in the same way on four-vectors whose $`z`$ and $`t`$ components are equal, we can consider the three-dimensional space $`(x,y,z,z)`$. It is clear that $`R(\varphi )`$ performs a rotation around the $`z`$ axis. The $`D`$ matrix performs translations along the $`z`$ axis. Indeed, the internal space-time symmetry of massless particles is that of the circular cylinder.

## 4 Massless Spin-1/2 Particles

The question then is whether we can carry out the same procedure for spin-1/2 massless particles. We can also ask whether it is possible to combine two spin-1/2 particles to construct a gauge-dependent four-potential. With this point in mind, let us go back to the commutation relations of Eq. (1). They are invariant under the sign change of the boost operators. Therefore, if there is a representation of the Lorentz group generated by $`J_i`$ and $`K_i`$, it is possible to construct a representation with $`J_i`$ and $`-K_i`$. For spin-1/2 particles, rotations are generated by $`J_i=\frac{1}{2}\sigma _i`$, and the boosts by $`K_i=(+)\frac{i}{2}\sigma _i`$ or $`K_i=(-)\frac{i}{2}\sigma _i`$. The Lorentz group in this representation is often called $`SL(2,c)`$.
If we take the (+) sign, the $`N_1`$ and $`N_2`$ generators are

$$N_1=\left(\begin{array}{cc}0& i\\ 0& 0\end{array}\right),N_2=\left(\begin{array}{cc}0& 1\\ 0& 0\end{array}\right).$$ (16)

On the other hand, for the (–) sign, we use the “dotted representation” for $`N_1`$ and $`N_2`$:

$$\dot{N}_1=\left(\begin{array}{cc}0& 0\\ -i& 0\end{array}\right),\dot{N}_2=\left(\begin{array}{cc}0& 0\\ 1& 0\end{array}\right).$$ (17)

There are therefore two different $`D`$ matrices:

$$D(u,v)=\left(\begin{array}{cc}1& u-iv\\ 0& 1\end{array}\right),\dot{D}(u,v)=\left(\begin{array}{cc}1& 0\\ -(u+iv)& 1\end{array}\right).$$ (18)

These are the gauge transformation matrices for massless spin-1/2 particles. As for the spinors, let us start with a massive particle at rest, and the usual normalized Pauli spinors $`\chi _+`$ and $`\chi _{}`$ for the spin in the positive and negative $`z`$ directions, respectively. If we take into account Lorentz boosts, there are two additional spinors. We shall use the notation $`\chi _\pm `$ for the spinors to which the boost generators $`K_i=(+)\frac{i}{2}\sigma _i`$ are applicable, and $`\dot{\chi }_\pm `$ for those to which $`K_i=(-)\frac{i}{2}\sigma _i`$ are applicable. There are therefore four independent spinors.

The $`SL(2,c)`$ spinors are gauge-invariant in the sense that

$$D(u,v)\chi _+=\chi _+,\dot{D}(u,v)\dot{\chi }_{}=\dot{\chi }_{}.$$ (19)

On the other hand, the $`SL(2,c)`$ spinors are gauge-dependent in the sense that

$$D(u,v)\chi _{}=\chi _{}+(u-iv)\chi _+,\dot{D}(u,v)\dot{\chi }_+=\dot{\chi }_+-(u+iv)\dot{\chi }_{}.$$ (20)

The gauge-invariant spinors of Eq. (19) appear as polarized neutrinos in the real world. The Dirac equation for massless neutrinos contains only the gauge-invariant $`SL(2,c)`$ spinors.

## 5 The Origin of Gauge Degrees of Freedom

However, where do the above gauge-dependent spinors stand in the physics of spin-1/2 particles? Are they really responsible for the gauge dependence of electromagnetic four-potentials when we construct a four-vector by taking a bilinear combination of spinors? The relation between the $`SL(2,c)`$ spinors and the four-vectors has been discussed in the literature for massive particles. However, does it hold for the massless case? The central issue is again the gauge transformation. The four-potentials are gauge-dependent, while the spinors allowed in the Dirac equation are gauge-invariant. Therefore, it is not possible to construct four-potentials from the Dirac spinors alone. However, it is possible to construct the four-vector with the four $`SL(2,c)`$ spinors. Indeed,

$$\chi _+\dot{\chi }_+=(1,i,0,0),\chi _{}\dot{\chi }_{}=(1,-i,0,0),\chi _+\dot{\chi }_{}=(0,0,1,1),\chi _{}\dot{\chi }_+=(0,0,1,-1).$$ (21)

These unit vectors in one Lorentz frame are not the unit vectors in other frames. The $`D`$ transformation applicable to the above four-vectors is clearly $`D(u,v)\dot{D}(u,v)`$:

$$D(u,v)\dot{D}(u,v)|\chi _+\dot{\chi }_+>=|\chi _+\dot{\chi }_+>-(u+iv)|\chi _+\dot{\chi }_{}>,$$
$$D(u,v)\dot{D}(u,v)|\chi _{}\dot{\chi }_{}>=|\chi _{}\dot{\chi }_{}>+(u-iv)|\chi _+\dot{\chi }_{}>,$$
$$D(u,v)\dot{D}(u,v)|\chi _+\dot{\chi }_{}>=|\chi _+\dot{\chi }_{}>.$$ (22)

The component $`\chi _{}\dot{\chi }_+=(0,0,1,-1)`$ vanishes by the Lorentz condition. The first two equations of the above expression correspond to the gauge transformations of the photon polarization vectors. The third equation describes the effect of the $`D`$ transformation on the four-momentum, confirming the fact that $`D(u,v)`$ is an element of the little group.
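As a quick consistency check on Eq. (18) (this verification is ours, not part of the original derivation), note that the matrix $`uN_1+vN_2`$ built from Eq. (16) is nilpotent, so the exponential series terminates after the linear term:

$$D(u,v)=\mathrm{exp}\left\{-i(uN_1+vN_2)\right\}=1-i(uN_1+vN_2)=\left(\begin{array}{cc}1& u-iv\\ 0& 1\end{array}\right),$$

and the same one-step expansion with the dotted generators of Eq. (17) reproduces $`\dot{D}(u,v)`$ of Eq. (18).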
The operation in Eq. (22) is identical to that of the four-by-four $`D`$ matrix of Eq. (12) on the photon polarization vectors. It is possible to construct the six-component Maxwell tensor by making combinations of two undotted or two dotted spinors. For massless particles, the only gauge-invariant components are $`|\chi _+\chi _+>`$ and $`|\dot{\chi }_{}\dot{\chi }_{}>`$. They correspond to the photons in the Maxwell tensor representation with positive and negative helicities, respectively. It is also possible to construct Maxwell-tensor fields for a massive particle, and to obtain the massless Maxwell fields by group contraction.

## 6 Lorentz Group in Polarization Optics

In studying polarized light propagating along the $`z`$ direction, the traditional approach is to consider the $`x`$ and $`y`$ components of the electric fields. Their amplitude ratio and phase difference determine the state of polarization. Thus, we can change the polarization either by adjusting the amplitudes, by changing the relative phase, or both. Let us write these electric fields as

$$\left(\begin{array}{c}E_x\\ E_y\end{array}\right)=\left(\begin{array}{c}A\mathrm{exp}\left\{i(kz-\omega t+\varphi _1)\right\}\\ B\mathrm{exp}\left\{i(kz-\omega t+\varphi _2)\right\}\end{array}\right),$$ (23)

where $`A`$ and $`B`$ are the amplitudes, which are real and positive numbers, and $`\varphi _1`$ and $`\varphi _2`$ are the phases of the $`x`$ and $`y`$ components, respectively. This column matrix is called the Jones vector. In dealing with light waves, we have to realize that the intensity is the quantity we measure. There then arises the question of coherence and time averaging. We are thus led to consider the following parameters:

$$S_{11}=<E_x^{*}E_x>,S_{22}=<E_y^{*}E_y>,$$ (24)
$$S_{12}=<E_x^{*}E_y>,S_{21}=<E_y^{*}E_x>.$$

These four quantities are known as the Stokes parameters. It is possible to treat the expression of Eq. (23) as a two-component spinor to which the two-by-two $`SL(2,c)`$ matrices are applicable. It is also possible to treat the above parameters as the four components of a Minkowskian four-vector. It is thus possible to study the symmetries of spin-1/2 and spin-1 particles by performing polarization experiments in optics laboratories.
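As a small numerical illustration of Eq. (24) (a minimal sketch, not from the original text), the time-averaged Stokes parameters can be computed from sampled field components as follows; the fully coherent example at the end, with equal amplitudes and a 90-degree phase difference, corresponds to circular polarization:

```python
import numpy as np

def stokes_parameters(Ex, Ey):
    """Time-averaged Stokes parameters of Eq. (24) from arrays of
    sampled complex field amplitudes Ex(t), Ey(t)."""
    S11 = np.mean(np.conj(Ex) * Ex).real
    S22 = np.mean(np.conj(Ey) * Ey).real
    S12 = np.mean(np.conj(Ex) * Ey)
    S21 = np.conj(S12)
    return S11, S22, S12, S21

# Equal amplitudes, relative phase pi/2: expect S11 = S22 = 1 and S12 = i.
t = np.linspace(0.0, 1.0, 1000)
Ex = np.exp(-2j * np.pi * t)
Ey = np.exp(-2j * np.pi * t + 0.5j * np.pi)
print(stokes_parameters(Ex, Ey))
```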
# Reply to the Comment on “ΔI=4 bifurcation in ground bands of even-even nuclei and the Interacting Boson Model” (May 17, 1999) Kuyucak and Stuchbery (KS) comment on our study of the $`\mathrm{\Delta }I=4`$ bifurcation phenomena in the ground rotational band from both the experimental and the theoretical sides. Although there exist some interesting systematics in the existing experimental data, which are discussed in detail in our second paper, we have to wait for more precise experiments to obtain definite evidence of these interesting phenomena. The main aim of our study was to show that there exists a mechanism producing the $`\mathrm{\Delta }I=4`$ bifurcation phenomenon in the simplest version of IBA, the IBM-1. The existence of the staggering pattern depends only on the geometrical features of IBM, the boson number $`N`$ being finite and $`\beta >\sqrt{2}`$. The large value $`\beta >\sqrt{2}`$ was questioned in the comment. Such a large $`\beta `$ value was, however, used by various authors in the literature. The consequences of using the actual value of $`\beta `$ of Toki and Wu for the beta and the gamma bands are discussed by Kuyucak et al. using the quadrupole Hamiltonian. We should, however, be aware of the fact that there exist many Hamiltonians which can give the same value for $`\beta `$ but do not provide the same spectra. In fact, our Hamiltonian (used in Fig. 6 of ) is in very good agreement with the experimental systematics not only for the ground band but also for the beta and the gamma bands. Reasonable yrast spectra and a suitable amplitude for the staggering are obtained, as seen in Fig. 6 of ref. . We have performed full diagonalization calculations using the computer code PHINT also for the beta and the gamma bands. We obtain very reasonable values of $`R_\beta `$, ranging from 0.76 to 1.53, and of $`R_\gamma `$, from 1.12 to 1.98, as the boson number $`N`$ is changed from 4 to 14. This is in almost perfect agreement with the experimental systematics, as seen in Fig.1 of Ref. . These values are much better than those of the SU(3) limit, which gives $`R_\beta `$ and $`R_\gamma `$ of approximately 2.5. Concerning the E2 transitions, we have one more free parameter, $`\chi `$. We may take a smaller $`\chi `$ in eq. (4) of to reduce the value of $`R`$ in Ref. . We want to make one comment on the quadrupole Hamiltonian for the case of large beta deformation. The exact diagonalization (PHINT) provides totally different results from the approximate ones obtained using the $`1/N`$ expansion. The amplitude of the staggering comes out to be 20 keV, instead of the 0.5 keV of the experimental data, with the $`\chi \simeq -3.2`$ of Ref. . In order to obtain the small value of the staggering, one should take a much smaller $`|\chi |`$, of about 1.8. However, the $`\beta `$ and $`\gamma `$ bandheads are then still higher than the experimental systematics. This was the reason why we did not use the popular quadrupole formalism in our papers.
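For reference, staggering amplitudes of the kind quoted above can be extracted from a ground-band level scheme with a few lines of code. The Python sketch below is a hedged illustration: it implements one staggering measure commonly used in the $`\mathrm{\Delta }I=4`$ literature (built from the gamma-ray energies $`E_\gamma (I)=E(I)-E(I-2)`$), which is an assumption since the Reply itself does not spell out its definition, and the input band is a hypothetical rigid rotor:

```python
def staggering(levels):
    """Delta-I = 4 staggering measure from ground-band level energies.
    levels: dict mapping even spin I to level energy E(I) (MeV).
    Returns Delta E_gamma(I) = E_g(I) - [E_g(I+2) + E_g(I-2)] / 2."""
    Eg = {i: e - levels[i - 2] for i, e in levels.items() if i - 2 in levels}
    return {i: Eg[i] - 0.5 * (Eg[i + 2] + Eg[i - 2])
            for i in Eg if i + 2 in Eg and i - 2 in Eg}

# Toy check: a pure rotor band E(I) = A*I*(I+1) must give zero staggering,
# so any non-zero pattern reflects genuine deviations from rotational behaviour.
A = 0.01
band = {i: A * i * (i + 1) for i in range(0, 21, 2)}
print(staggering(band))   # all entries ~0 by construction
```

Applied to a staggered band, the alternation of the sign of this quantity with a period of $`\mathrm{\Delta }I=4`$ is the bifurcation signature discussed above.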
# Magnitude bias of microlensed sources towards the Large Magellanic Cloud ## 1 Introduction One way of hiding most of the baryons in the Universe is to lock them in Machos, baryonic dark matter in the form of compact objects in galaxy halos. This has motivated current experimental searches for these objects in the Milky Way halo via gravitational microlensing (Paczyński 1986), the temporary brightening of one out of a million background stars as the unseen Macho passes close to the line of sight of that star by chance and focuses its light gravitationally. About 20–30 microlensing events in our line of sight to the Large Magellanic Cloud (LMC) and two to the Small Magellanic Cloud have been detected by the MACHO and EROS groups (Alcock et al. 1997a; Renault et al. 1997; Alcock et al. 1997b; Palanque-Delabrouille 1998; Alcock et al. 1999; Alcock et al. 2000), including two caustic binary events (MACHO-98-SMC-1 and MACHO-LMC-9) which are unambiguous cases of microlensing. Nevertheless, it is a subject of heated debate whether these microlensing events are indeed due to dark matter in the Galaxy’s halo. Currently there are two popular views on the issue: * Galactic Halo Lensing Hypothesis (Machos): The lenses are located in the halo, and are dark. These lenses could be baryonic dark matter candidates (Alcock et al. 1997a, 2000; Gates & Gyuk 1999); * Lensing by Ordinary Stars Hypothesis: The lenses are ordinary stars, not dark matter. This hypothesis is strongly constrained, so that the only presently viable model is that the LMC sports some extra three-dimensional thick stellar structure displaced from the two-dimensional thin and cold disk of the LMC. These stars provide either the lenses or the sources of the observed events (Zhao 1998a,b, 1999a). A number of papers, including a few recent ones (Evans & Kerins 1999, Graff et al. 1999a; Fields, Freese & Graff 1999; Gates & Gyuk 1999, Gyuk, Dalal & Griest 1999, Zhao 1999a), have investigated the shortcomings and plausibility of the two scenarios. Here we suggest using the source magnitude distribution to differentiate between the two classes of models. The existing data are not conclusive; however, there should shortly be enough data to apply the method discussed here. Ordinary star lensing models require additional concentrations of stars along the line of sight to the LMC, in front of its main disk, behind it, or perhaps both (e.g., a partial or complete ring around the LMC). The added population may either be an extended distribution of stars around the LMC (e.g., a tidal extension) or a distinct stellar system. If additional stars are located behind the LMC disk, then the sample of source stars that undergo lensing by ordinary stars in the LMC disk will be strongly biased to be in the background population. On the other hand, if lensing is due to Galactic halo Machos or a foreground object, the sample of lensed stars should be a random subsample of the observed stars in the LMC, weighted only by the microlensing efficiency of observing an event in that particular star.
The observational consequences of the lensed stars being behind the LMC disk are that they should have: * Slightly fainter baseline magnitudes in all bands than unlensed stars due to a larger (by ∼0.3 mag) distance modulus; * Preferentially fainter apparent magnitudes in the bluer bands than unlensed stars in neighbouring lines of sight due to reddening by intervening dust in the LMC disk: ∼0.6 mag of $`U`$-band extinction, with factor of two variations on arcminute angular scales (cf. Zhao 1999c,d); * Velocity offsets of up to ∼20 km s<sup>-1</sup> relative to unlensed stars, because the kinematics of background stars will, in general, be different from those in the LMC disk (Zhao 1999b; Graff et al. 1999b); and * Small lens-source proper motions (∼30 km s<sup>-1</sup> at the LMC/SMC) measurable with caustic crossing events (Kerins & Evans 1999). As reviewed by Zhao (1999a), all four effects statistically differentiate between the LMC self lensing and Macho hypotheses, and do not necessarily apply on a star-by-star basis. A nice feature of these effects is that they are observable not only in real time for an on-going microlensing event but also for long-past events; the latter allows flexible telescope scheduling. These effects do not occur in the Galactic halo lensing models since the stars in the back and front of the LMC have a nearly equal chance of being lensed by a Macho halfway to the LMC. In this paper, we examine the bias in the apparent baseline magnitudes of microlensed stars (the magnitude observed in the star either before or after the lensing event). This effect was first discussed by Stanek (1995) for microlensing events in the Galactic bulge. ## 2 Model of the source and lens distribution Based on microlensing experiments alone, it is possible to constrain the positions of a putative additional object along the line of sight to the LMC. Objects bound to the Magellanic clouds have a typical transverse velocity of 70 km s<sup>-1</sup> (the speed of rotation of LMC disk stars) relative to the systemic velocity. Assuming a typical lens mass of 0.1–0.3 M<sub>⊙</sub> from a standard stellar IMF, and a typical Einstein radius crossing time of 45 days, the lens-source distance is then of order 5–10 kpc. The detailed models of Zhao (1999b) suggest similar distances. The models of Evans & Kerins (1999) and the N-body simulation of Weinberg (1999) have a wide range of source-lens distances with a mean value also in the range 5–12.5 kpc. We assume that stars in the direction of the LMC are distributed in two separate groups, the primary LMC disk having a surface mass density $`\mathrm{\Sigma }_\mathrm{D}`$ and an extra background population with surface density $`\mathrm{\Sigma }_\mathrm{B}`$ situated 5–12.5 kpc (0.2–0.5 mag) behind the primary disk, and we have tried a number of distance distributions within this range. We define $`\mathrm{\Sigma }_{\mathrm{tot}}\equiv \mathrm{\Sigma }_\mathrm{D}+\mathrm{\Sigma }_\mathrm{B}`$. There may also be standard Galactic halo Machos between us and the LMC. The implications of a possible population in the immediate foreground of the LMC disk will be discussed later. We define $`f_\mathrm{B}`$ to be the fraction of lensing events in which the lensed (source) stars are behind the LMC disk. Thus, $`f_\mathrm{B}=0`$ is for all halo lensing models in which all the lenses are well in front of the LMC and all the lensed (source) stars are in the LMC disk.
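The order of magnitude of the geometric estimate above is easy to check. The Python sketch below uses only the relation $`R_\mathrm{E}^2\approx (4GM/c^2)D_{ls}`$, valid for a lens close to the source, and $`t_\mathrm{E}=R_\mathrm{E}/v_{}`$; the numerical constants are standard SI values, and the exact result depends on the adopted event-duration convention and velocity distribution, so this reproduces the few-kpc scale rather than the precise 5–10 kpc range:

```python
G, c = 6.674e-11, 2.998e8                       # SI units
M_sun, kpc, day = 1.989e30, 3.086e19, 86400.0

def lens_source_distance_kpc(tE_days, M_Msun, v_kms):
    """D_ls from t_E = R_E/v_perp with R_E^2 ~ (4GM/c^2) * D_ls,
    a good approximation when the lens sits close to the source."""
    R_E = tE_days * day * v_kms * 1e3           # Einstein radius in metres
    return R_E**2 * c**2 / (4.0 * G * M_Msun * M_sun) / kpc

for M in (0.1, 0.3):                            # fiducial lens masses (Msun)
    print(f"M = {M} Msun -> D_ls ~ {lens_source_distance_kpc(45.0, M, 70.0):.1f} kpc")
```

With the geometry fixed at the few-kpc scale, the key remaining model parameter is the background-source fraction $`f_\mathrm{B}`$ defined above.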
Even in models in which there are additional stars behind the LMC disk, we expect $`f_\mathrm{B}<0.9`$ because some of the lensing events will be due to self lensing within the LMC disk (Wu 1994; Sahu 1994; Gould 1995). ## 3 The luminosity function of unlensed and lensed clumps In this paper, we investigate small (∼0.3 mag) differences in apparent baseline magnitude between lensed stars and field stars. The only place in the color-magnitude diagram where there is a feature sharp enough for this difference to be seen, and which contains enough stars to be important, is the red clump (cf. Figure 1). The red clump is a sharp feature, with a width of only 0.14 mag in the $`I`$ band. It has been used as a standard candle on several occasions in the past; in particular, Stanek (1995) applied it to microlensing events in the Galactic bulge. Even though main sequence events outnumber clump events by a factor of ∼10, main sequence events cannot be used for this analysis since the main sequence is nearly vertical. A small shift in magnitude of a main sequence star will leave that star in the main sequence, especially when we consider that background stars are also likely to be reddened (Zhao 1999c). Our method is applicable to any passband, and is presented here for a general passband $`X`$. We suggest that microlensing groups apply the method to the clump luminosity function in their particular passbands. We advocate the $`I`$ band because the clump is narrow in $`I`$ (cf. Figure 1). We parametrize the observed apparent $`X`$-band luminosity function of the clump in the LMC disk as a narrow Gaussian superposed on a broader Schechter-like luminosity function for red giants: $$n_\mathrm{D}(X)=C_1\mathrm{exp}\left[-0.5\left(\frac{X-X^{\mathrm{RC}}}{\sigma ^{\mathrm{RC}}}\right)^2\right]+C_2F^\alpha \mathrm{exp}\left(-\frac{\beta F}{F^{\mathrm{RC}}}\right),$$ (1) where $`X-X^{\mathrm{RC}}`$ and $`F/F^{\mathrm{RC}}`$ are the observed $`X`$-band magnitude and flux of a star relative to that of an average red clump, $`\sigma ^{\mathrm{RC}}`$ is the width (dispersion) of the observed clump distribution, and $`C_1`$, $`C_2`$, $`\alpha `$ and $`\beta `$ are constants to be adjusted to fit the observed luminosity function. Assuming that the stellar populations in the LMC disk $`(D)`$ and background object $`(B)`$ are the same, the magnitude distribution of the background objects $`n_\mathrm{B}(X)`$ is simply that of the disk stars $`n_\mathrm{D}(X)`$ shifted by some amount $`\mathrm{\Delta }`$ towards the faint side due to distance modulus and any excess extinction, i.e., $$n_\mathrm{B}(X)=\frac{\int _{\mathrm{Min}(\mathrm{\Delta })}^{\mathrm{Max}(\mathrm{\Delta })}d\mathrm{\Delta }P(\mathrm{\Delta })n_\mathrm{D}(X-\mathrm{\Delta })}{\int _{\mathrm{Min}(\mathrm{\Delta })}^{\mathrm{Max}(\mathrm{\Delta })}d\mathrm{\Delta }P(\mathrm{\Delta })},$$ (2) where $`P(\mathrm{\Delta })`$ is the probability of finding a background population with a magnitude shift of $`\mathrm{\Delta }`$. Assuming an equal chance of finding the background population at any distance in the range $`(1.1-1.25)\times 50`$ kpc, we have $$P(\mathrm{\Delta })=\mathrm{const},\qquad \mathrm{Min}(\mathrm{\Delta })\simeq 0.2\mathrm{mag},\qquad \mathrm{Max}(\mathrm{\Delta })\simeq 0.5\mathrm{mag}.$$ (3) The observed luminosity function of LMC stars is a superposition of the disk stars and the background stars.
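Equations (1)–(3), together with the source mixture of Eq. (5) below, are simple enough to evaluate directly. The following Python sketch is illustrative only: the fitted values of $`C_1`$, $`C_2`$, $`\alpha `$ and $`\beta `$ are not tabulated in the text, so the numbers used here are assumptions chosen to give a clump-dominated luminosity function:

```python
import numpy as np

def n_disk(X, C1=1.0, C2=0.05, alpha=1.5, beta=2.0, sigma_rc=0.14):
    """Eq. (1): Gaussian red clump plus Schechter-like giant branch.
    X is the magnitude relative to the mean clump; parameter values
    here are illustrative, not the fitted ones."""
    F = 10.0 ** (-0.4 * X)                  # flux relative to the clump
    return (C1 * np.exp(-0.5 * (X / sigma_rc) ** 2)
            + C2 * F**alpha * np.exp(-beta * F))

def n_bg(X, dmin=0.2, dmax=0.5, n=64):
    """Eq. (2) with the flat P(Delta) of Eq. (3): disk LF shifted faintward
    by 0.2-0.5 mag and averaged over the shift."""
    return np.mean([n_disk(X - d) for d in np.linspace(dmin, dmax, n)], axis=0)

def n_lensed(X, f_B):
    """Mixture of disk and background sources, cf. Eq. (5) below."""
    return (1.0 - f_B) * n_disk(X) + f_B * n_bg(X)

X = np.linspace(-1.0, 1.5, 500)
for fB in (0.0, 0.9):
    print(f"f_B = {fB}: lensed clump peaks at X = {X[np.argmax(n_lensed(X, fB))]:+.2f} mag")
```

With these illustrative parameters the $`f_\mathrm{B}=0.9`$ luminosity function peaks roughly 0.3 mag faintward of the $`f_\mathrm{B}=0`$ one, which is precisely the magnitude bias exploited in the analysis below.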
However, in the limiting case that most of the surface density is in the LMC disk, we can make the simplifying assumption $$n_{\mathrm{obs}}(X)=n_\mathrm{D}(X)+\frac{\mathrm{\Sigma }_\mathrm{B}}{\mathrm{\Sigma }_\mathrm{D}}n_\mathrm{B}(X)\simeq n_\mathrm{D}(X).$$ (4) Fitting eq. (1) directly to observations, here the published OGLE observed clump magnitude distribution in the $`I`$ band (Udalski et al. 1998), we determine the fitting parameters $`C_1`$, $`C_2`$, $`\alpha `$ and $`\beta `$; the dispersion of the clump, $`\sigma ^{\mathrm{RC}}=0.14`$ mag, is set at the value given by Udalski et al. The lower panel of Figure 1 shows that our model of the observed luminosity function is a fair approximation to that observed by the OGLE survey (cf. Fig. 5 of Udalski et al. 1998). Working with the relative magnitudes $`X-X^{\mathrm{RC}}`$ has the advantage that the distributions are independent of calibrations of the zero point of the clump, and insensitive to changes of the passbands. Note that the derived luminosity function is not corrected for reddening, and we will consistently use uncorrected magnitudes throughout the paper. The luminosity function of the lensed stars depends on the fraction of lensed stars in the LMC disk and the fraction of lensed stars behind the LMC: $$n_{\mathrm{source}}(X,f_\mathrm{B})=f_\mathrm{F}n_\mathrm{D}(X)+f_\mathrm{B}n_\mathrm{B}(X).$$ (5) Here $`f_\mathrm{F}\equiv (1-f_\mathrm{B})`$ is the fraction of events due to foreground lenses and LMC thin disk sources, while $`f_\mathrm{B}`$ is the fraction of events due to LMC thin disk lenses and background sources. If most of the lensing is due to a background object and $`f_\mathrm{B}`$ is large, then the luminosity function of lensed stars will basically be the luminosity function of unlensed stars shifted by $`\mathrm{\Delta }\simeq 0.2`$–0.5 mag. This shift is quite significant since it is larger than the width of the clump, 0.14 mag in the OGLE $`I`$ band. As shown in the lower panel of Figure 1, the expected lensed star luminosity function for a model with $`f_\mathrm{B}=0.9`$ is quite distinct from the unlensed luminosity function. ## 4 Analysis ### 4.1 Observed clump events We urge the reader to examine Figure 3 of Alcock et al. (2000) for this discussion. At the moment, there are at least 3 confirmed events in the clump region. The latest MACHO collaboration publication (Alcock et al. 2000) shows that event LMC-MACHO-1 is indeed a few tenths of a magnitude dimmer than the clump. Although one event cannot by itself be significant, this event is a few tenths of a mag fainter than the clump, exactly where we expect it to be if lensing is due to a background object. LMC-MACHO-16 is brighter than the clump, but this event carries no statistical significance since it must be a red giant branch object whether it is in the LMC disk or behind it. LMC-MACHO-25 is in the center of the clump, and may be due to an LMC disk source, indicating that $`f_B<1`$. However, these few events are not enough to make any significant statements about $`f_B`$. These three events are shown in the upper panel of Figure 1, where the probability of finding a clump star is a factor of a few lower than average. The histograms here were made simply by binning the MACHO LMC color-magnitude diagram in the MACHO-defined clump region $`0.3\le V-R\le 0.9`$ and $`17.5\le V\le 20`$ (cf. the clump “square” in Figure 11 of Alcock et al. 1997a). The distributions we get are similar to the published OGLE $`I`$ magnitude distributions (cf. Figure 5 of Udalski et al.
1998 and the lower panel of our Figure 2), but the OGLE distributions are narrower. This is perhaps partly due to better seeing with OGLE and a narrower color range, $`0.8<V-I<0.95`$, for the clump in Udalski et al. (1998). Furthermore, the clump is approximately horizontal in $`I`$ (Paczyński & Stanek 1998), i.e., the mean magnitude of the clump does not depend strongly on color. Differential extinction in the $`I`$-band is also smaller, making the clump a narrower feature in $`I`$ than in $`V`$ and $`R`$. For these reasons we prefer to do our analysis in the $`I`$ band. Unfortunately the offset $`I`$ magnitude of MACHO-LMC-1 from the clump is unknown, although it is perhaps fair to assume that it is not too different from the observed values in $`V`$ and $`R`$. In this case the event also lies well off the peak for the unlensed stars, but exactly under the peak of the luminosity function of lensed stars in the $`f_\mathrm{B}=0.9`$ model, a model in which most of the lensing is due to a background object. The probability of finding such an event is approximately 5 times greater in the $`f_\mathrm{B}=0.9`$ model than in the $`f_\mathrm{B}=0`$ model. Thus, formally, this single event favors the background lensing hypothesis at the statistically marginal 80% confidence level. ### 4.2 Future clump events We have run Monte Carlo simulations to estimate the effect of increasing the number of events. These simulations are done in the $`I`$ band in order to use the published OGLE clump luminosity function (Udalski et al. 1998). They suggest that perhaps 10 additional clump events are needed to exclude the Galactic halo lensing hypothesis. Extrapolating the present detection rate, we anticipate that these clump events could be detected with the OGLE II and EROS II experiments in the next few years. It should then be very interesting to re-apply the above analysis. Finally we remark that in case Occam's razor fails, i.e., in case the dark halo is a mix of Machos and non-baryonic matter and the microlenses are a mix of Machos and stars (in the foreground or in the LMC), then $`f_\mathrm{B}`$ can still set an upper limit on the fraction of Machos in the halo via the relation: $$f_{\mathrm{Macho}}\le 0.2f_\mathrm{F}=0.2(1-f_\mathrm{B}),$$ (6) where the factor 0.2 is the current estimated Macho halo mass fraction (Alcock et al. 2000). ## 5 Conclusion If the LMC lensing is due to the sources being in a background stellar system, then the properties of lensed stars should be systematically different from those of the bulk of LMC stars. Although there are several obvious comparisons that can be made (e.g., of the kinematics), the simplest way of examining the lensed stars is to compare their baseline apparent magnitudes with those of unlensed LMC field stars. In particular, the red clump provides a sharp feature that is ideal for such a comparison. This technique is powerful because the expected difference in distance modulus we are searching for (0.2–0.5 mag) is wider than the sharp (0.14 mag) red clump. Any signal found using this (differential) technique cannot be due to the normal spatially-variable reddening in the LMC disk, poor photometry, or blending in crowded fields, all of which affect lensed and unlensed stars in the LMC disk equally. The technique can also be applied to an arc-like distribution wrapping around the LMC disk (Kunkel et al. 1997), with some stars in the foreground and some in the background, or to a shroud of stellar matter around the LMC (Weinberg 1999, Evans & Kerins 1999).
We note, however, that this technique (and in fact all of the first three techniques mentioned in the Introduction) cannot distinguish between the standard Galactic halo lens model and a model in which there is a small additional distribution of stars immediately in front of the LMC (but none in the background), since in both cases the lensed stars will be exactly at the distance of the LMC primary disk. As few as one additional clump event could potentially rule out the halo lensing hypothesis at the 95% confidence level, although more events will be needed if, as seems indicated by unpublished microlensing alerts, some of the lensing events are due to a foreground object. Many more clump events should be available in a few years, and these should yield valuable clues to the line of sight structure of the Magellanic Cloud system. We thank the referee, David Bennett, for his very careful scrutiny of the paper, and David Alves, Andy Gould and Tim de Zeeuw for helpful discussions.
# Improved data extraction procedures for IUE Low Resolution Spectra: The INES System ## 1 Introduction The International Ultraviolet Explorer (IUE) collected more than 104000 spectra of all types of astronomical objects during its more than 18 years of operations. The IUE Project considered it desirable to make available to the astronomical community a “Final Archive” holding all the IUE data processed in a uniform way and with improved reduction techniques and calibrations. For this purpose a new processing system (NEWSIPS) was developed, and the full IUE archive was re-processed with a newly derived linearization and wavelength scale. In addition, an adapted optimal extraction scheme (SWET; Horne 1986) was used to derive the low resolution absolutely calibrated output spectra. A full description of the NEWSIPS system is given in Nichols & Linsky (1996) and in Nichols (1998). Technical details can be found in the NEWSIPS Manual (Garhart et al. 1997). One of the main goals of the system is to obtain the maximum signal-to-noise ratio in the final data. For this purpose the geometric and photometric corrections are performed through a new approach, based on cross-correlation techniques to align science and Intensity Transfer Function (ITF) images (Linde & Dravins 1990). The application of this new approach substantially reduces the fixed pattern noise, and leads to improvements in the signal-to-noise ratio of between 50 and 100% in low dispersion spectra and between 50 and 200% in high resolution data (Nichols 1998). The intrinsic non-linearity of the detectors (SEC Vidicon cameras) makes the photometric correction one of the most critical tasks in the processing of IUE data. The correction is performed through the Intensity Transfer Functions (ITFs), which are derived from series of graded lamp exposures. These functions transform the raw Data Numbers (DN) of each pixel in the Raw Image into linearized Flux Numbers (FN) in the Photometrically Corrected Image. Specifically for the Final Archive, a new set of ITF images was obtained for the three cameras under well controlled spacecraft conditions and through improved algorithms. However, the final extracted spectra still show some residual non-linearities, most likely due to the breakdown at the extreme ITF levels of the assumption that over small differential flux ranges the relation between FNs and DNs can be approximated by a linear interpolation. Further modifications implemented in NEWSIPS include the improvement of the wavelength calibration, the revision of the flux scale, the derivation of noise models and the optimal extraction of spectra (only for low resolution). The existence of noise models has made it possible to estimate the errors of IUE fluxes for the first time. A special effort has also been made to ensure the correctness of all the information referring to the specific observation attached to the data. The quality control procedures applied by the IUE Project have shown that the NEWSIPS reprocessed spectra are superior to the IUESIPS spectra in all cases (Nichols 1998). For the high resolution spectra, the new methods to estimate the image background (Smith 1998) and the ripple correction algorithm (Cassatella et al. 1998) result in much higher quality spectra for these data. However, it was found that the low resolution data extraction still contained some serious shortcomings which would significantly affect the usefulness of the extracted spectra (Talavera et al. 1992, Nichols 1998).
Most of these shortcomings and drawbacks in the IUEFA products were related to the method for the final extraction of the 1-D spectra (SWET) from the bi-dimensional, spatially resolved, rotated images (SILO files; the photometrically and geometrically corrected image is rotated so that the dispersion direction runs along the X axis). Within the framework of the ESA IUE Data Distribution System, it was decided to correct all the low dispersion spectra through the application of new extraction algorithms that significantly improve the quality and reliability of the final data products. A completely different philosophy underlies these new algorithms: the model-dependent strategy followed in SWET is abandoned, with the aim of retaining as much information as possible concerning the data. We anticipate that the results of both techniques are essentially identical when the model parameters used by SWET are well suited, namely, for well exposed continuum sources. The method chosen ensures that the improvements achieved with the NEWSIPS geometric and photometric corrections are preserved, since the new algorithms work on the SILO files. In this paper we describe the main features of the INES extraction procedures: background and spatial profile determination, quality flags handling, solar contamination removal, and homogenization of the wavelength scale (Section 2). In Section 3 the repeatability, error reliability and linearity of INES low dispersion data are evaluated. Finally, the major improvements achieved by INES are summarized in Section 4. ## 2 Optimal extraction of IUE Low dispersion spectra Optimal extraction techniques for bidimensional detectors were originally developed for CCD chips (Horne 1986). The basic equations of the method are: $$FN(\lambda )=\frac{\sum _x[FN(x,\lambda )-B(x,\lambda )]\frac{p(x,\lambda )}{\sigma (x,\lambda )^2}}{\sum _x\frac{p(x,\lambda )^2}{\sigma (x,\lambda )^2}}$$ (1) $$\frac{1}{\mathrm{\Delta }FN(\lambda )^2}=\sum _x\frac{p(x,\lambda )^2}{\sigma (x,\lambda )^2}$$ (2) where the variables are: * $`x`$ : coordinate in the cross-dispersion (spatial) direction * $`\lambda `$ : coordinate in the spectral direction * $`FN(x,\lambda )`$ : FN value at pixel $`(x,\lambda )`$ * $`B(x,\lambda )`$ : background at pixel $`(x,\lambda )`$ * $`\sigma (x,\lambda )`$ : noise at pixel $`(x,\lambda )`$ * $`p(x,\lambda )`$ : extraction profile at pixel $`(x,\lambda )`$ * $`FN(\lambda )`$ : total flux number (FN) at $`\lambda `$ It must be noted that the IUE detectors are quite different from CCDs, which are nearly linear detectors with a very large dynamic range, formed by individual pixels almost independent of their neighbors. None of these characteristics holds for the IUE SEC Vidicon cameras. Each raw IUE image consists of a 768×768 array of 8-bit elements, which are not physical pixels but picture elements determined by the stepping and size of the camera readout beam. The focusing system of this beam introduces geometric distortions. The dynamic range is small (0–255 DN) and the response of the camera is non-uniform and highly non-linear. Furthermore, the noise in these detectors deviates strongly from the Poissonian photon noise of CCDs. Therefore, a direct application of the techniques used for CCDs is not appropriate. The application of Eq. 1 to IUE data requires a careful determination of the noise model, the background estimation, the extraction profile and the treatment of “bad” pixels.
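As a concrete illustration of Eqs. (1)–(2), the Python sketch below applies the optimal-extraction sums to a simulated SILO-like image; the toy image dimensions, profile width and noise level are assumptions for the example, not IUE values:

```python
import numpy as np

rng = np.random.default_rng(0)

def optimal_extract(FN, B, sigma, p):
    """Horne-type optimal extraction following Eqs. (1)-(2).
    All inputs are 2-D arrays with shape (x, lambda); p is the
    cross-dispersion profile, normalized at each wavelength."""
    w = p / sigma**2
    flux = np.sum((FN - B) * w, axis=0) / np.sum(p * w, axis=0)   # Eq. (1)
    error = 1.0 / np.sqrt(np.sum(p * w, axis=0))                  # Eq. (2)
    return flux, error

# Toy SILO-like image: Gaussian spatial profile on a flat background
x = np.arange(15)[:, None]
lam = np.arange(640)[None, :]
p = np.exp(-0.5 * ((x - 7.0) / 1.5) ** 2)
p = p / p.sum(axis=0)                          # normalize the profile
truth = 100.0 + 20.0 * np.sin(lam / 40.0)      # input flux spectrum (FN units)
sigma = np.full((15, 640), 3.0)
FN = 10.0 + truth * p + rng.normal(0.0, 3.0, (15, 640))
B = np.full((15, 640), 10.0)

flux, error = optimal_extract(FN, B, sigma, p)
print(np.abs(flux - truth).mean(), error.mean())   # recovery within the errors
```

The example recovers the input spectrum to within the quoted errors; everything that follows concerns how the three ingredients of the weights are actually determined for IUE data.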
Furthermore, these determinations (the noise model, the background estimation and the extraction profile) are model dependent, and the best results are obtained only after fine tuning a number of parameters. In interactive processing, the choice of the best set of parameters is made case by case. For automatic processing, these parameters are fixed and must be chosen so as to cover the largest number of possible cases. This approach unavoidably leads to some degradation of the performance of the system. In the following subsections, the main items entering the optimal extraction process are discussed, indicating the solutions adopted in SWET and identifying the problems which have led to the different extraction scheme applied in INES. ### 2.1 Noise models The estimate of the noise in IUE data is essential at two stages of the extraction of the spectrum from the SILO images. Firstly, the determination of the extraction profile requires an evaluation of the signal-to-noise ratio in order to perform weighted fits to the data. Secondly, the errors in individual pixels are propagated through the extraction procedure in order to assign errors to the final extracted fluxes. The characteristics of the noise in the IUE Raw images are strongly altered by the photometric linearization procedure via the Intensity Transfer Function and by the spatial resampling required to derive the SILO image format. The approach followed to derive the noise model in SWET, as well as in INES, has been to model it empirically from SILO science and lamp (UV-flood) images (Garhart et al. 1997). However, the final noise models in the INES procedure differ from those used in SWET in two respects: the extrapolation to high FN values and the handling of very low FN values. In the first case, the SWET noise model extrapolates third order polynomials determined from the fitting of lower FNs. These polynomials often have a negative derivative for high FN values, leading to unrealistic estimates of the noise, as shown in Figure 1. At the low end of the FN range, SWET noise models also use high order polynomials, which introduce strong boundary effects. It is especially remarkable that SWET assigns an error of 1 FN to negative FNs, which occur because of statistical fluctuations around the adopted NULL ITF level. In the INES noise models, for every wavelength interval the standard deviation as a function of the FN is described by polynomials of different order for different FN ranges. For FN values below thirty, a first order polynomial was used in order to avoid boundary effects. In the FN range from thirty up to the point where enough data points are still available (the “breakpoint”), a higher order polynomial was used (third degree for LWP, fourth for SWP and LWR). The region of higher FN is linearly extrapolated based on the third (fourth) order polynomial fit (Fig. 1). Therefore, for a given wavelength $`\lambda `$, the noise $`\sigma (FN)`$ is represented by: $$\sigma (FN)|_{\lambda =\mathrm{const.}}=\{\begin{array}{ccc}B_1+C_1FN\hfill & \mathrm{for}\hfill & FN\le 30\hfill \\ & & \\ \sum _{i=0}^{4\,(\mathrm{SWP},\mathrm{LWR}),\,3\,(\mathrm{LWP})}A_iFN^i\hfill & \mathrm{for}\hfill & 30\le FN\le \mathrm{breakpoint}\hfill \\ & & \\ B_2+C_2FN\hfill & \mathrm{for}\hfill & FN>\mathrm{breakpoint}\hfill \end{array}$$ The fitting of the third (fourth) order polynomial was iterated five times, excluding data points for which $`\sigma (FN)`$ was greater than two times the value fitted in the previous iteration, in order to exclude cosmic rays and similar features. The extrapolation to high FN values was based on the fifty highest data points.
The “breakpoint” was defined as the value with the largest positive derivative. For the LWP camera the breakpoints are found at values between 390 and 460 FN, depending on the wavelength. For SWP they are in the range 280 to 400 FN, and for LWR in the range 105 to 410. Therefore the extrapolation in the LWR camera covers a larger range of FNs. The noise models were smoothed in the wavelength direction following a similar approach, i.e. different polynomials were used for different cameras and wavelength ranges. Finally, the noise model was interpolated over a two dimensional grid of 1025 FN values (from 0 to 1024) by 640 pixels in the wavelength direction. In the cases in which the SILO file has negative FN values, the noise of these pixels is taken as the value corresponding to FN=0 for that wavelength. As expected, both noise models are indistinguishable for most FN values and wavelengths. It is only in very short exposure time images and/or images with pixels reaching FN values larger than the breakpoints defined above that different results are obtained. It should be noted that in the SWET method a single high FN pixel with an incorrectly extrapolated error may significantly affect the extraction profile determination because of the exceptional signal-to-noise ratio assigned to it. ### 2.2 Spectral Extraction According to Eq. 1, the three major items in the optimal extraction method are: the background, the spatial profile and the noise model. Their treatment in the INES extraction procedure is described in the following subsections. In addition, a subsection is devoted to describing the handling of those pixels whose quality is non-optimal. The method applied to remove the solar contamination in LWP images is also described in detail in a separate subsection. Finally, the method to homogenize the wavelength scale for all long and short wavelength spectra, independently of observing epoch or ITF, is outlined. #### 2.2.1 Background determination The background in IUE science images is a combination of different sources: particle radiation, radioactive phosphor decay in the detector, halation within the UV converter, background skylight, scattered light and readout noise. The first two depend on the instrument itself and on the radiation environment and vary slowly across the camera faceplate, whereas the last three depend on the spectral flux distribution of the object observed, and their integrated effect varies in a complicated way across the raw image. The background is derived from two swathes seven pixels wide in the spatial direction, symmetrically located with respect to the center of the aperture. Along the dispersion direction, the method to estimate the background has to remove the high frequency noise while preserving the low frequency intrinsic variations. The two approaches generally followed in the past have been (a) to apply consecutively a median and a box filter (IUESIPS) or (b) to fit the background to a polynomial (SWET). A direct smoothing is simple, robust and model independent, but sensitive to bright spots and outlying pixels. A polynomial fit is more efficient in removing such outliers, but the polynomial degree required to reproduce the small scale variations is impractically high. As a compromise providing acceptable solutions, we have adopted for INES an iterative method in which the background is median and box filtered (31 pixels wide), allowing for the rejection of outlying pixels in each iteration.
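A minimal sketch of this iterative background smoother is given below (Python, using SciPy filters); the rejection threshold and iteration cap are assumptions, since the text does not quote them:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

def smooth_background(b, width=31, kappa=2.0, max_iter=10):
    """Iterative background smoothing along the dispersion direction:
    median + box (31-pixel) filtering with outlier rejection. kappa and
    max_iter are illustrative assumptions."""
    good = np.ones_like(b, dtype=bool)
    smooth = b.copy()
    for _ in range(max_iter):
        filled = np.where(good, b, smooth)      # replace rejected pixels
        smooth = uniform_filter1d(median_filter(filled, size=width), width)
        resid = np.abs(b - smooth)
        new_good = resid < kappa * max(resid[good].std(), 1e-6)
        if np.array_equal(new_good, good):      # no further outliers found
            break
        good &= new_good
    return smooth

# Example: a slowly varying background with noise and one bright spot
rng = np.random.default_rng(0)
raw = 50 + 5 * np.sin(np.arange(640) / 80) + rng.normal(0, 2, 640)
raw[100] += 40                                  # bright spot to be rejected
bg = smooth_background(raw)
```

The bright spot is rejected after the first pass, while the sinusoidal low-frequency trend survives the filtering, which is the behaviour described above.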
This method effectively reduces the noise, preserves the intrinsic background variations on relatively small scales and removes bright spots (Fig. 2). IUESIPS and SWET assumed a constant background in the spatial direction. For non-optimal extraction this may be an acceptable approach, since the overestimate at one side of the aperture is roughly compensated by the underestimate at the other side. However, such compensation does not occur in an optimization technique such as SWET because the weighting profile will be forced to zero in the region where the background is overestimated, leading to a distortion of the extraction profile with respect to the true spatial profile (Fig. 3). In the extraction of the INES data, the background for each line within the aperture region is obtained from a linear interpolation between the smoothed backgrounds at both sides of the aperture. The largest deviations between INES and SWET results due to different background estimates are expected in images where the net signal from the target is rather weak. As will be described in the next subsections, both methods follow completely different approaches to obtain the final 1-D spectrum from underexposed images. Since it is not easy to show the sole effect of the background, we defer the discussion of the differences between the two methods for underexposed spectra to the next subsections. #### 2.2.2 Extraction profile The non-interactive processing of the data implies that the extraction parameters cannot be fine tuned for each individual spectrum. Furthermore, the targets observed with IUE span a wide range of properties: pure continuum/line emission, very blue/red objects, extended/point-like sources, multiple sources within the aperture, etc. In INES the spatial profile is modeled so that it is smooth, but able to track short scale variations along the dispersion direction. The 2-D spectrum is blocked in bins of similar total S/N and interpolated linearly in wavelength. The process is iterative and outlying pixels are rejected after each iteration; the iteration stops when no further outliers are found. All pixels with no real flux information (not photometrically corrected, telemetry dropouts, reseaux, permanent artifacts, 159 DN corrupted pixels) are excluded from the process of flux extraction. This method provides results in agreement with SWET within 2–3% for well exposed continuum spectra, corresponding to the repeatability errors of the IUE instruments (see Section 3). For very underexposed spectra, where the total S/N is too low to determine the spatial (weighting) profile empirically, the adopted approach in INES is to add up all the spectral lines within the aperture (boxcar extraction). In contrast, the SWET method depends on the expected extension of the source: for extended sources a boxcar extraction is used too, but for point sources a default point-like extraction profile is used at the center of the aperture. These two different approaches define the difference in philosophy between SWET and INES: the SWET goal is to obtain the highest signal-to-noise spectrum, even if in some particular cases (weak sources that are not point-like in spite of their classification, or weak point-like sources miscentered in the aperture) the reported flux is not correctly computed. The INES goal is to obtain the best representation of the actual flux at all wavelengths, even at the cost of not reaching the highest signal-to-noise ratio (weak point-like sources).
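The adaptive blocking just described can be sketched as follows (Python; the S/N target per block is an illustrative assumption, the iterative outlier rejection is omitted for brevity, and the 7-bin minimum block size anticipates the beam-pulling discussion below):

```python
import numpy as np

def empirical_profile(net, sigma, snr_target=50.0, min_bin=7):
    """Cross-dispersion profile, INES style: block the 2-D net spectrum
    into wavelength bins of similar total S/N (never narrower than
    min_bin), normalize each block, interpolate linearly in wavelength."""
    nx, nlam = net.shape
    s = np.sum(net, axis=0)
    snr2 = np.where(s > 0.0, s, 0.0) ** 2 / np.sum(sigma**2, axis=0)
    edges, acc, start = [0], 0.0, 0
    for j in range(nlam):                       # close a block once it has
        acc += snr2[j]                          # accumulated enough S/N
        if acc >= snr_target**2 and j - start + 1 >= min_bin:
            edges.append(j + 1)
            acc, start = 0.0, j + 1
    if edges[-1] < nlam:
        edges.append(nlam)
    centres, profs = [], []
    for a, b in zip(edges[:-1], edges[1:]):
        block = np.clip(net[:, a:b].sum(axis=1), 0.0, None)
        profs.append(block / max(block.sum(), 1e-9))
        centres.append(0.5 * (a + b))
    profs = np.asarray(profs)
    p = np.empty((nx, nlam))
    for i in range(nx):                         # linear interpolation between
        p[i] = np.interp(np.arange(nlam), centres, profs[:, i])   # block centres
    return p / np.maximum(p.sum(axis=0, keepdims=True), 1e-9)
```

When the total S/N is too low to close even a single block, the function degenerates to one aperture-wide block, which is the boxcar limit described above.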
Figures 4, 5 and 6 show examples of the different results obtained with SWET and INES for weak spectra. In all the examples, the spectrum is too weak for its profile to be determined empirically and the sources are classified as point-like. Therefore, SWET uses a default point-source profile and INES uses a boxcar through the whole aperture. When there is a true point source, properly centered in the aperture (Fig. 4), both extractions provide similar flux levels, although the INES spectrum is noisier. The second example is an exposure of the echo of SN1987A in the Large Magellanic Cloud through the large aperture. Since the image is classified as IUECLASS 56 (Supernova), SWET uses the default point-like extraction profile at the center of the aperture (dotted line in the inset in Fig. 5), and the resulting flux is underestimated by more than a factor of 2. Obviously, the boxcar method used by INES results in a noisier spectrum, but provides the correct flux level, better representing the actual information content of the spectrum. SWP 37503 is an image of CC Eri, a rapidly rotating late type star with strong chromospheric emission lines. Here, SWET again uses the default point-like extraction profile at the aperture center (Fig. 6). The source is indeed point-like, but was not properly centered within the slit. Thus, the extraction profile used by SWET is offset with respect to the location of the spectrum, resulting in a formal non-detection of the source, in particular of the strong emission lines. The boxcar method used in INES produces a noisier spectrum, but the emission lines are correctly extracted. Strong narrow emission lines on a weak continuum have been reported to be incorrectly extracted by SWET, even though they are optimally exposed (Talavera et al. 1992, Huélamo et al. 1999). The problem is that in these cases there exist variations in the spatial profile on wavelength scales much shorter than SWET can follow. The origin is the “beam pulling” effect (Boggess et al. 1978), which consists of a deflection of the readout beam in regions with large charge variations in the image section of the cameras. The shift in the image registration can be as much as 2 lines along the cross-dispersion direction in a few wavelength steps. The result is that the emission line registration is shifted with respect to the continuum. If the extraction profile cannot change on wavelength scales of the order of the spectral resolution, the strong unresolved emission lines are recognized and flagged as “cosmic rays”, resulting in a strong underestimate of the line flux. To account for this effect, the INES extraction method sets the minimum block size to 7 wavelength bins, slightly larger than the spectral resolution. An example of this effect is shown in Figure 7. The spectrum belongs to the symbiotic star AG Dra, characterized by a weak continuum with strong narrow emission lines. The intensity of the HeII 1640 Å line given by SWET is approximately half the intensity given by INES. SWET finds part of the emission line outside the extraction profile, consequently flags the pixels as “cosmic rays” (flag -32) and rejects them in the derivation of the final spectrum. It is also worth noting that although half the line is rejected as a “cosmic ray”, the flags do not go into the final 1-D quality flag spectrum (see next subsection). A similar example (a spectrum of Nova Puppis 1991) is shown in Figure 8.
The ratio NIV\]1486 Å/NIII\]1750 Å is smaller by 20% when derived from the SWET spectrum, and is clearly in error. These examples demonstrate that SWET results for sources with strong narrow emission lines on a weak continuum are not optimal, and the use of line ratios as diagnostics for physical parameters (temperature, density, chemical abundances …) may be misleading, greatly diminishing the usefulness of the IUEFA for general use. #### 2.2.3 Quality flags handling and propagation Quality flags ($`\nu `$’s) mark those SILO pixels whose quality is not optimal. The quality of a pixel can be affected by different problems, and there is a gradation in the reliability of the value. Flags are coded in NEWSIPS in such a way that more negative values indicate more important problem conditions. The importance of a proper handling of $`\nu `$’s is twofold: firstly, the flags are used to exclude “bad” pixels during the extraction procedure, and secondly, they mark in the final 1-D extracted spectrum those wavelengths where the user should be warned about the reliability of the flux. One of the advantages of optimal extraction techniques is that they should be able to recover the flux at flagged pixels as long as the correct cross-dispersion profile is used. However, this ability must be analyzed carefully, since flagged pixels are already excluded from the determination of the weighting profile. As an example we will discuss the case of an emission line spectrum. In many cases the exposure times were chosen to get a good exposure level in the continuum, frequently resulting in saturation at the peaks of the lines. Then, the cores of the strongest lines are flagged as “Extrapolated ITF” or “saturated”. If only a few pixels are flagged, it is expected that the correct flux will be obtained from a correct profile. However, the beam pulling effect in IUE images shifts the strong lines with respect to the continuum. Even if the method were able to reproduce such short scale shifts, when flagged pixels are not used to determine the spatial profile, the weighting profile will be shifted with respect to the actual spatial profile of the line, which will then be treated as a cosmic ray. For this reason, in the INES extraction only pixels with no real flux information are discarded: reseaux marks, pixels not photometrically corrected, 159 DN corrupted pixels and telemetry dropouts. The way the information about bad quality pixels is passed onto the final 1-D output spectrum is also related to the role these pixels play in the extraction procedure. In the INES extraction, a conservative approach has been followed, and the flag of any pixel in the SILO file that makes a contribution to the final 1-D extracted spectrum (i.e. for which the extraction profile is not zero) is passed into the 1-D flag spectrum. This method may propagate flags of pixels whose contribution is almost negligible (e.g. reseaux marks within the aperture, but outside the PSF), but ensures that no relevant quality flag is lost. Figure 9 illustrates a case where there are two pixels in the HeII line with the saturation flag in the SILO file. SWET treats the line pixels as “cosmic rays” (note the “-32” flags in the SILO file), but neither these flags nor the saturation flags are passed onto the final 1-D spectrum. In contrast, INES reproduces the correct flux and flags the wavelength bins where there are saturated pixels.
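The INES propagation rule is simple enough to state in code. A sketch is given below; it assumes, as stated above, that more negative $`\nu `$ values mark more severe conditions, and keeps the worst one per wavelength. This is a simplification, since real NEWSIPS flags are bit-coded and may combine several conditions:

```python
import numpy as np

def propagate_flags(nu2d, profile):
    """1-D quality flags from the 2-D nu's: every pixel whose extraction
    profile is non-zero contributes its flag; the most negative (most
    severe) value at each wavelength is kept."""
    masked = np.where(profile > 0.0, nu2d, 0)   # 0 = no problem condition
    return masked.min(axis=0)
```

Because the condition is simply a non-zero profile weight, even marginally contributing pixels such as reseaux at the edge of the aperture propagate their flags, which is the conservative behaviour described above.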
#### 2.2.4 Solar contamination removal By the end of its operational life, the IUE telescope was affected by the so-called FES anomaly (Pérez and Pepoy 1997). In reality, it was not an anomaly of the FES functionality; the name was given because the problem was first detected on FES images (Rodríguez-Pascual 1993). For an unknown reason, scattered Sun and Earth light was entering the telescope tube and reaching the on-board detectors (FES and SEC Vidicon cameras). On FES images this light was known as the “streak” because it filled only a portion of the image, producing a pseudo-background. Under the worst conditions the FES detector was fully saturated, providing a number of counts similar to that from a 5th magnitude star. The analysis of the problem showed that the light scattered into the telescope was mainly solar in origin (Rodríguez-Pascual & Fernley 1993). The effect on science images was to contaminate LWP low resolution images with an extended spectrum filling the whole aperture (Fig. 10). SWP images were not affected because of the solar-like spectrum of the scattered light, and no measurable contamination has been detected in LWP high resolution spectra. Two types of contamination were identified in LWP images, depending on whether the dominant source was direct sunlight or sunlight reflected on the Earth (Rodríguez-Pascual & Fernley 1993). LWP images contaminated with solar scattered light are identified as extended sources by NEWSIPS. However, the SWET extraction module is forced to perform a point-like source extraction, i.e., restricted to 13 spectral lines, in all LWP images taken after November 1992 whose IUECLASS does not correspond to solar system objects or sky exposures. This approach does not reduce the solar contribution to the extracted spectrum in a consistent way, and definitely does not remove it completely. Several methods have been evaluated to correct this contamination. The correlation between the strength of the streak as measured with the FES and the strength of the contamination in spectral images led to considering the possibility of building a spectral template to be scaled by the FES counts. However, this approach was not useful in practice because of the two types of solar spectra found and the large scatter in the FES counts-spectral flux relation (Rodríguez-Pascual & Fernley 1993) associated with the specific light scattering geometry. The procedure developed in the INES extraction was designed to handle only the most straightforward case: a point-like source, well centered in the aperture. The spectrum of the target does not fill the whole aperture, and the solar contamination can be estimated from the spectral lines on both sides of the target PSF. Obviously, this method only works on large aperture spectra; contamination in the small aperture is not corrected. The first step is to identify whether an image is affected by solar contamination. The check is done only on LWP images taken after November 1992, since that was when its presence was first detected in the FES. The procedure searches for the peak of the average spatial profile. If the contribution of a point source, i.e. up to 11 spectral lines wide, is between 5% and 95% of the total spatial profile, then the presence of both an extended and a point-like source is assumed. This method obviously does not guarantee that the extended source is due to the solar contamination; it may happen that the observation corresponds to a crowded field with several sources.
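In pseudo-operational form, the detection step might look as follows (a sketch only; the clipping of negative profile values and the exact window bookkeeping are assumptions on top of the 5%–95% criterion quoted above):

```python
import numpy as np

def solar_contamination_suspected(net):
    """INES criterion for post-1992 LWP large-aperture images: if the
    point-like core (up to 11 lines around the peak of the average spatial
    profile) holds between 5% and 95% of the total profile, an extended
    component superposed on a point source is assumed."""
    prof = np.clip(net.mean(axis=1), 0.0, None)    # average spatial profile
    peak = int(np.argmax(prof))
    core = prof[max(peak - 5, 0):peak + 6].sum()   # 11 lines centered on peak
    frac = core / max(prof.sum(), 1e-9)
    return 0.05 < frac < 0.95
```

As noted above, a genuinely crowded field can trigger the same criterion as solar scattered light.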
However, we have adopted this approach because any potential user of crowded field data should already be aware that the IUE Project does not provide individual spectra when several sources are within the aperture; such spectra need to be individually analyzed from the SILO file. On the other hand, any user interested in the archival data of an isolated object should not have to worry about contamination by other sources, and can take the extracted spectra as the real spectra of the isolated object. Bearing this in mind, it was decided to accept the risk that in some cases the procedure will remove the contribution of an extended component that is not the solar contamination. Once an LWP image has been identified as contaminated, the 2-D spectrum of the solar light is reconstructed. First, the solar spectrum is extracted as in the standard case, but masking out 11 spectral lines centered at the location of the peak in the average spatial profile. Since sky exposures show that the cross-dispersion profile of the solar contamination is roughly linear in the center of the aperture (Fig. 10), the 2-D spatial profile of the solar contamination within the point-like source location is derived by interpolating linearly from the wings of the profile. The 2-D contamination is then reconstructed and subtracted from the SILO file. The point-like spectrum is extracted from the resulting corrected SILO following the standard INES procedures. Spectra in which the correction for solar contamination has been applied are identified by the following message in the FITS header: *** WARNING: SOLAR CONTAMINATION CORRECTION APPLIED In Figures 11 and 12 we show an example of the performance of the method. The average spatial profile in the range 2900-3300 Å is shown as a thick line in Fig. 11; the thin lines show the profiles estimated for the extended and point sources (crosses represent the sum of both). The squares show the spectral lines discarded to estimate the extended source and later interpolated. The corresponding output spectra are shown in Fig. 12. The performance of this technique has been tested using the data of the blazar PKS 2155-304, extensively monitored with IUE. In particular, two intensive monitoring campaigns were carried out in 1991 and 1994 (Pian et al. 1997), before and after the appearance of the FES anomaly. During the 1991 campaign, 98 LWP spectra were obtained. In 1994, IUE was continuously pointing to this target for 10 days starting on May 15th. A total of 236 spectra were obtained, half of them with the LWP camera. Although the flux level of the target varied between the two runs and even within each run, the effect of the solar scattered light on the LWP camera can be tested because the changes in the spectral shape are small (Pian et al. 1997). First we compare the ratio of the SWET average spectra of both campaigns (Fig. 13). This ratio shows a sharp turn-up beyond 2800 Å due to the solar contamination, but the ratio of the INES averages is essentially independent of wavelength. This is a clear demonstration that SWET is not able to remove the scattered light from the output spectrum. The features beyond 3200 Å are typically due to the low S/N in this region of the IUE LWP camera in the individual spectra. Another test of the presence of solar scattered light in the output spectra is to compare the ratios of fluxes in different wavelength bands.
For each campaign and extraction method we have compared the relation between the flux at 2600 Å, where no solar contamination is expected, and the ratio of the fluxes at 3100 Å and 2600 Å. This ratio can be taken as a measure of the amount of contamination, since the band centered at 3100 Å is the most affected. The results are shown in Fig. 14. The F(3100 Å)/F(2600 Å) ratio is definitely larger for the 1994 spectra extracted with SWET. However, the 1991 and 1994 values of this ratio for INES spectra are indistinguishable, although there are still a few data points for which the ratio is larger by ∼20%. ### 2.3 Homogenization of the wavelength scale One of the main purposes of the modifications implemented within the INES system is to provide the data in such a form that the user needs to perform the minimum number of operations before starting the scientific analysis, and to decrease the instrumental dependence of the extracted spectrum (important for further use by scientists without specific IUE knowledge). One of the characteristics of the SWET low resolution spectra which, although well documented, can cause some confusion to users, is that the low resolution long wavelength data do not have a uniform wavelength scale, i.e., there are long wavelength spectra with different step sizes and different numbers of points in the extracted data. These differences depend on the date of observation and, in the case of the LWR camera, on the ITF used in the processing. The dependency on the date of observation is very small (and it is also present in the SWP camera), but differences between the two long wavelength cameras and between the two LWR ITFs cannot be neglected. The difference is only in the size of the wavelength step, and not in the starting wavelength of the NET spectra (1050 and 1750 Å for the short and long wavelength ranges, respectively). Since the Inverse Sensitivity curves are not defined for the full spectral range of the extracted data, LWP and LWR NEWSIPS low resolution calibrated spectra do not start at the same wavelength and have a different number of points. Any combination or comparison of long wavelength spectra would require rebinning to a common wavelength scale. In order to facilitate the use of the extracted data, this rebinning has already been built into the INES processing system, assuring homogeneity in the data. Table 1 summarizes the low resolution wavelength step used in NEWSIPS for each camera. The resampling was performed following this approach: * All the long wavelength spectra are rebinned to the same wavelength step. * The size of the new wavelength step is taken as the largest of all the steps used, i.e. 2.6693 Å per pixel for the LW cameras, and 1.6764 Å per pixel for SWP. The number of calibrated pixels is 495 for the SWP camera, and 562 for the long wavelength cameras. * The starting wavelength of the calibrated spectrum has not been modified (i.e. it is the first pixel within the spectral region in which the Inverse Sensitivity Curves are defined). * Only flux-calibrated points are included in the final spectrum. * Both the absolute flux and the sigma spectra are rebinned. * The rebinning of the flux spectrum is made through the following expression: $$F_i=\frac{\sum _jw_jf_j}{\sum _jw_j}$$ (4) where $`F_i`$ is the flux of the final rebinned pixel, $`f_j`$ are the fluxes of the input pixels, and $`w_j`$ are the fractions of each original pixel within the new one.
It must be noted that for this particular case, in which the original and the final wavelength steps are very close to each other, this procedure provides results very similar to a simple linear interpolation. * The procedure to handle the sigma spectrum is similar, but uses the square of the errors instead: $$E_i=\sqrt{\frac{\sum _jw_je_j^2}{\sum _jw_j}}$$ (5) In this expression, $`E_i`$ and $`e_j`$ are the rebinned and original errors, respectively. * The $`\nu `$-spectrum is also re-computed. Each final pixel has the minimum (i.e. the “worst”) $`\nu `$ of the original pixels used in the rebinning. The number of flags in the rebinned spectrum is larger than in the original one, since every pixel contributes to at least two final pixels (e.g. a reseau mark originally flagged in two consecutive pixels would result in three flagged points in the resampled spectrum). It must be noted that the largest increase in the size of the wavelength step (which corresponds to the LWP camera) is 0.25%. The spectral resolution for this camera is 5.2 Å (1.95 pixels, Garhart et al. 1997). Consequently this rebinning does not introduce any significant degradation of the spectral resolution. ## 3 INES Data quality evaluation ### 3.1 Flux repeatability The repeatability of the INES low resolution spectra has been tested on a large sample of spectra of some of the IUE standard stars. The only restriction imposed has been to include only non-saturated spectra of similar levels of exposure (i.e. similar exposure times), in order to avoid the remaining non-linearity effects (see Section 3.3). The spectra cover the entire range of observing epochs and camera temperatures. Therefore it must be taken into account that the repeatability, as defined here, implicitly includes the uncertainties in the time-degradation and temperature corrections of the cameras. The study has been performed in 100 Å wide bands. Table 2 lists the central wavelength of the bands and the repeatability, defined as the percent rms with respect to the mean intensity of the band. The figures in brackets are the number of spectra considered in each case. As expected, the best repeatability is attained in the regions of maximum sensitivity of the cameras. In the SWP the repeatability is around 2% longward of 1400 Å. For the LWP camera, values lower than 3% are reached in the central part of the camera, 2400-3000 Å. At the extreme wavelengths the repeatability is around 15%. The results are slightly worse for LWR, most likely due to the instability of the camera after it ceased to be used for routine operations. The repeatability is 3–4% in the region 2300-3000 Å. Particularly bad is the 3300 Å band, but at the shortest wavelengths (1850-1950 Å) the repeatability is substantially better than in the LWP camera. When considering only images taken when LWR was the prime long wavelength camera, the repeatability is similar to that of LWP in the central part of the camera. ### 3.2 Reliability of extraction errors In addition to the flux spectrum, optimal extraction methods also provide an error spectrum. Formally, these errors only account for the uncertainties in the extraction procedure, based on the noise model of the detector. They do not include uncertainties driven by parameters affecting the image registration. During the processing, corrections are applied to account for the changes in temperature of the head amplifier of the cameras (THDA) and the loss of sensitivity of the detectors. All these corrections have their own uncertainties.
## 3 INES Data quality evaluation
### 3.1 Flux repeatability
The repeatability of the INES low resolution spectra has been tested on a large sample of spectra of some of the IUE standard stars. The only restriction imposed has been to include only non-saturated spectra with similar exposure levels (i.e. similar exposure times), in order to avoid the remaining non-linearity effects (see Section 3.3). The spectra cover the full range of observing epochs and camera temperatures. It must therefore be taken into account that the repeatability, as defined here, implicitly includes the uncertainties in the camera time-degradation and temperature corrections. The study has been performed in 100 Å wide bands. Table 2 lists the central wavelength of the bands and the repeatability, defined as the percentage rms with respect to the mean intensity of the band. The figures in brackets are the number of spectra considered in each case. As expected, the best repeatability is attained in the regions of maximum sensitivity of the cameras. In the SWP the repeatability is around 2% longward of 1400 Å. For the LWP camera, values lower than 3% are reached in the central part of the camera, 2400-3000 Å. At the extreme wavelengths the repeatability is around 15%. The results are slightly worse for LWR, most likely due to the instability of the camera after it ceased to be used for routine operations. The repeatability is between 3-4% in the region 2300-3000 Å. The 3300 Å band is particularly poor, but at the shortest wavelengths (1850-1950 Å) the repeatability is substantially better than in the LWP camera. When considering only images taken when LWR was the prime long wavelength camera, the repeatability is similar to that of LWP in the central part of the camera.
### 3.2 Reliability of extraction errors
In addition to the flux spectrum, optimal extraction methods also provide an error spectrum. Formally, these errors only account for the uncertainties in the extraction procedure, based on the noise model of the detector. They do not include uncertainties driven by parameters affecting the image registration. During the processing, corrections are applied to account for the changes in temperature of the head amplifier of the cameras (THDA) and the loss of sensitivity of the detectors. All these corrections have their own uncertainties. There are yet other systematic errors that affect the absolute fluxes, such as the uncertainty in the inverse sensitivity curve, but these do not affect the comparison of different sets of IUE spectra. The extraction errors can be used to compare fluxes in different bands of the same spectrum or to compute weighted averages of a set of spectra, but they may not be appropriate for evaluating the variability of a source or a spectral feature, due to the considerations given above. In order to check the statistical validity of the errors provided by the INES extraction, we have taken the same data set used in the previous section (i.e. a large sample of spectra of standard stars with similar exposure levels) and compared the rms around the mean with the average errors as given by the extraction procedure. In general, the extraction errors underestimate the errors represented by the rms in all three cameras, with the exception of the shortest wavelength end of the LWP camera. In the SWP camera the errors are underestimated by ∼20-40%, depending on the wavelength. In the LWR the ratio between extraction and actual errors is nearly constant (12%) all along the camera, while in the LWP the discrepancy can be as large as 40% at the longest wavelengths. In this camera, shortward of 2400 Å the extraction errors are too large by 15-20%. This region is very noisy and there are reasons to suspect that the noise departs significantly from Gaussian behaviour. It is also found that the dependence of the ratio STD/Error (where “STD” is the standard deviation around the mean spectrum, and “Error” is derived from the extraction errors) on wavelength can be well represented by a straight line with the coefficients shown in Table 3. Reliable values for these coefficients could not be obtained for the LWR camera operated at -4.5 kV, due to the scarcity of data. In order to compare fluxes in different spectra of the same object, the extraction errors must be modified according to
$$\epsilon (\lambda )=a+b\,\epsilon _\mathrm{E}(\lambda )$$ (6)
where $`a`$ and $`b`$ are the coefficients in Table 3 and $`\epsilon _\mathrm{E}(\lambda )`$ are the extraction errors. The results of the application of this correction are shown in Figs. 15, 16 and 17. The dispersion around the expected value of 1 is 0.15 for SWP, 0.17 for LWP and 0.18 for LWR. The structure still seen in these figures might be related to remaining non-linearities in the ITFs (see below).
### 3.3 Flux Linearity
Despite the correction applied during the processing of the IUE data through the Intensity Transfer Functions (ITFs), the final spectra are still affected by non-linearities to some degree. As a consequence, spectra of the same non-variable object observed with different exposure times might have slightly different flux levels. In order to evaluate the importance of the remaining non-linearities we have chosen, for each camera, a set of low resolution spectra of the standard star BD+28 4211, extracted with the INES system, obtained close in time under similar temperature conditions and with different exposure times. In each set one of the spectra is defined as the 100% exposure, and all the others are referred to it. The summary of the data used for each camera is as follows: * SWP: Nine spectra taken in December 1993, with exposure times ranging from 2 sec (9%) to 40 sec (150%). * LWP: Nine spectra taken in October 1986, with exposure times ranging from 10 sec (20%) to 100 sec (200%).
* LWR: Five spectra taken in August 1980, with exposure times ranging from 20 sec (30%) to 150 sec (250%). All these spectra have been processed with ITF-B (Garhart et al. 1997), which is the one giving the best correlation coefficient in this case, as for most pre-1984 LWR spectra. Each spectrum was binned into 100 Å bands and divided by the corresponding reference exposure. The results are summarized in Table 4. Examples of the behaviour of different spectral bands for each of the cameras, as a function of the level of exposure, are shown in Fig. 18. In the SWP camera the largest departures from linearity are found at the short wavelength end of the underexposed spectra, where the flux can be underestimated by up to 20%. Apart from this case, longward of Lyman $`\alpha `$ the ratios to the 100% spectrum are generally within ±5%. The best results are achieved in the 1800 Å band, where linearity is within ±3%. For the LWP camera the largest non-linearities occur at the extreme wavelengths (1900, 3300 Å), where the flux is largely overestimated. Except for these bands, linearity is within ±5% for spectra with exposure levels from 40% to 150%. In the saturated region of the most exposed spectrum the flux is overestimated by 10%. Excluding the saturated region, the bands which show the best linearity characteristics (within ±3%) are those centered at 2800 and 3000 Å. The LWR camera shows the largest non-linearities at the longest wavelengths of the underexposed spectra. Linearity remains within ±5% for exposure levels above 60%. The most linear bands are those centered at 2500, 2900 and 3100 Å. In the saturated part of the 200% spectrum, the flux is underestimated by approximately 10%. However, the flux is correct in the 170% spectrum, which is also saturated.
## 4 Summary
Within the framework of the development of the ESA INES Data Distribution System for the IUE Final Archive, IUE low dispersion spectra have been re-extracted from the bi-dimensional SILO files with new extraction software. INES implements a number of major modifications with respect to the SWET extraction applied to the early version of the IUE Final Archive. The improvements in INES concern the noise model, the optimal extraction method, the homogenization of the wavelength scale, and the flagging of the absolutely calibrated extracted spectra. * The noise models for the different cameras have been re-derived to correct anomalies at high and low exposure levels in those used in SWET. The new noise model results in a considerably more realistic estimate of the actual extraction errors in the IUE spectra. * The algorithms to compute the camera background and the extraction profile are more consistent with the nature of the IUE detectors and result in significantly improved data quality. * Weak, extended, or miscentered spectra are more adequately handled. The fluxes of strong emission lines in weak-continuum spectra are more reliable and consistent in the INES extraction. * The handling and propagation of quality flags to the final extracted spectra has been improved. This implies a larger number of flagged pixels, but also more complete information for the user about potential problems in the data. * A major improvement has been achieved in the removal of the solar contamination in LWP images taken after 1992. * In order to facilitate the use of the data, all the spectra of a given range (short and long) have been resampled to a common wavelength scale.
As a general rule, INES data are similar or superior to SWET data. Although the INES spectra may at times give a somewhat lower signal-to-noise ratio than those obtained through SWET (e.g. when boxcar extraction is required to maintain data validity), the INES extraction results in a higher reliability of the IUEFA data, allowing direct intercomparison of all low resolution spectra through an adequate treatment of errors, flags and warning messages in the image header. ###### Acknowledgements. We would like to acknowledge the support of all the VILSPA staff, who collaborated actively in the development of the INES system.
# Inelastic lifetimes of hot electrons in real metals
## Abstract
We report a first-principles description of inelastic lifetimes of excited electrons in real Cu and Al, which we compute, within the GW approximation of many-body theory, from the knowledge of the self-energy of the excited quasiparticle. Our full band-structure calculations indicate that actual lifetimes are the result of a delicate balance between localization, density of states, screening, and Fermi-surface topology. A major contribution from $`d`$-electrons participating in the screening of electron-electron interactions yields lifetimes of excited electrons in copper that are larger than those of electrons in a free-electron gas with the electron density equal to that of valence ($`4s^1`$) electrons. In aluminum, a simple metal with no $`d`$-bands, splitting of the band structure over the Fermi level results in electron lifetimes that are smaller than those of electrons in a free-electron gas. Electron dynamics in metals are well known to play an important role in a variety of physical and chemical phenomena, and low-energy excited electrons have been used as new probes of many-body decay mechanisms and chemical reactivity. Recently, the advent of time-resolved two-photon photoemission (TR-2PPE) has made it possible to obtain direct measurements of the lifetime of these so-called hot electrons in copper, other noble and transition metals, ferromagnetic solids, and high-$`T_c`$ superconductors. Also, ballistic electron emission spectroscopy (BEES) has been shown to be capable of determining hot-electron relaxation times in solid materials. An evaluation of the inelastic lifetime of excited electrons in the vicinity of the Fermi surface was first reported by Quinn and Ferrell, within a many-body free-electron description of the solid, showing that it is inversely proportional to the square of the energy of the quasiparticle measured with respect to the Fermi level. Since then, several free-electron calculations of electron-electron scattering rates have been performed, within the random-phase approximation (RPA) and with inclusion of exchange and correlation effects. Band structure effects were discussed by Quinn and Adler, and statistical approximations were applied by Tung et al. and by Penn. Nevertheless, there was no first-principles calculation of hot-electron lifetimes in real solids, and further theoretical work was needed for the interpretation of existing lifetime measurements and, in particular, to quantitatively account for the interplay between band-structure and many-body effects on electron relaxation processes. In this Letter we report results of a full ab initio evaluation of relaxation lifetimes of excited electrons in real solids. First, we expand the one-electron Bloch states in a plane-wave basis, and solve the Kohn-Sham equation of density-functional theory (DFT) by invoking the local-density approximation (LDA) for exchange and correlation. The electron-ion interaction is described by means of a non-local, norm-conserving ionic pseudopotential, and we use the one-electron Bloch states to evaluate the screened Coulomb interaction within a well-defined many-body framework, the RPA. We finally evaluate the lifetime from the knowledge of the imaginary part of the electron self-energy of the excited quasiparticle, which we compute within the so-called GW approximation of many-body theory. Let us consider an inhomogeneous electron system.
The damping rate of an excited electron in the state $`\varphi _0(\mathbf{r})`$ with energy $`E`$ is obtained as (we use atomic units throughout, i.e., $`e^2=\mathrm{}=m_e=1`$)
$$\tau^{-1}=-2\int d\mathbf{r}\int d\mathbf{r}^{\prime}\,\varphi_0^{*}(\mathbf{r})\,\mathrm{Im}\,\Sigma(\mathbf{r},\mathbf{r}^{\prime};E)\,\varphi_0(\mathbf{r}^{\prime}),$$ (1)
where $`\Sigma(\mathbf{r},\mathbf{r}^{\prime};E)`$ represents the electron self-energy. In the so-called GW approximation, only the first term of the expansion of the self-energy in the screened interaction is considered, and after replacing the Green function ($`G`$) by the zero-order approximation ($`G^0`$), one finds
$$\mathrm{Im}\,\Sigma(\mathbf{r},\mathbf{r}^{\prime};E)=\sum_f\varphi_f^{*}(\mathbf{r}^{\prime})\,\mathrm{Im}\,W(\mathbf{r},\mathbf{r}^{\prime};\omega)\,\varphi_f(\mathbf{r}),$$ (2)
where $`\omega=E-E_f`$ represents the energy transfer, the sum is extended over a complete set of final states $`\varphi_f(\mathbf{r})`$ with energy $`E_f`$ ($`E_F\le E_f\le E`$), $`E_F`$ is the Fermi energy, and $`W(\mathbf{r},\mathbf{r}^{\prime};\omega)`$ is the screened Coulomb interaction:
$$W(\mathbf{r},\mathbf{r}^{\prime};\omega)=\int d\mathbf{r}^{\prime\prime}\,\epsilon^{-1}(\mathbf{r},\mathbf{r}^{\prime\prime},\omega)\,v(\mathbf{r}^{\prime\prime}-\mathbf{r}^{\prime}).$$ (3)
Here, $`v(\mathbf{r}-\mathbf{r}^{\prime})`$ represents the bare Coulomb interaction, and $`\epsilon^{-1}(\mathbf{r},\mathbf{r}^{\prime},\omega)`$ is the inverse dielectric function of the solid, which we evaluate within the RPA. We introduce Fourier expansions appropriate for periodic crystals, and find
$$\tau^{-1}=\frac{1}{\pi^2}\sum_f\int_{\mathrm{BZ}}d\mathbf{q}\sum_{\mathbf{G}}\sum_{\mathbf{G}^{\prime}}\frac{B_{0f}^{*}(\mathbf{q}+\mathbf{G})\,B_{0f}(\mathbf{q}+\mathbf{G}^{\prime})}{\left|\mathbf{q}+\mathbf{G}\right|^{2}}\,\mathrm{Im}\left[-\epsilon_{\mathbf{G},\mathbf{G}^{\prime}}^{-1}(\mathbf{q},\omega)\right],$$ (5)
where
$$B_{0f}(\mathbf{q})=\int\mathrm{d}^3\mathbf{r}\,\varphi_0^{*}(\mathbf{r})\,\mathrm{e}^{-\mathrm{i}\mathbf{q}\cdot\mathbf{r}}\,\varphi_f(\mathbf{r}),$$ (6)
and where $`\epsilon_{\mathbf{G},\mathbf{G}^{\prime}}^{-1}(\mathbf{q},\omega)`$ represent the Fourier coefficients of the inverse dielectric function of the crystal. $`\mathbf{G}`$ and $`\mathbf{G}^{\prime}`$ are reciprocal lattice vectors, and the integration over $`\mathbf{q}`$ is extended over the first Brillouin zone (BZ). In particular, if couplings of the wave vector $`\mathbf{q}+\mathbf{G}`$ to wave vectors $`\mathbf{q}+\mathbf{G}^{\prime}`$ with $`\mathbf{G}\ne\mathbf{G}^{\prime}`$, i.e., the so-called crystalline local-field effects, are neglected, one can write:
$$\tau^{-1}=\frac{1}{\pi^2}\sum_f\int_{\mathrm{BZ}}d\mathbf{q}\sum_{\mathbf{G}}\frac{\left|B_{0f}(\mathbf{q}+\mathbf{G})\right|^{2}}{\left|\mathbf{q}+\mathbf{G}\right|^{2}}\,\frac{\mathrm{Im}\left[\epsilon_{\mathbf{G},\mathbf{G}}(\mathbf{q},\omega)\right]}{|\epsilon_{\mathbf{G},\mathbf{G}}(\mathbf{q},\omega)|^{2}}.$$ (7)
If all one-electron Bloch states entering both the matrix elements $`B_{0f}(\mathbf{q}+\mathbf{G})`$ and the dielectric function $`\epsilon_{\mathbf{G},\mathbf{G}}(\mathbf{q},\omega)`$ are represented by plane waves, then Eq. (7) exactly coincides with the GW formula for the scattering rate of excited electrons in a free-electron gas (FEG), as obtained by Quinn and Ferrell and by Ritchie. In the case of electrons with energy very near the Fermi energy ($`E\to E_F`$), the phase space available for real transitions is simply $`E-E_F`$, which yields the well-known $`(E-E_F)^2`$ scaling of the scattering rate. In the high-density limit ($`r_s\ll 1`$), one finds
$$\tau _{QF}=263\,r_s^{-5/2}\,(E-E_F)^{-2}\ \mathrm{eV}^2\,\mathrm{fs}.$$ (8)
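For orientation, Eq. (8) is simple to evaluate numerically. The sketch below is ours; the $`r_s`$ values are the free-electron-gas density parameters quoted for Cu and Al later in the text, and the function name is illustrative:

```python
def tau_quinn_ferrell(rs, e_minus_ef):
    """Quinn-Ferrell lifetime of Eq. (8), in femtoseconds.

    rs         : electron-gas density parameter
    e_minus_ef : quasiparticle energy above the Fermi level, in eV
    """
    return 263.0 * rs**(-2.5) / e_minus_ef**2

# Free-electron-gas reference values used in the text:
for rs, label in [(2.67, "Cu 4s^1"), (2.07, "Al")]:
    for e in (1.0, 2.0, 3.5):
        print(f"{label}: E-E_F = {e:3.1f} eV -> tau_QF = "
              f"{tau_quinn_ferrell(rs, e):6.1f} fs")
```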
The hot-electron decay in real solids depends on both the wave vector $`\mathbf{k}`$ and the band index $`n`$ of the initial Bloch state $`\varphi_0(\mathbf{r})=\mathrm{e}^{\mathrm{i}\mathbf{k}\cdot\mathbf{r}}u_{\mathbf{k},n}(\mathbf{r})`$. We have evaluated hot-electron lifetimes along various directions of the wave vector, and have found that scattering rates of low-energy electrons are strongly direction dependent. Since measurements of hot-electron lifetimes have been reported as a function of energy, we here focus on the evaluation of $`\tau^{-1}(E)`$, which we obtain by averaging $`\tau^{-1}(\mathbf{k},n)`$ over all wave vectors and bands lying in the irreducible element of the Brillouin zone (IBZ) with the same energy. The results presented below have been found to be well converged for all hot-electron energies under study ($`E-E_F=1.0-3.5\,\mathrm{eV}`$), and they have all been performed with inclusion of conduction bands up to a maximum energy of $`25\,\mathrm{eV}`$ above the Fermi level. The sampling of the BZ required for the evaluation of both the dielectric matrix and the hot-electron decay rate of Eqs. (5) and (7) has been performed on $`16\times 16\times 16`$ Monkhorst-Pack meshes. The sums in Eqs. (5) and (7) have been extended over 15 $`\mathbf{G}`$ vectors of the reciprocal lattice, the magnitude of the maximum momentum transfer $`\mathbf{q}+\mathbf{G}`$ being well over the upper limit of $`2q_F`$ ($`q_F`$ is the Fermi momentum).
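The $`16\times 16\times 16`$ Monkhorst-Pack sampling quoted above follows the standard prescription, which can be sketched generically as follows (our illustration, not the authors' implementation):

```python
import numpy as np

def monkhorst_pack(n1, n2, n3):
    """Fractional k-points of an n1 x n2 x n3 Monkhorst-Pack mesh.

    The points along each reciprocal-lattice direction are
    u_r = (2r - n - 1) / (2n) for r = 1..n, as in the original
    Monkhorst & Pack (1976) prescription.
    """
    grids = [(2 * np.arange(1, n + 1) - n - 1) / (2.0 * n)
             for n in (n1, n2, n3)]
    return np.array(np.meshgrid(*grids, indexing="ij")).reshape(3, -1).T

kpts = monkhorst_pack(16, 16, 16)   # the 16x16x16 mesh used in the text
print(kpts.shape)                   # (4096, 3) fractional coordinates
```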
Our ab initio calculation of the average lifetime $`\tau(E)`$ of hot electrons in Cu, as obtained from Eq. (5) with full inclusion of crystalline local-field effects, is presented in Fig. 1 by solid circles. The lifetime of hot electrons in a FEG with the electron density equal to that of valence ($`4s^1`$) electrons in copper ($`r_s=2.67`$) is exhibited in the same figure by a solid line. Both calculations have been carried out within one and the same many-body framework, the RPA; thus, the ratio between our calculated ab initio lifetimes and the corresponding FEG calculations unambiguously establishes the impact of the band structure of the crystal on the hot-electron decay. Our ab initio calculations indicate that the lifetime of hot electrons in copper is, within the RPA, larger than that of electrons in a FEG with $`r_s=2.67`$, this enhancement varying from a factor of ∼2.5 near the Fermi level ($`E-E_F=1.0\,\mathrm{eV}`$) to a factor of ∼1.5 for $`E-E_F=3.5\,\mathrm{eV}`$. We have also performed calculations of the lifetime of hot electrons in copper by keeping just the $`4s^1`$ Bloch states as valence electrons in the pseudopotential generation. The result of this calculation nearly coincides with the FEG calculations, showing the key role that $`d`$-states play in the electron-decay mechanism. First of all, we focus on the role that both localization and the density of states (DOS) available for real excitations play in the hot-electron lifetime. Hence, we neglect crystalline local-field effects and evaluate hot-electron lifetimes from Eq. (7) by replacing the electron initial and final states in $`\left|B_{0f}(\mathbf{q}+\mathbf{G})\right|^2`$ by plane waves, and the dielectric function in $`\left|\epsilon_{\mathbf{G},\mathbf{G}}(\mathbf{q},\omega)\right|^2`$ by that of a FEG with $`r_s=2.67`$. The result we obtain, with full inclusion of the band structure of the crystal in the evaluation of $`\mathrm{Im}\left[\epsilon_{\mathbf{G},\mathbf{G}}(\mathbf{q},\omega)\right]`$, is represented in Fig. 1 by open circles. Since the states just below the Fermi level, which are available for real transitions, have a small but significant $`d`$-component, they are more localized than pure $`sp`$-states. Hence, their overlap with states above the Fermi level is smaller than in the case of free-electron states, and localization results in lifetimes of electrons with $`E-E_F<2\,\mathrm{eV}`$ that are slightly larger than predicted within the FEG model of the metal (solid line). At larger energies this band-structure calculation predicts a lower lifetime than the FEG model, due to the opening of the $`d`$-band scattering channel, which dominates the DOS at energies more than ∼2.0 eV below the Fermi level. While the excitation of $`d`$-electrons diminishes the lifetime of electrons with energies $`E-E_F>2\,\mathrm{eV}`$, $`d`$-electrons also give rise to additional screening, thus increasing the lifetime of all electrons above the Fermi level. That this is the case is obvious from our band-structure calculation exhibited by full triangles in Fig. 1. This calculation is the result obtained from Eq. (7) by still replacing the hot-electron initial and final states in $`\left|B_{0f}(\mathbf{q}+\mathbf{G})\right|^2`$ by plane waves (plane-wave calculation), but including the full band structure of the crystal in the evaluation of both $`\mathrm{Im}\left[\epsilon_{\mathbf{G},\mathbf{G}}(\mathbf{q},\omega)\right]`$ and $`\left|\epsilon_{\mathbf{G},\mathbf{G}}(\mathbf{q},\omega)\right|^2`$. The effect of virtual interband transitions giving rise to additional screening is to increase the lifetime, for the energies under study, by a factor of ∼3, in qualitative agreement with the approximate prediction of Quinn. Finally, we investigate band-structure effects on hot-electron energies and wave functions. We have performed band-structure calculations of Eq. (5) with and without (see also Eq. (7)) the inclusion of crystalline local-field corrections, and we have found that these corrections are negligible for $`E-E_F>1.5\,\mathrm{eV}`$, while for energies very near the Fermi level the neglect of these corrections results in an overestimation of the lifetime of less than 5%. Therefore, differences between our full (solid circles) and plane-wave (solid triangles) calculations come from the sensitivity of the hot-electron initial and final states to the band structure of the crystal. When the hot-electron energy is well above the Fermi level, these states are very nearly plane-wave states for most orientations of the wave vector, and the lifetime is well described by plane-wave calculations (solid circles and triangles nearly coincide for $`E-E_F>2.5\,\mathrm{eV}`$). However, in the case of hot-electron energies near the Fermi level, the initial and final states depend strongly on the orientation of the wave vector and on the shape of the Fermi surface. While the lifetime of hot electrons with the wave vector along the necks of the Fermi surface, in the Γ-L direction, is found to be longer than the averaged lifetime by up to 80%, flattening of the Fermi surface along the Γ-K direction is found to increase the hot-electron decay rate by up to 10% (see also Ref.). Since for most orientations the Fermi surface is flattened, Fermi-surface shape effects tend to decrease the average inelastic lifetime. An opposite behaviour occurs for hole states with a strong $`d`$-character below the Fermi level, the localization of these states strongly increasing the hole lifetime. Our band-structure calculation of hot-electron lifetimes in Al, as obtained from Eq. (5), is exhibited in the inset of Fig. 1 by solid circles, together with lifetimes of hot electrons in a FEG with $`r_s=2.07`$ (solid line). Since aluminum is a simple metal with no $`d`$-bands, $`\mathrm{Im}\left[\epsilon_{\mathbf{G},\mathbf{G}^{\prime}}^{-1}(\mathbf{q},\omega)\right]`$ is well described within a free-electron model, and band-structure effects only enter through the sensitivity of the hot-electron initial and final wave functions to the band structure of the crystal.
Due to splitting of the band structure over the Fermi level, new decay channels are opened, and band-structure effects now tend to decrease the hot-electron lifetime by a factor that varies from ∼0.65 near the Fermi level ($`E-E_F=1\,\mathrm{eV}`$) to ∼0.75 for $`E-E_F=3\,\mathrm{eV}`$. Scaled lifetimes of hot electrons in Cu, as determined from the most recent TR-2PPE experiments, are represented in Fig. 2, together with our calculated lifetimes of hot electrons in the real crystal (solid circles) and in a FEG with $`r_s=2.67`$, either within the full RPA (solid line) or with use of Eq. (8) (dashed line). Though there are large discrepancies among results obtained in different laboratories, most experiments give lifetimes that are considerably longer than predicted within a free-electron description of the metal. At $`E-E_F<2\,\mathrm{eV}`$, our calculations are close to the lifetimes recently measured by Knoesel et al. in the very low energy range. At larger electron energies, good agreement between our band-structure calculations and experiment is obtained for Cu(110), where no band gap exists in the $`\mathbf{k}_{\parallel}=0`$ direction. However, one must be cautious in the interpretation of TR-2PPE lifetime measurements, since they may be sensitive to electron transport away from the probed surface and also to the presence of the hole left behind in the photoexcitation process. In the case of injected electrons in BEES experiments no hole is present. A careful analysis of electron-electron mean free paths in these experiments has shown a $`(E-E_F)^{-2}`$ dependence of hot-electron lifetimes in the noble metals, with an overall enhancement with respect to those predicted by Eq. (8) by a factor of ∼2, in agreement with our band-structure calculations. We note that our calculated lifetimes (solid circles) approximately scale as $`(E-E_F)^{-2}`$, as a result of two competing effects. As the energy increases, hot-electron lifetimes in a FEG (solid line) are known to be larger than those predicted by Eq. (8) (dashed line), and this enhancement of the lifetime is nearly compensated by the reduction, at energies well over the Fermi level, of band-structure effects. In conclusion, we have performed full RPA band-structure calculations of hot-electron inelastic lifetimes in real solids, and have demonstrated that decay rates of excited electrons strongly depend on details of the electronic band structure. In the case of Cu, a subtle competition between localization, density of states, screening, and Fermi-surface topology results in hot-electron lifetimes that are larger than those of electrons in a FEG, and good agreement is obtained, for $`E-E_F>2\,\mathrm{eV}`$, with observed lifetimes in Cu(110). For Al, interband transitions over the Fermi level yield hot-electron lifetimes that are smaller than those of electrons in a FEG. We acknowledge partial support by the University of the Basque Country, the Basque Unibertsitate eta Ikerketa Saila, and the Spanish Ministerio de Educación y Cultura.
# Conducting phase in the two-dimensional disordered Hubbard model
## Abstract
We study the temperature-dependent conductivity $`\sigma (T)`$ and spin susceptibility $`\chi (T)`$ of the two-dimensional disordered Hubbard model. Calculations of the current-current correlation function using the Determinant Quantum Monte Carlo method show that repulsion between electrons can significantly enhance the conductivity, and at low temperatures change the sign of $`d\sigma /dT`$ from positive (insulating behavior) to negative (conducting behavior). This result suggests the possibility of a metallic phase, and consequently a metal–insulator transition, in a two-dimensional microscopic model containing both repulsive interactions and disorder. The metallic phase is a non-Fermi liquid with local moments, as deduced from a Curie-like temperature dependence of $`\chi (T)`$. When electrons are confined to two spatial dimensions in a disordered environment, the common understanding until recently was that the electronic states would always be localized and the system would therefore be an insulator. This idea is based on the scaling theory of localization for non-interacting electrons and corroborated by subsequent studies using renormalization group (RG) methods. The scaling theory highlights the importance of the number of spatial dimensions and demonstrates that, while in three dimensions for non-interacting electrons there exists a transition from a metal to an Anderson insulator upon increasing the amount of disorder, a similar metal–insulator transition (MIT) is not possible in two dimensions. The inclusion of interactions into the theory has been problematic, certainly when both disorder and interactions are strong and perturbative approaches break down. Following the scaling theory, the effect of weak interactions in the presence of weak disorder was studied by diagrammatic techniques and found to increase the tendency to localize. Subsequent perturbative RG calculations, including both electron-electron interactions and disorder, found indications of metallic behavior but also, for the case without a magnetic field or magnetic impurities, found runaway flows to strong coupling outside the controlled perturbative regime, and were therefore not conclusive. The results of such approaches have thus not changed the widely held opinion that in 2D the MIT does not occur. The situation changed dramatically with the recent transport experiments on effectively 2D electron systems in silicon metal-oxide-semiconductor field-effect transistors (MOSFETs), which have provided surprising evidence that a MIT can indeed occur in 2D. In these experiments the temperature dependence of the conductivity $`\sigma _{\mathrm{dc}}`$ changes from that typical of an insulator (decrease of $`\sigma _{\mathrm{dc}}`$ upon lowering $`T`$) at lower density to that typical of a conductor (increase of $`\sigma _{\mathrm{dc}}`$ upon lowering $`T`$) as the density is increased above a critical density. The fact that the data can be scaled onto two curves (one for the metal, one for the insulator) is seen as evidence for the occurrence of a quantum phase transition with carrier density $`n`$ as the tuning parameter. The possibility of such a transition has stimulated a large number of further experimental and also theoretical investigations, including proposals that a superconducting state is involved. Explanations in terms of trapping of electrons at impurities, i.e.
not requiring a quantum phase transition, have also been put forward. While there is no definitive explanation of the phenomena yet, it is likely that electron-electron interactions play an important role. The central question motivated by the experiments is: can electron-electron interactions enhance the conductivity of a 2D disordered electron system, and possibly lead to a conducting phase and a metal–insulator transition? It is this question that we address by studying the disordered Hubbard model, which contains both relevant ingredients: interactions and disorder. While the Hubbard model does not include the long-range nature of the Coulomb repulsion, studying this simpler model of screened interactions is an important first step in answering the central qualitative question posed above. We use Quantum Monte Carlo simulation techniques, which enable us to avoid the limitations of perturbative approaches (while of course being confronted with others). We mention that recent studies using very different techniques from ours have also indicated that interactions may enhance conductivity: placing two interacting particles, instead of one, in a random potential has a delocalizing effect, and weak Coulomb interactions were found to increase the conductance of spinless electrons in (small) strongly disordered systems. The disordered Hubbard model that we study is defined by:
$$\widehat{H}=-\sum_{\langle i,j\rangle,\sigma}t_{ij}\,c^{\dagger}_{i\sigma}c_{j\sigma}+U\sum_j n_{j\uparrow}n_{j\downarrow}-\mu\sum_{j,\sigma}n_{j\sigma},$$ (1)
where $`c_{j\sigma}`$ is the annihilation operator for an electron at site $`j`$ with spin $`\sigma`$. $`t_{ij}`$ is the nearest-neighbor hopping integral, $`U`$ is the on-site repulsion between electrons of opposite spin, $`\mu`$ the chemical potential, and $`n_{j\sigma}=c^{\dagger}_{j\sigma}c_{j\sigma}`$ is the occupation number operator. Disorder is introduced by taking the hopping parameters $`t_{ij}`$ from a probability distribution $`P(t_{ij})=1/\Delta`$ for $`t_{ij}\in[1-\Delta/2,\,1+\Delta/2]`$, and zero otherwise. $`\Delta`$ is a measure of the strength of the disorder. We use the Determinant Quantum Monte Carlo (QMC) method, which has been applied extensively to the Hubbard model without disorder. While disorder and interaction can be varied in a controlled way and strong interactions are treatable, QMC is limited in the size of the lattice, and the sign problem restricts the temperatures which can be studied. The sign problem is minimized by choosing off-diagonal rather than diagonal disorder, as at least at half-filling ($`n=1`$) there is no sign problem in the former case, and consequently simulations can be pushed to significantly lower temperatures. For results away from half filling we choose $`n=0.5`$, where the sign problem is less severe compared to other densities. Also, interestingly, the sign problem is reduced in the presence of disorder.
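Before turning to the observables, the disorder model of Eq. (1) is easy to make concrete. The following is a minimal sketch of ours, not the Determinant QMC production code; the lattice size, random seed, and function names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hoppings(L, delta):
    """Nearest-neighbor hoppings t_ij drawn uniformly from
    [1 - delta/2, 1 + delta/2] on an L x L square lattice with
    periodic boundaries, one value per bond (x- and y-directed)."""
    return rng.uniform(1 - delta / 2, 1 + delta / 2, size=(2, L, L))

def kinetic_matrix(t):
    """One-particle hopping matrix, -t_ij between neighboring sites,
    for a single spin species."""
    _, L, _ = t.shape
    idx = lambda x, y: (x % L) * L + (y % L)
    K = np.zeros((L * L, L * L))
    for x in range(L):
        for y in range(L):
            for d, (dx, dy) in enumerate([(1, 0), (0, 1)]):
                i, j = idx(x, y), idx(x + dx, y + dy)
                K[i, j] = K[j, i] = -t[d, x, y]
    return K

K = kinetic_matrix(random_hoppings(8, 2.0))   # 8x8 lattice, Delta = 2
print(np.linalg.eigvalsh(K)[:4])              # lowest single-particle levels
```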
The quantity of immediate interest when studying a possible metal–insulator transition is the conductivity, and especially its $`T`$-dependence. By the fluctuation–dissipation theorem $`\sigma _{\mathrm{dc}}`$ is related to the zero-frequency limit of the current-current correlation function. A complication of the QMC simulations is that the correlation functions are obtained as functions of imaginary time. To avoid a numerical analytic continuation procedure to obtain frequency-dependent quantities, which would require Monte Carlo data of higher accuracy than produced in the present study, we employ an approximation that was used and tested before in studies of the superconductor–insulator transition in the attractive Hubbard model. This approximation is valid when the temperature is smaller than an appropriate energy scale in the problem. The approximation allows $`\sigma _{\mathrm{dc}}`$ to be computed directly from the wavevector $`\mathbf{q}`$- and imaginary-time $`\tau`$-dependent current-current correlation function $`\Lambda_{xx}(\mathbf{q},\tau)`$:
$$\sigma _{\mathrm{dc}}=\frac{\beta ^2}{\pi }\,\Lambda_{xx}(\mathbf{q}=0,\tau =\beta /2).$$ (2)
Here $`\beta =1/T`$, $`\Lambda_{xx}(\mathbf{q},\tau)=\langle j_x(\mathbf{q},\tau)\,j_x(-\mathbf{q},0)\rangle`$, and $`j_x(\mathbf{q},\tau)`$, the $`\mathbf{q},\tau`$-dependent current in the $`x`$-direction, is the Fourier transform of $`j_x(\ell)=i\sum_\sigma t_{\ell+\hat{x},\ell}\,(c^{\dagger}_{\ell+\hat{x},\sigma}c_{\ell\sigma}-c^{\dagger}_{\ell\sigma}c_{\ell+\hat{x},\sigma})`$ (see also Ref.). As a test for our conductivity formula (2) we first present results in Fig. 1(a) for $`\sigma _{\mathrm{dc}}(T)`$ at half-filling for $`U=4`$ and various disorder strengths $`\Delta`$. The behavior of the conductivity shows that, as the temperature is lowered below a characteristic gap energy, the high-$`T`$ “metallic” behavior crosses over to the expected low-$`T`$ Mott insulating behavior for all $`\Delta`$, thereby providing a reassuring check of formula (2) and our numerics. In Fig. 1(b), we show $`\sigma _{\mathrm{dc}}(T)`$ for a range of disorder strengths at density $`n=0.5`$ and $`U=4`$. The figure displays a striking indication of a change from metallic behavior at low disorder to insulating behavior above a critical disorder strength, $`\Delta_\mathrm{c}\approx 2.7`$. If this persists to $`T=0`$ and in the thermodynamic limit, it would describe a ground-state metal–insulator transition driven by disorder. In order to obtain a more precise understanding of the role of interactions on the conductivity, we compare in Fig. 2 the results of Fig. 1(b) with the disordered non-interacting $`\sigma _0`$. The comparison is made at strong enough disorder, $`\Delta=2.0`$, such that the localization length is less than the lattice size and the non-interacting system is therefore insulating with $`d\sigma _0/dT>0`$ at low $`T`$. Interactions are found to have a profound effect on the conductivity: in the high-temperature “metallic” region, interactions slightly reduce $`\sigma`$ compared to the non-interacting $`\sigma _0`$ behavior. On the other hand, in the low-temperature “insulating” region of $`\sigma _0`$, the data show that upon turning on the Hubbard interaction the behavior is completely changed, with $`d\sigma /dT<0`$, characteristic of metallic behavior. This is the regime of interest for the MIT.
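For illustration, Eq. (2) and the sign-of-slope diagnostic used above reduce to a few lines. In this sketch of ours, `lam_xx` stands for an interpolator built from the binned QMC correlator, and all names are assumptions of the example:

```python
import numpy as np

def sigma_dc(lam_xx, beta):
    """Eq. (2): sigma_dc = (beta**2 / pi) * Lambda_xx(q=0, tau=beta/2).

    lam_xx : callable tau -> Lambda_xx(q=0, tau), e.g. an interpolator
             built from the binned QMC measurements
    beta   : inverse temperature 1/T
    """
    return beta**2 / np.pi * lam_xx(beta / 2.0)

def phase_from_slope(temps, sigmas):
    """Sign of d(sigma)/dT over a set of temperatures: a negative
    slope is the 'metallic' signature, a positive one 'insulating'."""
    slope = np.polyfit(temps, sigmas, 1)[0]
    return "metallic" if slope < 0 else "insulating"
```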
In order to ascertain that the phase produced by repulsive interactions at low $`T`$ is not an insulating phase with a localization length larger than the system size, but a true metallic phase, we have studied the conductivity response for varying lattice sizes. We find a markedly different size dependence for the $`U=0`$ insulator and the $`U=4`$ metal, resulting in a confirmation of the picture given above. For $`U=0`$, the conductivity on a larger ($`12\times 12`$) system is lower than that on a smaller ($`8\times 8`$) system (see Fig. 2), consistent with insulating behavior in the thermodynamic limit, whereas for $`U=4`$ the conductivity on the larger ($`8\times 8`$) system is higher than that on the smaller ($`4\times 4`$) system (data not shown), indicative of metallic behavior. Thus the enhancement of the conductivity by repulsive interactions becomes more pronounced with increased lattice size. Concerning finite-size effects for the non-interacting system, we note that at lower values of $`\Delta`$, where the localization length exceeds the lattice size, $`\sigma _0`$ shows “metallic” behavior which is diminished upon turning on the interactions. Based on our analysis above, we would predict that at low enough $`T`$ and large enough lattice size, the conductivity curves for the non-interacting $`\sigma _0`$ and interacting $`\sigma`$ cross, with $`\sigma >\sigma _0`$ at sufficiently low $`T`$. To obtain information on the spin dynamics of the electrons, and because it is a quantity often discussed in connection with the localization transition, we also compute the spin susceptibility $`\chi`$ as a function of $`T`$ (through $`\chi (T)=\beta S_0(T)`$, where $`S_0`$ is the magnetic structure factor at wavevector $`\mathbf{q}=0`$). Fig. 3 shows two things: 1) $`\chi (T)`$ is enhanced by interactions with respect to the non-interacting case (at fixed disorder strength), and 2) $`\chi (T)`$ starts to diverge as $`T`$ is lowered, both on the metallic ($`\Delta=2`$) and insulating ($`\Delta=4`$) sides of the alleged transition. This is in agreement with experimental and theoretical work on phosphorus-doped silicon, where a (3D) MIT is known to occur and the behavior is explained by the existence of local moments, and also with diagrammatic work on 2D disordered, interacting systems. Definitively establishing the existence of a possible quantum phase transition in the disordered Hubbard model requires: (i) extending the present data at $`T=0.1=W/80`$, where $`W`$ is the non-interacting bandwidth, to lower $`T`$, which is however difficult because of the sign problem; (ii) a more detailed analysis of the scaling behavior in both linear dimension and some scaled temperature; and (iii) a more accurate analytic continuation procedure to extract the conductivity. The condition for the validity of the approximate formula (2) for $`\sigma_{\mathrm{dc}}(T)`$ requires that $`T`$ be less than an appropriate energy scale, which is fulfilled within the two phases, but breaks down close to a quantum phase transition, where the energy scale vanishes. In summary, we have studied the temperature-dependent conductivity $`\sigma (T)`$ and spin susceptibility $`\chi (T)`$ of a model for two-dimensional electrons containing both disorder and interactions. We find that the Hubbard repulsion can enhance the conductivity and lead to a clear change in sign of $`d\sigma /dT`$. More significantly, from a finite-size scaling analysis we demonstrate that repulsive interactions can drive the system from one phase to a different phase. We find that $`\sigma (T)`$ has the opposite behavior as a function of system size in the two phases, indicating that the transition is from a localized insulating phase to an extended metallic phase. The $`\chi (T)`$ data further suggest the formation of an unusual metal, a non-Fermi liquid with local moments.
While the simplicity of the model we study prevents any quantitative connection to recent experiments on Si-MOSFETs, there is nevertheless an interesting qualitative similarity between Fig. 1(b) and the experiments. Varying the disorder strength $`\Delta`$ at fixed carrier density $`n`$, as in our calculations, can be thought of as equivalent to varying the carrier density at fixed disorder strength, as in experiments, since in a metal–insulator transition one expects no qualitative difference between tuning the mobility edge through the Fermi energy (by varying $`\Delta`$) and vice versa (by varying $`n`$). Our work then suggests that electron-electron interaction-induced conductivity plays a key role in the 2D metal–insulator transition. We would like to thank C. Huscroft for useful comments on the manuscript, H.V. Kruis for help with the calculations, and D. Belitz, R.N. Bhatt, C. Di Castro, T.R. Kirkpatrick, T.M. Klapwijk, S. V. Kravchenko, M.P. Sarachik, and G.T. Zimanyi for stimulating discussions. Work at UCD was supported by the SDSC, by the CLC program of UCOP, and by the LLNL Materials Institute.
# A dynamical property unique to the Lucas sequence
## 1. Introduction
A dynamical system is taken here to mean a homeomorphism
$$f:X\to X$$
of a compact metric space $`X`$ (though the observations here apply equally well to any bijection on a set). The number of points with period $`n`$ under $`f`$ is
$$\mathrm{Per}_n(f)=\#\{x\in X\mid f^nx=x\},$$
and the number of points with least period $`n`$ under $`f`$ is
$$\mathrm{LPer}_n(f)=\#\{x\in X\mid \#\{f^kx\}_{k\in\mathbb{Z}}=n\}.$$
There are two basic properties that the resulting sequences $`\left(\mathrm{Per}_n(f)\right)`$ and $`\left(\mathrm{LPer}_n(f)\right)`$ must satisfy if they are finite. Firstly, the set of points with period $`n`$ is the disjoint union of the sets of points with least period $`d`$ for each divisor $`d`$ of $`n`$, so
$$\mathrm{Per}_n(f)=\sum_{d|n}\mathrm{LPer}_d(f).$$ (1)
Secondly, if $`x`$ is a point with least period $`d`$, then the $`d`$ distinct points $`x,f(x),f^2(x),\mathrm{\dots},f^{d-1}(x)`$ are all points with least period $`d`$, so
$$0\le\mathrm{LPer}_d(f)\equiv 0\text{ mod }d.$$ (2)
Equation (1) may be inverted via the Möbius inversion formula to give
$$\mathrm{LPer}_n(f)=\sum_{d|n}\mu (n/d)\,\mathrm{Per}_d(f),$$ (3)
where $`\mu (\cdot)`$ is the Möbius function defined by
$$\mu (n)=\begin{cases}1&\text{ if }n=1,\\ 0&\text{ if }n\text{ has a squared factor, and}\\ (-1)^r&\text{ if }n\text{ is a product of }r\text{ distinct primes.}\end{cases}$$
A short proof of the inversion formula may be found in \[4, Section 2.6\]. Equation (2) therefore implies that
$$0\le\sum_{d|n}\mu (n/d)\,\mathrm{Per}_d(f)\equiv 0\text{ mod }n.$$ (4)
Indeed, equation (4) is the only condition on periodic points in dynamical systems: define a given sequence of non-negative integers $`\left(U_n\right)`$ to be exactly realizable if there is a dynamical system $`f:X\to X`$ with $`U_n=\mathrm{Per}_n(f)`$ for all $`n\ge 1`$. Then $`\left(U_n\right)`$ is exactly realizable if and only if
$$0\le\sum_{d|n}\mu (n/d)\,U_d\equiv 0\text{ mod }n\quad\text{ for all }n\ge 1,$$
since the realizing map may be constructed as an infinite permutation, using the quantities $`\frac{1}{n}\sum_{d|n}\mu (n/d)U_d`$ to determine the number of cycles of length $`n`$ (this bookkeeping is sketched in code below). Our purpose here is to study sequences of the form
$$U_{n+2}=U_{n+1}+U_n,\quad n\ge 1,\quad U_1=a,\ U_2=b,\quad a,b>0$$ (5)
with the distinguished Fibonacci sequence denoted $`(F_n)`$, so
$$U_n=aF_{n-2}+bF_{n-1}\quad\text{ for }n\ge 3.$$ (6)
###### Theorem 1. The sequence $`\left(U_n\right)`$ defined by (5) is exactly realizable if and only if $`b=3a`$. This result has two parts: the existence of the realizing dynamical system is described first, which gives many modular corollaries concerning the Fibonacci numbers. One of these is used in the obstruction part of the result later. The realizing system is (essentially) a very familiar and well-known system, the golden-mean shift. The fact that (up to scalar multiples) the Lucas sequence $`(L_n)`$ is the only exactly realizable sequence satisfying the Fibonacci recurrence relation to some extent explains the familiar observation that $`(L_n)`$ satisfies a great array of congruences. Throughout, $`n`$ will denote a positive integer and $`p,q`$ distinct prime numbers.
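The inversion (3) and the realizability criterion (4) are straightforward to check by machine. The following is a small self-contained sketch of ours (not from the paper; the function names are illustrative):

```python
def mobius(n):
    """Mobius function mu(n), by trial division."""
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # n has a squared factor
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

def lper_from_per(per):
    """Mobius inversion (3): per = [Per_1, Per_2, ...] -> [LPer_1, ...]."""
    return [sum(mobius(n // d) * per[d - 1]
                for d in range(1, n + 1) if n % d == 0)
            for n in range(1, len(per) + 1)]

def exactly_realizable(seq):
    """Criterion (4): every LPer_n must be non-negative and divisible
    by n; LPer_n / n is then the number of n-cycles to use when
    building the realizing infinite permutation."""
    return all(x >= 0 and x % n == 0
               for n, x in enumerate(lper_from_per(seq), start=1))
```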
## 2. Existence
An excellent introduction to the family of dynamical systems from which the example comes is the recent book by Lind and Marcus. Let
$$X=\{\mathbf{x}=(x_k)\in\{0,1\}^{\mathbb{Z}}\mid x_k=1\Rightarrow x_{k+1}=0\text{ for all }k\in\mathbb{Z}\}.$$
The set $`X`$ is a compact metric space in a natural metric (see \[2, Chapter 6\] for the details). The set $`X`$ may also be thought of as the set of all (infinitely long in both past and future) itineraries of a journey involving two locations ($`0`$ and $`1`$), obeying the rule that from $`1`$ you must travel to $`0`$, and from $`0`$ you may travel to either $`0`$ or $`1`$. Define the homeomorphism $`f:X\to X`$ to be the left shift,
$$(f(\mathbf{x}))_k=x_{k+1}\text{ for all }k\in\mathbb{Z}.$$
The dynamical system $`f:X\to X`$ is a simple example of a subshift of finite type. It is easy to check that the number of points of period $`n`$ under this map is given by
$$\mathrm{Per}_n(f)=\mathrm{trace}\left(A^n\right)$$ (7)
where $`A=\left[\begin{array}{cc}1&1\\ 1&0\end{array}\right]`$ (see \[2, Proposition 2.2.12\]; the $`0`$-$`1`$ entries in the matrix $`A`$ correspond to the allowed transitions $`0\to 0`$ or $`1`$, and $`1\to 0`$, in the elements of $`X`$ thought of as infinitely long journeys in a graph with vertices $`0`$ and $`1`$). ###### Lemma 2. If $`b=3a`$ in (5), then the corresponding sequence is exactly realizable. ###### Proof. A simple induction argument shows that (7) reduces to $`\mathrm{Per}_n(f)=L_n`$ for $`n\ge 1`$, so the case $`a=1`$ is realized using the golden mean shift itself. For the general case, let $`\overline{X}=X\times B`$, where $`B`$ is a set with $`a`$ elements, and define $`\overline{f}:\overline{X}\to\overline{X}`$ by $`\overline{f}(\mathbf{x},y)=(f(\mathbf{x}),y)`$. Then $`\mathrm{Per}_n(\overline{f})=a\times \mathrm{Per}_n(f)`$, so we are done. ∎ The relation (4) must as a result hold for $`(L_n)`$. ###### Corollary 3. $`\sum_{d|n}\mu (n/d)L_d\equiv 0\text{ mod }n`$ for all $`n\ge 1`$. This has many consequences, a sample of which we list here. Many of these are of course well-known (see \[3, Section 2.IV\]) or follow easily from well-known congruences. (a) Taking $`n=p`$ gives
$$L_p=F_{p-2}+3F_{p-1}\equiv 1\text{ mod }p.$$ (8)
(b) It follows from (a) that
$$F_{p-1}\equiv 1\text{ mod }p\iff F_{p-2}\equiv -2\text{ mod }p,$$ (9)
which will be used below. (c) Taking $`n=p^k`$ gives
$$L_{p^k}\equiv L_{p^{k-1}}\text{ mod }p^k$$ (10)
for all primes $`p`$ and $`k\ge 1`$. (d) Taking $`n=pq`$ (a product of distinct primes) gives
$$L_{pq}+1\equiv L_p+L_q\text{ mod }pq.$$
## 3. Obstruction
The negative part of Theorem 1 is proved as follows. Using some simple modular results on the Fibonacci numbers, we show that if the sequence $`\left(U_n\right)`$ defined by (5) is exactly realizable, then the property (4) forces the congruence $`b\equiv 3a`$ mod $`p`$ to hold for infinitely many primes $`p`$, so $`(U_n)`$ is a multiple of $`(L_n)`$. ###### Lemma 4. For any prime $`p`$, $`F_{p-1}\equiv 1`$ mod $`p`$ if $`p=5m\pm 2`$. ###### Proof. From Hardy and Wright \[1, Theorem 180\], we have that $`F_{p+1}\equiv 0`$ mod $`p`$ if $`p=5m\pm 2`$. The identity $`F_{p+1}=2F_{p-1}+F_{p-2}\equiv 0`$ mod $`p`$ together with (8) implies that $`F_{p-1}\equiv 1`$ mod $`p`$. ∎ Assume now that the sequence $`\left(U_n\right)`$ defined by (5) is exactly realizable. Applying (4) for $`n`$ a prime $`p`$ shows that
$$U_p-U_1\equiv 0\text{ mod }p,$$
so by (6)
$$aF_{p-2}+bF_{p-1}\equiv a\text{ mod }p.$$
If $`p`$ is $`2`$ or $`3`$ mod $`5`$, Lemma 4 then implies that
$$\left(F_{p-2}-1\right)a+b\equiv 0\text{ mod }p.$$ (11)
On the other hand, for such $`p`$, (9) implies that $`F_{p-2}\equiv -2`$ mod $`p`$, so (11) gives
$$b\equiv 3a\text{ mod }p.$$
By Dirichlet’s theorem (or simpler arguments) there are infinitely many primes $`p`$ with $`p`$ equal to $`2`$ or $`3`$ mod $`5`$, so $`b\equiv 3a`$ mod $`p`$ for arbitrarily large values of $`p`$. We deduce that $`b=3a`$, as required.
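Both Corollary 3 and the obstruction just proved are easy to spot-check numerically. The short script below is ours; it assumes the mobius, lper_from_per and exactly_realizable helpers from the sketch in Section 1 are in scope:

```python
def lucas(n):
    """Lucas numbers: L_1 = 1, L_2 = 3, L_{n+2} = L_{n+1} + L_n."""
    a, b = 2, 1          # L_0 = 2, L_1 = 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_like(n, u1, u2):
    """U_n for the recurrence (5) with U_1 = u1, U_2 = u2."""
    x, y = u1, u2
    for _ in range(n - 1):
        x, y = y, x + y
    return x

# Corollary 3: sum_{d|n} mu(n/d) L_d is non-negative and divisible by n.
assert exactly_realizable([lucas(n) for n in range(1, 301)])

# Obstruction: for a = b = 1 (the Fibonacci sequence itself) the
# criterion already fails at the prime p = 3, since
# U_3 - U_1 = b = 1 is not divisible by 3.
assert not exactly_realizable([fib_like(n, 1, 1) for n in range(1, 4)])
```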
## 4. Remarks
(a) Notice that the example of the golden mean shift plays a vital role here. If it were not to hand, exhibiting a dynamical system with the required properties would require proving Corollary 3, and a priori we have no way of guessing or proving this congruence without using the dynamical system. (b) The congruence (8) gives a different proof that $`F_{p-1}\equiv 0\text{ or }1`$ mod $`p`$ for $`p\ne 2,5`$. If $`F_{p-1}\equiv\alpha`$ mod $`p`$, then (8) shows that $`F_{p-2}\equiv 1-3\alpha`$ mod $`p`$, so $`F_p\equiv 1-2\alpha`$ mod $`p`$. On the other hand, the recurrence relation gives the well-known equality
$$F_{p-2}F_p=F_{p-1}^2+1$$
(since $`p`$ is odd), so $`1-5\alpha +6\alpha ^2\equiv\alpha ^2+1`$, hence $`5(\alpha ^2-\alpha )\equiv 0`$ mod $`p`$. Since $`p\ne 5`$, this requires that $`\alpha ^2\equiv\alpha`$ mod $`p`$, so $`\alpha\equiv 0\text{ or }1`$. (c) The general picture of conditions on linear recurrence sequences that allow exact realization is not clear, but a simple first step in the Fibonacci spirit is the following question. For each $`k\ge 1`$ define a recurrence sequence $`(U_n^{(k)})`$ by
$$U_{n+k}^{(k)}=U_{n+k-1}^{(k)}+U_{n+k-2}^{(k)}+\cdots+U_n^{(k)}$$
with specified initial conditions $`U_j^{(k)}=a_j`$ for $`1\le j\le k`$. The subshift of finite type associated to the $`0`$-$`1`$ $`k\times k`$ matrix
$$A^{(k)}=\left[\begin{array}{cccccc}1&1&1&\cdots&1&1\\ 1&0&0&\cdots&0&0\\ 0&1&0&\cdots&0&0\\ &&&\ddots&&\\ 0&0&\cdots&1&0&0\\ 0&0&\cdots&0&1&0\end{array}\right]$$
shows that the sequence $`(U_n^{(k)})`$ is exactly realizable if $`a_j=2^j-1`$ for $`1\le j\le k`$ (a numerical spot-check is sketched below). If the sequence is exactly realizable, does it follow that $`a_j=C(2^j-1)`$ for $`1\le j\le k`$ and some constant $`C`$? The special case $`k=1`$ is trivial, and $`k=2`$ is the argument above. Just as in Corollary 3, an infinite family of congruences follows for each of these multiple Fibonacci sequences from the existence of the exact realization.
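The realizability claim in remark (c) for $`a_j=2^j-1`$ rests on the identity $`\mathrm{Per}_n=\mathrm{trace}((A^{(k)})^n)`$; this is easy to spot-check with exact integer arithmetic. The following sketch of ours is a check for small $`k`$ and $`n`$, not a proof:

```python
import numpy as np

def companion(k):
    """The 0-1 matrix A^(k): first row all ones, ones on the subdiagonal."""
    A = np.zeros((k, k), dtype=object)   # object dtype keeps exact ints
    A[0, :] = 1
    for i in range(1, k):
        A[i, i - 1] = 1
    return A

def kbonacci(k, N):
    """U^(k)_n with a_j = 2^j - 1: each term is the sum of the k before."""
    u = [2**j - 1 for j in range(1, k + 1)]
    while len(u) < N:
        u.append(sum(u[-k:]))
    return u[:N]

for k in (2, 3, 4, 5):
    A, U = companion(k), kbonacci(k, 12)
    M = A.copy()
    for n in range(1, 13):
        assert np.trace(M) == U[n - 1]   # Per_n(shift) = trace(A^n) = U_n
        M = M.dot(A)
```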
# Baryonic contributions to $`e^+e^-`$ yields in a hydrodynamic model of Pb+Au collisions at the SPS
## 1 Theoretical Approach
The observation of a low-mass dielectron excess over conventional sources by the CERES collaboration has spurred considerable theoretical activity. In addition to the microscopic production rates, confrontation of theory with data requires a description of the space-time evolution of the produced matter. We have used a one-fluid hydrodynamical description, which is constrained to reproduce the measured hadron spectra. The initial state of the hydrodynamic evolution, which is assumed to be sufficiently thermalized, is parametrized to reproduce both the baryonic and mesonic components. The adiabatic expansion is then governed by the laws of hydrodynamics (including baryon number and strangeness conservation) and the input equation of state (EOS). In the calculations reported here, the EOS admits a phase transition to the quark-gluon plasma at a critical temperature $`T_c=200`$ MeV. The hadronic part of the EOS includes particles and resonances up to 2 GeV. The system is assumed to maintain both local thermal and chemical equilibrium until freeze-out. The freeze-out criterion employed, energy density $`ϵ_f=0.15`$ GeV/fm<sup>3</sup>, corresponds to an average freeze-out temperature of $`T_f=140`$ MeV. Admissible changes in both $`T_c`$ and $`T_f`$ do not affect our conclusions. We use $`e^+e^-`$ production rates from the calculations of Steele, Yamagishi and Zahed (hereafter SYZ) and Rapp, Urban, Buballa and Wambach (hereafter RUBW). SYZ use experimentally extracted spectral functions and on-shell chiral reduction formulas coupled with a virial expansion scheme. For baryon number density $`n_b=0`$, these rates reproduce those of Gale and Lichard. The RUBW rates are based on a many-body approach, in which phenomenological interactions are used to calculate the $`\rho `$-meson spectral function in matter containing baryons. These two sets of rates provide a contrast both in their input physics and in the theoretical techniques employed. They also differ significantly in their absolute magnitudes (see Fig. 2). The distinguishing features of the SYZ rates are: (1) enhancements relative to the baryon-free case are of order 2-3 and are restricted to $`M_{e^+e^-}<500`$ MeV; (2) the prominent signature at the $`\rho `$-meson vacuum mass persists at nearly all values of $`T`$ and $`n_b`$. In contrast to SYZ, the two most striking features of the RUBW results are: (1) rates are significantly larger than those of SYZ in the range $`200<M_{e^+e^-}/\mathrm{MeV}<600`$; (2) the tell-tale signature at the $`\rho `$-meson vacuum mass is weakened, predominantly with increasing $`n_b`$.
## 2 Results
In Fig. 2, we show the yields of dielectrons radiated during the lifetime of the fireball. These results are folded with the cuts and resolution of CERES. Our calculation is tuned to reproduce the hadronic results of NA49 and yields an average multiplicity of $`dN_{ch}/d\eta \approx 330`$ within the CERES acceptance region. The CERES collaboration finds both the shape of the spectrum and the yield scaled with multiplicity to vary with multiplicity. We have therefore opted to compare our results with the preliminary data from nearly central collisions with $`dN_{ch}/d\eta =350`$. The contribution from the quark-gluon plasma is about an order of magnitude below the hadronic contributions.
The SYZ rates with baryons are about a factor of 2 larger than those without baryons, but mostly below $`M_{e^+e^-}=400`$ MeV. This translates to an enhancement of about a factor of two or less relative to the baryon-free case. The larger rates of RUBW result in enhancements of the thermal yield by a factor of about three relative to those in mesons-only matter, even in the range $`M_{e^+e^-}=300-600`$ MeV. In addition to thermal pairs, the measured yield contains contributions from meson decays after freeze-out. This background was calculated from the distributions and abundances of hadrons at freeze-out given by our calculations. The only exception is the $`\varphi `$-yield, which is suppressed by a factor 0.6 to achieve consistency with the data. The resulting background is in agreement with the background estimated by the CERES collaboration. The total yield (the sum of the thermal and background contributions) is presented in Fig. 3(a). Results for the SYZ rates with and without baryons are virtually indistinguishable from each other, despite significant enhancements in the microscopic rates for $`M_{e^+e^-}<400`$ MeV. In this mass region, the Dalitz decay backgrounds are an order of magnitude larger than the thermal yields and entirely mask the baryonic contributions. The RUBW microscopic rates, being larger than those of SYZ in the region below the $`\rho `$-mass, lead to total yields that are somewhat distinguishable from the case without baryons, but lie below the data by roughly a factor of two. It is instructive to bin the calculated results to contrast with the bins used by the CERES collaboration (see Fig. 3(b)). The calculated spectra fall below the data at only two points. At $`M=360`$ MeV, the results are close to the experimental lower limit. At $`M=570`$ MeV, the difference between the data and the calculated result is only about 1.5 standard deviations for the SYZ rates with and without baryonic contributions, whereas the use of the RUBW rates leads to a yield which is 1.3 standard deviations below the data. Given the uncertainties, these differences may not be statistically significant. We thus conclude that thermal production of electron pairs may well be large enough to account for the observed enhancement.
## 3 Summary
We have calculated $`e^+e^-`$ emission in Pb+Au collisions at 158 AGeV/c using two different dielectron production rates within the framework of hydrodynamics. The rates calculated by SYZ include baryonic contributions arising from pion-nucleon interactions, and those of RUBW account for additional in-medium modifications, which lead to a substantial broadening of the $`\rho `$-meson spectral function. We found that the additional baryonic contributions in the rates of SYZ are modest, and arise mainly at low values of invariant mass, where the spectrum is dominated by background decays. The final dielectron spectra with and without baryonic contributions are thus almost identical. On the other hand, the larger $`\rho `$-width in the rates of RUBW leads to comparatively larger yields in the 300-600 MeV mass region. In all cases, the calculated results are below the data, but the differences are not large enough to indicate statistical significance. The yield of thermal dielectrons appears to be large enough to explain the preliminary data. We thank A. Drees and I. Tserruya for helping us to put the CERES data in perspective.
# X-ray properties of LINERs
## 1 Introduction
Low-ionization nuclear emission line regions, LINERs, are characterized by their optical emission line spectrum, which shows a lower degree of ionization than that of Seyfert galaxies (e.g., Heckman et al. 1980). Their major power source and line excitation mechanism have been a subject of lively debate ever since their discovery (for reviews see, e.g., Filippenko 1989, 1993, Ho 1998). LINERs manifest the most common type of activity in the local universe. If powered by accretion, they probably represent the low-luminosity end of the quasar phenomenon, and their presence has relevance to, e.g., the evolution of quasars, the faint end of the AGN luminosity function, and the presence of supermassive black holes (SMBHs) in nearby galaxies. A detailed study of the LINER phenomenon is thus very important. Many different mechanisms that might account for their optical emission line spectra have been examined, including collisional ionization and excitation (Burbidge & Burbidge 1962), shock heating (e.g., Fosbury et al. 1978, Heckman 1980, Dopita et al. 1996, Contini 1997), photoionization by hot stars (e.g., Shields 1992, Ho et al. 1993), photoionization by a non-stellar continuum source (e.g., Ferland & Netzer 1983, Halpern & Steiner 1983, Binette 1984, 1985, 1986, Ho et al. 1993), and photoionization by an absorption-diluted AGN continuum (Halpern & Steiner 1983, Schulz & Fritsch 1994). Despite this detailed shock and photoionization modelling, the nature of the main ionizing source of LINERs remained elusive, although there is now growing evidence that they are accretion-powered (e.g., Falcke et al. 1997, Falcke 1998, Ho 1998). Eracleous et al. (1995), in an effort to explain the UV-bright centers detected in some but not all LINERs, suggested a duty-cycle model in which central activity in LINERs is governed by occasional tidal disruptions of stars by central black holes. In an alternative approach, Barth et al. (1998) suggested that dust extinction could cause the UV darkness of some LINERs. X-rays are a powerful tool to investigate the presence of an AGN via X-ray variability, luminosity, and extent, and to explore the physical properties of LINERs in general. Nevertheless, not many LINERs have been examined in X-rays, and particularly not larger samples in a homogeneous way. The largest previous study we are aware of was presented by Ptak et al. (1999, see also Serlemitsos et al. 1997) and consisted of several low-luminosity AGN (LLAGN), including 5 LINERs observed with ASCA. They find that the ASCA spectra are best described by a two-component model consisting of soft thermal emission and a powerlaw with photon index $`\mathrm{\Gamma }_\mathrm{x}`$ ≈ –1.7, with the relative contributions of the two spectral components varying from object to object. Several studies of individual objects (e.g., Mushotzky 1982, Koratkar et al. 1995, Ehle et al. 1995, Cui et al. 1997, Terashima et al. 1998, Pietsch et al. 1998, Roberts et al. 1999) yielded results consistent with these spectral properties. A fairly complex X-ray spectrum was recently reported for the LINER NGC 1052 (Weaver et al. 1999). Concerning the spatial extent of the X-ray emission, Koratkar et al. (1995; K95 hereafter) found their two LINERs to be consistent with a point source within the limits of the ROSAT HRI resolution. X-ray luminosities range up to ∼10<sup>41</sup> erg s<sup>-1</sup> (e.g., Ptak et al. 1999, Stockdale et al.
1998, K95; see Halpern & Steiner 1983 for a collection of Einstein luminosities) and are presently biased towards the X-ray brightest objects. Given the importance of better understanding the LINER phenomenon and activity in nearby galaxies in general, with its potential bearing on the evolution of SMBHs in galaxies, the contribution to the faint end of the AGN luminosity function, and the soft X-ray background, and given the still limited number of objects previously studied in the X-ray spectral region, we examined a sample of 13 LINERs with the ROSAT (Trümper 1983) instruments (Pfeffermann et al. 1987). We report here the results of an investigation of the spectral, spatial, and temporal X-ray properties of these galaxies. Our sample consists of spiral galaxies and lenticulars. The primary selection was according to the list of Huchra & Burg (1992, with most LINERs identified in Heckman 1980, Stauffer 1982, and Keel 1983) and Huchra (1998, priv. comm.). We then excluded LINERs that were, on the basis of emission lines, re-classified as Seyferts or listed as also containing a Seyfert component according to the NED database. This resulted in 13 remaining LINERs. Results for a larger sample of LLAGN, including the objects with composite spectra and Seyfert 2 galaxies, will be reported elsewhere. For the selected sources we analyzed ROSAT all-sky survey data. In addition, 8 of the galaxies were targets of, or serendipitously located in the field of view of, PSPC and/or HRI observations. All except one are detected, and for 5 of them a PSPC spectral analysis was possible. The brightest source turned out to be NGC 4450, which is studied in most detail below. The paper is organized as follows: The data reduction is described in Sect. 2. In the next two Sections we present the general assumptions on which the data analysis is based (Sect. 3) and results for the individual objects (Sect. 4), including a discussion of (the reality of) further X-ray sources close to the target sources. The discussion (Sect. 5) is followed by the concluding summary in Sect. 6.

## 2 Data reduction

We used ROSAT all-sky survey (RASS) data as well as archival and serendipitous pointed observations of the galaxies. The observations are summarized in Table 1. For further analysis of the pointed PSPC and HRI data, the source photons were extracted within a circular cell centered on the target source. The background was determined in a source-free region around the target source and subtracted. The data were corrected for vignetting and dead-time, using the EXSAS software package (Zimmermann et al. 1994). To carry out the spectral analysis, source photons in the amplitude channels 11–240 were binned according to a constant signal/noise ratio of $`>`$ 4$`\sigma `$. Lightcurves were created with a binning of 800 sec to account for the wobble mode of the satellite.

## 3 Data analysis: General considerations and assumptions

The following models were fit to the X-ray spectra of all objects: (i) a powerlaw of the form $`\mathrm{\Phi }\propto E^{\mathrm{\Gamma }_\mathrm{x}}`$, and (ii) emission from a Raymond-Smith (1977; RS hereafter) plasma, with abundances fixed either to the solar value (Anders & Grevesse 1989) or to subsolar values, down to 1/100 $`\times `$ solar. The amount of cold absorption was constrained not to underpredict the Galactic value (Dickey & Lockman 1990) along the line-of-sight.
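To make the flux and luminosity bookkeeping used below concrete, the following minimal sketch integrates an unabsorbed powerlaw photon spectrum over the ROSAT band and converts the flux to a luminosity. It is only a schematic illustration: the normalization `K` and the 10 Mpc distance are hypothetical placeholders, and the photoelectric absorption correction is not modeled.

```python
import numpy as np

# Schematic only: K (photons cm^-2 s^-1 keV^-1 at 1 keV) and the 10 Mpc
# distance are hypothetical placeholders; Gamma_x follows the paper's
# sign convention Phi ~ E^{Gamma_x}.
K = 1.0e-4
GAMMA_X = -1.9
D_CM = 10.0 * 3.086e24          # 10 Mpc in cm

def band_flux(norm, gamma_x, e_lo=0.1, e_hi=2.4, n=2000):
    """Energy flux (erg cm^-2 s^-1) of an unabsorbed powerlaw photon
    spectrum, integrated over the e_lo--e_hi keV band."""
    energy = np.logspace(np.log10(e_lo), np.log10(e_hi), n)   # keV
    photon_spec = norm * energy**gamma_x                      # photons/cm^2/s/keV
    kev_to_erg = 1.602e-9
    return np.trapz(energy * photon_spec, energy) * kev_to_erg

f_x = band_flux(K, GAMMA_X)
L_x = 4.0 * np.pi * D_CM**2 * f_x
print(f"f_x = {f_x:.2e} erg/cm^2/s, log L_x = {np.log10(L_x):.2f}")
```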
If several PSPC observations were available, we used the one with the deepest exposure (for the present sample, this also always happened to correspond to the pointing where the source was on-axis, if an on-axis pointing existed at all). To calculate X-ray fluxes and luminosities, we proceeded as follows: For sources bright enough to allow spectral fits, we integrated over the SED in the (0.1–2.4 keV) band after correcting for cold absorption. For RASS sources too weak to perform spectral fits, and for HRI sources, we assumed a powerlaw spectrum with $`\mathrm{\Gamma }_\mathrm{x}=1.9`$ and absorption of the Galactic value. Distances were calculated using a Hubble constant of $`H_0`$=75 km/s/Mpc for the distant ($`v>3000`$ km/s) objects. For those nearby or even blueshifted we used Tully’s (1988) catalog of nearby galaxies (if not stated otherwise), which is based on the virgocentric model of Tully & Shaya (1984). To check the influence of the assumed distances, which can be important given that all galaxies are very nearby, we re-calculated all luminosities based on distances obtained with the flow field model of Mould et al. (2000; ApJ, in prep.). We find that our conclusions are unaltered and that luminosities of individual objects are changed by a factor $`\lesssim 2`$. If not stated otherwise, X-ray luminosities given below refer to the energy interval (0.1–2.4) keV. Some of the RASS sources were not significantly detected: formally, only 1, 3, 0, 5, 1, 2 source photons above the background were registered in the (0.5–2) keV band for NGC 404, NGC 1167, NGC 4419, NGC 5675, NGC 5851, and IC 1481, respectively. In these cases we conservatively assumed that $`<`$10 source photons would have escaped detection, and the upper limits for countrates and luminosities listed in Tables 1, 2 were calculated correspondingly. To derive blue luminosities, we used the observed blue magnitudes of de Vaucouleurs et al. (1991; see also Huchra & Burg 1992). To carry out the extinction correction we adopted the same amount of Galactic absorption (Dickey & Lockman 1990) as in the X-ray analysis, assumed a standard gas/dust ratio, and utilized the relation of Bohlin et al. (1978; see also Predehl & Schmitt 1995).

## 4 Notes on individual objects

Below, we first give a brief summary of what is known about the individual galaxies (only the detected ones plus NGC 1167) from the literature, and then describe the results of our X-ray temporal, spectral and spatial analysis of the individual objects. A detailed literature search revealed that some of the present galaxies were already briefly discussed in previous samples compiled with different aims. Given the inhomogeneity of the assumptions made and models fit (for details see below), we extend here the spectral analysis of these objects and also perform a spatial and temporal analysis.

### 4.1 NGC 404

NGC 404 is blueshifted (Stromberg 1925). Baars & Wendker (1976) noted its peculiar radio properties. Optical spectroscopy was performed by, e.g., Burbidge & Burbidge (1965), Keel (1983), and Filippenko & Sargent (1985) and revealed very narrow emission lines; for an image see Sandage (1961). Larkin et al. (1998) obtained NIR spectra, reported the detection of strong \[FeII\] emission in this and several further LINERs (but not in all of their sample), and suggested X-ray heating to be at work. A molecular gas ring was observed by Wiklind & Henkel (1990). The detection of a UV core with HST was presented by Maoz et al. (1995).
Based on the analysis of UV spectra, Maoz et al. (1998) explained the data by the presence of a central star cluster. A deep HRI observation of NGC 404 is available. The source is detected ($`\sim `$25 source photons) but too weak to allow a more detailed temporal or spatial analysis. Assuming a powerlaw spectral shape as described above, we derive a (0.1–2.4 keV) luminosity of $`L_\mathrm{x}=10^{37.7}`$ erg s<sup>-1</sup>, the lowest $`L_\mathrm{x}`$ among the present objects, and among the lowest so far detected for a LINER. The X-ray emission of NGC 404 is consistent with originating completely from discrete stellar sources, given the galaxy’s blue luminosity. Using the relation between $`L_\mathrm{x}`$ and $`L_\mathrm{B}`$ of Canizares et al. (1987), we predict $`L_\mathrm{x}^{0.5-4.5\mathrm{keV}}=10^{38.25}`$ erg s<sup>-1</sup> in the (0.5–4.5 keV) band; the observed value of $`10^{37.6}`$ erg s<sup>-1</sup> is below this expectation but consistent within the scatter. (The intrinsic X-ray luminosity could be boosted if there is some excess absorption or the spectral shape is different from the assumed one.) It is interesting to note that Wiklind & Henkel (1990) argue for a much larger distance of NGC 404 than derived from, e.g., the Tully catalog (1988): they suggest 10 Mpc instead of 2.4 Mpc, which would correspondingly increase the values of both $`L_\mathrm{x}`$ and $`L_\mathrm{B}`$.

### 4.2 NGC 1167

The galaxy is a well-known radio source (4C +34.09) and has been intensively studied at radio wavelengths (e.g., Long et al. 1966, Condon & Dressel 1978, Bridle & Fomalont 1978, Sanghera et al. 1995). Optical spectra were presented by, e.g., Wills (1967), Wills & Wills (1976), and Gelderman & Whittle (1994). Despite earlier suspicions, Ho et al. (1997; H97 hereafter) did not detect a broad component in H$`\alpha `$. An upper limit for the X-ray luminosity derived from Einstein observations, $`L_\mathrm{x}^{0.5-3.5\mathrm{keV}}<5\times 10^{41}`$ erg s<sup>-1</sup>, was reported by Dressel & Wilson (1985; see also Canizares et al. 1987, Fabbiano et al. 1992). The source is undetected in the PSPC pointing, which might partly be traced back to the large $`N_\mathrm{H}`$ value in its direction, $`N_{\mathrm{gal}}=13.3\times 10^{20}`$ cm<sup>-2</sup>. We estimate an upper limit for the countrate of 0.005 cts/s, from the conservative assumption of a countrate less than that of the weakest detected source in the field of view at a similar off-axis angle.

### 4.3 NGC 2768

No broad H$`\alpha `$ component was detected by H97. Weak CO emission was found by Wiklind et al. (1995). The source is included in a sample of galaxies by Davis & White (1996), who fit a Raymond-Smith model and find $`kT\approx 3`$ keV for metal abundances 0.2 $`\times `$ solar, and absorption of $`N_\mathrm{H}=1.9\times 10^{20}`$ cm<sup>-2</sup>, less than the Galactic value. The source seems to be slightly variable from the first to the second pointing, with a drop in countrate from 0.021$`\pm `$0.002 cts s<sup>-1</sup> to 0.013$`\pm `$0.003 cts s<sup>-1</sup>. The short-term light curve (first pointing) shows constant source flux. Spectral fits were performed for the deeper PSPC observation only. Neither a single powerlaw with $`N_\mathrm{H}=N_{\mathrm{gal}}`$ nor emission from a Raymond-Smith plasma with solar abundances provides a successful X-ray spectral fit. The fit becomes acceptable for very subsolar abundances.
In that case we find a lower temperature than Davis & White (1996) (and cold absorption consistent with the Galactic value, which should not be underpredicted). This $`T`$ is also more consistent with the $`kT\sigma `$ relation of Davis & White. A comparison of the source’s spatial extent with the theoretical point spread function (PSF) of a point source shows that most of the X-ray emission is consistent with arising from a point source (Fig. 3). There may be some extended emission at weak levels (Fig. 2). A nearby weak second source is detected with a countrate of 0.003$`\pm `$0.001 cts/s. It coincides with a stellar object on a POSS plate.

### 4.4 NGC 3642

Some X-ray properties of this LINER were earlier examined by K95, who fit a powerlaw model to the ROSAT PSPC spectrum and found the HRI source extent to be consistent with a point source. A broad component in H$`\alpha `$ is present (e.g., K95). Barth et al. (1998) reported the detection of a compact nuclear UV source based on HST data, and conclude that the extrapolation of the UV continuum, assuming an AGN-like shape, would provide enough ionizing photons to power the NLR emission of this galaxy. We detect neither short-time variability nor variability between the two PSPC observations separated by 5 months. The spectrum is best fit by a Raymond-Smith model of heavily depleted abundances, around 0.03 $`\times `$ solar (or, alternatively, by a powerlaw model with some excess absorption, confirming K95). A comparison with the PSF of the PSPC shows that most of the X-ray emission arises from an unresolved source (Fig. 3). There is a second source nearby with a countrate of 0.0025$`\pm `$0.0007 cts/s. Its position falls close to two star-like knots projected onto (or in) one of the spiral arms of NGC 3642 (they could be HII regions or foreground stars). If the X-ray source is intrinsic to NGC 3642, its luminosity of $`L_\mathrm{x}=1.8\times 10^{39}`$ erg s<sup>-1</sup>, assuming a powerlaw spectral shape as described in Sect. 3, is fairly high. For instance, it exceeds the Eddington luminosity of a solar-mass black hole by a factor $`\sim `$10. One possible interpretation is a powerful X-ray binary with either a super-Eddington low-mass black hole or a massive black hole. We note in passing that no optical supernova was detected in NGC 3642.

### 4.5 NGC 3898

There have been several studies of this galaxy in the optical (e.g., Burbidge & Burbidge 1965, Barbon et al. 1978, Mizuno & Hamajima 1986, and references given in van Driel & van Woerden 1994). H97 tentatively concluded that broad H$`\alpha `$ is absent from the optical spectrum. 21 cm HI observations with the WSRT were presented by van Driel & van Woerden (1994). The source is quite weak, with only about 50 detected photons, very close to the limits of meaningful $`\chi ^2`$ spectral fits. Therefore, we only applied a powerlaw model with fixed Galactic absorption. This results in $`\mathrm{\Gamma }_\mathrm{x}=2.1`$ and gives an acceptable fit.

### 4.6 NGC 4450

A fairly weak broad H$`\alpha `$ line is probably present in the optical spectrum (Stauffer 1982, H97). For an optical image see, e.g., Sandage (1961). The HII region population of the galaxy was studied by Gonzalez Delgado et al. (1997). An Einstein IPC image is shown in Fabbiano et al. (1992). They derive a (0.2–4 keV) X-ray flux $`f_\mathrm{x}=11.5\times 10^{13}`$ erg cm<sup>-2</sup> s<sup>-1</sup> under the assumption of a thermal bremsstrahlung spectrum with $`kT`$=5 keV.
Again, we do not detect short-timescale variability. The source is quite bright, and nearly 2000 photons are available for the spectral analysis (we used the deepest pointing). No Raymond-Smith fit is possible. When $`N_\mathrm{H}`$ is allowed to be free, it underpredicts the Galactic value. If subsolar abundances are allowed, the best fit requires abundances less than 1/100 solar, and that fit is still unsatisfactory. In contrast, a single powerlaw with $`\mathrm{\Gamma }_\mathrm{x}`$ = –2.0, near the AGN-canonical value (e.g., Pounds et al. 1994; Svensson et al. 1994), gives an excellent fit. If $`N_\mathrm{H}`$ is treated as a free parameter, the Galactic value is recovered. We derive a soft X-ray luminosity of $`L_\mathrm{x}=10^{40.8}`$ erg s<sup>-1</sup>, the highest value found among the present galaxies. The corresponding (0.5–4.5 keV) X-ray luminosity is $`L_\mathrm{x}^{0.5-4.5\mathrm{keV}}=10^{40.6}`$ erg s<sup>-1</sup>, a factor $`\sim `$5 above the value expected from the stellar contribution using the relation of Canizares et al. (1987). We note that Tully’s catalog places NGC 4450 at the distance of the Virgo cluster. If the galaxy is instead located in the sheet of galaxies behind the Virgo cluster, the luminosities inferred above increase correspondingly. A comparison with the PSF of the PSPC shows that most of the X-ray emission is consistent with arising from a point source. At weak emission levels there is evidence for source extent (Fig. 3; several of the structures are seen in both the soft (0.1–0.5 keV) and hard (0.5–2.4 keV) bands). Again, there is a nearby second source. Its countrate is 0.011$`\pm `$0.002 cts/s, and since the pointing is deep, a spectral analysis is possible. A powerlaw spectral fit gives a spectrum similar to NGC 4450 itself, but a bit softer, with $`\mathrm{\Gamma }_\mathrm{x}=2.4`$. At the distance of NGC 4450 this corresponds to a luminosity of $`L_\mathrm{x}=7\times 10^{39}`$ erg s<sup>-1</sup>. The source is also present in the second PSPC pointing, of slightly lower exposure time (Table 1). Its countrate is constant. Inspection of the POSS plates does not reveal any optical counterpart. Neither is there any X-ray source visible in the Einstein IPC image (see Fig. 7 of Fabbiano et al. 1992). The ‘reality’ of these nearby sources is examined in Section 4.9.

### 4.7 NGC 5371

The galaxy was classified as a LINER by Rush et al. (1993). Elfhag et al. (1996) presented CO measurements and suggested NGC 5371 to be a good candidate for a post-starburst galaxy (Koornneef 1993). The rotation curve was measured by, e.g., Zasov & Sil’chenko (1987). Gonzalez Delgado et al. (1997) studied the HII region population. The X-ray lightcurve does not show short-timescale variability (the countrate in individual bins occasionally falls slightly outside the 1$`\sigma `$ error, but this is most likely due to the closeness of the source to the PSPC support grid structure). The source appears to be extended or double. Thus, photons from the total emission region were first extracted for analysis. A spectrum was fit to this ‘double’ source (since their contributions cannot be safely disentangled from each other; it is these results that are listed in Table 2). In that case, a powerlaw of index $`\mathrm{\Gamma }_\mathrm{x}\approx -1.6`$ yields a successful X-ray spectral fit. The cold absorption, if treated as a free parameter, underpredicts the Galactic value, and even more so if a RS model is applied.
The latter type of model only provides an acceptable fit if the metal abundances are depleted below 0.01 $`\times `$ solar. Secondly, since the optical position of NGC 5371 falls on the northern of the two sources, source photons centered on the optical position of the galaxy were extracted within a circular region of diameter 250″. In this case, the X-ray spectrum is dominated by the northern source, but the second one contributes to some extent. The spectral analysis then yields a best fit in terms of a powerlaw with $`\mathrm{\Gamma }_\mathrm{x}\approx -1.96`$ ($`\chi _{\mathrm{red}}^2`$=0.5), and $`N_\mathrm{H}`$ recovers the Galactic value if treated as a free parameter. Again, RS emission can only successfully describe the spectrum for heavily depleted abundances. The source appears double, or extended. No HRI observation is available for a more detailed study of the spatial extent.

### 4.8 NGC 6500

There are several lines of evidence for the presence of a nuclear outflow or wind, as judged from radio continuum emission measurements (Unger et al. 1989) and optical emission lines (Gonzalez Delgado & Perez 1996). H97 did not detect broad H$`\alpha `$. Barth et al. (1997), using HST data, concluded that the resolved UV emission of NGC 6500 is likely dominated by massive stars. They also derived a ROSAT HRI flux for this galaxy, assuming a spectrum with $`\mathrm{\Gamma }_\mathrm{x}`$ =–2. Although NGC 6500 is detected in the HRI observation, the low number of source photons prevents a more detailed analysis in terms of source extent or variability.

### 4.9 Origin of nearby sources

In PSPC observations of several galaxies (NGC 2768, NGC 3642, NGC 4450) a second source is detected near the target source, with a countrate always roughly 1/10 of the central source. The same was found for NGC 4736 by Cui et al. (1997), who considered the second source to be real and of transient nature due to its presence in the PSPC and absence in the HRI observation. Given the similar locations relative to the central source, and the same (factor $`\sim `$ 1/10) relative countrates, we suspected these second sources to be an instrumental artifact, namely ghost imaging (Briel et al. 1994). To examine this problem more closely, we selected the second source near NGC 4450, since it is the brightest, thus allowing the most detailed analysis, and since no potential optical counterpart shows up near the X-ray position. We extracted the photons around the X-ray center of the source and made several tests. However, we find no indications of ghost imaging: source photons do not exclusively cover the very soft channels (ghost imaging only operates below $`\sim `$ 0.2 keV; Nousek & Lesser 1993, Briel et al. 1994), and the source is not fixed in detector coordinates but follows the wobble. Using the X-ray $`\mathrm{log}N\mathrm{log}S`$ distribution of Hasinger et al. (1994), we expect only 0.17 sources of X-ray flux greater than or equal to that of the source near NGC 4450 in a region of size 10′ $`\times `$ 10′. For a discussion of an excess of bright X-ray sources around nearby galaxies see Arp (1997, and references therein). Unfortunately, no HRI observation of NGC 4450 is available for further scrutiny. It will certainly be interesting to check for the presence of the second source once further high spatial resolution observations of NGC 4450 become available.
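The expected number of unrelated background sources in a small field follows from integrating the source counts; the sketch below illustrates the arithmetic behind such an estimate, assuming a single powerlaw $`\mathrm{log}N\mathrm{log}S`$ with placeholder normalization and slope (these are not the actual Hasinger et al. 1994 parameters).

```python
# Placeholder source-count law N(>S) = K_DEG2 * (S / S0)**(-ALPHA) per
# square degree; K_DEG2, ALPHA and the fluxes are hypothetical, NOT the
# actual Hasinger et al. (1994) values.
K_DEG2 = 100.0          # sources per deg^2 above the reference flux S0
ALPHA = 1.5             # Euclidean-like slope
S0 = 1.0e-14            # reference flux, erg/cm^2/s

def n_expected(s_lim, area_arcmin2):
    """Expected number of unrelated sources above s_lim in the area."""
    n_per_deg2 = K_DEG2 * (s_lim / S0) ** (-ALPHA)
    return n_per_deg2 * area_arcmin2 / 3600.0   # arcmin^2 -> deg^2

print(n_expected(5.0e-14, 10.0 * 10.0))   # a 10' x 10' field
```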
## 5 Discussion

### 5.1 X-ray luminosity and $`L_\mathrm{x}`$/$`L_\mathrm{B}`$ ratio

The objects examined span a luminosity range of $`L_\mathrm{x}\approx 10^{37.7}-10^{40.8}`$ erg s<sup>-1</sup> in the (0.1–2.4 keV) band. There is still some bias towards selecting the high-$`L_\mathrm{x}`$ LINERs. This does not hold for the ROSAT survey data, but given the short exposure times of typically 400 s, the upper limits, although already meaningful, are not very restrictive concerning the low-luminosity end. None exceeds the limit of $`L_\mathrm{x}\approx 10^{42}`$ erg s<sup>-1</sup> which is usually taken as indicative of the presence of a ‘normal’ AGN (e.g., Wisotzki & Bade 1997). Also, none reaches the high $`L_\mathrm{x}`$ usually observed for ellipticals in the group/cluster environment (e.g., Brown & Bregman 1998, Irwin & Sarazin 1998, Beuing et al. 1999), which occasionally show LINER-like emission lines. Most of the present objects fall in the intermediate $`L_\mathrm{x}`$/$`L_\mathrm{B}`$ range (Fig. 4). Since the majority of LINERs are found in bulge-dominated early-type galaxies (e.g., Ho 1998), the same emission mechanisms might contribute to the observed X-ray luminosity. Among the suggestions for early-type galaxies are accumulated stellar winds, SN heating, and cooling flows (see Pellegrini 1999 for a recent overview). The low-$`L_\mathrm{x}`$/$`L_\mathrm{B}`$ systems are dominated by discrete sources, mainly LMXBs (e.g., Canizares et al. 1987, Irwin & Sarazin 1998, Irwin & Bregman 1999). Among the present sample, this holds best for NGC 404. Further clues on the emission mechanism can be drawn from the observed spectral shapes.

### 5.2 X-ray spectral shapes

The LINERs analyzed here show some spectral variety. Some of them are best described by a powerlaw model of photon index similar to that observed in AGN (e.g., Schartel et al. 1996a,b, 1997); the others are best fit by Raymond-Smith (RS) emission from a plasma of very subsolar abundances (see Table 2). We consider the RS model with heavily depleted gas-phase metal abundances to be quite unrealistic, and, as previously found for other galaxies (ellipticals, AGN; e.g., Matsushita et al. 1997, Buote & Fabian 1998, Komossa & Schulz 1998), we prefer the alternative of a two-component spectrum, consisting of contributions from both a powerlaw (or other hard component) and thermal RS emission from gas of $`\sim `$solar abundances. Such models were not fit, though, due to the quite low number of photons available per spectrum. Other possibilities (than the two-component solution) include a contribution from Fe L emission, not yet fully modeled (but see Buote & Fabian 1998, who excluded the Fe L region from fitting ASCA spectra of early-type galaxies and still find similar results concerning abundances), a multi-temperature distribution of the emitting medium (e.g., Strickland & Stevens 1998), and the possibility that the X-ray emitting gas is far out of collisional-ionization equilibrium (e.g., Breitschwerdt & Schmutzler 1994, 1999, Komossa et al. 1999; see Böhringer 1998 for a recent review). In the case of the two-component interpretation – as indeed observed by ASCA, with its broader X-ray energy range, for several early-type galaxies (e.g., Matsushita et al. 1994, Matsumoto et al. 1997, Buote & Fabian 1998) and some LINERs (e.g., Ptak et al. 1999) – the hard component could be due to stellar sources, namely LMXBs, or a low-luminosity AGN, and the soft component due to emission from hot gas (see previous Section).
In the case of NGC 6500, the X-ray emission might be related to the outflow/wind for which optical and radio evidence was reported (Unger et al. 1989, Gonzalez Delgado & Perez 1996), in analogy to the X-ray emission associated with starburst-driven winds observed in several starburst galaxies (e.g., Heckman et al. 1990, Schulz et al. 1998). It is also interesting to note that in two further cases, NGC 4450 and NGC 2768, the weak extended emission appears roughly perpendicular to the galaxy’s disk and could result from an outflow. We interpret the powerlaw spectral component, in those cases where it dominates the spectrum, as arising most likely from a low-luminosity active nucleus, since the inferred luminosities are above those expected from discrete stellar sources and the powerlaw indices derived are in the range $`\mathrm{\Gamma }_\mathrm{x}\approx -1.6`$ to $`-2.1`$ (Table 2), similar to what is observed for AGNs. However, we cannot exclude a more complex situation where the superposition of several different emission components mimics a single AGN-like powerlaw. For a detailed spatial and spectroscopic disentanglement of the contributing components (thermal RS-like emission, presumably extended, and a pointlike powerlaw-like component) and a better determination of the metal abundances, high spectral resolution observations with future X-ray observatories like XMM, AXAF or Spectrum-X-Gamma will be very useful.

### 5.3 X-ray variability

We do not find evidence for X-ray variability on the timescale of hours/days. This also holds for the LINERs examined by Ptak et al. (1998) and some further sources, and is not in line with a continuation of the trend of higher variability at lower luminosity seen in AGN. It is consistent with the presence of advection-dominated accretion disks (e.g., Abramowicz et al. 1988, Narayan & Yi 1994) in LINERs, as suggested by Ptak et al. (1998; see also Lasota et al. 1996).

## 6 Summary and conclusions

We have investigated the soft X-ray properties of a number of LINERs based on ROSAT survey and pointed PSPC and HRI observations. Luminosities range between $`\mathrm{log}L_\mathrm{x}=37.7`$ (NGC 404) and 40.8 (NGC 4450). The ratios $`L_\mathrm{x}`$/$`L_\mathrm{B}`$, when compared to early-type galaxies, are located in the intermediate region, and similar emission mechanisms may contribute to the observed X-ray luminosities. Whereas the bulk of the X-ray emission is consistent with arising from a point source, there is some extended emission seen at weak emission levels in some sources. Spectra are best described by either a powerlaw of photon index $`\mathrm{\Gamma }_\mathrm{x}\approx -2`$ (NGC 3898, NGC 4450, NGC 5371), or a Raymond-Smith model with heavily depleted abundances (NGC 2768, NGC 3642). Since the inferred metal abundances are implausibly low, we take the latter model as an indication of a more complex soft X-ray spectral shape/emission mechanism (e.g., a powerlaw or other hard component plus RS emission from gas with $`\sim `$solar abundances; or plasma out of equilibrium). Some sources are best described by a single AGN-like powerlaw with an X-ray luminosity above that expected from discrete stellar sources. These spectra most likely indicate the presence of low-luminosity AGNs in the centers of the LINER galaxies. The absence of (short-timescale) variability is consistent with the earlier suggestion that LINERs may accrete in the advection-dominated mode. Only one source (NGC 2768) seems to be slightly variable, on a timescale of months.
For several LINER galaxies, nearby second X-ray sources are discovered, with countrates roughly one-tenth those of the target sources. We cannot identify an obvious detector effect. If located at the distances of the galaxies, their luminosities are of the order of several $`10^{39}`$ erg s<sup>-1</sup>. Given the spectral variety of LINERs, with contributions from several emission components, future studies of both individual-object spectra and larger samples will certainly give further insight into the LINER phenomenon, which provides an important link between active and ‘normal’ galaxies.

###### Acknowledgements.

H.B. and St.K. thank the Verbundforschung for support under grant No. 50 OR 93065. J.P.H. acknowledges support from the Smithsonian Institution and NASA grant NAGW-201. We thank Andreas Vogler for providing the software to plot the overlay contours of Fig. 2. The ROSAT project has been supported by the German Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie (BMBF/DLR) and the Max-Planck-Society. This research has made use of the NASA/IPAC extragalactic database (NED), which is operated by the Jet Propulsion Laboratory, Caltech, under contract with the National Aeronautics and Space Administration. The optical images shown are based on photographic data of the National Geographic Society – Palomar Observatory Sky Survey (NGS-POSS) obtained using the Oschin telescope on Palomar Mountain.
# Mechanism for Surface Waves in Vibrated Granular Material

Granular material under vibration has been a constant source of intriguing phenomena, among which surface waves are particularly interesting. Particles in a vertically vibrating box display a variety of steady state patterns—stripes, squares, hexagons, and localized excitations called “oscillons”—and even a richer set of transient patterns. The fact that such diverse patterns can exist in this seemingly simple system has attracted a lot of effort toward the study of the mechanism for the formation of the surface waves. It is generally accepted that horizontal movements of particles play an important role in the formation of the surface waves. The present theories can be roughly divided into two groups according to how such movements are generated and maintained. In the first group of theories, it is argued that horizontal movements are generated while particles are colliding with the bottom of a box. The duration of the collisions is very short compared to the period of the vibration. It is also argued that between the collisions with the bottom, horizontal movements gradually decrease while the particles move through the medium. Using these assumptions, many of the patterns observed in the experiments can be reproduced. In the other group of theories, it is argued that horizontal movements depend only on continuum variables such as height and/or density fields. Since these fields change smoothly over time, changes in the movements are also gradual. In particular, changes in the movements during the collisions (with the bottom) are not particularly different from those during other phases of the vibration. These theories can also reproduce many of the experimental patterns, typically by coupling density and height fields. The underlying assumptions of all of these theories sound plausible, and it is very difficult to determine which of the theories describes the experiments better. All of them reproduce many of the experimental patterns, and have their own strong and weak points. Also, one should note that such theories cannot be validated just because they reproduce the observed patterns. Quantitative agreements (e.g., of the dispersion relation) are necessary for validation. In order to check the validity of the theories, it is thus necessary to obtain detailed information about the system, and compare it with the predictions of the theories. Unfortunately, such information is usually difficult to obtain by experiments. In this paper, we use the molecular dynamics (MD) simulation method, which provides detailed information on the motion of individual particles as well as time averaged fields. We study the system in two dimensions. We focus on horizontal movements of particles, which are essential to the formation of the surface waves. We find that the time evolution of the average horizontal speed $`U`$ is made up of two separate processes. First, there are sharp increases in $`U`$ during the short periods in which the pile is colliding with the bottom plate. The other process is gradual decay of $`U`$ between such collisions, which results from interparticle collisions. The time evolution of $`U`$ strongly supports the theories in the first group. The present results do not imply that a continuum description is not possible for the system; they imply that the interpretation of the continuum variables has to be modified. We then study the processes in more detail.
We find that the horizontal velocity field $`V_x(x)`$ after collisions (with the bottom) shows a strong correlation with $`\partial _xh(x)`$ before the collisions, where $`h(x)`$ is the center of mass field. Such a correlation is assumed in the theories of Cerda et al. and of Eggers & Riecke discussed below. The second process can be characterized by the temporal decay of $`U`$. We find that the decay time is very small when there is no surface wave, and that it is comparable to the period of the vibration when surface waves are present. We also study the parameter dependence of the decay time and the maximum horizontal speed. The simulations are done in two dimensions with disk shaped particles, using a form of interaction due to Cundall and Strack. Particles interact only by contact, and the force between two such particles $`i`$ and $`j`$ is the following. Let the coordinate of the center of particle $`i`$ ($`j`$) be $`\stackrel{}{R}_i`$ ($`\stackrel{}{R}_j`$), and $`\stackrel{}{r}=\stackrel{}{R}_i-\stackrel{}{R}_j`$. The normal component $`F_{ji}^n`$ of the force acting on particle $`i`$ from particle $`j`$ is

$$F_{ji}^n=k_n(a_i+a_j-|\stackrel{}{r}|)-\gamma m_e(\stackrel{}{v}\cdot \widehat{n}),$$ (1)

where $`a_i`$ ($`a_j`$) is the radius of particle $`i`$ ($`j`$), $`\widehat{n}=\stackrel{}{r}/r`$, and $`\stackrel{}{v}=d\stackrel{}{r}/dt`$. Here, $`k_n`$ is the elastic constant, $`\gamma `$ the damping coefficient, and $`m_e`$ the effective mass, $`m_im_j/(m_i+m_j)`$. The shear component $`F_{ji}^s`$ is given by

$$F_{ji}^s=\mathrm{sign}(\delta s)\mathrm{min}(k_s|\delta s|,\mu |F_{ji}^n|),$$ (2)

where $`\mu `$ is the friction coefficient, $`\delta s`$ the total shear displacement during a contact, and $`k_s`$ the elastic constant of a virtual tangential spring. The shear force applies a torque to the particles, which then rotate. Particles can also interact with walls. The force and torque on particle $`i`$, in contact with a wall, are given by (1)–(2) with $`a_j=0`$ and $`m_e=m_i`$. Also, the system is in a vertical gravitational field $`\stackrel{}{g}`$. The interaction parameters used in this study are fixed as follows, unless otherwise specified: $`k_n=k_s=10^7`$, $`\gamma =10^4`$ and $`\mu =0.2`$, and the timestep for integration is $`5\times 10^{-7}`$. In order to avoid artifacts of a monodisperse system (e.g., hexagonal packing), we choose the radii of the particles from a Gaussian distribution with mean $`0.1`$ and width $`0.02`$. The density of the particles is $`5`$. Throughout this paper, CGS units are implied. We put particles on a horizontal plate which oscillates sinusoidally along the vertical direction with given amplitude $`A`$ and frequency $`f`$. Let the width of the plate be $`W`$. We apply a periodic boundary condition in the horizontal direction. We start the simulation by inserting the particles at random positions above the plate. We let them fall under gravity and wait while they lose energy through collisions. We wait for $`10^6`$ iterations for the particles to relax, and during this period we keep the plate fixed. The typical velocity at the end of the relaxation is of order $`10^{-2}`$. After the relaxation, we vibrate the plate and start to take measurements. The coefficient of restitution between the particles, $`e_{pp}`$, determined from the above interaction parameters, is $`0.21`$, and the coefficient between the particles and the plate, $`e_{pw}`$, is $`8.0\times 10^{-2}`$. The particles are thus strongly inelastic.
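For illustration, here is a minimal sketch of how the contact law of Eqs. (1)–(2) might be implemented for two disks; it assumes the caller tracks the accumulated shear displacement $`\delta s`$ for each contact, and the torque computation is omitted.

```python
import numpy as np

def contact_force(R_i, R_j, V_i, V_j, a_i, a_j, m_i, m_j, ds,
                  k_n=1.0e7, k_s=1.0e7, gamma=1.0e4, mu=0.2):
    """Normal and shear contact forces on particle i from particle j,
    following Eqs. (1) and (2).  `ds` is the shear displacement
    accumulated over the lifetime of the contact (tracked by the
    caller).  Returns (F_n, F_s, n_hat), or None if the disks do not
    touch."""
    r = np.asarray(R_i) - np.asarray(R_j)
    dist = np.linalg.norm(r)
    overlap = a_i + a_j - dist
    if overlap <= 0.0:
        return None                          # no contact
    n_hat = r / dist
    v = np.asarray(V_i) - np.asarray(V_j)    # v = dr/dt
    m_e = m_i * m_j / (m_i + m_j)            # effective mass
    # Eq. (1): linear spring plus velocity-dependent normal damping
    F_n = k_n * overlap - gamma * m_e * np.dot(v, n_hat)
    # Eq. (2): tangential spring capped by Coulomb friction
    F_s = np.sign(ds) * min(k_s * abs(ds), mu * abs(F_n))
    return F_n, F_s, n_hat
```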
We have studied the motion of a single particle for several values of $`A`$ with $`f=10`$, and find good agreement with the predictions of Mehta and Luck. Also, the present model was shown to reproduce the dispersion relation from experiment. We then study the center of mass motion of the particles. As the depth of the pile increases, its motion can be different from that of a single particle. For parameters for which the motion of a single particle is periodic with period $`T=1/f`$, the motion of the pile can be subharmonic at large depth. Although studying the effect of the subharmonic motion on the surface waves could be interesting, here we limit ourselves to the simpler case where no such motion occurs. The minimum depth at which the subharmonic motion occurs depends on the interaction parameters, and is an increasing function of $`k_n`$. We thus use a rather large value of $`k_n`$ ($`10^7`$), so that subharmonic motion is absent in all cases studied here. As a check, we study the motion of the particles in a narrow tube of $`W=1`$ (five particles wide) for several $`\mathrm{\Gamma }`$ with fixed $`f`$. Here, $`\mathrm{\Gamma }`$ is the dimensionless peak acceleration, $`4\pi ^2Af^2/g`$. We find that the pile behaves like a single particle for depths of up to at least $`10`$ particles. In particular, a bifurcation of the motion occurs at $`\mathrm{\Gamma }\approx 3.7`$, which then terminates at $`\mathrm{\Gamma }\approx 4.4`$; these values agree well with the results for a single inelastic particle. We proceed to study surface waves. We choose $`W=16`$ and $`N=800`$. Thus the system is, on average, $`80`$ particles wide and $`10`$ particles deep. We fix the frequency $`f=10`$, and study the waves for several values of $`\mathrm{\Gamma }`$. We find that $`f/2`$ waves start to appear for $`\mathrm{\Gamma }\approx 2.5`$, then disappear when $`\mathrm{\Gamma }`$ becomes about $`4`$. When $`\mathrm{\Gamma }`$ further increases, $`f/4`$ waves start to appear for $`\mathrm{\Gamma }\approx 5.5`$. The features of this “phase diagram” agree well with those of the experiments. Inspection of the motion of the particles shows that horizontal movements of the particles play a crucial role in maintaining the waves—an observation which was made in many previous studies of the problem. We focus on the horizontal movements, which we characterize by the average horizontal speed $`U(t)`$, defined as

$$U(t)=\frac{1}{N}\sum _{i=1}^{N}|v_x^i(t)|,$$ (3)

where $`v_x^i`$ is the horizontal velocity of particle $`i`$. Even if there are active horizontal movements, the average horizontal velocity can be small, since the movements can occur in both the positive and negative $`x`$ directions. We thus use $`U(t)`$ instead of the average horizontal velocity. We normalize $`U(t)`$ by $`2\pi Af`$, the maximum velocity of the bottom plate. In Fig. 1, we show normalized $`U(t)`$ for $`\mathrm{\Gamma }=2`$ and $`3`$ for $`10`$ vibration periods. Here, $`f=10`$, $`W=16`$, and $`N=800`$. The $`\mathrm{\Gamma }=3`$ curve has been offset for clarity. As shown in the figure, there clearly exist two separate processes: sharp increases in $`U`$ within short time intervals, and gradual decay of $`U`$ during the other phases of the vibration. We also measure the time series of the total pressure on the plate, $`p(t)`$, which consists of sharp peaks occurring when the pile collides with the plate. We find that the locations of the peaks in $`U(t)`$ coincide with those in $`p(t)`$.
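Eq. (3) is straightforward to evaluate from particle data. The sketch below computes the normalized average horizontal speed for one snapshot; the random velocities are hypothetical stand-ins for simulation output, and the plate amplitude is derived from $`\mathrm{\Gamma }=4\pi ^2Af^2/g`$ for $`\mathrm{\Gamma }=3`$, $`f=10`$.

```python
import numpy as np

def mean_horizontal_speed(vx, A, f):
    """Eq. (3): U = (1/N) sum_i |v_x^i|, normalized by the peak
    plate speed 2*pi*A*f."""
    return np.mean(np.abs(vx)) / (2.0 * np.pi * A * f)

g, f, Gamma = 981.0, 10.0, 3.0            # CGS units, as in the paper
A = Gamma * g / (4.0 * np.pi**2 * f**2)   # amplitude from Gamma = 4*pi^2*A*f^2/g

rng = np.random.default_rng(0)
vx = rng.normal(0.0, 5.0, 800)            # hypothetical velocities, N = 800
print(mean_horizontal_speed(vx, A, f))
```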
Thus, the horizontal movements are fed by collisions of the pile with the plate, and are lost while the pile is not in contact with the plate. In order to maintain the surface waves, the particles have to travel a distance $`\lambda `$ within the period of the surface waves, where $`\lambda `$ is their wavelength. When $`U(t)`$ decays slowly, we expect that the particles can travel a long enough distance to maintain the waves. Indeed, we find that the surface waves are present only when $`U(t)`$ decays slowly. To be more precise, the waves are observed when the decay time $`\tau `$, which will be defined later, is comparable to the period of the vibration. These observations strongly support the theories which argue that the horizontal motion is supplied by collisions with the bottom plate and dissipated by interparticle collisions. The short time intervals during which the pile is colliding with the plate play a crucial role in maintaining the surface waves. We then consider each process in more detail. In Fig. 2, we show the time evolution of the horizontal velocity field $`V_x(x,t)`$, which is defined as the average horizontal velocity of the particles whose centers are in $`[x,x+dx]`$. Here, $`\mathrm{\Gamma }=3`$, $`f=10`$, and the other parameters are the same as in Fig. 1. We again normalize $`V_x(x,t)`$ by $`2\pi Af`$. In the figure, sharp changes in $`V_x(x,t)`$ occur at certain values of $`t`$, which we find to be identical to the positions of the peaks in $`U(t)`$. Thus, sudden increases in $`V_x(x,t)`$ occur when the pile collides with the plate. The shape of $`V_x(x,t)`$ after such collisions does not change much until the next collision, but the overall magnitude of $`V_x(x,t)`$ decreases. The profile of $`V_x`$ just after the collisions is important for the dynamics of the surface waves. Since the motion of the particles is deterministic, it is possible that the profile is determined by certain fields just before the collisions. We check the dependence of $`V_x`$ on several fields. Specifically, we calculate the correlation coefficient $`r`$ between $`V_x`$ after the collisions and the following fields just before the collisions: the $`x`$ and $`y`$ velocity fields $`V_x`$ and $`V_y`$, their spatial derivatives $`\partial _xV_x`$ and $`\partial _xV_y`$, the number density $`m`$ and its derivative $`\partial _xm`$, and the center of mass field $`h`$ and its derivative $`\partial _xh`$. Here, $`m(x)dx`$ ($`h(x)`$) is defined to be the total number (the center of mass) of the particles whose centers are in $`[x,x+dx]`$. Since the determination of the collision times becomes difficult for large $`\mathrm{\Gamma }`$, we study only $`\mathrm{\Gamma }\le 4`$. We average $`r`$ over $`10`$ collisions. We find that $`r`$ is large ($`\approx 0.8`$) for $`\partial _xm`$, $`\partial _xh`$ and $`V_x`$, and there is essentially no correlation with the other fields. The large value of $`r`$ for $`\partial _xh`$ suggests

$$V_x(x,t_a)\propto \partial _xh(x,t_b),$$ (4)

where $`t_a`$ and $`t_b`$ are the times just after and before the collisions, respectively. This very equation was assumed in the theories of Cerda et al. and Eggers and Riecke, where the form was motivated by the flow of granular material on an inclined plane. The strong correlation occurs for all $`\mathrm{\Gamma }`$ studied here, including the $`\mathrm{\Gamma }=2`$ case where no surface wave is observed. We find that the spatial variation of the packing fraction is not significant, so $`m`$ is roughly proportional to $`h`$. We thus expect the correlation for $`\partial _xm`$ to be large as well.
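A sketch of how such a correlation measurement might look is given below: particle data are binned into fields on a grid, the slope of the center-of-mass field is taken numerically, and the standard correlation coefficient is computed. The bin count and the treatment of empty bins are arbitrary choices here, not taken from the paper.

```python
import numpy as np

def binned_mean(x, q, edges):
    """Average of quantity q over particles whose centers fall in
    each bin [x, x+dx); empty bins are returned as NaN."""
    idx = np.digitize(x, edges) - 1
    out = np.full(len(edges) - 1, np.nan)
    for b in range(len(edges) - 1):
        sel = idx == b
        if sel.any():
            out[b] = q[sel].mean()
    return out

def slope_correlation(x_a, vx_a, x_b, y_b, W=16.0, nbins=64):
    """Correlation coefficient r between the binned field V_x(x) just
    after a collision (positions x_a, velocities vx_a) and dh/dx just
    before it (positions x_b, heights y_b), cf. Eq. (4)."""
    edges = np.linspace(0.0, W, nbins + 1)
    vx_field = binned_mean(x_a, vx_a, edges)
    h_field = binned_mean(x_b, y_b, edges)
    dh_dx = np.gradient(h_field, W / nbins)
    ok = ~np.isnan(vx_field) & ~np.isnan(dh_dx)
    return np.corrcoef(vx_field[ok], dh_dx[ok])[0, 1]
```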
What is strange is the strong correlation between $`V_x`$ before and after the collisions, which states that the particles reverse the direction of their horizontal movements while the pile is colliding with the plate. We do not understand the origin of this reversal of the motion. We also measure the magnitude of the horizontal movements for several values of $`\mathrm{\Gamma }`$. To be more specific, we measure the peak values of $`U(t)`$, like the ones shown in Fig. 1, and average them over all the collisions. One might expect the average to be proportional to the collision velocity of the particles on the plate. Even though both display a minimum around $`\mathrm{\Gamma }=4.6`$, we find that there is no strong correlation between them, and the average seems to show a complicated dependence on $`\mathrm{\Gamma }`$. Further study is necessary to quantify and understand this dependence. Next, we study the decay process, in which the horizontal movements of the particles gradually decrease between the collisions with the plate. We quantify the process as follows. We start from a time series of $`U(t)`$ like the ones in Fig. 1. We translate the positions of the peaks so that they all coincide. We then normalize the peak values of $`U(t)`$ to unity, and average $`U(t)`$ over all the peaks. The resulting quantity, defined as $`U_a(t)`$, can be used to characterize the decay process. In Fig. 3, we show $`U_a(t)`$ for $`\mathrm{\Gamma }=2,3`$ and $`4`$, where the other parameters remain unchanged. We then define the decay time $`\tau `$ as the “half-life”—the time at which $`U_a(t)`$ becomes $`1/2`$. The half-life for $`\mathrm{\Gamma }=2`$ is about $`0.0045`$, which is much smaller than the period of the vibration ($`T=0.1`$). On the other hand, $`\tau `$ is close to $`0.08`$ for $`\mathrm{\Gamma }=3`$ and $`4`$. For all $`\mathrm{\Gamma }`$ we have studied, we find that the surface waves are present if and only if $`\tau `$ is comparable to the period of the vibration. The need for large $`\tau `$ for the formation of the surface waves is evident, as discussed previously. The fact that the dynamics consists of two separate processes does not seem to depend on small variations of the interaction parameters. A quantitative understanding of the decay process requires that one be able to predict the dependence of $`\tau `$ on parameters such as $`\mathrm{\Gamma }`$. However, it seems that such dependences are rather complex. For example, we find that $`\tau `$ does not depend on $`\mathrm{\Gamma }`$ in a simple fashion: $`\tau `$ is not a monotonic function of $`\mathrm{\Gamma }`$, and a small increase in $`\mathrm{\Gamma }`$ can result in a ten-fold increase or decrease in $`\tau `$. At the particle level, the decay process occurs through collisions between the particles and the resulting changes in the horizontal movements. In order to make quantitative predictions, one thus has to understand both the collision frequency and the resulting momentum changes. Unfortunately, both of these quantities are poorly understood. More studies are again needed in order to understand the decay process. The time evolution of the horizontal movements of the particles can also be studied using the auto-correlation function $`c(t)`$, defined as $`\langle v_x(0)v_x(t)\rangle `$, where the average is taken over the particles. We find that $`c(t)`$ also displays the two basic processes: sharp change during the collisions with the plate and slow decay between them.
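The half-life extraction from the peak-aligned, peak-normalized average $`U_a(t)`$ can be sketched as follows; the linear interpolation between samples is an arbitrary refinement, and the synthetic exponential decay is only a test input.

```python
import numpy as np

def half_life(t, U_a):
    """Decay time tau: the first time at which the peak-aligned,
    peak-normalized average U_a(t) drops below 1/2."""
    below = np.where(U_a < 0.5)[0]
    if below.size == 0:
        return np.inf          # never decays below 1/2
    k = below[0]
    if k == 0:
        return t[0]
    # linear interpolation between the bracketing samples
    frac = (0.5 - U_a[k - 1]) / (U_a[k] - U_a[k - 1])
    return t[k - 1] + frac * (t[k] - t[k - 1])

# e.g., a synthetic exponential decay with e-folding time 0.01:
t = np.linspace(0.0, 0.1, 1000)
print(half_life(t, np.exp(-t / 0.01)))   # ~ 0.01 * ln 2 = 0.0069
```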
In sum, we have shown that the horizontal movements of the particles, which are essential to the formation of the surface waves, consist of two separate processes: a sharp increase of the movements due to the collisions with the bottom plate, and a slow decrease of the movements due to interparticle collisions. These findings strongly support the theories of the first group. We also find that the horizontal velocity field $`V_x(x)`$ after the collisions with the plate is strongly correlated with $`\partial _xh(x)`$ just before the collisions. Here, we are mainly interested in the qualitative features of the mechanism for the formation of the surface waves. Their quantitative understanding, such as the parameter dependence of $`\tau `$, will be the subject of future work. After the present work was completed, we became aware of the work of Kim et al., where similar results were obtained. I thank H. K. Pak, S. Kim, K. Kim and S.-O. Jeong for useful discussions. This work is supported in part by the Department of Energy under grant DE-FG02-93-ER14327, the Korea Science and Engineering Foundation through the Brain-Pool program, and SNU-CTP.
# Orbitally Excited Baryons in Large $`N_c`$ QCD

Invited parallel session talk presented at PANIC99, Uppsala, Sweden, June 10, 1999. William and Mary preprint no. WM-99-113.

## 1 Introduction

QCD admits a useful and elegant expansion in powers of $`1/N_c`$, where $`N_c`$ is the number of colors. This expansion has been utilized successfully in baryon effective field theories to isolate the leading and subleading contributions to a variety of physical observables. Here we study the mass spectrum of the nonstrange, $`\ell =1`$ baryons (associated with the SU(6) 70-plet for $`N_c=3`$) in a large-$`N_c`$ effective theory. We describe the states as a symmetrized “core” of $`(N_c-1)`$ quarks in the ground state plus one excited quark in a relative $`P`$ state. “Quarks” in the effective theory refer to eigenstates of the spin-flavor-orbit group, SU(6) $`\times `$ O(3), such that an appropriately symmetrized collection of $`N_c`$ of them has the quantum numbers of the physical baryons. Baryon wave functions are antisymmetric in color and symmetric in the spin-flavor-orbit indices of the quark fields. While this construction assures that we obtain states with the correct total quantum numbers, we do not assume that SU(6) is an approximate symmetry of the effective Lagrangian. Rather, we parameterize the most general way in which spin and flavor symmetries are broken by introducing a complete set of quark operators that act on the baryon states. Matrix elements of these operators are hierarchical in $`1/N_c`$, so that predictivity can be obtained without recourse to ad hoc phenomenological assumptions. The nonstrange 70-plet states which we consider in this analysis consist of two isospin-$`3/2`$ states, $`\mathrm{\Delta }_{1/2}`$ and $`\mathrm{\Delta }_{3/2}`$, and five isospin-$`1/2`$ states, $`N_{1/2}`$, $`N_{1/2}^{}`$, $`N_{3/2}`$, $`N_{3/2}^{}`$, and $`N_{5/2}^{}`$. The subscript indicates total baryon spin; unprimed states have quark spin $`1/2`$ and primed states have quark spin $`3/2`$. These quantum numbers imply that two mixing angles, $`\theta _{N1}`$ and $`\theta _{N3}`$, are necessary to specify the total angular momentum $`1/2`$ and $`3/2`$ nucleon mass eigenstates, respectively. Definitions of these angles can be found in the literature.

## 2 Operator Analysis

To parameterize the complete breaking of SU(6)$`\times `$O(3), it is natural to write all possible mass operators in terms of the generators of this group. The generators of orbital angular momentum are denoted by $`\ell ^i`$, while $`S^i`$, $`T^a`$, and $`G^{ia}`$ represent the spin, flavor, and spin-flavor generators of SU(6), respectively. The generators $`S_c^i`$, $`T_c^a`$, $`G_c^{ia}`$ refer to those acting upon the $`N_c-1`$ core quarks, while the separate SU(6) generators $`s^i`$, $`t^a`$, and $`g^{ia}`$ act only on the single excited quark. Factors of $`N_c`$ originate either as coefficients of operators in the Hamiltonian, or through matrix elements of those operators. An $`n`$-body operator, which acts on $`n`$ quarks in a baryon state, has a coefficient of order $`1/N_c^{n-1}`$, reflecting the minimum number of gluon exchanges required to generate the operator in QCD. Compensating factors of $`N_c`$ arise in matrix elements if sums over quark lines are coherent. For example, the unit operator $`1`$ contributes at $`O(N_c^1)`$, since each core quark contributes equally in the matrix element.
The core spin of the baryon, $`S_c^2`$, contributes to the masses at $`O(1/N_c)`$, because the matrix elements of $`S_c^i`$ are of $`O(N_c^0)`$ for baryons that have spins of order unity as $`N_c\mathrm{}`$. Similarly, matrix elements of $`T_c^a`$ are $`O(N_c^0)`$ in the two-flavor case, since the baryons considered have isospin of $`O(N_c^0)`$, but the operator $`G_c^{ia}`$ has matrix elements on this subset of states of $`O(N_c^1)`$. This means that the contributions of the $`O(N_c)`$ quarks add incoherently in matrix elements of the operator $`S_c^i`$ or $`T_c^a`$, but coherently for $`G_c^{ia}`$. Thus, the full large-$`N_c`$ counting of the matrix element is $`O(N_c^{1-n+m})`$, where $`m`$ is the number of coherent core quark generators. A complete operator basis for the nonstrange 70-plet masses is shown in Table 1 (some of these operators were considered previously in the literature). Index contractions are left implicit wherever they are unambiguous, and the $`c_i`$ are operator coefficients. The tensor $`\ell _{ij}^{(2)}`$ represents the rank-two tensor combination of $`\ell ^i`$ and $`\ell ^j`$ given by $`\ell _{ij}^{(2)}=\frac{1}{2}\{\ell _i,\ell _j\}-\frac{\ell ^2}{3}\delta _{ij}`$.

## 3 Results

Since the operator basis in Table 1 completely spans the $`9`$-dimensional space of observables, we can solve for the $`c_i`$ given the experimental data. For each baryon mass, we assume that the central value corresponds to the midpoint of the mass range quoted in the Review of Particle Properties; we take the one-standard-deviation error as half of the stated range. To determine the off-diagonal mass matrix elements, we use the mixing angles extracted from the analysis of strong decays, $`\theta _{N1}=0.61\pm 0.09`$ and $`\theta _{N3}=3.04\pm 0.15`$. These values are consistent with those obtained from radiative decays. Solving for the operator coefficients, we obtain the values shown in Table 2. Naively, one expects the $`c_i`$ to be of comparable size. Using the value of $`c_1`$ as a point of comparison, it is clear that there are no operators with anomalously large coefficients. Thus, we find no conflict with the naive $`1/N_c`$ power counting rules. However, only three of the nine operators, $`𝒪_1`$, $`𝒪_3`$, and $`𝒪_6`$, have coefficients that are statistically distinguishable from zero! A fit including those three operators alone is shown in Table 3, and has a $`\chi ^2`$ per degree of freedom of $`1.87`$. Fits involving other operator combinations are studied elsewhere. Clearly, large-$`N_c`$ power counting is not sufficient by itself to explain the $`\ell =1`$ baryon masses—the underlying dynamics plays a crucial role.

## 4 Interpretation and Conclusions

We will now show that the preference in Table 2 for two nontrivial operators, $`\frac{1}{N_c}\ell ^{(2)}gG_c`$ and $`\frac{1}{N_c}S_c^2`$, can be understood in a constituent quark model with single pseudoscalar meson exchange, up to corrections of order $`1/N_c^2`$. The argument goes as follows: the pion couples to the quark axial-vector current, so that the $`\overline{q}q\pi `$ coupling introduces the spin-flavor structure $`\sigma ^i\tau ^a`$ on a given quark line. In addition, pion exchange respects the large $`N_c`$ counting rules given in Section 2.
A single pion exchange between the excited quark and a core quark maps to the operators $`g^{ia}G_c^{ja}\ell _{ij}^{(2)}`$ and $`g^{ia}G_c^{ia}`$, while pion exchange between two core quarks yields $`G_c^{ia}G_c^{ia}`$. These exhaust the possible two-body operators that have the desired spin-flavor structure. The first operator is one of the two in our preferred set. The third operator may be rewritten as

$$2G_c^{ia}G_c^{ia}=C_2\mathrm{1}-\frac{1}{2}T_c^aT_c^a-\frac{1}{2}S_c^2$$ (1)

where $`C_2`$ is the SU(4) quadratic Casimir for the totally symmetric core representation (the 10 of SU(4) for $`N_c=3`$). Since the core wavefunction involves two spin and two flavor degrees of freedom, and is totally symmetric, it is straightforward to show that $`T_c^2=S_c^2`$. Then Eq. (1) implies that one may exchange $`G_c^{ia}G_c^{ia}`$ in favor of the identity operator and $`S_c^2`$, the second of the two operators suggested by our fits. The remaining operator, $`g^{ia}G_c^{ia}`$, is peculiar in that its matrix element between two nonstrange, mixed-symmetry states is given by

$$\frac{1}{N_c}\langle gG\rangle =\frac{N_c+1}{16N_c}+\delta _{S,I}\frac{I(I+1)}{2N_c^2},$$ (2)

which differs from the identity only at order $`1/N_c^2`$. Thus, to order $`1/N_c`$, one may make the replacements

$$\{1,g^{ia}G_c^{ja}\ell _{ij}^{(2)},g^{ia}G_c^{ia},G_c^{ia}G_c^{ia}\}\to \{1,g^{ia}G_c^{ja}\ell _{ij}^{(2)},S_c^2\}.$$ (3)

We conclude that the operator set suggested by the data may be understood in terms of single pion exchange between quark lines. This is consistent with the interpretation of the mass spectrum advocated by Glozman and Riska. Other simple models, such as single gluon exchange, do not directly select the operators suggested by our analysis and may require others that are disfavored by the data.
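As a quick numerical check of this last statement (using the sign conventions of Eq. (2) as printed above), the following sketch evaluates the matrix element and verifies that the state-dependent piece scales as $`1/N_c^2`$:

```python
from fractions import Fraction

def gG_over_Nc(Nc, I, S):
    """(1/N_c) <g.G_c> from Eq. (2) for isospin I and quark spin S of
    a nonstrange mixed-symmetry state; I, S may be half-integral."""
    I = Fraction(I)
    val = Fraction(Nc + 1, 16 * Nc)
    if S == I:
        val += I * (I + 1) / (2 * Nc**2)   # the O(1/N_c^2) piece
    return val

# The S = I and S != I matrix elements differ only at order 1/N_c^2:
for Nc in (3, 9, 27, 81):
    d = (gG_over_Nc(Nc, Fraction(1, 2), Fraction(1, 2))
         - gG_over_Nc(Nc, Fraction(1, 2), Fraction(3, 2)))
    print(Nc, d, d * Nc**2)   # d * N_c^2 stays fixed at I(I+1)/2 = 3/8
```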
# Core-Collapse Simulations of Rotating Stars

## 1 Introduction

The study of the collapse of rotating massive stars is nearly as old as the study of core-collapse supernovae themselves. Four years after the first numerical simulations of neutrino-powered core-collapse supernovae (Colgate & White 1966), LeBlanc & Wilson (1970) modeled the core-collapse of a rotating massive star. Instead of using neutrinos to convert gravitational energy into kinetic energy and power the explosion (Colgate & White 1966), LeBlanc & Wilson (1970) proposed that supernovae were powered by the conversion of rotational energy using magnetic fields. Although neutrino heating is now considered to be the dominant power source driving supernovae, the importance of rotation remains a matter of dispute. Most of the recent work studying the collapse of rotating massive stars has concentrated on the emission of gravitational waves (Müller, Różyczka, & Hillebrandt 1980; Tohline, Schombert, & Boss 1980; Finn & Evans 1990; Mönchmeyer et al. 1991; Bonazzola & Marck 1993; Yamada & Sato 1995; Zwerger & Müller 1997; Rampp, Müller, & Ruffert 1998). Some of the work, however, was devoted to understanding the effect of rotation upon the supernova explosion itself (Müller & Hillebrandt 1981; Bodenheimer & Woosley 1983; Symbalisty 1984; Mönchmeyer & Müller 1989; Janka & Mönchmeyer 1989; Janka, Zwerger, & Mönchmeyer 1993; Yamada & Sato 1994). Currently, no consensus on the effects of rotation has been reached. For instance, there is still disagreement on whether rotation increases or decreases the explosion energy. Mönchmeyer & Müller (1989) found that rotation weakens the core bounce, ultimately weakening the explosion. On the other hand, the asymmetry in the neutrino emission caused by rotation may help seed convection, increasing the efficiency of neutrino heating and ultimately producing a more powerful explosion (Yamada & Sato 1994). The simplifying assumptions in all of these simulations make it difficult to resolve this disagreement. For instance, none of the above simulations incorporated neutrino transport into their multi-dimensional hydrodynamic simulations. Many of the simulations use extremely simplified equations of state which, as Müller & Hillebrandt (1981) discovered, significantly alters the effect of rotation. In addition, the pre-collapse cores used in these models were all created assuming no rotation, and the angular momentum was then added artificially prior to the collapse simulation. Two major advances in supernova theory seek to resolve the question of rotation effects on core-collapse. First, in the past 5 years, codes have been developed with the necessary equations of state and neutrino physics to model the core collapse in two dimensions from the initial collapse all the way through explosion (Herant et al. 1994; Burrows, Hayes, & Fryxell 1995; Fryer 1999). In addition to these improvements in the collapse codes, Heger (1998) and Heger, Langer, & Weaver (1999) have evolved rotating massive stars to core-collapse using a prescription for the transport of angular momentum, producing rotating core-collapse progenitors. In this paper, we present results of collapse simulations of these rotating massive cores. In §2, we outline the assumptions of the progenitor models and discuss the specific progenitor we use in our simulations. The core collapse code is described in §3 and the basic effects of rotation are outlined in §4.
We find that not only is the bounce of rotating stars weaker, but the angular momentum in the star stabilizes the core, constraining convection to the poles. These two effects decrease dramatically the efficiency at which convection is able to convert neutrino energy into kinetic energy. The net effect is to delay the supernova explosion, producing larger compact remnants and weaker explosions. We present the explosion simulations for all our models in §5. The asymmetry in the convection produces asymmetric explosions which may explain polarization measurements of supernovae and may inject alpha-elements deep into the star’s envelope. We conclude by reviewing the implications of these results for core-collapse supernovae, black hole formation, and the collapsar gamma-ray burst model.

## 2 Progenitor

All of the previous collapse simulations of rotating cores prescribed an angular momentum profile onto a non-rotating progenitor. For our simulations, we use the 15 M<sub>⊙</sub> rotating progenitor E15B of Heger et al. (1999). At central hydrogen ignition, this model has a “solar” composition (Grevesse & Noels 1993) and an equatorial rotation velocity of 200 km s<sup>-1</sup>, which is a typical value for these stars (e.g., Fukuda 1982). For this model, the redistribution of angular momentum and chemical mixing was formulated similarly to the technique of Endal & Sofia (1978) and used a parameterization of the different mixing efficiencies similar to that of Pinsonneault et al. (1989). The rotationally induced mixing processes include the dynamical and secular shear instability, Solberg-Høiland instability, Goldreich-Schubert-Fricke instability, and Eddington-Sweet circulation. All these mixing processes, as well as the convective instability, were assumed to lead to rigid rotation on their corresponding time-scale (Fig. 1 shows the evolution of the angular momentum at various stages of the star’s life). Equipotential surfaces were assumed to be rigidly rotating and chemically homogeneous due to the barotropic instability and horizontal turbulence (Zahn 1975; Chaboyer & Zahn 1992; Zahn 1992). Magnetic fields were not considered, since their efficiency in redistributing angular momentum (and mixing) inside stars is still very controversial (e.g., Spruit & Phinney 1998, Spruit 1998 vs. Livio & Pringle 1998) and no reliable prescriptions exist yet. For more detail on the progenitor model and the input physics we refer the reader to Heger (1998) and Heger et al. (1999). If magnetic fields are strong, the angular velocities of the core and surface may remain coupled during most evolutionary phases, leading to much smaller core rotation rates than found by Heger et al. (1999). In the model for the evolution of an internal stellar magnetic field by Spruit & Phinney (1998), the core rotation decouples from the surface only *after* central carbon depletion. The cores resulting from their calculation carry far too little angular momentum to explain the observed rotation rates of young pulsars (Marshall et al. 1998), and thus Spruit & Phinney (1998) have to employ another mechanism to spin them up again. Livio & Pringle (1998) instead find that the coupling between core and envelope due to magnetic fields should be far less than assumed by Spruit & Phinney (1998), allowing a natural explanation for the spin rates of both young pulsars and white dwarfs.
With regard to the relevant physics in core collapse and the neutrino-driven supernova mechanism, the structure of our rotating progenitor is not too different from that of Woosley & Weaver (1995). Figure 2 shows the density and temperature profiles just before collapse for our rotating progenitor and model s15s7b from Woosley & Weaver (1995). Note that beyond a radius of $`3\times 10^8`$ cm, the density of the rotating star is roughly a factor of 2 lower than that of the non-rotating Woosley & Weaver progenitor. Pressure equilibrium requires that the temperature at these radii also be lower. Burrows & Goshy (1993) have shown that cores with lower densities are more likely to explode because the ram pressure which prevents the explosion is lower for these cores (see §4). However, the differences between these two stars are small enough that they do not change the ultimate features of the core collapse significantly (§5). The angular momentum distribution, on the other hand, does dramatically affect the collapse. The angular velocity profile of our rotating progenitor is shown in Figure 3. For comparison, we have included the angular velocity profiles of many of the models used by Mönchmeyer & Müller (1989). Rather than use a rotating progenitor, Mönchmeyer & Müller (1989) added these angular velocities onto a non-rotating progenitor. Their model A closely resembles our rotating progenitor, but most of their models have higher angular velocities, and the effects of rotation are stronger for most of their simulations. In addition, our angular velocity distribution consists of a series of discrete shells which rotate at constant angular velocity, whereas the Mönchmeyer & Müller (1989) models all assume a continuous rotation distribution. These discrete shells are the result of discrete convection regions in the progenitor star. The strong convection in the star causes the material in each shell to reach a constant angular velocity (rigid-body rotation). However, shear viscosity does not equilibrate adjacent shells and, at the boundaries of convective regions, the angular velocity can change abruptly. The evolution of the core collapse depends most sensitively on the magnitude of the rotation speed and not its derivative and, hence, the initial collapse of model A from Mönchmeyer & Müller (1989) agrees well with our simulations (see §4).

## 3 Numerical Techniques

For our simulations, we use the smoothed particle hydrodynamics (SPH) core-collapse code originally described in Herant et al. (1994) and Fryer et al. (1999). This code models the core collapse in two dimensions continuously from collapse through bounce and ultimately to explosion. The neutrino transport is mediated by a single-energy flux limiter. Beyond a critical radius, where $`\tau <0.1`$, a simple “light-bulb” approximation for the neutrinos is invoked which assumes that any material beyond that radius is bathed by an isotropic flux equal to the neutrino flux escaping that radius. We use all the neutrino production and destruction processes discussed in Herant et al. (1994) except electron scattering, because our neutrino transport algorithm overestimates its effect. We implement gravity assuming the mass distribution is spherical. Assuming a spherical mass distribution also allows us to easily implement general relativity fully into the equations of motion using the technique described by Van Riper (1979). This treatment includes both the relativistic modifications to the energy equation and time dilation.
We further redshift the neutrino energies as the neutrinos climb out of the core’s potential well. Our two-dimensional simulations assume cylindrical symmetry about the rotation axis, modeling a full 180° wedge with a reflective boundary around the symmetry axis. The inner 0.01 M<sub>⊙</sub> is also modeled as a reflective boundary which collapses along with the star as described in Herant et al. (1994). Because the progenitor models did not account for centrifugal acceleration, we had to artificially add 0.04 M<sub>⊙</sub> to the inner core to induce a collapse (hence, the inner core has an effective mass of 0.05 M<sub>⊙</sub>). We also ran a set of simulations with an additional 0.04 M<sub>⊙</sub> added to the core, but the results did not change (Table 2). For most of our simulations, we use $`9000`$ particles with a resolution in the convective parts of the star of roughly 2°. We have run one model with $`32,000`$ particles (twice the resolution) and the results do not change significantly (Table 2). The initial angular momentum distribution of the progenitor (see §2; Heger et al. 1999) is mapped onto the 2-dimensional grid of particles assuming that the rotation axis is identical to the symmetry axis. Each particle is given an initial specific angular momentum: $`j_i=(x_i/r_i)j(r)`$ where $`r_i`$, $`x_i`$ are the radial distance and the particle distance from the rotation axis and $`j(r)`$ is the angular momentum given by the E15B progenitor of Heger et al. (1999; Fig. 3). We have also set up models using $`j_i=0.5(x_i/r_i)j(r)`$ and $`j_i=0`$. For most of our simulations, we assume that the angular momentum is conserved, which in our 2-dimensional SPH code is achieved simply by holding $`j_i`$ constant for each particle. The force on the particles, which are rings in our rotationally symmetric simulation, is modified by adding the centrifugal force: $`j_i^2/x_i^3`$. By varying the initial angular momenta of the particles, we can calculate the effects of rotation. However, in our rotating models, the angular velocity of the matter ($`v^\varphi `$) becomes so high that it may rotate 10-100 periods ($`P_i=2\pi x_i/v_i^\varphi `$) during the course of the $`\sim 1`$ s simulation. In this case, it is likely that viscous forces will facilitate the transport of angular momentum. To estimate the effects of these viscous forces, we have implemented an $`\alpha `$-disk viscosity (Shakura & Sunyaev 1973) which is suitable for the disk-like structures that develop as the core collapses. We use the shear viscosity tensor from Tassoul (1978) and drop the terms involving a derivative in $`\varphi `$ (since $`\partial X/\partial \varphi =0`$ in our rotational symmetry). Using our SPH formalism, the transport of angular momentum is: $$\frac{Dj_i}{Dt}=\sum _k\alpha _\mathrm{d}\overline{c}_s\overline{H}_\mathrm{p}\overline{x}\frac{\overline{m}}{m_i}\left[\frac{3\mathrm{\Delta }(v^\varphi /x)}{r_{ik}+ϵ\overline{h}}+\frac{\overline{x}\mathrm{\Delta }(v^\varphi /x)}{r_{ik}^2+ϵ\overline{h}^2}+\frac{\overline{x}\mathrm{\Delta }(v^\varphi )}{r_{ik}+ϵ\overline{h}}\right]$$ (1) where $`\alpha _\mathrm{d}=0.1`$–$`10^{-4}`$ is the $`\alpha `$-disk parameter, and $`\overline{c}_s`$, $`\overline{H}_\mathrm{p}`$, $`\overline{x}`$, $`\overline{m}`$, and $`\overline{h}`$ are the mean values between neighbor particles $`i`$ and $`k`$ of, respectively, the adiabatic sound speed, the disk scale height, the distance from the rotation axis, the particle mass, and the SPH smoothing length.
$`\mathrm{\Delta }(v^\varphi /x)=(v_i^\varphi /x_i-v_k^\varphi /x_k)`$, $`\mathrm{\Delta }(v^\varphi )=(v_i^\varphi -v_k^\varphi )`$, $`r_{ik}`$ is the separation of particles $`i`$ and $`k`$, and $`ϵ=0.1`$. The net energy lost by the particles is then added back to the particles equally as dissipation energy, which is equivalent to using the dissipation function given in Tassoul (1978). The summation occurs over all of the SPH neighbors of particles for which $`P_i<1`$ s. This implementation conserves angular momentum and energy, and the $`\alpha `$-disk viscosity gives some insight into the effects of viscous forces (a schematic implementation of this update is sketched at the end of this section). To follow the progression of the supernova explosion to late times ($`\sim 1`$ yr after core collapse), we convert our 2-dimensional results back into the one-dimensional stellar evolution code KEPLER (Weaver, Zimmermann, & Woosley 1978). This is the same code that has been used to evolve the stars until core collapse (Heger 1998; Heger, Langer, & Woosley 1999). Unfortunately, KEPLER simulates the explosion using a “piston” rather than a simple injection of energy, which would match the physics more reliably. The piston moves inward at free-fall until the infall velocity reaches about 1,000 km s<sup>-1</sup>, at which point it bounces and moves outward until it reaches a final position of $`10^4`$ km. The first part of the piston movement is taken from the SPH simulation by following the trajectory of a spherical mass shell. This method leads to a systematic underestimate of the energy input because any spherically determined mass shell has matter flowing through it, and although the mean velocities of the exploding matter are matched by the spherically determined mass trajectories, the total energy in the explosion can be much higher. As the simulation progresses beyond the maximum time from our two-dimensional simulations, we continue the movement of the piston such that it resembles the free movement of a test particle in the gravitational field of a point mass equal to $`\alpha `$ times the mass interior to the piston which reaches zero velocity at the final location of the piston ($`10^4`$ km; see above), following the prescription by Woosley & Weaver (1995). The value of $`\alpha `$ is chosen such that it “smoothly” continues the piston movements obtained from the SPH simulations. This, however, involves some ambiguity (about a factor of two in $`\alpha `$, we estimate), since we have no measure to quantify the quality of this fit. The values of $`\alpha `$ we use are given in Table 3. We carry out these extended SN simulations for a rotating model (Model 1; see Table 1) and a non-rotating model (Model 6) and for different locations, in mass, of the piston (Table 3). For Model 1 we switch to the “artificial” piston 1.9 s after the onset of core collapse, for Model 6 after 1.0 s. The contribution of this artificial continuation of the SPH-derived pistons to the total work of the piston is also given in Table 3: in most cases it is small and thus our choice of $`\alpha `$ should not significantly alter our results. We note that during the inward movement of the piston a work of about $`10^{51}`$ erg (for Model 1 with the 1.1 M<sub>⊙</sub> piston) is done against the piston by the infall, and thus the work of the piston counting from its minimum location (in radius) is higher by this amount. The piston energies given in Table 3 are the integrated work energies from the beginning of the SPH simulation until $`\sim 1`$ yr after the core collapse.
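To make the transport step of Eq. (1) concrete, the following is a minimal, hypothetical C++ sketch of how one explicit evaluation of $`Dj_i/Dt`$ could be computed over a particle’s neighbor list. All names and the demo values in main() are ours (the actual code of Herant et al. 1994 and Fryer et al. 1999 is not reproduced here); the body simply mirrors Eq. (1) as printed.

```cpp
// One alpha-disk angular-momentum transport evaluation per Eq. (1).
#include <cmath>
#include <cstdio>
#include <vector>

struct Particle {
    double j;    // specific angular momentum
    double x;    // distance from the rotation axis
    double vphi; // azimuthal velocity, vphi = j / x
    double m;    // particle mass
    double h;    // SPH smoothing length
    double cs;   // adiabatic sound speed
    double Hp;   // local disk scale height
};

// Dj_i/Dt for particle i, summed over its SPH neighbours k.
double dj_dt(const Particle& p_i, const std::vector<Particle>& nbrs,
             const std::vector<double>& r_ik,  // separations of i and k
             double alpha_d, double eps = 0.1) {
    double sum = 0.0;
    for (std::size_t n = 0; n < nbrs.size(); ++n) {
        const Particle& p_k = nbrs[n];
        const double r  = r_ik[n];
        // mean ("barred") values between particles i and k
        const double cs = 0.5 * (p_i.cs + p_k.cs);
        const double Hp = 0.5 * (p_i.Hp + p_k.Hp);
        const double x  = 0.5 * (p_i.x + p_k.x);
        const double m  = 0.5 * (p_i.m + p_k.m);
        const double h  = 0.5 * (p_i.h + p_k.h);
        const double dOmega = p_i.vphi / p_i.x - p_k.vphi / p_k.x; // Delta(vphi/x)
        const double dVphi  = p_i.vphi - p_k.vphi;                 // Delta(vphi)
        sum += alpha_d * cs * Hp * x * (m / p_i.m) *
               (3.0 * dOmega / (r + eps * h) +
                x * dOmega / (r * r + eps * h * h) +
                x * dVphi / (r + eps * h));
    }
    return sum;
}

int main() {
    // Arbitrary CGS-like demo numbers, not taken from the simulations.
    Particle p1{1.0e16, 1.0e7, 1.0e9, 1.0e30, 1.0e6, 1.0e8, 5.0e6};
    std::vector<Particle> nbrs{{1.2e16, 1.2e7, 1.0e9, 1.0e30, 1.0e6, 1.0e8, 5.0e6}};
    std::vector<double> sep{2.0e6};
    std::printf("Dj/Dt = %g\n", dj_dt(p1, nbrs, sep, 1.0e-4));
    return 0;
}
```

In the real scheme the pair terms would presumably also be applied with opposite sign to neighbor $`k`$, which is what makes the update conserve total angular momentum, as stated above.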
For the non-rotating Model 6 we also carry out a series of explosions where we start the “artificial” piston at the minimum location of the piston (as obtained from the SPH simulation) at a mass coordinate of 1.1 M<sub>⊙</sub> and for different values of $`\alpha `$. In comparison to the model with the same mass cut but using the artificial piston only after the end of the SPH data, we get the same energies to within less than 1%. The explosion energy turns out to be rather insensitive to the value of $`\alpha `$. Due to the above-mentioned underestimate of the explosion energy in the 1D simulation, a more than six times higher value of $`\alpha `$ ($`\sqrt{6}`$ times higher piston velocity) is necessary to obtain the same explosion energy as in the SPH simulation. By using the results from this series of piston simulations, we can estimate not only the kinetic energies of our supernova explosions, but also the amount of fallback and <sup>56</sup>Ni ejected (Table 2).

## 4 Collapse and Convection

The evolutionary scenario for the current paradigm of core-collapse supernovae begins with the collapse of a massive star ($`M_{\mathrm{star}}\gtrsim 8`$ M<sub>⊙</sub>), which occurs when the densities and temperatures at the core are sufficiently high to cause both the dissociation of the iron core (removing energy from the core) and the capture of electrons onto protons, which further reduces the pressure. This sudden decrease in pressure causes the core to collapse nearly at free-fall, and the collapse stops only when nuclear densities are reached and nucleon degeneracy pressure once again stabilizes the core and drives a bounce shock back out through the core. The bounce shock stalls at roughly 100-300 km as neutrino emission and iron dissociation sap its energy. It leaves behind an entropy profile which is unstable to convection, and sets up a convective region from $`\sim 50`$ km out to $`\sim 300`$ km (Fig. 4) capped by the accretion shock of infalling material. The convective region must overcome this cap to launch a supernova explosion. Neutrino heating deposits considerable energy into the bottom layers of the convective region. If this material were not allowed to convect (which is the case for most of the 1-dimensional simulations), it would then re-emit this energy via neutrinos, producing a steady state with no net energy gain. In the meantime, the increase in pressure as more material piles up at the accretion shock and the decrease in neutrino luminosity as the proto-neutron star cools make it increasingly difficult to launch an explosion. In the “delayed-neutrino” supernova mechanism (Wilson & Mayle 1988; Miller, Wilson, & Mayle 1993; Herant et al. 1994; Burrows, Hayes, & Fryxell 1995; Janka & Müller 1996), convection aids the explosion in two ways: a) as the lower layers of the convective region are heated, that material rises and cools adiabatically, converting the energy from neutrino deposition into kinetic and potential energy rather than re-radiating it as neutrinos, and b) the material does not simply pile up at the shock but instead convects down to the surface of the proto-neutron star, where it either accretes onto the proto-neutron star providing additional neutrino emission or is heated and rises back up. Thus, convection both increases the efficiency at which neutrino energy is deposited into the convective region and reduces the energy required to launch an explosion by reducing the pressure at the accretion shock.
It appears that nature has conspired to make core-collapse supernovae straddle the line between success and failure, where convection, which for years was thought to be a mere detail, plays a crucial role in the explosion. It is not surprising that the outcome of the core collapse of massive stars depends upon numerical approximations of the physics, for example, the algorithm for neutrino transport (Messer et al. 1998; Mezzacappa et al. 1998a). It is also not surprising that changes in the progenitor, i.e. the subject of this paper, rotation, can also lead to different outcomes. For a rotating core collapse, this basic evolutionary history remains the same, but many aspects of the collapse, bounce, convection, and explosion phases change. To determine these variations, we compare a rotating star (E15B from Heger et al. 1999) with its non-rotating counterpart (unless otherwise stated, all quantitative results and figures compare Models 1 and 6). First, angular momentum slows the collapse and delays the bounce (Mönchmeyer & Müller 1989; Janka & Mönchmeyer 1989). For our rotating model, the bounce occurred 200 ms into the simulation, a full 50 ms later than in the non-rotating counterpart. For very high angular momenta, the central density at bounce can be lower than in a non-rotating star by over an order of magnitude (Mönchmeyer & Müller 1989; Janka & Mönchmeyer 1989), but for the more modest angular momenta from Heger et al. (1999), the central density at bounce drops from $`4\times 10^{14}`$ g cm<sup>-3</sup> for a non-rotating star to $`3\times 10^{14}`$ g cm<sup>-3</sup> for the rotating case. This lower critical bounce density occurs because the centrifugal force begins to provide significant support (it increases from 2% to 10% of the gravitational force over the course of the simulation). The resulting bounce shock is weaker and, along the poles, stalls at smaller radii than in the non-rotating case. Along the equator, where the effective gravity is less, the accretion shock moves much further, and the star quickly loses its spherical symmetry (Fig. 5). 60 ms after the bounce, the accretion shock of the rotating model is at $`\sim 160`$ km and $`\sim 300`$ km for the poles and equator, respectively. At that time, the spherically symmetric accretion shock of the non-rotating star is at $`\sim 300`$ km. The position of the accretion shock, initially set by the position at which the bounce stalls, is important for the success or failure of the convection-driven supernova mechanism because it determines the ram pressure that the convective region must overcome to drive an explosion. Even more important, however, is the entropy profile that is left behind after the bounce fails. Because the bounce is weaker for rotating stars, the angle-averaged entropy decreases with increasing angular momentum (Fig. 6). The entropy in the strong shock limit is $`\propto v_{\mathrm{shock}}^{1.5}`$, and along the equator, where the material is collapsing less quickly, the entropy is much lower (Fig. 7). The steep entropy gradient in supernovae is what seeds the convection, and the shallower gradients in the collapse of rotating stars lead to much weaker convection and, as we shall see in §5, weaker explosions. Rotation further weakens the explosion because the angular momentum profile is stable to convection and constrains convection to the polar regions.
This can be understood physically by estimating the change of force on a blob of material as it is slightly raised from a position $`r`$ with density ($`\rho `$), angular momentum ($`j`$) and pressure ($`P`$) to a position $`r+\mathrm{\Delta }r`$ with its corresponding density, angular momentum, and pressure ($`\rho +\mathrm{\Delta }\rho ,j+\mathrm{\Delta }j,P+\mathrm{\Delta }P`$). If the blob is allowed to reach pressure equilibrium in its new surroundings, the acceleration that it feels without rotation is simply: $`\mathrm{\Delta }a=\mathrm{\Delta }a_{\mathrm{buo}}=g(1-(\rho +\mathrm{\Delta }\rho )/\rho _{\mathrm{blob}})`$ where $`g`$ is the gravitational acceleration and $`\rho _{\mathrm{blob}}`$ is the density of the blob of material after it has expanded/contracted to reach pressure equilibrium. If the change in acceleration is positive, the blob of material will continue to rise and that region is convectively unstable. This gives us the standard Schwarzschild/Ledoux instability criteria (Ledoux 1947): a region is unstable if $`-\partial \rho /\partial r<-(\partial \rho /\partial r)_{\mathrm{adiabat}}`$, or, for constant composition, $`-\partial S/\partial r>0`$. If there is an angular momentum gradient, however, the net force on the blob becomes: $$\mathrm{\Delta }a=g\left(1-\frac{\rho +\mathrm{\Delta }\rho }{\rho _{\mathrm{blob}}}\right)+\frac{j_{\mathrm{blob}}^2-(j+\mathrm{\Delta }j)^2}{(r+\mathrm{\Delta }r)^3}$$ (2) where $`j_{\mathrm{blob}}=j`$. The corresponding Solberg-Høiland instability criterion is (Endal & Sofia 1978): $$\frac{g}{\rho }\left[\left(\frac{d\rho }{dr}\right)_{\mathrm{adiabat}}-\frac{d\rho }{dr}\right]>\frac{1}{r^3}\frac{dj^2}{dr}$$ (3) If the angular momentum increases with increasing radius, as it does for our core collapse models (Fig. 8), then the entropy gradient must overcome the angular momentum gradient to drive convection. In our simulations, the high-entropy bubbles are unable to rise through the large angular momentum gradient and the convection is constrained to the polar region. The overwhelming effect of rotation on supernova models is this constraint on the convection, and it causes weaker, asymmetric explosions.

## 5 Explosions And Compact Remnants

Because the convection is limited in rotating models, the convective region has less energy and is unable to overcome the pressure at the accretion shock as early as the non-rotating models, which explode within 200 ms of bounce. However, at later times (500 ms after bounce) when the density of the infalling material decreases, the rotating models are able to launch an explosion, but the explosion occurs later and the proto-neutron star accretes slightly more mass (Fig. 9, Table 2). The final compact remnant mass, however, depends upon the amount of material that falls back onto the proto-neutron star. By mapping our explosions back into one dimension, we can estimate the amount of fallback and the observed explosion energy (Table 2). The masses in Table 2 are baryonic masses; the gravitational masses, which can be compared to observations, are typically $`\sim 10`$% lower (see the worked estimate below). Mass estimates from observations of close binaries predict gravitational masses for stars without hydrogen envelopes to be $`\sim 1.4`$ M<sub>⊙</sub>. Even including fallback, our non-rotating models produce remnants with masses which are much lower than those observed. Although the remnant masses of rotating models match observations much better, the explosion energies are too low to explain all the observations.
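For orientation, here is a worked version of the baryonic-to-gravitational conversion (our estimate, using the binding-energy approximation of Lattimer & Yahil 1989, which is not necessarily the conversion applied by the authors):
$$M_{\mathrm{bar}}\simeq M_{\mathrm{grav}}+0.084\,M_{\odot }\left(\frac{M_{\mathrm{grav}}}{M_{\odot }}\right)^2,$$
so a baryonic mass of 1.5 M<sub>⊙</sub> corresponds to $`M_{\mathrm{grav}}\simeq 1.35`$ M<sub>⊙</sub>, i.e. roughly the 10% offset quoted above.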
Due to the uncertainties in other aspects of the supernova simulation (e.g., neutrino physics), we cannot claim that rotating stars are required to make close compact binaries or that non-rotating stars are required to explain most supernova observations. But we can be reasonably sure of the trends: rotating stars produce weaker explosions and more massive remnants. As a further consequence, the rotating stars eject notably less <sup>56</sup>Ni (Table 2), down to one half as much for some of our explosions. Not only are the explosions weaker for rotating stars, but since the convection is constrained to the poles, the explosion is stronger along the poles (Figures 10, 11). At the end of the simulation, $`\sim 1.5`$ s after bounce, the shock radius is 1.4 times further out along the pole than along the equator for our rotating star (Table 2). The mean velocity of the shocked material being ejected at the end of the simulation (2/3 of the explosion energy is still thermal) is 13,000 km s<sup>-1</sup> along the pole and only 5,100 km s<sup>-1</sup> along the equator! The maximum velocity along the pole can be as high as 18,000 km s<sup>-1</sup>. These results must be taken with some degree of caution, since the axis of symmetry lies along the polar axis, and some of this asymmetry could be due to numerical artifacts. However, if we compare these results with a non-rotating star, we are encouraged that this asymmetry is real. In the case of a non-rotating star, the shock radius is only 4% further out along the poles, and the mean polar and equatorial velocities are 8,000 and 7,700 km s<sup>-1</sup>, respectively (maximum velocities are as high as 20,000 km s<sup>-1</sup>). The variations from pole to equator in the non-rotating simulations are likely to be caused by the position of the rising bubbles and down-flows during the explosion. Even if these variations were all due to numerical artifacts (they could be due to convection plumes and hence physical), it appears that numerical artifacts can account for only 10% variations in the velocities, and the large asymmetry in the rotating simulations is likely to be real. By constraining the convection to the poles, rotating models produce more than a factor of 2 asymmetry in the explosion. Neutrino luminosities for rotating and non-rotating stars are plotted in Fig. 12. The non-rotating core has a much larger $`\mu `$ and $`\tau `$ ($`\nu _\mathrm{x}`$) neutrino luminosity, especially just after bounce. This is because the non-rotating core compresses more and, at the $`\mu `$ and $`\tau `$ neutrinosphere, the temperature is over a factor of 1.5 higher than that of the rotating core. The large dependence of neutrino emission on temperature (the luminosity from pair annihilation is $`\propto T^9`$) causes this small change in temperature to have large effects on the neutrino luminosity. Also, bear in mind that beyond 0.2 s after bounce, the convective regions in the star look dramatically different. The non-rotating star has already launched an explosion, leaving a hot, bare proto-neutron star which continues to cool by emitting neutrinos. The rotating core is still convecting, and its neutrinosphere is further out. These differences are more easily seen in a plot of neutrino energies (Fig. 13). We infer no net asymmetry in the neutrino luminosity.
But because of our coarse resolution, and the rapid time variation of the neutrino luminosity as bubbles rise above the neutrinosphere, asymmetries of $`20`$–$`30`$% could be hidden in our data, and we can neither confirm nor rule out the asymmetric neutrino emission seen by Janka & Mönchmeyer (1989). Rotating progenitors produce rotating compact remnants. Table 2 lists the spin period of the compact remnant assuming solid-body rotation, both at the end of the simulation before the neutron star has cooled and after it has cooled to a 10 km neutron star. Even for Model 3 (Fig. 8, Table 2), which includes the transport of angular momentum using an $`\alpha `$-disk prescription, at the end of the simulation the angular momentum in the proto-neutron star is sufficient to produce a ms pulsar. This is because, at the end of our SPH simulations, the proto-neutron star is still large, and it is not rotating rapidly. Hence, viscous momentum transport does not have an opportunity to remove the angular momentum from the core. As the neutron star contracts and the spin increases, the neutron star will lose angular momentum through a wind. Hence, the spin period of the cooled neutron star should be seen as a lower limit. In addition, convection in the cooling proto-neutron star (Burrows, Mazurek, & Lattimer 1981; Keil, Janka, & Müller 1996) will increase the efficiency of the viscous transport of angular momentum. Even in the absence of any wind, gravitational radiation limits the spin period to $`\sim 3`$ ms (Lindblom, Mendell, & Owen 1999). Fallback may be able to spin up the neutron star, but will not be able to exceed the limit from gravitational radiation because the accreting material heats the neutron star to temperatures where gravitational radiation will effectively remove the angular momentum (Fryer, Colgate, & Pinto 1999). A common problem to all core-collapse simulations which produce explosions is the large amount of neutron-rich ejecta. The amount of material ejected with extremely low electron fraction ($`Y_\mathrm{e}<0.4`$) decreases from $`1.2\times 10^{-3}`$ M<sub>⊙</sub> for the non-rotating model to $`1.1\times 10^{-5}`$ M<sub>⊙</sub> for the rotating case, but due to the delayed explosion, which gives more time for neutrino emission to deleptonize the ejecta, the amount of mildly neutron-rich material ($`Y_\mathrm{e}<0.49`$) actually increases: 0.36 and 0.60 M<sub>⊙</sub> for the non-rotating and rotating models, respectively. The material which falls back onto the neutron star will be mostly comprised of this neutron-rich material. However, even assuming only neutron-rich material falls back onto the neutron star, $`>0.1`$ M<sub>⊙</sub> of neutron-rich material is ejected, several orders of magnitude greater than the $`10^{-2}`$–$`10^{-3}`$ M<sub>⊙</sub> constraint from nucleosynthesis (Trimble 1991). Clearly, the delayed explosion caused by rotation does not solve the nucleosynthesis problem of the current exploding core-collapse models.

## 6 Implications

Rotation limits the convection in core-collapse supernovae, both by weakening the shock and by constraining the convection to the polar regions. Because of these constraints, the convective region takes longer to overcome the accretion shock, and the explosion occurs at later times and is less energetic. The resultant compact remnants of the collapse of rotating stars are more massive. For higher mass progenitors, the density of the infalling material, and hence the shock pressure, increases (Burrows & Goshy 1993).
Because of their limited convection, rotating cores are less likely to overcome the accretion shock, and the critical mass limits for black hole formation (both from fallback and direct collapse) are likely to be lower than those predicted for non-rotating stars (see Fryer 1999). Hence, the same rotation required to power a collapsar (MacFadyen & Woosley 1999) also causes the supernova to fail and a black hole to form. We expect a range of progenitor mass limits for black hole formation depending on the core rotation. This may help to explain the observed range of black hole systems (Ergma & van den Heuvel 1998). We stress that the uncertainties in the numerical implementation of the physics (e.g., neutrino transport, neutrino cross-sections, equation of state, spherically symmetric gravity) limit our ability to make quantitative estimates, and the numbers should be regarded merely as “best estimates”. The trends (e.g., weaker explosions, lower black hole formation limits) are more secure. If angular momentum were conserved, the rotation of neutron stars formed from our progenitor model would be much faster than that expected in nascent neutron stars. However, winds and gravitational radiation will remove much of the angular momentum. If anything, it will be difficult to explain any rapidly spinning nascent neutron stars. Not only are the explosions of rotating core collapses weaker, but they are highly asymmetric (roughly a factor of 2 in the mean velocities from pole to equator). Polarization measurements of core-collapse supernovae suggest that most supernovae are polarized (Wang et al. 1996; Wang & Wheeler 1998). In modeling Supernova 1993J, Höflich et al. (1995) required explosions with asymmetries of a factor of 2, roughly the same asymmetry we see in our simulations, and hence rotation provides a simple explanation of polarization measurements. These asymmetric explosions also provide a simple explanation for the extended mixing in supernovae. Observations of Supernova 1987A in the X-rays (Dotani et al. 1987, Sunyaev et al. 1987), $`\gamma `$-rays (Matz et al. 1988), and even in the infra-red (Spyromilio, Meikle, & Allen 1990) all push toward extended mixing of iron-peak elements. The matter ejected along the poles, with its much higher velocities, will mix deeper into the star. To quantitatively determine the amount of mixing and polarization produced by the asymmetric explosions of our rotating cores, multi-dimensional simulations must be run out to late times. We do not model magnetic fields in our simulations, but we can place some constraints on their importance in the supernova explosion. Thompson & Duncan (1993) have argued against primordial magnetic fields for neutron stars and, instead, suggest that strong neutron star magnetic fields are produced in dynamos during the supernova explosion itself. They propose a high Rossby number ($`Ro=P_{\mathrm{rot}}/\tau _{\mathrm{con}}`$, where $`P_{\mathrm{rot}}`$ is the rotation period and $`\tau _{\mathrm{con}}`$ is the convective turnover timescale) dynamo, which takes advantage of the fast convection velocities, and hence high Rossby numbers, of core-collapse supernovae. However, this dynamo requires many convective turnovers ($`\tau _{\mathrm{con}}\sim 10^{-3}T_{\mathrm{exp}}`$, where $`T_{\mathrm{exp}}`$ is the explosion time) to significantly magnify the magnetic fields.
Unfortunately, for our rotating supernovae, with typical convection length scales and velocities of $`\sim 400`$ km and $`\sim 4000`$ km s<sup>-1</sup>, respectively (i.e. a convective turnover time of order 0.1 s), the explosion is launched before convection can produce a strong magnetic field. In addition, until the proto-neutron star cools and contracts, the rotation period of the disk-like structure is also long (Table 2), and a rotation-driven dynamo will not occur until after the supernova explosion. As the proto-neutron star cools and shrinks, however, the Thompson & Duncan (1993) dynamo, driven by Ledoux convection (see Burrows, Mazurek, & Lattimer 1981; Keil, Janka, & Müller 1996), is more promising. The strength of this convection is still a matter of debate (Mezzacappa et al. 1998b), but if it is as strong as Keil, Janka, & Müller (1996) predict, magnetic fields in excess of $`10^{15}`$ G are obtainable at late times using the convection-driven dynamo of Thompson & Duncan (1993). These magnetic fields will enhance the angular momentum lost in the proto-neutron star wind. But in all of our simulations, this occurs after the launch of the explosion, and the magnetic field would not affect the supernova explosion. Lastly, and most speculatively, we come to the topic of neutron star kicks. There is a growing set of evidence that neutron stars are born with high space velocities with a mean magnitude of $`\sim 450`$ km s<sup>-1</sup> (see Fryer & Kalogera 1998 for a review). Rotation breaks one symmetry, and it is tempting to speculate that it is then easy to break an additional symmetry, causing a non-zero net momentum in the ejecta. To balance out the momentum, the neutron star must then have gained some momentum. Indeed, the net momentum of the ejecta along the poles in our simulation is $`6\times 10^{39}`$ g cm s<sup>-1</sup>. This corresponds to a kick velocity of only $`\sim 30`$ km s<sup>-1</sup>, over an order of magnitude too small to explain the observed pulsar velocity distribution. And this momentum may be entirely a numerical artifact. Asymmetric neutrino emission can also cause neutron star kicks, but we detect no significant neutrino asymmetry. In any event, calculating quantitative results on neutron star kicks requires 3-dimensional simulations where a fixed inner boundary is not necessary and the proto-neutron star is allowed to move. This research has been supported by NASA (NAG5-2843, MIT SC A292701, and NAG5-8128), the NSF (AST-97-31569), the US DOE ASCI Program (W-7405-ENG-48), and the Alexander von Humboldt-Stiftung (FLF-1065004). It is a pleasure to thank Stan Woosley, Andrew MacFadyen and Norbert Langer for encouragement and advice.
# A possible solution of the G-dwarf problem in the frame-work of closed models with a time-dependent IMF

## 1 Introduction

It is well known that the metallicity distribution of G-dwarfs in the solar neighbourhood shows a deficit of metal-poor stars relative to the predictions of the Simple Model of chemical evolution. This is the so-called G-dwarf problem, originally noted by van den Bergh (1962) and Schmidt (1963). Many solutions have been proposed, some in the frame-work of analytical models, based on the instantaneous recycling approximation (IRA). The problem is still present if we adopt the oxygen abundances in place of metallicity. An important result in the frame-work of IRA models with gas flows was pointed out by Edmunds (1990). He found that the G-dwarf problem cannot be solved by any outflow but it is possible to solve it with particular forms of inflow. Lynden-Bell’s (1975) ‘best accretion model’ and Clayton’s models (Clayton 1988) are models of just this kind. Of course it is also possible to reduce the number of metal-poor stars with respect to that predicted by the Simple Model by assuming metal-dependent stellar yields (Maeder 1992), with more metals produced at lower metallicity and a consequent Prompt Initial Enrichment (P.I.E.; Truran and Cameron 1971). However, it has been shown by several papers (Giovagnoli and Tosi 1995; Carigi 1996) that in this way shallow gradients along the Galactic disk are produced, which do not agree with the most recent observational estimates (see Matteucci and Chiappini 1999 for a review). Actually, this is an unavoidable problem still present in the models discussed in this paper, where we consider time-dependent IMFs. It is still possible to obtain a Prompt Initial Enrichment by assuming an IMF variable with time and, in particular, very flat at early times, in order to favour the formation of massive stars. In this paper we exploit the previous idea to investigate the time behaviour of the IMF. We assume the same hypotheses as the Simple Model (Tinsley 1980) with the exception of adopting a time-dependent IMF. In such a way it is possible to find an equation (Section 2.1) which must be satisfied in order to reproduce the observed distribution of G-dwarfs. We then use this equation to investigate the time behaviour of IMFs with one and two slopes. In Section 3 we use a numerical model without IRA to test the IMFs on other observational constraints.

## 2 Recovering the history of the IMF

In the following we need an analytical expression to approximate the data concerning the metallicity distribution of G-dwarfs in the solar neighbourhood. The data are plotted in Fig. 1, where the differential distribution of oxygen abundances for the solar cylinder from Rocha-Pinto and Maciel (1996) is given (triangles). We have made use of the oxygen abundance rather than the iron abundance, in order to be consistent with chemical evolution models which use the instantaneous recycling approximation. Indeed, IRA is a good approximation for oxygen, which is produced on short timescales (a few million years) by supernovae (SNe) II, as opposed to iron, which is produced on long timescales (up to a Hubble time) by SNe Ia (Matteucci and Greggio 1986). In order to compare theory and data we adopt a simple relation between O and Fe reproducing the observed behaviour of these elements in the solar neighbourhood, the same as given by Pagel (1989): $$\mathrm{log}(\mathrm{\Phi })=\left[\frac{O}{H}\right]=0.5\left[\frac{Fe}{H}\right]$$ (1) The continuous line in Fig.
1 is the best function approximating the data points, according to the theory discussed in the appendix. $$f(\mathrm{\Phi })=\mathrm{ln}\left(\frac{\mathrm{\Delta }N_{\ast }}{\mathrm{\Delta }\mathrm{\Phi }}\right)=w_1e^{-\frac{(\mathrm{\Phi }-\mathrm{\Phi }_1)^2}{2\sigma ^2}}+w_2e^{-\frac{(\mathrm{\Phi }-\mathrm{\Phi }_2)^2}{2\sigma ^2}}$$ (2) with $$\sigma =0.454;\mathrm{\Phi }_1=0.690;\mathrm{\Phi }_2=1.06;w_1=4.12;w_2=2.61$$ (3) We have used the sophisticated method in the appendix to approximate the data because of the strong dependence of our results on the chosen function $`f(\mathrm{\Phi })`$ and in particular on the value at $`\mathrm{\Phi }=0`$, $`f(0)`$. Indeed, the point $`\mathrm{\Phi }=0`$ is outside of the range of the available data and the metallicity distribution extrapolated to $`\mathrm{\Phi }=0`$ depends strongly on the adopted method. Since the aim of this paper is to recover the history of the IMF from the data in Fig. 1, we need a method that does not introduce any a priori assumption on the specific functional form of the relation $`\mathrm{ln}(\mathrm{\Delta }N_{\ast }/\mathrm{\Delta }\mathrm{\Phi })`$. In other words our approach must be independent of any physical hypothesis on the star formation law at the basis of the observed data, and this is just the essence of the method discussed in the appendix.

### 2.1 Basic Equations

The basic assumptions of our models are the same as those of the Simple Model (Tinsley 1980), with the only exception of adopting a time-dependent IMF. Therefore we consider a model with the following properties:

1. The system is one-zone and closed, namely there are no inflows or outflows
2. The initial gas is primordial (no metals)
3. The gas is well mixed at any time
4. The instantaneous recycling approximation holds (namely the lifetimes of stars above $`1M_{\odot }`$ are negligible whereas those of stars below $`1M_{\odot }`$ are larger than the age of the Galaxy).
5. The IMF is time-dependent, i.e. $`\phi =\phi (m,t)`$, with the following normalization at any time $$\int _0^{\infty }\phi (m,t)m\,dm=1$$ (4)

Under the previous assumptions the oxygen abundance $`\mathrm{\Phi }`$ (defined by eq. (1)) in the interstellar medium is governed by $$d\mathrm{\Phi }=\frac{p}{gZ_0}\alpha ds$$ (5) where $`Z_0`$ is the solar oxygen abundance by mass ($`Z_0\simeq 9.54\times 10^{-3}`$), $`g`$ is the gas mass, and $`p\alpha ds=p^{\prime }ds`$ is the mass of oxygen produced and almost immediately returned into the interstellar medium when $`ds`$ of interstellar material goes into stars. A fraction $`\alpha `$ of the mass gone into stars is not returned to the interstellar medium, but remains in long-lived stars or stellar remnants. It has also been assumed that $`\mathrm{\Phi }Z_0\ll 1`$. We have: $$\alpha =\int _0^{M_{\odot }}\phi (m,t)m\,dm+\int _{M_{\odot }}^{\infty }\phi (m,t)m_{rem}\,dm$$ (6) and $$p_{oxy}^{\prime }=\int _{M_{\odot }}^{\infty }\phi (m,t)mp_o\,dm$$ (7) where $`p_o`$ is the fraction (by mass) of newly produced and ejected oxygen by a star of mass $`m`$. We used for $`p_o`$ the expression given by Woosley and Weaver (1995), whereas for $`m_{rem}`$ the expression given by Tinsley (1980). In our model the gas and stellar masses are related by $$dg=-\alpha ds$$ (8) Now we want to relate the previous quantities to the observed metallicity distribution of G-dwarfs ($`f(\mathrm{\Phi })`$ in eq. (2)) in order to find an equation for $`\phi (m,t)`$. We have: $$f(\mathrm{\Phi })=\mathrm{ln}\left(\alpha _c\frac{ds}{d\mathrm{\Phi }}\right)$$ (9) where $$\alpha _c=\int _{0.8M_{\odot }}^{1.1M_{\odot }}\phi (m,t)m\,dm$$ (10) because the stars in our sample are in the range $`0.8`$–$`1.1M_{\odot }`$.
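As a consistency check (a worked special case that we add here; it is not spelled out in the original): for a constant IMF, $`p`$, $`\alpha `$ and $`\alpha _c`$ are constants, so eqs. (5) and (8) give $`d\mathrm{ln}g=-(Z_0/p)d\mathrm{\Phi }`$, hence
$$g=g_0e^{-Z_0\mathrm{\Phi }/p},\qquad f(\mathrm{\Phi })=\mathrm{ln}\left(\alpha _c\frac{gZ_0}{p\alpha }\right)=\mathrm{const}-\frac{Z_0}{p}\mathrm{\Phi },$$
i.e. a straight line with negative slope, which is exactly the Simple Model behaviour recovered after eq. (17) below.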
Equations (5) and (9) give: $$f(\mathrm{\Phi })=\mathrm{ln}\alpha _c-\mathrm{ln}\alpha +\mathrm{ln}g-\mathrm{ln}p+\mathrm{ln}Z_0$$ (11) On the other hand we can derive $`\mathrm{ln}g`$ from (5) and (8) in the following way: $$\mathrm{ln}g=\mathrm{ln}g_0-Z_0\int _0^{\mathrm{\Phi }}\frac{d\mathrm{\Phi }}{p}$$ (12) where $`g_0`$ is the initial gas mass. Hence from (11) we obtain: $$f(\mathrm{\Phi })=\mathrm{ln}\alpha _c-\mathrm{ln}\alpha -Z_0\int _0^{\mathrm{\Phi }}\frac{d\mathrm{\Phi }}{p}+\mathrm{ln}g_0-\mathrm{ln}p+\mathrm{ln}Z_0$$ (13) If we evaluate the previous equation at $`\mathrm{\Phi }=0`$ (that is $`t=0`$) we have: $$f(0)=\mathrm{ln}\alpha _{c0}+\mathrm{ln}g_0-\mathrm{ln}p_0^{\prime }+\mathrm{ln}Z_0$$ (14) where $`p_0^{\prime }`$ and $`\alpha _{c0}`$ are the quantities in (7) and (10), respectively, with $`\phi (m,t)=\phi (m,0)`$. By differentiating eq. (13) with respect to $`\mathrm{\Phi }`$ we obtain finally $$\frac{d}{d\mathrm{\Phi }}\left[\mathrm{ln}\alpha _c-\mathrm{ln}p^{\prime }\right]-\frac{\alpha }{p^{\prime }}Z_0=\frac{df(\mathrm{\Phi })}{d\mathrm{\Phi }}$$ (15) This is an integro-differential equation for the function $`\phi (m,\mathrm{\Phi })`$ with the initial condition given by (14). The previous problem has, of course, infinitely many solutions. Therefore, to proceed further we have to make some assumption on the behaviour of $`\phi (m,\mathrm{\Phi })`$. In the next sections we shall investigate IMFs with one slope (Sect. 2.2) and with two slopes (Sect. 2.3).

### 2.2 Single Power-law IMF

Let us consider a single power-law IMF, namely $$\phi (m,\mathrm{\Phi })=Cm^{-[1+x(\mathrm{\Phi })]}$$ (16) where $`M_L`$ and $`M_U`$ are respectively the smallest and the largest stellar mass (which we assume do not depend on time) and the normalization is performed in the above mass range, i.e. $`C=\frac{1-x(\mathrm{\Phi })}{M_U^{1-x(\mathrm{\Phi })}-M_L^{1-x(\mathrm{\Phi })}}`$. Substituting eq. (16) in (15) we find the following equation for $`x(\mathrm{\Phi })`$: $$\frac{dx}{d\mathrm{\Phi }}=\frac{F_2(x)+F_3(\mathrm{\Phi })}{F_1(x)}$$ (17) which is a nonlinear first-order differential equation with initial condition given by substituting (16) in (14) (a schematic numerical integration of this equation is sketched at the end of the paper). The functions in (17) are: $$F_1(x)=\frac{d}{dx}[\mathrm{ln}\alpha _c-\mathrm{ln}p^{\prime }]$$ (18) $$F_2(x)=\frac{\alpha }{p^{\prime }}Z_0$$ (19) $$F_3(\mathrm{\Phi })=\frac{df}{d\mathrm{\Phi }}=-\left[w_1\frac{\mathrm{\Phi }-\mathrm{\Phi }_1}{\sigma ^2}e^{-\frac{(\mathrm{\Phi }-\mathrm{\Phi }_1)^2}{2\sigma ^2}}+w_2\frac{\mathrm{\Phi }-\mathrm{\Phi }_2}{\sigma ^2}e^{-\frac{(\mathrm{\Phi }-\mathrm{\Phi }_2)^2}{2\sigma ^2}}\right]$$ (20) Since $`F_2(x)>0`$, eq. (17) tells us that a constant IMF (i.e. with a slope $`x=x_0`$ at any time) corresponds to a straight line for $`f(\mathrm{\Phi })`$ with a negative slope (indeed, in this case eq. (17) becomes $`F_3(\mathrm{\Phi })=\frac{df}{d\mathrm{\Phi }}=-F_2(x_0)`$). This is the right behaviour, since our model becomes the Simple Model when $`x=x_0`$. The solutions $`x(\mathrm{\Phi })`$ are plotted in Fig. 2 for two different values of $`M_U`$ and $`M_L`$. We found very flat IMFs for low oxygen abundances (i.e. at initial times). At the solar metallicity ($`\mathrm{\Phi }=1`$) the slope is always steeper than the Salpeter (1955) value ($`x=1.35`$). Decreasing $`M_U`$ decreases, of course, $`x`$, i.e. the IMF becomes flatter. The effect of a change in the $`M_L`$ value is negligible (especially at low $`\mathrm{\Phi }`$).

### 2.3 IMF with two slopes

Let us now consider the following IMF: $$\phi (m,\mathrm{\Phi })=C\{\begin{array}{cc}m^{-[1+x_1]}\hfill & \text{if }m\le M\hfill \\ M^{x_2-x_1}m^{-[1+x_2]}\hfill & \text{if }m>M\hfill \end{array}$$ The above IMF depends on the three parameters $`x_1,x_2`$ and $`M`$. We shall investigate the three following cases:
1. $`M=M(\mathrm{\Phi })`$, $`x_2=1.35`$, for values of $`x_1`$ ranging from $`-1`$ to $`0.2`$ (this IMF is similar to the one proposed by Larson (1998));
2. $`x_1=x_1(\mathrm{\Phi })`$, $`x_2=1.35`$, for values of $`M`$ ranging from $`2`$ to $`10M_{\odot }`$;
3. $`x_2=x_2(\mathrm{\Phi })`$, for values of $`x_1`$ ranging from $`-1`$ to $`1`$.

In all the previous cases equation (15) gives us a nonlinear first-order differential equation for the $`\mathrm{\Phi }`$-dependent parameter, with the initial conditions given by equation (14). We always consider $`M_U=100M_{\odot }`$ and $`M_L=0.1M_{\odot }`$, since the dependence on these parameters is negligible. The function $`M(\mathrm{\Phi })`$ in the first case considered is shown in Fig. 3. At the initial time ($`\mathrm{\Phi }=0`$) the mass $`M`$ increases by increasing the slope at the low-mass end. In particular, when $`x_1=0.2`$, $`M(0)\simeq 100M_{\odot }=M_U`$ and therefore there is no solution for $`x_1`$ greater than $`0.2`$. Fig. 4 shows the slope at the low-mass end ($`x_1(\mathrm{\Phi })`$) for values of $`M`$ in the range $`2`$–$`10M_{\odot }`$. The initial values of $`x_1`$ are always negative and moreover they increase by increasing $`M`$, as expected. Finally, Fig. 5 shows the results for the third case considered. Here the mass $`M`$ is chosen in order to reproduce a final slope $`x_2=1.35`$.

## 3 Application of the derived IMFs to a numerical model

Both of the proposed IMFs can reproduce the observed G-dwarf distribution, and therefore we want to test the validity of the inferred IMFs by using numerical models of galactic chemical evolution. This can be done by studying the effect of the above IMFs on the chemical evolution of the solar neighbourhood. The model used is that of Matteucci and Francois (1989), where a detailed description can be found. The main difference with that model is that we assume here a very rapid formation of both the halo and the disk, thus simulating a closed model. This is required by the fact that we derived the IMF under the assumption of a closed model. In the original model of Matteucci and Francois (1989) the timescale for the formation of the solar neighbourhood was about $`3`$–$`4`$ Gyr and it was chosen in order to best fit the G-dwarf metallicity distribution of Pagel and Patchett (1975), under the assumption of a constant IMF. Recently, Chiappini et al. (1997) presented a more realistic model for the Galaxy where the evolution of the halo-thick disk and the thin disk are decoupled, in the sense that the halo-thick disk is formed on a short time scale of the order of $`1`$–$`2`$ Gyr, whereas the thin disk is assumed to form very slowly. In particular, in order to best fit the G-dwarf distribution of Rocha-Pinto and Maciel (1996), Chiappini et al. (1997) found that, with an IMF constant in time, a time scale of about $`8`$ Gyr is required for the formation of the solar neighbourhood. It is interesting to check if the derived IMFs are able to reproduce other observational constraints besides the G-dwarf distribution, for example the $`[O/Fe]`$ vs $`[Fe/H]`$ trend, which is normally very well reproduced by models with constant IMF and infall and taking into account detailed nucleosynthesis from type II and Ia SNe. In this framework, in fact, the plateau shown by the data for $`[Fe/H]<-1.0`$ is interpreted as due to the pollution by type II SNe, which produce an almost constant $`[O/Fe]`$ ratio. The subsequent decrease of the $`[O/Fe]`$ ratio for $`[Fe/H]\gtrsim -1.0`$ is then due to the occurrence of type Ia SNe exploding with a temporal delay relative to SNe II.
In the Matteucci and Francois (1989) and Chiappini et al. (1997) models the SNe Ia are supposed to originate from white dwarfs in binary systems following the formalism of Matteucci and Greggio (1986). In Fig. 6 we show the G-dwarf metallicity distribution as predicted by the numerical model and, as expected, the agreement is quite good. However, in Fig. 7 we show the predicted $`[O/Fe]`$ versus $`[Fe/H]`$ relation, and the agreement with the observations is poor, especially in the domain of metal-poor stars, where the new IMF predicts an increased number of massive stars. On the other hand, the slope for disk stars is well reproduced. Another problem with this IMF is the high O and Fe content reached by the model at the Sun’s birth and at the present time. These high abundances are not evident in Fig. 7, since the abundances are normalized to the predicted solar abundances. Finally, another problem is that Fig. 7 shows that \[Fe/H\] starts decreasing more than oxygen after having reached the solar abundance. This is due to the large dilution from dying low-mass stars and to the fact that the Fe abundance decreases more rapidly than that of oxygen, because O is continuously produced, although at a low level, by SNe II on very short timescales. Iron, on the other hand, comes mostly from type Ia SNe born at early times, when the IMF was top-heavy, favoring massive stars relative to the type Ia SN progenitors. The predicted solar abundances, namely the abundances 4.5 Gyr ago, are $`X_O=2.6\times 10^{-2}`$ and $`X_{Fe}=6.3\times 10^{-3}`$, to be compared with the same abundances from Anders and Grevesse (1989): $`X_O=9.59\times 10^{-3}`$ and $`X_{Fe}=1.17\times 10^{-3}`$ (mass fractions). Concerning the IMF with two slopes (discussed in Sect. 2.3), the G-dwarf metallicity distribution as predicted by the numerical model is again in good agreement with the observations, in all the cases considered. On the other hand, the predicted $`[O/Fe]`$ versus $`[Fe/H]`$ is still in poor agreement with the observations, with the exception of the second case ($`x_1=x_1(\mathrm{\Phi })`$), for which we give the G-dwarf metallicity distribution and the $`[O/Fe]`$ versus $`[Fe/H]`$ in Figs 8 and 9, respectively, for $`M=5M_{\odot }`$ (we found results very similar to the ones shown in Figs 8 and 9 for values of $`M`$ in the range $`2`$–$`10M_{\odot }`$). This time-dependent IMF fits the $`[O/Fe]`$ vs $`[Fe/H]`$ relation much better than the other time-dependent IMFs, since the variation in the number of type II SNe at early times is smaller than in the other cases. However, it is clear that the best agreement with the data (especially for $`[Fe/H]<-1.0`$) is achieved with a constant IMF. We also investigated the case of an IMF with only the lower mass limit dependent on $`\mathrm{\Phi }`$ ($`M_L=M_L(\mathrm{\Phi })`$). However, in this case equations (14) and (15) give solutions which are not physical. In fact, in order to lower the number of stars in the mass range $`0.8`$–$`1.1M_{\odot }`$ at early times, to solve the G-dwarf problem, our equations fix the value of $`M_L(\mathrm{\Phi })`$ (for $`\mathrm{\Phi }\to 0`$) very close to $`1.1M_{\odot }`$. This result does not have any physical meaning, because it is strongly dependent on the particular mass range of interest. This is the reason why the inferred solution, when used in a numerical model of chemical evolution, is not able to reproduce the observed G-dwarf distribution.
The predicted solar abundances related to the IMF adopted in Figs 8 and 9 are still larger than the observed ones ($`X_O=3.1\times 10^{-2}`$ and $`X_{Fe}=8.6\times 10^{-3}`$). However, these absolute values are strongly model dependent and, in particular, it is possible to lower these abundances by increasing the infall.

## 4 Conclusions

We proposed a method to solve the G-dwarf problem in a closed-box model with a time-dependent IMF, based on the IRA. The method gives us an equation (eq. 15) which has infinitely many solutions. Therefore, in order to use this equation, we had to make some assumptions on the behaviour of the IMF. In particular, we considered both a single power-law IMF and an IMF with two slopes. We tested the validity of the inferred IMFs by using numerical models of galactic chemical evolution, namely by relaxing the IRA, and we find the following results:

1. all the IMFs investigated can reproduce the observed G-dwarf distribution;
2. a single power-law IMF fails in reproducing the behaviour of abundances (in particular the $`[O/Fe]`$ vs $`[Fe/H]`$ relation);
3. in order to reproduce the behaviour of abundances besides the G-dwarf problem, an IMF with a time dependence at the low-mass end is required. However, the fit produced by such an IMF to the $`[O/Fe]`$ vs $`[Fe/H]`$ relation is not as good as that produced by a constant IMF. Moreover, this IMF, like most of the variable IMFs proposed so far, fails in reproducing the oxygen gradient along the Galactic disk, unless other assumptions such as an increasing star formation efficiency with galactocentric distance and/or radial flows are introduced. We computed the expected gradients of oxygen along the disk by adopting this IMF and the model of Chiappini et al. (1997). We found that the O gradient disappears. A variable efficiency of star formation could recover the gradient, but at the expense of the gas distribution along the disk. We did not try to include radial flows, but the conclusion that a variable IMF of this kind worsens the agreement with the disk properties relative to a constant IMF seems unavoidable.
4. an IMF similar to the one proposed recently by Larson (1998) fails in reproducing the $`[O/Fe]`$ vs $`[Fe/H]`$ relation because of the strong time dependence of the number of type II SNe at early times.
5. In summary, from the analysis of models with variable IMF done in this paper, we are tempted to conclude that infall models with a constant IMF are the best solution of the G-dwarf problem, since variable IMFs could in principle solve it but they are not able to reproduce the properties of the Galactic disk.

## 5 Appendix

We give a method to approximate the observational data in Fig. 1, based on the regularization theory of Tikhonov (1963). The method is general and it applies whenever one wants to approximate some data by an analytical function and nothing is known about the physics at the basis of these data. According to Tikhonov’s (1963) regularization theory, the function $`f(\mathrm{\Phi })`$ which approximates the data in Fig. 1 is determined by minimizing a cost functional $`E[f]`$, so called because it maps functions (in some suitable function space) to the real line. $`E[f]`$ is the sum of two terms $$E[f]=E_s[f]+E_c[f]$$ (21) where $`E_s[f]`$ is the standard error term and $`E_c[f]`$ the regularizing term.
The first one measures the standard error (distance) between the desired response $`f_i`$ and the actual response $`f(\mathrm{\Phi }_i)`$, $$E_s[f]=\frac{1}{2}\sum _{i=1}^{N}\left(f_i-f(\mathrm{\Phi }_i)\right)^2$$ (22) where $`N`$ is the total number of available data points (in our case $`N=12`$). The second one depends on the geometric properties of the approximating function $`f(\mathrm{\Phi })`$, $$E_c[f]=\frac{1}{2}\|Pf\|^2$$ (23) where $`P`$ is a differential operator. As suggested by Poggio and Girosi (1990), the best choice for $`P`$ is a differential operator invariant under both rotations and translations. This is defined by $$\|Pf\|^2=\sum _{k=0}^{\infty }a_k\|D^kf\|^2$$ (24) where $`a_k=\frac{\sigma ^{2k}}{k!2^k}`$, with $`\sigma `$ a constant associated with the data points $`\mathrm{\Phi }_i`$, and $$\|D^kf\|^2=\int _{-\infty }^{\infty }\left(\frac{\partial ^kf}{\partial \mathrm{\Phi }^k}\right)^2d\mathrm{\Phi }$$ (25) The function which solves the variational problem given by eq. (21), when the regularizing term is specified by eq. (24), is $$f(\mathrm{\Phi })=\sum _{j=1}^{M}w_je^{-(\mathrm{\Phi }-\mathrm{\Phi }_j)^2/(2\sigma ^2)}$$ (26) which consists of a linear superposition of Gaussian basis functions with centers $`\mathrm{\Phi }_j`$. The above theory establishes only the form of the function $`f(\mathrm{\Phi })`$; it does not completely solve our problem. We do not know, for example, the number $`M`$ of Gaussians in (26). As suggested by Guyon et al. (1992) and Vapnik (1992), the way to fix the parameters in expression (26) is to minimize the sum of a pair of competing terms. The former is again the standard error given by eq. (22), which decreases monotonically as the number of parameters is increased. The latter measures the complexity of the model and increases with the number of parameters. Therefore there is an optimal compromise which minimizes the sum. It is possible to demonstrate (Amari et al. 1997) that this compromise is obtained by fitting the model to $`N-k`$ of the data points and measuring the average error on the $`k`$ patterns left out. This method is called leave-$`k`$-out cross-validation. When $`N`$ is small, the most reasonable choice is $`k=1`$ (leave-one-out). Since in our case the value of $`N`$ is small, we apply the leave-one-out cross-validation method in order to fix the parameters in (26). The average errors made on the left-out data point are reported in Table 1 for several values of $`M`$. Consequently, the best model occurs for $`M=2`$, with the values of the parameters appearing in eq. (26) given by eq. (3).
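The leave-one-out procedure itself is straightforward to implement. The following is a minimal sketch in C++ (ours, not part of the original analysis; it assumes the $`M`$ centers $`\mathrm{\Phi }_j`$ and the width $`\sigma `$ are held fixed, and that the weights $`w_j`$ of eq. (26) are refitted by ordinary least squares on each reduced data set):
> ```
> #include <cmath>
> #include <vector>
> 
> double gauss(double phi, double c, double sigma) {
>   double d = phi - c;
>   return std::exp(-d*d/(2.0*sigma*sigma));
> }
> 
> // Least-squares fit of the weights w_j for fixed centers: solve the
> // normal equations (A^T A) w = A^T f by Gaussian elimination
> // (no pivoting -- adequate for a small, well-conditioned system).
> std::vector<double> fitWeights(const std::vector<double>& phi,
>                                const std::vector<double>& f,
>                                const std::vector<double>& centers,
>                                double sigma) {
>   size_t n = phi.size(), m = centers.size();
>   std::vector<std::vector<double> > A(m, std::vector<double>(m+1, 0.0));
>   for (size_t a = 0; a < m; ++a) {
>     for (size_t b = 0; b < m; ++b)
>       for (size_t i = 0; i < n; ++i)
>         A[a][b] += gauss(phi[i], centers[a], sigma)
>                  * gauss(phi[i], centers[b], sigma);
>     for (size_t i = 0; i < n; ++i)
>       A[a][m] += gauss(phi[i], centers[a], sigma) * f[i];
>   }
>   for (size_t k = 0; k < m; ++k)          // forward elimination
>     for (size_t r = k+1; r < m; ++r) {
>       double q = A[r][k] / A[k][k];
>       for (size_t c = k; c <= m; ++c) A[r][c] -= q * A[k][c];
>     }
>   std::vector<double> w(m);
>   for (size_t k = m; k-- > 0;) {          // back substitution
>     double s = A[k][m];
>     for (size_t c = k+1; c < m; ++c) s -= A[k][c] * w[c];
>     w[k] = s / A[k][k];
>   }
>   return w;
> }
> 
> // Leave-one-out error: refit on N-1 points and average the squared
> // error made on the point left out.
> double leaveOneOutError(const std::vector<double>& phi,
>                         const std::vector<double>& f,
>                         const std::vector<double>& centers,
>                         double sigma) {
>   double err = 0.0;
>   for (size_t k = 0; k < phi.size(); ++k) {
>     std::vector<double> p, g;
>     for (size_t i = 0; i < phi.size(); ++i)
>       if (i != k) { p.push_back(phi[i]); g.push_back(f[i]); }
>     std::vector<double> w = fitWeights(p, g, centers, sigma);
>     double pred = 0.0;
>     for (size_t j = 0; j < centers.size(); ++j)
>       pred += w[j] * gauss(phi[k], centers[j], sigma);
>     err += (f[k] - pred) * (f[k] - pred);
>   }
>   return err / phi.size();
> }
> 
> ```
Scanning leaveOneOutError over $`M=1,2,3,\ldots `$ and selecting the minimum reproduces the model-selection criterion used for Table 1.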
# A simple C++ library for manipulating scientific data sets as structured data ## 1 Introduction ### 1.1 Classical input/output mechanisms Early programming languages had surprisingly advanced features for reading and writing data from external memory. For example, COBOL already had some sort of data definition language, several file formats and data query statements. It was based on the notion that the physical representation of the data on external storage was bit-by-bit identical to the representation in internal memory. As very early computers had primitive operations to copy data from external storage into internal memory, this was efficient, but it also provided a framework for external data representation: records and fields were usually of fixed length, no separator characters were necessary, and numbers could be represented either as ASCII or EBCDIC strings (possibly with an implied decimal point), as binary-coded decimals (storing two digits in a byte) or as binary numbers. If you could read COBOL's data definition language, interpreting file contents was merely a matter of counting bytes. In contrast, the data formats used today in scientific computing are much more flexible, but without human-readable documentation or careful reading of the source code, it is often impossible to decipher the contents of a data file. Designing data exchange between different scientific applications can become a major headache, especially in small- and medium-sized applications where big input/output libraries like NCSA's HDF or CERN's RD45 may be inappropriate. This problem is partly rooted in the design of the C language: instead of including input/output statements in the language definition itself, the designers of C decided to implement the whole input/output functionality in the standard C library, using only standard function calls. This implies that C cannot provide a standardized way to store structures - and external data most frequently is organized in structures, i.e. records of data containing dissimilar fields. It is possible to output an arbitrary structure bit-by-bit, but implementation dependencies such as alignment rules easily jeopardize compatibility even between different compiler revisions. Thus input/output in C programs is usually done manually, by writing explicit code that serializes and reassembles data from a byte stream. Worse than the additional work and sources of errors associated with this is the lack of a formal definition of the external data: the structure of the data files must either be given in the human-readable documentation, or—worse—be inferred from the actual source code. As a consequence, there are no general utilities to manipulate binary files, e.g. to produce formatted listings or extract individual records and fields. The situation is only slightly remedied in the more popular scientific computing language, FORTRAN. While there are input/output statements in the language, there is no real notion of structured data in FORTRAN 77, so the actual data representation is again encoded in the sequence of READ and WRITE statements in the code. ### 1.2 Object persistence In the object-oriented programming paradigm, objects encapsulate data and the operations that act on the data. A persistent object is an object with a life-time that extends beyond the life-time of the program that created it. This means that the object's data must be stored in external storage, and some operation must be specified to save and recreate the object from external storage.
While this can be done by implementing appropriate read and write methods in the object's class, it is more desirable to have an automated and standardized way to do so. A similar problem arises in the design of distributed programming systems. Architectures like CORBA (the Common Object Request Broker Architecture) specify external representations for objects in terms of their methods, but not of their data. They do so by means of an Interface Definition Language (IDL) that abstractly declares (but does not define) the methods associated with an object class. The IDL is mapped to appropriate method declarations in specific programming languages, where the implementations of the methods can be provided. An interoperability protocol defines how methods can be invoked on data residing on different computers in a network. A logical extension would be to augment the Interface Definition Language with a Data Definition Language (DDL). The relation between the DDL and the actual representation of the data in memory can then be included in the language mapping, and the relation between the DDL and the actual representation in external storage in the interoperability definition. However, as CORBA is still quite complex, viable implementations are only now entering the scientific computing community, and a standard for a DDL is still missing, one might look around for a poor man's solution for object persistence. Such a solution should at least cover the following requirements: * The structure of external data should be formally specified, similar to the interface definition language of distributed systems. * The actual representation of the data on external storage should be sufficiently defined by the formal definition of the data structure that the extraction of the data can be performed automatically. This corresponds to the interoperability specification in distributed programming systems. * A language mapping or application programmers' interface that makes external data easily accessible from application programs. In the following, we present an approach to fulfill these requirements using a set of C++ classes. * The structure of data is formally defined by a type tree using elementary and composite data types including structures, arrays, and unions (in C parlance). The type tree is built from C++ class objects or specified in textual form, making use of appropriate methods for reading and printing type trees. * To allow for different storage representations, the interface between the abstract data layer and concrete representations is specified as an abstract C++ class that can be filled in by different data representations. A simple format for arbitrary structured binary data is specified. * From an application program, data objects are accessed using a C++ class that represents generic structured data and the operations performed on such data, such as reading, writing, and extracting members of composite data. ## 2 Implementation ### 2.1 Overview The main class provided by the library is SomeData, a universal access layer to structured data. To the programmer, each object of this class represents a structured data item (which may be elementary or composite). To each object of this class a type tree is associated that is described by an object of the class SomeType. Before an object can be used, its data type must be specified. This can be done in three ways: 1. The data type can be specified in the external representation of the data and then be queried by the application program. 2.
The data type can be explicitly built using the constructors of the subclasses of SomeType:
> ```
> StructType *t1 = new StructType;
> t1->addField("comment",new StringType);
> StructType *tAtom = new StructType;
> tAtom->addField("name",new StringType);
> tAtom->addField("z",new NumType(NumType::i2));
> tAtom->addField("partial_charge", new NumType(NumType::f4));
> ArrayType *tAtoms = new ArrayType(tAtom);
> t1->addField("atoms",tAtoms);
> StructType *tBond = new StructType;
> tBond->addField("from_atom", new NumType(NumType::i2));
> tBond->addField("to_atom", new NumType(NumType::i2));
> tBond->addField("type", new NumType(NumType::i2));
> ArrayType *tBonds = new ArrayType(tBond);
> t1->addField("bonds",tBonds);
> 
> ```
3. Or the data type can be specified in its textual representation as a string, e.g.
> ```
> const char *typetext =
>   "struct { "
>   "  comment : string; "
>   "  atoms : array of struct { "
>   "    name : string; "
>   "    z : integer*2; "
>   "    partial_charge : real*4; "
>   "  }; "
>   "  bonds : array of struct { "
>   "    from_atom : integer*2; "
>   "    to_atom : integer*2; "
>   "    type : integer*2; "
>   "  }; "
>   "}; ";
> SomeType *t1 = SomeType::parse(typetext);
> 
> ```
and then parsed by a static member function of SomeType. After the data type has been specified, a data object must be created. This is done by a data-set class that manufactures an instance of SomeData. Data-set classes represent mechanisms where data is stored and retrieved, e.g. file formats or databases. The most basic data-set class is DirectData, which stores the data in a linked tree in heap memory. While this is of little use by itself, it can be used by file formats that read their data files as a whole and do not wish to provide their own individual access operators. One such data-set class is DataFile, which acts as an interface to text or binary structured files. It provides a method data() that returns the data object associated with the file:
> ```
> DataFile DF1;
> DF1.openOut("outfile",t1);
> SomeData D1 = DF1.data();
> 
> DataFile DF2;
> DF2.openIn("outfile");
> SomeData D2 = DF2.data();
> SomeType *t2 = D2.typ();
> 
> ```
The first group of lines opens a data file for writing, using the previously built type t1, and acquires the object D1 to access the data. The second group opens a data file for input and obtains a pointer to the data type in t2. It may use this pointer to ascertain that the data has a certain structure. The main task of SomeData is to provide data access methods. An example code fragment manipulating an object D would be:
> ```
> D["comment"] = "blubb blubb";
> SomeData Datoms = D["atoms"];
> for (int i=0; i<10; i++) {
>   SomeData Datom = Datoms[i];
>   Datom["z"] = 12;
>   Datom["partial_charge"] = 0.0;
>   Datom["name"] = "C";
> }
> SomeData Dbonds = D["bonds"];
> for (int i=0; i<10; i++) {
>   SomeData Dbond = Dbonds[i];
>   Dbond["from_atom"] = i;
>   Dbond["to_atom"] = (i+1)%10;
>   Dbond["type"] = 1;
> }
> cout << Datoms[0]["z"].getInt() << " "
>      << Datoms[0]["name"].getString() << endl;
> 
> ```
If D is a structured data object, its members can be accessed by an overloaded indexing operator using either a symbolic name (given as a string, for structure data) or an integer index (for array data). Elementary data types are operated upon either by the assignment operator, which is properly overloaded for the different data types, or using access operators like getInt(). ### 2.2 The data type classes #### 2.2.1 Elementary data types Three elementary data types are supported: 1.
Signed or unsigned integer numbers with 1, 2, 4, or 8 bytes, e.g.
> ```
> integer*4
> 
> ```
designates a 4-byte signed integer. 2. Floating point numbers in IEEE format with 4, 8, or 16 bytes, e.g.
> ```
> real*8
> 
> ```
is an 8-byte IEEE 754 floating point number. 3. Character strings (that are subject to character-set conversion) and (opaque) byte strings, e.g.
> ```
> string*10
> 
> ```
for a 10-byte character string or
> ```
> opaque*255
> 
> ```
for a 255-byte opaque byte string. The numerical data types can appear as matrices with arbitrary rank (i.e. number of dimensions). Along with the rank, the number of elements in each dimension must be specified, e.g.
> ```
> real*4[100,100]
> 
> ```
for a 100$`\times `$100 floating-point matrix, or
> ```
> integer*4[.,2,.]
> 
> ```
for a three-dimensional integer matrix whose first and third dimensions are specified in the data stream. #### 2.2.2 Composite data types The composite data types are arrays and structures. Arrays are repetitions of data elements of the same type, accessed by integer indices, while structures are sequences of data elements with different types, accessed either by names or by integer indices. A variation of the structure is the union, in which exactly one of several elements of the structure is actually present. An example of an array is
> ```
> array[100] of integer*4
> 
> ```
for an array of 100 4-byte integers. The number of elements may also be specified in the data stream, e.g.
> ```
> array[.] of array[3] of real*4
> 
> ```
is an array in which each element consists of three real numbers; the number of elements is specified in the data stream. Structures are specified in the following syntax:
> ```
> struct {
>   atoms : array of
>     struct {
>       z : integer;
>       partial_charge : real;
>     };
>   bonds : array of
>     struct {
>       from_atom : integer;
>       to_atom : integer;
>       type : integer;
>     };
>   positions : array of integer[3];
>   optional velocities : array of integer[3];
> }
> 
> ```
This structure has four fields named atoms, bonds, positions, and velocities. The optional specifier indicates that this field may or may not be present in the data stream. #### 2.2.3 Implementation Any data type is represented by a subclass of class SomeType. It provides a method typeP(t) that returns true if the data type is t, where t is one of the constants nilType, numType, stringType, arrayType, structType, or unionType defined in SomeType, similar to a dynamic cast. It also defines a virtual method print() to print the data type and a static method parse() to parse a textual type specification. The class NumType defines numerical data. It stores the base type of the data (one of the enumeration constants i1, u1, i2, u2, i4, u4, i8, u8, f4, f8, f16) and an array of the dimensions of the matrix. A special value dimFree is used for variable-sized dimensions. All this information can be accessed using accessor methods or specified in the constructor. Similarly, the class StringType defines byte-string data. It stores the number of bytes (or dimFree for variable-sized data) and a flag to indicate whether the data is character or opaque. In the latter case, it will not be subject to any character-set conversion. Structured data types are represented by the class StructType. It stores an array of fields, each of which is defined by its name, its type and a flag indicating whether it is optional. Accessor methods allow one to access fields by index or by name. Structures are constructed empty, and a method addField(name,typ) is used to add fields.
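As an illustration of this design, the declarations of the type classes might look roughly as follows (our abridged sketch, not the library's actual header; parsing, unions, matrix dimensions, optional-field flags and memory management are omitted):
> ```
> #include <iostream>
> #include <string>
> #include <vector>
> 
> class SomeType {
> public:
>   enum Kind { nilType, numType, stringType, arrayType,
>               structType, unionType };
>   virtual ~SomeType() {}
>   virtual bool typeP(Kind t) const = 0;  // type test, like a dynamic cast
>   virtual void print(std::ostream& os) const = 0;
> };
> 
> class NumType : public SomeType {
> public:
>   enum Base { i1, u1, i2, u2, i4, u4, i8, u8, f4, f8, f16 };
>   NumType(Base b) : base(b) {}
>   bool typeP(Kind t) const { return t == numType; }
>   void print(std::ostream& os) const { os << "number"; }
> private:
>   Base base;
> };
> 
> class StructType : public SomeType {
> public:
>   // structures are constructed empty and filled with addField()
>   void addField(const std::string& name, SomeType* typ) {
>     names.push_back(name); types.push_back(typ);
>   }
>   int nFields() const { return (int)names.size(); }
>   bool typeP(Kind t) const { return t == structType; }
>   void print(std::ostream& os) const {
>     os << "struct { ";
>     for (size_t i = 0; i < names.size(); ++i) {
>       os << names[i] << " : "; types[i]->print(os); os << "; ";
>     }
>     os << "}";
>   }
> private:
>   std::vector<std::string> names;
>   std::vector<SomeType*> types;
> };
> 
> ```
Printing a type tree built from such classes yields the same textual form that parse() accepts, which is what keeps the constructed and the parsed representations interchangeable.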
Unions are also represented by StructType, using a special flag that indicates that the structure is to be treated as a union. Arrays use the ArrayType class, which stores the number of elements and the type of the elements. Again, the size can be given as the constant dimFree to indicate variable-sized arrays. Parsing of textual type specifications is done by the static member function parse() in SomeType. The syntax is chosen such that the first word of the type specification indicates which subclass the type belongs to, so that the parser can then invoke the static member function parse() in the corresponding subclass. ### 2.3 The data object class The class SomeData provides the basic interface for an application to manipulate data. We chose not to duplicate the hierarchy of type classes in corresponding data classes, but to include accessor methods for all types of data in a single class. Most accessor routines for data return objects of the class SomeData, so this approach saves the programmer from tedious recasts. When a method is called that is improper for the object's data type, it can either return a null object or throw an exception. More importantly, we wished to include garbage collection by reference counting in the implementation. This is only possible if the application program does not use pointers to access objects of the class SomeData. Instead, the functionality of the access layer is split in two parts: its actual functionality is provided by the class SomeDataImpl, which can be subclassed by the different data representations, while objects of class SomeData contain a reference-counted pointer to an object of SomeDataImpl. Application programs thus can manipulate objects of class SomeData like pointers (or handles). Most methods in SomeData either pass through directly to the corresponding methods in SomeDataImpl or implement some convenience function that can be expressed in terms of these methods. This is especially advantageous as many operations on SomeData are expressed by overloaded functions, e.g. assign() to assign any type of elementary data. Classes that derive from SomeData and implement just one of these operations, e.g. integer assignment, would have to implement all overloaded versions of assign(). In SomeDataImpl, these operations are separated into functions like assignInt() or assignString() that provide default implementations (namely throwing the appropriate exception) and can be overloaded individually. SomeData provides a method typeP(t) to test if the data object is of type t, and a method typ() that returns a pointer to its type (represented by an object of class SomeType). It also defines the method copy() to copy the contents of one data object into another, assuming that the types are identical. The assignment operator and copy constructor are defined in that way. For array and structure types, SomeData contains a convenience access operator by overloading the indexing operator operator\[\]. If used with a string argument, it accesses a field of a structure, while with an integer argument it can be used on both arrays and structures to access by index. Its return value is another object of class SomeData. This makes accessing structures and arrays nearly as simple as accessing the corresponding native data types in C++; however, access to fields by name comes with some performance penalty, as the character string must first be matched against the names of all fields.
This is a general problem with languages that do not provide a symbol data type (as in LISP): the access would be faster if the compiler could convert the string into a more easily manipulated quantity, like a 32-bit number, that could then be used to perform a hash or binary search in the field table. For structures, the indexing operator maps to a method getField(). A method nFields() returns the number of fields in the structure, and getFieldName() the name of each field. For optional fields, unsetField() removes the field, while fieldPresent() checks whether the field is present in the actual data. For unions, getActiveField() and setActiveField() are used to define which field is used. For arrays, a method nElements() returns the number of elements in the array, and getElem() is used to access elements. resize() can be used to resize the array to a specified number of elements. Elementary data are read by the methods getInt(), getDouble(), and getString(). Each returns the corresponding C++ data type, or throws an exception if it does not match the actual data type. A method assign() with suitable argument types is used to assign data values. ### 2.4 Matrix data If the elementary data is a matrix, it is represented in C++ by objects of the utility class template Matrix\<T\>, where T is the elementary C++ data type. Associated with each matrix is a shape of class MatrixShape that stores the rank, the minimum and maximum indices in each dimension, and information about the storage layout. The methods getShape() and setShape() are used to manipulate the shape of the data. To access the data itself, the methods getData() and assignData() read and write the actual representation in memory (as defined by the shape). ### 2.5 Implementation classes Actual implementations of data objects are provided by subclassing the class SomeDataImpl. Its member functions are similar to the member functions of SomeData, without the convenience functions. Its only member field is thetype, which is a reference-counted pointer to its data type object. #### 2.5.1 Direct representation The class DirectData provides an in-memory representation for structured data in a linked tree. Its subclasses DirectStructData, DirectArrayData, DirectNumData, DirectMatData, and DirectStringData are modelled after the subclasses of SomeType and provide storage for the respective data types. These classes are also used to define a simple text file representation of the data. Each implements a static member function read() to read tokens from a lexical parser and convert them to an appropriate object. The data format is simple: numbers are represented naturally, strings are quoted, and arrays and structures are surrounded by brackets or braces with their elements separated by (optional) commas. Matrix data are also represented by bracketed lists of numbers, with free dimensions specified in front of the data. Similarly, DirectData objects know how to print themselves. These methods already allow a complete implementation of the structured data format. Their main shortcoming is that the data file must be read as a whole and converted to the DirectData format before it can be accessed by the application. #### 2.5.2 Binary-file representation Similarly to the textual representation, a sequential binary stream representation is defined (see sec. 3 for more details). Data fields follow each other without intervening structure information, except for length information of variable-sized arrays and matrices and tag bytes for optional fields and unions.
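Independently of the storage back-end, the accessors described above combine naturally in application code. The following short usage sketch (ours, with the surrounding library declarations assumed, as in the earlier fragments) reads back the molecule file written in Sect. 2.1:
> ```
> DataFile DF;
> DF.openIn("outfile");
> SomeData D = DF.data();
> 
> cout << D["comment"].getString() << endl;
> SomeData Datoms = D["atoms"];
> for (int i=0; i<Datoms.nElements(); i++) {
>   SomeData Datom = Datoms.getElem(i);   // equivalent to Datoms[i]
>   cout << Datom["name"].getString() << " "
>        << Datom["z"].getInt() << endl;
> }
> 
> ```
If a field name does not match the type tree, the indexing operator returns a null object or throws an exception, as described in Sect. 2.3.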
The class StreamData provides read access to these sequential binary files. Each data item is represented simply by its position in the file. Elementary data are accessed by reading the bytes at the specified position into memory. To get a member of a structured data object, the position of the member is calculated. As the members are in general of variable size, this usually involves reading all the members before the requested item (at least as far as is needed to determine their sizes). This could be avoided by adding size fields to all composite data types, but it is not implemented in the basic data format, in order to keep it as simple as possible. The big advantage of StreamData is that only the requested parts of the data files are held in memory. The binary format for this implementation was designed to be as simple as possible and, in particular, easily writable from FORTRAN programs. However, write access to such files is not as simple, since this kind of data format requires the data to be written in sequence. To provide a convenient representation for writing such data, we once again resort to the DirectData implementation and provide a function to write a DirectData tree in binary format to a file. This (as well as reading data) is performed by a class BinaryDataIO that encapsulates the parameters of the binary representation and itself uses C++ streams. It is, however, desirable to provide a mechanism to write data in smaller chunks instead of having to store the whole output file in memory. To do so, we need a way of specifying a part of a data structure. This is done by the SomeDataIterator class. An object of this class is a reference to a SomeData object somewhere in a composite data object. The method next() moves the pointer to the next object in the tree on the same level. If the object is composite, hasSubs() is true and a method down() can be used to access the first member object. After the last object on a level has been retrieved, an end-of-file condition is raised, and the application can use up() to go up one level and continue with next(). In this way, all objects in a data tree can be retrieved in exactly the sequence in which they are written in the data format. The method writeBinaryRegion() in BinaryDataIO writes the data between two SomeDataIterators to disk. To fill in the length information of variable-sized arrays, it keeps a region stack that contains the byte offsets of all composite data objects that enclose the current object. Using these methods, the class DataFile that implements data files provides a method commit(D) to commit all data up to but not including the data item D to disk. Before they are written to disk, the data are stored in a DirectData object associated with the data file. After they are written, the corresponding parts of the data tree are deallocated and the memory thus freed. This procedure may be somewhat unsatisfactory, as it does not provide full flexibility. However, an implementation that allows filling in data objects in arbitrary order can be achieved only with a more advanced data format. #### 2.5.3 MallocFile representation A more flexible data format can be provided by using a MallocFile. This is a flexible block-structured file whose blocks can be manipulated similarly to the blocks in the C heap. Blocks can be allocated with arbitrary size and returned to the free list in any order. Each block is identified by an address and can be accessed by acquiring a handle object based on the address.
As long as the handle exists, a copy of the block is locked in memory for manipulation, and it is written back when the handle is released. The simplest mapping of structured data to a MallocFile is to represent each data object by exactly one block. Composite objects then contain a list of addresses that identify the blocks that represent their members. When an array is resized, it then suffices to resize the block that contains the array. Unfilled member fields can be represented by null pointers and filled at will. The format can be improved by not allocating a block for each data object. Instead, data objects of constant size can be stored directly in their parent blocks, in place of the pointers. Thus, an array whose elements are of constant size can be stored in a single block. A simple rule determines the storage layout: if an object has variable size, a pointer to the object is stored; otherwise, the object itself is stored in place. An object has variable size if it is a variable-sized elementary data item (like a matrix with free dimensions), if it is a structure with variable-sized or optional fields, or if it is an array with a variable-sized element type or with an unspecified number of elements. ## 3 Data formats ### 3.1 Binary data format The primary binary data format has been designed with simplicity in mind. In particular, it can be written from FORTRAN 77 or C simulation programs without using the C++ library. The format is basically a sequential byte-string format in which the data fields are written in the sequence in which they appear when traversing the type tree, without intervening meta-information. The only exceptions are length specifications for variable-sized items (matrices or arrays), tag bytes for optional structure fields, and selector tags for unions. There are no alignment requirements for data items. The binary data stream is preceded by an ASCII portion of the file that contains an identification line with some meta-information, followed by the textual representation of the data type. Simulation programs can easily write this part as a string constant. The following is an example of such a header portion:
> ```
> STRUCTURED FILE V0.1 BINARY_BE
> #@Date= 18. 3.1998 Time: 15:26
> TYPE
> struct {
>   molecule_description : struct {
>     molecule_name: string;
>     atom_classes : array of struct {
>       atom_class_id : integer*4;
>       atom_class_number : integer*4;
>       atom_class_name : string;
>     };
>     atoms : array of struct {
>       atom_id : integer*4;
>       atom_name : string;
>     };
>     bonds : array of struct {
>       bond_from_id : integer*4;
>       bond_to_id : integer*4;
>       bond_type : integer*4;
>     };
>   };
>   timesteps : array of struct {
>     global_obs : real*4[.];
>     coordinates : real*4[3,.];
>     optional velocity : real*4[3,.];
>     optional potential : struct {
>       bb : real*4[3,2];
>       data : real*4[.,.,.];
>     }
>   };
> };
> DATA
> 
> ```
The first line starts with the constant "STRUCTURED FILE" to identify the file type, followed by a version specification and an optional keyword, here "BINARY\_BE", indicating that the data are written in binary format with big-endian byte order. Lines starting with hash signs are comment lines. The keyword TYPE initiates parsing of the textual representation of the type tree. The binary data stream starts immediately after the end of the line containing the "DATA" keyword. The following is the specification for the binary data stream: 1. Elementary data objects are written in their natural representation with the byte order indicated in the identification line.
Floating-point numbers are written in IEEE standard representation. 2. Multidimensional data objects are written in FORTRAN order, i.e. the first index varies fastest. If any dimension is unspecified in the data type, the number of elements in this dimension is written as a 4-byte integer in front of the data. This is only done for unspecified dimensions. 3. Character and byte strings are written byte-by-byte. If they are of unspecified size, the actual size precedes them as a 4-byte integer. 4. Structure and array data are written as consecutive data elements. If the array size is unspecified in the data type, it precedes the data as a 4-byte integer. 5. Optional fields are preceded by a single byte. If this byte is zero, the field is not present and no data follow. 6. Unions are preceded by a 2-byte integer indicating the index (starting from zero) of the active field. It is followed by the binary data for this field only. ## 4 Extensions ### 4.1 Named types Named types are used to build recursive type trees. In order that a recursive type tree does not lead to a data tree with infinite recursion, recursive types usually appear together with unions. An instructive example of their usage is the following data type that is used to externalize type trees:
> ```
> typedef TypeDescriptor = union {
>   num : struct {
>     isFloat : integer*1;
>     size : integer*1;
>     dim : array of integer*4;
>   };
>   string : struct {
>     isOpaque : integer*1;
>     size : integer*4;
>   };
>   struct : struct {
>     isUnion : integer*1;
>     fields : array of struct {
>       name : string;
>       typ : type TypeDescriptor;
>       isOptional : integer*1;
>     };
>   };
>   array : struct {
>     size : integer*4;
>     subtype : type TypeDescriptor;
>   };
>   named : struct {
>     name : string;
>   };
> }
> type TypeDescriptor;
> 
> ```
The syntax
> ```
> typedef typename = type
> 
> ```
declares the typename to stand in for the type wherever type typename is used. This enables the struct and array variants of the union to reference other type descriptor trees. The last line of the example is not part of the named-type definition but declares this type to the application. ### 4.2 Late-type binding It is not always advantageous to specify the data type separately from the data. As far as the methods of the SomeData class are concerned, it is sufficient for the data type to specify that an object is a structure or an array, but not what the member fields are. This decision could be deferred to the moment when the fields are actually accessed. The choice of a strong or early type binding, which determines the complete type tree when creating the data object, was motivated by efficiency considerations. In particular, if a data structure contains arrays of structures, a late-binding data format is forced to repeat the field names in each element of the array. However, in many cases it is desirable to be able to specify that any data type may appear in a data object, e.g.
> ```
> struct {
>   …
>   userdata: any;
>   lots_of_userdata: array of any;
>   …
> };
> 
> ```
specifies a structure with a field userdata that can contain an arbitrary data type, and an array lots\_of\_userdata whose elements can contain any, and in particular different, data types. This is especially desirable if the underlying data format is a late-binding format where the complete type tree can only be retrieved by reading the whole file. To be able to store arbitrary data types in a file, the data type must be specified in the data.
One way to do so is to use the textual representation of the data type and the static member function parse() of SomeType. Another is to make use of the data structure from the example in 4.1, which contains all the fields necessary to externalize a complete type tree. The class SomeType provides methods to convert type trees from and to data objects using this type specification. A data format that wishes to make use of the any type will create a data object of this type, have it filled in by these methods, and arrange to have it written along with the actual data object that contains the any data. To the programmer, the any type is completely transparent when reading data. In particular, no instance of SomeData should ever be of the any type when reading. It will only appear in the type description of members of composite data types. Whenever such a member is accessed by the overloaded indexing operator, it will return the actual data type found in the data file. When writing data, a data object will be of the any type until data is actually written to it. Before you can assign data to an any-type data object, you must specify the actual data type t by using the actualizeType(t) method of SomeData. After this method has been called, the data object behaves as if it were of the type t. Objects of the any type are implemented by means of forwarding. Their implementation contains a pointer to the target data object whose type is specified by actualizeType(). Each method of SomeDataImpl forwards to the corresponding method in the target data object when the forwarding pointer is set. Otherwise, the any-type object behaves as a null object, illegal to read or write. ### 4.3 Object persistence strategies An object persistence mechanism, in theory, relieves the programmer of the need to program any input/output. Instead, the compiler and/or the runtime system take over the task of reading and writing objects from or to external storage. This could, in theory, be achieved by a precompiler, but there are several conceptual problems: * A class may contain temporary fields, or fields that have meaning only when the object is in a certain state. The programmer probably does not want these fields to be made persistent, especially if these fields are pointer fields. Handling this requires the introduction of a special keyword to mark persistent fields in an object. * C++ relies heavily on pointers, which cannot be externalized. They must be replaced by appropriate object handles, and consequently care must be taken that all objects that are referenced are also made persistent. * Data structures are often implemented by means of template classes, e.g. the Standard Template Library. These template classes must also be made persistent. Most of these problems stem from the requirement that C++ be efficient and compatible with C. Languages like Java avoid these problems by disallowing pointers or template classes. Is there a way to achieve something similar without using a precompiler? In Java, a reflection mechanism makes it possible to read out the structure of any data type at runtime. Using such a mechanism, we could provide a method that takes an arbitrary object and constructs an appropriate data type for its externalization. Such a mechanism is currently not available in C++. In particular, there is no way to get a list of the fields in a structure, though this could be envisaged as part of a more general template implementation.
An alternative is the approach taken by Sun's XDR: define an abstract data description class whose methods can be used to build data objects. Then, in each persistent class, add a single method that takes a pointer to such an abstract data description object and invokes the methods corresponding to the fields of the class:
> ```
> struct S : public ADRInterface {
>   int a;
>   float b;
>   struct S2 c;
>   struct S3 d[5];
> 
>   // describe each field to the abstract data-description operation
>   void adr(ADROperation *op) {
>     op->adrInt("a",a);
>     op->adrFloat("b",b);
>     op->adrStruct("c",c);
>     op->adrArray("d",d,5);
>   }
> };
> 
> ```
This can be further enhanced by using overloaded functions for describing the data. The methods adrStruct() and adrArray() here expect that their second argument, i.e. the data object, is subclassed from ADRInterface, so that they can call the adr() method to obtain the structure of the objects. To use such an interface for the SomeData class, three different ADROperations must be defined: one to obtain the type tree, and one each for reading and writing. Using these operations, any object that implements the adr() method can be automatically converted into a SomeData object and back. ## 5 Conclusions and outlook The library presented here provides a uniform interface for handling persistent structured data objects in C++. Together with the simple binary file format defined here, it enables applications to store data on disk in an easily interpreted but highly flexible format. It relieves the programmer of the burden of defining binary file formats and moves the data exchange specification to the level of symbolic field names. Practical experience shows that this is the most important feature of the library, as it allows one to add more fields to a data format without interfering with already existing programs. For reasons of performance, more sophisticated data representations can be added to the library. One such representation, based on the MallocFile, has been discussed above; others could include interfaces to established data formats like HDF or to relational or object-oriented databases using SQL. As everyone talks about object-oriented programming, the library can be extended in this direction by adding method members to structures, making it possible to invoke operations on SomeData objects. Return types and argument lists of such methods can again be specified by abstract type trees and passed as SomeData objects. The actual code executed by a method can be hidden away from the application by the data-set classes, thus making it possible to invoke e.g. a Java method from a C++ program. However, this is exactly the feature provided by distributed object systems like CORBA or ILU. This also shows that abstract type trees and a uniform interface to structured data are not restricted to object persistence. Java recently introduced a reflection interface that makes type trees of Java objects accessible from Java code at run-time, and as not everybody can move to Java, especially in scientific computing, a simple kitchen-sink solution like the one presented here might ease some of the everyday problems in C++ programming. Acknowledgements: I would like to thank Frank Cordes for his effort at integrating the data format in a molecular dynamics application, and Daniel Runge, Johannes Schmidt-Ehrenberg, and Hans-Christian Hege for discussions.
# Two-Loop Euler-Heisenberg QED Pair-Production Rate ## I Introduction Euler and Heisenberg, and many others since, computed the exact renormalized one-loop QED effective action for electrons in a uniform electromagnetic field background. When the background is purely that of a static magnetic field, the effective action is minus the effective energy of the electrons in that background. When the background is purely that of a uniform electric field, the effective action has an imaginary part which determines the pair-production rate of electron-positron pairs from vacuum. In this paper we consider the two-loop Euler-Heisenberg effective action, and we show how the divergence of the perturbative expression for the effective action with a uniform magnetic background is related to the non-perturbative imaginary part of the effective action with a uniform electric background. The two-loop Euler-Heisenberg effective Lagrangian, describing the effect of a single photon exchange in the electron loop, was first calculated by Ritus, and later recalculated by Dittrich and Reuter for the magnetic field case. In both cases the proper-time method and the exact Dirac propagator in the uniform field were used. More recently the magnetic field computation was repeated using the more convenient 'worldline' formalism. This calculation revealed that the previous results by Ritus and Dittrich/Reuter were actually incompatible, and differed precisely by a finite electron mass renormalization. This prompted yet another recalculation of this quantity in the worldline formalism, now using dimensional regularisation instead of a proper-time cutoff as had been used in the previous calculations. That calculation confirmed the correctness of Ritus's result, and conversely showed that the final result given by Dittrich and Reuter was not expressed in terms of the physical electron mass. As part of our analysis here, we show how this finite difference in the mass renormalization affects the large-order behaviour of perturbation theory, and how this affects the leading contribution to the imaginary part of the effective action in the electric field case. For a uniform magnetic background of strength $`B`$, the one-loop effective Lagrangian has a simple "proper-time" integral representation: $$\mathcal{L}^{(1)}=-\frac{m^4}{8\pi ^2}\left(\frac{eB}{m^2}\right)^2\int _0^{\infty }\frac{ds}{s^2}\left(\mathrm{coth}s-\frac{1}{s}-\frac{s}{3}\right)e^{-m^2s/(eB)}$$ (1) The $`\frac{1}{s}`$ term is a subtraction of the zero field ($`B=0`$) effective action, while the $`\frac{s}{3}`$ subtraction corresponds to a logarithmically divergent charge renormalization. For a given strength $`B`$, this integral can be evaluated numerically. Alternatively, we can make contact with a perturbative evaluation of the one-loop effective action by making an asymptotic expansion of the integral in the weak field limit – i.e., for small values of the dimensionless parameter $`\frac{eB}{m^2}`$: $$\mathcal{L}^{(1)}\approx -\frac{2m^4}{\pi ^2}\left(\frac{eB}{m^2}\right)^4\sum _{n=0}^{\infty }\frac{2^{2n}\mathcal{B}_{2n+4}}{(2n+4)(2n+3)(2n+2)}\left(\frac{eB}{m^2}\right)^{2n}$$ (2) Here the $`\mathcal{B}_{2n}`$ are Bernoulli numbers. Each term in this expansion of $`\mathcal{L}^{(1)}`$ is associated with a one-fermion-loop Feynman diagram. Note that only even powers of $`eB`$ appear, as expected due to charge conjugation invariance (Furry's theorem).
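Evaluating the first terms of (2) explicitly (our arithmetic, using $`\mathcal{B}_4=-\frac{1}{30}`$, $`\mathcal{B}_6=\frac{1}{42}`$ and $`\mathcal{B}_8=-\frac{1}{30}`$) gives $$\mathcal{L}^{(1)}\approx \frac{2m^4}{\pi ^2}\left[\frac{1}{720}\left(\frac{eB}{m^2}\right)^4-\frac{1}{1260}\left(\frac{eB}{m^2}\right)^6+\frac{1}{630}\left(\frac{eB}{m^2}\right)^8-\cdots \right]$$ the leading term being the familiar lowest-order Euler-Heisenberg correction $`\frac{e^4B^4}{360\pi ^2m^4}`$; the alternation in sign is visible from the outset.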
The divergent $`\mathrm{O}(e^2)`$ self-energy term is not included, as it contributes to the bare Lagrangian by charge renormalization. The expansion (2) is the prototypical "effective field theory" effective Lagrangian, where the low energy effective Lagrangian, for energies well below the fermion mass scale $`m`$, is expanded as $$\mathcal{L}=m^4\sum _na_n\frac{O^{(n)}}{m^n}$$ (3) with $`O^{(n)}`$ being an operator of dimension $`n`$. For QED in a uniform background, the higher dimensional operators $`O^{(n)}`$ are formed from powers of Lorentz invariant combinations of the uniform field strength $`F_{\mu \nu }`$. For a uniform magnetic background this simply means even powers of $`B`$, as in (2). Note that the 'low energy' condition here means that the cyclotron energy $`\frac{eB}{m}`$ is well below the fermion mass scale $`m`$; in other words, $`\frac{eB}{m^2}\ll 1`$. The Euler-Heisenberg Lagrangian encodes the information on the low-energy limit of the one-loop $`N`$-photon amplitudes in a way which is highly convenient for the derivation of various nonlinear QED effects such as vacuum birefringence (see, e.g., and refs. therein) or photon splitting. The experimental observation of vacuum birefringence is presently being attempted by laser experiments. There is also recent experimental evidence for vacuum effects in pair production with strong laser electric fields. The one-loop Euler-Heisenberg perturbative effective action (2) is not a convergent series. The one-loop expansion coefficients in (2) alternate in sign \[since $`\mathrm{sign}(\mathcal{B}_{2n})=(-1)^{n+1}`$\], but grow factorially in magnitude (see also Table 1): $$a_n^{(1)}=-\frac{2^{2n}\mathcal{B}_{2n+4}}{(2n+4)(2n+3)(2n+2)}$$ (4) $$\sim (-1)^n\frac{1}{8\pi ^4}\frac{\mathrm{\Gamma }(2n+2)}{\pi ^{2n}}\left(1+\frac{1}{2^{2n+4}}+\frac{1}{3^{2n+4}}+\cdots \right)$$ (5) So the perturbative expansion (2) is a divergent series. This divergent behaviour is not a bad thing; it is completely analogous to generic behaviour that is well known in perturbation theory in both quantum field theory and quantum mechanics. For example, Dyson argued physically that QED perturbation theory is non-analytic, and therefore presumably divergent, as an expansion in the fine structure constant $`\alpha `$, because the theory is unstable when $`\alpha `$ is negative. As is well known, the divergence of high orders of perturbation theory can be used to extract information about non-perturbative decay and tunneling rates, thereby providing a bridge between perturbative and non-perturbative physics. It has been argued, based on the behaviour of the one-loop Euler-Heisenberg effective Lagrangian (2), that the effective field theory expansion (3) is generically divergent. Here we consider this question at the two-loop level. We stress that for energies well below the scale set by the fermion mass $`m`$, the divergent nature of the effective Lagrangian is not important, as the first few terms in the series (2) provide an accurate approximation. However, the divergence properties do become important when the external energy scale approaches the fermion mass scale $`m`$. The divergence is also the key to understanding how non-perturbative imaginary contributions to the effective action arise from real perturbation theory. ## II Borel Analysis of the One-Loop Euler-Heisenberg Effective Lagrangian To begin, we review very briefly some basics of Borel summation.
Consider an asymptotic series expansion of some function $`f(g)`$, $$f(g)\sim \sum _{n=0}^{\infty }a_ng^n$$ (6) where $`g\to 0^+`$ is a small dimensionless perturbation expansion parameter. In an extremely broad range of physics applications one finds that perturbation theory leads not to a convergent series but to a divergent series in which the expansion coefficients $`a_n`$ have leading large-order behaviour $$a_n\sim (-1)^n\rho ^n\mathrm{\Gamma }(\mu n+\nu )\qquad (n\to \infty )$$ (7) for some real constants $`\rho `$, $`\mu >0`$, and $`\nu `$. When $`\rho >0`$, the perturbative expansion coefficients $`a_n`$ alternate in sign and their magnitude grows factorially, just as in the Euler-Heisenberg case (5). Borel summation is a useful approach to this case of a divergent, but alternating, series. Non-alternating series must be treated somewhat differently. To motivate the Borel approach, consider the classic example $`a_n=(-1)^n\rho ^nn!`$, with $`\rho >0`$. The series (6) is clearly divergent for any value of the expansion parameter $`g`$. Write $$f(g)\sim \sum _{n=0}^{\infty }(-1)^n(\rho g)^n\int _0^{\infty }ds\,s^ne^{-s}$$ (8) $$\sim \frac{1}{\rho g}\int _0^{\infty }ds\left(\frac{1}{1+s}\right)\mathrm{exp}\left[-\frac{s}{\rho g}\right]$$ (9) where we have formally interchanged the order of summation and integration. The final integral, which is convergent for all $`g>0`$, is defined to be the sum of the divergent series. To be more precise, the formula (9) should be read backwards: for $`g\to 0^+`$, we can use Laplace's method to make an asymptotic expansion of the integral, and we obtain the asymptotic series in (6) with expansion coefficients $`a_n=(-1)^n\rho ^nn!`$. For a non-alternating series, such as $`a_n=\rho ^nn!`$, we need $`f(-g)`$. The Borel integral (9) is an analytic function of $`g`$ in the cut $`g`$ plane: $`|\mathrm{arg}(g)|<\pi `$. So a dispersion relation (using the discontinuity across the cut along the negative $`g`$ axis) can be used to define the imaginary part of $`f`$ for negative values of the expansion parameter: $$\mathrm{Im}f(-g)\sim \frac{\pi }{\rho g}\mathrm{exp}\left[-\frac{1}{\rho g}\right]$$ (10) The imaginary contribution (10) is non-perturbative (it clearly does not have an expansion in positive powers of $`g`$) and has important physical consequences. Note that (10) is consistent with a principal parts prescription for the pole that appears on the $`s>0`$ axis if we make the same formal manipulations as in (9): $$\sum _{n=0}^{\infty }\rho ^nn!g^n\sim \frac{1}{\rho g}\int _0^{\infty }ds\left(\frac{1}{1-s}\right)\mathrm{exp}\left[-\frac{s}{\rho g}\right]$$ (11) Similar formal arguments can be applied to the case when the expansion coefficients have leading behaviour (7).
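The intermediate steps parallel (8)–(9) (our filling-in): writing $`\mathrm{\Gamma }(\mu n+\nu )=\int _0^{\infty }du\,u^{\mu n+\nu -1}e^{-u}`$, summing the geometric series under the integral, and substituting $`u=(s/(\rho g))^{1/\mu }`$ gives $$f(g)\sim \int _0^{\infty }du\,\frac{u^{\nu -1}e^{-u}}{1+\rho gu^{\mu }}=\frac{1}{\mu }\int _0^{\infty }\frac{ds}{s}\left(\frac{1}{1+s}\right)\left(\frac{s}{\rho g}\right)^{\nu /\mu }\mathrm{exp}\left[-\left(\frac{s}{\rho g}\right)^{1/\mu }\right]$$ which is the origin of the formula quoted next; for $`\mu =\nu =1`$ it reduces to (9). As before, the geometric series converges only for $`\rho gu^{\mu }<1`$, which is why the interchange is formal.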
Then the leading Borel approximation is $$f(g)\sim \frac{1}{\mu }\int _0^{\infty }\frac{ds}{s}\left(\frac{1}{1+s}\right)\left(\frac{s}{\rho g}\right)^{\nu /\mu }\mathrm{exp}\left[-\left(\frac{s}{\rho g}\right)^{1/\mu }\right]$$ (12) For the corresponding non-alternating case, when the expansion parameter is negative, the leading imaginary contribution is $$\mathrm{Im}f(-g)\sim \frac{\pi }{\mu }\left(\frac{1}{\rho g}\right)^{\nu /\mu }\mathrm{exp}\left[-\left(\frac{1}{\rho g}\right)^{1/\mu }\right]$$ (13) Note the separate meanings of the parameters $`\rho `$, $`\mu `$ and $`\nu `$ that appear in the formula (7) for the leading large-order growth of the expansion coefficients. The constant $`\rho `$ clearly combines with $`g`$ as an effective expansion parameter. The power of the exponent in (13) is determined by $`\mu `$, while the power of the prefactor in (13) is determined by the ratio $`\frac{\nu }{\mu }`$. It must be stressed that these formulas (12) and (13) are formal, being based on assumed analyticity properties of the function $`f(g)`$. The Borel dispersion relations could be complicated by the appearance of additional poles and/or cuts in the complex $`g`$ plane, signalling new physics. In certain special cases these analyticity assumptions can be tested rigorously, but we have in mind the situation in which one is confronted with the expansion coefficients $`a_n`$ of a perturbative expansion, without corresponding information about the function that this series is supposed to represent. This is a common circumstance in physical applications of perturbation theory. For example, Borel techniques have recently been used to study the divergence of the derivative expansion for QED effective actions in inhomogeneous backgrounds. Returning to the Euler-Heisenberg effective Lagrangian, the question of whether the perturbative expansion is alternating or non-alternating is directly relevant. For a uniform magnetic background, the one-loop Euler-Heisenberg series (2) is precisely of the form (6) with $`g=(\frac{eB}{m^2})^2`$. Moreover, from (5) the expansion coefficients $`a_n^{(1)}`$ have leading large-order behaviour of the form (7), with $`\rho =\frac{1}{\pi ^2}`$ and $`\mu =\nu =2`$. In fact, taking into account the sub-leading corrections indicated in (5), the proper-time integral representation (1) is precisely the Borel sum, using (12), of the divergent series (2). For a uniform electric background, the only difference perturbatively is that $`B^2`$ is replaced by $`-E^2`$; that is, $`g=(\frac{eB}{m^2})^2`$ is replaced by $`g=-(\frac{eE}{m^2})^2`$. So the perturbative one-loop Euler-Heisenberg series (2) becomes non-alternating. Then from (13), with $`\rho =\frac{1}{\pi ^2}`$ and $`\mu =\nu =2`$, we immediately deduce the leading behaviour of the imaginary part of the one-loop Euler-Heisenberg effective Lagrangian: $$\mathrm{Im}\mathcal{L}^{(1)}\sim \frac{m^4}{8\pi ^3}\left(\frac{eE}{m^2}\right)^2\mathrm{exp}\left[-\frac{m^2\pi }{eE}\right]$$ (14) This imaginary part has direct physical significance: it gives half the electron-positron pair production rate in the uniform electric field $`E`$.
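Spelling out the arithmetic behind (14) (our intermediate step): with $`\mu =\nu =2`$ and $`\rho =\frac{1}{\pi ^2}`$, eq. (13) contributes a factor $`\frac{\pi }{2}\frac{1}{\rho g}e^{-1/\sqrt{\rho g}}`$, where for $`g=(\frac{eE}{m^2})^2`$ one has $`\frac{1}{\sqrt{\rho g}}=\frac{\pi m^2}{eE}`$. Restoring the overall factor $`\frac{2m^4}{\pi ^2}(\frac{eE}{m^2})^4`$ multiplying the series in (2), together with the constant $`\frac{1}{8\pi ^4}`$ from (5), then gives $$\mathrm{Im}\mathcal{L}^{(1)}\sim \frac{2m^4}{\pi ^2}\left(\frac{eE}{m^2}\right)^4\frac{1}{8\pi ^4}\frac{\pi }{2}\left(\frac{\pi m^2}{eE}\right)^2e^{-\pi m^2/(eE)}=\frac{m^4}{8\pi ^3}\left(\frac{eE}{m^2}\right)^2\mathrm{exp}\left[-\frac{m^2\pi }{eE}\right]$$ in agreement with (14).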
Actually, since we also know the sub-leading corrections (5) to the leading large-order behaviour of the expansion coefficients $`a_n^{(1)}`$, we can apply (13) successively to go beyond the leading behaviour in (14): $$\mathrm{Im}\mathcal{L}^{(1)}\sim \frac{m^4}{8\pi ^3}\left(\frac{eE}{m^2}\right)^2\sum _{k=1}^{\infty }\frac{1}{k^2}\mathrm{exp}\left[-\frac{m^2\pi k}{eE}\right]$$ (15) This is Schwinger's classic result for the imaginary part of the one-loop effective Lagrangian in a uniform electric field $`E`$. To elucidate the physical meaning of the individual terms of this series it is useful to employ the following alternative representation due to Nikishov, $$\frac{2}{\hbar }\mathrm{Im}\mathcal{L}^{(1)}VT=-\sum _r\int \frac{d^3p\,V}{(2\pi \hbar )^3}\mathrm{ln}(1-\overline{n}_p),$$ (16) $$\overline{n}_p=\mathrm{exp}\left(-\pi \frac{m^2+p_\perp ^2}{eE}\right)$$ (17) Here $`\overline{n}_p`$ is the mean number of pairs produced by the field in the state with given momentum $`p`$ and spin projection $`r`$. An expansion of the logarithm in powers of $`\overline{n}_p`$ and term-by-term integration leads back to Schwinger's formula (15). Thus the leading term in this formula can be interpreted as the mean number of pairs in the unit 4-volume $`VT`$, while the higher ($`k\ge 2`$) terms describe the coherent creation of $`k`$ pairs. Pair creation can occur for any value of the electric field strength, though due to the exponential suppression factors one is presently still far away from being able to observe spontaneous pair creation by macroscopic fields in the laboratory. However, it can be arranged for electrons traversing the focus of a terawatt laser to see a critical field in their rest frame. This has recently led to the first observation of pair creation in a process involving only real photons. For the one-loop Euler-Heisenberg QED effective Lagrangian, this large-order perturbation theory analysis is greatly simplified by the fact that we know the exact formula (4) for the expansion coefficients $`a_n^{(1)}`$. This will not be the case below, when we discuss the two-loop Euler-Heisenberg effective Lagrangian. So, for the sake of numerical comparison, we compare the exact one-loop coefficients $`a_n^{(1)}`$ with their leading large-order behaviour. The coefficients are listed in Table 1 up to $`n=15`$. Since the growth is fast, it is convenient to compare the logarithms, as is done in Figure 1. With 16 terms it is straightforward to fit the values of $`\rho `$, $`\mu `$ and $`\nu `$ appearing in (7); moreover, there is sufficient accuracy to fit the overall coefficient $`\frac{1}{8\pi ^4}`$. In Figure 1 we plot $`A_n^{(1)}\equiv \mathrm{log}|a_n^{(1)}|`$ and $`C_n^{(1)}=\mathrm{log}[\mathrm{\Gamma }(2n+2)/(8\pi ^{2n+4})]`$. The agreement is spectacular, already for $`n=0`$. Indeed, on this scale the two plots are indistinguishable. To go beyond the leading large-order behaviour, we plot the difference $`A_n^{(1)}-C_n^{(1)}`$. This can be fitted to the correct form $`\mathrm{log}(1+\frac{1}{2^{2n+4}})\approx \frac{1}{2^{2n+4}}`$, with remarkable accuracy for $`n\ge 2`$, as illustrated in Figure 2. ## III The Two-Loop Euler-Heisenberg Effective Lagrangian We now turn to the two-loop Euler-Heisenberg effective Lagrangian, describing the effect of a single photon exchange in the electron loop. This quantity was first studied by Ritus.
Using the exact electron propagator in a constant field found by Fock and Schwinger, and a proper-time cutoff as the UV regulator, he obtained the on-shell renormalized $`\mathcal{L}^{(2)}`$ in terms of a certain two-parameter integral. From this integral the imaginary part of the Lagrangian was then extracted by a painstaking analysis of its analyticity properties, yielding a representation analogous to Schwinger’s one-loop formula (15). Adding up the one-loop and the two-loop contributions to the imaginary part, the result reads $`\mathrm{Im}\mathcal{L}^{(1)}+\mathrm{Im}\mathcal{L}^{(2)}\approx \frac{m^4}{8\pi ^3}\beta ^2\sum _{k=1}^{\mathrm{\infty }}\left[\frac{1}{k^2}+\alpha \pi K_k(\beta )\right]\mathrm{exp}\left[-\frac{\pi k}{\beta }\right]`$ (18) where $`\beta =\frac{eE}{m^2}`$. For the function $`K_k(\beta )`$ the following small-$`\beta `$ expansion was obtained: $`K_k(\beta )=\frac{c_k}{\sqrt{\beta }}+1+\mathrm{O}(\sqrt{\beta })`$ (19) $`c_1=0,c_k=\frac{1}{2\sqrt{k}}\sum _{l=1}^{k-1}\frac{1}{\sqrt{l(k-l)}},k\ge 2`$ (20) The physical interpretation of the individual terms in the series in terms of coherent multipair creation can be carried over to the two-loop level. This requires one to absorb the term linear in $`\alpha `$ into the exponential factor, rewriting $`\left[\frac{1}{k^2}+\alpha \pi K_k\left(\frac{eE}{m^2}\right)\right]\mathrm{exp}\left[-\frac{k\pi m^2}{eE}\right]\approx \frac{1}{k^2}\mathrm{exp}\left[-\frac{k\pi m_{\ast }^2}{eE}\right]`$ (22) The individual terms in the expansion (19) of $`K_k(\beta )`$ are then to be absorbed into the mass shift $`m_{\ast }-m`$. For the lowest order terms in this expansion, those given in (19), the physical meaning of the corresponding mass shifts in terms of the coherent pair production picture has been discussed in the literature. For example, the leading “1” in the expansion of $`K_1(\beta )`$ after exponentiation yields a mass shift that can be identified as the classical change in mass caused, for one particle in a created pair, by the acceleration due to its partner. Assuming this exponentiation to work, one can, of course, obtain some partial information on the higher-loop corrections to the imaginary part. More remarkably, since the above physical interpretation requires the mass appearing in the exponent to be the physical one, it allows one to determine the physical renormalized mass from an inspection of the renormalized Lagrange function alone, rather than by a calculation of the (lower order) electron self energy. Following the pioneering work by Ritus and his collaborators, a first recalculation of the Euler-Heisenberg Lagrangian was done by Dittrich and Reuter for the magnetic field case. A more recent recalculation showed that the two previous results were actually incompatible, and differed precisely by a finite electron mass renormalisation. All three calculations had been done using a proper-time cutoff rather than dimensional regularisation. This cutoff leads to relatively simple integrals but, due to its non-universality, makes it non-trivial to determine the physical renormalized electron mass already at the two-loop level.
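The exponentiation in (22) is easy to check numerically for $`k=1`$, where $`K_1(\beta )\approx 1`$. Matching exponents at $`\mathrm{O}(\alpha )`$ gives $`m_{\ast }^2=m^2(1-\alpha \beta )`$; this explicit form is our reconstruction from the matching, not a formula quoted from the paper:

```python
# Check the k=1 exponentiation in (22): with K_1(beta) ~ 1, the O(alpha)
# term is absorbed by m -> m_* with m_*^2 = m^2 (1 - alpha*beta).
# (The explicit form of m_* is our reconstruction from matching exponents.)
import math

alpha, beta = 1.0 / 137.0, 0.05          # beta = eE/m^2, weak field
lhs = (1.0 + alpha * math.pi) * math.exp(-math.pi / beta)
rhs = math.exp(-math.pi * (1.0 - alpha * beta) / beta)  # exp(-pi m_*^2/(eE))
print(lhs, rhs, abs(lhs - rhs) / lhs)    # relative difference is O(alpha^2)
```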
A fourth calculation of this quantity, now using dimensional regularisation, yielded complete agreement with Ritus’s result after a perturbative expansion of both results in powers of the $`B`$ field had been performed. In the following we will push the same calculation to $`\mathrm{O}(B^{34})`$, and analyze the rate of growth of the expansion coefficients.
## IV Borel Analysis of the Two-Loop Euler-Heisenberg Effective Lagrangian
The world-line expression for the two-loop on-shell renormalized Euler-Heisenberg effective Lagrangian is: $`\mathcal{L}^{(2)}=\alpha \frac{m^4}{(4\pi )^3}\left(\frac{eB}{m^2}\right)^2\int _0^{\mathrm{\infty }}\frac{ds}{s^3}e^{-m^2s/(eB)}\int _0^1du\left[L(s,u)-2s^2+\frac{6}{u(1-u)}\left(-\frac{s^2}{\mathrm{sinh}^2s}+s\mathrm{coth}s\right)\right]-12\alpha \frac{m^4}{(4\pi )^3}\frac{eB}{m^2}\int _0^{\mathrm{\infty }}\frac{ds}{s}e^{-m^2s/(eB)}\left[\mathrm{coth}s-\frac{1}{s}-\frac{s}{3}\right]\left[\frac{3}{2}-\gamma -\mathrm{log}\left(\frac{m^2s}{eB}\right)+\frac{eB}{m^2s}\right]`$ (24) Here $`\alpha \approx \frac{1}{137}`$ is the fine-structure constant, and $`\gamma =0.5772\mathrm{\dots }`$ is Euler’s constant. The function $`L(s,u)`$ appearing in the integrand of the first term in (24) is defined by the following relations: $`L(s,u)=s\mathrm{coth}s\left[\frac{\mathrm{log}\left(\frac{u(1-u)}{G(u,s)}\right)}{[u(1-u)-G(u,s)]^2}F_1+\frac{F_2}{G(u,s)[u(1-u)-G(u,s)]}+\frac{F_3}{u(1-u)[u(1-u)-G(u,s)]}\right]`$ (25) $`G(u,s)=\frac{\mathrm{cosh}s-\mathrm{cosh}((1-2u)s)}{2s\mathrm{sinh}s}`$ (26) $`F_1=4s(\mathrm{coth}s-\mathrm{tanh}s)G(u,s)-4u(1-u)`$ (27) $`F_2=2(1-2u)\frac{\mathrm{sinh}((1-2u)s)}{\mathrm{sinh}s}+s(8\mathrm{tanh}s-4\mathrm{coth}s)G(u,s)-2`$ (28) $`F_3=4u(1-u)-2(1-2u)\frac{\mathrm{sinh}((1-2u)s)}{\mathrm{sinh}s}-4sG(u,s)\mathrm{tanh}s+2`$ (29) The second term in the two-loop expression (24) is generated by the one-loop electron mass renormalisation, which at the two-loop level becomes necessary in addition to the photon wave function renormalisation. For this mass renormalization term we have found the following exact expansion: $`\frac{eB}{m^2}\int _0^{\mathrm{\infty }}\frac{ds}{s}e^{-m^2s/(eB)}\left[\mathrm{coth}s-\frac{1}{s}-\frac{s}{3}\right]\left[\frac{3}{2}-\gamma -\mathrm{log}\left(\frac{m^2s}{eB}\right)+\frac{eB}{m^2s}\right]`$ (30) $`=\left(\frac{eB}{m^2}\right)^4\sum _{n=0}^{\mathrm{\infty }}\frac{2^{2n+4}\mathcal{B}_{2n+4}}{(2n+4)(2n+3)}\left(\frac{3}{2}-\gamma -\psi (2n+2)\right)\left(\frac{eB}{m^2}\right)^{2n}`$ (31) Here $`\mathcal{B}_{2n}`$ are the Bernoulli numbers, and $`\psi (x)=\frac{\mathrm{\Gamma }^{\prime }(x)}{\mathrm{\Gamma }(x)}`$ is the digamma function. We have not succeeded in finding a closed-form expression for the expansion coefficients arising from the expansion of the double integral in (24), although we suspect that one may exist. Instead, we have made an algebraic expansion of this integral, using MATHEMATICA and MAPLE.
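A quick consistency check on the reconstructed definition (26): $`G(u,s)\to u(1-u)`$ as $`s\to 0`$, so that the combinations $`u(1-u)-G(u,s)`$ appearing in (25) stay well behaved at small $`s`$:

```python
# Numerical check that G(u,s) = (cosh s - cosh((1-2u)s)) / (2 s sinh s)
# tends to u(1-u) as s -> 0, so u(1-u) - G(u,s) in (25) vanishes there.
import mpmath as mp

def G(u, s):
    return (mp.cosh(s) - mp.cosh((1 - 2*u) * s)) / (2 * s * mp.sinh(s))

for u in (0.1, 0.3, 0.5):
    for s in (1.0, 0.1, 0.001):
        print(u, s, mp.nstr(G(u, s), 8), "->", u * (1 - u))
```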
When combined with the exact expansion (31) of the mass renormalization term we obtain an expansion of the form: $`\mathcal{L}^{(2)}=\alpha \frac{m^4}{(4\pi )^3}\left(\frac{eB}{m^2}\right)^4\sum _{n=0}^{\mathrm{\infty }}a_n^{(2)}\left(\frac{eB}{m^2}\right)^{2n}`$ (32) The expansion coefficients $`a_n^{(2)}`$ are listed in Table 1, up to $`n=15`$. Note that those coefficients are in some sense less universal than their one-loop counterparts, since they depend on the one-loop normalization condition imposed on the renormalized electron mass. Several comments are in order. First, the two-loop expansion coefficients $`a_n^{(2)}`$ alternate in sign, just as in the one-loop magnetic background case (5). Second, the magnitude $`|a_n^{(2)}|`$ is clearly growing factorially fast with $`n`$. Thus, the two-loop Euler-Heisenberg series (32) is a divergent series, as is the one-loop Euler-Heisenberg series (2). Note also that for each series the smallest magnitude coefficient is reached already for $`n=1`$, after which the coefficients begin to increase rapidly in magnitude. To extract the leading large-$`n`$ growth of $`|a_n^{(2)}|`$ we fit $`a_n^{(2)}`$ to the form in (7). Once again, it is convenient to work with the logarithm $`D_n=\mathrm{log}(|a_n|)`$ since the growth is so rapid. It is relatively straightforward to find that $`\mu =\nu =2`$ and $`\rho =\frac{1}{\pi ^2}`$. It is more difficult to fit the overall coefficient, but if we assume this is a simple power of $`\pi `$ then our best fit for the leading large-order growth of the two-loop expansion coefficients in (32) is: $`a_n^{(2)}\approx (-1)^n\frac{16}{\pi ^2}\frac{\mathrm{\Gamma }(2n+2)}{\pi ^{2n}}`$ (33) This leading fit is displayed in Figure 3, in terms of $`A_n^{(2)}\equiv \mathrm{log}(|a_n^{(2)}|)`$. The fit is not as good as the one-loop fit shown in Figure 1, but it is still very good. Note the remarkable similarity of the leading large-order growth (33) of the two-loop expansion coefficients to the leading large-order growth of the one-loop expansion coefficients in (5). The only difference is the overall coefficient. The parameters $`\rho `$, $`\mu `$ and $`\nu `$ in the general form (7) are identical. Using the Borel technique to relate this leading growth rate to the leading non-perturbative imaginary part of the effective Lagrangian in a uniform electric field $`E`$, we deduce that the two-loop leading imaginary part is proportional to the one-loop leading imaginary part (14). In fact, from (13) and (33), we find the leading contribution $`\mathrm{Im}\mathcal{L}^{(2)}\approx \alpha \pi \frac{m^4}{8\pi ^3}\left(\frac{eE}{m^2}\right)^2\mathrm{exp}\left[-\frac{m^2\pi }{eE}\right]`$ (34) This has exactly the same dependence on the electric field $`E`$ as the one-loop case. So to two-loop order the leading non-perturbative behaviour of the imaginary part of the effective Lagrangian is: $`\mathrm{Im}\left(\mathcal{L}^{(1)}+\mathcal{L}^{(2)}\right)\approx \left(1+\alpha \pi \right)\frac{m^4}{8\pi ^3}\left(\frac{eE}{m^2}\right)^2\mathrm{exp}\left[-\frac{m^2\pi }{eE}\right]`$ (35) This agrees with the leading term of Ritus’s formula (18). To go beyond this leading term we need to look at corrections to the leading behaviour in (34).
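Before turning to corrections, the leading fit (33) can be cross-checked mechanically: for $`a_n\approx \rho ^n\mathrm{\Gamma }(\mu n+\nu )`$ the ratios obey $`|a_{n+1}/a_n|\approx \rho (\mu n)^\mu `$, so $`\mu `$ and $`\rho `$ follow from a linear fit in $`\mathrm{log}n`$. The sketch below uses the reconstructed one-loop magnitudes as stand-in data, since the two-loop values of Table 1 are not reproduced here; with the real data one would fit the tabulated $`a_n^{(2)}`$ instead.

```python
# Extract (mu, rho) in (7) from large-order coefficient ratios:
# for a_n ~ rho^n Gamma(mu n + nu),  |a_{n+1}/a_n| ~ rho (mu n)^mu.
# Stand-in data: |a_n| = Gamma(2n+2) zeta(2n+4) / (8 pi^(2n+4)).
import mpmath as mp
import numpy as np

def log_a(n):
    return float(mp.log(mp.gamma(2*n+2)) + mp.log(mp.zeta(2*n+4))
                 - mp.log(8) - (2*n+4)*mp.log(mp.pi))

n = np.arange(50, 400, 25)
logr = np.array([log_a(k + 1) - log_a(k) for k in n])
mu, intercept = np.polyfit(np.log(n), logr, 1)   # slope -> mu
rho = np.exp(intercept) / mu**mu                 # since r_n ~ rho (mu n)^mu
print("mu  =", round(mu, 3))                     # close to 2
print("rho =", round(rho, 4), "vs 1/pi^2 =", round(1 / np.pi**2, 4))
```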
In Figure 4 we plot the difference of the logarithms, and we see that the $`n`$ dependence is much more gentle than the rapid fall-off found in the one-loop case, which was plotted in Figure 2. In fact, from the terms up to $`n=15`$, we obtain the fit $`a_n^{(2)}\approx (-1)^n\frac{16}{\pi ^2}\frac{\mathrm{\Gamma }(2n+2)}{\pi ^{2n}}\left[1-\frac{0.44}{\sqrt{n}}+\mathrm{\dots }\right]`$ (36) This is a considerably weaker $`n`$ dependence than is found for the first correction in the one-loop case (5). This means that in the two-loop case the dominant corrections are to the prefactor in the leading behaviour (34). This is in contrast to the one-loop case (15), where the first correction to the leading behaviour is exponentially suppressed. Indeed, applying the Borel relations, the correction term (36) leads to $`\mathrm{Im}\left(\mathcal{L}^{(1)}+\mathcal{L}^{(2)}\right)\approx \left(1+\alpha \pi \left[1-0.44\sqrt{\frac{2eE}{\pi m^2}}+\mathrm{\dots }\right]\right)\frac{m^4}{8\pi ^3}\left(\frac{eE}{m^2}\right)^2\mathrm{exp}\left[-\frac{m^2\pi }{eE}\right]`$ (37) We emphasize that the fit in (36) is based on a simple fit to the first 16 two-loop coefficients. Nevertheless, the structure of (37) conforms already to the form of Ritus’s expansion in eq. (18). It would be interesting to probe this correction term in more detail by a further study of the analyticity properties of the integral representations of the two-loop Euler-Heisenberg effective Lagrangian, or by looking at still higher orders in perturbation theory.
## V Concluding Remarks
Our analysis also permits us to study the dependence of (34), the leading non-perturbative imaginary contribution to the effective Lagrangian, on the electron mass renormalisation. In the world-line expression (24) for the two-loop Euler-Heisenberg effective Lagrangian, a finite change of the renormalised electron mass would amount to an arbitrary change of the constant $`\frac{3}{2}-\gamma `$ appearing in the second bracket of the second term. For example, it had been shown that the renormalised Lagrangian obtained in an earlier calculation differs from (24) precisely by a replacement of $`\frac{3}{2}`$ by $`\frac{5}{6}`$. A separate study of the contributions of the first and second term in (24) shows that, due to cancellations between both terms, the leading large-$`n`$ growth of their sum is smaller than for each term separately. However, this property holds true only if the renormalised electron mass is the physical one. For definiteness, in Figure 5 we compare the correct leading growth (33) of the two-loop coefficients with the coefficients obtained by expanding out the earlier two-loop result. The difference in the leading large-$`n`$ growths is obvious. Thus, in agreement with Ritus’s analysis, we find that only if expressed in terms of the true electron mass will the imaginary part of the renormalised two-loop Lagrangian show the same exponential suppression factor $`\mathrm{exp}[-\pi m^2/(eE)]`$ as the one-loop Lagrangian. To summarize, we have constructed the imaginary part of the two-loop QED Euler-Heisenberg Lagrangian in a constant electric field by a computer-based calculation of its weak-field expansion together with Borel summation techniques. The knowledge of the first 16 coefficients has turned out to be sufficient to verify the structure of the leading $`(k=1)`$ term in Ritus’s eq. (18), and to obtain a numerical value for the first $`O(\sqrt{eE/m^2})`$ correction contained in that formula.
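One way to see why the $`1/\sqrt{n}`$ correction in (36) Borel-maps onto the $`\sqrt{eE/m^2}`$ correction in (37): dividing $`\mathrm{\Gamma }(2n+2)`$ by $`\sqrt{n}`$ is asymptotically the same as shifting $`\nu =2`$ to $`\nu =3/2`$ in (7), and by (13) the prefactor power $`\nu /\mu `$ then drops by $`1/4`$, i.e. one extra factor $`(\rho g)^{1/4}=\sqrt{eE/(\pi m^2)}`$. This is our own consistency check, not a computation from the paper:

```python
# Gamma(2n+2)/sqrt(n) tracks Gamma(2n+3/2) up to a constant (sqrt(2)),
# so the 1/sqrt(n) coefficient correction is a shift nu: 2 -> 3/2 in (7),
# which by (13) produces the sqrt(eE/m^2) prefactor correction in (37).
import mpmath as mp

for n in (5, 20, 80, 320):
    ratio = mp.gamma(2*n + 2) / (mp.sqrt(n) * mp.gamma(2*n + mp.mpf(3)/2))
    print(n, mp.nstr(ratio, 8))   # tends to sqrt(2) ~ 1.4142
```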
The method used here is significantly simpler than the earlier one, where the imaginary part was obtained by an analysis of the analyticity properties of the two-loop parameter integrals. In particular, we have seen that the large-order behaviour of the two-loop coefficients is the same (up to an overall constant factor) as in the one-loop case. This means that the leading contribution to the imaginary part of the two-loop effective Lagrangian has the same form as in the one-loop case. This gives a new perspective to Ritus’s arguments that the true renormalized electron mass $`m`$ is such that the leading exponential factor in the pair production rate is $`\mathrm{exp}[-\pi m^2/(eE)]`$. Since those arguments pertain to arbitrary loop orders, and the leading exponential factor is directly related to the leading growth rate in the weak-field expansion, they also lead one to expect that the Euler-Heisenberg Lagrangian may be amenable to this type of Borel analysis at any fixed loop order in perturbation theory.
## VI Acknowledgements
C. S. would like to thank R. Stora for discussions. This work has been supported in part (GD) by the U.S. Department of Energy grant DE-FG02-92ER40716.00.
# Metal-insulator transition in three dimensional Anderson model: scaling of higher Lyapunov exponents
## Abstract
Numerical studies of the Anderson transition are based on finite-size scaling analysis of the smallest positive Lyapunov exponent. We prove numerically that the same scaling holds also for higher Lyapunov exponents. This scaling supports the hypothesis of the one-parameter scaling of the conductance distribution. From collected numerical data for quasi-one-dimensional systems up to system size $`24^2\times \mathrm{\infty }`$ we found the critical disorder $`16.50\le W_c\le 16.53`$ and the critical exponent $`1.50\le \nu \le 1.54`$. Finite-size effects and the role of irrelevant scaling parameters are discussed. PACS numbers: 71.30.+h, 71.23.-k, 72.15.Rn
The main problem of the theory of the Anderson transition is to prove that there is only one relevant parameter which controls the behavior of all quantities of interest in the neighborhood of the critical point. An excellent example of such a quantity is the smallest positive Lyapunov exponent (LE) $`z_1`$, which follows the one-parameter scaling relation $$z_1(L,W)=f(L/\xi (W)).$$ (1) In (1), $`W`$ is the disorder, $`L`$ defines the width of the quasi-one-dimensional system $`L\times L\times L_z`$ and $`\xi (W)`$ is the universal scaling parameter. In Q1D geometry, Lyapunov exponents $`z_i`$ are defined through eigenvalues $`t_i`$ of the transfer matrix $`T^{\dagger }T`$ as $`z_i=2\frac{L}{L_z}\mathrm{log}t_i`$. In the limit $`L_z\gg L`$, all $`z`$s are self-averaged quantities. Relation (1) determines the disorder and system-size dependence of $`z_1`$ in the neighborhood of the critical point $`W_c`$ and enables us to determine $`W_c`$ and the critical exponents for the conductance ($`s`$, $`W<W_c`$) and for the localization length ($`\nu `$, $`W>W_c`$). In the pioneering work, critical parameters were found as $`W_c\approx 16.5\pm 0.5`$ for the box distribution of the disorder energies, and $`s=\nu \approx 1.5\pm 0.1`$. This result was later confirmed by more accurate numerical studies, and also by analysis of the level statistics. Calculations performed for different microscopic models confirmed the universality of the exponent $`\nu `$ within a given universality class. To complete the proof of the universality of the metal-insulator transition, the one-parameter scaling should be found also for more realistic variables, such as the conductance $`g`$. This must be done for cubic samples. Here, owing to the absence of self-averaging, it is necessary to test the universal scaling of the whole distribution $`P(g)`$. It is unrealistic to perform such an analysis with numerical accuracy comparable to that achieved in Q1D studies. Therefore, previous studies of $`P(g)`$ concentrated only on estimation of the conductance distribution at the critical point and in the metallic and localized regimes. The aim of this work is to support the idea of the one-parameter scaling of the conductance and of its distribution. Instead of studying $`g`$ directly, we prove numerically that the higher Lyapunov exponents $`z_2,z_3,\mathrm{\dots }`$ follow the same scaling behavior as the first one in the Q1D systems. Common scaling proves that the spectrum of the transfer matrix in the Q1D limit is determined by only one parameter. The strong correlation of the $`z`$s also gives a serious basis for the generalization of random-matrix theory to the description of the critical region.
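To make the definitions above concrete, the following is a minimal transfer-matrix sketch that computes the positive Lyapunov exponents of the 3D Anderson model in bar geometry. It is a schematic illustration, not the production code behind the tables: the periodic transverse boundary conditions, the tiny bar length and the normalization $`z_i=2L\gamma _i`$ (with $`\gamma _i`$ the exponent per slice) reflect our reading of the definitions, assuming only NumPy.

```python
# Minimal transfer-matrix sketch for the quasi-1D Anderson model L x L x L_z.
# Normalization z_i = 2*L*gamma_i (gamma_i = Lyapunov exponent per slice);
# all sizes here are illustrative, not production values.
import numpy as np

def slice_hamiltonian(L, W, rng):
    """L x L slice: on-site disorder in [-W/2, W/2], periodic transverse BCs."""
    n = L * L
    H = np.zeros((n, n))
    H[np.diag_indices(n)] = rng.uniform(-W / 2, W / 2, n)
    for x in range(L):
        for y in range(L):
            i = x * L + y
            for j in (((x + 1) % L) * L + y, x * L + (y + 1) % L):
                H[i, j] = H[j, i] = -1.0
    return H

def lyapunov_spectrum(L, W, E=0.0, n_slices=2000, seed=1):
    """Positive LEs z_1 <= z_2 <= ... from QR-reorthogonalized products."""
    n = L * L
    rng = np.random.default_rng(seed)
    Q = np.eye(2 * n)
    logs = np.zeros(2 * n)
    for _ in range(n_slices):
        H = slice_hamiltonian(L, W, rng)
        T = np.block([[E * np.eye(n) - H, -np.eye(n)],
                      [np.eye(n), np.zeros((n, n))]])
        Q, R = np.linalg.qr(T @ Q)
        logs += np.log(np.abs(np.diag(R)))
    gamma = np.sort(logs / n_slices)[::-1]   # exponents come in +/- pairs
    return 2 * L * gamma[:n][::-1]           # the positive half, ascending

print(lyapunov_spectrum(L=5, W=16.5)[:4])    # z_1 near criticality: O(few)
```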
Although we deal only with the Q1D geometry, it is reasonable to suppose that the observed correlation survives also for the cubic geometry. Then the relation between $`g`$ and the $`z`$s, $`g=\sum _i\mathrm{cosh}^{-2}(z_i/2)`$, assures that $`g`$ also follows the one-parameter scaling. The collected numerical data also provide us with a very accurate estimation of the critical disorder $`W_c`$ and the critical exponent $`\nu `$. This is the first time that numerical data for system sizes $`L>16`$ have been collected and analyzed. Our data for large $`L`$ enable us to check the finite-size corrections to scaling proposed earlier. The scaling behavior of higher LEs was originally studied in Henneke’s PhD thesis. Due to the insufficient accuracy of his data and the small system sizes, no acceptable proof of the common scaling was found. A first indication of the common scaling was reported later and generalized to the neighborhood of the critical point. The common behavior of higher LEs, $`z_i\propto i`$, is well known in the metallic regime; it was already used to explain the physical meaning of the scaling parameter $`\xi (W)`$, and confirmed by random-matrix studies. In the localized regime, the $`z`$s follow the relation $`z_i(W,L)=z_1(W,L)+\mathrm{\Delta }_i`$ with $`W`$- and $`L`$-independent constants $`\mathrm{\Delta }_i`$. For the Q1D systems $`L^2\times L_z`$ we calculated all LEs for about 21 different values of the disorder, $`16\le W\le 17`$. $`L`$ grows from $`L=4`$ up to 24. For the smallest $`L`$, the relative accuracy of the first LE $`z_1(W,L)`$, $`\epsilon _1=\sqrt{\mathrm{var}z_1}/z_1(W,L)`$, was 0.05%, while $`\epsilon _1`$ was only 1% for $`L=21,22`$ and 24, being 0.5% for $`L=16,18`$. The accuracy of the higher LEs is much better; in particular, $`\epsilon _2\approx \epsilon _1/2`$ and $`\epsilon _9\approx 0.17\epsilon _1`$ for all system sizes. The interval of disorder is narrow enough to approximate the $`W`$-dependence of the $`z`$s by the linear fit $$z_j(W,L)=z_j^{(0)}(L)+Wz_j^{(1)}(L),j=1,2,\mathrm{\dots }$$ (2) Small differences between fits containing higher powers of $`W`$ and (2) appear only for $`L>18`$, and even then they do not exceed the numerical inaccuracy of the raw data. The typical $`W`$-dependence of our data is presented in Fig. 1 for $`z_2`$. The scaling behavior requires that $$z_j(W,L)\approx z_{jc}+A\times (W-W_c)\times L^\alpha ,\alpha =1/\nu .$$ (3) Comparison of (2) with (3) offers the simplest way to estimate the critical exponent $`\alpha `$. In Fig. 2 we present the $`L`$-dependence of $`z_j^{(1)}`$ for the first six LEs and for $`z_9`$. It confirms that close to the critical point these LEs scale with the same exponent $`\alpha `$: $$\alpha \approx 0.655\pm 0.010$$ (4) which determines $`\nu =1.526\pm 0.023`$. This estimation is in very good agreement with the result of MacKinnon and differs only slightly from other published values. Figs. 1 and 2 also show the important influence of finite-size effects (FSE) on the present analysis. Evidently, the small-$`L`$ data are of no use in the analysis of higher LEs. We found that the numerical data for $`z_j`$ could be used only when $$L>z_j.$$ (5) This is easy to understand: if $`z_j>L`$, then the $`j`$th channel is "localized" rather than critical on this length scale. Therefore only the small part of the spectrum which fulfills relation (5) follows the scaling behavior. The rest of the spectrum depends on $`L`$ even at the critical point. This conclusion is supported also by the analysis of the density $`\rho (z)`$ of all LEs for cubic samples.
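With the spectrum in hand, the conductance follows directly from the relation just quoted; a small helper reusing lyapunov_spectrum() from the sketch above:

```python
# Landauer conductance (units of e^2/h) from the Lyapunov spectrum,
# g = sum_i cosh^{-2}(z_i/2).  Deep in the localized regime the first
# channel dominates and g ~ 4*exp(-z_1).
import numpy as np

def conductance(z):
    return float(np.sum(1.0 / np.cosh(z / 2.0) ** 2))

for W in (10.0, 16.5, 22.0):            # below, near, above the transition
    z = lyapunov_spectrum(L=5, W=W, n_slices=1000)
    print(W, round(conductance(z), 3))
```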
At the critical point, the number of system-size-independent LEs grows as $`L`$ when $`L\to \mathrm{\infty }`$. As $`z_1\approx 3.4`$, the above-mentioned effect does not influence the analysis of the first LE $`z_1`$. Nevertheless, other FSE must be taken into account in the scaling analysis of $`z_1`$. A more reliable estimation of the exponent $`\alpha `$ (4) and of the critical disorder $`W_c`$ is given by the position of the minimum of the function $$F(W_c,\alpha ,\mathrm{\dots })=\frac{1}{N}\sum _{W,L}\frac{1}{\sigma _j^2(W,L)}\left[z_j(W,L)-z_j^{\mathrm{fit}}(W,L)\right]^2.$$ (6) In (6), $`N`$ is the number of data points, and $`\mathrm{\dots }`$ stands for all other fitting parameters. The natural choice of the fitting function $`z_j^{\mathrm{fit}}`$ in (6) is the rhs of (2). No FSE are explicitly included in (2). Nevertheless, it still enables us to test the sensitivity of the critical parameters to the size of the analyzed systems. To do so, we considered different sets of input data $`z_j(L,W)`$ with the restriction $`L_{\mathrm{min}}\le L\le L_{\mathrm{max}}`$ ($`L_{\mathrm{min}}\le 12`$). Then the $`L_{\mathrm{min}}`$- and $`L_{\mathrm{max}}`$-dependence of $`W_c`$ and $`\alpha `$ was analyzed. While the influence of the choice of $`L_{\mathrm{max}}`$ is, as expected, negligible, both $`W_c`$ and $`z_{jc}`$ increase with $`L_{\mathrm{min}}`$. We found $`L_{\mathrm{min}}`$-independent results only for the two smallest LEs, $`z_1`$ and $`z_2`$. For the higher LEs, the critical parameters do not reach their limiting values even for $`L_{\mathrm{min}}=12`$. In contrast to $`W_c`$, the estimation of the critical exponent $`\alpha `$ does not depend on the choice of the interval of $`L`$. The obtained data are in good agreement with the estimate (4) for all LEs under consideration. The weak $`L_{\mathrm{min}}`$-sensitivity of the critical exponent agrees with the assumption that FSE influence primarily the $`W`$-independent part of $`z_j`$. Fig. 1 offers a simple interpretation of this result: to eliminate FSE one has to shift each line by a disorder-independent constant $`B(L)`$ added to the rhs of (2). A proper choice of $`B(L)`$ assures that all lines cross at the same point, as proposed by the scaling theory. Finite-size corrections to the line slope are only of "higher order". Slevin and Ohtsuki fitted $`z_1(W,L)`$ (resp. its inverse $`z_1^{-1}`$) to the more general function $$z_1^{\mathrm{fit}}(L,W)=z_{jc}+\sum _{n=0}^{N_x}\sum _{m=0}^{N_y}A_{nm}x^ny^m$$ (7) with $`N_x=3`$, $`N_y=1`$. In (7), $`x=(w+b_1w^2+b_2w^3)L^\alpha `$, $`w=W-W_c`$ and $`y=L^\beta `$ with $`\beta <0`$. The exponent $`\beta `$ represents the second, irrelevant critical scaling exponent. We applied this function to our data with restriction (5) and with $`b_1=b_2=0`$, $`n+m\le 1`$. Then $$z_j^{\mathrm{fit}}(L,W)=z_{jc}+A\times (W-W_c)L^\alpha +BL^\beta .$$ (8) We have checked that more sophisticated fits do not provide any reasonable improvement in the accuracy of the critical parameters. To test the quality of the fit (8), we again studied the sensitivity of our results to a change of the input data. Evidently, for large enough $`L_{\mathrm{min}}`$ the role of the irrelevant scaling exponent is negligible. The finite-size effects become small and difficult to measure. The value of the irrelevant critical exponent $`\beta `$ obtained from the fitting function (8) decreases to $`-20`$ for large $`L_{\mathrm{min}}`$.
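The minimization of (6) with the fitting function (8) is a standard nonlinear least-squares problem; the sketch below runs it with scipy.optimize.curve_fit on synthetic data, since the measured $`z_j(W,L)`$ tables are not reproduced here.

```python
# Sketch of the fit (8): z = z_c + A*(W - W_c)*L**alpha + B*L**beta,
# demonstrated on synthetic data with known "true" parameters.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
W = np.repeat(np.linspace(16.0, 17.0, 21), 6)
L = np.tile(np.array([8, 10, 12, 16, 18, 24]), 21)
true = (3.4, 2.0, 16.52, 1 / 1.52, -0.5, -3.0)  # z_c, A, W_c, alpha, B, beta

def model(X, z_c, A, W_c, alpha, B, beta):
    W, L = X
    return z_c + A * (W - W_c) * L**alpha + B * L**beta

z = model((W, L), *true) + rng.normal(0, 0.01, W.size)
popt, pcov = curve_fit(model, (W, L), z, p0=(3.0, 1.0, 16.5, 0.7, 0.0, -1.0))
print("W_c = %.3f   nu = %.2f" % (popt[2], 1 / popt[3]))
```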
For small values of $`L_{\mathrm{min}}`$, however, the three-parameter fit (8) still does not provide an $`L_{\mathrm{min}}`$-independent estimation of the critical parameters. We therefore averaged the values of $`W_c`$ and $`\alpha `$ obtained from various choices of $`L_{\mathrm{min}}`$. Table 1 presents our results for the first five LEs obtained from the fits (2) and (8). On the basis of the presented data we estimate $$16.50\le W_c\le 16.53\;\mathrm{and}\;1.50\le \nu \le 1.54$$ (9) These values are in very good agreement with earlier estimates. Our results (9) differ from those obtained by the many-parameter fitting procedure of Ref. (Table 1). None of the analyzed statistical ensembles provides such a high value of $`\nu `$. This discrepancy is probably caused by different input data. Contrary to previous treatments, we collected data for large system sizes in order to simplify the fitting function. The main shortcoming of this strategy is the lower accuracy of our data for $`z_1`$. On the other hand, the fact that the results obtained from the many-parameter fitting procedure depend on $`L_{\mathrm{min}}`$ indicates that the fitting function (8) is still insufficient to reflect completely the corrections to scaling. The only way to obtain a more accurate estimation of the critical parameters is to collect more exact numerical data for large system sizes. To conclude, we have collected numerical data for the quasi-one-dimensional Anderson model up to system size $`L=24`$. Our data prove that the higher Lyapunov exponents of the transfer matrix follow the one-parameter scaling law. The critical exponent $`\nu `$ coincides with that calculated from the scaling treatment of the smallest LE. The scaling holds only for Lyapunov exponents which are smaller than the system size considered. The common scaling enables us to express all relevant LEs as a unique function of the first one. Evidently, the same holds also for any function of the $`z`$s. This indicates the validity of the scaling theory for the conductance. However, our analysis was restricted to quasi-one-dimensional systems. A rigorous proof of the one-parameter scaling of the conductance requires repeating the performed scaling analysis for cubic samples, where no self-averaging of the $`z`$s and of $`g`$ takes place. We show for the first time that numerical data for higher LEs can be used for the calculation of the critical parameters of the metal-insulator transition. The numerical accuracy of the higher LEs is much better than that of $`z_1`$. The price we pay for it is a stronger influence of finite-size effects, which means that data obtained for small system sizes cannot be used in the scaling analysis. The best compromise between accuracy and FSE is offered by the data for the second LE, $`z_2`$. We discussed methods for eliminating the finite-size effects and estimated the critical disorder and the critical exponent $`\nu `$ in relation (9). Acknowledgment This work has been supported by the Slovak Grant Agency, Grant No. 2/4109/98. Numerical data have been partially collected using the Origin 2000 computer in the Computer Center of the Slovak Academy of Sciences.
# Is I Zw 18 a young galaxy?
## 1 Introduction
I Zw 18 remains the most metal-poor blue compact dwarf (BCD) galaxy known since its discovery by Sargent & Searle (1970). Later spectroscopic observations by Searle & Sargent (1972), Lequeux et al. (1979), French (1980), Kinman & Davidson (1981), Pagel et al. (1992), Skillman & Kennicutt (1993), Martin (1996), Izotov, Thuan & Lipovetsky (1997c), Izotov & Thuan (1998), Vílchez & Iglesias-Páramo (1998), Izotov & Thuan (1999) and Izotov et al. (1999) have confirmed its low metallicity, with an oxygen abundance of only $`\sim `$ 1/50 the solar value. Zwicky (1966) described I Zw 18 as a double system of compact galaxies, which are in fact two bright knots of star formation with an angular separation of 5$`\stackrel{\prime \prime }{.}`$8. These two star-forming regions are referred to as the brighter northwest (NW) and fainter southeast (SE) components (Fig. 1) and form what we will refer to as the main body. Later studies by Davidson, Kinman & Friedman (1989) and Dufour & Hester (1990) have revealed a more complex optical morphology. The most prominent diffuse feature, hereafter component C (Fig. 1), is a blue irregular star-forming region $`\sim `$ 22″ northwest of the NW component. Dufour, Esteban & Castañeda (1996a), van Zee et al. (1998) and Izotov & Thuan (1998) have shown the C component to have a systemic radial velocity equal to that of the ionized gas in the NW and SE components, thus establishing its physical association to I Zw 18. Furthermore, van Zee et al. (1998) have shown that this component is embedded in a common H I envelope with the main body. Searle & Sargent (1972) and Hunter & Thronson (1995) have suggested that I Zw 18 may be a young galaxy, recently undergoing its first burst of star formation. The latter authors concluded from HST images that the colours of the diffuse unresolved component surrounding the SE and NW regions are consistent with a population of B and early A stars, i.e. with no evidence for older stars. Ongoing star formation in the main body of I Zw 18 is implied by the discovery of a population of Wolf-Rayet stars in the NW component (Izotov et al. 1997a; Legrand et al. 1997; de Mello et al. 1998). Flux-calibrated optical spectra of the C component (Izotov & Thuan 1998; van Zee et al. 1998) reveal a blue continuum with weak Balmer absorption features and faint H$`\alpha `$ and H$`\beta `$ in emission. Such spectral features imply that the H II region is ionized by a population of early B stars and suggest that the C component is older than the main body of I Zw 18. Izotov & Thuan (1998) have suggested an age sequence from the C component ( $`\sim `$ 200 Myr ) to the SE region ( $`\sim `$ 5 Myr ) of active star formation. Dufour et al. (1996b) have discussed new HST imagery of I Zw 18, including the C component, which is resolved into stars. Based on the analyses of colour-magnitude diagrams, they concluded that star formation in the main body began at least 30 – 50 Myr ago and is maintained to the present, as is apparent in the SE component. Martin (1996) and Dufour et al. (1996b) have discussed the properties of expanding superbubbles of ionized gas driven by supernova explosions and have inferred dynamical ages of 15 – 27 Myr and 13 – 15 Myr, respectively. As for the age of the C component, Dufour et al. (1996b) found in a $`(B-V)`$ vs. $`V`$ colour-magnitude analysis a well-defined upper stellar main sequence indicating an age of the blue stars of $`\sim `$ 40 Myr.
However, numerous faint red stars were also present in the colour-magnitude diagram, implying an age of 100 – 300 Myr. Dufour et al. (1996b) therefore concluded that the C component consists of an older stellar population with an age of several hundred Myr, but which has recently experienced a modest starburst in its southeastern half, as evidenced by the presence of blue stars and H$`\alpha `$ emission. Recently, Aloisi, Tosi & Greggio (1999) have discussed the star formation history in I Zw 18 using the same HST WFPC2 archival data (i.e. those by Hunter & Thronson (1995) and Dufour et al. (1996b)). They compared observed colour-magnitude diagrams and luminosity functions with synthetic ones and concluded that there were two episodes of star formation in the main body, a first episode occurring over the last 0.5 – 1 Gyr, an age more than 10 times larger than that derived by Dufour et al. (1996b), and a second episode with more intense activity taking place between 15 and 20 Myr ago. No star formation has occurred within the last 15 Myr. For the C component, Aloisi et al. (1999) estimated an age not exceeding 0.2 Gyr. Garnett et al. (1997) have presented measurements of the gas-phase C/O abundance ratio in both NW and SE components, based on ultraviolet spectroscopy with the Faint Object Spectrograph (FOS) onboard HST. They determined values of log C/O = –0.63 $`\pm `$ 0.10 for the NW component and log C/O = –0.56 $`\pm `$ 0.09 for the SE component. These ratios, being significantly higher than in other metal-poor irregular galaxies, apparently require that carbon in I Zw 18 has been enriched by an older generation of stars. Garnett et al. (1997) concluded that I Zw 18 must have undergone an episode of star formation several hundred million years ago. Using the same HST ultraviolet spectra, Izotov & Thuan (1999) have rederived the C/O abundance ratio in both NW and SE components of I Zw 18. They obtained lower values of log C/O, equal to –0.77 $`\pm `$ 0.10 for the NW component and –0.74 $`\pm `$ 0.09 for the SE component. With these lower C/O ratios, I Zw 18 does not stand apart anymore from other low-metallicity BCDs. Furthermore, the C/O ratios are in excellent agreement with those predicted by massive star nucleosynthesis theory. Therefore, no preceding low-mass carbon-producing stellar population needs to be invoked, thus supporting the original idea that I Zw 18 is a young galaxy undergoing its first episode of star formation (Searle & Sargent 1972). The main source of the differences ($`\sim `$ 0.2 dex) with the values derived by Garnett et al. (1997) comes from the adopted electron temperatures. Izotov & Thuan (1999) use higher electron temperatures (by 1900 K and 2300 K respectively for the NW and SE components) as derived from recent MMT spectral observations in apertures which match more closely those of the HST FOS observations used to obtain carbon abundances. Thus, despite the extensive multiwavelength studies of I Zw 18, its evolutionary status remains controversial. Increasing evidence is accumulating, however, in favor of the idea that this BCD underwent its first episode of star formation less than 100 Myr ago. The first line of evidence is based on heavy element abundance ratios. Izotov & Thuan (1999) have studied these ratios in a sample of low-metallicity BCDs.
They found that all galaxies with heavy element mass fraction $`Z`$ $`\lesssim `$ $`Z_{\odot }`$/20, including I Zw 18, show constant C/O and N/O abundance ratios which can be explained by element production in massive stars ( $`M`$ $`\ge `$ 9 $`M_{\odot }`$ ) only. Intermediate-mass stars ( 3 $`M_{\odot }`$ $`\le `$ $`M`$ $`<`$ 9 $`M_{\odot }`$ ) in these galaxies have not had time to die and release their C and N production. Izotov & Thuan (1999) put an upper limit to the age of the order of 100 Myr for these most metal-deficient galaxies. The chemical evidence for a young age of galaxies with $`Z`$ $`\lesssim `$ $`Z_{\odot }`$/20 is supported by photometric and spectroscopic evidence. Thuan, Izotov & Lipovetsky (1997) and Papaderos et al. (1998) have argued, on the basis of colour profiles and spectral synthesis studies, that the second most metal-deficient galaxy known after I Zw 18, the BCD SBS 0335–052 with $`Z`$ $`\sim `$ $`Z_{\odot }`$/40, is a young galaxy with age less than 100 Myr. Using the same techniques in addition to colour-magnitude diagram studies, Thuan, Izotov & Foltz (1999a) have shown that the age of the BCD SBS 1415+437 with $`Z`$ $`\sim `$ $`Z_{\odot }`$/21 is also less than 100 Myr. In view of the contradictory conclusions reached by different authors based on the same HST data set, we have decided to address anew the issue of the age of I Zw 18 by reexamining the HST data. We use the archival HST WFPC2 $`B`$, $`V`$ and $`R`$ images to construct colour-magnitude diagrams for I Zw 18. We complement the imaging data with high signal-to-noise ratio Multiple Mirror Telescope (MMT) spectroscopic observations. These data are interpreted with the help of spectral energy distributions (SEDs). Spectroscopic observations are crucial for this type of analysis, as they allow us to correct SEDs for contamination by emission from ionized gas. In Sect. 2 we discuss the photometric and spectroscopic data. A determination of the distance to I Zw 18 and constraints on its age from $`(B-V)`$ vs. $`V`$ and $`(V-R)`$ vs. $`R`$ colour-magnitude diagrams are given in Sect. 3. In Sect. 4 age constraints inferred from the spectral energy distributions are discussed. Hydrodynamical age constraints from observations of large expanding shells of ionized gas are considered in Sect. 5. We summarize our results in Sect. 6.
## 2 Observational data
### 2.1 Photometric data
Our main goal here is to study the evolutionary status of I Zw 18, the integrated photometric properties of which are shown in Table 1, as a whole (i.e. including both the main body and the C component) by means of colour-magnitude diagrams (CMD). To construct the CMDs, we use archival deep $`B`$ (F450W), $`V`$ (F555W) and $`R`$ (F702W) HST observations (PI: Dufour, GO-5434, November 1994) where both the main body and the C component were imaged with the WFPC2 camera. Additionally, for the measurement of the total magnitudes (Table 1), we use $`U`$ (F336W) and $`I`$ (F814W) PC1 images of the main body (PI: Hunter, GO-5309, November 1994). $`U`$ and $`I`$ observations of the C component have not yet been done by HST. Images reduced by the standard pipeline at the STScI were retrieved from the HST archives. They are listed in Table 2. The next steps of data reduction included combining all observations with the same filter, removal of cosmic rays using the IRAF routine CRREJ (IRAF, the Image Reduction and Analysis Facility, is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc.
(AURA) under cooperative agreement with the National Science Foundation, NSF), and correction for geometric distortion by producing mosaic images. The latter procedure introduces small photometric corrections of $`\sim `$ 0.02 mag for objects near the edge of the CCD chip (Holtzman et al. 1995a), and the corrections are even smaller for objects located far from the edge, like the main body. Figure 1 shows the HST WFPC2 image of I Zw 18 in the $`R`$ band. A filamentary low surface brightness (LSB) emission pattern extending far away from the main body of I Zw 18 can be seen. A number of distinct super-shells are visible within this LSB envelope of the main body, testimony to a large-scale perturbation of the surrounding gaseous component and suggesting that emission by ionized gas contributes a significant fraction of the integrated luminosity of the system. This is obviously not the case for the C component, in which weak H$`\alpha `$ emission is barely seen and is localized in a compact region (Dufour et al. 1996b). To construct the CMDs, we use the DAOPHOT package in IRAF. The point-spread function in each broad-band filter is obtained with the PSF routine in an interactive mode, by examining and extracting 5 – 10 stars in relatively uncrowded fields. The photometry of point sources is done with a 2-pixel radius circular aperture. Instrumental magnitudes are then converted to magnitudes within an aperture of radius 0$`\stackrel{\prime \prime }{.}`$5, adopting corrections of –0.19 mag in $`B`$ and $`V`$ and –0.21 mag in $`R`$ (Holtzman et al. 1995a). They are finally transformed to magnitudes in the standard Johnson-Cousins $`UBVRI`$ photometric system using the prescriptions of Holtzman et al. (1995b).
### 2.2 Spectroscopic data
Spectroscopic observations of I Zw 18 were carried out with the MMT on the nights of 1997 April 29 and 30. Signal-to-noise ratios S/N $`\sim `$ 50 were reached in the continuum of the bright central part. Observations were made in the blue channel of the MMT spectrograph using a highly optimized Loral 3072 $`\times `$ 1024 CCD detector. A 1$`\stackrel{\prime \prime }{.}`$5 $`\times `$ 180″ slit was used along with a 300 groove mm$`^{-1}`$ grating in first order and an L-38 second-order blocking filter. This yields a spatial resolution along the slit of 0$`\stackrel{\prime \prime }{.}`$3 pixel$`^{-1}`$, a scale perpendicular to the slit of 1.9 Å pixel$`^{-1}`$, a spectral range of 3600 – 7500 Å, and a spectral resolution of $`\sim `$ 7 Å (FWHM). For these observations, CCD rows were binned by a factor of 2, yielding a final spatial sampling of 0$`\stackrel{\prime \prime }{.}`$6 pixel$`^{-1}`$. The total exposure time was 180 minutes, broken up into six subexposures of 30 minutes each to allow for a more effective cosmic-ray removal. All exposures were taken at small airmasses ( $`\sim `$ 1.1 – 1.2 ), so no correction was made for atmospheric dispersion. The seeing during the observations was 0$`\stackrel{\prime \prime }{.}`$7 FWHM. The slit was oriented at position angle P.A. = –41° to permit observations of the NW and SE components and the SE tip of the C component simultaneously. The spectrophotometric standard star HZ 44 was observed for flux calibration. Spectra of He-Ne-Ar comparison lamps were obtained before and after each observation to provide wavelength calibration. Data reduction of the spectral observations was carried out at the NOAO headquarters in Tucson using the IRAF software package. This included bias subtraction, cosmic-ray removal and flat-field correction using exposures of a quartz incandescent lamp.
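The calibration chain just described (aperture correction, zero point, colour term) can be summarized schematically as follows. The zero points and colour-term coefficients below are placeholders, not the Holtzman et al. (1995b) values; only the aperture corrections are taken from the text.

```python
# Schematic instrumental -> Johnson-Cousins calibration for the WFPC2
# photometry described above.  ZP and colour-term values are placeholders,
# NOT the Holtzman et al. (1995b) coefficients.
AP_CORR = {"B": -0.19, "V": -0.19, "R": -0.21}   # 2-pixel -> 0.5" aperture
ZP      = {"B": 21.0, "V": 21.7, "R": 22.0}      # hypothetical zero points
CT      = {"B": 0.10, "V": -0.05, "R": 0.03}     # hypothetical colour terms

def calibrate(m_inst, band, colour):
    """m_std = m_inst + aperture corr. + zero point + colour term * colour."""
    return m_inst + AP_CORR[band] + ZP[band] + CT[band] * colour

# Colour terms need the standard colour itself, so iterate to convergence:
def bv_standard(b_inst, v_inst, n_iter=10):
    bv = b_inst - v_inst
    for _ in range(n_iter):
        B = calibrate(b_inst, "B", bv)
        V = calibrate(v_inst, "V", bv)
        bv = B - V
    return B, V
```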
After wavelength mapping, night-sky background subtraction, and correcting for atmospheric extinction, each frame was calibrated to absolute fluxes. One-dimensional spectra were extracted by summing, without weighting, different numbers of rows along the slit depending on the exact region of interest. We have extracted spectra of two regions: (1) the brightest part of the main body, with size 1$`\stackrel{\prime \prime }{.}`$5 $`\times `$ 8$`\stackrel{\prime \prime }{.}`$5, centered on the NW component, and (2) the southeastern tip of the C component within an aperture of 1$`\stackrel{\prime \prime }{.}`$5 $`\times `$ 3″. The observed and extinction-corrected emission-line intensities in the main body of I Zw 18 are listed in Table 3. The ionic and elemental abundances have been derived following the procedure of Izotov et al. (1994, 1997c). The extinction coefficient $`C`$(H$`\beta `$) and the absorption equivalent width $`EW`$(abs) for the hydrogen lines, obtained by an iterative procedure, are included in Table 3, together with the observed flux $`F(\mathrm{H}\beta )`$ and the equivalent width $`EW(\mathrm{H}\beta )`$ of the H$`\beta `$ emission line. The electron temperature $`T_e`$(O III) was determined from the \[O III\] $`\lambda `$4363/($`\lambda `$4959 + $`\lambda `$5007) flux ratio and the electron number density $`N_e`$(S II) from the \[S II\] $`\lambda `$6717/$`\lambda `$6731 flux ratio. The ionic and elemental abundances are shown in Table 4, together with the ionization correction factors (ICFs). They are in good agreement with the abundances derived by Skillman & Kennicutt (1993), Izotov & Thuan (1998), Vílchez & Iglesias-Páramo (1998) and Izotov et al. (1999).
## 3 Stellar population ages from colour-magnitude diagrams
The superior spatial resolution of HST WFPC2 images, combined with the proximity of I Zw 18, makes it possible to resolve individual bright stars and study stellar populations in this galaxy by means of colour-magnitude diagrams. Such a study has been done by Hunter & Thronson (1995) for the main body in the $`U`$, $`V`$ and $`I`$ bands. Their photometry shows a broad main sequence of massive stars and blue and red supergiants. The NW component contains the brightest and reddest, presumably most evolved stars, spanning a range of 2 – 5 Myr in age. The stars in the SE component are likely to be even younger. Dufour et al. (1996b) were able to resolve the C component into stars and found it to be older than the NW and SE components. On the basis of $`(B-V)`$ vs. $`V`$ colour-magnitude diagrams, they concluded that star formation started in the main body several tens of Myr ago and in the C component several hundred Myr ago. In this section we re-analyze the observations by Dufour et al. (1996b) and re-examine their conclusions concerning the age of the stellar content of I Zw 18. In order to compare observed to theoretical CMDs and derive ages, a precise distance to I Zw 18 is needed, which we discuss next.
### 3.1 The distance to I Zw 18
For the nearest galaxies, distances can be derived from colour-magnitude diagrams themselves by measuring the apparent magnitude of the tip of the red giant branch clump (e.g. Schulte-Ladbeck, Crone & Hopp 1998; Lynds et al. 1998). However, red giants are too faint to be seen in more distant galaxies (such as I Zw 18), and other methods must be used. Any such distance determination should always be checked for consistency, i.e.
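For reference, the $`T_e`$(O III) step can be reproduced with a standard nebular-analysis package. The sketch below uses PyNeb, which is not the tool used in the paper (the abundances here follow Izotov et al. 1994, 1997c), and the input line ratio is a placeholder rather than a Table 3 value.

```python
# Electron temperature from the [O III] 4363/(4959+5007) diagnostic.
# PyNeb is used purely for illustration; the flux ratio is a placeholder.
import pyneb as pn

O3 = pn.Atom('O', 3)
ratio = 0.012      # hypothetical I(4363) / (I(4959) + I(5007))
Te = O3.getTemDen(ratio, den=100.0,
                  to_eval='L(4363) / (L(4959) + L(5007))')
print("T_e(O III) = %.0f K" % Te)
```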
galaxy properties derived from the CMD analysis, such as the age of the stellar populations or the luminosities of the brightest stars, should be compatible with other known observed characteristics of the galaxy. A distance of 10 Mpc to I Zw 18 has generally been adopted by previous authors (Hunter & Thronson 1995, Dufour et al. 1996b and Aloisi et al. 1999). This assumes that the observed heliocentric radial velocity of the galaxy, $`\sim `$ 740 km s$`^{-1}`$, is pure Hubble flow velocity, and a Hubble constant $`H_0`$ = 75 km s$`^{-1}`$ Mpc$`^{-1}`$. Adopting this distance would lead to a conflict with the well-observed ionization state of I Zw 18. At 10 Mpc the brightest stars observed in the main body and in the C component have absolute $`V`$ magnitudes fainter than –8 and –6 mag respectively (Hunter & Thronson 1995; Dufour et al. 1996b; Aloisi et al. 1999). If that is the case, comparison with evolutionary tracks implies that the most massive stars in the main body and C component would have masses less than 15 $`M_{\odot }`$ and 9 $`M_{\odot }`$ respectively (Dufour et al. 1996b; Aloisi et al. 1999). According to Vacca (1994), O stars have masses exceeding 13 $`M_{\odot }`$. Thus, only very late O stars would be present in the main body. This conclusion is in severe contradiction with the observed ionization state of I Zw 18. Indeed, the equivalent width of the H$`\beta `$ emission line expected from a stellar population having an upper mass limit of 15 $`M_{\odot }`$ is $`<`$ 10 Å, while the observed H$`\beta `$ equivalent widths in the NW and SE components lie in the range 60 – 130 Å, implying the presence of stars with masses $`\sim `$ 40 – 50 $`M_{\odot }`$. If the upper stellar mass limit of 9 $`M_{\odot }`$ derived by Dufour et al. (1996b) and Aloisi et al. (1999) for the C component is correct, then ionized gas should not be present in this component because of the lack of O and early B stars. But H$`\alpha `$ and H$`\beta `$ are clearly observed (Dufour et al. 1996a,b; Izotov & Thuan 1998; van Zee et al. 1998; this paper) in the C component. Izotov & Thuan (1998) derived $`EW`$(H$`\beta `$) = 6 Å, which, after correction for underlying stellar absorption, would result in a value as high as 10 Å. Thus, late O and early B stars with masses as high as 15 $`M_{\odot }`$ must be postulated in the C component. We argue therefore that the stellar absolute magnitudes derived by Dufour et al. (1996b) and Aloisi et al. (1999) from their CMDs are too faint because they are based on too small an adopted distance. The real distance to I Zw 18 appears to be considerably larger. Correction of the I Zw 18 heliocentric radial velocity to the centroid of the Local Group and for a Virgocentric infall motion of 300 km s$`^{-1}`$ (Kraan-Korteweg 1986) gives a velocity of 1114 km s$`^{-1}`$. If the Hubble constant is in the currently accepted range of 55 – 70 km s$`^{-1}`$ Mpc$`^{-1}`$, this would give a distance between $`\sim `$ 16 and $`\sim `$ 20 Mpc for I Zw 18. Because of uncertainties introduced by the statistical nature of the Virgocentric infall correction and those in the value of the Hubble constant, we prefer to base our estimate of the distance to I Zw 18 on two firm observational results: 1) the presence of ionized gas in the C component and 2) the presence of WR stars in the NW component. Concerning the ionized gas in the C component, the distance should be increased to a value of $`\sim `$ 20 Mpc.
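The arithmetic behind this distance bracket, and the effect of doubling the adopted distance on the absolute magnitudes, is elementary and worth making explicit:

```python
# Distance bracket from the corrected radial velocity, and the shift in
# absolute magnitudes implied by moving I Zw 18 from 10 Mpc to 20 Mpc.
import math

v_corr = 1114.0                      # km/s, Local-Group/Virgo-corrected
for H0 in (70.0, 55.0):              # km/s/Mpc
    print("H0 = %.0f -> D = %.1f Mpc" % (H0, v_corr / H0))

d_old, d_new = 10.0, 20.0
dM = 5.0 * math.log10(d_new / d_old)
print("Delta(distance modulus) = %.2f mag, a factor of 4 in luminosity" % dM)
```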
Increasing the distance by a factor of 2 would make the most massive stars more luminous by a factor of 4 and push the mass upper limit to $`\sim `$ 15 $`M_{\odot }`$. These more massive stars would then provide enough ionizing photons to account for the observed emission lines in the C component. Concerning the WR stars, they have been seen both spectroscopically (Izotov et al. 1997a; Legrand et al. 1997) and photometrically (Hunter & Thronson 1995; de Mello et al. 1998) in the NW component of I Zw 18, in the region where the brightest post-main-sequence stars are located. The existence of WR stars implies the presence of very massive stars in the NW component. De Mello et al. (1998), using Geneva stellar evolutionary tracks for massive stars with enhanced stellar wind and with heavy element mass fraction $`Z`$ = 0.0004, found the minimum initial mass $`M_{min}`$ for stars evolving to WR stars to be $`\sim `$ 90 $`M_{\odot }`$, and that the WR stage is very short-lived, being only $`\sim `$ 0.8 Myr, if the instantaneous burst model is adopted. The models used by de Mello et al. (1998) do not take rotation into account, which may decrease $`M_{min}`$, but probably by not more than a factor of 1.5 (Langer & Heger 1998; Meynet 1998). Thus the observation of WR stars in the NW component implies that post-main-sequence massive stars with $`M`$ $`\sim `$ 40 – 60 $`M_{\odot }`$ and with lifetimes $`\sim `$ 3 – 5 Myr (Fagotto et al. 1994; Meynet et al. 1994) must be present in I Zw 18. This short time scale is in excellent agreement with the age of 5 Myr derived from the equivalent width of H$`\beta `$ in the main body (Table 3), using the calibration by Schaerer & Vacca (1998). To accommodate this small time scale, we are forced again to increase the distance of I Zw 18 to $`\sim `$ 20 Mpc. Increasing the distance would increase the luminosity of all stars, and the location of the brightest stars in the CMD of the main body can then be accounted for by isochrones with ages as short as $`\sim `$ 5 Myr. In summary, three different lines of argument — a non-Hubble-flow velocity component, the ionization state of I Zw 18 and the presence of WR stars in its NW component — have led us to conclude that I Zw 18 is twice as distant as thought before. We shall thus adopt a distance of 20 Mpc to I Zw 18. At this distance, 1″ = 97 pc.
### 3.2 The resolved component and colour-magnitude diagrams
In Fig. 2 we show the spatial distribution of the $`(V-R)`$ colours of point sources along the main body for increasing photometric uncertainties $`\sigma `$. Panels (a) to (d) show CMD data points with successively larger photometric errors, but smaller than the $`\sigma `$ value given in each panel. The origin is taken to be at the south-eastern tip of the main body. The boundaries of the SE and NW components are shown by vertical lines. In the most crowded fields of the SE and NW starburst components, $`V`$ and $`R`$ magnitudes are measured with a precision better than 0.15 mag only for very few stars (Fig. 2a). However, if the upper limits are greater than $`\sim `$ 0.25 mag, then almost all stars in these two components are recovered, as suggested by the similarity of the distributions of stars in panels (c) and (d). Figure 3 shows the distribution of photometric errors as a function of apparent $`B`$, $`V`$ and $`R`$ stellar magnitudes for the C component (Figs. 3a – 3c) and the main body (Figs. 3d – 3f). The photometric errors in the $`V`$ band at a fixed magnitude are similar to those derived by Hunter & Thronson (1995).
In the main body, the errors remain rather large even for the brightest stars ( $`\sigma `$ $`\sim `$ 0.2 mag at $`V`$ $`\sim `$ 22 mag ). This is due to the spatially varying background in the highly crowded region of the main body. These photometric errors are especially large at the faint magnitudes of red evolved stars ( $`\sigma `$ $`\sim `$ 0.4 mag at $`V`$ $`\sim `$ 28 mag ). By contrast, the photometric errors in all filters are lower for the sources in the C component, being $`\sim `$ 0.1 mag at $`V`$ $`\sim `$ 25 mag and increasing to $`\sim `$ 0.3 mag at $`V`$ $`\sim `$ 28 mag, because stellar crowding and contamination of the stellar background by the ionized gas emission are smaller. In the following CMD analysis, we shall consider only point sources which are brighter than 27 mag in each band and those for which the photometric errors do not exceed 0.25 mag. For the C component, each of these two conditions selects out nearly the same point sources, as 27 mag sources have a mean error of $`\sim `$ 0.25 mag. However, in the main body, some bright sources are rejected because they have photometric errors larger than 0.25 mag (Fig. 3). The correction for internal extinction poses a problem. The extinction for the main body can be estimated from the optical emission-line spectrum. However, different measurements give somewhat different values for the extinction. Dufour et al. (1996b) have adopted a value $`A_V`$ = 0.25 mag based on the spectroscopic observations of Skillman & Kennicutt (1993). The value of $`C`$(H$`\beta `$) = 0.09 derived in this paper (Table 3) for the main body translates into a mean $`A_V`$ = 0.19 mag. The extinction may differ in the NW and SE components, and it is not possible to estimate it in the C component because the Balmer hydrogen emission lines in its spectrum are not detected with a high enough signal-to-noise ratio. It is worth noting that recent studies reveal that dust formation may proceed very efficiently in a low-metallicity starburst environment and cause a highly inhomogeneous extinction pattern. Thuan, Sauvage & Madden (1999b) have analyzed ISO observations of the second most metal-deficient galaxy known, SBS 0335–052, and found that an intrinsic extinction of up to $`A_V`$ = 19 – 21 mag is required around some stellar complexes to account for the galaxy’s spectral energy distribution in the mid-infrared. The mean extinction in SBS 0335–052 derived from the optical spectroscopic observations is much lower ( $`A_V`$ $`\sim `$ 0.5 mag, Izotov et al. 1997b), although it varies significantly along the slit. Thuan et al. (1999b) found that as much as 3/4 of the massive stars in SBS 0335–052 may be hidden by dust. Keeping in mind these findings, we adopt, for simplicity and for lack of more information, a spatially constant extinction of $`A_V`$ = 0.19 mag, $`E(B-V)`$ = 0.06 mag and $`E(V-R)`$ = 0.04 mag for the main body of I Zw 18 (Table 3), while the extinction for the C component is taken to be zero. An inhomogeneous and locally much higher extinction than the value adopted above would evidently result in a larger dispersion in the CMDs and in increased colour indices, i.e. in an overestimate of the age of I Zw 18. We show the $`(B-V)`$ vs. $`V`$ diagram for the C component in Fig. 4a and that for the main body in Fig. 4b. Solid lines represent theoretical isochrones (Bertelli et al. 1994), for a heavy element mass fraction $`Z`$ = 0.0004, labeled according to the logarithm of age. The largest stellar mass associated with each isochrone is also given in parentheses.
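The selection and dereddening steps described above translate directly into code; a sketch with NumPy, where the array names and the distance modulus for $`D`$ = 20 Mpc are ours:

```python
# CMD selection and dereddening as described above: keep sources brighter
# than 27 mag with photometric errors below 0.25 mag, then correct the
# main-body photometry for A_V = 0.19, E(B-V) = 0.06, E(V-R) = 0.04.
import numpy as np

DM = 5 * np.log10(20e6) - 5          # distance modulus at 20 Mpc, ~31.5 mag

def select(mag, err, mag_lim=27.0, err_lim=0.25):
    return (mag < mag_lim) & (err < err_lim)

def deredden_main_body(V, B, R):
    A_V, E_BV, E_VR = 0.19, 0.06, 0.04
    V0 = V - A_V
    BV0 = (B - V) - E_BV
    VR0 = (V - R) - E_VR
    return V0 - DM, BV0, VR0         # absolute magnitude and colours
```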
It may be seen that there exists a well-defined main sequence in the C component, with a turn-off indicative of an age of $`\sim `$ 15 Myr. There is a very bright point source with $`V`$ $`\sim `$ 22 mag, which may in fact be a compact star cluster, as suggested by Dufour et al. (1996b). Although no red sources with $`(B-V)`$ $`\sim `$ 1 are seen, there is a group of relatively redder points with 0.2 $`\lesssim `$ $`(B-V)`$ $`\lesssim `$ 0.6 in Fig. 4a. Because of their faint magnitudes ( $`V`$ $`\sim `$ 26 ), some of the red sources may be attributed to photometric uncertainties ( $`\sigma `$ $`\sim `$ 0.2 mag at $`B`$ and $`V`$ $`\sim `$ 26, resulting in $`\sigma (B-V)`$ $`\sim `$ 0.3 mag, Fig. 3 ). The fact that there are also points that scatter to the blue at that faint magnitude would support that hypothesis. However, even if we do accept that all these faint red stars are real, comparison with theoretical isochrones says that their age cannot exceed 100 Myr. As for the main body, the main sequence turn-off at $`V`$ $`\sim `$ 22 mag implies an age of $`\sim `$ 5 Myr ( Fig. 4b ). Red stars ( $`(B-V)`$ $`\gtrsim `$ 0.2 ) are seen to possess a large range of absolute magnitudes ( $`\sim `$ 3 – 4 mag ), implying that star formation has been ongoing in the main body over the last $`\sim `$ 15 – 30 Myr. This conclusion is essentially the same as that by Dufour et al. (1996b), except for differences introduced by the larger distance to I Zw 18, which results in smaller ages. We note that the spread of the points in the $`(B-V)`$ vs. $`V`$ diagram of the main body (Fig. 4b) is larger than the one in the CMD of the C component. It is probably not only due to evolutionary effects, but also to larger photometric uncertainties in the main body because of more crowding and contamination from gaseous emission, as discussed above (cf. Fig. 3). $`(V-R)`$ vs. $`R`$ CMD diagrams for the C component and the main body, along with theoretical isochrones from Bertelli et al. (1994) for $`Z`$ = 0.0004 (solid lines), are shown in Fig. 5. As in Fig. 4, the logarithm of age and the maximum stellar mass are given for each isochrone. Figure 5a shows a well-defined main sequence for component C corresponding to an age of $`\sim `$ 15 Myr. Several red and faint ( $`V`$ $`\sim `$ 25 mag ) stars with $`(V-R)`$ $`\sim `$ 1.1 mag are seen. They probably are red supergiants and/or massive young luminous asymptotic giant branch (AGB) stars, $`\sim `$ 2 mag brighter than the older AGB stars observed in the BCD VII Zw 403 (Lynds et al. 1998; Schulte-Ladbeck et al. 1998) and Local Group galaxies (e.g. Gallart, Aparicio & Vílchez 1996). The location of these red stars in the CMD suggests an age of $`\sim `$ 100 Myr. We conclude that star formation in the C component probably began $`\sim `$ 100 Myr ago and finished $`\sim `$ 15 Myr ago. The spread of points in the main body ( Fig. 5b ) is similar to that in the $`(B-V)`$ vs. $`V`$ CMD (Fig. 4). Again, red stars in the main body span a range of 3 – 4 magnitudes in brightness, suggesting that star formation has occurred during the last $`\sim `$ 15 – 30 Myr. The age of $`\sim `$ 5 Myr derived for the brightest stars in the main body is in agreement with that inferred from spectral observations ( see Sect. 4 ). Our derived upper age limit of $`\sim `$ 100 Myr for I Zw 18 differs by a whole order of magnitude from the age of up to 1 Gyr derived by Aloisi et al. (1999) in their CMD analysis of the archival $`B`$, $`V`$, $`I`$ HST data. This large age difference comes mostly from the doubling of the distance to I Zw 18. Aloisi et al.
(1999) found an appreciable number of faint red sources with $`V`$ = 26 – 27 mag and $`(V-I)`$ = 1 – 2 mag. With our new distance of 20 Mpc, these sources have an absolute $`I`$ magnitude of –6.5 mag, i.e. 2 mag brighter than the absolute $`I`$ magnitude of the older and fainter AGB stars detected in the BCD VII Zw 403 (Lynds et al. 1998; Schulte-Ladbeck et al. 1998). Because of their high luminosities, these red stars cannot be interpreted as old AGB stars, as proposed by Aloisi et al. (1999). Rather, it is more likely that they are young stars. At the new distance, the location in the $`(V-I)`$ vs. $`V`$ diagram of the faint red stars seen by Aloisi et al. (1999) is consistent with an age of $`\sim `$ 50 Myr if the Geneva tracks are used, and of $`\sim `$ 100 Myr if the Padova tracks are used. These red stars, with $`R`$ = 25 – 26 mag and $`(V-R)\sim 0.6`$, are also seen by us in the $`(V-R)`$ vs. $`R`$ CMD (Fig. 5b), and their age is again consistent with $`\sim `$ 50 Myr as inferred from the Padova isochrones. Aloisi et al. (1999) restricted their CMDs to points with photometric errors smaller than $`\sigma `$ = 0.2 mag. With such a cut, those authors found that the faint red stars are located mainly in the southeastern part of the main body. Adopting the same cut, we come to the same conclusion, as can be seen in Figure 2b, where these stars are shown by open circles. They are located mainly in the relatively uncrowded region between the SE and NW components. However, if a slightly larger cut of $`\sigma `$ = 0.25 mag is used instead, the distribution of faint red stars in the main body becomes more uniform (Figure 2c). The ages of $`\sim `$ 100 Myr derived here for the faint red stars should be considered as upper limits: if these sources are subject to locally high extinction, their ages will be smaller. In summary, the upper ages derived for point sources from the colour-magnitude diagrams of both the main body and the C component of I Zw 18 do not exceed 100 Myr. Figure 6 shows the radial colour distribution of point sources along the body of the C component, with the origin taken at the southeastern tip. While the $`(B-V)`$ colour does not show a gradient (Fig. 6a), there is a weak trend for the $`(V-R)`$ colour of the stars to become redder away from the southeastern tip towards the northwestern tip of the C component. This trend was also discussed by Dufour et al. (1996b) and may reflect propagating star formation in the C component from the NW to the SE. The dots in Fig. 6 show the colours of the unresolved stellar continuum of the C component down to a surface brightness limit of 25 $`R`$ mag arcsec<sup>-2</sup>, averaged within regions of size 0″.4 $`\times `$ 0″.4. The mean colours of the diffuse component were determined to be $`<B-V>`$ = +0.05 mag and $`<V-R>`$ = +0.16 mag, with a standard deviation of 0.01 mag around the mean. A colour gradient in the diffuse stellar continuum is more evident in $`(B-V)`$ than in $`(V-R)`$, amounting to (0.15$`\pm `$0.03) mag kpc<sup>-1</sup> and (0.05$`\pm `$0.025) mag kpc<sup>-1</sup>, respectively. The diffuse stellar continuum distribution follows very closely that of the resolved point sources, although a small difference between the $`(V-R)`$ colours of the resolved and unresolved components exists in the southeastern part.
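The absolute magnitudes used in the comparison with Aloisi et al. (1999) above follow from simple distance-modulus arithmetic (a quick consistency check that merely restates the numbers already quoted):

$$m-M=5\mathrm{log}_{10}(d/10\mathrm{pc})=5\mathrm{log}_{10}(2\times 10^6)\approx 31.5\mathrm{mag},$$

so a source with $`V=26.5`$ and $`(V-I)=1.5`$ has $`I\approx 25`$ and hence $`M_I\approx 25-31.5=-6.5`$ mag, as stated above.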
These results imply that, contrary to the majority of BCDs (e.g. Papaderos et al. 1996a), in which the star-forming regions are immersed in an extended and much older stellar LSB envelope, the blue unresolved stellar continuum in the C component is of an age comparable to, or even coeval with, that of the population of resolved sources discussed in this section. The integrated photometric properties of the unresolved components in both the main body and the C component are listed in Table 5. They are derived from the total luminosity of each component after the summed emission of the point-like sources has been subtracted. The quantities listed in Table 5 referring to the main body of I Zw 18 are corrected for intrinsic extinction. Note that while the emission of the resolved component in the main body of I Zw 18 is contributed by stellar sources, the unresolved emission includes both stellar and gaseous contributions. It follows from Table 5 that most of the light comes from the unresolved component. The contribution of resolved sources to the total light of both the main body and the C component is $`\sim `$ 25%.

## 4 Synthetic spectral energy distribution

As discussed before, the stellar emission in the main body of I Zw 18 is strongly contaminated by the emission of ionized gas from the supergiant H II regions. Therefore, to derive the age of the BCD's stellar populations, synthetic spectral energy distributions which include both stellar and ionized gaseous emission need to be constructed. By contrast, the contamination of the light of the C component by gaseous emission is small, and the photometric and spectral data give us direct information on its stellar populations. To analyze the stellar populations in the young ionizing clusters of the main body, we use SEDs calculated by Schaerer (1998, private communication) for a heavy element mass fraction $`Z`$ = $`Z_{\odot }`$/20 (SEDs with lower metallicity are not available) and ages in the range $`t`$ = 2 – 10 Myr. As for the C component, the absence of strong ionized gas emission implies that its stellar population is older than 10 Myr. We have therefore calculated a grid of SEDs for stellar populations with ages between 4 Myr and 20 Gyr and a heavy element abundance $`Z/Z_{\odot }`$ = 10<sup>-2</sup>, using the stellar isochrones of Bertelli et al. (1994) and the compilation of stellar atmosphere models of Lejeune, Cuisinier & Buser (1998). An initial mass function (IMF) with a Salpeter slope of –2.35, an upper mass limit of 100 $`M_{\odot }`$ and a lower mass limit of 0.6 $`M_{\odot }`$ is adopted. The observed gaseous spectral energy distribution is then added to the calculated stellar spectral energy distribution, its contribution being determined by the ratio of the observed equivalent width of the H$`\beta `$ emission line to the one expected for pure gaseous emission. To calculate the gaseous continuum spectral energy distribution, the observed H$`\beta `$ flux and the electron temperature have been derived from the spectrum at each point along the slit. The contributions of bound-free, free-free and two-photon continuum emission have been taken into account for the spectral range from 0 to 5 $`\mu `$m (Aller 1984; Ferland 1980). Emission lines are superposed on top of the gaseous continuum SED with intensities derived from the spectra in the range $`\lambda `$3700 – 7500 Å. Outside this range, the intensities of the emission lines (mainly hydrogen lines) have been calculated from the extinction-corrected intensity of H$`\beta `$.
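The weighting of the gaseous contribution described above can be written compactly; a minimal sketch (in Python — the function name and the normalization convention are our own illustration of the procedure, not the authors' code):

```python
import numpy as np

def composite_sed(f_star, f_gas, ew_obs, ew_pure):
    """Combine stellar and gaseous SEDs on a common wavelength grid.

    f_star, f_gas : stellar and gaseous continua, each normalized
                    to unity at the wavelength of H-beta.
    ew_obs        : observed H-beta equivalent width.
    ew_pure       : H-beta equivalent width expected for pure
                    gaseous emission.

    The gaseous fraction of the continuum at H-beta is the ratio
    ew_obs / ew_pure, as described in the text.
    """
    w = ew_obs / ew_pure
    return (1.0 - w) * f_star + w * f_gas
```

The weight follows because the H$`\beta `$ line flux is fixed by observation, so the observed equivalent width scales inversely with the total continuum while the pure-gas equivalent width scales inversely with the gaseous continuum alone.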
### 4.1 The main body

Fig. 7 shows the observed spectrum of the main body (NW + SE components) along with the synthesized gaseous and stellar continuum for a composite stellar population of ages 2 and 5 Myr, contributing 40% and 60% of the total mass respectively, in the approximation of an instantaneous burst of star formation. The line intensities, electron temperature, electron number density and heavy element abundances are taken from Tables 3 and 4. Comparison with synthesized SEDs of different ages shows that there is good agreement with the observed SED only for an age of 5 Myr. However, the synthesized SED then lies systematically below the observed SED in the blue part of the spectrum, at $`\lambda \lesssim 4000`$ Å. Hence, an additional younger stellar population with an age of $`\sim `$ 2 Myr is required. We conclude that the emission of the main body is dominated by a very young stellar population with an age of $`\sim `$ 5 Myr. This age is considerably shorter than the 15 – 20 Myr derived by Aloisi et al. (1999) for their postulated second burst of star formation, and much shorter still than the 30 Myr – 1 Gyr time scale for the first episode of star formation. The contribution of the ionized gas to the SED of the star-forming regions is not as large in I Zw 18 as in SBS 0335–052 (Papaderos et al. 1998), as can be seen from the equivalent width of H$`\beta `$ and the observed intensities of the other emission lines relative to H$`\beta `$. In the brightest part of the main body, the equivalent width of the H$`\beta `$ emission line is only 68 Å (Table 4), as compared to $`\sim `$ 200 Å in SBS 0335–052.

### 4.2 The C component

Fig. 8 shows the spectrum of the southeastern part of the C component. It is fitted very well by a single stellar population with an age of $`\sim `$ 15 Myr. This is in excellent agreement with the age derived earlier from the main-sequence turn-off in the CMD of the C component, supporting our large adopted distance of 20 Mpc to I Zw 18. To illustrate the sensitivity of the synthesized SEDs to age, we also show in Fig. 8 the SED of a stellar population with an age of 40 Myr, the value adopted by Dufour et al. (1996b). The agreement is not as good, the synthesized fluxes being systematically smaller than the observed fluxes for $`\lambda \lesssim 5000`$ Å. We conclude that the evolutionary synthesis models further constrain the age of the C component to the range 15 – 40 Myr.

### 4.3 A 10 Gyr old stellar population?

Although there is compelling evidence that the stellar emission in I Zw 18 (in both the main body and the C component) is due to stellar populations with ages $`\lesssim `$ 100 Myr, the presence of a very faint older stellar population with an age of $`\sim `$ 10 Gyr cannot be definitely ruled out, since such a population would not be visible in CMDs reaching $`V\sim 27.5`$ mag and would have no detectable effect on the photometric and spectroscopic data. Figure 9 shows composite SEDs resulting from a mixture of a young 15 Myr and an old 10 Gyr stellar population. Each SED is labeled by the mass fraction of the old stellar population. The SED with comparable masses of the young and old stellar populations (labeled 0.5) is indistinguishable from that of a pure young stellar population (labeled 0). However, SEDs of composite populations in which the mass of the old stellar population is 10 times greater than that of the younger stellar component can be excluded, as they are systematically redder than the observed spectrum for $`\lambda \gtrsim 5500`$ Å.
We therefore cannot exclude an underlying very old stellar population with a mass comparable to that of the young stellar population in the C component. It would, however, have to be spatially coincident with the young stellar population, because the colours are nearly constant. This is very unlikely, as photometric studies of other BCDs (e.g. Papaderos et al. 1996a,b) show redder colours towards the outer parts, implying that the old stellar population is spatially distinct from and more extended than the young one. In summary, there is no need for a stellar population older than $`\sim `$ 100 Myr to account for the photometric and spectrophotometric properties of I Zw 18. Deep near-infrared photometric observations are needed to put stronger constraints on an older stellar population.

## 5 Age constraints from the ionized gas shell structures

Many shell structures are seen in the H$`\alpha `$ images of I Zw 18 (Fig. 1), produced by the combined effect of stellar winds from massive stars and of supernovae. Knowledge of the radii, the number of massive stars, the ambient density, the geometry of the gas distribution, and the properties of stellar winds from low-metallicity massive stars and of supernovae allows one, in principle, to estimate the age of the stellar populations responsible for these large-scale structures. However, this knowledge is still rather uncertain, and stellar population ages estimated from shell structures should be considered more qualitative than quantitative. If the largest shells were produced by the evolution of the NW and SE components, then the shell structures would have radii $`\sim `$ 1200 pc. However, the centers of symmetry of the large shells do not coincide with the NW component. It is more likely that the large shells were produced by older stellar clusters, as suggested by Dufour et al. (1996b), in which case the structures would have radii of $`\sim `$ 400 – 800 pc. The radius of a superbubble produced by a population of stars with stellar winds and by supernovae is given, respectively, by (McCray & Kafatos 1987): $$R_S=269\mathrm{pc}(N_WL_{38}/n_0)^{1/5}t_7^{3/5},$$ (1) $$R_S=97\mathrm{pc}(N_{SN}E_{51}/n_0)^{1/5}t_7^{3/5},$$ (2) where $`R_S`$ is in pc, $`N_W`$ is the number of stars with stellar winds, $`L_{38}`$ = $`L_W`$/(10<sup>38</sup> ergs s<sup>-1</sup>), $`L_W`$ being the mechanical luminosity of a single stellar wind, $`t_7`$ = $`t`$/(10<sup>7</sup> yr), $`n_0`$ is the density of the ambient gas, $`N_{SN}`$ is the number of supernovae, and $`E_{51}`$ = $`E_{SN}`$/(10<sup>51</sup> ergs), $`E_{SN}`$ being the mechanical energy of a single supernova. Consider first the stellar wind mechanism. The mechanical luminosity of the stellar wind of a single star in our Galaxy is $`L_{38}\sim 1`$ (Abbott, Bieging & Churchwell 1981). Taking into account the decrease of wind efficiency with metallicity as $`Z^{0.5}`$ (Maeder & Meynet 1994), we derive $`L_{38}`$ = 0.14 for I Zw 18’s metallicity (note that Hunter & Thronson (1995) used a value three times larger). Guseva, Izotov & Thuan (1999) derived a number of 66 WR stars in I Zw 18. This number should be considered a lower limit to the number of stars with stellar winds, since massive O stars can also contribute. The number density of the ambient neutral gas $`n_0`$ can be estimated from the H I column densities (van Zee et al. 1998).
It varies from $`\sim `$ 1 cm<sup>-3</sup> in the central part of the main body, where the inner shell is observed, to $`\sim `$ 0.05 cm<sup>-3</sup> at a distance of $`\sim `$ 1 kpc from the main body, where the largest shells are seen. We adopt two limiting values, $`n_0`$ = 0.1 and 1 cm<sup>-3</sup>, within the range of number densities discussed by Martin (1996). The WR stage in an instantaneous burst at $`Z\sim Z_{\odot }`$/50 is short, $`t_7\sim 0.1`$ (de Mello et al. 1999). The radius of the shell produced by the stellar winds of the WR stars is then $`R_S\sim 110`$ pc if $`n_0`$ = 1 cm<sup>-3</sup> and $`\sim `$ 170 pc if $`n_0`$ = 0.1 cm<sup>-3</sup>, in good agreement with the observed radius $`\sim `$ 100 pc of the inner shell. However, stellar winds alone cannot explain the presence of the larger structures. Consider next the combined effect of supernovae. In the NW component, the formation of structures with radii as large as 1200 pc must be accounted for. For simplicity, we assume the number of supernovae to be equal to the number of O stars. The latter can be estimated from the total observed flux of the H$`\beta `$ emission line. The H$`\beta `$ flux in Table 3 cannot be used, as it has not been corrected for aperture effects. Instead, we adopt the aperture-corrected H$`\beta `$ flux from Guseva et al. (1999), which gives a number of O stars equal to 4800. Adopting $`t_7`$ = 0.5 for the age of the main body, Eq. (2) gives $`R_S`$ = 550 pc and 350 pc for $`n_0`$ = 0.1 cm<sup>-3</sup> and 1 cm<sup>-3</sup>, respectively. To explain the presence of structures with radii as large as 1200 pc, the age of the NW component would have to be as high as 18 – 39 Myr, larger than the age of $`\sim `$ 5 Myr inferred from the ionization constraints and the presence of WR stars. We therefore conclude that the stellar clusters in the NW and SE components that ionize the gas in the main body cannot be responsible for the largest structures; these are due to the action of older clusters and stellar associations. The largest shell, to the west of the NW component and with a radius of 800 pc, is likely to be connected with the stellar association located 4″ to the NW of the NW component, near its center of symmetry. If we assume that a starburst occurred there 10 Myr ago, that there were as many O stars as in the NW component, and that $`n_0`$ = 0.1 cm<sup>-3</sup>, then $`R_S`$ = 840 pc. Because of the weak dependence of $`R_S`$ on $`E_{51}`$ and $`t_7`$ (Eq. 2), even a burst 10 times weaker with an age of 20 Myr can account for the radius of the largest shell. Hence, star formation in the main body of I Zw 18 is likely to have started $`\sim `$ 20 Myr ago, propagating generally from the NW to the SE, and continuing the star formation begun earlier in the C component. This age estimate is in agreement with the one derived by Martin (1996) with the use of a more complex model of the gas distribution and the smaller distance of 10 Mpc.
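The shell radii quoted in this section follow directly from Eqs. (1) and (2); a minimal numerical sketch (in Python, with the parameter values taken from the text):

```python
def r_wind(n_w, l38, n0, t7):
    """Superbubble radius [pc] driven by stellar winds, Eq. (1)."""
    return 269.0 * (n_w * l38 / n0) ** 0.2 * t7 ** 0.6

def r_sn(n_sn, e51, n0, t7):
    """Superbubble radius [pc] driven by supernovae, Eq. (2)."""
    return 97.0 * (n_sn * e51 / n0) ** 0.2 * t7 ** 0.6

# Wind-blown inner shell from 66 WR stars with L38 = 0.14 over t7 = 0.1:
print(r_wind(66, 0.14, 1.0, 0.1))   # ~ 110 pc for n0 = 1 cm^-3
print(r_wind(66, 0.14, 0.1, 0.1))   # ~ 170 pc for n0 = 0.1 cm^-3

# Supernova-driven shells from ~ 4800 O stars:
print(r_sn(4800, 1.0, 0.1, 0.5))    # ~ 550 pc at t7 = 0.5
print(r_sn(4800, 1.0, 1.0, 0.5))    # ~ 350 pc at t7 = 0.5
print(r_sn(4800, 1.0, 0.1, 1.0))    # ~ 840 pc at t7 = 1 (10 Myr)
```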
## 6 Conclusions

Our main goal here is to use the observed properties of the blue compact dwarf galaxy I Zw 18 to put constraints on its age: the high ionization state of the gas and the presence of WR stars in the main body, the existence of ionized gas in the C component, and the colour-magnitude diagrams from the HST images. We were motivated by the study of Izotov & Thuan (1999), who analyzed the C/O and N/O abundance ratios of a sample of the most metal-deficient BCDs known, including I Zw 18. Those authors found that these ratios are constant for galaxies with $`Z\lesssim Z_{\odot }`$/20, with a very small dispersion around the mean. This strongly suggests that intermediate-mass stars ($`M\lesssim 8M_{\odot }`$) have not had time to release their carbon and primary nitrogen production, establishing an upper age limit of $`\sim `$ 100 Myr for the very metal-deficient BCDs. The conclusion that galaxies with $`Z\lesssim Z_{\odot }`$/20 are younger than 100 Myr has been supported by photometric and spectroscopic studies of two very metal-deficient BCDs, SBS 0335–052 ($`Z_{\odot }`$/40; Thuan et al. 1997, Papaderos et al. 1998) and SBS 1415+437 (Thuan et al. 1999a). Here we examine the age evidence for I Zw 18. To put constraints on the age of I Zw 18 we have followed three independent lines of investigation: colour-magnitude diagrams, spectral synthesis, and hydrodynamical age constraints. We have arrived at the following main conclusions:

1. The distance to I Zw 18 must be increased by a factor of 2, from the previously adopted value of 10 Mpc to 20 Mpc. Such a distance is required for I Zw 18 to contain stars bright and massive enough to account for its high state of ionization and for the presence of Wolf-Rayet stars in its NW component.

2. $`(B-V)`$ vs. $`V`$ and $`(V-R)`$ vs. $`R`$ CMD studies with the new distance of 20 Mpc give main-sequence turn-off ages of $`\sim `$ 15 Myr and $`\sim `$ 5 Myr for the C component and the main body, respectively. The location of the resolved luminous red stars with $`M_R\sim -6`$ mag and $`(V-R)\sim `$ 0.6 – 1.0 mag is consistent with an age $`\lesssim `$ 100 Myr for the C component. Star formation in this component is likely to have stopped 15 – 20 Myr ago. As for the main body, the CMD analysis implies that star formation started 20 – 50 Myr ago and still continues today. Analysis of the shell structures seen in the H$`\alpha `$ images also suggests that star formation in the main body began $`\sim `$ 20 Myr ago at different locations on the NW side and has been propagating mainly in the SE direction. The upper age limit of $`\sim `$ 50 Myr derived for the main body is a whole order of magnitude smaller than the one derived by Aloisi et al. (1999) from a CMD analysis of similar HST data. The difference is mainly due to the increase of the distance to I Zw 18 by a factor of 2.

3. Fits to the spectral energy distributions give ages of $`\sim `$ 5 Myr for the main body and $`\sim `$ 15 – 40 Myr for the C component.

In summary, all three lines of investigation (CMDs, the distribution of the H$`\alpha `$ shells, and spectroscopy) lead to the same conclusion: I Zw 18 did not start to form stars until $`\sim `$ 100 Myr ago. This supports the contention of Izotov & Thuan (1999) that all very metal-deficient galaxies ($`Z\lesssim Z_{\odot }`$/20) are young.

###### Acknowledgements.

Y.I.I. and N.G.G. thank the Universitäts–Sternwarte of Göttingen, and Y.I.I. thanks the University of Virginia, for warm hospitality. We are grateful to D. Schaerer for sending us his stellar evolutionary synthesis models in electronic form and to A. Aloisi for communicating her results. We acknowledge the financial support of Volkswagen Foundation Grant No. I/72919 (Y.I.I., N.G.G., P.P. and K.J.F.) and of National Science Foundation grants AST-9616863 (T.X.T. and Y.I.I.) and AST-9803072 (C.B.F.). Research by K.J.F. and P.P. has been supported by Deutsche Agentur für Raumfahrtangelegenheiten (DARA) GmbH grants 50 OR 9407 6 and 50 OR 9907 7.
# The Chemical Compositions of the SRd Variable Stars – II. WY Andromedae, VW Eridani, and UW Librae

## 1 Introduction

This series of papers presents and discusses determinations of the chemical compositions of the SRd variables, for which the General Catalogue of Variable Stars provides the following prosaic definition: “Semiregular variable giants and supergiants of spectral types F, G, K sometimes with emission lines in their spectra.” This definition admits massive supergiants (e.g., $`\rho `$ Cas) and high-velocity metal-poor low-mass supergiants (e.g., TY Vir) to the same club. Our goal is to identify and analyse the metal-poor low-mass stars by undertaking detailed abundance analyses of SRd variables listed in the GCVS. Presently, we have observed approximately 30 variables. Our primary interest is in the metal-poor low-mass stars. In our first paper (Giridhar, Lambert & Gonzalez 1998a, Paper I), we discussed four stars: XY Aqr, RX Cep, AB Leo, and SV UMa. The first two were shown to be of approximately solar metallicity and probably not variable stars. Both AB Leo and SV UMa were demonstrated to have a low metal abundance (\[Fe/H\] $`\sim -1.5`$) and are certainly variable stars. Here, we discuss three acknowledged variables (Table 1) of high radial velocity and show that they are indeed metal-poor supergiants.

## 2 Observations and Abundance Analyses

Spectra were obtained at the McDonald Observatory with either the 2.1 m telescope equipped with a Cassegrain echelle spectrograph and a Reticon 1200 $`\times 400`$ pixel CCD (McCarthy et al. 1993) or the 2.7 m telescope and the 2dcoudé echelle spectrograph (Tull et al. 1995). The spectra were reduced and analysed by the procedures described in Paper I. The atmospheric parameters, derived principally from the Fe i and Fe ii lines, are given in Table 1. Our derived abundances are quoted to $`\pm 0.1`$ dex and given in Table 3 as \[X/H\] = $`\mathrm{log}ϵ`$(X/H) $``$ $`\mathrm{log}ϵ_{\odot }`$(X/H) and \[X/Fe\], where the H abundance is on the customary scale and the solar abundances are taken from Grevesse, Noels, & Sauval (1996). The total error in the absolute abundance of a well-observed element may be about $`\pm 0.2`$ dex when the various sources of error (equivalent width, effective temperature, etc.) are considered. This does not include systematic errors arising, for example, from the adoption of LTE and the neglect of hyperfine splitting.

## 3 Results and Discussion

### 3.1 VW Eridani

The SRd variables that are low-mass supergiants are cool stars with spectra that may contain TiO bands. Since these bands contribute many lines and reduce the number of unblended lines, we discuss the stars in order of increasing TiO band strength. Not surprisingly, the star not showing TiO bands is the most metal-poor of the trio. Our analysis shows VW Eri to have \[Fe/H\] = -1.8. This SRd is assigned a period of 83.4 days in the GCVS (Kholopov et al. 1985) and a high radial velocity by Preston (1971). Eggen (1973) published UBVRI photometry. Our spectrum and analysis fully confirm that the star is a metal-poor supergiant. The spectrum gives a heliocentric radial velocity of 146.5 $`\pm `$ 0.6 km s<sup>-1</sup>, a value in agreement with Preston’s results. Our derived abundances as \[X/Fe\] are compared in Table 4 with those expected of a \[Fe/H\] = -1.8 star from analyses of metal-poor dwarfs and giants. Local field dwarfs and giants, with very few exceptions, show a common \[X/Fe\] at a given \[Fe/H\]. It is this common \[X/Fe\] for \[Fe/H\] = -1.8 that is given in Table 4.
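For reference, the bracket notation used throughout follows the standard astronomical convention (a general definition, not specific to these data):

$$\mathrm{[X/Fe]}=\mathrm{[X/H]}-\mathrm{[Fe/H]},$$

so, for example, a star with \[Fe/H\] = $`-1.8`$ and \[X/H\] = $`-1.4`$ has \[X/Fe\] = $`+0.4`$.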
Sources for the expected \[X/Fe\] are as follows: Israelian et al. (1998) and Boesgaard et al. (1999) for O; Pilachowski, Sneden & Kraft (1996) for Na; Gratton & Sneden (1988) for Mg; McWilliam (1997) for Al and Eu; Gratton & Sneden (1991) for Si, Ca, Sc, Ti, V, Cr, Mn, Co, and Ni; Gratton (1989) for Mn; Sneden, Gratton & Crocker (1991) for Cu and Zn; Gratton & Sneden (1994) for Y, La, Ce, Pr, Nd, and Sm. All of these references review the previous literature on the elemental abundances and note the generally close agreement between the referenced results and other results. For many elements, the expected value of \[X/Fe\] should be accurate to about $`\pm `$ 0.1 dex. The lack of scatter in \[X/Fe\] at a given \[Fe/H\] for samples composed of stars now in the solar neighborhood but originating from quite different parts of the Galaxy suggests that an SRd like VW Eri should have the expected pattern of abundances. There is surprisingly, and probably fortuitously, good agreement between the measured and the common \[X/Fe\]. In particular, the characteristic signatures of a metal-poor dwarf are found in the measured \[X/Fe\] of VW Eri: notably, an overabundance of the $`\alpha `$-elements, an underabundance of Mn, and an underabundance of Cu in the presence of a normal Zn abundance. The only two elements from Na to Eu with a difference of greater than $`\pm `$ 0.3 dex between the observed and expected abundances are V and Eu. We assume that these differences are simply due to above-average errors of measurement. Neglect of hyperfine splitting is not a likely source of error because the lines are weak. A more probable source is the possibility of blends affecting the weak lines. The strongest V i line, a 31 mÅ line, gives an abundance 0.3 dex less than the next strongest line at 14 mÅ. The oxygen abundance is based on the 6300 Å \[O i\] line and the O i lines at 7774 and 7775 Å. The forbidden line gives a systematically lower abundance, by about 0.4 dex. The mean abundance corresponds to \[O/Fe\] = 0.8, with the \[O i\] line giving \[O/Fe\] = 0.6. This discrepancy between the forbidden and permitted lines is similar to the discrepancy between the same lines in the spectra of subdwarfs. Israelian et al. (1998) obtain consistent determinations of the O abundance from the ultraviolet OH lines and the O i 7770 – 7775 Å lines. Boesgaard et al. (1999), in an independent analysis, obtain an identical result. The mean estimate from these recent determinations given in Table 4 is in good agreement with our measurement. The heavy elements are well represented in the spectrum. The Ba ii lines are rejected as unsuitable because they are very strong, with an indication that they are contaminated by a circumstellar component. With one exception, the heavy elements give \[X/Fe\] in the range -0.2 to +0.2, as expected from Zhao & Magain (1991) and Gratton & Sneden (1994), i.e., the star has not experienced enrichment of $`s`$-process products through the third dredge-up. The apparent exception is Eu, with \[Eu/Fe\] = 0.6. Observations of Eu show that this $`r`$-process element is enriched in normal metal-poor stars: Gratton & Sneden (1994 – see also McWilliam’s compilation) find \[Eu/Fe\] $`\sim 0.3`$, a value less than our estimate based on two Eu ii lines. In summary, VW Eri, judged by composition, is a normal red giant that has experienced the first dredge-up but not the third dredge-up on the AGB.
### 3.2 UW Librae

This SRd, with a period of 84.7 days, was studied extensively at low dispersion by Joy (1952), who noted the spectral type to vary from G0 to K4 and the radial velocity from 142 km s<sup>-1</sup> to 194 km s<sup>-1</sup>. Photometry was provided by Eggen (1977). Our analysis is based on two Sandiford echelle spectra from 1995 June 21 and 23. The heliocentric radial velocity of 166 $`\pm 2`$ km s<sup>-1</sup> is within the range reported by Joy. The spectra provide coverage from 4450 – 4940 Å and 5770 – 7240 Å. Bands of TiO are quite prominent. The crowded spectrum limited our selection of useful lines. The results of the abundance analysis are given in Table 3. The star is clearly a metal-poor supergiant (footnote: Dawson (1979) argued on the basis of reddening-corrected DDO photometry that UW Lib was a G dwarf of near-solar composition). Iron is underabundant, with \[Fe/H\] = -1.3. We consider UW Lib to have the expected composition for its \[Fe/H\]. The latter may be obtained for most elements by linear interpolation between the values given in Table 4 for \[Fe/H\] = -1.8 and \[X/Fe\] = 0 at \[Fe/H\] = 0. The $`\alpha `$-elements have approximately the same overabundance at \[Fe/H\] = -1.0 as at -1.8. The agreement between the observed and expected values of \[X/Fe\] is not as good as in the case of VW Eri. We attribute this to the presence of TiO lines in many portions of the spectrum. Oxygen, based on the 6300 Å and 6363 Å \[O i\] lines, has the abundance \[O/Fe\] = 0.7, which is consistent with recent measurements (Israelian et al. 1998; Boesgaard et al. 1999). The traditional $`\alpha `$-elements have \[$`\alpha `$/Fe\] = 0.4 (Mg), 0.0 (Ca), and 0.6 (Ti), when 0.3 to 0.4 is expected from analyses of the simpler spectra of warmer subdwarfs (McWilliam 1997). Given the crowded spectrum, the difference of up to 0.3 dex from expectation is plausibly attributable to measurement errors. The vanadium abundance \[V/Fe\] = 0.5 may be a reflection of our neglect of hyperfine splitting. There is marginal evidence for a mild enrichment of the heavy elements, with observed and expected initial abundances as follows: \[Y/Fe\] = 0.2 with -0.2 expected, \[Zr/Fe\] = 0.8 with 0.0, and \[Ce/Fe\] = 0.3 with -0.1.

### 3.3 WY Andromedae

Photometry of WY And is reviewed by Zsoldos (1990), who showed the period to be about 108 days. Rosino (1951) and Joy (1952) found the spectral type to vary from G2 to K2. Our spectrum shows TiO bands with a strength greater than in the spectrum of UW Lib. Joy’s radial velocity of -191 km s<sup>-1</sup> is confirmed by our measurement of -193 $`\pm `$ 1.1 km s<sup>-1</sup>. The results of our abundance analysis are summarized in Table 3. Although there are no large differences between the observed composition and that expected for a red giant with \[Fe/H\] = -1.0 that has negotiated the first dredge-up but not yet encountered the third dredge-up, the overall agreement with expectation is noticeably inferior to that found for VW Eri and UW Lib. The $`\alpha `$-elements do not give the expected uniform enhancement of about 0.3 dex: \[Mg/Fe\] = 0.1, \[Si/Fe\] = 0, \[Ca/Fe\] = -0.3, but \[Ti/Fe\] = 0.3. The aluminum and Mn deficiencies are in excess of expectation by about 0.2 dex: \[Al/Fe\] = -0.3 and \[Mn/Fe\] = -0.6. Heavy elements with a dominant contribution from the $`s`$-process are in the mean enriched, with \[$`s`$/Fe\] = 0.3, but two of the three elements with 5 or more lines show no enrichment.
Europium is even underabundant relative to iron: \[Eu/Fe\] = -0.4 from a single Eu ii line, whereas \[Eu/Fe\] = 0.3 is expected. We attribute these discrepancies to the greater strength of the TiO bands in this more metal-rich star and to the enhanced probability of unsuspected blending with TiO lines. It seems unlikely that the apparent abundance anomalies in this star or in UW Lib are due to the dust-gas separation that greatly affects some RV Tauri variables; Ca may be underabundant, but Sc is not. Possibly, the star belongs to a stellar population whose initial composition differs from the standard or expected composition. Certainly, subdwarfs with \[Fe/H\] $`\sim -1`$ and anomalies relative to the standard composition are now known (Nissen & Schuster 1997; Jehin et al. 1999), but these anomalies do not match well those found for WY And. Attribution of the latter to errors of measurement seems the most likely explanation. Oxygen from the \[O i\] 6300 Å and 6363 Å lines corresponds to \[O/Fe\] = 0.7, a value that is approximately consistent with the latest estimates of the O abundance for metal-poor stars (Israelian et al. 1998; Boesgaard et al. 1999). A collection of three C i lines gives \[C/Fe\] = 0.3, which is the value found by Gustafsson et al. (1999) for disk dwarfs at \[Fe/H\] = -1. This would suggest that there has been some C enrichment following the first dredge-up’s reduction of carbon on the first ascent of the red giant branch.

## 4 Concluding Remarks

Our analyses of WY And, VW Eri, and UW Lib, and our earlier work on AB Leo and SV UMa, show that the subset of SRd variables defined by weak metal lines and a high radial velocity consists of metal-poor supergiants with considerable similarities in composition that are traceable to the corresponding similarity of composition among metal-poor dwarfs, subgiants, and giants on the first red giant branch. To our collection may be added TY Vir (Luck & Bond 1985) and CK Vir (Leep & Wallerstein 1981). In seeking the origins of the SRd variables, two obvious questions arise: How are the stars related to red giants on the red giant branch (RGB) and asymptotic giant branch (AGB)? What is the relationship between the SRd variables and the RV Tauri variables? Clues to the answers may be sought from the theoretical ‘HR diagram’ of log $`g`$ versus log T<sub>eff</sub>. Figure 1 places our SRd stars in this diagram with the RV Tauri stars drawn from our papers on the compositions of these stars (Giridhar, Rao & Lambert 1994; Gonzalez, Giridhar & Lambert 1997a, 1997b; Giridhar, Lambert & Gonzalez 1998b, 1999). Since our analyses of the RV Tauri and SRd variables have used common procedures, systematic differences between the two kinds of stars are unlikely to be due to errors in the analyses. Lines corresponding to constant luminosity $`L`$ are drawn for log $`L/L_{\odot }`$ = 3.3 and 4.3 and a stellar mass of 0.8 $`M_{\odot }`$. A theoretical isochrone for $`Z`$ = 0.0004 or \[Fe/H\] $`\sim -1.6`$ is also shown (Bertelli et al. 1994). Isochrones for higher metallicity are displaced to lower temperatures, with very little change in the luminosity of the most luminous stars on the RGB and AGB. An increase from $`Z`$ = 0.0004 to 0.004 shifts the RGB tip from (log $`g`$, log T<sub>eff</sub>) = (0.64, 3.64) to (0.22, 3.54), and the AGB tip from (-0.1, 3.60) to (-0.4, 3.50). The $`Z`$ = 0.0004 isochrone is appropriate for VW Eri and AB Leo. The other stars are more metal-rich, with the most metal-rich (WY And) falling almost midway between $`Z`$ = 0.0004 and 0.004.
Figure 1 shows that the SRds are either at the tip of the RGB or on the AGB at luminosities greater than the RGB tip. Lloyd Evans (1975) noted that the red variables in metal-poor globular clusters are at the tip of a cluster’s giant branch. An AGB star may be distinguished by a C and $`s`$-process enrichment, provided that the third dredge-up has been activated. Our analysis is most accurate for VW Eri, which is neither C- nor $`s`$-process enriched. There is a suspicion that WY And may be enriched, as may be AB Leo and SV UMa from Paper I. It remains to be discovered what sets an SRd variable apart from essentially identical giants that are not variable. Perhaps they are stars that have begun a process of strong mass loss, or have lower-mass envelopes as a result of their earlier history. The period ranges for the SRd and RV Tauri stars are similar, and the low-temperature end of the RV Tauri sequence abuts the SRd domain. It is difficult to accept that the stars are unrelated. The observed abundance anomalies of some RV Tauri stars (see our papers cited above), attributed to the acquisition of dust-free gas by the stars, are not in conflict with the suggested relation, because our work (Giridhar, Lambert & Gonzalez 1999) shows that the anomalies are not found in those RV Tauri stars that would be the closest relatives of the SRd variables. RV Tauri stars that are intrinsically metal-poor or cool do not exhibit abundance anomalies. A relation is suggested by Figure 1. The RV Tauri variables from our series of papers on the compositions of these stars are represented by different symbols for the spectroscopic classes RV A, RV B, and RV C. It is seen that the SRds are at the low-temperature boundary of the RV Tauri regime, which corresponds to a belt bounded by lines of constant luminosity for log $`L/L_{\odot }`$ = 3.3 and 4.3, assuming a stellar mass $`M=0.8M_{\odot }`$. Evolution at constant luminosity from the red giant branch is a signature of post-AGB evolutionary tracks. The lower luminosity bound would correspond to evolution from the tip of the first giant branch. The upper luminosity bound suggests evolution off the AGB. An interpretation of Figure 1 is that SRds evolve into RV Tauri variables as they cross the Hertzsprung gap from low to high temperatures. Heavy mass loss has been hypothesised as the cause of the early departure from the AGB. Stellar pulsations occurring in the SRd and RV Tauri variables may be the driver of the mass loss. Premature departure from the AGB is not an unexpected phenomenon and has been invoked to account for the rarity of C-rich AGB stars in the Magellanic Clouds. This research has been supported in part by the Robert A. Welch Foundation of Houston, Texas and the National Science Foundation (grant AST-9618414).
# Directed and Elliptic Flow in 158$`A`$ GeV Pb+Pb Collisions

## 1 Introduction

In high-energy collisions it is expected that a high-density interaction zone is formed. If this system thermalizes, the thermal pressure will necessarily generate collective transverse expansion. If the initial state is azimuthally asymmetric, as in semi-central collisions, this property may be reflected in the azimuthal asymmetry of the final-state particle distributions. The strength of the collective flow will yield information on the nuclear equation of state during the expansion. In particular, the in-plane or out-of-plane character of the elliptic flow should give important hints on the underlying mechanisms.

## 2 Method

Two independent methods are used to determine the strength of the collective flow. To be able to compare the results to previous data, the average transverse momentum $`p_x`$ method is used. In this method the transverse momentum $`p_T`$ of each identified particle is decomposed into components with respect to the measured reaction plane. In order to perform systematic studies, the $`p_x`$ versus $`p_y`$ distributions are evaluated for different centrality and rapidity bins and for all identified particle species. The second method is based on the Fourier decomposition: the azimuthal distributions of identified particles with respect to the reaction plane are constructed for all events, and these distributions are fitted with the function $$\frac{1}{N}\frac{dN}{d\mathrm{\Delta }\mathrm{\Phi }}=1+2v_1\mathrm{cos}(\mathrm{\Delta }\mathrm{\Phi })+2v_2\mathrm{cos}(2\mathrm{\Delta }\mathrm{\Phi })$$ where $`\mathrm{\Delta }\mathrm{\Phi }`$ is the azimuthal angle of a single particle with respect to the reaction plane. The strength of the collective flow is then given by $`v_1`$ ($`v_2`$) for the directed (elliptic) flow. Flow measurements with respect to the reaction plane assume a perfect event-plane determination, i.e. that the reaction-plane angle can be obtained from the data exactly. In reality, the limited detector resolution and effects like the finite number of detected particles produce a limited resolution in the measurement of the reaction-plane angle. All observables that refer to the reaction plane must be corrected to what they would be relative to the true event plane. This correction is done by dividing the observable by the event-plane resolution. Another effect which has to be taken into account is the autocorrelation effect: naturally, there is a correlation between the azimuthal angle of a particle with respect to the reaction plane if this particle is included in the evaluation of the reaction-plane angle. This autocorrelation is avoided by calculating, for each particle, the event-plane angle from the remaining particles.
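For the normalized azimuthal distribution above, the Fourier coefficients are simply event averages, $`v_n=\mathrm{cos}(n\mathrm{\Delta }\mathrm{\Phi })`$; a minimal sketch (in Python; the array name `dphi` is hypothetical and stands for the measured particle angles relative to the event plane):

```python
import numpy as np

def flow_coefficients(dphi):
    """Directed (v1) and elliptic (v2) flow from angles relative to the
    event plane; for dN/dDPhi = 1 + 2 v1 cos(DPhi) + 2 v2 cos(2 DPhi),
    the averages <cos(n DPhi)> equal v_n."""
    return np.mean(np.cos(dphi)), np.mean(np.cos(2.0 * dphi))

def correct_for_resolution(v_obs, r_n):
    """Observed coefficients must be divided by the event-plane
    resolution R_n, as described in the text."""
    return v_obs / r_n
```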
## 3 Results

A systematic study of the dependence of the flow signal on centrality for protons and pions in terms of $`p_x`$ is displayed in figure 2. The proton absolute momentum transfer in the reaction plane increases with the number of participants to a maximum $`|p_x|`$ for semi-central collisions at an impact parameter of $`b\sim 8\text{fm}`$, which is twice as large as that found for Au + Au collisions at AGS energies. For more head-on collisions the $`|p_x|`$ decreases again. In the limit of impact parameter zero the sideward flow vanishes due to symmetry. In addition to the small flow effect of pions due to their thermal motion, pions are subject to absorption and rescattering, mainly through the delta resonance. Thus they should show flow effects comparable to those of protons. Since the observed $`p_x`$ of the pions is positive, it indicates that the pions are preferentially emitted away from the target spectators. This leads to the interpretation of an absorption of the pions in the target remnant, which appears as a preferred emission toward the other side. If the apparent anti-flow is due to absorption, central collisions with little or no spectator matter should show no flow effect, which is indeed seen for central events, where the pion flow signal is compatible with zero. The effect in semi-central collisions is weak but grows nearly linearly with the impact parameter. Figure 2 shows the rapidity dependence of the average transverse momentum for protons and pions in comparison with model predictions. For protons a clear maximum of the flow at target rapidity is evident in the data as well as in the simulations, though the absolute height of the data cannot be reproduced by RQMD or VENUS. There is also a large discrepancy between the pion data and the model predictions, though the absolute height is approximately reproduced in the most backward rapidity regions. The Fourier decomposition method also provides the transverse-momentum dependence of the flow strength. Over a wide range of $`p_T`$ the directed flow in terms of $`v_1`$ is well described by a linear function of the transverse momentum, as depicted in figure 4. The second harmonic or elliptic flow in terms of $`v_2`$ is consistent with zero in the target rapidity range; hence it is not plotted in the figure. Figure 4 shows the rapidity dependence of the directed flow in terms of $`v_1=\mathrm{cos}(\mathrm{\Delta }\mathrm{\Phi })`$. The filled symbols represent the measured data, while the open symbols are the data reflected around midrapidity $`y=2.9`$. Shown are proton (circles) and pion (triangles) data from the Plastic Ball in the target rapidity region. In addition, pion data measured with the tracking arm of the WA98 experiment at midrapidity and data near midrapidity measured by the NA49 collaboration are shown. It can be noticed that the directed flow of protons as well as of pions has a maximum in the fragmentation regions. The Plastic Ball data in the region $`y<0.5`$ are fitted with Gaussian distributions. These Gaussian distributions, shown as solid lines, are reflected around midrapidity like the data and describe the data rather well. It should be emphasized that the midrapidity data were not included in the Gaussian fit. The shape of the distribution appears different from that observed in heavy-ion collisions at lower beam momenta, where the flow strength increases linearly from zero at midrapidity to the peaks at target and projectile rapidity. In 158 $`A`$ GeV Pb + Pb collisions, however, the peaks are Gaussian and only the tails extend to midrapidity. It is conceivable that the S-shaped curve obtained at lower beam energies could also be produced by a combination of two Gaussian distributions. In this case the ratio of the relative width to the gap between the Gaussian peaks would be smaller, so that a linear behaviour is found at midrapidity. This suspicion was confirmed by the good agreement of a Gaussian fit with the proton data from 200 $`A`$ MeV Au + Au collisions provided by the Plastic Ball collaboration. Hence, for a complete description of the rapidity distribution of the collective flow $`F`$, the formerly used slope at midrapidity ($`dF/dy|_{y=0}`$) is not sufficient.
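A sketch of the Gaussian description used here (our own illustration in Python; the arrays `y_data` and `v1_data` are hypothetical, and the overall sign convention is an assumption; the essential ingredient is the antisymmetric reflection of the target-rapidity peak around midrapidity):

```python
import numpy as np
from scipy.optimize import curve_fit

Y_MID = 2.9  # midrapidity for 158 A GeV Pb + Pb

def flow_model(y, y_peak, height, width):
    """Directed flow F(y): a Gaussian peak at target rapidity plus its
    mirror image of opposite sign at projectile rapidity, so that
    F(Y_MID + dy) = -F(Y_MID - dy)."""
    g = lambda mu: np.exp(-0.5 * ((y - mu) / width) ** 2)
    return height * (g(2.0 * Y_MID - y_peak) - g(y_peak))

# popt, pcov = curve_fit(flow_model, y_data, v1_data, p0=(0.5, 0.1, 0.7))
```

The three fit parameters (`y_peak`, `height`, `width`) match the three Gaussian parameters discussed next.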
It is more reasonable to use the three parameters of the Gaussian distribution to describe the data. The peak position reflects the beam momentum, the peak height gives the strength of the flow and the width of the distribution provides information on how much the participants and the spectators are involved in the collectivity.
# Contact resistance of quantum tubes

## 1 Introduction

Since the recent discovery of the carbon nanotubes by Iijima there has been significant progress in the studies of the conducting properties of both single-walled and multi-walled carbon nanotubes. The conductance of a mesoscopic system connected to metallic reservoirs is well understood and is usually described by the Landauer formula. For quantum point contacts in semiconductor structures and in metallic nanowires it is well established experimentally that the differential conductance is, to a good approximation, quantized in units of $`G_0=2e^2/h`$ and at zero temperature given by $`G=G_0N`$, where $`N`$ is the number of propagating modes. In carbon nanotubes with metallic contacts most experiments show that the conductance is less than the conductance which one should expect for a smooth interface between tube and metal, e.g. $`G=4e^2/h`$ for metallic single-walled tubes, where the extra factor of 2 comes from the two $`\pi `$ bands that are crossing the Fermi level. The reasons for this lower conductance are still not fully known. Theoretically, several groups have considered the effects of vacancies, disorder, distortion, and doping on the conductance of carbon nanotubes. The conducting properties have also been studied in, e.g., the context of junctions between different metallic carbon nanotubes, the Aharonov–Bohm effect in the presence of a magnetic field, and the Luttinger-liquid behavior of a one-dimensional gas of interacting electrons. Also the ideal “hollow quantum cylinder”, i.e. a two-dimensional electron gas on a cylinder, has been studied in the context of the difference between strip-like wires and tubes. However, with the exception of the recent qualitative study of Tersoff and the recent modeling works of Anantram et al. and Sanvito et al., less attention has been focused on the conditions for a good transmission between tube and metal contact, which is an important issue for practical devices with carbon nanotubes, or other quantum tubes. In quantum point contacts an adiabatic interface between the wire and the reservoirs ensures a transmission coefficient close to unity. The condition for adiabaticity is that the shape of the contact region varies slowly on the scale of the Fermi wavelength. In the opposite case of an abrupt interface, i.e. a quasi-one-dimensional lead connected to a wide two-dimensional contact, Szafer and Stone found that the transmission rapidly increases to unity as the width of the confined region exceeds half of the Fermi wavelength, thus giving a reflectionless contact. For the contact between a quantum tube and a three-dimensional metal it is not obvious that the assumption of an ideal reflectionless contact applies, and the aim of this work is to study the contact resistance for this case. The model we are studying is that of a hollow quantum cylinder of radius $`R_\mathrm{T}`$ contacted by a three-dimensional free-electron metal, which we for convenience model by a cylindrical wire with radius $`R_\mathrm{C}\gg R_\mathrm{T}`$, see Fig. 1. The system thus has full cylindrical symmetry, and the angular momentum quantum number $`m`$ can therefore be used to label the scattering states. For the coupling of the quantum tube to the contact it is necessary to take a radial confinement potential for the quantum tube into account, and here we model the confinement by an attractive delta-function potential.
As an example we apply this model to metal contacts of Al or Au; the quantum tube parameters are chosen to mimic armchair carbon nanotubes: the strength of the confinement can be related to the work function for the material that constitutes the tube, and in the case of a carbon nanotube we relate it to the work function of graphene. It should be noted that the employed free-electron model does not fully describe the actual band structure of carbon nanotubes. Nevertheless, a study of the contact resistance within this idealized model should yield valuable insights which are relevant to real materials. The paper is organized as follows: In Section II the eigenstates of a quantum tube connected to a cylindrical metal contact are found. In Section III these eigenstates are used to construct the scattering states to find the transmission coefficient, and hence the conductance of the contact. In Section IV we apply our model to contacts between an armchair carbon nanotube and a metal. Finally, in Section V discussion and conclusions are given. Essential details of the analytical calculations are given in Appendices A and B.

## 2 The eigenstates

We separate the discussion into two parts: first we find the eigenstates in the tubular geometry and then the eigenstates for the cylindrical metal contact. In Section III the matching of these eigenstates is used to construct the scattering states of the contact.

### 2.1 Quantum tube

The quantum tube of radius $`R_\mathrm{T}`$ with otherwise free electrons is modeled by the Hamiltonian $$\widehat{\mathcal{H}}_\mathrm{T}=-\frac{\hbar ^2}{2m_e}\left[\frac{\partial ^2}{\partial z^2}+\frac{\partial ^2}{\partial r^2}+\frac{1}{r}\frac{\partial }{\partial r}+\frac{1}{r^2}\frac{\partial ^2}{\partial \varphi ^2}\right]+V_\mathrm{T}(r),$$ (1) with a confining potential given by an attractive delta-function potential $$V_\mathrm{T}(r)=-H\delta (r-R_\mathrm{T}),$$ (2) where the confinement strength $`H`$ is taken positive. The eigenstates of the Schrödinger equation have the form $$\mathrm{\Psi }_m(r,\varphi ,z)=R_m(r)\chi _m(\varphi )\psi _m(z),$$ (3) with angular and longitudinal wave functions $$\chi _m(\varphi )=(2\pi )^{-1/2}\mathrm{exp}[im\varphi ],$$ (4) $$\psi _m(z)=\left[k_m(E)\right]^{-1/2}\mathrm{exp}\left[\pm ik_m(E)z\right],$$ (5) where the angular momentum quantum numbers $`m`$ are integers, $`k_m(E)=\left[\frac{2m_e}{\hbar ^2}\left(E-ϵ_m\right)\right]^{1/2}`$ is the wave vector associated with the longitudinal free propagation, and $`E=E_m+ϵ_m`$ is the total energy of the state. Here, $`E_m>0`$ is the energy associated with the longitudinal propagation and $`ϵ_m<0`$ is the (binding) energy associated with the transverse motion. We can relate the strength of the confinement to the work function $`W=\left|ϵ_m\right|-\hbar ^2k_\mathrm{F}^2/2m_e`$, which is the energy required to remove an electron at the Fermi level (disregarding surface-charge effects). The normalization $`\left[k_m(E)\right]^{-1/2}`$ is chosen such that the propagating modes carry the same amount of current. The radial wave function $`R_m(r)`$ satisfies $$\left\{r^2\frac{\partial ^2}{\partial r^2}+r\frac{\partial }{\partial r}+\left[\frac{2m_eϵ_m}{\hbar ^2}r^2-m^2\right]\right\}R_m(r)=-\gamma R_\mathrm{T}\delta (r-R_\mathrm{T})R_m(r),$$ (6) where $`\gamma \equiv 2m_eHR_\mathrm{T}/\hbar ^2`$ is a dimensionless confinement strength. For the bound states ($`ϵ_m<0`$) and $`r\ne R_\mathrm{T}`$ this equation has the form of Bessel’s modified differential equation.
The solutions are given by the modified Bessel functions of order $`m`$ of the first and second kind, so that the full solution is given by $$R_m(r)=\{\begin{array}{ccc}A_mI_m(\kappa _mr)& ,& r<R_\mathrm{T}\\ B_mK_m(\kappa _mr)& ,& r>R_\mathrm{T}\end{array},$$ (9) where $`\kappa _m\equiv \left[2m_e\left|ϵ_m\right|/\hbar ^2\right]^{1/2}`$. At $`r=R_\mathrm{T}`$, the radial wave function is continuous, and the appropriate matching condition for the derivative $`\partial R_m(r)/\partial r`$ at $`r=R_\mathrm{T}`$ is found by integrating Eq. (6) from $`R_\mathrm{T}^{-}=R_\mathrm{T}-0^+`$ to $`R_\mathrm{T}^+=R_\mathrm{T}+0^+`$. In this way the matching conditions become $$R_m(R_\mathrm{T}^+)-R_m(R_\mathrm{T}^{-})=0,$$ (10) $$\frac{\partial R_m(r)}{\partial r}|_{R_\mathrm{T}^+}-\frac{\partial R_m(r)}{\partial r}|_{R_\mathrm{T}^{-}}=-\gamma \frac{R_m(R_\mathrm{T})}{R_\mathrm{T}},$$ (11) and we get the following equation for the normalization coefficients $$\left(\begin{array}{cc}I_m(\kappa _mR_\mathrm{T})& -K_m(\kappa _mR_\mathrm{T})\\ I_{m-1}(\kappa _mR_\mathrm{T})+I_{m+1}(\kappa _mR_\mathrm{T})-\frac{2\gamma I_m(\kappa _mR_\mathrm{T})}{\kappa _mR_\mathrm{T}}& K_{m-1}(\kappa _mR_\mathrm{T})+K_{m+1}(\kappa _mR_\mathrm{T})\end{array}\right)\left(\begin{array}{c}A_m\\ B_m\end{array}\right)=\left(\begin{array}{c}0\\ 0\end{array}\right).$$ (12) Non-trivial solutions exist if the determinant vanishes, whereby the wave vector $`\kappa _m`$ is a solution to the equation $$\gamma ^{-1}=I_m(\kappa _mR_\mathrm{T})K_m(\kappa _mR_\mathrm{T}),$$ (13) where the result for the Wronskian $`W\{K_m\left(x\right);I_m\left(x\right)\}=1/x`$ has been used. Expanding Eq. (13) in the small-$`\kappa _mR_\mathrm{T}`$ limit, we find that a bound state with angular momentum $`m\hbar `$ exists for $`\gamma >2m`$. The number of bound states for a certain value of $`\gamma `$ is given by $`N=\mathrm{Int}(\gamma /2)+1`$, where $`\mathrm{Int}(x)`$ is the integer part of $`x`$. Thus, there is always at least a single bound state, corresponding to $`m=0`$. From the matching conditions and the normalization of $`R_m`$ (see Appendix A), it follows that $$R_m(r)=A_m\times \{\begin{array}{ccc}I_m(\kappa _mr)& ,& r<R_\mathrm{T}\\ \frac{I_m(\kappa _mR_\mathrm{T})}{K_m(\kappa _mR_\mathrm{T})}K_m(\kappa _mr)& ,& R_\mathrm{T}<r\end{array},$$ (16) $$A_m=\frac{\sqrt{2}}{R_\mathrm{T}}\left[\frac{I_m^2(\kappa _mR_\mathrm{T})}{K_m^2(\kappa _mR_\mathrm{T})}K_{m-1}(\kappa _mR_\mathrm{T})K_{m+1}(\kappa _mR_\mathrm{T})-I_{m-1}(\kappa _mR_\mathrm{T})I_{m+1}(\kappa _mR_\mathrm{T})\right]^{-1/2}.$$ (17) A plot of the radial wave function is shown in the inset of Fig. 3. Increasing the confinement strength, the radial wave function becomes more localized, which is accompanied by an increase in the binding energy.
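Equation (13) is a simple one-parameter root-finding problem; a minimal numerical sketch (in Python — the function names and the bracketing interval are our own choices, not part of the paper):

```python
import numpy as np
from scipy.special import iv, kv
from scipy.optimize import brentq

def n_bound(gamma):
    """Number of bound angular-momentum channels, N = Int(gamma/2) + 1."""
    return int(gamma // 2) + 1

def kappa_m(m, gamma, r_t):
    """Bound-state wave vector kappa_m from Eq. (13): I_m(x)K_m(x) = 1/gamma.

    I_m(x)K_m(x) decreases monotonically from 1/(2m) as x -> 0 (it
    diverges logarithmically for m = 0), so a root exists only for
    gamma > 2m -- the condition quoted in the text.
    """
    if m > 0 and gamma <= 2 * m:
        raise ValueError("no bound state: need gamma > 2m")
    x = brentq(lambda x: iv(m, x) * kv(m, x) - 1.0 / gamma, 1e-10, 100.0)
    return x / r_t
```

For example, `kappa_m(0, 3.0, 1.0)` returns the $`m=0`$ bound-state wave vector in units of $`1/R_\mathrm{T}`$, and `n_bound(3.0)` gives $`N=2`$ (the $`m=0`$ and $`|m|=1`$ channels).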
### 2.2 Metal contacts

For the metal it is convenient to assume a cylindrical geometry and consider a cylindrical wire of radius $`R_\mathrm{C}\gg R_\mathrm{T}`$. The Hamiltonian is written as $$\widehat{\mathcal{H}}_\mathrm{C}=-\frac{\hbar ^2}{2m_e}\left[\frac{\partial ^2}{\partial z^2}+\frac{\partial ^2}{\partial r^2}+\frac{1}{r}\frac{\partial }{\partial r}+\frac{1}{r^2}\frac{\partial ^2}{\partial \varphi ^2}\right]+V_\mathrm{C}(r),$$ (18) with a hard-wall confining potential $$V_\mathrm{C}(r)=\{\begin{array}{ccc}-V_\mathrm{C}^0& ,& r<R_\mathrm{C}\\ \mathrm{\infty }& ,& R_\mathrm{C}<r\end{array}.$$ (19) Obviously, for $`r\ge R_\mathrm{C}`$, $`\mathrm{\Psi }(r,\varphi ,z)=0`$, and for $`r<R_\mathrm{C}`$ the eigenstates have the form $$\mathrm{\Psi }_{\nu m}(r,\varphi ,z)=R_{\nu m}(r)\chi _m(\varphi )\psi _{\nu m}(z),$$ (20) $$R_{\nu m}(r)=C_{\nu m}J_m(\kappa _{\nu m}r),$$ (21) $$\chi _m(\varphi )=(2\pi )^{-1/2}\mathrm{exp}[im\varphi ],$$ (22) $$\psi _{\nu m}(z)=\left[k_{\nu m}(E)\right]^{-1/2}\mathrm{exp}\left[\pm ik_{\nu m}(E)z\right],$$ (23) where $`J_m`$ is a Bessel function of the first kind of order $`m`$, $`\kappa _{\nu m}^2=2m_eϵ_{\nu m}/\hbar ^2`$ is a wave vector corresponding to the radial energy $`ϵ_{\nu m}`$, $`k_{\nu m}(E)=\left[\frac{2m_e}{\hbar ^2}\left(E+V_\mathrm{C}^0-ϵ_{\nu m}\right)\right]^{1/2}`$ is the wave vector of the longitudinal motion, and $`E`$ is the total energy of the state. Again, the normalization $`\left[k_{\nu m}(E)\right]^{-1/2}`$ makes the propagating modes carry the same amount of current. The boundary condition for the radial wave function leads to $`J_m(\kappa _{\nu m}R_\mathrm{C})=0`$, from which we find $`\kappa _{\nu m}`$ numerically. Since $`J_m(x)\simeq (2/\pi x)^{1/2}\mathrm{cos}(x-m\pi /2-\pi /4)`$ for large $`x`$, we have $`\kappa _{\nu m}R_\mathrm{C}\simeq (\nu +m/2-1/4)\pi `$ with $`\nu =1,2,3,\mathrm{}`$. The normalization $`C_{\nu m}`$ is given by (see Appendix A) $$C_{\nu m}=\frac{1}{R_\mathrm{C}}\sqrt{\frac{-2}{J_{m-1}(\kappa _{\nu m}R_\mathrm{C})J_{m+1}(\kappa _{\nu m}R_\mathrm{C})}},$$ (24) which is a real number, since $`J_{m-1}`$ and $`J_{m+1}`$ have opposite signs at the zeros of $`J_m`$.

## 3 Transmission of contact

We consider an electron in the tube in mode $`m`$ incident on the contact (see Fig. 1) and compute the transmission and reflection coefficients. We construct the scattering states in the basis of the eigenstates of the Schrödinger equation (see the previous section). Since the angular momentum is a conserved quantity, the transmitted and reflected parts of the wave function also have the same quantum number $`m`$. In the quantum tube ($`z<0`$) the scattering state is given by $$\mathrm{\Psi }_m(r,\varphi ,z)=R_m(r)\frac{\mathrm{exp}(im\varphi )}{\sqrt{2\pi }}\left[\frac{\mathrm{exp}(ik_mz)}{\sqrt{k_m}}+r_m\frac{\mathrm{exp}(-ik_mz)}{\sqrt{k_m}}\right],$$ (25) and in the contact ($`z>0`$) by $$\mathrm{\Psi }_m(r,\varphi ,z)=\underset{\nu =1}{\overset{\mathrm{\infty }}{\sum }}t_{\nu m}R_{\nu m}(r)\frac{\mathrm{exp}(im\varphi )}{\sqrt{2\pi }}\frac{\mathrm{exp}(ik_{\nu m}z)}{\sqrt{k_{\nu m}}}.$$ Here $`r_m`$ is the reflection amplitude for mode $`m`$ and $`t_{\nu m}`$ is the corresponding transmission amplitude. We assume that the effective electron mass is the same in the two materials, so that the continuity of $`\mathrm{\Psi }_m(r,\varphi ,z)`$ and $`\partial \mathrm{\Psi }_m(r,\varphi ,z)/\partial z`$ at $`z=0`$ are the appropriate boundary conditions. For carbon nanotubes and metals like Al and Au this is a reasonable approximation ($`m_e^{*}\simeq 1.2m_0`$). For general details on how to account for differences in the effective mass and the underlying symmetry of the lattice we refer to Refs. and references therein.
The boundary conditions lead to $`r_m`$ $`=`$ $`{\displaystyle \frac{1_{\nu =1}^{\mathrm{}}\varrho _{\nu m}^2}{1+_{\nu =1}^{\mathrm{}}\varrho _{\nu m}^2}},`$ (26) $`t_{\nu m}`$ $`=`$ $`{\displaystyle \frac{2\varrho _{\nu m}}{1+_{\nu =1}^{\mathrm{}}\varrho _{\nu m}^2}},`$ (27) where $`\varrho _{\nu m}\sqrt{k_{\nu m}/k_m}R_m|R_{\nu m}`$, with the radial overlap defined as $`R_m|R_{\nu m}_0^{\mathrm{}}drrR_m(r)R_{\nu m}(r).`$ In addition we have the sum-rule $`_{\nu =1}^{\mathrm{}}R_m|R_{\nu m}^2=1`$, which can be used to verify the numerical convergence. The overlap can be calculated analytically (see Appendix B) and the squared overlap is given by $`R_m|R_{\nu m}^2`$ $`=`$ $`\left({\displaystyle \frac{R_\mathrm{T}}{R_\mathrm{C}}}\right)^2{\displaystyle \frac{\left[J_m(\kappa _{\nu m}R_\mathrm{T})+\kappa _{\nu m}I_m(\kappa _mR_\mathrm{T})K_m(\kappa _mR_\mathrm{C})J_{m+1}(\kappa _{\nu m}R_\mathrm{C})\right]^2}{\left[\left(\kappa _mR_\mathrm{T}\right)^2+\left(\kappa _{\nu m}R_\mathrm{T}\right)^2\right]^2\left[J_{m1}(\kappa _{\nu m}R_\mathrm{C})J_{m+1}(\kappa _{\nu m}R_\mathrm{C})\right]}}`$ $`\times {\displaystyle \frac{4}{\left[K_m^2(\kappa _mR_\mathrm{T})I_{m1}(\kappa _mR_\mathrm{T})I_{m+1}(\kappa _mR_\mathrm{T})I_m^2(\kappa _mR_\mathrm{T})K_{m1}(\kappa _mR_\mathrm{T})K_{m+1}(\kappa _mR_\mathrm{T})\right]}}.`$ The total transmission from mode $`m`$ in the quantum tube into the contact is thus $`𝒯_m`$ $`=`$ $`{\displaystyle \underset{\nu =1}{\overset{\mathrm{}}{}}}𝒫_{\nu m}\left|t_{\nu m}\right|^2`$ (29) $`=`$ $`\left|{\displaystyle \frac{2}{1+_{\nu =1}^{\mathrm{}}\varrho _{\nu m}^2}}\right|^2{\displaystyle \underset{\nu =1}{\overset{\mathrm{}}{}}}𝒫_{\nu m}\varrho _{\nu m}^2,`$ where $`𝒫_{\nu m}`$ projects onto the propagating modes ($`k_{\nu m}`$ real) of the metal contact. Here we have assumed that the lengths of the quantum tube and the contact are semi-infinite so that tunneling through evanescent modes can be neglected. These should be included in the case of two metal contacts connected by a quantum tube of finite length. Introducing real and imaginary parts by $`_{\nu =1}^{\mathrm{}}\varrho _{\nu m}^2\mathrm{\Gamma }_m^{}+i\mathrm{\Gamma }_m^{\prime \prime }`$ we obtain $$𝒯_m=\frac{4\mathrm{\Gamma }_m^{}}{\left(1+\mathrm{\Gamma }_m^{}\right)^2+\mathrm{\Gamma }_{m}^{\prime \prime }{}_{}{}^{2}}.$$ (30) The reflection probability $`_m`$ can be calculated in a similar manner which provides us with the usual sum-rule $`𝒯_m+_m=1`$, ensuring the conservation of probability current density. To summarize, the transmission probability of mode $`m`$ can be calculated from Eq. (30) with $`k_m(E)=\left[\frac{2m_e}{\mathrm{}^2}\left(E\epsilon _m\right)\right]^{1/2}`$ and $`k_{\nu m}(E)=\left[\frac{2m_e}{\mathrm{}^2}\left(E+V_\mathrm{C}^0\epsilon _{\nu m}\right)\right]^{1/2}`$, with $`\epsilon _m=\frac{\mathrm{}^2}{2m_e}\kappa _m^2`$ being the energy of the $`m`$th transverse mode in the tube and similarly $`\epsilon _{\nu m}=\frac{\mathrm{}^2}{2m_e}\kappa _{\nu m}^2`$ is the transverse energy in the contact. Here $`\kappa _m`$ and $`\kappa _{\nu m}`$ are solutions to $`I_m(\kappa _mR_\mathrm{T})K_m(\kappa _mR_\mathrm{T})=\gamma ^1`$ and $`J_m(\kappa _{\nu m}R_\mathrm{C})=0`$, respectively. For a numerical implementation, an upper cut-off $`\nu _c`$ in the sum over modes in the contact is needed and the sum-rule for the squared radial overlap is then a measure of the numerical convergence for a given cut-off. 
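A numerical sketch of $`𝒯_m`$ along the lines of Eq. (30) is given below; it reuses `kappa_m` from the sketch above, evaluates the normalizations and the radial overlap by quadrature instead of through Eqs. (17), (24) and (28), and all parameter values are illustrative assumptions only:

```python
import numpy as np
from scipy.special import iv, kv, jv, jn_zeros
from scipy.integrate import quad

def transmission(E, gamma, V_C, R_T, R_C, m=0, nu_cut=60):
    kap_T = kappa_m(gamma, m, R_T)            # tube radial wave vector
    k_m = np.sqrt(E + kap_T**2)               # longitudinal k; eps_m = -kap_T^2
    ratio = iv(m, kap_T * R_T) / kv(m, kap_T * R_T)
    R_tube = lambda r: iv(m, kap_T * r) if r < R_T else ratio * kv(m, kap_T * r)
    norm_T = (quad(lambda r: r * R_tube(r)**2, 0.0, R_T)[0]
              + quad(lambda r: r * R_tube(r)**2, R_T, np.inf)[0])

    Gamma, T_prop = 0.0 + 0.0j, 0.0
    for kap_C in jn_zeros(m, nu_cut) / R_C:   # contact modes, J_m(kap_C R_C) = 0
        k2 = E + V_C - kap_C**2               # k_{nu m}^2; k2 < 0 is evanescent
        norm_C = quad(lambda r: r * jv(m, kap_C * r)**2, 0.0, R_C, limit=200)[0]
        ov = quad(lambda r: r * R_tube(r) * jv(m, kap_C * r), 0.0, R_C,
                  points=[R_T], limit=200)[0] / np.sqrt(norm_T * norm_C)
        rho2 = (np.sqrt(complex(k2)) / k_m) * ov**2   # imaginary if evanescent
        Gamma += rho2
        if k2 > 0.0:
            T_prop += rho2.real               # P_{nu m}: propagating modes only
    return 4.0 * T_prop / abs(1.0 + Gamma)**2 # Eq. (30)

print(transmission(E=2.0, gamma=9.0, V_C=10.0, R_T=1.0, R_C=5.0))
```

The sum-rule $`_\nu R_m|R_{\nu m}^2=1`$ can be monitored with the same overlaps to gauge convergence in the cut-off `nu_cut`, as suggested in the text.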
When choosing values for the confinement parameters, $`\gamma `$ and $`V_\mathrm{C}^0`$, we take into account that the Fermi momenta of the quantum tube and the metal can be different since the relevant electrons are located at the Fermi levels of the two materials. For the metal contact we use the known Fermi energies for e.g. Al and Au to relate the confinement potential to the Fermi level as $`E_\mathrm{F}^\mathrm{C}=E+V_\mathrm{C}^0`$, with the Fermi energy being defined positive. For the tube, the Fermi energy enters as $`E_\mathrm{F}^\mathrm{T}=E+|\epsilon _m|`$. When the two materials are brought into contact the chemical potentials align, but the difference in Fermi wave vectors remains. Thus for the metal contact $`E+V_\mathrm{C}^0=\mathrm{}^2\left[k_\mathrm{F}^\mathrm{C}\right]^2/2m_e`$, and for the tube $`E+|\epsilon _m|=\mathrm{}^2\left[k_\mathrm{F}^\mathrm{T}\right]^2/2m_e`$. For the tube we need to specify $`\gamma `$, which follows from the work function $`W=|\epsilon _m|\mathrm{}^2\left[k_\mathrm{F}^\mathrm{T}\right]^2/2m_e`$. We have neglected the charge density induced at the interface by a mismatch of the work functions. For a discussion of this in the context of the screening properties of one-dimensional systems, see e.g. Ref. . ## 4 Contact resistance of single-walled armchair carbon nanotubes The $`(n,n)`$ armchair single-walled carbon nanotube can be regarded as the result of rolling one sheet of graphite (with the carbon atoms in a hexagonal lattice) in the direction of one of the bonds . The resulting tube has a periodicity $`a0.246\mathrm{nm}`$ along the tube axis ($`z`$-axis) and a radius $`R_\mathrm{T}n\times \sqrt{3}a/2\pi `$ with $`4n`$ atoms along the perimeter, arranged in two rows that resemble a chain of armchairs, see Fig. 2. Their metallic character is caused by two $`\pi `$ bands crossing the Fermi level at a wave vector $`k_\mathrm{F}^\mathrm{T}2\pi /3a`$. As discussed recently by Tersoff the metallic armchair carbon nanotubes have electrons at the Fermi level which can be regarded as having an angular momentum quantum number $`m=0`$. In order to apply our simple model to the problem of the contact resistance of $`(n,n)`$ single-walled carbon nanotubes embedded in a free-electron metal we notice that $`k_\mathrm{F}^\mathrm{T}R_\mathrm{T}n/\sqrt{3}`$. In Fig. 3 the transmission probability at the Fermi level is shown for several values of $`k_\mathrm{F}^\mathrm{T}R_\mathrm{T}`$ corresponding to $`(n,n)`$ armchair carbon nanotubes for various values of the dimensionless confinement strength $`\gamma `$. In the particular case of an Al contact, the mismatch is given by $`k_\mathrm{F}^\mathrm{T}/k_\mathrm{F}^{\mathrm{Al}}0.49`$ and the corresponding transmission is presented in panel (a) of Fig. 4. In panel (b) we show similar results for an Au contact for which $`k_\mathrm{F}^\mathrm{T}/k_\mathrm{F}^{\mathrm{Au}}0.70`$. In order to estimate $`\gamma `$ we relate it to the work function of the carbon nanotube which is of the order $`45\mathrm{eV}`$. For the quantum tube we associate a work function to the $`m=0`$ bound state via its binding energy, i.e. $`W\frac{\mathrm{}^2\kappa _0^2}{2m_e}E_\mathrm{F}^\mathrm{T}`$ where $`\kappa _0`$ is the solution to $`I_0(\kappa _0R_\mathrm{T})K_0(\kappa _0R_\mathrm{T})=\gamma ^1`$. In Fig. 5 this work function is shown as a function of the confinement strength for quantum tubes corresponding to $`(n,n)`$ armchair carbon nanotubes. 
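A sketch of this bookkeeping, inverting the relation between the work function, the tube Fermi energy, and the binding energy to obtain $`\gamma `$ (units $`\mathrm{}^2/2m_e=1`$; the numbers are placeholders, not material values):

```python
import numpy as np
from scipy.special import i0, k0

W, E_F_T, R_T = 1.0, 0.5, 1.0        # placeholder dimensionless values
kap0 = np.sqrt(W + E_F_T)            # |eps_0| = kap0^2 = W + E_F^T
gamma = 1.0 / (i0(kap0 * R_T) * k0(kap0 * R_T))
print(gamma)                          # confinement strength for this work function
```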
From this we estimate that $`\gamma `$ in the range 10–20 is a reasonable regime for the curves shown in Figs. 3 and 4. This means that a transmission close to unity (per band at the Fermi level) can be expected if the armchair nanotube is embedded in free-electron metals like Al or Au. Assuming a work function $`W=4.5\mathrm{eV}`$ for the nanotube, we have used the corresponding value of $`\gamma `$ to calculate the transmission for the different nanotubes. For $`(n,n)`$ armchair nanotubes with Al and Au contacts we find $`𝒯_{\mathrm{Al}}`$ ≈ 0.93 and $`𝒯_{\mathrm{Au}}`$ ≈ 0.98, respectively. In the case of matching Fermi wave vectors ($`k_\mathrm{F}^\mathrm{T}/k_\mathrm{F}^\mathrm{C}=1`$) we get $`𝒯`$ ≈ 0.87. For the considered tubes (3 ≤ $`n`$ ≤ 17), these transmissions are found to be almost independent of the specific value of the tube indices $`(n,n)`$. Here we have only taken the geometry-related contact scattering into account. Physically, a lower transmission can be caused by electrons being scattered by interface imperfections/roughness, by deviations from a spherical Fermi surface of the metal contact, and by scattering due to non-matching work functions of the nanotube and metal. Scattering due to the non-matching Fermi velocities of the nanotube and the metal could also be expected. However, as shown in Fig. 4, a mismatch between Fermi wave vectors can in some cases actually increase the transmission (and thereby the conductance) through quantum interference, even though the mismatch by itself is known to give rise to momentum relaxation and thereby resistance. ## 5 Discussion and conclusion We have considered the contact resistance (in terms of transmission) of a quantum tube embedded in a free-electron metal. For the quantum tube we have modeled the radial confinement of the electron motion by an attractive delta-function potential, which gives rise to at least one bound state in the radial direction. The strength of the attractive potential can be associated phenomenologically with the work function of the quantum tube. Within this model we have calculated the transmission of a quantum tube contacted by a free-electron metal. Due to the cylindrical geometry of the contact, considerable analytical progress was possible, and with the resulting equations the scattering problem is readily solved numerically. As an application we have considered the transparency of contacts between armchair carbon nanotubes and free-electron metals. Our calculations show that in the absence of scattering mechanisms associated with, e.g., interface imperfections/roughness, deviations from a spherical Fermi surface of the metal contact, and non-matching work functions of the nanotube and metal, the geometry itself allows for a highly transparent contact between armchair carbon nanotubes and free-electron metal contacts. Furthermore, from this simple model we find that Al would be a good candidate for such a metal, as suggested recently by Tersoff . For Au, however, we find that the present 3D geometry allows for good contact, in contrast to Tersoff’s findings for Au, which were based on 1D considerations. ## Acknowledgements We would like to thank M. Brandbyge, H. Bruus, D.H. Cobden, and J. Nygård for useful discussions. ## A Normalization of radial wave functions From the radial wave function of the quantum tube, Eq.
(16), it follows that the normalization is given by $$A_m=\kappa _m\left[_0^{\kappa _mR_\mathrm{T}}d\alpha \alpha I_m^2(\alpha )+\frac{I_m^2(\kappa _mR_\mathrm{T})}{K_m^2(\kappa _mR_\mathrm{T})}_{\kappa _mR_\mathrm{T}}^{\mathrm{}}d\alpha \alpha K_m^2(\alpha )\right]^{1/2},$$ (31) and since $`{\displaystyle d\alpha \alpha I_m^2(\alpha )}`$ $`=`$ $`\alpha ^2\left[I_m^2(\alpha )I_{m1}(\alpha )I_{m+1}(\alpha )\right]/2,`$ (32) $`{\displaystyle d\alpha \alpha K_m^2(\alpha )}`$ $`=`$ $`\alpha ^2\left[K_m^2(\alpha )K_{m1}(\alpha )K_{m+1}(\alpha )\right]/2,`$ (33) we get the result in Eq. (17). Similarly, from the radial wave function of the free-electron metal contact, Eq. (21), it follows that the normalization is given by $$C_{\nu m}=\left[_0^{R_\mathrm{C}}drrJ_m^2(\kappa _{\nu m}r)\right]^{1/2},$$ (34) and since $`J_m(\kappa _{\nu m}R_\mathrm{C})=0`$ and $$d\alpha \alpha J_m^2(\alpha )=\alpha ^2\left[J_m^2(\alpha )J_{m1}(\alpha )J_{m+1}(\alpha )\right]/2,$$ (35) we obtain Eq. (24). ## B Overlap of radial wave functions The overlap of radial wave functions can be written as $`R_m|R_{\nu m}`$ $`=`$ $`A_mC_{\nu m}\left[{\displaystyle _0^{R_\mathrm{T}}}drrI_m(\kappa _mr)J_m(\kappa _{\nu m}r)+{\displaystyle \frac{I_m(\kappa _mR_\mathrm{T})}{K_m(\kappa _mR_\mathrm{T})}}{\displaystyle _{R_\mathrm{T}}^{R_\mathrm{C}}}drrK_m(\kappa _mr)J_m(\kappa _{\nu m}r)\right]`$ (36) $`=`$ $`{\displaystyle \frac{A_mC_{\nu m}}{\kappa _m^2+\kappa _{\nu m}^2}}{\displaystyle \frac{J_m(\kappa _{\nu m}R_\mathrm{T})+\kappa _{\nu m}R_\mathrm{C}I_m(\kappa _mR_\mathrm{T})K_m(\kappa _mR_\mathrm{C})J_{m+1}(\kappa _{\nu m}R_\mathrm{C})}{K_m(\kappa _mR_\mathrm{T})}},`$ where we have used the integrals $`{\displaystyle drrI_m(\alpha r)J_m(\beta r)}`$ $`=`$ $`{\displaystyle \frac{r\left\{\alpha I_{m+1}(\alpha r)J_m(\beta r)+\beta I_m(\alpha r)J_{m+1}(\beta r)\right\}}{\alpha ^2+\beta ^2}},`$ (37) $`{\displaystyle drrK_m(\alpha r)J_m(\beta r)}`$ $`=`$ $`{\displaystyle \frac{r\left\{\alpha K_{m+1}(\alpha r)J_m(\beta r)\beta K_m(\alpha r)J_{m+1}(\beta r)\right\}}{\alpha ^2+\beta ^2}},`$ (38) together with the boundary condition $`R_{\nu m}(R_\mathrm{C})=0`$.
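Several relative signs in the antiderivatives (37)–(38) appear to have been lost in extraction; a quick numerical check (arbitrary order and arguments) confirms them with the signs restored as in the comments below:

```python
import numpy as np
from scipy.special import iv, kv, jv
from scipy.integrate import quad

# Eq. (37): integral of x I_m(ax) J_m(bx) = x[ a I_{m+1} J_m + b I_m J_{m+1} ]/(a^2+b^2)
# Eq. (38): integral of x K_m(ax) J_m(bx) = x[ -a K_{m+1} J_m + b K_m J_{m+1} ]/(a^2+b^2)
m, a, b, lo, hi = 2, 1.3, 0.7, 0.5, 3.0
FI = lambda x: x * (a * iv(m + 1, a * x) * jv(m, b * x)
                    + b * iv(m, a * x) * jv(m + 1, b * x)) / (a**2 + b**2)
FK = lambda x: x * (-a * kv(m + 1, a * x) * jv(m, b * x)
                    + b * kv(m, a * x) * jv(m + 1, b * x)) / (a**2 + b**2)
print(quad(lambda x: x * iv(m, a * x) * jv(m, b * x), lo, hi)[0], FI(hi) - FI(lo))
print(quad(lambda x: x * kv(m, a * x) * jv(m, b * x), lo, hi)[0], FK(hi) - FK(lo))
```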
# A-type Antiferromagnetic and C-type Orbital-Ordered State in LaMnO3 Using Cooperative Jahn-Teller Phonons ## Abstract The effect of Jahn-Teller phonons on the magnetic and orbital structure of LaMnO<sub>3</sub> is investigated using a combination of relaxation and Monte Carlo techniques on three-dimensional clusters of MnO<sub>6</sub> octahedra. In the physically relevant region of parameter space for LaMnO<sub>3</sub>, and after including small corrections due to tilting effects, the A-type antiferromagnetic and C-type orbital structures were stabilized, in agreement with experiments. The theoretical understanding of manganese oxides is among the most challenging current areas of research in condensed matter physics. Experimental studies of manganites have revealed a rich phase diagram originating from the competition between charge, spin, and orbital degrees of freedoms. A simple starting framework for Mn-oxide investigations is contained in the double-exchange (DE) ideas, where ferromagnetism induced by hole doping arises from the optimization of the hole kinetic energy. In addition, recent results using the one-orbital model revealed a more complicated ground state, with phase separation tendencies strongly competing with ferromagnetism, leading to a potential explanation of the Colossal Magneto-Resistance effect. However, to understand the fine details of the phase diagram of manganites the one-orbital model is not sufficient since the highly nontrivial A-type spin antiferro (AF) and C-type orbital structures observed experimentally in the undoped material LaMnO<sub>3</sub> cannot be properly addressed in such a simple context. Certainly two-orbital models are needed to consider the nontrivial state of undoped manganites. In this framework the two-band model without phonons has been studied before, and the importance of the strong Coulomb repulsion has been remarked for the appearance of the A-AF state. However, Coulombic approaches have presented conflicting results regarding the orbital order that coexists with the A-type spin state, with several approaches predicting G-type orbital order, which is not observed in practice. In addition, many experiments suggest that Jahn-Teller (JT) phonons are important in manganites and, thus, at present it is unclear whether the dominant interaction between electrons in Mn-oxides should be considered Coulombic or phonon mediated. For this reason it is important to analyze if a purely JT-phononic calculation is able to reproduce the experimental properties of undoped manganites, goal that provides the motivation for the present paper. The main result observed in this effort is that A-AF order, in combination with a C-type orbital arrangement, is indeed induced by JT-phonons in realistic parameter regions for LaMnO<sub>3</sub>, namely large Hund-coupling between e<sub>g</sub>-electrons and t<sub>2g</sub>-spins, small AF interaction between t<sub>2g</sub>-spins, and strong electron-lattice coupling. This shows that JT-based calculations can lead to correct qualitative predictions for manganites, actually improving on purely Coulombic approaches in the orbital sector. To carry out the calculations note that experiments have revealed an orbital structure tightly related to the JT-distortion of the MnO<sub>6</sub> octahedron. If each JT-distortion would occur independently, optimal orbitals can be determined by minimizing the kinetic and interaction energy of the e<sub>g</sub>-electrons, as in models with only Coulomb interactions. 
However, oxygens are shared between adjacent MnO<sub>6</sub> octahedra, indicating that the JT-distortions occurs cooperatively. Particularly in the undoped situation, all MnO<sub>6</sub> octahedra exhibit JT-distortions, indicating that such a cooperative effect is important. Thus, in order to understand the magnetic and orbital structures of LaMnO<sub>3</sub>, the electron and lattice systems must be optimized simultaneously. However, not much effort has been devoted to the microscopic treatment of the cooperative effect, although the JT-effect in the Mn-oxides has been addressed by several groups. To remedy this situation, here a computational investigation of cooperative JT-phonons in manganites is carried out, focusing on the $`n=1`$ density, where $`n`$ is the electron number per site. The motion of e<sub>g</sub>-electrons tightly coupled to the localized t<sub>2g</sub>-spins and the local distortions of the MnO<sub>6</sub> octahedra is described by $`H`$ $`=`$ $`{\displaystyle \underset{\mathrm{𝐢𝐚}\gamma \gamma ^{}\sigma }{}}t_{\gamma \gamma ^{}}^𝐚c_{𝐢\gamma \sigma }^{}c_{𝐢+𝐚\gamma ^{}\sigma }J_\mathrm{H}{\displaystyle \underset{𝐢\gamma \sigma \sigma ^{}}{}}𝐒_𝐢c_{𝐢\gamma \sigma }^{}𝝈_{\sigma \sigma ^{}}c_{𝐢\gamma \sigma ^{}}`$ (1) $`+`$ $`\lambda {\displaystyle \underset{𝐢\sigma \gamma \gamma ^{}}{}}c_{𝐢\gamma \sigma }^{}(Q_{1𝐢}\sigma _0+Q_{2𝐢}\sigma _1+Q_{3𝐢}\sigma _3)_{\gamma \gamma ^{}}c_{𝐢\gamma ^{}\sigma }`$ (2) $`+`$ $`J^{}{\displaystyle \underset{𝐢,𝐣}{}}𝐒_𝐢𝐒_𝐣+(1/2){\displaystyle \underset{𝐢}{}}(\beta Q_{1𝐢}^2+Q_{2𝐢}^2+Q_{3𝐢}^2),`$ (3) where $`c_{𝐢a\sigma }`$ ($`c_{𝐢b\sigma }`$) is the annihilation operator for an e<sub>g</sub>-electron with spin $`\sigma `$ in the $`d_{x^2y^2}`$ ($`d_{3z^2r^2}`$) orbital at site $`𝐢`$. The vector connecting nearest-neighbor (NN) sites is $`𝐚`$, $`t_{\gamma \gamma ^{}}^𝐚`$ is the hopping amplitude between $`\gamma `$\- and $`\gamma ^{}`$-orbitals connecting NN-sites along the $`𝐚`$-direction via the oxygen 2$`p`$-orbital, $`J_\mathrm{H}`$ is the Hund coupling, $`𝐒_𝐢`$ the localized classical t<sub>2g</sub>-spin normalized to $`|𝐒_𝐢|=1`$, and $`𝝈=(\sigma _1,\sigma _2,\sigma _3)`$ are the Pauli matrices. The dimensionless electron-phonon coupling constant is $`\lambda `$, $`Q_{1𝐢}`$ denotes the dimensionless distortion for the breathing mode of the MnO<sub>6</sub> octahedron, $`Q_{2𝐢}`$ and $`Q_{3𝐢}`$ are, respectively, JT distortions for the $`(x^2y^2)`$\- and $`(3z^2r^2)`$-type modes, and $`\sigma _0`$ is the unit matrix. $`J^{}`$ is the AF-coupling between NN t<sub>2g</sub>-spins, and $`\beta `$ is a parameter to be defined below. To account for the cooperative nature of the JT-phonons, the normal coordinates for distortions of the MnO<sub>6</sub> octahedron are written as $`Q_{1𝐢}=(1/\sqrt{3})(L_{\mathrm{𝐱𝐢}}+L_{\mathrm{𝐲𝐢}}+L_{\mathrm{𝐳𝐢}})`$, $`Q_{2𝐢}=(1/\sqrt{2})(L_{\mathrm{𝐱𝐢}}L_{\mathrm{𝐲𝐢}})`$, and $`Q_{3𝐢}=(1/\sqrt{6})(2L_{\mathrm{𝐳𝐢}}L_{\mathrm{𝐱𝐢}}L_{\mathrm{𝐲𝐢}})`$, where $`L_{\mathrm{𝐚𝐢}}`$ denotes the distance between neighboring oxygens along the $`𝐚`$-direction, given by $`L_{\mathrm{𝐚𝐢}}=L_𝐚+(u_𝐢^𝐚u_{𝐢𝐚}^𝐚)`$. Here, $`L_𝐚`$ is the length between Mn-ions along the $`𝐚`$-axis and $`u_𝐢^𝐚`$ denotes the deviation of oxygen from the equilibrium position along the Mn-Mn bond in the $`𝐚`$-direction. In general, $`L_𝐚`$ can be different for each direction, depending on the bulk properties of the lattice. 
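To make the cooperative constraint concrete, here is a small sketch that assembles $`Q_1`$, $`Q_2`$, $`Q_3`$ from a given set of shared-oxygen displacements on a periodic lattice; the array layout and the helper name `jt_modes` are our own conventions, not from the paper:

```python
import numpy as np

def jt_modes(u, L=1.0):
    """Cooperative Jahn-Teller coordinates from shared-oxygen displacements.

    u["x"], u["y"], u["z"]: arrays of shape (Nx, Ny, Nz) holding u_i^a, the
    displacement of the oxygen on the bond leaving site i along a (PBC)."""
    Lb = {a: L + u[a] - np.roll(u[a], 1, axis=ax)   # L_a + (u_i^a - u_{i-a}^a)
          for ax, a in enumerate("xyz")}
    Q1 = (Lb["x"] + Lb["y"] + Lb["z"]) / np.sqrt(3.0)
    Q2 = (Lb["x"] - Lb["y"]) / np.sqrt(2.0)
    Q3 = (2.0 * Lb["z"] - Lb["x"] - Lb["y"]) / np.sqrt(6.0)
    return Q1, Q2, Q3
```

Because each oxygen enters two octahedra, shifting a single $`u_𝐢^𝐚`$ changes the $`Q`$'s on two neighboring sites at once, which is precisely the cooperative effect emphasized above.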
Since the present work focuses on the microscopic mechanism for A-AF formation in LaMnO<sub>3</sub>, the undistorted lattice with $`L_𝐱=L_𝐲=L_𝐳`$ is treated first, and then corrections will be added. In the cubic undistorted lattice, the hopping amplitudes are given by $`t_{\mathrm{aa}}^𝐱=\sqrt{3}t_{\mathrm{ab}}^𝐱=\sqrt{3}t_{\mathrm{ba}}^𝐱=3t_{\mathrm{bb}}^𝐱=t`$ for the $`𝐱`$-direction, $`t_{\mathrm{aa}}^𝐲=\sqrt{3}t_{\mathrm{ab}}^𝐲=\sqrt{3}t_{\mathrm{ba}}^𝐲=3t_{\mathrm{bb}}^𝐲=t`$ for the $`𝐲`$-direction, and $`t_{\mathrm{bb}}^𝐳=4t/3`$ with $`t_{\mathrm{aa}}^𝐳=t_{\mathrm{ab}}^𝐳=t_{\mathrm{ba}}^𝐳=0`$ for the $`𝐳`$-direction. The energy unit is $`t`$. The parameter $`\beta `$ is defined as $`\beta =(\omega _{\mathrm{br}}/\omega _{\mathrm{JT}})^2`$, where $`\omega _{\mathrm{br}}`$ and $`\omega _{\mathrm{JT}}`$ are the vibration energies for manganite breathing- and JT-modes, respectively, assuming that the reduced masses for those modes are equal. Using experimental results and band-calculation data for $`\omega _{\mathrm{br}}`$ and $`\omega _{\mathrm{JT}}`$, it can be shown that $`\beta 2`$. However, the results presented here are basically unchanged as long as $`\beta `$ is larger than unity. To study Hamiltonian Eq. (1), two numerical techniques were here applied. One is the relaxation technique, in which the optimal positions of the oxygens are determined by minimizing the total energy. In this calculation, only the stretching mode for the octahedron, namely $`u_𝐢^𝐚+u_{𝐢𝐚}^𝐚=0`$, is taken into account. Moreover, the relaxation has been performed for fixed structures of the t<sub>2g</sub>-spins such as ferro (F), A-type AF (A-AF), C-type AF (C-AF), and G-type AF (G-AF), shown in Fig. 1(a). The advantage of this method is that the optimal orbital structure can be rapidly obtained on small clusters. However, the assumptions involved in the relaxation procedure should be checked with an independent method. Such a check is performed with the unbiased MC simulations used before by our group. The dominant magnetic and orbital structures are deduced from correlation functions. In the MC method, the clusters currently reachable are $`2\times 2\times 2`$, $`4\times 4\times 2`$, and $`4\times 4\times 4`$. In spite of this size limitation, arising from the large number of degrees of freedom in the problem, the available clusters are sufficient for our mostly qualitative purposes. In addition, the remarkable agreement between MC and relaxation methods lead us to believe that our results are representative of the bulk limit. In Fig. 1(b), the mean-energy is presented as a function of $`J^{}`$ for $`J_\mathrm{H}=8`$ and $`\lambda =1.5`$, on a $`2\times 2\times 2`$ cluster with open boundary conditions. The solid lines and symbols indicate the results obtained with the relaxation technique and MC simulations, respectively. The agreement is excellent, showing that the relaxation method is accurate. The small deviations between the results of the two techniques are caused by temperature effects. As intuitively expected, with increasing $`J^{}`$ the optimal magnetic structure changes from ferro- to antiferromagnetic, and this occurs in the order F$``$A-AF$``$C-AF$``$G-AF. To check size effects, the t<sub>2g</sub>-spin correlation function $`S(𝐪)`$ was calculated also in $`4\times 4\times 2`$ and $`4\times 4\times 4`$ clusters, where $`S(𝐪)=(1/N)_{𝐢,𝐣}e^{i𝐪(𝐢𝐣)}𝐒_𝐢𝐒_𝐣`$, $`N`$ is the number of sites, and $`\mathrm{}`$ indicates the thermal average value. As shown in Fig. 
1(c), with increasing $`J^{}`$ the dominant correlation changes in the order of $`𝐪=(0,0,0)`$, $`(\pi ,0,0)`$, $`(\pi ,\pi ,0)`$, and $`(\pi ,\pi ,\pi )`$. The values of $`J^{}`$ at which the spin structures changes agree well with those in Fig. 1(b). The shape of the occupied orbital arrangement with the lowest energy for each magnetic structure is in Fig. 1(d). For the F-case, the G-type orbital structure is naively expected, but actually a more complicated orbital structure is stabilized, indicating the importance of the cooperative treatment for JT-phonons. For the A-AF state, only the C-type structure is depicted, but the G-type structure, obtained by a $`\pi `$/2-rotation of the upper $`x`$-$`y`$ plane of the C-type state, was found to have exactly the same energy. Small corrections will remove this degeneracy in favor of the C-type as described below. For C- and G-AF, the obtained orbital structures are G- and C-types, respectively. Although the same change of the magnetic structure due to $`J^{}`$ was already reported in the electronic model with purely Coulomb interactions, the orbital structures in those previous calculations were G-, G-, A-, and A-type for the F-, A-AF, C-AF, and G-AF spin states, respectively. Note that for the A-AF state, of relevance for the undoped manganites, the G-type order was obtained, although in another treatment for the Coulomb interaction, the C- and G-type structures were found to be degenerate, as in our calculation. In Figs. 2 (a) and (b), the phase diagrams on the $`(J^{},\lambda )`$-plane are shown for $`J_\mathrm{H}=4`$ and $`8`$, respectively. The curves are drawn by the relaxation method. As expected, the F-region becomes wider with increasing $`J_\mathrm{H}`$. When $`\lambda `$ is increased at fixed $`J_\mathrm{H}`$, the magnetic structure changes from F$``$A-AF$``$C-AF$``$G-AF. This tendency is qualitatively understood if the two-site problem is considered in the limit $`J_\mathrm{H}1`$ and $`E_{\mathrm{JT}}1`$, where $`E_{\mathrm{JT}}`$ is the static JT-energy given by $`E_{\mathrm{JT}}=\lambda ^2/2`$. The energy-gain due to the second-order hopping process of e<sub>g</sub>-electrons is roughly $`\delta E_{\mathrm{AF}}1/J_\mathrm{H}`$ and $`\delta E_\mathrm{F}1/E_{\mathrm{JT}}`$ for AF- and F-spin pairs, respectively. Increasing $`E_{\mathrm{JT}}`$, $`\delta E_\mathrm{F}`$ decreases, indicating the relative stabilization of the AF-phase. In our phase diagram, the A-AF phase appears for $`\lambda 1.1`$ and $`J^{}0.15`$. This region does not depend much on $`J_\mathrm{H}`$, as long as $`J_\mathrm{H}1`$. Although $`\lambda `$ seems to be large, it is realistic from an experimental viewpoint: $`E_{\mathrm{JT}}`$ is $`0.25`$eV from photoemission experiments and $`t`$ is estimated as $`0.20.5`$eV, leading to $`1\lambda 1.6`$. As for $`J^{}`$, it is estimated as $`0.02J^{}0.1`$. Thus, the location in parameter-space of the A-AF state found here is reasonable when compared with experimental results for LaMnO<sub>3</sub>. Let us now focus on the orbital structure in the A-AF phase. In the cubic lattice studied thus far, the C- and G-type orbital structures are degenerate, and it is unclear whether the orbital pattern in the $`x`$-$`y`$ plane corresponds to the alternation of $`3x^2r^2`$ and $`3y^2r^2`$ orbitals observed in experiments. To remedy the situation, some empirical facts observed in manganites become important: (i) The MnO<sub>6</sub> octahedra are slightly tilted from each other, leading to modifications in the hopping matrix. 
Among these modifications, the generation of a non-zero value for $`t_{\mathrm{aa}}^𝐳`$ is important. (ii) The lattice is not cubic, but the relation $`L_𝐱L_𝐲>L_𝐳`$ holds. From experimental results, these numbers are estimated as $`L_𝐱=L_𝐲=4.12`$Å and $`L_𝐳=3.92`$Å, indicating that the distortion with $`Q_3`$-symmetry occurs spontaneously. Note that the hopping amplitude and $`J^{}`$ along the $`z`$-axis are different from those in the $`x`$-$`y`$ plane due to this distortion. Motivated by these observations, the energies for C- and G-type orbital structures were recalculated including this time a nonzero value for $`t_{\mathrm{aa}}^𝐳`$ in the magnetic A-AF state (see Fig. 3(a)). In the real material, it can be shown based on symmetry considerations that the tilting of the MnO<sub>6</sub> octahedra will always lead to a positive value for $`t_{\mathrm{aa}}^𝐳`$. Then, the results of Fig. 3(a) suggest that the C-type orbital structure should be stabilized in the real materials, and the explicit shape of the occupied orbitals is shown in Fig. 3(b). The experimentally relevant C-type structure with the approximate alternation of $`3x^2r^2`$ and $`3y^2r^2`$ orbitals is indeed successfully obtained by this procedure. Although the octahedron tilting actually leads to a change of all hopping amplitudes, effect not including in this work, the present analysis is sufficient to show that the C-type orbital structure is stabilized in the A-AF magnetic phase when $`t_{\mathrm{aa}}^𝐳`$ is a small positive number, as it occurs in the real materials. In this work the Coulomb interactions (intra-orbital Coulomb $`U`$, inter-orbital Coulomb $`U^{}`$, and inter-orbital exchange $`J`$) have been neglected. For Mn-oxides, they are estimated as $`U=7`$eV, $`J=2`$eV, and $`U^{}=5`$eV, which are large compared to $`t`$. However, the result for the optimized distortion described in this paper, obtained without the Coulomb interactions, is not expected to change since the energy gain due to the JT-distortion is maximized when a single e<sub>g</sub>-electron is present per site. This is essentially the same effect as produced by a short-range repulsion. In fact, it has been checked explicitly by using the Exact Diagonalization method that the JT- and breathing-distortions are not changed by $`U^{}`$ on a $`2\times 2`$ cluster using the F-state in which $`U`$ and $`J`$ can be neglected. In addition, the MC simulations show that the probability of doble occupancy of a single orbital is negligible where the A-type spin, C-type orbital state is stable. Based on all these observations, it is believed that that the effect of the Coulomb interaction is not crucial for the appearance of the A-AF state with the proper orbital order. Another way to rationalize this result is that the integration of the JT-phonons at large $`\lambda `$ will likely induce Coulombic interactions dynamically. Finally, let us briefly discuss transitions induced by the application of external magnetic fields on undoped manganites. When the A-AF state is stabilized, the energy difference (per site) obtained in our study between the A-AF and F states is about $`t/100`$. As a consequence, magnetic fields of $`2050`$T could drive the transition from A-AF to F order accompanied by a change of orbital structure, interesting effect which may be observed in present magnetic field facilities. 
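Both numerical estimates in this discussion are quick arithmetic checks; a sketch assuming the static JT energy carries one factor of $`t`$ in physical units (so that $`E_{\mathrm{JT}}=\lambda ^2t/2`$) and a Zeeman scale $`g\mu _\mathrm{B}B\mathrm{\Delta }E`$ with an assumed $`g=2`$:

```python
import numpy as np

E_JT = 0.25               # eV, photoemission estimate quoted above
mu_B, g = 5.788e-5, 2.0   # Bohr magneton in eV/T; assumed g-factor of 2
for t_eV in (0.2, 0.5):   # hopping estimates quoted above
    lam = np.sqrt(2.0 * E_JT / t_eV)   # from E_JT = lambda^2 t / 2
    B = (t_eV / 100.0) / (g * mu_B)    # field where g mu_B B ~ t/100
    print(f"t = {t_eV} eV: lambda = {lam:.2f}, B ~ {B:.0f} T")
```

This reproduces couplings between 1.0 and 1.6 and fields of roughly 17–43 T, consistent with the 20–50 T range quoted above.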
In summary, using numerical techniques at $`n=1`$ we have shown that the A-AF state is stable in models with JT-phonons, for coupling values physically reasonable for LaMnO<sub>3</sub>. Our results indicate that it is not necessary to include large Coulombic interactions in the calculations to reproduce experimental results for the manganites. By including the small but important effect of the octahedron tilting present in the real materials, the C-type orbital structure (with the alternating pattern of $`3x^2-r^2`$ and $`3y^2-r^2`$ orbitals) has been successfully reproduced for the A-AF phase in this context. T.H. thanks Y. Takada and H. Koizumi for enlightening discussions. T.H. is supported by the Ministry of Education, Science, Sports, and Culture of Japan. E.D. is supported by grant NSF-DMR-9814350.
# Radio Detections of Stellar Winds from the Pistol Star and Other Stars in the Galactic Center Quintuplet Cluster ## 1 Introduction High resolution near-infrared (near-IR) observations over the past decade have revealed the presence of massive, unusual stars in the inner 50 pc of the Galaxy, where observations suffer 20$``$30 visual magnitudes of obscuration. Three clusters of massive stars have been discovered (Glass et al. 1990; Nagata et al. 1990; Okuda et al. 1990; Krabbe et al. 1995; Nagata et al. 1995; Figer et al. 1995, Cotera et al. 1996): (1) the Central cluster, located within a parsec of SgrA\*; (2) the Arches cluster, located $``$30 pc N of SgrA\* at $`\mathrm{}`$=0$`\stackrel{}{\mathrm{.}}`$12, b=0$`\stackrel{}{\mathrm{.}}`$02, and (3) the Quintuplet cluster, also located $``$35 pc N of SgrA\*, at $`\mathrm{}`$=0$`\stackrel{}{\mathrm{.}}`$16, b=0$`\stackrel{}{\mathrm{.}}`$06. The stars detected in these clusters have near-IR signatures of OB supergiants and Wolf-Rayet type stars. In the densely populated Quintuplet cluster alone, at least 8 Wolf-Rayet and over a dozen OB supergiants have been discovered. Based on the evolutionary stages of the stars, this cluster is likely to be 3.5 Myr old, with a total estimated mass of $``$10<sup>4</sup> M, and a mass density of a few thousand M pc<sup>-3</sup> (Figer et al. 1999a). The near-IR emission line spectra of stars in the three Galactic center clusters indicate that these stars have evolved away from the zero-age main sequence and have high-velocity stellar winds with terminal wind speeds of 500$``$1000 km s<sup>-1</sup> (Nagata et al. 1995, Figer et al. 1999a, Cotera et al. 1996, Krabbe et al. 1995, Tamblyn et al. 1996). These powerful winds should be detectable at radio wavelengths, as the radio emission is thermal in nature (i.e., free-free) and arises from the outer parts of the ionized wind envelope. The classic theory of Panagia & Felli (1975) and Wright & Barlow (1975) predicts that in the radio regime, the spectrum of wind emission is proportional to $`\nu ^{+0.6}`$ for a spherically symmetric, isothermal, stationary wind expanding at a constant velocity. Previous surveys made with the Very Large Array (VLA)<sup>1</sup><sup>1</sup>1The VLA is a facility of the National Science Foundation, operated under a cooperative agreement by the Associated Universities, Inc. have detected radio emission arising from the ionized winds surrounding OB supergiants and Wolf-Rayet stars (Abbott et al. 1986; Bieging et al. 1989). VLA continuum images at 6 cm (4.9 GHz) and 3.6 cm (8.3 GHz) of the Sickle and Pistol H II regions near the Galactic center reveal six point sources located in the vicinity of the Quintuplet cluster including the radio source at the position of the Pistol Star (Lang et al. 1997; Yusef-Zadeh & Morris 1987). The coincidence of the Pistol star in the near-IR with a peak in the 6 cm radio continuum image was first noted by Figer et al. (1998). In this paper we report that two of the newly identified radio sources in addition to the Pistol Star source are found to be coincident in position with massive stars in the $`\lambda `$=2.05 $`\mu `$m HST/NICMOS image of the Quintuplet cluster (Figer et al. 1999b). We discuss the nature of the radio sources and the association with stellar sources in the HST/NICMOS image. ## 2 Observations ### 2.1 VLA Continuum Observations Table 1 summarizes the VLA continuum images in which the radio point sources are detected. 
Standard procedures for data reduction and imaging in AIPS have been used in all cases. Both images were made with uniform weighting and have been corrected for primary beam attenuation. The 6 cm continuum image was made with the data published in Yusef-Zadeh & Morris (1987), observed with the VLA in the B, C, and D arrays, and later supplemented with A-array data, to achieve a resolution of 1$`\stackrel{}{\mathrm{.}}`$33 $`\times `$ 1$`\stackrel{}{\mathrm{.}}`$05, at PA=10°. ### 2.2 HST/NICMOS Imaging In order to search for stellar counterparts to the radio point sources, a careful alignment was made between the HST/NICMOS $`\lambda `$=2.05 $`\mathrm{\mu m}`$ image of the Quintuplet cluster (Figer et al. 1999b) and the VLA 8.3 GHz continuum image of this region. The Quintuplet cluster was imaged by HST/NICMOS in a mosaic pattern in the NIC2 aperture (19$`\stackrel{}{\mathrm{.}}`$2 on a side) on UT 1997 September 13/14 in the F205W filter ($`\lambda `$=2.05 $`\mathrm{\mu m}`$). The MULTIACCUM read mode with NREADS=11 was used for an effective exposure time of 255 seconds per image. The plate scale was 0$`\stackrel{}{\mathrm{.}}`$076 pixel<sup>-1</sup> (x) by 0$`\stackrel{}{\mathrm{.}}`$075 pixel<sup>-1</sup> (y), in detector coordinates. The cluster was imaged in a 4$`\times `$4 mosaic, and the +y axis of the detector was oriented 135° East of North. The images were reduced via the standard NICMOS pipeline (CALNICA, CALNICB; MacKenty et al. 1997) (see Figer et al. 1999b for further details). A coordinate solution for the HST/NICMOS image was generated by assigning known positions to $``$30 Quintuplet cluster stars using the stellar identifications and coordinates from Figer et al. (1999a), which were obtained using the 3-m Shane telescope at University of California’s Lick Observatory. These near-IR positions have an absolute accuracy of $``$0$`\stackrel{}{\mathrm{.}}`$5. The VLA observations at both 3.6 cm and 6 cm have an uncertainty of only $``$0$`\stackrel{}{\mathrm{.}}`$1, due to signal to noise and the known precision of the calibrator sources. Thus, the alignment of the VLA and HST/NICMOS images has a positional accuracy of $`\sigma `$=0$`\stackrel{}{\mathrm{.}}`$5. ## 3 Results ### 3.1 Radio Flux Density Measurements Figure 1 shows the 3.6 cm continuum image of Lang et al. (1997) in the vicinity of the Pistol nebula and the Quintuplet cluster. The six radio point sources discussed in this paper are labelled QR1$``$QR5 and the Pistol Star. The crosses in the image represent the positions of HST/NICMOS sources that are associated with the radio sources; these associations are further discussed below in $`\mathrm{\S }`$3.2. In previous radio observations, Yusef-Zadeh & Morris (1987) identified the Pistol nebula as the prominent pistol-shaped source at the center of Figure 1; it has a stellar source near its center of curvature, the Pistol Star (Figer et al. 1998; and references therein). The H92$`\alpha `$ recombination line study of Lang et al. (1997) characterizes the Pistol nebula as having an electron temperature of T<sub>e</sub>=3300 K, a complex velocity structure with central velocity near v<sub>LSR</sub> $``$120 km s<sup>-1</sup>, and extremely broad lines ($`\mathrm{\Delta }`$v$``$60 km s<sup>-1</sup>). In addition, a possible detection of He92$`\alpha `$ was made, with a helium to hydrogen abundance, Y<sup>+</sup>=14$`\pm `$6%. The continuum emission from the Pistol nebula suggests an H II mass of 11 M. 
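For reference, the two-point spectral index follows directly from a pair of flux densities; a sketch assuming the 3.6 cm and 6 cm bands correspond to 8.3 GHz and roughly 4.9 GHz, applied to the Pistol Star values quoted later in the text:

```python
import numpy as np

def spectral_index(S1, S2, nu1=8.3, nu2=4.9):
    """alpha in S_nu ~ nu^alpha from flux densities at two frequencies (GHz)."""
    return np.log(S1 / S2) / np.log(nu1 / nu2)

# Pistol Star: 5.8 mJy at 3.6 cm, 7.4 mJy at 6 cm -> alpha ~ -0.46
print(spectral_index(5.8, 7.4))
```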
The absence of molecular material associated with the Pistol nebula, coupled with the low value of T<sub>e</sub> compared to other Galactic center H II regions, suggest that this nebula may in fact be the ejecta from a previous stage of the Pistol Star’s evolution (Figer et al. 1995; 1998). The source of ionization of the Pistol nebula is primarily due to the radiation field from several of the Quintuplet cluster members (Figer et al. 1995, 1999c; Lang et al. 1997). With an rms noise level in the 3.6 cm image of 0.2 mJy beam<sup>-1</sup>, the six point sources shown in Figure 1 (QR1$``$QR5 and the Pistol Star source) are detected with S/N ratios between 5 and 10. These sources are also detected at 6 cm with S/N ratios between 5 and 8. In order to calculate the flux densities at both frequencies, cross cuts were made in both RA and DEC across each point source. Table 2 lists the positions of the point sources, the flux densities at each wavelength, and the spectral index and the deconvolved source size derived from these measurements. The radio sources QR1$``$QR5 have rising spectral indices, $`\alpha `$ = +0.5$`\pm `$0.4 to +0.8$`\pm `$0.4, (where S<sub>ν</sub> $``$ $`\nu ^\alpha `$), whereas the Pistol Star has a spectral index of $`\alpha `$ = $``$0.4$`\pm `$0.2, consistent with a flat or slightly falling spectrum. ### 3.2 HST/NICMOS Counterparts to VLA sources The crosses in Figure 1 show three HST/NICMOS sources (q15, q10, and the Pistol star) which are likely associated with the radio point sources QR4, QR5, and the Pistol Star. The angular offsets in the radio/near-IR positions are $``$ 3$`\sigma `$; the error in the alignment is dominated by the uncertainty in the near-IR positions of $`\sigma `$=0$`\stackrel{}{\mathrm{.}}`$5. Figure 2 shows the overlay between the HST/NICMOS image and the 8.3 GHz continuum image (shown in Figure 1). It is also apparent in Figure 2 that three of the radio sources (QR4, QR5 and Pistol Star source) are coincident with HST/NICMOS sources, and that three of the radio sources (QR1, QR2 and QR3) do not have HST/NICMOS counterparts. Given the relatively large surface density of stars in the HST/NICMOS image of the Quintuplet cluster, the possibility of a chance superposition between a radio source and any HST/NICMOS source is not negligible. However, the probability is much smaller that a randomly placed radio source with a flux density $`>`$ 1 mJy (5 $`\sigma `$) is coincident with a near-IR source that has been classified as a hot, massive star with a high mass-loss rate (16 sources total in a 77″ $`\times `$ 74″ region of the HST/NICMOS image; c.f. Figer et al. 1999b). Excluding the Pistol Star as a special case, we calculate that the combined probability that 2 out of 5 radio sources would be randomly aligned (within the 3$`\sigma `$ positional uncertainty of 1$`\stackrel{}{\mathrm{.}}`$5) with one of the 16 near-IR supergiants is 4 $`\times `$ 10<sup>-5</sup>. Therefore, it is highly unlikely that these coincidences are due to chance superposition, and indeed represent real associations. ## 4 Discussion ### 4.1 The Nature of the Radio Sources The near-IR counterparts of QR4 and QR5 have been classified as hot, massive stars with high mass-loss rates: q15 has been classified as an OB I supergiant and q10 as WN9/Ofpe, according to Figer et al. (1999a). 
The radio point sources QR4 and QR5 are presumably detections of the stellar winds arising from the near-IR stars, since their spectra are consistent with $`\nu `$<sup>+0.6</sup> and they have near-IR counterparts. In addition, based on the classic theory of Wright & Barlow (1975) and Panagia & Felli (1975), it is possible to predict the radio flux density of the stellar wind arising from an OB supergiant found in the Quintuplet cluster at 3.6 cm. Assuming the following wind parameters (near the extreme values) for an OB supergiant at the Galactic center—a maximum mass loss rate of $`\dot{M}`$=10<sup>-4</sup> M yr<sup>-1</sup>, an electron temperature of T<sub>e</sub>=25,000 K, a terminal wind velocity of v=500 km s<sup>-1</sup>, and a distance of 8.0 kpc; the predicted radio flux density at 3.6 cm is $``$4 mJy. At this frequency, QR1$``$QR5 have flux densities in the range of 2$``$6 mJy, consistent with this prediction. Given the rms noise in our images of 0.2 mJy beam<sup>-1</sup>, we are capable of detecting emission from the winds of OB supergiants in the region of the Quintuplet cluster, and the radio sources are most likely detections of these ionized winds. Since there are at least 8 Wolf-Rayet type stars in the Quintuplet cluster, we can also estimate the radio flux density at 3.6 cm for these stars, using the following wind parameters: Ṁ=5 $`\times `$ 10<sup>-5</sup> M yr<sup>-1</sup>, T<sub>e</sub>=40,000 K, v=2000 km s<sup>-1</sup>, and d=8.0 kpc; the predicted flux density for a Wolf-Rayet star at the Galactic center is $``$0.05 mJy at 3.6 cm. Since the rms noise in both of the VLA continuum maps is 0.2 mJy beam<sup>-1</sup>, we would clearly not have detected the mass-losing Wolf-Rayet stars in the current data, and are only sensitive to the winds arising from OB supergiants. Although the sources QR1, QR2, and QR3 are detected with S/N $`>`$ 5, and have spectral indices consistent with stellar wind sources, they have no obvious HST/NICMOS stellar counterparts. A possible explanation is that the near-IR extinction varies across the cluster, and that the stellar counterparts of QR1, QR2 and QR3 are masked by greater extinction than the stellar counterparts of QR4 and QR5. If we invoke extinction to explain the lack of counterparts for QR1$``$QR3, then a near-IR extinction A<sub>k</sub> $`>`$ 8 is required, corresponding to a visual extinction A<sub>v</sub> $`>`$ 80. This kind of extinction is only possible if a dense molecular cloud is located in front of part of the Quintuplet cluster. In that case, the unseen counterparts could still be members of the cluster. However, there is no evidence for such a molecular cloud in this region, which makes this suggestion unlikely. ### 4.2 The Pistol Star The spectral index of the Pistol Star ($`\alpha `$ = $``$0.4$`\pm `$0.2) is consistent with a flat or slightly falling spectrum. It does not follow the classic theory for a fully ionized wind, which predicts a rising spectrum, $`\alpha `$ = +0.6. The Pistol Star, a prominent source in the near-IR HST/NICMOS image, has been classified as a Luminous Blue Variable (LBV) by Figer et al. (1998) and has a stellar wind. Based on the stellar parameters for the Pistol Star (c.f., Figer et al. 1998, the “L” model), Ṁ=3.8 $`\times `$ 10<sup>-5</sup> M yr<sup>-1</sup>, T<sub>e</sub>=12,000 K, v=100 km s<sup>-1</sup>, and a distance of 8.0 kpc, the predicted radio flux density at 3.6 cm is $``$9 mJy using the formulation of Panagia & Felli (1973) and Wright & Barlow (1975). 
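These numbers can be reproduced with the standard Panagia & Felli (1975) / Wright & Barlow (1975) parameterization; the sketch below assumes a fully ionized, pure-hydrogen wind at distance d, which recovers the $``$4 mJy and $``$9 mJy estimates above (the Wolf-Rayet value additionally involves wind-composition factors omitted here):

```python
def wind_flux_mJy(mdot, v_inf, T_e, nu_GHz=8.3, d_kpc=8.0):
    """Thermal wind flux in mJy for mdot in M_sun/yr, v_inf in km/s, T_e in K.

    Pure-hydrogen, fully ionized wind assumed; note S_nu ~ nu^0.6 as in the text."""
    return (5.12 * (nu_GHz / 10.0)**0.6 * (T_e / 1e4)**0.1
            * (mdot / 1e-5)**(4.0 / 3.0) * (v_inf / 1e3)**(-4.0 / 3.0)
            / d_kpc**2)

print(wind_flux_mJy(1e-4, 500.0, 25000.0))    # OB supergiant: ~4 mJy
print(wind_flux_mJy(3.8e-5, 100.0, 12000.0))  # Pistol Star:   ~9 mJy
```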
At 3.6 cm, the flux density of the Pistol Star is 5.8$`\pm `$1.0 mJy, and at 6 cm the flux density is 7.4$`\pm `$1.0 mJy. The radio emission of the Pistol Star source is likely a detection of the ionized wind arising from the Pistol Star. One possible explanation for the slightly falling spectrum is that the Pistol Star may have a non-thermal component in its wind over the cm-wavelength range. This type of spectral index has been observed from other supermassive stars, with $`\alpha `$ in the range of $`\alpha `$=$``$0.8 to 0.0 (Abbott et al. 1984; Persi et al. 1985). In fact, the VLA survey of of Galactic OB stars made by Bieging et al. (1989) finds that 24% of luminous supergiants are observed to have non-thermal spectra. This fraction is consistent with our results: 1 of the 6 radio point sources we detect has a slightly falling spectral index. Non-thermal emission is thought to arise either by means of shocks in the wind itself, in the shock between the stellar wind and a binary companion (Contreras et al. 1996), or from the interaction of the stellar wind with the remnant of a star’s previous evolutionary mass-loss phase (Leitherer et al. 1997). ## 5 Conclusions Six point sources were detected at 3.6 cm and 6 cm with the VLA in the vicinity of the Quintuplet cluster. These sources have rising spectra in the range of $`\alpha `$=+0.5 to +0.8, with the exception of the Pistol Star ($`\alpha `$=$``$0.4). Based on the overlay of the HST/NICMOS and 8.3 GHz VLA continuum images, three of these radio sources, including the Pistol Star source, can be identified with hot, massive stars with high-mass loss rates. Therefore, the radio sources are most likely detections of the ionized stellar winds emanating from the supermassive stars in the Quintuplet cluster. We would like to thank Luis Rodriguez for suggesting that the stellar wind of the Pistol Star may be detectable at 8.3 GHz. We would also like to thank Liese van Zee for help with the coordinate solution for the HST/NICMOS image, and Paco Najarro for useful comments on the theory of stellar winds. Figure 1 - VLA 3.6 cm radio continuum image of the six radio point sources detected at both 3.6 and 6 cm, labelled QR1$``$QR5, and the Pistol Star. This image has a resolution of 2$`\stackrel{}{\mathrm{.}}`$12 $`\times `$ 1$`\stackrel{}{\mathrm{.}}`$71, PA=59°. The contours represent 0.5, 1, 2.5, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22 mJy $`beam^1`$; where 0.5 mJy beam<sup>-1</sup> corresponds to 1.5$`\sigma `$. The crosses represent HST/NICMOS positions of stars that are associated with three of the radio sources, with positional uncertainties of $``$ 3$`\sigma `$, where $`\sigma `$=0$`\stackrel{}{\mathrm{.}}`$5. Figure 2 - VLA 3.6 cm continuum image (as shown in Figure 1) overlaid on HST/NICMOS $`\lambda `$=2.05$`\mathrm{\mu m}`$ image of the Quintuplet cluster.
# Electronic Shell Structure of Nanoscale Superconductors \[ ## Abstract Motivated by recent experiments on Al nanoparticles, we have studied the effects of fixed electron number and small size in nanoscale superconductors, by applying the canonical BCS theory for the attractive Hubbard model in two and three dimensions. A negative “gap” in particles with an odd number of electrons as observed in the experiments is obtained in our canonical scheme. For particles with an even number of electrons, the energy gap exhibits shell structure as a function of electron density or system size in the weak-coupling regime: the gap is particularly large for “magic numbers” of electrons for a given system size or of atoms for a fixed electron density. The grand canonical BCS method essentially misses this feature. Possible experimental methods for observing such shell effects are discussed. \] As the technology for fabricating ultrasmall metallic grains steadily improves, the typical sample dimensions are approaching molecular dimensions . The present accessibility to samples ranging from nanoscale to bulk has renewed interest in various features of the solid state that may or may not survive the excursion to ultrasmall dimensions. In superconducting Al samples, for example, the ability to distinguish even and odd numbers of electrons through tunneling experiments has called into question the use of the grand canonical ensemble to describe the electron pairing in these ultrasmall samples within a model with equal level spacings , and within the attractive Hubbard model . In these works the BCS theory of pairing was formulated within the canonical ensemble, following the early treatment of nuclei , and some rigorous “quality control” was provided by exact studies . Within the attractive Hubbard model we have found two prominent features that emerged from the canonical BCS treatment, both of which were verified by the exact solutions. The first is the existence of “negative gaps” for odd electron number grains. By this we simply mean that a tunneling bias less than the charging energy would be required to tunnel an electron onto a grain with an odd number of electrons. The second is the existence of what were termed “super-even” electron numbers, where the tunneling bias required to tunnel an electron onto a grain with certain even numbers of electrons would be unusually high. In this letter we investigate these features for various bandstructures in two and three dimensions, as might apply to Al, and briefly discuss some possible experiments to observe in particular the “super-even” effect. We have adopted the attractive Hubbard model, whose specifics are well known. The additional feature we include here is the possibility of using both periodic boundary conditions (PBC) as well as open boundary conditions (OBC), which are more appropriate for small systems. Either of these is accomplished through a unitary transformation to a basis that diagonalizes the kinetic energy term. The BCS variational calculation is then performed with a wave function containing pairs of time-reversed states $`(n,)`$ and $`(n,)`$ . 
The even and odd wave functions with $`\nu `$-pairs are given by $`|\mathrm{\Psi }_{2\nu }`$ $`=`$ $`c{\displaystyle \frac{1}{2\pi i}}{\displaystyle 𝑑\xi \xi ^{\nu 1}\underset{n}{}\left(\mathrm{\hspace{0.17em}1}+\xi g_na_n^{}a_n^{}\right)|0},`$ (1) $`|\mathrm{\Psi }_{2\nu +1}`$ $`=`$ $`c{\displaystyle \frac{1}{2\pi i}}{\displaystyle 𝑑\xi \xi ^{\nu 1}a_{m\sigma }^{}\underset{nm}{}\left(\mathrm{\hspace{0.17em}1}+\xi g_na_n^{}a_n^{}\right)|0},`$ (2) with $`N_e=2\nu `$ and $`N_e=2\nu +1`$, respectively. The contour integral is on any counterclockwise path that encloses the origin. For odd $`N_e`$, the blocked state $`m`$ is chosen so that it gives the lowest energy for a given coupling strength. Details with PBC were given previously , and with OBC they will be given elsewhere. We calculate the ground state energy for three systems with electron number $`N_e`$, $`N_e+1`$, and $`N_e+2`$, and evaluate the energy gap by the formula $`\mathrm{\Delta }_{N_e}=(E_{N_e1}2E_{N_e}+E_{N_e+1})/2`$. In the grand canonical ensemble, the number of electrons is fixed only on average. Thus we must solve the gap equation, $$\mathrm{\Delta }_n=\underset{m}{}(\mathrm{Re}V_{nn,mm})\frac{\mathrm{\Delta }_m}{2E_m},$$ (3) along with the number equation, $$n_e\frac{N_e}{N}=1\frac{1}{N}\underset{n}{}\frac{1}{E_n}(\stackrel{~}{ϵ}_n\mu ),$$ (4) for gap parameters $`\{\mathrm{\Delta }_n\}`$ and chemical potential $`\mu `$. Here, $`V_{nn,mm}`$ is the transformed interaction, $`E_n\sqrt{(\stackrel{~}{ϵ}_n\mu )^2+\mathrm{\Delta }_n^2}`$ is the quasiparticle energy and $`\stackrel{~}{ϵ}_n=ϵ_n+_mV_{nm,nm}\frac{g_m^2}{1+g_m^2}`$ is the single-particle energy modified with the Hartree term. The gap is given by $`\mathrm{\Delta }_0=\mathrm{min}(E_n)`$ for a finite size system, that is, with quantized energy levels $`\{ϵ_n\}`$. In the weak to intermediate coupling regime, the energy gap as a function of electron number (or electron density $`n_e`$) roughly reflects the single-particle density of states (DOS). Thus in a simple cubic (SC) lattice in three dimensions (3D), the gap in the bulk limit is a smooth function of $`n_e`$ that increases from zero at zero density to a maximum value at half filling. In Fig. 1(a) we show the single-electron DOS for a SC lattice of $`N=16\times 16\times 16=4096`$ sites with PBC (solid curve). The result has been smoothed by convolution with a normalized Gaussian, and it is very similar to the bulk density of states shown by the dashed curve. Although $`N=4096`$ is fairly large, the energy gap behaves quite differently from what we expect for a bulk system as a function of electron density for weak coupling. Results for the grand canonical BCS gap, $`\mathrm{\Delta }_0/t`$, are illustrated in Fig. 1(b) for $`|U|/t=2`$ and (c) for $`|U|/t=1`$. For $`|U|/t=2`$ the overall scale of the gap $`\mathrm{\Delta }_0`$ as a function of $`n_e`$ resembles $`g(ϵ)`$ shown in Fig. 1(a). However, it has many fine structures; discontinuities at small density and cusps at larger density. This non-smooth behaviour is a result of the discrete density of states, i.e., quantized energy levels $`\{ϵ_n\}`$ and their degeneracy in a finite size system. Such quantum structures of $`\mathrm{\Delta }_0`$ turn out to be prominent for weaker coupling strengths, as seen for $`|U|/t=1`$ in Fig. 1(c): in this case there are discontinuities in the gap for the entire range of density. 
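A minimal numerical sketch of both gap definitions on the SC lattice follows; the transformed interaction is simplified here to a constant pairing of strength $`|U|/N`$ (with the Hartree shift absorbed into the chemical potential), and `parity_gap` simply encodes the three-energy formula above for precomputed canonical ground-state energies:

```python
import numpy as np
from scipy.optimize import brentq

def sc_levels(Ns, t=1.0, dim=3):
    """eps_k = -2t sum_i cos(k_i) for an Ns^dim simple-cubic lattice with PBC."""
    k = 2.0 * np.pi * np.arange(Ns) / Ns
    grids = np.meshgrid(*([k] * dim), indexing="ij")
    return np.sort((-2.0 * t * sum(np.cos(g) for g in grids)).ravel())

def parity_gap(E, Ne):
    """Delta_Ne = (E_{Ne-1} - 2 E_{Ne} + E_{Ne+1})/2 from canonical energies E[N]."""
    return 0.5 * (E[Ne - 1] - 2.0 * E[Ne] + E[Ne + 1])

def gc_gap(levels, U, n_e):
    """Grand canonical Delta from Eqs. (3)-(4) with constant pairing -|U|/N."""
    N = levels.size
    def mu_of(Delta):
        n = lambda mu: 1.0 - np.mean((levels - mu)
                                     / np.hypot(levels - mu, Delta)) - n_e
        return brentq(n, levels.min() - 20.0, levels.max() + 20.0)
    def res(Delta):   # 1 - (|U|/N) sum 1/(2E) vanishes at the self-consistent gap
        E = np.hypot(levels - mu_of(Delta), Delta)
        return 1.0 - (U / N) * np.sum(0.5 / E)
    try:
        return brentq(res, 1e-8, 6.0)
    except ValueError:
        return 0.0    # only the trivial solution in this bracket

print(gc_gap(sc_levels(8), U=2.0, n_e=1.0))   # half filling, N = 512 sites
```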
The discontinuities or cusps in the gap arise from finite level spacings, while their positions as a function of $`n_e`$ and the magnitude of the gap are determined by the degeneracy of levels, as will be explained in detail shortly. In Fig. 2 (upper frame) the gap $`\mathrm{\Delta }_{N_e}`$ obtained by the canonical BCS is shown with crosses, along with the grand canonical gap $`\mathrm{\Delta }_0`$ (solid curve), as a function of electron density $`n_e`$, for a nanoscale system in weak coupling. The most obvious new feature is the “negative gap”, for systems with an odd number of electrons. As was already mentioned, this result has already been observed in small Al grains . For the even numbered grains note that most of the results shown follow the discontinuous, step-function-like behaviour of $`\mathrm{\Delta }_0`$. However, anomalously high values occur at densities where the grand canonical result has discontinuities. These anomalies follow from the analogue of shell effects for a finite lattice of electrons. In the lower part of Fig. 2 we plot the number of single-particle states as a function of $`n_e`$ for this system. Each of the discrete levels is plotted at the density that corresponds to electron number for filling all the levels up to that particular level (“closed-shell” configuration ) and the height is the degeneracy of the level without the spin factor of two. It is clear that the densities where the canonical gap has a jump are the ones that correspond to the closed-shell configurations. In a closed-shell configuration, the occupation of levels is mainly driven by the kinetic energy. The cost required to occupy higher energy states exceeds the gain due to the increased interaction. Moreover, a careful examination of the energy gain due to pairing reveals two distinct sources, a Hartree-like term, and the explicit BCS pairing term. The latter is small compared to the former, so the loss in energy reduction due to less mixing of states is indeed quite small. The Hartree-like term continues to play a role, however, which is why the value of the gap in the closed shell configurations is approximately equal to half the level spacing (i.e. one would have expected an additional pairing energy). The same physics occurs within the grand canonical ensemble , though the discontinuity is the best these equations can do to account for the closed shell configurations. Different level structures result in different shell structures in the gap. In Fig. 3 we show the number of states for $`N=8^3=512`$ sites with PBC for SC, FCC (face-centred cubic) and BCC (body-centred cubic) lattices, as a function of single-particle energy for the entire band. The one for SC (top frame) for negative energy is the same as shown in Fig. 2, except now it is plotted vs. energy so that the level spacings are clearly visible; these in turn determine the height of the jumps in the canonical gap. In fact for such a small system there are only two distinct level spacings in SC. This is why in the gap shown in Fig. 2, there are only two anomalously high values for the gap. In FCC (middle frame) there is no particle-hole symmetry and the degeneracy is more concentrated near the top of the band. Compared with SC, the jumps at closed shell configurations will be more enhanced by the larger level spacings and more frequent for smaller density; in addition the gap for open shells will be smaller (on average) because of less degeneracies. 
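These degeneracy patterns can be enumerated directly; a sketch reusing `sc_levels` above for the $`N=8^3`$ SC lattice of Fig. 3 (top frame), which also confirms the statement that this system has only two distinct level spacings:

```python
lev = sc_levels(8)                        # N = 8^3 = 512 sites with PBC
eps, deg = np.unique(np.round(lev, 9), return_counts=True)
print("distinct level spacings:", np.unique(np.round(np.diff(eps), 9)))
Ne_closed = 2 * np.cumsum(deg)            # closed-shell ("magic") electron numbers
for e, d, Ne in zip(eps, deg, Ne_closed):
    print(f"eps/t = {e:+8.4f}   degeneracy = {d:4d}   closed-shell N_e = {Ne}")
```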
The BCC (bottom frame) has particle-hole symmetry as in SC, but the degeneracy is concentrated around zero energy. As in FCC there will be jumps more frequently at smaller densities, but near half filling the gap will be continuous, with the Fermi level at zero energy. The level structure also depends on the boundary condition. In Fig. 4 we illustrate this for a small 2D system, where we compare PBC and OBC. In Fig. 4 the canonical gap $`\Delta_{N_e}`$ is plotted (for all densities) with crosses (PBC) and squares (OBC), and the grand canonical gap $`\Delta_0`$ is shown with solid (PBC) and dashed (OBC) curves, for $`|U|/t=2`$. The SC lattice in 2D with PBC has a large degeneracy (a singularity in the bulk DOS) at zero single-particle energy. With OBC a relatively high degeneracy remains at zero energy, but for nonzero energy there are more levels with lower degeneracies, because translational symmetry is absent. For $`N=64`$ most of the levels are doubly degenerate, while some have no degeneracy. This is why, at smaller density, the canonical gap with OBC in Fig. 4 jumps more frequently (often alternating between values of $`N_e`$ that are multiples and non-multiples of four) and with smaller height than with PBC. We note again that the canonical gap for even $`N_e`$ for open shells and that for odd $`N_e`$ look symmetric about the x-axis (for both PBC and OBC). We can also see the shell effects for a fixed electron density by varying the lattice size. In Fig. 5 the canonical gap $`\Delta_{N_e}`$ is shown as a function of $`1/N`$ ($`N\equiv N_s^2`$) for a 2D SC lattice with PBC at quarter filling, $`n_e=0.5`$ (circles), for strong coupling (upper frame) and weak coupling (bottom frame). The grand canonical gap in the bulk limit is indicated with a solid circle on the ordinate. In strong coupling the gap hardly depends on the number of sites. In weak coupling the gap exhibits a strong size dependence, as can be seen for $`|U|/t=1`$. For $`N_s=10`$, 14, 18, 22 and 26, quarter filling corresponds to a closed-shell configuration, and this can be seen clearly for $`N_s=14`$, 18 and 22 as the big jumps, which reflect the level spacing in each case. In contrast, the shell at the Fermi level is open for $`N_s=8,12,16,20,24`$. Interestingly, the gap for open shells also changes as a function of size in a non-smooth way. By far, however, the transition to the bulk regime is dominated by the oscillations of the magnitude of the gap between open- and closed-shell configurations. In summary, we have examined the tunneling gap for three-dimensional ultrasmall superconducting grains as a function of electron density, coupling strength, and system size. In weak coupling, shell effects are particularly prominent, and should be observable in very clean grains at low temperature. An ideal experimental arrangement would allow one to vary the electron density over a wide range. In this way one could observe the large modulation of the gap and identify “magic numbers” of electrons corresponding to the electron densities with anomalously large gap. However, in practice we anticipate that through the use of a gate electrode one can vary the electron density only by a small amount (though large enough to see even/odd effects). Hence one will have to rely on ion-implanting a distribution of grain sizes, and thus make use of Fig. 5 to correlate gaps of different magnitude with different grain sizes.
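The role played by the single-particle level structure in the PBC/OBC comparison discussed above can be made concrete with a short sketch that counts level degeneracies; the two dispersions below are the standard tight-binding results for a periodic and an open square lattice, and the function names are ours.

```python
import numpy as np
from collections import Counter

def levels_2d(L, t=1.0, bc="pbc"):
    """Single-particle levels of an LxL square lattice.
    PBC: eps = -2t(cos kx + cos ky), k = 2*pi*m/L, m = 0..L-1.
    OBC: standing-wave momenta q = pi*m/(L+1), m = 1..L."""
    if bc == "pbc":
        k = 2.0 * np.pi * np.arange(L) / L
    else:
        k = np.pi * np.arange(1, L + 1) / (L + 1)
    e = -2.0 * t * np.cos(k)
    return np.add.outer(e, e).ravel()

def degeneracies(eps, decimals=10):
    """Count the degeneracy of each distinct level (spin factor excluded)."""
    return Counter(np.round(eps, decimals))

for bc in ("pbc", "obc"):
    deg = degeneracies(levels_2d(8, bc=bc))   # N = 8x8 = 64 sites, as in Fig. 4
    print(bc, "distinct levels:", len(deg),
          "max degeneracy:", max(deg.values()))
```

Running this reproduces the qualitative statement in the text: with OBC there are more distinct levels, most only doubly degenerate, while a high degeneracy survives at zero energy.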
A systematic search should yield grain sizes whose electron number lies near a “magic number”, so that tunneling a handful of electrons (one by one) onto the sample, controlled by a gate electrode, will allow one to observe large changes in the gap, as illustrated in Fig. 2. We thank Allen Goldman, Boldizsár Jankó, and Al Meldrum for enlightening discussions on possible experimental methods. Calculations were performed on the 44-node SGI parallel processor at the University of Alberta. This research was supported by the Avadh Bhatia Fellowship and by the Natural Sciences and Engineering Research Council of Canada and the Canadian Institute for Advanced Research.
# Photometric Redshifts in lensing clusters: Identification and Calibration of faint high-𝑧 galaxies

## 1. Introduction

Lensing clusters can be used as gravitational telescopes to build up and to study an independent sample of high-$`z`$ galaxies, in order to complement the large samples obtained in field surveys. The amplification close to the critical lines is typically $`\Delta m\approx 2`$ to 3 mags, and it is still $`\sim 1`$ mag at 1’ from the cluster center (see Figure 1). The signal/noise ratio in spectra of amplified sources and the detection fluxes are improved beyond the limits of conventional techniques, whatever the wavelength used for this exercise. In particular, the amplification properties have been successfully used in the ultra-deep MIR survey of A2390 (Altieri et al. 1999) and the SCUBA cluster lens survey (Smail et al. 1998; Blain et al. 1999). We discuss here the systematic use of photometric redshifts in lensing clusters to identify high-$`z`$ sources. This is the basis of a large collaboration program presently going on, involving different European institutions, aiming to perform the spectroscopic follow-up of high-$`z`$ candidates selected from visible and near-IR photometry. The first lensed galaxy confirmed at $`z\gtrsim 2`$ was the spectacular blue arc in Cl2244-02 (Mellier et al. 1991). More recent examples of highly magnified galaxies, identified either purposely or serendipitously, strongly encourage this approach: the star-forming source $`\#384`$ in A2218 at z=2.51 (Ebbels et al. 1996); the luminous z=2.7 arc behind the EMSS cluster MS1512+36 (Yee et al. 1996; Seitz et al. 1998); three $`z\sim 4`$ galaxies in Cl0939+47 (Trager et al. 1997); a z=4.92 system in Cl1358+62 (Franx et al. 1997; Soifer et al. 1998); and the two red galaxies at $`z\approx 4`$ in A2390 (Frye & Broadhurst 1998; Pelló et al. 1999).

## 2. Photometric Redshifts

The method used here to compute photometric redshifts (hereafter $`z_{phot}`$) is a standard SED fitting procedure, according to:

$$\chi^2(z)=\sum_{i=1}^{N_{filters}}\left(\frac{F_i[Observed]-b\times F_i[Template](z)}{\sigma_i}\right)^2$$ (1)

where $`F_i[Observed]`$, $`F_i[Template]`$ and $`\sigma_i`$ are the observed and template fluxes and their uncertainty in filter i, respectively, and b is a normalisation constant. It was originally developed by Miralles (1998) (see also Miralles & Pelló 1998), and a new version of this code, called hyperz, is presently being developed (Bolzonella et al., in preparation). The set of templates includes mainly spectra from the Bruzual & Charlot evolutionary code (GISSEL98, Bruzual & Charlot 1993), as well as a set of empirical SEDs compiled by Coleman, Wu and Weedman (1980) to represent the local population of galaxies. The synthetic database derived from Bruzual & Charlot includes 255 spectra, distributed into 5 different star-formation regimes (51 different ages for the stellar population, all of them with solar metallicity): a burst of 0.1 Gyr, a constant star-formation rate, and 3 $`\mu`$ models (exponentially decaying SFR) with characteristic times of star formation chosen to match the present-day sequence of E, Sa and Sc galaxies. The reddening law is taken from Calzetti (1999), but 4 other laws are also included in our code. The normal setting for $`A_v`$ ranges between 0 and 0.5 magnitudes. Flux decrements in the Lyman forest are computed according to Giallongo & Cristiani (1990) or Madau (1995), both of them giving similar results.
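A minimal sketch of the $`\chi^2`$ minimisation of eq. (1); it assumes the template fluxes have already been redshifted, attenuated and convolved with the filter passbands (the step performed internally by hyperz), and uses the analytic optimum for the normalisation b. Names and array shapes are ours, for illustration only.

```python
import numpy as np

def photometric_redshift(f_obs, sigma, z_grid, templates):
    """f_obs, sigma: (N_filters,) observed fluxes and their errors.
    templates: (N_templates, N_z, N_filters) model fluxes on the z_grid.
    Returns the redshift, template index and chi^2 minimising eq. (1)."""
    w = 1.0 / sigma**2
    # normalisation b minimising chi^2 for each (template, z), analytically:
    # b = sum(w f T) / sum(w T^2)
    b = (templates * f_obs * w).sum(-1) / (templates**2 * w).sum(-1)
    chi2 = (w * (f_obs - b[..., None] * templates) ** 2).sum(-1)
    it, iz = np.unravel_index(np.argmin(chi2), chi2.shape)
    return z_grid[iz], it, chi2.min()
```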
When applying hyperz to the spectroscopic samples available on the HDF, the uncertainties are typically $`\delta z/(1+z)\sim 0.1`$.

## 3. Identification of high-$`z`$ sources

High-$`z`$ lensed sources are selected close to the appropriate critical lines (see Figure 1), with $`z_{phot}\gtrsim 2`$. Figure 2 displays the $`z_{phot}`$ distribution of arclets in two well-known lensing clusters, where the samples of arclets have been selected according to different criteria. In all cases, $`z_{phot}`$ is computed on the basis of a photometric survey including near-IR data. The method is restricted to lensing clusters whose mass distribution is highly constrained by multiple images (revealed by HST or ground-based multicolor images), where the amplification uncertainties are typically $`\Delta m_{lensing}<0.3`$ magnitudes. Such clusters with well-constrained mass distributions make it possible to recover precisely the properties of lensed galaxies (morphology, magnification factor). Highly magnified sources are presently the only way to access the dynamical properties of galaxies at $`z\sim 2`$, through 2D spectroscopy, at a spatial resolution of $`\sim 1`$ kpc. The two multiple images at the same $`z\approx 4`$ observed behind A2390 are an example of these reconstruction capabilities (Pelló et al. 1999). Cluster lenses can be used advantageously to determine the redshift distribution up to the faintest levels through magnified sources (Figure 2). It is also a natural way to search for primeval galaxies, in order to constrain the scenarios of galaxy formation.

## 4. Photometric Redshifts compared to Spectroscopic and Lens Inversion results

For a subsample of spectroscopically confirmed objects, we have tested the photometric redshift accuracy as a function of the relevant parameters (SFR, reddening, age and metallicity of the stellar population). We have also cross-checked the consistency between the photometric, the spectroscopic and the lensing redshifts obtained from inversion methods (Ebbels et al. 1998). The agreement between the three methods is good up to at least $`z\lesssim 1.5`$. For higher redshifts, the results for the most amplified sources are promising, but an enlarged spectroscopic sample is urgently needed. The comparison between the spectroscopic and the lensing redshifts has been studied in the field of A2218 (Ebbels et al. 1998), and all the present results seem to follow this trend at least to $`z\lesssim 1.5`$. Figure 4 displays the difference between $`z_{phot}`$ and the lens redshift for a subsample of 98 arclets in the core of A2390, selected according to morphological criteria (a minimum elongation and the right orientation are requested). According to these results, about $`60\%`$ of the sample have $`|\Delta z|\leq 0.25`$. Most of the discrepancy corresponds to sources with $`z\gtrsim 2`$. A general trend exists for high-$`z`$ images, which are not correctly identified by inversion techniques as compared to $`z_{phot}`$. This behaviour is expected as a result of the relatively low sensitivity to $`z`$ of the inversion method at high $`z`$. Taking into account that $`z_{phot}`$ and lensing inversion techniques produce independent probability distributions for amplified sources, combining the two methods provides an alternative way to determine the redshift distribution of high-$`z`$ sources. Figure 3 shows an example for 2 MIR-selected sources in A2390.
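Operationally, combining the two independent methods as described above amounts to multiplying their probability distributions and renormalising; a minimal sketch, assuming both are tabulated on the same redshift grid:

```python
import numpy as np

def combine_pz(z, p_phot, p_lens):
    """Combined redshift distribution from two independent estimates,
    both sampled on the common grid z (renormalised to unit sum)."""
    p = p_phot * p_lens
    return p / p.sum()
```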
## 5. Conclusions and Future Developments

The selection of high-$`z`$ candidates in cluster lenses using a $`z_{phot}`$ approach is strongly supported by the present results. The efficiency is increased when a large wavelength range, including the IR bands, is used to compute $`z_{phot}`$. For most statistical purposes, $`z_{phot}`$ should be accurate enough to discuss the properties of these extremely distant galaxies. Conversely, lensing clusters could be used as a tool to check photometric redshifts up to the faintest limits, through the spectroscopic confirmation of $`z_{phot}`$ for such amplified sources. An ultra-deep photometric survey of selected cluster lenses is urgently needed to probe the distant Universe, and this could be a well-defined program for the ACS camera on HST.

### Acknowledgments.

We are grateful to G. Bruzual, S. Charlot, Y. Mellier, B. Fort, R.S. Ellis, J.F. Le Borgne and M. Dantel-Fort for useful discussions on this particular technique. Part of this work was supported by the TMR Lensnet ERBFMRXCT97-0172 (http://www.ast.cam.ac.uk/IoA/lensnet).

## References

Altieri, B., et al. 1999, A&A, 343, L65
Blain, A. W., Kneib, J.-P., Ivison, R. J., Smail, I. 1999, ApJ, 512, L87
Bruzual, G., Charlot, S. 1993, ApJ, 405, 538
Calzetti, D. 1999, Ringberg Workshop on Ultraluminous Galaxies, September 20-26, 1998, Kluwer Proceedings (astro-ph/9902107)
Coleman, D.G., Wu, C.C., Weedman, D.W. 1980, ApJS, 43, 393
Ebbels, T.M.D., et al. 1996, MNRAS, 281, L75
Ebbels, T.M.D., et al. 1998, MNRAS, 295, 75
Franx, M., et al. 1997, ApJ, 486, 75
Frye, B., Broadhurst, T. 1998, ApJ, 499, 115
Giallongo, E., Cristiani, S. 1990, MNRAS, 247, 696
Madau, P. 1995, ApJ, 441, 18
Mellier, Y., et al. 1991, ApJ, 380, 334
Miralles, J. M. 1998, PhD thesis, Université Paul Sabatier
Miralles, J. M., Pelló, R. 1998, ApJ, submitted (astro-ph/9801062)
Pelló, R., et al. 1999, A&A, 346, 359 (astro-ph/9810390)
Seitz, S., et al. 1998, MNRAS, 298, 945
Smail, I., Ivison, R. J., Blain, A. W., Kneib, J.-P. 1998, AAS, 192, 4813
Soifer, B.T., et al. 1998, ApJ, 501, 171
Trager, S. C., et al. 1997, ApJ, 485, 92
Yee, H.K.C., et al. 1996, AJ, 111, 1783
# Log-periodic power law bubbles in Latin-American and Asian markets and correlated anti-bubbles in Western stock markets: An empirical study

## 1 Introduction

A series of works has documented a robust and universal signature preceding large crashes occurring in major financial stock markets, namely an accelerating price increase decorated by large-scale log-periodic oscillations culminating close to a critical point [Sornette et al., 1996, Feigenbaum and Freund, 1996, Sornette and Johansen, 1997, Johansen and Sornette, 1998, Johansen, 1997, Sornette and Johansen, 1998, Feigenbaum and Freund, 1998, Gluzman and Yukalov, 1998, Vandewalle et al., 1998a, Vandewalle et al., 1998b, Johansen and Sornette, 1999a, Johansen and Sornette, 1999b, Johansen et al., 2000, Johansen et al., 1999c, Drozdz et al., 1999]. Specifically, in the simplest form, the index $`I(t)`$ can be represented by the following time dependence:

$$I(t)=A+B(t_c-t)^z+C(t_c-t)^z\cos\left(\omega\log(t_c-t)-\varphi\right),$$ (1)

where $`A`$ is the terminal price at the critical time $`t_c`$, the exponent $`0<z<1`$ describes an acceleration, and $`\omega`$ and $`\varphi`$ are respectively the angular frequency of the log-periodic oscillations and their phase (or time unit). The log-periodic oscillations, i.e., periodic in the variable $`\log(t_c-t)`$, are the hallmark of a discrete scale invariance [Sornette, 1998], since the argument of the cosine is reproduced at each time $`t_n`$ converging to $`t_c`$ according to a geometrical time series $`t_c-t_n\propto\lambda^{-n}`$, where $`\lambda=e^{2\pi/\omega}`$. The previously reported cases well described by eq. (1) comprise the Oct. 1929 US crash, the Oct. 1987 world market crash, the Oct. 1997 Hong-Kong crash, the Aug. 1998 global market events, the 1985 Forex event on the US dollar, the correction of the US dollar against the Canadian dollar and the Japanese Yen starting in Aug. 1998, as well as the bubble on the Russian market and its ensuing collapse in June 1997 [Johansen et al., 1999c]. Symmetrically, “anti-bubbles” with decelerating market devaluations following all-time highs have also been found to carry strong log-periodic structures [Johansen and Sornette, 1999d], represented by eq. (1) with $`t_c-t`$ changed into $`t-t_c`$:

$$\log\left(I(t)\right)=A+B(t-t_c)^z+C(t-t_c)^z\cos\left(\omega\log(t-t_c)-\varphi\right).$$ (2)

The use of the logarithm of the index instead of the index as in (1) has been discussed in [Sornette and Johansen, 1997, Johansen and Sornette, 1999a, Johansen et al., 1999c] and is related to the duration of the bubble/anti-bubble as well as to the dynamics controlling the amplitude of the collapse. A quite remarkably strong example of such an anti-bubble is given by the Japanese Nikkei stock index from 1990 to the present, as well as by the Gold future prices after 1980, both after their all-time highs. For the Nikkei, a theoretical formulation [Johansen and Sornette, 1999d] allowed us to issue a quantitative prediction in early Jan. 1999 (when the Nikkei was at its low) that the index would exhibit a recovery over 1999 and 2000 [Johansen and Sornette, 1999d, Johansen and Sornette, 1999e].
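A minimal sketch of eq. (1) and a naive least-squares fit; the actual fits in the papers cited above use more careful multi-start nonlinear optimisation, since the $`\chi^2`$ landscape in $`(t_c,z,\omega)`$ has many local minima. Names and starting values below are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def lppl(t, tc, z, A, B, C, omega, phi):
    """Log-periodic power law of eq. (1); valid for t < tc only."""
    dt = tc - t
    return A + B * dt**z + C * dt**z * np.cos(omega * np.log(dt) - phi)

def fit_bubble(t, index, tc0):
    """tc0 is a starting guess for t_c; it must lie beyond the last data point."""
    p0 = [tc0, 0.5, index[-1], -1.0, 0.1, 6.0, 0.0]   # rough starting point
    popt, _ = curve_fit(lppl, t, index, p0=p0, maxfev=20000)
    return popt

# The anti-bubble form of eq. (2) is identical with dt = t - tc (so t > tc)
# and the logarithm of the index on the left-hand side.
```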
The hypothesis to rationalize these empirical facts, first proposed in [Sornette et al., 1996], is that stock market crashes are caused by the slow buildup of long-range correlations between traders, leading to the expansion of a speculative bubble that may become unstable at a critical time and lead to a crash or to a drastic change of market regime. Bubbles are considered to be natural occurrences of the dynamics of stock markets, as argued persuasively by Keynes [Keynes, 1964] and illustrated intuitively in classroom experiments [Ball and Holt, 1998]. It is possible to be more quantitative and construct a rational expectation model of bubbles and crashes based on Blanchard’s model [Blanchard, 1979], which has two main components: (1) we assume that a crash may be caused by local self-reinforcing imitation processes between noise traders, which can be quantified within the framework of critical phenomena developed in the physical sciences, and (2) we allow for a remuneration of the risk of a crash by a higher rate of growth of the bubble, which reflects the fact that the crash is not a certain deterministic outcome of the bubble; as a consequence, it remains rational for traders to remain invested provided they are suitably compensated. The bubble price is then completely controlled by the crash hazard rate, and we have proposed that its acceleration and its log-periodic structures are the hallmark of a discrete scale invariance, appearing as a result of self-organising interactions between traders [Sornette et al., 1996, Johansen et al., 2000, Johansen et al., 1999c]. These empirical facts have until now been restricted to the major financial markets of the world (WMFMs), i.e., the stock markets of Wall Street, Tokyo and Hong Kong, as well as the foreign exchange market (FOREX) and the Gold market in the seventies and early eighties. Recently, it was established [Johansen et al., 1999c] that the Russian stock market in $`[1996.2-1997.6]`$ exhibited an extended bubble followed by a (relatively slow but) large crash, which had strong characteristics of log-periodicity decorating a power law acceleration of the index, similar to those found in the WMFMs [Johansen and Sornette, 1999a]. This raises the question whether such behavior may be found in emerging markets in general, or whether the Russian case is unique due to its rather special characteristics [Intriligator, 1998]. The purpose of the present analysis is twofold. The main objective is to answer the question whether log-periodic power laws can be applied to speculative bubbles on emerging markets as successfully as they have been applied to the WMFMs. This is done by analyzing a range of emerging stock markets using the same tools as in the previous analysis of the WMFMs, as well as comparable time scales. The second objective is to illustrate on a qualitative level, using log-periodic signatures, that the smaller Western stock markets are strongly influenced by the leading trends on Wall Street. Furthermore, we will show quantitatively that these smaller stock markets can “phase lock” (in a weak sense) not only because of the overall influence of Wall Street but also independently of the current trends on Wall Street. The methodology we adopt is the one used in our previous works on the WMFMs, which consists in a combination of parametric fits with formulas like (1) and of non-parametric log-frequency analysis [Johansen and Sornette, 1999a, Johansen and Sornette, 1999b, Johansen et al., 2000, Johansen et al., 1999c].
We have established the reliability of this approach by extensive numerical tests on synthetic data. The use of the same method will allow us to test the hypothesis that emerging markets exhibit bubbles and crashes with log-periodic signatures similar to those in the WMFMs. We stress from the beginning that the results obtained on the emerging markets analysed here do not carry the same robustness as those obtained for the WMFMs, with respect to both the identification of the bubble and the values obtained for the exponent $`z`$ and the frequency $`\omega`$ of the log-periodic oscillations. We expect the technical difficulties in maintaining a high-quality stock market index on these smaller emerging markets to be in part responsible for this. More important is presumably the fact that, due to their smaller size, these emerging markets are strongly influenced by events not directly related to the economy and stock market of that particular country. This also explains the fact that the life-times of the bubbles identified on these emerging markets are in general somewhat shorter than those of the bubbles and anti-bubbles previously identified on the WMFMs. Fundamentally, this is related to the question of over which time scales a given market can be regarded as a closed system to a good approximation, and connects to the question of finite-size effects, a question that has been much studied in relation to critical phenomena in physical systems [Cardy, 1998].

## 2 Emerging Markets

### 2.1 Speculative bubbles

Emerging markets are often the focus of interest and also often exhibit large financial crises [Lowell et al., 1998]. The story of financial bubbles and crashes has repeated itself over the centuries and in many different locations since the famous tulip bubble of 1636 in Amsterdam, almost without any alteration in its main global characteristics [Galbraith, 1997, Montroll and Badger, 1974]:

1. The bubble starts smoothly with some increasing production and sales (or demand for some commodity), in an otherwise relatively optimistic market.
2. The interest in investments with good potential gains then leads to increasing investments, possibly with leverage coming from novel sources, often from international investors. This leads to price appreciation.
3. This in turn attracts less sophisticated investors; in addition, leveraging is further developed with small down payments (small margins or binders), which leads to the demand for stock rising faster than the rate at which real money is put into the market.
4. At this stage, the behavior of the market becomes weakly coupled or practically uncoupled from real wealth (industrial and service) production.
5. As prices skyrocket, the number of new investors entering the speculative market decreases and the market enters a phase of larger nervousness, until the point when the instability is revealed and the market collapses.

This scenario applies essentially to all market crashes, including old ones such as Oct. 1929 on Wall Street, for which the US market was considered at that time to be an interesting “emerging” market with good investment potential for national as well as international investors. The robustness of this scenario is presumably deeply rooted in investor psychology and involves a combination of imitative/herding behavior and greediness (for the development of the speculative bubble) and over-reaction to bad news in periods of instability.
### 2.2 Classification of markets

The commonalities recalled above do not imply that different markets exhibit the same price trajectories. There can be strong differences due to local constraints, such as cash flow restrictions, government control and so on. In our analysis of several emerging markets, we find three main classes.

#### 2.2.1 Latin-American markets

The Latin-American stock markets, which we analyze in detail below, seem to display features reminiscent of the largest financial markets, however with much larger fluctuations in the values obtained for the exponent $`z`$ and the log-angular frequency $`\omega`$. In the next sections, we will see to what extent this similarity can be quantified. Specifically, the accelerating log-periodic power law (1) will be fitted to the various stock market data preceding large crashes as well as large decreases. We do not posit that a crash has to occur suddenly, only that it marks the end of an accelerating bullish period and the beginning of a bearish regime culminating in a significant drop.

#### 2.2.2 Asian tigers

The stock markets of the Asian Tigers, specifically Korea, Malaysia and Thailand, as well as those of the Philippines and Indonesia, also display approximately accelerating power law bubbles and subsequent crashes. However, the acceleration accompanying the observed bubbles in these markets often seems incompatible with the requirement of either $`0<z<1`$ or, more importantly, that of a real power law with $`t<t_c`$ in eq. (1). This is because the optimisation algorithm kept insisting on a $`t_c`$ smaller than the last data point, thus causing a floating-point error. For the smaller Asian stock markets studied here this problem could be cured by working on the logarithm of the stock market index instead, with the exception of the Korean stock market and the 1997 crash in Indonesia. Again we mention that, depending on the price dynamics ending the bubble, either the index or the logarithm of the index turns out to be the relevant observable quantifying the acceleration of the bubble [Johansen and Sornette, 1999a, Johansen et al., 1999c]. Naturally, the nature of the log-periodic oscillations does not depend on which observable is used. As an additional example of the difference between the WMFMs and the stock markets of Indonesia, Korea, Malaysia, the Philippines and Thailand, we find that, similarly to what is seen for the Latin-American markets, the values obtained for the exponent $`z`$ and the log-angular frequency $`\omega`$ from the fitting with eq. (1) fluctuate considerably compared to the WMFMs, as reported in table 4 and [Johansen and Sornette, 1999a]. A recent analysis shows that the Asian stock returns exhibit characteristics of bubbles, which are however incompatible in detail with the predictions of the model of rational speculative bubbles [Chan et al., 1998]. This suggests that a different formulation than simply using eq. (1) is needed in order to capture the trends displayed by these South-East and East Asian stock markets prior to large corrections and crashes. In this respect, we note that neither the Korean stock market nor the Indonesian crash of July 1997 could be shown to display bubbles following eq. (1).

#### 2.2.3 East-European stock markets

The East-European stock markets seem to follow a completely different logic from their larger Western counterparts, and their indices do not resemble those of the other markets.
In particular, we find that they follow neither power law accelerations nor log-periodic patterns, though large crashes certainly occur.

### 2.3 Latin-American markets

#### 2.3.1 Identification of bubbles

In figures 2 to 6, the evolution of six Latin-American stock market indices (Argentina, Brazil, Chile, Mexico, Peru and Venezuela) is shown as a function of time from early in this decade to Feb. 1999. We first define a bubble as a period of time going from a pronounced minimum to a large maximum by a prolonged price acceleration, followed by a crash or a large decrease represented by a bear market. As for the WMFMs, such a bubble is defined unambiguously by identifying its end with the date $`t_{max}`$ where the highest value of the index is reached prior to the crash/decrease. For the bubbles prior to the largest crashes on the WMFMs, the beginning of a bubble is clearly identified as always coinciding with the date of the lowest value of the index prior to the change in trend. However, this identification is not as straightforward for the Latin-American and smaller Asian indices analyzed here. Hence, in approximately half the cases, the date of the first data point used in defining the beginning of the bubble had to be moved up and the bubble had to be truncated in order to obtain fits with non-pathological values for $`z`$ and $`\omega`$. This may well be an artifact stemming from the restrictions in the fitting imposed by using a single cosine as the periodic function in eq. (1). We recall that the exponent $`z`$ is expected to lie between zero and one, and that it should not be too close to either zero or one: too small a $`z`$ implies a flat bubble with a very sudden acceleration at the end, while too large a $`z`$ corresponds to an almost linear, non-accelerating bubble. The angular frequency $`\omega`$ of the log-periodic oscillations must also not be too small or too large. If it is too small, less than one oscillation occurs over the whole interval and the log-periodic oscillation has little meaning. If it is too large, the oscillations are too numerous and they start to fit the high-frequency noise. From the six stock market indices, we have identified by eye four Argentinian bubbles, one Brazilian bubble, two Chilean bubbles, two Mexican bubbles, two Peruvian bubbles and a single Venezuelan bubble, each with a subsequent large crash/decrease, as shown in figures 2 to 6. Before the reader starts to argue that our procedure is rather arbitrary and that many other bubbles can be seen on the figures, we stress that the time scales considered should be comparable with those of the larger crashes analyzed in [Johansen and Sornette, 1999a, Johansen et al., 2000, Johansen et al., 1999c] and not considerably less than one year. This has been achieved in most cases for the bubbles, whereas the life-time of the anti-bubbles seems to be shorter as a rule. Exceptions are the first and second Argentinian bubbles, the second Chilean bubble and the first Mexican, as shown in figures 8, 8, 14 and 16, where the fitted interval is $`\sim 0.7`$ years, except for the second Argentinian bubble, where only $`\sim 0.4`$ years could be fitted. On purpose, we have refrained from analyzing log-periodic structures on smaller scales in order to obtain a good comparison with our previous analysis of the WMFMs. That the time scales on which the bubbles have been identified are in general shorter than for the WMFMs is, as mentioned, not very surprising.
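As a minimal illustration of the identification criterion just described (the function names and the crash-date input are ours, for illustration only):

```python
import numpy as np

def bubble_interval(t, index, t_crash):
    """End of the bubble: date t_max of the highest index value before the
    crash; start: date of the lowest index value preceding the rise.  For
    the markets studied here, the start sometimes had to be moved forward
    by hand, as discussed in the text."""
    before = t < t_crash
    i_max = np.argmax(np.where(before, index, -np.inf))  # end of bubble
    i_min = np.argmin(index[: i_max + 1])                # lowest prior value
    return t[i_min], t[i_max]
```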
#### 2.3.2 Results

In figures 8 to 19, we show the fits of the bubbles indicated in figures 2 to 6, as well as the spectral Lomb periodogram [Flannery et al., 1992] of the difference between the indices and the pure power law, defined as

$$I(t)\to\frac{I(t)-\left[A+B(t_c-t)^z\right]}{C(t_c-t)^z}.$$ (3)

One exception is the second Peruvian bubble, for which a numerically stable fit could not be obtained due to an almost vertical rise at the very end of the bubble. Using the logarithm of the index instead only partly solved this problem, and we have not included this fit in the present paper. If log-periodicity is present in the data as quantified by eq. (1), the residue defined by eq. (3) should be a pure cosine of $`\omega\ln(t_c-t)`$ and a spectral analysis of this variable should give a strong peak around $`\omega`$. For this, we use the Lomb spectral analysis, which corresponds to a harmonic analysis using a series of local fits of a cosine (with a phase) over some user-chosen range of frequencies. The advantage of the Lomb periodogram over a Fast Fourier transform is that the points do not have to be equidistantly sampled, which is the generic case when dealing with power laws. For unevenly sampled data, the Lomb method is superior to FFT-based methods because it weights data on a “per point” basis instead of a “per time interval” basis. Furthermore, the significance level of any frequency component can be estimated quite accurately if the nature of the noise is known. It is clear from simply looking at the figures that the overall quality of these fits is rather good, and both the acceleration and the accelerating oscillations are rather well captured by eq. (1). We let the reader directly appreciate the quality of the fits on the figures. We notice that, notwithstanding their value, the fits do not have the same excellent overall quality as those obtained for the WMFMs and for the Russian stock market [Johansen et al., 1999c]. A plausible interpretation is that we are dealing here with relatively small markets in terms of capitalization and number of investors, for which finite-size effects, in the technical sense given in statistical physics [Cardy, 1998], are expected and may thus blur the signal with systematic distortions and unwanted fluctuations. In this vein, numerical simulations of all (with one single exception [Stauffer and Sornette, 1994]) available microscopic stock market models have shown that simple regular deterministic dynamics is obtained when the limit of a large effective number $`N`$ of traders is taken, while the stock market behavior seems realistically random and complex when only a few hundred traders are simulated [Busshaus and Rieger, 1999, Egenter et al., 1999, Hellthaler, 1995, Kohl, 1997]. In tables 1 and 2, the parameters of the various fits are given, as well as the beginning and ending dates of the bubbles and the size of the crash/correction, defined as

$$\text{drop \%}=\frac{I(t_{max})-I(t_{min})}{I(t_{max})}.$$ (4)

Here, $`t_{min}`$ is defined as the date after the crash/correction where the index achieves its lowest value before a clear new market regime is observed. The duration $`t_{max}-t_{min}`$ of the crash/correction is found to range from a few days (a crash) to a few months (a change of regime). From table 1, we observe that the fluctuations in the parameter values $`z`$ and $`\omega`$ obtained for the 11 Latin-American crashes are considerable.
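For reference, a minimal sketch of the non-parametric test of eq. (3) using the Lomb periodogram described above; scipy's `lombscargle` takes angular frequencies, matching $`\omega`$, and the parameter names are ours (the power-law parameters are assumed to come from a prior fit of eq. (1)).

```python
import numpy as np
from scipy.signal import lombscargle

def logperiodic_residue(t, index, tc, z, A, B, C):
    """Residue of eq. (3) as a function of x = ln(t_c - t); a pure
    log-periodic signal shows up here as a cosine in x."""
    dt = tc - t
    return np.log(dt), (index - (A + B * dt**z)) / (C * dt**z)

def lomb_peak(x, residue, omegas):
    """Angular log-frequency with the highest Lomb power (zero-mean input)."""
    power = lombscargle(x, residue - residue.mean(), omegas)
    return omegas[np.argmax(power)]

# usage sketch:
# x, r = logperiodic_residue(t, index, tc, z, A, B, C)
# omega_best = lomb_peak(x, r, np.linspace(2.0, 15.0, 500))
```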
The lower and upper values for the exponent $`z`$ are $`0.12`$ and $`0.62`$, respectively. For $`\omega`$, the lower and upper values are $`2.9`$ and $`11.4`$, corresponding to a range of $`\lambda`$'s in the interval $`1.8-8.8`$. Removing the two largest values for $`\lambda`$ reduces the fluctuations to $`2.8\pm 1.1`$, which is still much larger than the $`2.5\pm 0.3`$ previously seen on the WMFMs [Johansen and Sornette, 1999a]. Again, we attribute these larger fluctuations to finite-size effects. Lastly, we note that three cases of anti-bubbles could be identified for the Latin-American markets analysed here; see figures 8, 14 and 19 and table 3. Quite remarkably, the first and the last are preceded by a bubble, thus exhibiting a qualitative symmetry around comparable $`t_c`$'s as defined in eqs. (1) and (2).

### 2.4 Asian markets

#### 2.4.1 Identification of bubbles

In figures 21 to 25, the evolution of six Asian stock market indices (Hong-Kong, Indonesia, Korea, Malaysia, Philippines and Thailand) is shown as a function of time from 1990 to Feb. 1999, except for Hong-Kong, which goes back to 1980. From the six stock market indices, we have identified three bubbles on the Hong-Kong stock market, two on the Indonesian, two on the Korean and one each on the Malaysian, Philippine and Thai stock markets, with subsequent crashes/decreases that could be identified by eye, as indicated in figures 21 to 25. Of these, the two Korean bubbles and the second Indonesian could not be quantified using eq. (1). Of the remaining seven which could, all except the Hong-Kong crashes of Oct. 1987 and Oct. 1997 belonged to the same period ending in Jan. 1994, as also found for the Latin-American markets analysed in section 2.3.2, with the exception of Venezuela. As we shall see in section 3, this globally coordinated crash on emerging markets triggered a correlated anti-bubble on the smaller Western stock markets.

#### 2.4.2 Results

In figures 27 to 32, we show the fits of the bubbles indicated in figures 21 to 25, as well as the spectral Lomb periodogram of the difference between the indices and the pure power law, with the exceptions of the Korean stock market and the second Indonesian bubble, which could not be quantified by eq. (1). In tables 4 and 5, the parameters of the various fits are given, as well as the beginning and ending dates of the bubbles and the size of the crash/correction. We again see somewhat larger fluctuations in the values for the exponent $`z`$ and the log-angular frequency $`\omega`$ compared to the WMFMs, as for the Latin-American markets. However, except for the Indonesian and Korean bubbles, the results are surprisingly consistent with what has been obtained for the WMFMs as well as for the Latin-American markets.

## 3 Correlations across Markets

It is well known that the Oct. 1987 crash was an international event, occurring within a few days in all major stock markets [Barro et al., 1989]. It is also often noted that smaller West-European stock markets, as well as other markets around the world, are influenced by dominating trends on Wall Street. This correlation seems to have increased over the years, as can be seen with the naked eye by comparing the start and end intervals of figure 34 (showing several market indices prior to and after the Aug. 1998 turmoil). We observe a clear qualitative strengthening of the correlations between Wall Street and the smaller Western stock markets in this decade.
Specifically, identifying “spikes” in either direction in the two end intervals of figure 34, a stronger correspondence between changes in the various indices is clearly observed in the later period compared to the former. We stress that the suggested dependence between these markets would not necessarily be detected by standard correlation measures, which are averages over long periods of time and detect only a part of the possible dependence structures. What we unravel here corresponds to a “phasing-up” between markets at special times of large moves and/or large volatilities. An example of a decoupling between the West-European stock markets and Wall Street in the first part of this decade comes from the period following the crashes/corrections on most emerging stock markets in early 1994. This rash of crises occurred from January to June 1994 and concerned the currency markets (Mexico, South Africa, Turkey, Venezuela) and the stock markets (Chile, Hungary, India, Indonesia, Malaysia, Philippines, Poland, South Africa, Turkey, Venezuela, Germany, Hong-Kong, Singapore, U.K.) [Lowell et al., 1998]. This period of time is associated with sharply rising U.S. interest rates. Whereas the S&P500 dipped less than $`10`$% and recovered within a few months (see figure 34), the effect was much more profound on smaller Western stock markets worldwide. Surprisingly, the toll on a range of Western countries resembled that of a mini-recession, with decreases between $`18`$% (London) and $`31`$% (Hong Kong) over periods from $`5`$ months (London) to $`13`$ months (Madrid), as summarized in table 6. For each stock market, the decline in the logarithm of the index has been fitted with eq. (2). In figures 36 to 41, we see that the decreases in all the stock markets analyzed can be quantified by eq. (2) as log-periodic anti-bubbles [Johansen and Sornette, 1999d]. Using the second-best fits of the CAC40 and the Swiss indices, we see that the date of the start of the decline is well estimated by the value of $`t_c`$ obtained from the fit. Furthermore, from table 6, we observe that the value of the preferred scaling ratio $`\lambda=e^{2\pi/\omega}`$ is remarkably consistent, $`\lambda\approx 2.0\pm 0.3`$. This comes as a pleasant surprise, considering that the stock markets that have been analyzed belong to three very different geographical regions of the world (Europe, Asia and the Pacific). With respect to the value of the exponent $`z`$, the fluctuations are as usual much larger. However, excluding New Zealand and Hong-Kong (a possible explanation for the very low value $`z\approx 0.03`$ may be the under-representation of trading days in the first part of the data interval due to holidays: the last part of the data, where the deceleration is weaker, is then allowed to dominate, thus underestimating the trend; a somewhat less severe under-sampling was also present in the New Zealand index compared to, e.g., the Australian index), we obtain $`z\approx 0.4\pm 0.1`$, which again is quite reasonable compared to the WMFMs [Johansen et al., 2000, Johansen et al., 1999c]. Furthermore, the amplitudes $`C`$ of the log-periodic oscillations are remarkably similar, with $`C\approx 0.3-0.4`$, except for London ($`0.02`$) and Milan ($`0.05`$), as shown in table 7.

## 4 Conclusions

Log-periodic bubbles followed by large crashes/corrections seem to be a statistically significant feature of Latin-American and Asian stock markets.
Indeed, it seems quite improbable to attribute the results obtained for the Latin-American and smaller Asian stock markets to pure noise-fitting, because of the relatively large number of successful cases ($`18`$) compared to the number of unsuccessful cases ($`4`$), as well as the objective criteria used in identifying them. Furthermore, removing the extreme value of $`\lambda=8.8`$ for one of the Chilean bubbles gives an average of $`\lambda\approx 2.6`$ for the remaining $`17`$ cases, which is very close to the average value found for the world's major financial markets [Johansen and Sornette, 1999a, Johansen and Sornette, 1999b, Johansen et al., 2000, Johansen et al., 1999c]. However, the results obtained for the Latin-American and smaller Asian markets are, as expected, less striking on a one-to-one basis than those obtained for the major financial markets of the world (WMFMs) that we analyzed previously with exactly the same methodology [Johansen and Sornette, 1999a, Johansen and Sornette, 1999b, Johansen et al., 2000, Johansen et al., 1999c]. In this respect, it is quite remarkable that the bubbles prior to the 3 largest crashes on the Hong-Kong stock market have the same log-frequency to within $`\pm 15`$%, quite similar to what has been found for bubbles on Wall Street and the FOREX. One important difference lies in the identification of a bubble. For the WMFMs, the identification of the first and last data points to be used in the fitting to our formulas was straightforward: the last point was chosen as the highest value of the index prior to the crash, and the first point was the lowest value prior to the bubble. The results using these criteria have always been conclusive, and a re-run of the fitting algorithm on a different interval was never necessary. This was not the case for the Latin-American and smaller Asian stock markets, where the first point had to be changed in approximately half the cases. This ambiguity is also reflected in the large fluctuations seen in the parameter values obtained for the meaningful variables $`z`$ and $`\omega`$ (or equivalently $`\lambda=e^{2\pi/\omega}`$). Weaker signatures naturally give larger fluctuations as well as additional sensitivity to truncation of the fitted interval. The cause of the weaker signatures can be (at least) three-fold. The signatures can be truly weaker, or they may appear weaker due to the poorer quality of these smaller indices compared to those of the major stock markets. Another hypothesis is the “finite-size effect” already mentioned, according to which the smaller market size entails larger fluctuations and possible systematic bias. It seems at present difficult to distinguish between these different hypotheses. With respect to the values obtained for the frequency in the best fits of the $`18`$ Latin-American and Asian bubbles that could be quantified using eq. (1), it is rather interesting to see that the fits fall into two rather distinct clusters, one around $`\omega\approx 6`$ and another around $`\omega\approx 11`$, with few values in between, as shown in figure 42. It looks like a frequency doubling, which corresponds to squaring $`\lambda`$, as allowed by the theory of critical phenomena [Sornette, 1998, Saleur and Sornette, 1996]. In the second part of this research, we have tried to argue that, in bullish times, the leading trends on Wall Street tend to dominate the smaller Western stock markets.
However, it was shown by a quantitative analysis that these smaller stock markets can collectively decouple their dynamics from Wall Street. The case we document corresponds to a surprisingly correlated anti-bubble with log-periodic signatures and power law decay, similar to what has been found on longer time scales for the Nikkei and Gold decays [Johansen and Sornette, 1999d]. In fact, the results obtained for the majority of these smaller anti-bubbles, i.e., excluding the small values of the exponent $`z`$ obtained for the New Zealand and Hong Kong stock markets, are quite compatible with what was obtained for the Gold decay, both with respect to the values of the exponent $`z`$ and the preferred scaling ratio $`\lambda`$. This supports the notion that the higher values obtained for $`\omega`$ are presumably due to the more rapid dynamics present in smaller markets, as proposed for the Gold decay in the early eighties [Johansen and Sornette, 1999d]. With respect to the identification of the data intervals used for the smaller anti-bubbles, we stress that it did not suffer from the same problems as for the Latin-American and Asian bubbles: the intervals could be identified directly and unambiguously, prior to and independently of the fitting procedure. We acknowledge stimulating discussions with D. Darcet and the encouragement of D. Stauffer.
# Optical properties of carbon grains: Influence on dynamical models of AGB stars

## 1 Introduction

Asymptotic giant branch (AGB) stars show large amplitude pulsations with periods of about 100 to 1000 days. The pulsation creates strong shock waves in the stellar atmosphere, causing a levitation of the outer layers. This cool and relatively dense environment provides favourable conditions for the formation of molecules and dust grains. Dust grains play an important role for the heavy mass loss, which influences the further evolution of the star. Condensation and evaporation of dust in envelopes of pulsating stars must be treated as a time-dependent process, since the time scales for condensation and evaporation are comparable to variations of the thermodynamic conditions in the stellar envelope. The radiation pressure on newly formed dust grains can enhance or even create shock waves, leading to more or less pronounced discrete dust shells in the expanding circumstellar flow (e.g. Fleischer et al. 1991, 1992; Höfner et al. 1995; Höfner & Dorfi 1997). Since a significant part of the dust grains transferred to interstellar space is produced in the atmospheres of these old luminous stars (Sedlmayr 1994), an understanding of the nature of the mass loss of these long-period variables is crucial for the general understanding of dust in space. Modelling circumstellar envelopes requires knowledge of the absorption properties of the different types of grains over the relevant part of the electromagnetic spectrum. For this the optical properties of the corresponding dust material are needed. Amorphous carbon is a very good candidate for the most common type of carbon grains present in circumstellar envelopes, since the far-infrared data of late-type stars show a spectral index as expected for a very disordered two-dimensional material like amorphous carbon (Huffman 1988). Silicon carbide (SiC) grains seem to be another important component of the dust in circumstellar envelopes. While amorphous carbon could explain the continuum emission, SiC particles could be responsible for the 11.3 $`\mu`$m band observed in many C-rich objects. In this paper, we present self-consistent dynamical models of circumstellar dust shells calculated with selected laboratory amorphous carbon data. Based on these models we have performed radiative transfer calculations for pure amorphous carbon and in some cases also including SiC dust. In Sect. 3 the amorphous carbon data used are described. The influence on the model structure is described in Sect. 4 and the resulting spectral appearance is discussed in Sect. 5.

## 2 Optical properties of dust

The two sets of quantities that are used to describe optical properties of solids are the real and imaginary parts of the complex refractive index $`m=n+ik`$ and the real and imaginary parts of the complex dielectric function (or relative permittivity) $`ϵ=ϵ^{\prime}+iϵ^{\prime\prime}`$. These two sets of quantities are not independent: the complex dielectric function $`ϵ`$ is related to the complex refractive index, $`m`$, by $`ϵ^{\prime}=n^2-k^2`$ and $`ϵ^{\prime\prime}=2nk`$, when the material is assumed to be non-magnetic ($`\mu=\mu_0`$). Reflection and transmission by bulk media are best described using the complex refractive index, $`m`$, whereas absorption and scattering by particles which are small compared with the wavelength are best described by the complex dielectric function, $`ϵ`$.
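For reference, the two descriptions quoted above can be converted into each other with a few lines; a minimal sketch for the non-magnetic case (function names are ours):

```python
import numpy as np

def eps_from_m(n, k):
    """Dielectric function from the refractive index: eps' = n^2 - k^2,
    eps'' = 2 n k."""
    return n**2 - k**2, 2.0 * n * k

def m_from_eps(eps1, eps2):
    """Refractive index from the dielectric function (principal square
    root, so that n > 0 for physical materials)."""
    m = np.sqrt(eps1 + 1j * eps2)
    return m.real, m.imag
```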
The problem of evaluating the expected spectral dependence of extinction for a given grain model (i.e. assumed composition and size distribution) is essentially that of evaluating the extinction efficiency Q<sub>ext</sub>. It is the sum of the corresponding quantities for absorption and scattering: $`\mathrm{Q}_{\mathrm{ext}}=\mathrm{Q}_{\mathrm{abs}}+\mathrm{Q}_{\mathrm{sca}}`$. These efficiencies are functions of two quantities: a dimensionless size parameter $`x=2\pi a/\lambda`$ (where $`a`$ is the grain radius and $`\lambda`$ the wavelength) and a composition parameter, the complex refractive index $`m`$ of the grain material. Q<sub>abs</sub> and Q<sub>sca</sub> can therefore be calculated from the complex refractive index using Mie theory for any assumed grain model. The resulting values of total extinction can be compared with observational data. A limiting case within the Mie theory is the Rayleigh approximation for spherical particles. This approximation is valid when the grains are small compared to the wavelength, $`x=2\pi a/\lambda\ll 1`$, and in the limit of zero phase shift in the particle ($`|m|x\ll 1`$). In the Rayleigh approximation the extinction by a sphere in vacuum is given by:

$$\frac{Q_{ext}}{a}=\frac{8\pi}{\lambda}\,\mathrm{Im}\left\{\frac{m^2-1}{m^2+2}\right\}.$$ (1)

### 2.1 Measuring methods of optical properties

A proper application of Mie theory to experimental data requires that the samples be prepared such that the particles are quite small (usually sub-micrometre), well isolated from one another, and that the total mass of particles is accurately known. In order to obtain single isolated homogeneous particles, the grains are often dispersed in a solid matrix. Small quantities of sample are mixed thoroughly with the powdered matrix material, e.g. KBr or CsI. The matrix is pressed into a pellet which has a bulk transparency in the desired wavelength region. Some of the problems with this technique are that there is a tendency for the sample to clump along the outside rim of the large matrix grains, and that the introduction of a matrix, which has a refractive index different from vacuum, might influence the band shape and profile. This matrix effect can be a problem for comparisons of laboratory measurements with astronomical spectra (Papoular et al. 1998; Mutschke et al. 1999). By measuring the sample on a substrate (e.g. quartz, KBr, Si or NaCl) using e.g. an infrared microscope, the matrix effect can nearly be avoided, since the sample is almost fully surrounded by a gas (e.g. air, Ar or He). But the amount of material in the microscopic aperture remains unknown, which is an important disadvantage of this method. Therefore, these measurements are not quantitative, but they reveal the shape of the spectrum nearly without a matrix effect (Mutschke et al. 1999). A major problem of both methods is clustering of the grain samples, either during the production of the particles or within the matrix or on the substrate. Clustering can cause a dramatic difference in the optical properties (Huffman 1988). A way to avoid this problem is to perform the optical measurements on a polished bulk sample. For the determination of both $`n`$ and $`k`$, two or more measurements on bulk samples are required. This might be done either by a transmission and a reflection measurement, or by two reflectance determinations at different angles or with different polarisations.
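A minimal sketch of the Rayleigh-limit efficiency of eq. (1) above, turning tabulated optical constants $`(n,k)`$ into the extinction efficiency per grain radius (valid only for $`x=2\pi a/\lambda\ll 1`$ and $`|m|x\ll 1`$):

```python
import numpy as np

def q_ext_over_a(lam, n, k):
    """lam, n, k: arrays of wavelength and optical constants.
    Returns Q_ext/a in inverse units of lam (eq. 1)."""
    m2 = (n + 1j * k) ** 2          # m^2 of the grain material
    return (8.0 * np.pi / lam) * np.imag((m2 - 1.0) / (m2 + 2.0))
```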
Since the real part, $`n`$, of the refractive index, $`m`$, is determined by the phase velocity and the imaginary part, $`k`$, by the absorption, a transmission measurement easily fixes $`k`$. The Kramers-Kronig relations can be applied in order to obtain the optical constants from grain measurements. The real part of the refractive index can be expressed as an integral of the imaginary part (see e.g. Bohren & Huffman 1983).

## 3 Carbon grains

While carbon is expected to constitute a major fraction of the circumstellar dust in carbon stars, its exact form is still unclear. Carbon has the unique property that the atoms can form three different types of bonds through sp<sup>1</sup>, sp<sup>2</sup> (graphite) and sp<sup>3</sup> (diamond) hybridization. A number of observations of late-type stars contradict the presence of graphite as the dominant dust type (e.g. Campbell et al. 1976; Sopka et al. 1985; Martin & Rogers 1987; Gürtler et al. 1996). The far-infrared (FIR) data of late-type stars generally show a dust emissivity law $`Q(\lambda)\propto\lambda^{-\beta}`$ with a spectral index of $`\beta\approx 1`$. While graphite grains have a FIR emissivity proportional to $`\lambda^{-2}`$ (Draine & Lee 1984), a $`\lambda^{-1}`$ behaviour can be expected in a very disordered two-dimensional material like amorphous carbon (Huffman 1988; Jäger et al. 1998). Amorphous carbon grains therefore seem to be a very good candidate for the most common type of carbon grains present in circumstellar envelopes. Another possibility could be small diamond grains. Presolar diamond grains have been identified in primitive (unaltered) meteorites (carbonaceous chondrites) and are the most abundant (500 ppm) of the presolar grains discovered so far (Lewis et al. 1987). At least 3% of the total amount of carbon present at the formation of the Solar System was in the form of diamonds (Huss & Lewis 1994). The place of origin of the presolar diamonds is still unknown, but since they can only have formed under reducing conditions, Jørgensen (1988) has suggested C-rich AGB stars as the place of formation of the majority of the presolar diamond grains. It has been suggested by Krüger et al. (1996) that the surface growth processes on carbonaceous seed particles in circumstellar envelopes will take place at sp<sup>3</sup>-bonded carbon atoms rather than at sp<sup>2</sup>-bonded ones, which suggests that the grain material formed in circumstellar envelopes could be amorphous diamond-like carbon. Presolar diamonds extracted from meteorites have a median grain size of about 2 nm (Fraundorf et al. 1989), meaning that each diamond contains a few hundred to a few thousand carbon atoms. The presolar diamonds therefore seem to actually consist of a mixture of diamond (core) and hydrogenated amorphous carbon (surface), having about 0.46 times the volume fraction of pure diamond (Bernatowicz et al. 1990). Several spectra of presolar diamonds from various meteorites have been published (Lewis et al. 1989; Colangeli et al. 1995; Mutschke et al. 1995; Hill et al. 1997; Andersen et al. 1998; Braatz et al., submitted to Meteorit. Planet. Sci.)
and even though a number of artifacts tend to be present in all the spectra, the general trend is that the presolar diamonds have an absorption coefficient that is twice that of pure diamond and a factor of a hundred less than that of the “diamond-like” amorphous carbon of Jäger et al. (1998). There exists a wide variety of possible amorphous carbon grain types, which fall in between the categories of “diamond-like” and “graphite-like” amorphous carbon. We have calculated dynamical models using various laboratory data for amorphous carbon to determine the possible influence of these different grain types on the structure and the wind properties of C-rich AGB star models.

### 3.1 Laboratory measurements of amorphous carbon

Laboratory conditions are far from the actual space conditions where grains are produced or processed, but experiments in which physical and chemical parameters are controlled and monitored do give the option of selecting materials which may match the astronomical observations. When choosing which amorphous carbon data to use, one is faced with the fact that, due to the various processes used in sample preparation, differences often appear between the measurements of various authors. Another major problem is that the optical properties of amorphous carbon are most often obtained by different techniques in different wavelength regions. Extinction measurements on sub-micron-sized particles are the most common technique in the infrared. In the visible and ultraviolet, reflectivity and transmission measurements are often obtained on bulk samples. Bussoletti et al. (1987a) have determined the extinction efficiencies for various types of sub-micron amorphous carbon particles and spectroscopically analysed them in the wavelength range 1000 Å – 300 $`\mu`$m. In their paper they present an updated version of the data already published from 2000 Å to 40 $`\mu`$m (Borghesi et al. 1983, 1985a) and new data obtained in the UV/vis (1000 – 3000 Å) and in the FIR (35 – 300 $`\mu`$m). The sub-micron amorphous carbon grains were obtained by means of two methods: (1) striking an arc between two amorphous carbon electrodes in a controlled Ar atmosphere at different pressures (samples AC1, AC2 and AC3, where the numbers refer to different accumulation distances from the arc discharge); (2) burning hydrocarbons (benzene and xylene) in air at room pressure (samples BE and XY). The smoke was collected on quartz substrates. For the UV/vis spectroscopy the quartz substrates on which the particles had been collected were used directly, while for the IR spectroscopy the dust was scraped from the substrate and embedded in KBr pellets. Bussoletti et al. (1987b) suggest that the extinction efficiencies, $`Q_{\mathrm{ext}}/a`$, for the AC samples should be corrected by a factor of 5 due to an experimental underestimation of the pellet density. This correction gives agreement with the data of Koike et al. (1980). Colangeli et al. (1995) measured the extinction efficiency in the range 40 nm – 2 mm. They produced three different samples: two by arc discharge between amorphous carbon electrodes in Ar and H<sub>2</sub> atmospheres at 10 mbar (samples ACAR and ACH2, respectively) and one by burning benzene in air (sample BE).
The samples were deposited onto different substrates for the UV/vis measurements, while in the IR the samples were prepared both on a substrate and embedded in KBr/CsI pellets, and for the FIR measurements the samples were embedded in polyethylene pellets. These different but overlapping methods made it possible to evaluate the differences resulting from embedding the samples in a matrix rather than having them on a substrate. Colangeli et al. (Colangeli95 (1995)) found that embedding the samples in a matrix introduces a systematic error (the matrix effect), while the spectra obtained for grains deposited onto a substrate did not suffer from any matrix effect detectable within the accuracy available in the experiment. Therefore the FIR data were corrected for the extinction offset introduced by the matrix. Jäger et al. (Jager98 (1998)) produced structurally different carbon materials by pyrolysing cellulose materials at different temperatures (400 °C, 600 °C, 800 °C and 1000 °C), and characterised them in great detail. These materials have increasing sp<sup>2</sup>/sp<sup>3</sup> ratios, making the amorphous carbon pyrolysed at 400 °C the most “diamond-like”, with the lowest sp<sup>2</sup>/sp<sup>3</sup> ratio, while the amorphous carbon pyrolysed at 1000 °C is more “graphite-like”, with the highest sp<sup>2</sup>/sp<sup>3</sup> ratio. The pyrolysed carbon samples were embedded in epoxy resin and the reflectance of the samples was measured in the range 200 nm to 500 $`\mu `$m, making this the first consistent laboratory measurement of amorphous carbon over the whole spectral range relevant for radiative transfer calculations of C-rich AGB stars. From the reflectance spectra the complex refractive index, $`m`$, was derived by the Lorentz oscillator method (see e.g. Bohren & Huffman 1983, Chap. 9). There is a significant difference between the two low temperature (400 °C and 600 °C) and the two high temperature samples (800 °C and 1000 °C); the latter two behave very similarly to glassy carbon. In contrast to grain measurements, the bulk samples of Jäger et al. (Jager98 (1998)) make it possible to separate the influence of the internal structure of the amorphous material from that of the morphology of the carbon grains. These two properties can be separated out thanks to the careful investigation of the internal structures of the four samples and the range of material properties that these four amorphous carbon samples span (from “diamond-like” to “graphite-like”). ### 3.2 Calculated optical properties of amorphous carbon Several authors have used the data of Bussoletti et al. (1987a ) and Colangeli et al. (Colangeli95 (1995)) to obtain the optical constants of amorphous carbon grains. Maron (Maron90 (1990)) used the extinction efficiencies of Bussoletti et al. (1987a ) (sample AC2) to derive the optical constants ($`n`$ and $`k`$), estimating the complex permittivity through a combination of the measured absorption efficiencies, dispersion formulae and the Kramers-Kronig relation. The reason for performing these calculations is that the optical constants are needed for modelling the emission properties of grains containing various allotropic forms of carbon or having different sizes. Maron (Maron90 (1990)) is of the opinion that the differences between the primary extinction efficiencies obtained by Bussoletti et al. (1987a ) and Koike et al.
(Koike80 (1980)) are real, caused by the use of different electrodes (amorphous carbon and graphite, respectively) rather than by an underestimation of the pellet column density as suggested by Bussoletti et al. (1987b ). Therefore he did not introduce the correction suggested by Bussoletti et al. (1987b ). Rouleau & Martin (Rouleau91 (1991)) used the AC2 and BE data from Bussoletti et al. (1987a ) to produce synthetic optical constants ($`n`$ and $`k`$) which satisfy the Kramers-Kronig relations and highlight the effects of assuming various shape distributions and fractal clusters. One of the complications in determining these optical properties of amorphous carbon material was that in the infrared the extinction measurements were done on a sample of sub-micron-sized particles, while in the visible and ultraviolet the optical constants were obtained by measurements of reflectivity and transmission or by electron energy loss spectroscopy on bulk samples. These diverse measurements were used to produce synthetic optical constants which satisfied the Kramers-Kronig relations. Preibisch et al. (Preibisch93 (1993)) used the BE sample from Bussoletti et al. (1987a ) between 0.1–300 $`\mu `$m and the data of Blanco et al. (Blanco91 (1991)) between 40–700 $`\mu `$m, applying the same technique as Rouleau & Martin (Rouleau91 (1991)) for deriving optical constants taking shape and clustering effects into account. Preibisch et al. (Preibisch93 (1993)) extended the available optical constants on the basis of the measurements of Blanco et al. (Blanco91 (1991)) and with these determined the opacities of core-mantle particles with varying mantle thickness and pollution. Zubko et al. (Zubko96 (1996)) used the extinction efficiencies obtained by Colangeli et al. (Colangeli95 (1995)) to derive the optical constants ($`n`$ and $`k`$), again by use of the Kramers-Kronig approach. These data were used to evaluate the possible shapes of the amorphous carbon grains in space and the possible clustering of the particles. In this study we have used the derived optical constants of Maron (Maron90 (1990)), Rouleau & Martin (Rouleau91 (1991)), Preibisch et al. (Preibisch93 (1993)), Zubko et al. (Zubko96 (1996)) and Jäger et al. (Jager98 (1998)); see Table 1 for details. The extinction efficiency data presented in this paper were calculated in the Rayleigh approximation for spheres. ### 3.3 The nature of silicon carbide Thermodynamic equilibrium calculations performed by Friedemann (1969a,b) and Gilman (Gilman69 (1969)) suggested that SiC particles can form in the mass outflow of C-rich AGB stars. The observations performed by Hackwell (Hackwell72 (1972)) and Treffers & Cohen (Treffers74 (1974)) presented the first empirical evidence for the presence of SiC particles in stellar atmospheres. A broad infrared emission feature seen in the spectra of many carbon stars, peaking between 11.0 and 11.5 $`\mu `$m, is attributed to solid SiC particles, and SiC is believed to be a significant constituent of the dust around carbon stars. An ultimate proof of the formation of SiC grains in C-rich stellar atmospheres was the detection of isotopically anomalous SiC grains in primitive meteorites (Bernatowicz et al. Bernatowicz87 (1987)). Based on isotopic measurements of the major and trace elements in the SiC grains and on models of stellar nucleosynthesis, it is established that the majority of the presolar SiC grains have their origin in the atmospheres of late-type C-rich stars (Gallino et al.
Gallino90 (1990), 1994; Hoppe et al. Hoppe94 (1994)). For recent reviews see, e.g., Anders & Zinner (Anders93 (1993)), Ott (Ott93 (1993)) and Hoppe & Ott (Hoppe97 (1997)). Detailed laboratory investigations of the infrared spectrum of SiC have been presented by the following authors: Spitzer et al. (1959a,b) performed thin film measurements on $`\beta `$- and $`\alpha `$-SiC; Stephens (Stephens80 (1980)) measured crystalline $`\beta `$-SiC smokes; Friedemann et al. (Friedemann81 (1981)) measured two commercially available $`\alpha `$-SiC samples; Borghesi et al. (1985b ) investigated three commercially produced $`\alpha `$-SiC and one commercially produced $`\beta `$-SiC sample; Papoular et al. (Papoular98 (1998)) measured two samples of $`\beta `$-SiC powders, one produced by laser pyrolysis and one commercially available; Mutschke et al. (Mutschke99 (1999)) studied 16 different SiC powders which were partly of commercial origin and partly laboratory products (8 $`\alpha `$-SiC and 8 $`\beta `$-SiC); Speck et al. (Speck99 (1999)) made thin film measurements of $`\alpha `$- and $`\beta `$-SiC; and Andersen et al. (Andersen99 (1999)) measured the spectrum of meteoritic SiC. One of the difficulties in interpreting laboratory data lies in disentangling the combination of several effects due to size, shape, physical state (amorphous or crystalline), purity of the sample and possible matrix effects if a matrix is used. There is general agreement that grain size and grain shape have a crucial influence on the absorption feature of SiC particles, as demonstrated in particular by Papoular et al. (Papoular98 (1998)), Andersen et al. (Andersen99 (1999)) and Mutschke et al. (Mutschke99 (1999)). Papoular et al. (Papoular98 (1998)), Mutschke et al. (Mutschke99 (1999)) and Speck et al. (Speck99 (1999)) have shown that the matrix effect does not shift the resonance feature as a whole, as was assumed by Friedemann et al. (Friedemann81 (1981)) and Borghesi et al. (1985b ). While Papoular et al. (Papoular98 (1998)) and Mutschke et al. (Mutschke99 (1999)) find that the profile is not shifted but altered, Speck et al. (Speck99 (1999)) state that the profile is not affected at all, whether or not a matrix is used in the experimental set-up. The influence of the purity of the laboratory samples was mainly studied by Mutschke et al. (Mutschke99 (1999)). Another issue considered is the effect of the crystal type. Silicon carbide shows pronounced polytypism, which means that there exist a number of possible crystal types differing in only one spatial direction. All these polytypes are variants of the same basic structure and can therefore be divided into two basic groups: $`\alpha `$-SiC (the hexagonal polytypes) and $`\beta `$-SiC (the cubic polytype). It was found by Spitzer et al. (1959a,b), Papoular et al. (Papoular98 (1998)), Andersen et al. (Andersen99 (1999)) and Mutschke et al. (Mutschke99 (1999)) that the crystal structure of SiC cannot be determined from IR spectra, because there is no systematic dependence of the band profile on the crystal type; Borghesi et al. (1985b ) and Speck et al. (Speck99 (1999)), however, reached the opposite conclusion. In this paper we have used the average value for bulk SiC reflectance spectra of $`\beta `$-SiC as presented by Mutschke et al.
(Mutschke99 (1999)), with $`ϵ_{\mathrm{\infty }}=6.49`$, $`\omega _{\mathrm{TO}}`$ = 795.4 cm<sup>-1</sup>, $`\omega _\mathrm{p}`$ = 1423.3 cm<sup>-1</sup> and $`\gamma `$ = 10 cm<sup>-1</sup>, to calculate the optical constants $`n`$ and $`k`$, using the one-oscillator model described in Mutschke et al. (Mutschke99 (1999)). The damping constant $`\gamma `$ is an “ad hoc” parameter, which in a perfect crystal reflects the anharmonicity of the potential curve. A damping constant of $`\gamma =10`$ cm<sup>-1</sup> characterises crystals which are not structurally perfect but still far from amorphousness. Since there is no systematic dependence of the band profile on the crystal type in the data of Mutschke et al. (Mutschke99 (1999)), we could just as well have used the data of one of their $`\alpha `$-SiC samples and would have obtained a similar result. The optical constants $`n`$ and $`k`$ were used to calculate the extinction efficiency for small spherical grains in the Rayleigh limit. Spheres are not necessarily the best approximation for the grain shape of SiC particles in C-rich AGB stars compared to, e.g., a continuous distribution of ellipsoids (CDE) as introduced by Bohren & Huffman (Bohren83 (1983)). The general appearance of the feature as well as the peak position will depend on the grain shape; however, common to all grain shapes of SiC is that the feature always falls between the transverse (TO) and the longitudinal (LO) optical phonon modes, the difference being that a spherical grain shape gives rise to a sharper and narrower resonance than other grain shape approximations. ## 4 Dynamical models ### 4.1 Modelling method To obtain the structure of the stellar atmosphere and circumstellar envelope as a function of time we solve the coupled system of radiation hydrodynamics and time-dependent dust formation (cf. Höfner et al. Hoefner95 (1995), Höfner & Dorfi Hoefner97 (1997) and references therein). The gas dynamics is described by the equations of continuity, motion and energy, and the radiation field by the grey moment equations of the radiative transfer equation (including a variable Eddington factor). In contrast to the models presented in Höfner & Dorfi (Hoefner97 (1997)) we use a Planck mean gas absorption coefficient based on detailed molecular data as described in Höfner et al. (Hoefner98 (1998)). Dust formation is treated by the so-called moment method (Gail & Sedlmayr Gail88 (1988); Gauger et al. Gauger90 (1990)). We consider the formation of amorphous carbon in circumstellar envelopes of C-rich AGB stars. The dynamical calculations start with an initial model which represents the full hydrostatic limit case of the grey radiation hydrodynamics equations. It is determined by the following parameters: luminosity $`L_{\ast }`$, mass $`M_{\ast }`$, effective temperature $`T_{\ast }`$ and the elemental abundances. We assume all elemental abundances to be solar except that of carbon, which is specified by an additional parameter, the carbon-to-oxygen ratio $`\epsilon _\mathrm{C}/\epsilon _\mathrm{O}`$. The stellar pulsation is simulated by a variable boundary (piston) which is located beneath the stellar photosphere and moves sinusoidally with a velocity amplitude $`\mathrm{\Delta }u_\mathrm{p}`$ and a period $`P`$. Since the radiative flux is kept constant at the inner boundary throughout the cycle, the luminosity there varies according to $`L_{\mathrm{in}}(t)\propto R_{\mathrm{in}}(t)^2`$.
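As an illustration, the piston boundary condition can be sketched in a few lines of Python; the mean inner radius R0 below is a fiducial value of our own choosing (not a model parameter), while the amplitude and period are those of the models discussed in Sect. 4.3:

```python
import numpy as np

# Minimal sketch of the sinusoidal piston inner boundary (Sect. 4.1).
# R0 is an assumed mean inner-boundary radius, purely illustrative.
P = 650 * 86400.0           # pulsation period [s] (650 d)
du_p = 4.0e5                # velocity amplitude [cm/s] (4 km/s)
R0 = 2.0e13                 # assumed mean inner radius [cm]
L0 = 13000 * 3.846e33       # stellar luminosity [erg/s] (13000 L_sun)

def piston(t):
    """Inner boundary velocity, radius and luminosity at time t [s]."""
    u_in = du_p * np.sin(2.0 * np.pi * t / P)
    # radius follows from integrating the sinusoidal velocity
    R_in = R0 - du_p * P / (2.0 * np.pi) * np.cos(2.0 * np.pi * t / P)
    L_in = L0 * (R_in / R0) ** 2       # L_in(t) proportional to R_in(t)^2
    return u_in, R_in, L_in

t = np.linspace(0.0, 2.0 * P, 200)
u, R, L = piston(t)
print("luminosity varies by %.0f%% over the cycle" % (100 * (L.max() / L.min() - 1)))
```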
### 4.2 Dust opacities The self-consistent modelling of circumstellar dust shells requires the knowledge of the extinction efficiency $`Q_{\mathrm{ext}}`$ of the grains, or rather of the quantity $`Q_{\mathrm{ext}}/a`$, which is independent of $`a`$ in the small particle limit applicable in this context. For the models of long period variables presented in Höfner & Dorfi (Hoefner97 (1997)) and Höfner et al. (Hoefner98 (1998)) a fit formula for the Rosseland mean of $`Q_{\mathrm{ext}}/a`$ derived from the optical constants of Maron (Maron90 (1990)) was used. One important point of this paper is to investigate the direct influence of $`Q_{\mathrm{ext}}/a`$ on the structure and wind properties of the dynamical models. Therefore we have computed Rosseland and Planck mean values of $`Q_{\mathrm{ext}}/a`$ (see Fig. 2) based on various optical constants derived from laboratory experiments (see Sect. 3 for details about the samples). For the dynamical calculations presented here we have selected the following data sets (see Table 1 for a detailed specification): Jäger400 and Jäger1000 (representing the extreme cases), Rouleau (closest to the Maron data used in earlier models but extending to wavelengths below $`1\mu `$m) and Zubko. The data of Preibisch are almost identical to the data of Rouleau. Figure 2 demonstrates that for a given data set the difference between Planck and Rosseland means is relatively small. This is due to the fact that amorphous carbon grains have a continuous opacity with a smooth wavelength dependence (in contrast, the two means may differ by orders of magnitude for gas opacities in the case of molecular line blanketing). ### 4.3 Wind properties All models discussed here are calculated with the same set of stellar parameters, i.e. $`L_{\ast }=13000L_{\mathrm{\odot }}`$, $`M_{\ast }=1.0M_{\mathrm{\odot }}`$, $`T_{\ast }=2700`$ K, $`\epsilon _\mathrm{C}/\epsilon _\mathrm{O}=1.4`$, $`P=650`$ d, $`\mathrm{\Delta }u_\mathrm{p}=4`$ km/s, corresponding to model P13C14U4 in Höfner et al. (Hoefner98 (1998)). The only difference between individual models is the adopted mean dust opacity. Most models have been calculated with Rosseland mean dust opacities to allow a direct comparison with earlier models based on Rosseland means derived from the Maron data (note, however, that all models use Planck mean gas opacities, as discussed in Sect. 4.1). Wind properties like the mass loss rate $`\dot{M}`$ or the time-averaged outflow velocity $`u`$ and degree of condensation $`f_\mathrm{c}`$ are direct results of the dynamical calculations. The Rosseland mean models in Table 2 (first group) are listed in order of decreasing dust extinction efficiency. Both $`u`$ and $`f_\mathrm{c}`$ change significantly with the dust data used: $`f_\mathrm{c}`$ increases with decreasing dust extinction efficiency while $`u`$ decreases, reflecting a lower optical depth of the circumstellar dust shell (see also Sect. 5). The mass loss rates seem to show a weak overall trend, but it is doubtful whether the differences between “neighbouring” models in Table 2 are significant. Since the mass loss rate varies strongly with time, the average values given in the table are more uncertain than the ones for the velocity and the degree of condensation, neither of which shows large variations with time.
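For concreteness, the two frequency averages of Sect. 4.2 can be sketched as follows; the power-law emissivity is only a toy stand-in for the tabulated laboratory $`Q_{\mathrm{ext}}/a`$ of Table 1, chosen to mimic the smooth wavelength dependence of amorphous carbon:

```python
import numpy as np

# Sketch of Planck and Rosseland means of Q_ext/a (Sect. 4.2).
# Toy emissivity Q/a ~ lambda^-1 (cf. the FIR behaviour in Sect. 3);
# the real calculations use the measured data sets instead.
h_pl, c, k_B = 6.626e-27, 2.998e10, 1.381e-16     # cgs constants

def B_lam(lam, T):
    """Planck function B_lambda(T) in cgs, lam in cm."""
    return 2.0*h_pl*c**2 / lam**5 / np.expm1(h_pl*c/(lam*k_B*T))

def dB_dT(lam, T, eps=1e-3):
    return (B_lam(lam, T*(1+eps)) - B_lam(lam, T*(1-eps))) / (2.0*T*eps)

lam = np.logspace(-5, -1, 4000)                   # 0.1 um .. 1 mm, in cm
Q_over_a = (1.0e-4 / lam)**1.0                    # toy law, beta = 1

def planck_mean(T):
    return np.trapz(Q_over_a*B_lam(lam, T), lam) / np.trapz(B_lam(lam, T), lam)

def rosseland_mean(T):
    w = dB_dT(lam, T)                             # harmonic, dB/dT-weighted
    return np.trapz(w, lam) / np.trapz(w/Q_over_a, lam)

for T in (800.0, 1500.0):
    print(T, planck_mean(T), rosseland_mean(T))   # the two stay close
```

For such a smooth opacity the two means indeed come out similar, which is the point made above with Fig. 2.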
The behaviour of the wind properties can be explained in the following way: the stellar parameters of the models presented here were chosen in such a way that the models fall into a domain where dust formation is efficient and the outflow can easily be driven by radiation pressure on dust (luminous, cool star, relatively high C/O ratio). In this case, the mass loss rate is essentially determined by the density in the dust formation zone, which mainly depends on stellar and pulsation parameters (see e.g. Höfner & Dorfi 1997). Therefore it is not surprising that the mass loss rates of the different models are quite similar. On the other hand, in a self-consistent model, the degree of condensation (dust-to-gas ratio) depends both on the thermodynamical conditions in the region where the dust is formed and on the specific grain opacity. The higher the mass absorption coefficient, the faster the material is pushed out of the zone where efficient dust formation and grain growth are possible. Therefore the degree of condensation decreases with increasing dust absorption coefficients (i.e. higher radiative pressure) as grain growth is slowed down by dilution of the gas. Note that even in the model with the lowest dust absorption coefficient (DJ4R) the condensation of “free” carbon (i.e. all carbon not locked in CO) is far from complete ($`f_\mathrm{c}<1`$). For the two extreme cases (Jäger1000 and Jäger400) we have also calculated models with Planck mean dust opacities. As shown in Table 2 the wind properties of the corresponding Planck and Rosseland models (DJ1P/DJ1R and DJ4P/DJ4R) are very similar (if the differences are significant at all, see above). The two Planck mean models fit nicely into the dust extinction efficiency sequence discussed before for the Rosseland mean models. As demonstrated in many earlier papers (e.g. Winters et al. Winters94 (1994); Höfner & Dorfi Hoefner97 (1997)) the dust formation in dynamical models is not necessarily periodic with the pulsation period $`P`$. In general the models are multi- or non-periodic in the sense that the dust formation cycle is a more or less well defined multiple of $`P`$. In the models discussed here, a new dust shell is formed about every 5–6 pulsation periods. In contrast to the hotter inner regions, the atmospheric structure below about 1500 K does not repeat after each pulsation cycle but is more or less periodic on the dust formation time scales, which span several pulsation cycles. While it is easy to compare the time-averaged wind properties of the models, it is much more problematic to find comparable “snapshots” in different model sequences for the discussion of observable properties presented in the next section. Therefore in all cases involving detailed radiative transfer on top of given model structures we have decided to show statistical comparisons including several maximum and minimum phase models of each sequence. ## 5 Spectral energy distributions and synthetic colours ### 5.1 Frequency-dependent radiative transfer The dynamical calculation yields the structure of the atmosphere and circumstellar envelope (density, temperature, degree of condensation, etc.) as a function of time. The time-independent radiative transfer equation is solved for each frequency separately along parallel rays to obtain spectral energy distributions (Windsteig et al. Windsteig97 (1997) and references therein). The grey gas opacity (Planck mean) is taken directly from the dynamical models.
The dust opacity is calculated from the optical properties of amorphous carbon and in some cases SiC (see Sect. 3 for the description of the different dust data). ### 5.2 Spectral energy distributions To investigate how the different dust data influence the resulting spectral energy distributions (SEDs), spectra were calculated using the optical constants of amorphous carbon of Rouleau, Zubko, Jäger400 and Jäger1000 on top of a fixed atmospheric structure. Two different kinds of SEDs were calculated: (1) fully consistent ones, where the same amorphous carbon data were used in the dynamical model and in the SED calculations, and (2) “inconsistent” ones, where we used different dust opacity data for the detailed radiative transfer on top of the same dynamical model structure (fixed spatial distribution of density, temperature, degree of condensation). The latter spectra enable us to distinguish between the effect of the various dust data in the radiative transfer calculation and the effect on the model structure. Figure 3 shows the result for the SEDs based on a minimum phase model of the DROR model sequence. The full line always denotes the SED of the consistent model where the Rouleau data were used for the underlying dynamical model as well as for the calculation of the spectrum. In the upper panel the (inconsistent) Jäger1000 spectrum (dotted) calculated on top of the same Rouleau model is shown in addition to the consistent Rouleau spectrum. The middle panel shows the same for Zubko and the lower panel for Jäger400. The effects of the different dust data for a given structure, compared to a consistent model using the Rouleau data, can be summarised as follows: * Jäger1000: the spectrum has a lower flux in the short wavelength region (0.5 to 5 $`\mu `$m) and its maximum at longer wavelengths. The lower flux level in this region is due to the fact that $`Q_{\mathrm{ext}}/a`$ for the Jäger1000 data is higher than for the Rouleau data; the resulting higher total dust opacity leads to less emergent flux. The shift of the maximum is also due to the higher dust opacity in the Jäger1000 case. * Zubko: the spectrum has a lower flux level in the short wavelength region and its maximum is shifted to longer wavelengths, but not as far as for the Jäger1000 spectrum. This is due to the fact that Jäger1000 is a more “graphite-like” amorphous carbon dust than the Zubko material. * Jäger400: the spectrum has a comparable flux over the whole wavelength range and its maximum at slightly shorter wavelengths. The slightly higher flux level around the maximum results from the lower $`Q_{\mathrm{ext}}/a`$ of the Jäger400 data, which is due to its more “diamond-like” nature compared to the Rouleau data. In the wavelength region where the maxima of the spectra lie, the two data sets are very similar; therefore the maxima of the SEDs do not differ much in wavelength. Note that for the “inconsistent” SEDs the total flux may differ from the value of the consistent models. ### 5.3 Spectral energy distributions including SiC The analysis of mid-IR carbon star spectra indicates that SiC is the best candidate to reproduce the observations around the 11 $`\mu `$m region. We have therefore considered SiC as an additional dust component (see Sect. 3.3 for details). The formation of SiC is not included in the self-consistent model calculations because (1) little is known about the condensation process and (2) we do not expect that SiC will have a significant influence on the model structures.
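The SiC contribution entering the opacity mixture described next is fully specified by the one-oscillator parameters quoted in Sect. 3.3. A minimal sketch (our own illustration, assuming the standard Lorentz oscillator form of the dielectric function) of the resulting Rayleigh-limit extinction for spheres:

```python
import numpy as np

# One-oscillator model for beta-SiC with the parameters of Sect. 3.3
# (Mutschke et al. 1999) and the Rayleigh-limit Q_ext/a for spheres.
eps_inf, w_TO, w_p, gamma = 6.49, 795.4, 1423.3, 10.0   # cm^-1 units

w = np.linspace(600.0, 1100.0, 2001)        # wavenumber grid around the feature

# Lorentz oscillator dielectric function
eps = eps_inf + w_p**2 / (w_TO**2 - w**2 - 1j*gamma*w)
m = np.sqrt(eps)                            # complex refractive index n + ik

# Rayleigh limit for spheres: Q_ext/a = (8*pi/lambda) * Im[(eps-1)/(eps+2)]
lam = 1.0 / w                               # wavelength in cm (w in cm^-1)
Q_over_a = 8.0*np.pi/lam * ((eps - 1.0)/(eps + 2.0)).imag

peak = w[np.argmax(Q_over_a)]
print("sphere resonance near %.1f um" % (1.0e4/peak))  # between TO and LO
```

As stated above, a spherical shape gives the sharpest version of the resonance; a CDE calculation would broaden it while keeping it between the TO and LO frequencies.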
We use either Planck or Rosseland means for the model computations, and SiC contributes to these mean opacities only in a very narrow wavelength region, and only by a small amount compared to amorphous carbon. The effect of SiC as a dust component is therefore described in a qualitative manner. The dust opacity $`\kappa _\mathrm{d}`$ for each wavelength is calculated from $`\kappa _\mathrm{d}=\frac{\kappa _{\mathrm{amC}}X_{\mathrm{amC}}+\kappa _{\mathrm{SiC}}X_{\mathrm{SiC}}}{X_{\mathrm{amC}}+X_{\mathrm{SiC}}}`$ where the X<sub>i</sub> are the fractional parts of amorphous carbon and SiC, respectively, and $`\kappa _{\mathrm{amC}}`$ and $`\kappa _{\mathrm{SiC}}`$ are the opacities of amorphous carbon and SiC. Figure 4 shows how a mixture of dust grains consisting of amorphous carbon (Jäger1000 data) and SiC modifies the SED around 11.3 $`\mu `$m. Two different ratios of X<sub>amC</sub> : X<sub>SiC</sub>, 4:1 and 9:1, were adopted. The higher the amount of SiC, the stronger the 11.3 $`\mu `$m feature (see inset of Fig. 4). A grain shape other than spherical for the SiC particles would result in a broader and weaker feature. ### 5.4 Synthetic colours For a comparison of the consistent spectra (model structure and spectra computed with the same dust data) we have calculated synthetic J, H and L colours as well as the IRAS 12 $`\mu `$m colour. In a (J$`-`$H) versus (H$`-`$L) diagram (Fig. 5a) the models based on different amorphous dust data fall into distinct regions. The models calculated with the Jäger1000 dust data have the reddest colours in (H$`-`$L). The Jäger400 colours are the bluest, while Rouleau and Zubko lie in between. In (J$`-`$H), models with the Zubko data are reddest and the others do not differ much. The reason for the different slopes of Rouleau and Zubko compared to both of the Jäger data sets is that in these cases the maxima of the SEDs shift between the J and the H filter depending on the phase. The maxima of the SEDs resulting from the Jäger1000 model structures are always at longer wavelengths, and the ones of the Jäger400 structures lie mainly in one filter. From Fig. 5b, which shows the “inconsistent” colours based on model DROR (structure calculated with the Rouleau data and the spectra with other dust data) in addition to the consistent Rouleau colours, it is clear that the influence of the different dust data used in the radiative transfer calculation is much stronger than the effect of the underlying hydrodynamic model structure. Note that in Fig. 5 only maximum and minimum phases are shown; other phases would fill in the gaps between successive extremes. The colours are strongly related to the formation of a new dust shell, which takes place every 5 to 6 pulsation cycles (see Sect. 4.3). After this time scale the colours match very closely the ones of the preceding dust formation cycle, as shown in Fig. 6. When connecting the successive points it can be seen that they form a spiral. The minima (circles) are always redder than the following maxima (asterisks). To investigate also the mid-IR properties of the models we calculated the 12 $`\mu `$m colour. A (L$`-`$[12]) vs. (H$`-`$L) diagram (Fig. 5c) shows that again the consistent colours fall into distinct regions. The sequence in (L$`-`$[12]) (Zubko – Rouleau – Jäger1000 – Jäger400) is a sequence of decreasing optical depths. Table 3 lists the mean dust optical depths for a few selected wavelengths. In Fig. 5d the inconsistent colours based on the model structure of DROR are shown for comparison. From Fig.
5 we can infer that the influence of the different amorphous carbon dust data used in the radiative transfer calculation is much stronger than the effect of the model structures. The colours resulting from the same dust data fall approximately into the same region of a two-colour diagram, whether they are calculated on top of the corresponding model structure (upper panel) or a fixed model sequence (lower panel). This applies to all data sets. In addition we computed synthetic colours for the Maron and the Preibisch data, but they are not shown in Fig. 5 to avoid overlaps in the plot. The Preibisch colours fall into the same region as the Zubko colours, as one would expect because the $`Q_{\mathrm{ext}}/a`$ are very similar. The same applies for the Maron and the Rouleau data. Only the J flux differs between Maron and Rouleau, the reason being that the Maron data do not extend below 1 $`\mu `$m and therefore the corresponding contribution to the J filter is lacking. ## 6 Summary and conclusions Carbon-bearing grains are expected to form in the outflows of C-rich AGB stars. The two most common types of carbon grains in these stars are expected to be amorphous carbon and SiC grains. We have investigated the direct influence of different dust optical properties on the wind characteristics and the resulting observable properties of the dynamical models. The term amorphous carbon covers a wide variety of material properties from “diamond-like” to “graphite-like”. We have used $`n`$ and $`k`$ data of Maron (Maron90 (1990)), Rouleau & Martin (Rouleau91 (1991)), Preibisch et al. (Preibisch93 (1993)), Zubko et al. (Zubko96 (1996)) and Jäger et al. (Jager98 (1998)) to investigate the influence of different types of amorphous carbon on the structure and the wind properties of dynamical models. The Rosseland and Planck mean values of Q<sub>ext</sub>/a used in the model computations were calculated in the Rayleigh approximation for spheres. The difference between the Planck and the Rosseland means is relatively small for the amorphous carbon data because the grains have a continuous opacity with a smooth wavelength dependence. In our test models, both the outflow velocity $`u`$ and degree of condensation $`f_\mathrm{c}`$ change significantly with the dust data used: $`f_\mathrm{c}`$ increases with decreasing dust extinction efficiency while $`u`$ decreases, reflecting a lower optical depth of the circumstellar dust shell. The mass loss rate is, however, not significantly influenced by the use of different dust data. On top of the structures resulting from the dynamic calculations we have performed detailed radiative transfer calculations to obtain the spectral energy distribution of the circumstellar dust shells. Regarding infrared colours, the influence of the different dust data used in the radiative transfer calculation is much stronger than the effect of the underlying hydrodynamic model structure. However, this should not be used as an excuse for fitting observations by arbitrarily choosing the optical properties of the dust grains for a given model structure: in a consistent model the dynamical properties (e.g. outflow velocities) and the optical appearance of the circumstellar envelope are related in a complex way. The influence of including SiC grains is that the 11.3 $`\mu `$m feature appears in the spectral energy distribution of the models. How much SiC should be “mixed” into a model to reproduce the observed 11.3 $`\mu `$m feature (e.g.
class 4 in Goebel Goebel95 (1995)) will very much depend on the assumptions made about the size and shape of the SiC grains entering the model (Papoular et al. Papoular98 (1998); Andersen et al. Andersen99 (1999); Mutschke et al. Mutschke99 (1999)). ###### Acknowledgements. This work was supported by the Austrian Science Fund (FWF, project number S7305-AST) and the Austrian Academy of Sciences (RL acknowledges a “Doktorandenstipendium”). We thank E.A. Dorfi (IfA Vienna) for inspiring discussions.
# Transient radio emission from SAX J1808.4–3658 ## 1. Introduction Millisecond pulsars (MSPs) have long been thought to be the endpoint in the evolution of low-mass X-ray binaries (LMXBs) (Alpar et al. (1982); Radhakrishnan & Srinivasan (1982)). Although the link between the LMXBs and MSPs is strong, evidence that LMXBs do indeed contain objects spinning at millisecond periods was, until recently, still missing. The quasi-periodic oscillations recently discovered in X-ray binary systems, especially those near 1 kHz (van der Klis et al. (1996)), offer indirect evidence for the existence of weakly magnetized neutron stars with millisecond periods. However, the recent discovery of 2.49 ms coherent X-ray pulsations from the LMXB SAX J1808.4–3658 (Wijnands & van der Klis 1998b ; Chakrabarty & Morgan (1998)) now gives strong support to the picture that LMXBs are the progenitors of MSPs; this system provides the best evidence yet that a low-field neutron star can be spun up to millisecond periods via accretion from its companion. There are a number of processes which could potentially produce radio emission from such a system. An exciting possibility is that SAX J1808.4–3658 will, at some point, turn on as a radio pulsar, producing pulsed radio emission characteristic of MSPs. Timing of such pulses could allow an improved determination of the astrometric, rotational and orbital parameters of the system, as well as the determination of possible post-Keplerian and tidal effects. Furthermore, they could probe any wind produced by the interaction between the pulsar and its companion, as in the eclipsing binary millisecond pulsar systems (e.g. Fruchter et al. (1990)). Such emission could potentially be heavily scattered by this interaction or by interactions with material excreted from the system (e.g. Rasio, Shapiro, & Teukolsky (1989)); it would then appear unpulsed and could only be detected as a continuum source. Alternatively, unpulsed radio emission could be produced by the interaction of the relativistic pulsar wind with the interstellar medium (e.g. Frail et al. (1996)). Finally, radio emission could be associated with the accretion process and X-ray outburst, as seen in a significant fraction of the X-ray binary population (Hjellming & Han (1995); Fender & Hendry (1999)). Here we report on a search for radio emission during the outburst and then also during quiescence, aimed at testing these possibilities. In §2 we describe our observations and analysis, while in §3 we demonstrate the detection of radio emission from the system and discuss its implications. ## 2. Observations and Reduction All observations were made with the Australia Telescope Compact Array (ATCA; Frater, Brooks, & Whiteoak (1992)), an east-west synthesis array located near Narrabri, NSW, Australia; these observations are summarized in Table 1. The ATCA can observe two frequencies simultaneously; for all epochs except 1998 Nov 30, we alternated, approximately every 20 minutes, between observing with a 1.4/2.5 GHz combination and a 4.8/8.6 GHz combination. On 1998 Nov 30, observations were only made at 1.4/2.5 GHz. A bandwidth of 128 MHz was used at each frequency. A pointing center RA (J2000) $`18^\mathrm{h}08^\mathrm{m}13^\mathrm{s}`$, Dec (J2000) $`-36\mathrm{°}57\mathrm{}18\mathrm{}`$ was observed in all cases, approximately $`3\mathrm{}`$ from the position of SAX J1808.4–3658 (Giles, Hill, & Greenhill (1999)).
Amplitudes were calibrated using observations of PKS B1934–638, assuming a flux density for this source of 15.0, 11.1, 5.8 and 2.8 Jy at 1.4, 2.5, 4.8 and 8.6 GHz respectively (where 1 Jy $`=10^{-26}`$ W m<sup>-2</sup> Hz<sup>-1</sup>). Phases were calibrated using observations once per hour of PKS B1934–638 (at 1.4 & 2.5 GHz) and PMN J1733–3722 (at 4.8 & 8.6 GHz). Data were reduced in the MIRIAD package using standard techniques. For each frequency in each observation, images were formed using natural weighting and excluding baselines shorter than 1.5 km. Sidelobes around detected sources were deconvolved using the CLEAN algorithm. Each image was then smoothed to the appropriate diffraction limit, and corrected for the mean primary beam response of the ATCA antennas. ## 3. Results and Discussion The position for SAX J1808.4–3658, as determined by observations of its optical counterpart, is RA (J2000) $`18^\mathrm{h}08^\mathrm{m}27\stackrel{}{\mathrm{.}}54`$, Dec (J2000) $`-36\mathrm{°}58\mathrm{}44\stackrel{}{\mathrm{.}}3`$, with uncertainties of $`0\stackrel{}{\mathrm{.}}8`$ in each coordinate (Giles, Hill, & Greenhill (1999)). In all observations except Obs 1, the only radio source detected in a $`5\mathrm{}`$ region surrounding this position was a double-lobed radio galaxy, NVSS 180824–365813 (Condon et al. (1998)); no source was seen at the position of SAX J1808.4–3658. The resolution and limiting sensitivity for these non-detections are summarized in Table 1. In Obs 1 an unresolved radio source near the position of SAX J1808.4–3658 was detected at the $`4\sigma `$ level at each of 2.5, 4.8 and 8.6 GHz, as shown in Figure 1; flux densities are given in Table 2. The source was not detected at 1.4 GHz. We attempted various approaches to the imaging, deconvolution and fitting processes. The results of these suggest a systematic uncertainty in the flux densities for the source of $`\sim `$50%, a value to be expected when deconvolving a weak source under conditions of poor $`uv`$ coverage. Our best position for this source is RA (J2000) $`18^\mathrm{h}08^\mathrm{m}27\stackrel{}{\mathrm{.}}6`$, Dec (J2000) $`-36\mathrm{°}58\mathrm{}43\stackrel{}{\mathrm{.}}9`$, with an uncertainty of $`0\stackrel{}{\mathrm{.}}5`$ in each coordinate. In the bottom panel of Figure 1, our position and the optically determined positions of Giles et al. (1999) and Roche et al. (1998) are compared. All three positions are consistent within the quoted uncertainties. The source shown in Figure 1 is almost certainly not an artifact. It is not at the phase center, and was detected at three different frequencies and for a variety of different weighting schemes and combinations of baselines. While the probability of finding an unrelated radio source within a few arcsec of SAX J1808.4–3658 is low in any case, we note that this source was not detected at any other epoch, despite the improved sensitivity of these later observations. Furthermore, the region was observed with the Very Large Array (VLA) on 1998 Apr 26, the day immediately before Obs 1, and no source was detected at this position down to a comparable sensitivity (R.M. Hjellming, private communication). Therefore, from its transient nature and positional coincidence with the optical counterpart of SAX J1808.4–3658, we conclude that this radio source is associated with the system. Our data lack the time-resolution required to search for pulsed radio emission from this source.
However, X-ray emission due to accretion was present at the time of our detection, and it is likely that, if any radio emission was being produced in the pulsar magnetosphere, it would be quenched by this process. Thus the source we have detected is most likely not emission associated with the magnetosphere. It is also unlikely that the source corresponds to emission from a pulsar wind (if such a wind even exists at this point in the system’s evolution): if we assume that the disappearance of the source between Obs 1 and Obs 2 is due to synchrotron losses, then this cooling time-scale implies a magnetic field in the wind of $`\sim `$2 G, many orders of magnitude higher than observed for other pulsars (e.g. Manchester, Staveley-Smith, & Kesteven (1993); Frail et al. (1996)). The detection of radio emission only at an epoch during which X-rays were still being generated provides strong evidence that this radio source is related to the accretion process. Radio emission has been detected from $`\sim `$25% of all X-ray binaries (Fender & Hendry (1999)). In cases for which this emission has been resolved, it takes the form of jets being emitted from the system, often at relativistic velocities (Fender, Bell Burnell, & Waltman (1997)). The burst properties and rapid X-ray variability of SAX J1808.4–3658 suggest that it is an atoll source, i.e. a low magnetic field neutron star accreting at about 10% of the Eddington limit (Wijnands & van der Klis 1998a ). Fender & Hendry (1999) review the radio properties of different types of persistent X-ray binaries, and show that most atoll sources show no radio emission; the only detections have been transient, and at the mJy level. When transient radio emission is seen from the atoll sources, the spectrum is at first absorbed, but then becomes optically thin and takes on a synchrotron spectrum with $`\alpha \sim -0.5`$ ($`S_\nu \propto \nu ^\alpha `$). The emission then decays away through adiabatic losses (e.g. Hjellming & Han (1995)). The properties of the radio emission seen here for SAX J1808.4–3658 are consistent with this behavior. While the spectrum of the source is very poorly constrained by our data, the flux densities for it at 2.5 GHz and above are consistent with $`\alpha \sim -0.5`$, while the non-detection at 1.4 GHz is suggestive of a low-frequency turnover, probably due to self-absorption in the ejecta. We note that on 1998 Apr 26, the day before our radio detection, the X-ray flux suddenly deviated from exponential decay and began to decrease rapidly (Gilfanov et al. (1998)). As discussed by these authors, there are two different mechanisms which can produce such a cut-off: either the onset of the so-called “propeller phase” or an instability of the accretion disk. In both cases the abrupt change corresponds to ejection of material from the system. The appearance of transient radio emission just after this event suggests that it is this ejection of material which has produced the source we see here. If we assume that the emission above 2.5 GHz corresponds to optically thin, incoherent, unbeamed, synchrotron emission, the $`10^{12}`$ K Compton limit on the brightness temperature corresponds to a minimum scale for the emission of $`\sim `$12 light seconds at a distance of 4 kpc, much larger than the $`\sim `$1 light second binary separation of the orbit (Chakrabarty & Morgan (1998)). Thus the emission is coming from well beyond the binary, as would be expected if it has resulted from an expulsion of material from the system.
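The size estimate quoted above follows from the Rayleigh-Jeans relation between flux density, brightness temperature and angular size. A sketch with round numbers (the flux density below is an assumption of ours, only of the order of the measured values in Table 2):

```python
import numpy as np

# Minimum source size from the 10^12 K Compton limit on T_B.
# For a source of solid angle ~ theta^2, the Rayleigh-Jeans brightness
# temperature is T_B = S_nu * c^2 / (2 * k_B * nu^2 * theta^2).
c, k_B = 2.998e10, 1.381e-16          # cgs
kpc, lt_sec = 3.086e21, 2.998e10      # cm

S_nu = 0.6e-26                        # assumed flux density: 0.6 mJy in cgs
nu = 4.8e9                            # Hz
d = 4.0 * kpc                         # distance ~4 kpc
T_B_max = 1.0e12                      # Compton limit [K]

theta_min = np.sqrt(S_nu * c**2 / (2.0 * k_B * nu**2 * T_B_max))   # rad
size_min = theta_min * d              # cm
print("minimum size ~ %.0f light seconds" % (size_min / lt_sec))   # ~12
```

Any flux density at the few tenths of a mJy level at these frequencies gives a minimum size of order ten light seconds, an order of magnitude larger than the binary separation.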
At later times, no radio emission was detected at the position of SAX J1808.4–3658. This does not necessarily indicate that the radio pulse mechanism is not yet functioning: despite strong limits on the flux density from our non-detections, the apparently large distance to the system of $`\sim `$4 kpc corresponds to a 3$`\sigma `$ monochromatic luminosity limit of $`L_{1.4\mathrm{GHz}}\simeq 6`$ mJy kpc<sup>2</sup>, greater than that of the majority of known MSPs (Taylor et al. (1995)). Furthermore, Ergma et al. (1999) have proposed that this system may only be detectable at shorter wavelengths ($`\lambda <3`$ cm) due to free-free absorption by material excreted from the system. ## 4. Conclusion We have detected a transient radio source coincident with SAX J1808.4–3658; the source turned on within a day, and had disappeared again a month later. We interpret this source as radio emission associated with ejection of material, as seen in other X-ray binaries, and in this case possibly associated with the onset of a propeller phase or a disk instability the day before the source was detected. The spectrum and light curve of this source are essentially unconstrained, however, and the source should certainly be searched for radio emission the next time it is in outburst. Eventually, it is hoped that this source will emerge as a radio MSP; radio searches in quiescence should continue to be carried out in anticipation of this. We thank Rob Fender and Deepto Chakrabarty for useful discussions, Bob Hjellming for communicating the results of his VLA observations, and Scott Cunningham, Lucyna Kedziora-Chudczer, Robin Wark and Mark Wieringa for assistance with the observations. The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. BMG acknowledges the support of NASA through Hubble Fellowship grant HF-01107.01-98A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5–26555. BWS is supported by NWO Spinoza grant 08-0 to E.P.J. van den Heuvel.
# Order and chaos in galactic maps ## 1 Introduction The application of modern methods in the domain of Galactic Dynamics has proved very fruitful during the last decade. Among them is the use of maps to describe galactic motion (Caranicolas c1 (1990), c3 (1994)), and the invariant spectra in galactic Hamiltonian systems (see Contopoulos et al. cong (1995), Patsis et al. panos (1997)). Maps, derived from galactic type Hamiltonians, are useful in the study of galactic orbits because they are, in general, faster than numerical integration and allow a quick visualisation of the corresponding phase plane. Maps can also give the stability conditions for the periodic orbits. On the other hand, our experience based on previous work shows that the results given by the maps are in good agreement with those given by numerical integration, at least for small perturbations (see Caranicolas & Karanis ck (1999)). On this basis, it seems that one has good reasons to use a map for the study of galactic motion. In the present work we consider that the local (i.e. near an equilibrium point) galactic motion is described by the potential $$V=\frac{1}{2}\left(\omega _1^2x^2+\omega _2^2y^2\right)-ϵ\left[\beta \left(x^4+y^4\right)+2\alpha x^2y^2\right],$$ (1) where $`\omega _1,\omega _2`$ are the unperturbed frequencies of oscillation along the x and y axis respectively, $`ϵ>0`$ is the perturbation strength while $`\alpha ,\beta `$ are parameters. We shall study the case where $`\omega _1=\omega _2=\omega `$. Without loss of generality we can take $`\omega =1`$, that is, the 1:1 resonance case. Then the Hamiltonian corresponding to the potential (1) is $`H`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(p_x^2+p_y^2+x^2+y^2\right)`$ $`-ϵ\left[\beta \left(x^4+y^4\right)+2\alpha x^2y^2\right]=h,`$ (2) where $`p_x,p_y`$ are the momenta per unit mass conjugate to x, y and h is the numerical value of H. Our aim is to study the various types of periodic orbits, their stability and the kind of non-periodic motion (regular or chaotic) in the Hamiltonian (2) for various values of the parameters $`\alpha `$ and $`\beta `$, using the map corresponding to the Hamiltonian (2) as well as numerical integration. This resonance case is also known as the perturbed elliptic oscillators (Deprit d1 (1991), Deprit & Elipe d2 (1991), Caranicolas & Innanen ci (1992), Caranicolas c2 (1993)). The $`x`$–$`p_x`$ Poincaré phase plane derived by the map and the numerical integration will be compared in each case. Of special interest is the study of the chaotic motion. We shall try to find an answer to questions such as: 1. Does the map describe in a satisfactory way the chaotic layers in the $`x`$–$`p_x`$ plane and, if so, how does this behavior evolve with increasing $`ϵ`$? 2. What are the differences, if any, in the Lyapunov Characteristic Number (LCN) found by the map and the numerical integration in the regular and the chaotic areas? 3. Are there any similarities in the invariant spectra derived by the map and the numerical integration? The map and the stability conditions of the periodic orbits are given in Section 2. In the same Section we compare the $`x`$–$`p_x`$ phase plane found by the map and numerical integration for some of the main different cases. In Section 3 we compare the LCNs and the spectra of orbits derived using the map and numerical integration. Section 4 is devoted to a discussion and the conclusions of this work.
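For reference, the direct numerical integration used throughout can be sketched as follows. This is a minimal Python illustration, not the integrator actually used; the parameter values are those of the type C case discussed in Sect. 2, and the initial condition is an arbitrary example:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Equations of motion of Hamiltonian (2) for omega_1 = omega_2 = 1,
# with Poincare section points collected at y = 0, p_y > 0 crossings
# (these build phase planes of the kind shown in Figs. 1, 4 and 6).
eps, alpha, beta = 0.15625, 1.20, 0.20     # type C example values

def rhs(t, w):
    x, y, px, py = w
    ax = -x + eps*(4.0*beta*x**3 + 4.0*alpha*x*y**2)
    ay = -y + eps*(4.0*beta*y**3 + 4.0*alpha*y*x**2)
    return [px, py, ax, ay]

# initial condition on the x-p_x plane (y = 0), p_y fixed by h = 2
h, x0, px0 = 2.0, 0.5, 0.0                 # arbitrary starting point
V0 = 0.5*x0**2 - eps*beta*x0**4
py0 = np.sqrt(2.0*(h - V0) - px0**2)

def section(t, w):                         # event function: y = 0
    return w[1]
section.direction = 1.0                    # keep only p_y > 0 crossings

sol = solve_ivp(rhs, (0.0, 2000.0), [x0, 0.0, px0, py0],
                rtol=1e-10, atol=1e-10, events=section)
x_sec, px_sec = sol.y_events[0][:, 0], sol.y_events[0][:, 2]
```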
## 2 Map and stability conditions The averaged Hamiltonian corresponding to the Hamiltonian (2) reads $`<H>`$ $`=`$ $`ϵ\alpha J\left(\mathrm{\Lambda }-J\right)\left[2-\mathrm{cos}2\varphi \right]+{\displaystyle \frac{3}{2}}ϵ\beta \left[J^2+\left(\mathrm{\Lambda }-J\right)^2\right]`$ (3) Following the steps described in Caranicolas (c1 (1990)) we find the map $`J_{n+1}`$ $`=`$ $`J_n-2ϵ\alpha J_{n+1}(\mathrm{\Lambda }-J_{n+1})\mathrm{sin}2\varphi _n`$ $`\varphi _{n+1}`$ $`=`$ $`\varphi _n+ϵ(2\alpha -3\beta )(\mathrm{\Lambda }-2J_{n+1})-ϵ\alpha (\mathrm{\Lambda }-2J_{n+1})\mathrm{cos}2\varphi _n`$ (4) where $`\mathrm{\Lambda }=h`$. The map describes the motion in the $`J`$–$`\varphi `$ plane and we return to the $`x,p_x`$ variables through $`x=(2J)^{1/2}\mathrm{cos}\varphi ,p_x=(2J)^{1/2}\mathrm{sin}\varphi `$. The fixed points of (4) are at (i) $`J=0`$, for any $`\varphi `$; (ii) $`J=\mathrm{\Lambda }/2`$, $`\varphi =0,\pi `$; (iii) $`J=\mathrm{\Lambda }/2`$, $`\varphi =\pm \pi /2`$. (5) There are three distinguished cases of the $`x`$–$`p_x`$ phase plane portrait in the Hamiltonian (2): type A, when both fixed points (i) and (ii) are stable; type B, when points (i) are stable while fixed points (ii) are unstable; and type C, in which fixed points (i) and (ii) are unstable and stable respectively. Applying the stability conditions (see Lichtenberg & Lieberman ll (1983)) we find, after some straightforward calculations, that the type A phase plane appears when $`\alpha >\beta >\alpha /3`$ $`(\alpha >0,\beta >0)`$; type B appears when $`\beta >\alpha `$ $`(\alpha >0,\beta >0)`$; and we observe the phase plane of type C when $`\alpha >3\beta `$ $`(\alpha >0,\beta >0)`$ or if $`\alpha <0,\beta >0`$. In all three cases there is, for a fixed value of the energy h, a value of the perturbation strength $`ϵ_{\mathrm{esc}}`$ such that for $`ϵ>ϵ_{\mathrm{esc}}`$ the curves of zero velocity $`h-V=0`$ open and the test particle is free to escape. We do not consider cases where the curves of zero velocity are always closed, that is, cases where the Hamiltonian (1) has no $`ϵ_{\mathrm{esc}}`$. The value of $`ϵ_{\mathrm{esc}}`$ can be found using the method described in Caranicolas & Varvoglis (cvar (1984)). For the type A phase plane we find $$ϵ_{\mathrm{esc}}=1/[8h(\alpha +\beta )],$$ (6) while in the cases B and C $`ϵ_{\mathrm{esc}}`$ is given by the formula $$ϵ_{\mathrm{esc}}=1/[16h\beta ].$$ (7) Let us now look at the three different types of phase plane produced by the Hamiltonian system (2). In all numerical calculations we use the value $`h=2`$. Fig. 1 shows the type A $`x`$–$`p_x`$ phase plane derived by numerical integration. The values of $`\alpha ,\beta `$ are 1.2 and 0.8 respectively, while $`ϵ=ϵ_{\mathrm{esc}}=0.03125`$. The motion is everywhere regular except near the hyperbolic point in the center and in a thin strip along the separatrix (see Fig. 3). Fig. 2 is the corresponding figure produced by the map. As one can see, the agreement is good. One significant difference is that the map is inadequate to produce the chaotic layer seen in Fig. 3. Also note that $`x_{max}`$ in Fig. 1 is greater than 2, while in Fig. 2 it is smaller. Fig. 4 and Fig. 5 show the type B phase plane derived using numerical integration and the map respectively. The values of the parameters are $`\alpha =0.20`$, $`\beta =0.25`$, while $`ϵ=ϵ_{\mathrm{esc}}=0.125`$. The results are similar to those observed in Figs. 1 and 2. Again the map describes well the real phase plane except in a small chaotic region near the two hyperbolic fixed points.
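A minimal sketch of iterating the map (4) follows. Note that the first equation is implicit in $`J_{n+1}`$; here we handle it by simple fixed-point iteration, which is our own choice of solver and is adequate for the perturbation strengths used in this paper:

```python
import numpy as np

# One step of map (4), followed by the return to (x, p_x) via
# x = sqrt(2J) cos(phi), p_x = sqrt(2J) sin(phi).
def map_step(J, phi, eps, alpha, beta, Lam):
    Jn = J
    for _ in range(20):                       # solve the implicit J-equation
        Jn = J - 2.0*eps*alpha*Jn*(Lam - Jn)*np.sin(2.0*phi)
    phi_n = (phi + eps*(2.0*alpha - 3.0*beta)*(Lam - 2.0*Jn)
             - eps*alpha*(Lam - 2.0*Jn)*np.cos(2.0*phi))
    return Jn, phi_n % (2.0*np.pi)

# type C parameters of Figs. 6-7, with Lambda = h = 2
eps, alpha, beta, Lam = 0.15625, 1.20, 0.20, 2.0

J, phi = 0.9, 0.3                             # arbitrary starting point
orbit = []
for _ in range(2000):
    J, phi = map_step(J, phi, eps, alpha, beta, Lam)
    orbit.append((np.sqrt(2.0*J)*np.cos(phi),
                  np.sqrt(2.0*J)*np.sin(phi)))
```

Iterating many such starting points and plotting the collected $`(x,p_x)`$ pairs produces phase portraits of the kind shown in Figs. 2, 5 and 7.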
Thus, our numerical calculations suggest that the map (4) describes well the properties of motion in the Hamiltonian (2) up to the largest perturbation, that is $`ϵ=ϵ_{\mathrm{esc}}`$, but that it is insufficient to describe the small chaotic region observed when using numerical integration. The type C phase plane is shown in Figs. 6 and 7. Fig. 6 comes from numerical integration, while Fig. 7 was derived using the map. The values of the parameters are $`\alpha =1.20,\beta =0.20`$, while $`ϵ=ϵ_{\mathrm{esc}}=0.15625`$. This value of $`ϵ`$ was chosen to maximize the chaotic effects. The most important characteristic, observed in both Figs. 6 and 7, is the large unified chaotic region. In contrast with the two previous cases A and B, here the map reproduces satisfactorily the chaotic sea found by numerical integration. On the other hand, it is evident that the map describes qualitatively, in a satisfactory way, the areas of regular motion around the elliptic points on the $`p_x`$ axis. Some differences are observed between the two patterns in the area near the center. Another important characteristic, observed in case C, is that the chaotic areas are large even when $`ϵ<ϵ_{\mathrm{esc}}`$. Results, not shown here, suggest that considerable chaotic areas are observed in the phase plane derived using the map when $`ϵ\gtrsim 0.08`$. Therefore we must admit that our numerical experiments show that, in the case when the Hamiltonian system (2) has small chaotic regions, the map is inadequate to produce them, but it describes them satisfactorily when they are large. ## 3 Dynamical spectra In this Section we shall study the spectra of the Hamiltonian system (2) using numerical integration and the map (4). Before doing this, it is necessary to recall some useful notions and definitions. The “stretching number” $`\alpha _i`$ (see Voglis & Contopoulos vogcon (1994), Contopoulos et al. cong (1995), Contopoulos & Voglis conv (1997)) is defined as $$\alpha _i=\mathrm{ln}\left|\frac{\xi _{i+1}}{\xi _i}\right|,$$ (8) where $`\xi _{i+1}`$ is the next image on the Poincaré phase plane of an infinitesimal deviation $`\xi _i`$ between two nearby orbits. The spectrum of the stretching numbers is their distribution function $$S(\alpha )=\frac{\mathrm{\Delta }N(\alpha )}{Nd\alpha },$$ (9) where $`\mathrm{\Delta }N(\alpha )`$ is the number of stretching numbers in the interval $`(\alpha ,\alpha +d\alpha )`$ after N iterations. The maximal Lyapunov characteristic number can be written as $$LCN=\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\alpha S(\alpha )𝑑\alpha ,$$ (10) in other words, the LCN is the average value of the stretching number $`\alpha `$. Today we know that the distribution of the successive stretching numbers forms a spectrum which is invariant with respect to (i) the initial conditions along an orbit and the direction of the deviation from this orbit, and (ii) the initial conditions of the orbits belonging to the same chaotic region. In what follows we shall give the spectra $`S(\alpha )`$ of orbits of the Hamiltonian system (2) derived using the map (4) as well as numerical integration. In the case of the Hamiltonian, where t is a continuous quantity (that is, in the numerical integration of the equations of motion), we derive the stretching numbers and spectra with the method used by Contopoulos et al. (1995). All calculations correspond to the case C because the results are much more interesting. Fig.
8 shows the spectra of two orbits (the first with a solid line, the other with dots) derived using numerical integration. The orbits were started in the chaotic region with different initial conditions. It is evident that the two spectra are close to each other. Fig. 9 shows the spectra of the same two orbits found using the map. Again the two spectra are close to each other. The spectra in both cases were calculated for $`N=10^6`$ periods. As one observes, the patterns shown in Figs. 8 and 9 have the characteristics of the spectrum of orbits belonging to a chaotic region. Fig. 10 shows the LCNs for two chaotic orbits. Number 1 was derived using the map, while number 2 was found using numerical integration. As one can see, the mean exponential divergence of the two nearby chaotic trajectories described by the map is larger than that given by numerical integration. Nevertheless the two curves are qualitatively similar. In Figs. 11 and 12 we give the spectra of the same regular orbit derived by numerical integration and the map respectively. This is an orbit with initial conditions near the periodic orbit $`x=y=0,p_x=\mathrm{\Lambda }^{1/2}`$. The spectra were calculated for $`N=10^6`$ periods. As one can see, the agreement between the two spectra is good. From other cases we know that the agreement is much better if we calculate an orbit for $`N=10^7`$ or $`N=10^8`$ periods. Both are “U-shaped”, with two large and two small peaks. Such spectra are characteristic of quasi-periodic orbits starting close to a stable periodic orbit (see Patsis et al. panos (1997)). Fig. 13 shows the LCNs for the orbit of Fig. 11 derived using the map (dots) and numerical integration (solid line). Again one can see that the map describes well the qualitative properties of regular orbits. Let us now turn to the symmetry of the spectra. It was observed that the spectra of ordered orbits starting close to a stable periodic orbit are almost symmetric with respect to the $`\alpha =0`$ axis, while, as we go far from the stable periodic orbit, they become asymmetric. In order to quantify the relation between closeness and symmetry we have made extensive numerical experiments near the exact periodic orbit $`x=0`$, $`p_x=h^{1/2}=\sqrt{2}`$. We define as “symmetry factor” the quantity $$Q=\frac{1}{N}\sum _{i=1}^{N}|S(\alpha _i)-S(-\alpha _i)|$$ (11) As one can see, $`Q=0`$ corresponds to a perfectly symmetric spectrum, while $`Q=1`$ corresponds to a totally asymmetric one, with all non-zero values of $`S(\alpha )`$ belonging to either positive or negative $`\alpha `$. The results, $`Q`$ vs $`p_x`$, for the map in the case C when $`ϵ=0.02`$ are shown in Fig. 14. Dots correspond to values given by the numerical experiments, while the solid line corresponds to the best fit $$Q=\mathrm{a}p_x^2+bp_x+c,$$ (12) where $`\mathrm{a}=0.2704`$, $`b=-0.7960`$ and $`c=0.5986`$. Fig. 15 is the same as Fig. 14, for the Hamiltonian in the case C when $`ϵ=0.1`$. The best fit is now with $`\mathrm{a}=0.9100`$, $`b=-2.659`$ and $`c=1.983`$. As we can see, there is a second order polynomial growth of the asymmetry of the spectrum as we move away from the stable periodic orbit. The value of N in all cases was $`10^6`$ iterations. ## 4 Discussion In this paper we have studied the regular and chaotic motion in the Hamiltonian system (2) using a map derived from the averaged Hamiltonian, as well as numerical integration. Note that the Hamiltonian (2) is integrable when $`\alpha =\beta `$ and when $`\alpha =3\beta `$.
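As a concrete summary of the machinery of Sect. 3, the following sketch computes stretching numbers, the spectrum $`S(\alpha )`$, the LCN of Eq. (10) and the symmetry factor Q of Eq. (11) along a map orbit. It reuses the `map_step` function sketched in Sect. 2; the two-orbit finite-difference treatment of the deviation vector is a simple stand-in for the variational equations used in such studies:

```python
import numpy as np

# Spectrum S(alpha), LCN (Eq. 10) and symmetry factor Q (Eq. 11)
# along an orbit of map (4); `map_step` is defined in the Sect. 2 sketch.
def spectrum(J0, phi0, eps, alpha, beta, Lam, N=100000, dx=1e-8):
    a = np.empty(N)
    w = np.array([J0, phi0])
    wd = np.array([J0 + dx, phi0])              # nearby orbit
    for i in range(N):
        w = np.array(map_step(*w, eps, alpha, beta, Lam))
        wd = np.array(map_step(*wd, eps, alpha, beta, Lam))
        xi = np.linalg.norm(wd - w)
        a[i] = np.log(xi / dx)                  # stretching number, Eq. (8)
        wd = w + (wd - w) * dx / xi             # renormalise the deviation
    # distribution function S(alpha) on a symmetric grid, Eq. (9)
    S, edges = np.histogram(a, bins=201, range=(-3.0, 3.0), density=True)
    centres = 0.5 * (edges[1:] + edges[:-1])
    lcn = a.mean()                              # average stretching number
    Q = np.abs(S - S[::-1]).mean()              # |S(alpha) - S(-alpha)|
    return centres, S, lcn, Q
```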
## 4 Discussion

In this paper we have studied the regular and chaotic motion in the Hamiltonian system (2) using a map derived from the averaged Hamiltonian, as well as numerical integration. Note that Hamiltonian (2) is integrable when $`\alpha =\beta `$ and when $`\alpha =3\beta `$. Depending on the choice of the parameters $`\alpha ,\beta `$, this Hamiltonian displays three different types of $`xp_x`$ phase plane, named types A, B and C. In the case of the first two types of phase plane, the map describes in a very satisfactory way the real properties of the orbits up to $`ϵ=ϵ_{\mathrm{esc}}`$. Extensive numerical calculations have shown that the map does not produce any chaotic regions in cases A and B. Therefore, one concludes that the map fails to describe the small chaotic regions near the separatrix found by numerical integration. Here we must note that we had initially started with smaller values of $`ϵ`$, but we observed that the map persistently failed to produce chaos up to $`ϵ=ϵ_{\mathrm{esc}}`$. This explains why $`ϵ=ϵ_{\mathrm{esc}}`$ was chosen. The situation is quite different in the case of the phase plane of type C. Here the system has large chaotic regions, which grow as $`ϵ`$ approaches $`ϵ_{\mathrm{esc}}`$, and the map describes them satisfactorily. Moreover, the comparison of the spectra and LCNs of orbits found using the map and numerical integration shows that one can trust the map. This is very important because the map is at least 10-20 times faster than numerical integration, so all the time-consuming calculations can be accelerated accordingly. Note that in all cases the number of periods in the calculation of the spectra using the map or numerical integration was the same, in order to allow a comparison of the corresponding results. We have also investigated the symmetry of the spectra of ordered orbits. It was found that the quantity $`Q`$ increases, with a second-order polynomial dependence, as we move away from the stable periodic orbit. Finally, the authors would like to make clear that (i) the proposed map has some qualitative similarities with the Poincaré map of the original Hamiltonian system (2), although the numerical differences may be important, and (ii) the results of this work correspond to the particular Hamiltonian system (2) and to the 1:1 resonance case. Interesting results on the spectra of orbits in galactic Hamiltonians made up of harmonic oscillators in the 4:3 resonance have been given by Contopoulos et al. (1995). Spectra of orbits in the standard map have also been studied by Voglis & Contopoulos (1994) and Contopoulos et al. (1997). This paper was focused on the comparison of the results (structure of the phase plane and spectra of orbits) given by numerical integration and the map.

###### Acknowledgements. The authors would like to thank the referee Prof. G. Contopoulos for valuable suggestions and comments that helped to significantly improve the paper.
# Galactic Winds and Circulation of the ISM in Dwarf Galaxies

## 1 Introduction

Many gas-rich dwarf galaxies are known to be in a starburst phase, or are believed to have experienced periods of intense star formation in the past (e.g. Gallagher & Hunter 1984; Thuan 1991; Tosi 1998, and references therein). These galaxies, classified as “blue compact dwarf (BCD) galaxies” or “HII galaxies”, are thus excellent laboratories to investigate the feedback of vigorous star formation on the interstellar medium (ISM). Massive stars inject enormous amounts of energy into the ISM through stellar winds and when they explode as type II supernovae (SNe); the impact of such an energy input on the galactic ISM may, in principle, be devastating. In fact, it is often found that the total energy released during a starburst is greater than the gas binding energy. Yet many dwarf galaxies in a post-starburst phase are still gas-rich. As Skillman & Bender (1995) pointed out, the observational evidence (e.g. Marlowe et al. 1995; Martin 1996) is still insufficient to substantiate a disruptive impact of galactic winds on the ISM. Clearly, simple energetic considerations do not capture the essential nature of the feedback process, and detailed, time-dependent hydrodynamical models are needed. Galactic winds are thought to play a key role in the formation and evolution of dwarf galaxies (Dekel & Silk 1986, Babul & Rees 1992, Matteucci & Chiosi 1983). In general, understanding the physics of the feedback of massive stars on the ISM is a key problem in cosmological theories of galaxy formation (Yepes et al. 1997, Cole et al. 1994). Gas outflows from dwarf galaxies have also been suggested to be an important factor in the production and enrichment of the intergalactic medium (Trentham 1994). However, persuasive arguments against this conclusion are given by Gibson & Matteucci (1997), and the origin of metals in clusters of galaxies is still a matter of debate (Brighenti & Mathews 1998). The fate of the (metal-rich) material ejected by massive stars is of crucial importance in understanding the chemical evolution of these galaxies (Tosi 1998), in particular the low $`\alpha `$-element abundances and the ‘strange’ values of (He/H) and (N/O) vs. (O/H). These problems have been addressed by invoking a ‘differential ejection’, in which the enriched gas lost by massive stars escapes from the galaxy as a galactic wind, while some (or most) of the original ISM is unaffected. Recent hydrodynamical simulations have verified that, under many circumstances, galactic winds are able to eject most of the metal-rich gas, preserving a significant fraction of the original ISM (MacLow & Ferrara 1998, hereafter MF; De Young & Heckman 1994; De Young & Gallagher 1991). Silich & Tenorio-Tagle (1998) and Tenorio-Tagle (1996), instead, found that even the metal-rich material is hardly lost from galaxies, since it is at first trapped in the extended halos and then accreted back onto the galaxy. To investigate this subject further, we present here new high-resolution calculations, addressing the ultimate fate of the ISM and SN ejecta, and their mixing, in a realistic starbursting dwarf galaxy. We investigate in detail the different phases of the gas flow, with particular emphasis on the late evolution, evolving the simulations for 500 Myr after the starburst event. We consider the effects of dark matter, gas rotation, thermal conduction and different starburst strengths.
We also discuss the X-ray emission and its diagnostic power for the abundance of the hot gas, a particularly exciting topic in view of the forthcoming launch of AXAF and XMM. We aim at investigating the evolution of galactic winds in a general way, without focusing on any specific object. Thus, we select the parameters of the galactic models (total mass, ISM mass and distribution, etc.) to be representative of the class of dwarf galaxies. Nevertheless, it can be useful to compare some of our results with a real, representative object. An ideal galaxy is NGC 1569, a nearby, well studied starburst galaxy. Several independent lines of evidence indicate that NGC 1569 is in a post-starburst phase (Israel 1988, Israel & de Bruyn 1988, Waller 1991, Heckman et al. 1995, Greggio et al. 1998), with the major starburst activity having ceased $`5-10`$ Myr ago. H$`\alpha `$ observations of NGC 1569 show (young) bubble complexes, filaments and arcs throughout the volume of the galaxy (Tomita, Ohta & Saito 1994), suggesting diffuse star formation. Heckman et al. (1995) found that the H$`\alpha `$ emission of NGC 1569 can be separated into a quiescent component, permeating the starbursting region of the galaxy, and a more violent component, far more extended and with velocities up to 200 km s<sup>-1</sup>. This high-velocity component is interpreted as ionized shells of superbubbles and provides direct evidence of a galactic-scale outflow. Heckman et al. (1995) and Della Ceca et al. (1996) detected X-ray emission, extending for 1-2 kpc along the optical minor axis of NGC 1569, thus probing the hot gas phase directly. This hot gas ($`T\sim 10^7`$ K) is the signature of the violent SN activity on the ISM. As in almost all studies to date, we make a number of simplifying assumptions in calculating our models. First, the ISM is assumed to be homogeneous and single-phase. Second, we neglect the self-gravity of the gas, even though the gas mass is of the same order as the stellar mass. Third, the starburst is instantaneous and concentrated in a small region at the center of the galaxy. While none of these hypotheses is likely to be strictly correct, they allow for a more direct comparison with previous works, and still make possible the calculation of models retaining the basic attributes of real galactic winds. We will relax some of these assumptions in a forthcoming paper.

## 2 Galaxy models

Many ingredients play an important role in determining the hydrodynamical evolution of the galactic wind: among others, the density distribution of the ISM in the pre-burst galaxy, the energy injection rate of the newly formed stars, the gravitational potential of the galaxy, and the effectiveness of transport processes in the gas, like thermal conduction. A thorough exploration of the parameter space would require an enormous amount of computational resources and is beyond the scope of this paper. Thus, we hold approximately constant the stellar and ISM masses of the model galaxies ($`M_{\ast }=1.7\times 10^8`$ $`\mathrm{M}_{\odot }`$ and $`M_{\mathrm{ISM}}\sim 1.3\times 10^8`$ $`\mathrm{M}_{\odot }`$), although $`M_{\mathrm{ISM}}`$ is a crucial factor for the late evolution of the system (De Young & Heckman 1994; MF). Instead, we vary some of the other parameters as described below.

### 2.1 The gravitational potential and the gas distribution

The gravitational potential of our standard model is due to two mass distributions: a spherical quasi-isothermal dark matter halo plus a thin stellar disk.
The halo density is given by $`\rho _\mathrm{h}(r)=\rho _{0h}/[1+(r/r_\mathrm{c})^2]`$, and we chose a central density $`\rho _{0h}=4.34\times 10^{-25}`$ g cm<sup>-3</sup> ($`6.4\times 10^{-3}`$ $`\mathrm{M}_{\odot }`$ pc<sup>-3</sup>). The halo core radius is assumed to be $`r_\mathrm{c}=1`$ kpc. The dark halo is truncated at $`r=20`$ kpc. The total dark matter mass is thus $`2\times 10^9`$ $`\mathrm{M}_{\odot }`$, while the halo mass inside the galactic region (defined hereafter as a cylinder with $`R<2.2`$ kpc and $`|z|<1.1`$ kpc, approximately the optical size of NGC 1569) is only $`0.66\times 10^8`$ $`\mathrm{M}_{\odot }`$. For simplicity, we assume that the stars are distributed in an infinitesimally thin Kuzmin disk with surface density

$$\mathrm{\Sigma }_{\ast }(R)=\frac{r_{\ast }M_{\ast }}{2\pi (R^2+r_{\ast }^2)^{3/2}},$$

where $`r_{\ast }=2`$ kpc is the radial scalelength and $`M_{\ast }=1.7\times 10^8`$ $`\mathrm{M}_{\odot }`$ is the total stellar mass, a typical value for dwarf galaxies. Although this mass distribution is clearly a rough approximation of real stellar disks, it does not degrade the accuracy of the large-scale hydrodynamical flow. The stellar potential generated by this mass distribution is

$$\mathrm{\Phi }_{\ast }(R,z)=-\frac{GM_{\ast }}{\sqrt{R^2+(r_{\ast }+|z|)^2}}$$

(Binney & Tremaine 1987). It turns out that the stellar mass inside the galactic region is $`M_{\ast ,\mathrm{gal}}\sim 3.13\times 10^7`$ $`\mathrm{M}_{\odot }`$, about half of the dark halo mass and about a factor of four less than the gas mass inside the same volume (see below). The dark halo totally dominates the mass budget at larger radii. The ISM is assumed to be single-phase and in equilibrium with the potential described above. In real dwarf galaxies the neutral ISM is supported against gravity partly by rotation and partly by the HI velocity dispersion (see Hoffman et al. 1996), with a maximum rotational velocity that typically exceeds the velocity dispersion by a factor of a few. Thus, in the standard model (hereafter model STD) we allow the ISM to rotate, to investigate the role played by angular momentum conservation in the late phase of the evolution, when (once the energy output has ceased) the gas tends to recollapse toward the central regions (see section 3.1). The temperature of the unperturbed ISM is set to $`T_0=4.5\times 10^3`$ K. To build a rotating ISM configuration in equilibrium with the given potential, we first arbitrarily assume a gas distribution in the equatorial plane ($`z=0`$) of the form $`\rho (R,0)=\rho _0/[1+(R/R_c)^2]^{3/2}`$, where the central value is $`\rho _0=3.9\times 10^{-24}`$ g cm<sup>-3</sup> and the gas core radius is $`R_c=0.8`$ kpc. The rotational velocity in the equatorial plane is then determined from the condition of equilibrium:

$$v_\varphi ^2=v_c^2-\frac{R}{\rho }\left|\frac{dp}{dR}\right|_{z=0},$$

where $`v_c=\sqrt{Rd\mathrm{\Phi }/dR}`$ is the circular velocity and $`p`$ the thermal gas pressure. The rotational velocity is assumed to be independent of $`z`$. The density at any $`z`$ is then found by integrating the $`z`$-component of the hydrostatic equilibrium equation for each $`R`$. The edge-on and face-on profiles of the resulting gas column density are shown in Fig. 1. We note that this model, having an extended gaseous halo, resembles the models worked out by Silich & Tenorio-Tagle (1998). The circular velocity of this mass model increases with $`R`$, reaching the maximum value of $`\sim 20`$ km s<sup>-1</sup> at $`R\sim 4`$ kpc and staying almost constant at larger $`R`$.
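As an illustration of how the rotation curve follows from these prescriptions, the short Python sketch below evaluates $`v_c`$ (disk + halo) and $`v_\varphi `$ in the equatorial plane; for the isothermal gas the pressure term reduces to $`3c_s^2R^2/(R^2+R_c^2)`$. This is only a sketch under our own auxiliary assumptions (cgs constants, a mean molecular weight $`\mu =1`$ for the cool gas); it is not the code used for the models.

```python
import numpy as np

G, kpc, km = 6.674e-8, 3.086e21, 1e5           # cgs units
Msun, mp, kB = 1.989e33, 1.673e-24, 1.381e-16

M_star, r_star = 1.7e8 * Msun, 2.0 * kpc       # Kuzmin disk parameters
rho0h, r_c = 4.34e-25, 1.0 * kpc               # quasi-isothermal halo
R_gc, T0, mu = 0.8 * kpc, 4.5e3, 1.0           # gas core radius; mu assumed

def v_circ(R):
    """Circular velocity at z = 0 from disk + halo."""
    vd2 = G * M_star * R**2 / (R**2 + r_star**2)**1.5
    M_h = 4 * np.pi * rho0h * r_c**2 * (R - r_c * np.arctan(R / r_c))
    return np.sqrt(vd2 + G * M_h / R)

def v_phi(R):
    """Rotation speed from v_phi^2 = v_c^2 - (R/rho)|dp/dR| for isothermal
    gas with rho(R,0) = rho0 / [1 + (R/R_gc)^2]^(3/2)."""
    cs2 = kB * T0 / (mu * mp)                  # isothermal sound speed^2
    press = 3 * cs2 * R**2 / (R**2 + R_gc**2)  # = (R/rho)|dp/dR|
    return np.sqrt(np.maximum(v_circ(R)**2 - press, 0.0))

R = np.linspace(0.1, 10.0, 100) * kpc
print(v_circ(R).max() / km, v_phi(R).max() / km)   # ~20 and ~15-16 km/s
```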
The rotational velocity $`v_\varphi `$ shows a similar radial behaviour, but with a maximum value $`v_\varphi \sim 15`$ km s<sup>-1</sup>. The total gas mass inside the galactic region is $`M_{\mathrm{ISM},\mathrm{gal}}\sim 1.32\times 10^8`$ $`\mathrm{M}_{\odot }`$, a typical amount for dwarf galaxies (Hoffman et al. 1996), and in close agreement with the mass inferred for NGC 1569 in particular (Israel 1988). The total gas mass present in the numerical grid (extending to 25 kpc in both the $`R`$ and $`z`$ directions) is $`M_{\mathrm{ISM},\mathrm{tot}}\sim 6\times 10^8`$ $`\mathrm{M}_{\odot }`$. The gas distribution qualitatively resembles that used by Tomisaka & Ikeuchi (1988). It has a low-density region around the $`z`$-axis (see Fig. 2a), which acts as a collimating funnel for the hot outflowing gas (Tomisaka & Bregman 1993, Suchkov et al. 1994). This is due to the assumption that $`v_\varphi `$ does not depend on $`z`$. The funnel, however, influences the gas dynamics only at very large distances above the galactic plane (i.e. for $`z\gtrsim 10`$ kpc) and does not invalidate the results presented in sections 3 and 4. In order to address the influence of an intracluster medium (ICM) confining the galactic ISM, we calculate model PEXT (section 4.3). In this simulation we replace all the cold ISM (distributed as described above) having a thermal pressure $`P\lesssim 10^{-13}`$ dyn cm<sup>-2</sup> with a hot, rarefied ICM with $`\rho _{\mathrm{ICM}}=8\times 10^{-30}`$ g cm<sup>-3</sup> and $`T_{\mathrm{ICM}}=10^8`$ K. In this case the cold ISM is confined to a roughly ellipsoidal region with major and minor semiaxes of 2 kpc and 1 kpc, respectively. The galactic ISM mass is now only $`M_{\mathrm{ISM},\mathrm{gal}}=1.05\times 10^8`$ $`\mathrm{M}_{\odot }`$, while the total mass of gas in the grid is $`M_{\mathrm{ISM},\mathrm{tot}}\sim 1.16\times 10^8`$ $`\mathrm{M}_{\odot }`$. In addition to the models described above, we use a different galaxy model (model B) to investigate the effect of the absence of dark matter and rotation. The isothermal ISM is now in hydrostatic equilibrium in the potential well generated by the same stellar distribution as in model STD. The central gas density is $`\rho _0=1.1\times 10^{-23}`$ g cm<sup>-3</sup>, and the gas mass inside the galactic region is $`1.4\times 10^8`$ $`\mathrm{M}_{\odot }`$, approximately as in model STD. Due to the lack of rotational support, the gas distribution is now more concentrated than in model STD (see the ISM column density in Fig. 1), and the total gas mass inside the grid is $`M_{\mathrm{ISM},\mathrm{tot}}=2.3\times 10^8`$ $`\mathrm{M}_{\odot }`$. We also run a model identical to model B, but including heat conduction (model BCOND).

### 2.2 The starburst

We assume an instantaneous burst of star formation which injects energy into the ISM for a period of $`30`$ Myr, approximately the lifetime of an $`8`$ $`\mathrm{M}_{\odot }`$ star, the smallest star producing a type II SN. We consider two starburst strengths: the first is representative of a moderate starburst, while the second is intended to match more active galaxies. We assume that the starburst produces a steady (mechanical) energy input rate $`L_{\mathrm{inp}}=3.76\times 10^{39}`$ erg s<sup>-1</sup> (hereafter SB1 model) and $`L_{\mathrm{inp}}=3.76\times 10^{40}`$ erg s<sup>-1</sup> (SB2 model), respectively. Model SB2 produces a mechanical power similar to the lower limit estimated for NGC 1569 (Heckman et al. 1995).
The mass injection rates are assumed to be $`\dot{M}=3\times 10^{-3}`$ $`\mathrm{M}_{\odot }`$ yr<sup>-1</sup> and $`\dot{M}=3\times 10^{-2}`$ $`\mathrm{M}_{\odot }`$ yr<sup>-1</sup>, respectively. It is useful to compare the parameters we use with the detailed starburst models of Leitherer & Heckman (1995) (LH). For example, for an instantaneous burst with a Salpeter IMF (from 1 to 100 $`\mathrm{M}_{\odot }`$) and a metallicity of $`1/4`$ solar, they found a mechanical energy deposition rate approximately constant between 6 and 30 Myr after the starburst (see their fig. 55). This justifies our assumption of a steady energy source. Our assumed mechanical luminosities for SB1 and SB2 correspond respectively to $`2.1\times 10^5`$ and $`2.1\times 10^6`$ $`\mathrm{M}_{\odot }`$ turned into stars during the starburst event, according to LH. If all stars with initial mass greater than 8 $`\mathrm{M}_{\odot }`$ end their lives as type II supernovae, the total number of SNII events produced by the starbursts is $`\sim 4000`$ and $`\sim 40000`$ for SB1 and SB2. The total energy deposited after 30 Myr is $`3.56\times 10^{54}`$ erg and $`3.56\times 10^{55}`$ erg for SB1 and SB2. These values must be compared with the binding energy of the gas present in the numerical grid in the standard model, $`E_{\mathrm{bind}}\sim 5.3\times 10^{54}`$ erg, and with the binding energy of the gas inside the galactic region, $`\sim 1.8\times 10^{54}`$ erg. After 30 Myr, the total mass returned to the ISM by stellar winds and type II SNe in the model by LH, again with $`Z=1/4`$ $`Z_{\odot }`$ and a Salpeter IMF, is $`4.81\times 10^4`$ $`\mathrm{M}_{\odot }`$ and $`4.81\times 10^5`$ $`\mathrm{M}_{\odot }`$ for SB1 and SB2. With our assumed $`\dot{M}`$, however, we inject $`9\times 10^4`$ $`\mathrm{M}_{\odot }`$ and $`9\times 10^5`$ $`\mathrm{M}_{\odot }`$ for models SB1 and SB2, respectively. Thus, we overestimate the mass return rate by a factor $`\sim 2`$ with respect to the LH model <sup>1</sup><sup>1</sup>1Alternatively, we can think of a starburst with twice the amount of mass turned into stars, and of an efficiency of the energy deposition rate of $`0.5`$.. However, the hydrodynamical evolution of our models is not sensitive to such a discrepancy, nor is our estimate of the efficiency of ISM and metal ejection (although the degree of pollution of the ISM may be affected). While our assumed starburst model is fairly consistent with the detailed theoretical models by LH, it is important to note that real galaxies generally have a much more complex star formation history. The assumption of an instantaneous, point-like burst appears particularly severe. For example, Greggio et al. (1998) found that the bulk of the starburst in NGC 1569 proceeded at an approximately constant star formation rate of $`0.5`$ $`\mathrm{M}_{\odot }`$ yr<sup>-1</sup> for $`0.1-0.15`$ Gyr (assuming a Salpeter IMF from 0.1 to 120 $`\mathrm{M}_{\odot }`$), until $`5-10`$ Myr ago, when the star formation in the field ended. This implies that $`5-7.5\times 10^7`$ $`\mathrm{M}_{\odot }`$ of gas has been converted into stars. Moreover, H$`\alpha `$ observations of NGC 1569 show (young) bubble complexes, filaments and arcs distributed throughout the volume of the galaxy (Tomita et al. 1994), suggesting diffuse, galaxy-wide star formation. Galactic wind models powered by a point-like energy source are nevertheless useful as a first step toward the full complexity of the problem, and for a direct comparison with previous studies. Simulations with spatially and temporally extended star formation will be the subject of a forthcoming paper.
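The quoted energy and mass budgets can be checked with a few lines of arithmetic; the sketch below does so, assuming (our own choice, consistent with the numbers above) that each SNII delivers $`10^{51}`$ erg of mechanical energy.

```python
Myr = 3.156e13                        # seconds per Myr
for L_inp, Mdot, label in [(3.76e39, 3e-3, "SB1"), (3.76e40, 3e-2, "SB2")]:
    E_tot = L_inp * 30 * Myr          # 3.56e54 (SB1) and 3.56e55 (SB2) erg
    N_SN = E_tot / 1e51               # ~4000 and ~40000 SNII
    M_inj = Mdot * 30e6               # 9e4 and 9e5 Msun injected
    print(label, E_tot, round(N_SN), M_inj)
```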
### 2.3 The numerical simulations

To work out the models presented in this paper we used two different 2-D hydrocodes. The first has been developed by the Numerical Group at Bologna Astronomical Observatory, and the (1-D) core of the scheme is described in Bedogni & D’Ercole (1986). This code and its successive extensions have been applied to a variety of astrophysical problems (e.g. Brighenti & D’Ercole 1997, D’Ercole & Ciotti 1998). The second code employed is ZEUS-2D, a widely used, well tested scheme developed by M. Norman and collaborators at LCSA (Stone & Norman 1992). We always found consistent results between the codes, as expected from the numerous hydrodynamical tests performed with the Bologna code (Brighenti 1992). We solve the usual hydrodynamical equations, with the addition of a mass source term and a thermal energy source term; the hot gas injected expands to form the starburst wind with the appropriate mechanical luminosity $`L_{\mathrm{inp}}`$. These equations are described in detail in, e.g., Brighenti & D’Ercole (1994). The (constant) mass and energy source terms are given respectively by $`\alpha =\dot{M}/𝒱`$ and $`\alpha ϵ`$. Here $`𝒱`$ is the volume of the source region, chosen to be a sphere of radius 50 pc centered at $`(R,z)=(0,0)`$, and $`ϵ=L_{\mathrm{inp}}/\dot{M}`$. To keep track of the gas lost by the stars formed in the starburst (the ejecta), we passively advect it by solving an ancillary continuity equation for the ejecta density $`\rho _{\mathrm{ej}}`$. Both codes spread shocks over 3-4 zones and contact discontinuities over 4-10 zones. In our models the angular momentum is treated in a fully consistent way (see Stone & Norman 1992 for details on the solution of the angular momentum equation). Thus, contrary to some previous studies (Tomisaka & Ikeuchi 1988, Tomisaka & Bregman 1993), we do not use a reduced gravitational force to mimic the rotational support of the ISM. To take into account thermal conduction (model BCOND) we adopt the operator splitting method. We isolate the heat diffusion term in the energy equation and solve the heat transport equation, alternately along the $`z`$ and $`R`$ directions separately, with the Crank-Nicholson method, which is unconditionally stable and second-order accurate. The system of implicit finite difference equations is solved according to the two-stage recursion procedure (e.g. Richtmyer & Morton 1967). Following Cowie & McKee (1977), we adopt saturated fluxes to avoid unphysical heat transport in the presence of steep temperature gradients. We ran the models on a cylindrical grid (coordinates $`R`$ and $`z`$), assuming axial symmetry. We use reflecting boundary conditions along the axes and outflow boundary conditions at the grid edges. To better resolve the central region, the grid is unevenly spaced, with the zone width increasing from the center to large radii. Specifically, in the standard model (STD+SB2), the grid extends in both the $`R`$ and $`z`$ directions from 0 to 25 kpc. The first zone is $`\mathrm{\Delta }R=\mathrm{\Delta }z=10`$ pc wide, and the size ratio between adjacent zones is 1.00747. For the other models we use different grid spacings. For model SB1 (§4.1) the inner zone size is 3 pc and the size ratio is 1.00717. For models B and BCOND the central zone is only 2 pc wide and the size ratio between adjacent zones is 1.01. The parameters used in the models and other characteristic quantities are summarized in Table 1.
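The unevenly spaced mesh described above is a simple geometric progression; a sketch of its construction (our own reimplementation, not the original code) shows that the model STD grid contains roughly 400 zones per direction, with the outermost zone about 190 pc wide.

```python
import numpy as np

def stretched_grid(dx0, ratio, length):
    """Edges of a geometrically stretched 1-D grid: the first zone is dx0
    wide and each successive zone is `ratio` times wider."""
    edges = [0.0]
    dx = dx0
    while edges[-1] < length:
        edges.append(edges[-1] + dx)
        dx *= ratio
    return np.array(edges)

edges = stretched_grid(10.0, 1.00747, 25_000.0)  # model STD, lengths in pc
print(len(edges) - 1, edges[-1] - edges[-2])     # ~400 zones, last ~190 pc
```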
## 3 The standard model (STD+SB2)

### 3.1 The dynamics of the ISM

As the starburst wind starts blowing, the classical two-shock configuration is achieved, in perfect analogy with stellar wind bubble theory (Dyson & de Vries 1972, Weaver et al. 1977). The freely expanding wind encounters the reverse shock and is heated to $`T\sim 5\times 10^7`$ K, while the external shock sweeps up the ISM. The shocked starburst wind and the shocked ISM are separated by a contact discontinuity. The reverse shock is always approximately spherical, since the short sound crossing time in the shocked wind region keeps the pressure almost uniform. The shape of the forward shock, instead, depends on the ISM density distribution. The roughly oblate ISM configuration of our model galaxy forces the superbubble to expand faster along the polar direction, acquiring the classical bipolar shape at late times (Fig. 2b,c,d). At earlier times, however, the density distribution favours a diagonal expansion, generating a curious boxy morphology (Fig. 2a). When the energy input from the starburst ends (at $`t=30`$ Myr, Fig. 2a), almost the whole galactic region ($`R<2.2`$ kpc, $`|z|<1.1`$ kpc) is filled by the freely expanding wind, a situation clearly unrealistic, due to our simple ISM model. In real galaxies we expect this region to host a complicated multiphase medium. Israel & van Driel (1990) found a relatively small hole (with a diameter of $`\sim 200`$ pc) in the $`HI`$ distribution, associated with the ‘super star cluster’ N 1569A and likely caused by the action of SNII and the stellar winds of the star cluster. The shocked ISM shell is accelerating through the steep density gradient of the unperturbed ISM, and this acceleration promotes Rayleigh-Taylor (R-T) instabilities. The shell tends to fragment, and relatively dense ($`n\lesssim 0.5`$ cm<sup>-3</sup>), cold ($`T\sim 10^4`$ K) filaments are clearly seen in Fig. 2a at $`(z,R)\sim (1,3)`$ kpc. The numerical resolution of this simulation is not adequate to follow the actual formation and evolution of these features, whose true density is expected to be higher than that found in our computations. As pointed out by MF, these filaments of dense gas are immersed in the hot gas and do not necessarily trace the edges of the superbubble. At t=30 Myr the whole shocked ISM shell is radiative. Following the trend of the external ISM, a density gradient is present along the shell, from the equator to the pole, with the lateral side being denser ($`n\sim 1.5`$ cm<sup>-3</sup>) and the polar region more rarefied ($`n\sim 0.002`$ cm<sup>-3</sup>). At this time, the optical appearance of the system, if kept ionised (for example by low-level star formation activity, hot evolved stars, etc.), would be that of an incomplete shell, the polar portion being too rarefied to be observable, given its low emission measure ($`EM\sim 0.01`$ cm<sup>-6</sup> pc for the shell and $`EM<30`$ cm<sup>-6</sup> pc for the filaments). The radiative external shock is too slow ($`v_{\mathrm{shock}}\lesssim 180`$ km s<sup>-1</sup>) to emit X-rays. This is contrary to the results of Suchkov et al. (1994), who claim that most of the X-ray radiation comes from the shocked ISM. The different behaviour of our models is due to the lower (by an order of magnitude) $`L_{\mathrm{inp}}`$ considered, probably more appropriate for a dwarf galaxy. As explained in section 3.4, we also find that the hot ISM is the most important contributor to the X-ray luminosity.
However, it is not heated by shocks, but by mixing with the shocked wind at the contact discontinuities. At later times the outer shock accelerates through the steep density gradient and heats the ISM to X-ray temperatures. On the other hand, this happens only when the X-ray luminosity has dropped to very low and uninteresting values (cf. Fig. 6). Fig. 2b shows the density distribution at $`t=60`$ Myr. The steep density gradient along the $`z`$-direction induces a radiative-adiabatic transition of the polar portion of the external shock. The cold filaments are slowly moving forward, and their density decreases to maintain pressure equilibrium with the expanding hot gas; now the densest filaments have $`n\sim 0.04`$ cm<sup>-3</sup>. The shell grows thicker with time. In fact, while the outer edge is still expanding with $`v\sim 25`$ km s<sup>-1</sup>, the inner edge of the shell near the equatorial plane is already receding toward the center with a velocity $`v\sim 10`$ km s<sup>-1</sup>. This backward motion, due to the drop of the pressure inside the expanding hot bubble, will eventually cause the collapse of the cold gas back into the galactic region, as evident in Fig. 2d (see also MF). The details of the collapse are shown in Fig. 3 and described below. Figs. 2c and 2d show the density at 100 and 200 Myr, respectively. The external shock assumes a pronounced cylindrical shape because of the collimating effect of the low-density region around the $`z`$-axis (section 2.1). The polar shock crossed the numerical grid edge (at $`z=25`$ kpc); however, since the motion is supersonic, the numerical noise generated at the grid boundary does not propagate back. Moreover, given the very low densities in that region, the amount of gas lost from the grid is completely negligible. The temperature of the hot, X-ray emitting gas decreases with time: $`T\sim 5\times 10^7`$ K during the active energy injection phase ($`t\lesssim 30`$ Myr). At later times, radiative losses and especially expansion lower the temperature ($`T\lesssim 10^7`$ K at 60 Myr; $`T\lesssim 2\times 10^6`$ K at 100 Myr; $`T\lesssim 10^6`$ K at 200 Myr). ASCA observations of NGC 1569 (Della Ceca et al. 1996) indicate that the diffuse X-ray emission comes from gas at a (luminosity-weighted) $`T\sim 8\times 10^6`$ K. We note, however, that the temperature inferred from simple fits to observed X-ray spectra may be a poor estimate of the actual temperature (Strickland & Stevens 1998). We warn that all the model temperature values quoted are mass-weighted and may not represent the “observable” ones. No longer supported by the hot gas pressure, the cold ISM recollapses at $`t\sim 150`$ Myr, filling the galactic region again. We show a zoomed view of the central part of the grid in Fig. 3. In panel a the density contours and the velocity field are shown at $`t=140`$ Myr, just before the cold gas reaches the center. In panel b the same quantities are shown at $`t=200`$ Myr. In Fig. 3a the cold tongue is approaching the center at $`v\sim 40`$ km s<sup>-1</sup>, with a Mach number $`\sim 10`$. The density in the collapsing gas increases with $`R`$, from $`n\sim 2.3\times 10^{-3}`$ cm<sup>-3</sup> at $`(R,z)=(0.5,0)`$ kpc, to $`n\sim 7.5\times 10^{-3}`$ cm<sup>-3</sup> at $`(R,z)=(1,0)`$ kpc, to $`n\sim 2.3\times 10^{-2}`$ cm<sup>-3</sup> at $`(R,z)=(2,0)`$ kpc. At $`t=150`$ Myr the cold gas reaches the center and shocks. Hereafter the accretion proceeds through a cylindrical shock wave.
At $`t=200`$ Myr the cold gas is still flowing toward the center with $`v\sim 15`$ km s<sup>-1</sup>, building a (transient) conical structure around the $`z`$-axis (Fig. 3b). The mean density in the equatorial plane of the galactic region is $`n\sim 0.025`$ cm<sup>-3</sup>, and it is still growing with time. Panels c and d show the subsequent evolution ($`t=250`$ Myr and $`t=300`$ Myr, respectively). The collapse is the result of the pressure drop in the hot bubble, due to its expansion along the polar direction. Between $`t=60`$ Myr and $`t=100`$ Myr the pressure of the hot gas decreases by more than one order of magnitude (from $`6\times 10^{-13}`$ to $`10^{-15}`$ dyn cm<sup>-2</sup>). The cold gas, no longer supported by the hot phase, is driven back to the center mainly by its own pressure, rather than by the galactic gravity. We have verified that the collapse is not a spurious numerical effect due to strong radiative losses at the numerically broadened contact discontinuities. To this purpose we ran a simulation identical to the standard model, but with the radiative cooling turned off. For this adiabatic model we found that the collapse time is again $`\sim 150`$ Myr. The ISM recollapse is thus an unavoidable phenomenon for all the models considered in this paper. It is interesting to follow the circulation of the gas. The mass of the ISM in the galactic region at $`t=200`$ Myr is $`M_{\mathrm{ISM},\mathrm{gal}}\sim 0.034\times 10^8`$ $`\mathrm{M}_{\odot }`$, about a factor of $`40`$ lower than the initial gas mass in the same volume (note that this does not mean that the ejection efficiency $`f_{\mathrm{ISM}}`$ is 1/40, since what matters in estimating $`f_{\mathrm{ISM}}`$ is the amount of ISM effectively bound to the galaxy, as described in the next section). However, at $`t=200`$ Myr the cold gas is still flowing toward the center (Fig. 3b); for instance, at $`t=300`$ Myr we find $`M_{\mathrm{ISM},\mathrm{gal}}=0.096\times 10^8`$ $`\mathrm{M}_{\odot }`$. At $`t=500`$ Myr, the final time of our simulation, $`M_{\mathrm{ISM},\mathrm{gal}}=0.27\times 10^8`$ $`\mathrm{M}_{\odot }`$, a factor of $`\sim 5`$ lower than the initial gas mass in the same volume. We can speculate further on the fate of the cold gas falling back to the center at late times. The face-on surface density of the central ISM is an increasing function of time (Fig. 4), since material continues to accrete until the end of our simulation ($`t=500`$ Myr; we could not follow the evolution further because of limited computational resources). It has been suggested that above a critical ISM surface density $`\mathrm{\Sigma }_{\mathrm{crit}}\sim 5-10\times 10^{20}`$ cm<sup>-2</sup>, star formation in dwarf galaxies is very efficient (Gallagher & Hunter 1984, Skillman 1987, 1996). At $`t=200`$ Myr the face-on surface density peak in our model is $`\mathrm{\Sigma }\sim 10^{20}`$ cm<sup>-2</sup>, and it grows slowly to $`\sim 6\times 10^{20}`$ cm<sup>-2</sup> at $`t=500`$ Myr. Thus, we can hypothesize that the threshold surface density is reached on a timescale of the order of 1 Gyr, after which a new burst of star formation may start. This scenario is roughly consistent with many studies of the star formation history in BCD galaxies, which indicate that stars form mainly through several discrete, short bursts, separated by long ($`\sim `$ few Gyr) quiescent periods (see the review by Tosi 1998 and references therein).
### 3.2 The ISM ejection efficiency

A key point in galactic wind theory is the ability of the starburst to eject the ISM (see Skillman & Bender (1995) and Skillman (1997) for a critical review of this subject). We estimate the ISM ejection efficiency by calculating, at some late time (for example t=200 Myr), the mass of ISM $`M_{\mathrm{lost}}`$ which has a velocity or sound speed greater than the local escape velocity. We assume that this gas (and the gas that has already left the grid) will be lost by the galactic system (see also MF). It is important to note that $`M_{\mathrm{lost}}`$ calculated in this way should be considered only a rough estimate of the amount of gas leaving the galaxy, since dissipative effects may lower the ejection efficiency, and the escape velocity depends critically on the poorly known size of real dark matter halos. The ISM ejection efficiency, $`f_{\mathrm{ISM}}`$, is then defined as $`M_{\mathrm{lost}}/M_{\mathrm{initial}}`$, where $`M_{\mathrm{initial}}`$ is the total gas mass present on the whole grid at $`t=0`$ (we neglect the contribution of the ejecta, whose total mass is only $`\lesssim 0.2`$ % of the initial mass). We note that this operative definition of $`f_{\mathrm{ISM}}`$ is grid-dependent, since $`M_{\mathrm{initial}}`$ increases with the volume covered by the numerical grid. At t=200 Myr we find $`f_{\mathrm{ISM}}=0.058`$: evidently even a starburst as powerful as the one considered for this model is not effective in removing the interstellar gas. However, as pointed out in the previous section, the gas mass inside the galactic region can be significantly lower than the initial value, even long after the starburst event: thus, the efficiency in removing the ISM from the central regions may be considerably greater than $`f_{\mathrm{ISM}}`$. On the other hand, for other models (see section 4.3), the galaxy is able to recover most of the original ISM mass after $`\sim 100`$ Myr.
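Operationally, the criterion above amounts to summing the mass of all zones whose flow speed or sound speed exceeds the local escape speed; a minimal sketch is given below (the array names are ours, standing for zone-by-zone grid quantities).

```python
import numpy as np

def ejection_efficiency(rho, v, cs, v_esc, dV, M_initial):
    """f_ISM = M_lost / M_initial: the mass in zones whose flow speed or
    sound speed exceeds the local escape velocity, over the initial gas
    mass.  All arguments are zone-by-zone arrays except M_initial."""
    unbound = (v > v_esc) | (cs > v_esc)
    M_lost = np.sum(rho[unbound] * dV[unbound])
    return M_lost / M_initial
```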
### 3.3 The enrichment

In a similar way we have estimated the ejection efficiency of the metal-rich stellar ejecta, $`f_{\mathrm{ej}}`$. At $`t=200`$ Myr we find $`f_{\mathrm{ej}}=0.46`$: the galaxy is less able to retain the enriched stellar ejecta than its own original ISM. This finding supports the selective-wind hypothesis, and is in qualitative agreement with other numerical simulations (De Young & Gallagher 1990; De Young & Heckman 1994; MF). It is interesting to investigate the spatial distribution of the ejecta material. We find that at $`t=200`$ Myr $`\sim 7.3\times 10^5`$ $`\mathrm{M}_{\odot }`$ of stellar ejecta are present on the numerical grid, about 80% of the total material released by the starburst. However, the ejecta mass in the galactic region is only $`M_{\mathrm{ej},\mathrm{gal}}\sim 5.15\times 10^3`$ $`\mathrm{M}_{\odot }`$, less than 0.6% of the total amount ejected ($`9\times 10^5`$ $`\mathrm{M}_{\odot }`$)! Since gas continues to flow toward the central region, the mass of the ejecta in the galactic region increases slightly with time. At $`t=300`$ Myr, for instance, we find $`M_{\mathrm{ej},\mathrm{gal}}\sim 1.29\times 10^4`$ $`\mathrm{M}_{\odot }`$, and $`M_{\mathrm{ej},\mathrm{gal}}\sim 3.6\times 10^4`$ $`\mathrm{M}_{\odot }`$ at $`t=500`$ Myr. We conclude that, while a significant fraction of the ejecta is retained by the relatively deep potential of the dark halo, most of it resides in the outer regions of the system, in a phase so rarefied as to be virtually unobservable. The cold gas collapsing at late times, and filling the galactic region, has been only slightly polluted by the stellar ejecta. To characterize the degree of pollution we introduce the local ejecta fraction $`𝒵=\rho _{\mathrm{ej}}/\rho `$, where $`\rho _{\mathrm{ej}}`$ is the density of the ejecta. The average ejecta fraction in the galactic region at $`t=200`$ Myr, defined as $`<𝒵_{\mathrm{gal}}>=M_{\mathrm{ej},\mathrm{gal}}/M_{\mathrm{ISM},\mathrm{gal}}`$, is $`𝒵\sim 1.4\times 10^{-3}`$. The cold galactic ISM, probably the only component detectable at late times because of its relatively high density, thus shows only a small degree of enrichment. We can estimate the increase in metal abundance generated by the starburst from the total number of SNII, which we assume to be the only source of metals. The iron production and circulation is particularly worthwhile to follow, because the metallicities estimated through X-ray spectra of the hot gas phase ($`T\sim 10^7`$ K) are especially sensitive to iron through the Fe-L complex at $`\sim 1`$ keV. For the sake of simplicity we shall neglect the iron produced by SNIa, whose iron release timescale is believed to be of the order of one Gyr (Matteucci & Greggio 1986), a time much longer than those considered in this paper. In section 2.2 we estimated a total number of SNII of $`\sim 4000`$ and $`\sim 40000`$ for SB1 and SB2, respectively (adopting the same IMF as in LH). The yields of metals from SNII are rather uncertain, especially for iron and oxygen, because of the complications in the late evolution of massive stars and in the nuclear reaction rates. A compilation of IMF-averaged yields, i.e., the mean ejected mass of a given element per SN, can be found in Loewenstein & Mushotzky (1996). Given the approximate nature of the calculations presented in this paper, we simply adopt $`<y_{\mathrm{Fe}}>=0.1`$ $`\mathrm{M}_{\odot }`$ and $`<y_\mathrm{O}>=1`$ $`\mathrm{M}_{\odot }`$ as reasonable values for the averaged iron and oxygen yields. We assume that the metals are well mixed within the ejecta, whose abundances (by mass and relative to H) are $`Z_{\mathrm{Fe},\mathrm{ej}}\sim 3.4`$ $`Z_{\mathrm{Fe},\odot }`$ and $`Z_{\mathrm{O},\mathrm{ej}}\sim 4.6`$ $`Z_{\mathrm{O},\odot }`$, where we adopt the meteoritic solar abundances of Anders & Grevesse (1989). In Fig. 5 we show the iron abundance distribution of the gas at $`t=200`$ Myr, assuming that the original ISM has $`Z_{\mathrm{Fe}}=0`$ (i.e. we calculate the increment in metallicity caused by the starburst ejecta). The iron abundance is highly inhomogeneous, both in the hot and in the cold phase. It ranges from very low values, $`Z_{\mathrm{Fe}}\lesssim 0.01`$ $`Z_{\mathrm{Fe},\odot }`$, to the pure ejecta value $`Z_{\mathrm{Fe}}=3.4`$ $`Z_{\mathrm{Fe},\odot }`$. The hot phase metallicity is supersolar, with typical values $`Z_{\mathrm{Fe}}=1.5-2.5Z_{\mathrm{Fe},\odot }`$. It is puzzling that recent ASCA observations of the outflows in starburst galaxies indicate that the metal abundance of the hot gas is rather low. We discuss this point in the next section. While numerical diffusion may affect somewhat the absolute values of $`Z_{\mathrm{Fe}}`$, we believe that the spatial variations of the metallicity are real. The oxygen abundance pattern is identical to that of iron, due to our assumption of perfectly mixed ejecta, but with different minimum and maximum values ($`0-4.6`$ $`Z_{\mathrm{O},\odot }`$). The cold gas replenishing the galactic region has average metallicity $`<Z_{\mathrm{Fe}}>\lesssim 0.005`$ $`Z_{\mathrm{Fe},\odot }`$ ($`<Z_\mathrm{O}>\sim 0.01`$ $`Z_{\mathrm{O},\odot }`$), so that a subsequent instantaneous starburst event would form stars only slightly more metallic than the previous stellar generation.
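The ejecta abundance quoted above follows directly from the adopted yields; the following one-line check (the solar iron mass fraction of $`\sim 1.3\times 10^{-3}`$ is our assumed Anders & Grevesse value) recovers the $`3.4`$ $`Z_{\mathrm{Fe},\odot }`$ figure.

```python
N_SN, y_Fe, M_ejecta = 40000, 0.1, 9e5   # SB2: SNII count, Fe yield, Msun
Z_Fe_ej = N_SN * y_Fe / M_ejecta         # ~4.4e-3 iron mass fraction
print(Z_Fe_ej / 1.3e-3)                  # ~3.4 solar, as quoted above
```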
### 3.4 The X-ray emission and hot gas metallicity

A detailed investigation of the X-ray emission of the hot gas is beyond the scope of this paper. The intrinsic diffusion of the numerical scheme spreads contact discontinuities, separating the hot and cold phases, over several grid points. This prevents us from consistently calculating the X-ray luminosity ($`L_X`$) and the emission-averaged abundances of the hot phase. In fact, the gas inside the broadened contact discontinuities (a mixture of the ejecta and the pristine ISM), being relatively dense and with temperatures of the order of $`10^6`$ K, turns out to dominate $`L_X`$. It is important to note that several physical processes not considered in this simulation smear out hydrodynamical discontinuities, mixing cold and hot ISM and producing a gas phase with intermediate temperature and density. Most important are thermal conduction and turbulent mixing (Begelman & Fabian 1990). Thus, the undesired numerical diffusion qualitatively mimics real physical effects. We consider heat conduction explicitly in section 4.3, but we are not in the position to make quantitative estimates of the influence of turbulent mixing layers on $`L_X`$. With this limitation in mind, we can nevertheless gain some insight into the properties of the X-ray emission of starbursting galaxies. The X-ray luminosity of several models (see below), calculated in the straightforward way as $`L_X=\int ϵ_R(T)\,dV`$, is a decreasing function of time (here $`ϵ_R(T)`$ is the Raymond-Smith emissivity in the ROSAT band). All models shown in Fig. 6 share the same trend: at first $`L_X`$ drops gently until $`t=30`$ Myr and then, when the energy input stops, $`L_X`$ decreases rapidly to unobservable values. One of the most interesting observables is the emission-averaged metallicity. In order to compare our models with observations, we define the emission-averaged ejecta fraction $`<𝒵>_X=(1/L_X)\int 𝒵ϵ_R\,dV`$ (a measure of the gas abundance <sup>2</sup><sup>2</sup>2The emission-averaged iron abundance is $`Z_{\mathrm{Fe},\mathrm{ej}}<𝒵>_X`$, where $`Z_{\mathrm{Fe},\mathrm{ej}}=3.4`$ $`Z_{\mathrm{Fe},\odot }`$ with the assumptions adopted in section 3.3.). It is almost constant with time up to 30 Myr, with typical values of 0.04-0.06, and then increases steadily up to $`\sim 0.5`$ at $`t=100`$ Myr, when, however, the X-ray luminosity is too weak for detection to be feasible. This very low ‘metallicity’ clearly indicates that most of the X-ray emission comes from original ISM mixed with stellar ejecta. To isolate the contribution of the ejecta material to $`L_X`$ and $`<𝒵>_X`$, we recalculated these two quantities using the ejecta density $`\rho _{\mathrm{ej}}`$ (instead of the total gas density) in the calculation of the X-ray emissivity. Now $`L_X`$ is a factor $`\sim 100`$ lower than before, and $`<𝒵>_X`$ is much larger, approaching unity. Note that this does not mean that the ISM heated by the outer shock is the main contributor to $`L_X`$ (see below). It demonstrates instead that diffusion processes mix the ISM with the shocked ejecta, and the material in these mixing layers produces most of $`L_X`$.
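In practice $`L_X`$ and $`<𝒵>_X`$ are volume sums over the grid; a minimal sketch of the two integrals is given below, where `Lambda_R` stands for a tabulated ROSAT-band Raymond-Smith emissivity coefficient, and the mean molecular weight $`\mu =0.6`$ and the $`T>10^6`$ K cut are our own illustrative assumptions.

```python
import numpy as np

def xray_averages(rho, T, Zfrac, dV, Lambda_R):
    """L_X = sum of n^2 Lambda_R(T) dV over X-ray emitting zones, and the
    emission-averaged ejecta fraction <Z>_X = (1/L_X) times the sum of Z
    weighted by the same emission.  All arguments are zone-by-zone arrays
    except Lambda_R, a callable returning the band emissivity coefficient."""
    n = rho / (0.6 * 1.673e-24)        # number density; mu = 0.6 assumed
    hot = T > 1e6                      # keep only X-ray emitting gas
    em = n[hot]**2 * Lambda_R(T[hot]) * dV[hot]
    L_X = em.sum()
    return L_X, (Zfrac[hot] * em).sum() / L_X
```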
In the last few years the ROSAT, ASCA and BeppoSAX satellites have provided detailed X-ray observations of starburst galaxies. While for dwarf galaxies the hot gas abundance cannot be unambiguously determined (e.g. Della Ceca et al. 1996), for brighter starburst galaxies the count statistics are high enough to make this task possible (Ptak et al. 1997; Tsuru et al. 1997; Okada, Mitsuda & Dotani 1997; Persic et al. 1998). A somewhat surprising result of all these observations is that the iron abundance is invariably small, typically less than 0.1 solar. This low metallicity can easily be understood if the X-ray emission is dominated by the layer of shock-heated ISM, as pointed out by Suchkov et al. (1994). However, this is not a general result, and it does not hold for our models in particular, since the external shock is too slow to heat the ISM to X-ray temperatures (section 3.1). Thus, in model STD the only X-ray emitting gas is expected to be the (shocked) ejecta of the stars formed in the starburst, and its metallicity is expected to be quite high. This abundance discrepancy forces the theoretical models to move toward a higher level of complexity. Low X-ray abundances can be explained in several ways. First, it seems reasonable that thermal conduction and turbulent mixing give rise to a mass-loaded flow (Hartquist, Dyson & Williams 1997, Suchkov et al. 1996) with low emission-averaged metallicity, provided that the cold gas mixed with the hot phase is nearly primordial. In this case the emission-averaged temperature of the hot gas is expected to be low (a few $`10^6`$ K); see section 4.3. Second, the hot gas might be severely depleted by dust. Stellar outflows and SN ejecta are observed to form dust (e.g. Clegg 1989; Colgan et al. 1994), and the dust sputtering time $`t_{\mathrm{sp}}\sim 2\times 10^6a_{\mu \mathrm{m}}/n`$ yr (where $`a_{\mu \mathrm{m}}`$ is the dust grain radius; Draine & Salpeter 1979; Itoh 1989) in the hot phase may be long enough for most of the iron to remain locked into grains after a few $`10^7`$ yr. Another possibility is that the estimated abundances are not accurate. Strickland & Stevens (1998) analysed the synthetic ROSAT X-ray spectrum of a simulated wind-blown bubble, finding that simple fits may underestimate the metallicity by more than one order of magnitude. The inadequacy of 1-T models in estimating the gas abundance has been demonstrated also by Buote & Fabian (1998) and Buote (1999) in the context of hot gas in elliptical galaxies and groups of galaxies. Indeed, Dahlem, Weaver & Heckman (1998) used multi-component models to fit ROSAT PSPC + ASCA spectra of seven starburst galaxies and found that low metallicities are no longer required, and that nearly solar abundances are entirely consistent with the data. Their findings support the idea that the inferred low abundances are caused by the undermodelling of X-ray spectra.

## 4 Other Models

### 4.1 Model SB1

With this simulation we investigate the effect of a weaker starburst on the ISM of a dwarf galaxy. This model is identical to model STD, but the starburst mechanical luminosity is a factor of ten lower. Such a starburst may be more typical among dwarf galaxies. We reduce the mechanical luminosity by lowering the mass loss rate by a factor of ten (see §2.2). We anticipate that in this model the radiative cooling at the contact discontinuities, artificially broadened by numerical diffusion, is now important, and causes the hot bubble to slowly collapse. Fig. 7a shows the density distribution at $`t=30`$ Myr, when the energy and mass input turns off. As in model STD, dense tongues of shocked ISM penetrate the hot bubble as a result of R-T instabilities. As expected, the superbubble is now much smaller than in model STD, and it expands less rapidly. The hydrodynamical evolution is illustrated in the other panels of Fig. 7.
At $`t=60`$ Myr (Fig. 7b) the inner edge of the shocked ISM shell is receding toward the center with $`v\sim 25`$ km s<sup>-1</sup>. The cold gas reaches the origin at $`t\sim 70`$ Myr, much earlier than in model STD. We find that the ISM collapse is slightly accelerated by the spurious energy losses mentioned above. To assess the importance of this undesired numerical effect, we recalculated model SB1 without radiative cooling (the adiabatic model). These two extreme models should bracket reality. For this adiabatic model the replenishment of the central region occurs at $`t\sim 85`$ Myr. Fig. 7c shows the ISM density at $`t=100`$ Myr. The hot, rarefied bubble has almost totally shrunk; the hot gas mass is now only $`\sim 4.4\times 10^2`$ $`\mathrm{M}_{\odot }`$ (it was $`\sim 2.7\times 10^4`$ $`\mathrm{M}_{\odot }`$ at $`t=30`$ Myr). At the same time the adiabatic model contains $`\sim 2.6\times 10^4`$ $`\mathrm{M}_{\odot }`$ of hot gas. The cold ISM continues to move coherently toward the $`z`$-axis, and encounters a weak accretion shock at $`R\sim 0.1`$ kpc. The density distribution at 200 Myr is shown in Fig. 7d. No hot gas is left (while the adiabatic model still contains $`M_{\mathrm{hot}}\sim 4.8\times 10^3`$ $`\mathrm{M}_{\odot }`$, decreasing with time as a result of numerical diffusion). The accretion shock has moved out to $`R\sim 1`$ kpc, where the cold ISM is still accreting with $`v\sim 10`$ km s<sup>-1</sup>. The face-on surface density in the galactic region varies from $`\sim 2.5\times 10^{21}`$ cm<sup>-2</sup> at the very center to $`\sim 4\times 10^{20}`$ cm<sup>-2</sup> at $`R=2`$ kpc. The number density in the central region is about $`0.35`$ cm<sup>-3</sup>. As in model STD, the central surface density slowly increases with time, approaching the critical value for the onset of efficient star formation. Thus, also for this model, the secular hydrodynamical evolution indicates the possibility of recurrent starburst episodes. The time between successive starburst events in this model is shorter than in model STD, being only a few 100 Myr. At $`t=200`$ Myr the ISM ejection efficiency $`f_{\mathrm{ISM}}`$ is essentially zero: all the gas is cold and moves with a velocity lower than the escape velocity ($`f_{\mathrm{ISM}}=1.8\times 10^{-3}`$ for the adiabatic model). The gas mass inside the galactic region is $`M_{\mathrm{ISM},\mathrm{gal}}=6.5\times 10^7`$ $`\mathrm{M}_{\odot }`$, about half of the mass present initially. Since the gas is still accreting, the central ISM mass increases with time: at $`t=300`$ Myr we have $`M_{\mathrm{ISM},\mathrm{gal}}=8.0\times 10^7`$ $`\mathrm{M}_{\odot }`$. Thus, in the case of a moderate starburst, the galaxy is able to recover most of the original ISM in a relatively short time. The evolution of this model is qualitatively similar to that of model STD, but it is now accelerated and, as expected, the galactic ISM ‘forgets’ the starburst more quickly. The circulation of the stellar ejecta is qualitatively similar to that of the standard model. However, now $`f_{\mathrm{ej}}=0.003`$: almost all the metals produced by the starburst remain bound to the galaxy. A significant fraction of the total ejecta mass ($`\sim 2.4\times 10^4`$ $`\mathrm{M}_{\odot }`$, $`\sim 27`$% of the total) is still present in the galactic region at this late time. The very low value of $`f_{\mathrm{ej}}`$ is partly due to the excess radiative losses at the contact surfaces. For the adiabatic model we find $`f_{\mathrm{ej}}=0.14`$ (still much lower than in model STD) and $`M_{\mathrm{ej},\mathrm{gal}}\sim 3.3\times 10^4`$ $`\mathrm{M}_{\odot }`$.
In summary, we find that $`f_{\mathrm{ej}}`$ is significantly lowered by the spurious extra cooling, but the important quantity $`M_{\mathrm{ej},\mathrm{gal}}`$ does not change greatly. The conclusion is that a significant fraction ($`\sim 30`$%) of the ejected metals is retained in the galactic region when the moderate starburst SB1 is adopted.

### 4.2 Model PEXT+SB2

With this model we investigate the evolution of a galactic wind occurring in a galaxy immersed in a hot, tenuous ICM, as described in section 2.1. All the other parameters are identical to model STD. Fig. 8a shows the gas density at $`t=30`$ Myr. The superbubble has already blown out into the ICM, generating a complex filamentary structure. The fastest material penetrating the ICM is moving with $`v\sim 2000`$ km s<sup>-1</sup>. Figs. 8b and 8c show the density at $`t=60`$ Myr and $`t=200`$ Myr. The portion of the cold shell blowing out into the ICM is completely disrupted by the instabilities and spreads over a large volume, due to the high expansion velocities in the rarefied medium. At 200 Myr, the original ISM survives in a toroidal structure ($`1.5<R<8`$ kpc) on the equatorial plane. The inner edge of the cold gas is receding slowly ($`v\sim 20`$ km s<sup>-1</sup>) toward the center, while the outer portion is still expanding ($`v\sim 40`$ km s<sup>-1</sup>). The cold gas starts to collapse toward the center, which is reached at $`t\sim 270`$ Myr, much later than in the previous models. The ISM column density increases more slowly than in model STD, and at $`t=500`$ Myr the central peak is only $`\mathrm{\Sigma }_0\sim 2\times 10^{20}`$ cm<sup>-2</sup>. Thus, in this case, the subsequent star formation episode might be delayed with respect to model STD. At 200 Myr the mass of gas present in the galactic region is $`M_{\mathrm{ISM},\mathrm{gal}}\sim 1.9\times 10^6`$ $`\mathrm{M}_{\odot }`$, about 1.5% of which is hot ($`T\sim 10^6`$ K). At the final time (500 Myr) we have $`M_{\mathrm{ISM},\mathrm{gal}}\sim 8.7\times 10^6`$ $`\mathrm{M}_{\odot }`$. The ejection efficiency is $`f_{\mathrm{ISM}}=0.31`$, much higher than in model STD because of the absence of an extended envelope of (relatively dense) cold gas. The mass of the metal-rich ejecta in the galaxy is $`M_{\mathrm{ej},\mathrm{gal}}\sim 5.8\times 10^3`$ $`\mathrm{M}_{\odot }`$ ($`\sim 1.8\times 10^4`$ $`\mathrm{M}_{\odot }`$ at 500 Myr) and $`f_{\mathrm{ej}}=0.83`$. These values are comparable to those found for model STD. However, we find that the hot gas has been severely contaminated by the hot ICM, and $`𝒵\lesssim 0.05`$ for almost all the hot ISM. The reason for this behaviour is the high temperature of the ICM, which greatly increases the importance of numerical diffusion. We estimate an upper limit for this effect by considering the first-order upwind method (Roache 1972). The numerical diffusion coefficient is $`D_{\mathrm{upwind}}\sim c\mathrm{\Delta }`$, where $`\mathrm{\Delta }`$ is the zone size. The diffusion time is $`\tau _D=\mathrm{\Delta }^2/D\sim 30`$ Myr (here $`\mathrm{\Delta }\sim 30`$ pc at $`R=z\sim 2.5`$ kpc and $`c`$ is the ICM sound speed), so numerical diffusion affects this simulation significantly, and this explains the very low values of $`𝒵`$. We conclude that for model PEXT we cannot calculate the enrichment process in a consistent way. For model STD, given the low temperature of the ISM ($`4.5\times 10^3`$ K), $`\tau _D`$ is more than two orders of magnitude longer, and the intrinsic diffusion is negligible.
We note that the physical diffusion timescale, $`\tau _D=L^2/D`$, where $`L`$ is the typical length scale of the problem ($`L\sim 1`$ kpc), is very short: $`\tau _D\sim 10^2-10^3`$ yr. This is due to the high value of $`D\sim \lambda c`$, where $`\lambda \sim 5`$ Mpc is the mean free path in the ICM (Spitzer 1962). However, even a small magnetic field reduces the mean free path to the order of the ion Larmor radius $`r_L`$. Only in this case are we allowed to consistently use the hydrodynamical equations. With $`\lambda \sim r_L`$ the physical diffusion is effectively impeded.

### 4.3 Models B+SB2 and BCOND+SB2

Panel a of Fig. 9 shows the gas density of model B (section 2) at 30 Myr, just at the end of the starburst activity. The freely expanding wind extends so far that almost all of the galaxy is devoid of pristine gas. There is a radial gradient in the bubble temperature: along the $`z`$-axis $`T`$ ranges from $`\sim 4\times 10^7`$ K close to the reverse shock to $`\sim 10^6`$ K behind the forward shock; a similar pattern is also present along the $`R`$-axis, although the temperatures behind the lateral shock are lower ($`\sim 10^5`$ K) because of the lower velocity of the shock moving through the higher local ambient density. The average density of the hot gas filling the bubble is $`\sim 10^{-4}`$ cm<sup>-3</sup>. The expansion velocity along the symmetry axis is $`\sim 300`$ km s<sup>-1</sup>, and decreases toward the equatorial plane. The bubble accelerates as it expands through the decreasing ISM density profile, and the R-T unstable contact discontinuity generates relatively dense ($`n\sim 10^{-2}`$ cm<sup>-3</sup>), cold ($`T\sim 10^4`$ K) filaments and blobs. Actually, denser structures can be seen on the $`z`$-axis, but they are likely due to our assumption of vanishing radial velocity on the symmetry axis. In fact, cold gas deposited on this axis cannot be effectively removed, a well known shortcoming common to all 2D cylindrical simulations. Given the progressively increasing zone size with $`z`$ and $`R`$, it is likely that the density of the knots is underestimated in our simulations, especially for the condensations far from the center. At $`t=57`$ Myr (Fig. 9b) there is a large region essentially devoid of gas ($`n\sim 5\times 10^{-6}`$ cm<sup>-3</sup>) surrounded by a very thick, low-density shell ($`n\sim 10^{-4}`$ cm<sup>-3</sup>), with a temperature $`\sim 10^6`$ K. The external shock is rounder than in model STD, due to the lack of the collimating effect of the funnel along the $`z`$-axis (cf. Fig. 2b). The dense and cold gas near the equatorial plane is already receding toward the center with a velocity $`\sim 30`$ km s<sup>-1</sup>, and a rarefaction wave is moving outward. The highest density in the shell is $`\sim 10`$ cm<sup>-3</sup> on the equator, where the expansion velocity is $`\sim 10`$ km s<sup>-1</sup>. Apart from the cold gas on the $`z`$-axis, where the density reaches $`\sim 30`$ cm<sup>-3</sup>, the densest filaments have $`n\sim 3\times 10^{-2}`$ cm<sup>-3</sup>. At $`t\sim 75`$ Myr (not shown in Fig. 9) the inflowing cold gas reaches the center, where the density is still rather low ($`n\sim 10^{-2}`$ cm<sup>-3</sup>). After 106 Myr (panel c), the final time of this simulation, the hot gas is still expanding, but the central cold ISM has entirely collapsed, filling a region $`|z|<2.5`$ kpc, $`R<6.5`$ kpc. The galaxy has thus recovered a cold ISM distribution similar to the original one. The ISM mass inside the galaxy is $`M_{\mathrm{ISM},\mathrm{gal}}\sim 10^8`$ $`\mathrm{M}_{\odot }`$. Near the center the density reaches a few cm<sup>-3</sup>.
About the starburst ejecta, its largest content inside the galaxy is reached at $`t=30`$ Myr, with $`M_{\mathrm{ej},\mathrm{gal}}=3.6\times 10^5`$ $`\mathrm{M}_{}`$. At this time $`<𝒵_{\mathrm{gal}}>=2.4\times 10^3`$, and it decreases steadly with time. At $`t=106`$ Myr $`<𝒵_{\mathrm{gal}}>=1.1\times 10^3`$ while the ejecta mass is $`M_{\mathrm{ej},\mathrm{gal}}10^5`$ $`\mathrm{M}_{}`$, about one tenth of the total gas lost by the massive stars. The ISM and metals ejection efficiency, estimated at $`t=106`$ Myr, are $`f_{\mathrm{ISM}}=0.48`$ and $`f_{\mathrm{ej}}=0.77`$ respectively. While the high $`f_{\mathrm{ej}}`$ is consistent with the results obtained in section 3.3, the large value for $`f_{\mathrm{ISM}}`$ is striking when compared to model STD. However, this discrepancy reflects a deficiency in the definition of $`f_{\mathrm{ISM}}`$, rather than a really different behaviour of the two models. As a matter of fact, model B recovers a ‘normal’ ISM before model STD! The discrepancy is due to several factors. First, in model B the massive dark halo is absent. The escape velocity is then quite low (a factor 2-4 less than in model STD) and the gas residing at large radii becomes easily unbound. Second, the ISM distribution in model B is more peaked than in model STD, and the total amount of gas present in the numerical grid is lower (cfr. section 2.1). This, in turn, means that in model B the starburst provides more energy per unit gas mass than in model STD. We believe that the difference in $`f_{\mathrm{ISM}}`$ between model STD and model B should be considered with some caution. In real galaxies, the gas at large radii, which is the source of the difference in $`f_{\mathrm{ISM}}`$, can be removed by ram pressure and tidal stripping, processes not included in our simple models. Thus, the contribution of this gas to $`f_{\mathrm{ISM}}`$ is rather uncertain. The X-ray emission averaged $`<𝒵>_X`$ is much higher than $`<𝒵_{\mathrm{gal}}>`$ and $`increases`$ from $`<𝒵>_X=0.06`$ at $`t=30`$ Myr up to $`<𝒵>_X0.2`$ at $`t\mathrm{}>50`$ Myr; at later times the bubble gas cools out of the X-ray temperatures and $`<𝒵>_X`$ drops to zero at $`t60`$ Myr. In Fig. 10 we show model BCOND, identical to model B but with the heat conduction activated. Again, panel a shows the density at 30 Myr. The superbubble is less extended than in model B, because of the increased radiative losses in the conduction fronts. The temperature distribution inside the superbubble is now rather flat near the equator, but a negative gradient is present along the $`z`$-direction, with the temperature in the range $`10^6<T<10^7`$ K. As in the previous models, cold structures are present due to the R-T instabilities. The density of these structures is $`n3`$ cm<sup>-3</sup>, while the density of the hot gas is $`10^3`$ cm<sup>-3</sup>, one order of magnitude larger than in model B. This higher density is due to the evaporation of the walls of the shocked ISM shell which ‘feed’ the inner region of the bubble. The expansion velocities are similar but lower than those of model B at the same time. Panel b shows the density at 56 Myr and can be compared with panel b of Fig. 9. The size of the superbubble remains smaller and the shape more elongated. The hot gas in the cavity is denser ($`n5\times 10^5`$ cm<sup>-3</sup>) and slightly colder ($`T2.5\times 10^5`$ K) than in the non-conductive case. 
The cold gas near the equatorial plane is receding toward the center, while the outer edge (where $`n10`$ cm<sup>-3</sup>) is still expanding. The densest filaments have $`n0.1`$ cm<sup>-3</sup>. Panel c shows the gas flow at 105 Myr. The cold inflowing gas has just reached the center, much later than model B. In fact, the pressure drop of the hot gas is slower than in model B because the density of the hot gas is kept higher by the shell evaporation and by the slower expansion rate. At $`t=125`$ Myr, the last time of this simulation, $`M_{\mathrm{ISM},\mathrm{gal}}0.92\times 10^8`$ $`\mathrm{M}_{}`$, not far from the initial value. However, only a fraction of the galactic volume contains a cold, dense ISM. In fact, roughly half of the galaxy is still filled with the rarefied gas of the cavity, now only moderately hot ($`T\mathrm{}<10^5`$ K). At this time the ISM ejection efficiency is $`f_{\mathrm{ISM}}=0.32`$. The peculiar structure apparent on the simmetry axis ($`z\mathrm{}>20`$ kpc) at $`t=105`$ Myr (Fig. 10c) is a numerical artifact depending on our treatment of the heat conduction. Collisionless shocks (for istance, in supernova remnants) do not show the hot precursor which would be expected (Zel’dovic & Raizer, 1966). This means that the plasma instabilities responsible of the shock front formation also inhibit the heat flow through the front itself (Cowie 1977). To mimic this phenomenon in numerical simulations, the heat conduction coefficient must vanish at the shock front. To detect the shock front position on the computational grid is an easy task in 1D simulations, but becomes rather cumbersome in two dimensions. Fortunately, the precursor length is rather short (shorter than the grid size) unless the upwind density is very low. We thus did not make any special treatment at the shock front. Effectively, the heat flux overruns the front only at late times, when the upwind density becomes rather low. However, this happens when the shock is well outside the galaxy, and our conclusions are not affected. The ejecta content inside the galaxy is $`M_{\mathrm{ej}}=1.7\times 10^5`$ $`\mathrm{M}_{}`$after 30 Myr ($`<𝒵_{\mathrm{gal}}>1.2\times 10^3`$) and decreases steadily down to $`M_{\mathrm{ej}}=0.77\times 10^5`$ $`\mathrm{M}_{}`$after 125 Myr, when $`<𝒵_{\mathrm{gal}}>8.3\times 10^4`$. At $`t=125`$ Myr we find $`f_{\mathrm{ej}}=0.88`$. It is particularly interesting to investigate the X-ray emission for model BCOND, since now the numerical diffusivity does not affect the value of $`L_X`$ and $`<𝒵>_X`$ (cfr. section 3.4). In fact, the thermal conduction naturally broadens the contact surfaces on lengthscales larger than thickness due to the numerical diffusion. The temporal variation of $`L_X`$ is shown in Fig. 6. $`L_X`$ is higher than in model B because of the emission arising in conduction fronts. The emission averaged abundance $`<𝒵>_X`$ ranges between 0.13 and 0.20 for $`t\mathrm{}<70`$ Myr. As the energy input stops, the temperatures of the hot phase quickly drops and after $`t70`$ Myr no more X-ray emitting gas is present. The observable emission averaged temperature of the hot gas is $`<T>_X2\times 10^6`$ K for $`t\mathrm{}<30`$ Myr and drops quickly thereafter. ## 5 Discussion and Conclusions The results presented here qualitatively confirm the conclusions of previous investigations on the effect of galactic winds in dwarf galaxies (e.g. MF). 
In general, it is found that the ISM is more robust than expected, and it is not disrupted even if the total energy input is much greater than the gas binding energy. In fact, the gas in the optical region of dwarf galaxies is only temporarily affected by the starburst, and the galaxy is able to recover a ‘normal’ ISM after a time of the order of 100 Myr from the starburst event, here assumed to be instantaneous. Our results agree well with the ‘moderate form’ of galactic wind dominated evolution of dwarf galaxies described by Skillman (1997). In Table 2 we summarize the values of the fraction of ISM and metal-rich ejecta that is lost by the galaxy. We find that the evolution of the ISM can be separated in two phases. The first one corresponds to the energy input period (which lasts 30 Myr in our models). During this phase the superbubble expands surrounded by a fragmented and filamentary shell of cold gas. The hot gas inside the bubble and the cold shell gas are in pressure equilibrium. The second phase starts when the energy input stops: the pressure of the hot bubble, still expanding along the polar direction, drops quickly. This causes the inner portion of the shell near the equator to collapse back toward the center, replenishing the galactic region with cold gas. The collapse is driven mainly by the pressure gradient, with the gravity being of secondary importance. The replenishment process occurs through inflow of cold gas moving parallel to the equatorial plane, thus resembling the inflows considered by Tenorio-Tagle & Munoz-Tunon (1997). However, the ram pressure associated to this flow in model STD, representative of all our models, is $`10^{14}`$ dyn cm<sup>-2</sup>, five orders of magnitude lower than those assumed by Tenorio-Tagle & Munoz-Tunon (1997). Evidently, if such massive inflows exist, they must have a different origin. We found that the central ISM reaches the critical column density required for rapid star formation after 0.1 - 1 Gyr from the starbust, the exact value depending on the galactic parameters, when a new starburst may start. This episodic star formation regime is necessary to account for the chemical evolution of BCD galaxies, and we have shown here that it is consistent with the hydrodynamical evolution of the ISM. Most of the metal-rich material shed by the massive stars resides in the hot phase of the ISM, and for powerful starbursts it is easily lost from the galaxy (Table 2). We estimate that a fraction of $`0.50.9`$ of the total metal-rich gas is dispersed in the intergalactic medium when the starburst model SB2 is adopted. However, for moderate energy input rates (model SB1), only a small fraction ($`\mathrm{}<10`$ %) becomes formally unbound. In spite of the smallest $`f_{\mathrm{ej}}`$, model SB1 has the lowest $`<𝒵_{\mathrm{gal}}>`$, since the total amount of ejecta is a factor of 10 lower than the other models. Most of the ejecta material is pushed to large distance from the galaxy (several kpc), and its fate is uncertain, being subject to ram pressure and tidal stripping. These processes may effectively remove material loosely bound to the galaxy. There is some quantitative difference between our findings and those by MF. The generally lower $`f_{\mathrm{ej}}`$ found in our model is likely to be the result of our more extended gaseous halo. Their models, with a sharp truncation of the ISM, are similar to our model PEXT. 
The most striking disagreement is between model SB1, for which we find $`f_{\mathrm{ej}}=0.003`$, and their model with $`M_{\mathrm{gas}}=10^8`$ $`\mathrm{M}_{}`$and $`L_{\mathrm{inp}}=10^{39}`$ erg s<sup>-1</sup> which has $`f_{\mathrm{ej}}=1`$. However, as explained in section 4.1, model SB1 suffers of some numerical extra-cooling; the same model without radiative losses gives $`f_{\mathrm{ej}}=0.14`$, probably a more realistic value if thermal conduction is not effective. A comparison of our Fig. 7a-b with fig. 2a of MF (panel with the model $`M_\mathrm{g}=10^8`$; $`L_{38}=10`$, in their notations) dramatically shows the sensitivity of the superbubble dynamics (and size) on the ISM distribution. The cold gas replenishing the central region, from which a successive starburst may form, has been only slightly polluted by the massive star ejecta, with $`<𝒵>4\times 10^42\times 10^3`$, or $`<Z_\mathrm{O}>2\times 10^310^2`$ $`Z_{\mathrm{O},}`$, with the assumptions described in section 3.3. Thus, many starburst episodes are necessary to build an average metallicity $`Z0.25`$ $`Z_{}`$, as determined for NGC 1569 from observations of emission lines gas (e.g. Kobulnicky & Skillman 1997). Many dwarf galaxies, however, are more metal poor (the abundance of IZw18 is only 0.07 solar), and few bursts may be sufficient to produce the observed metallicity. The values listed in Table 2 demonstrate how the evolution of the ISM is not regulated by the ejection efficiency parameters $`f_{\mathrm{ISM}}`$ and $`f_{\mathrm{ej}}`$ alone. For instance, model PEXT and BCOND have similar $`f_{\mathrm{ISM}}`$ and $`f_{\mathrm{ej}}`$, but very different ISM and ejecta masses. Conversely, the gas mass evolution of model SB1 and model B is comparable, despite of dissimilar ejection efficiencies. Comparing models STD and B, we found that a dark matter halo has little direct influence on the final behaviour of the ISM. However, dark matter and rotation determine the initial gas distribution, which is an important parameter for the flow evolution. For instance, the central region of model STD is refilled with cold gas more slowly than model B, a difference which reflects the different initial ISM distribution. In order to evaluate how the assumption of an instantaneous burst influences our results, we run an additional model similar to STD, but with $`L_{\mathrm{inp}}=1.128\times 10^{40}`$ erg s<sup>-1</sup>, $`\dot{M}=0.009`$ M yr<sup>-1</sup>. The energy and mass sources are now active for 100 Myr, so that the total energy and mass injected are the same as in STD. With this model, which in a simple way mimics the effect of a prolonged starburst, we wish to check the sensitivity of our results on the assumption of a instantaneous starburst adopted in models illustrated in the previous sections. We found that the general dynamics is similar to that of model STD and is not described here. The ejection efficiencies, again calculated at $`t=200`$ Myr, are $`f_{\mathrm{ISM}}=0.055`$ and $`f_{\mathrm{ej}}=0.40`$, almost identical to those found for STD (Table 2). Thus, it appears that the instantaneous starburst hypothesis does not invalidate our general findings. We devoted section 3.4 to discuss the complex X-ray emission arising from starburst galaxies. 
We warn again that for model BCOND only we can calculate the X-ray quantities in a strictly consistent way, the other models having the contact surfaces numerically spread by the intrinsic diffusion of the hydrocode (although several physical processes are thought to produce similar effect, cfr. section 3.4). In model BCOND the thermal conduction generates physically broadened interfaces between hot and cold gas. We found that the X-ray luminosities in the ROSAT band (Fig. 7) are generally less than those estimated by Della Ceca et al. (1996, 1997) for NGC 1569 and NGC 4449. It is thus suggested that a mass loading mechanism is at work in these systems (see also Suchkov et al. 1996). Moreover, the low abundances found in X-ray studies also indicate that thermal conduction (or some other process that mix cold and hot gas) is effective. Contrary to Suchkov et al. (1994) we found that the shocked ISM layer does not contribute appreciably to the X-ray luminosity for $`t\mathrm{}<30`$ Myr, a fact indicating that the origin of the X-ray radiation is model dependent. For $`L_{\mathrm{inp}}`$ appropriate for typical dwarf galaxies we found that $`L_X`$ is dominated by the ISM mixed with the shocked ejecta at the contact discontinuities. ## Acknowledgments We are grateful to Luca Ciotti, Laura Greggio, Bill Mathews and Monica Tosi for interesting discussions. We are indebted to the referee for a number of thoughtful comments and suggestions. This work has been partially supported by the Italian Ministry of Reserch (MURST) through a Cofin-98 grant.
no-problem/9907/astro-ph9907347.html
ar5iv
text
# Primordial black holes as a source of extremely high energy cosmic rays ## 1 Introduction Small Primordial Black Holes (PBHs), with masses well below the self-gravitational collapse limit and possibly as low as the Planck Mass ($`M_{Pl}5.5\times 10^5g`$) may have formed in the primordial universe . Numerous processes, compatible with standard cosmological scenarios, can be put forward to explain their formation . In particular, if they result from initial density perturbations (with an initial mass determined by the horizon mass at this epoch), the mass spectrum can be analytically determined following the natural hypothesis of scale-invariant Gaussian fluctuations: $$\frac{\mathrm{d}^2n}{\mathrm{d}M\mathrm{d}V}=(\alpha 2)M^\alpha M_{evap}^{\alpha 2}\mathrm{\Omega }_{PBH}\rho _{crit}$$ (1) where $`M_{evap}10^{15}`$ g is the mass of a PBH evaporating nowadays, $`\mathrm{\Omega }_{PBH}`$ is the current density of PBHs in units of critical density $`\rho _{crit}`$ and $`\alpha =(1+3\gamma )/(1+\gamma )+1`$, $`\gamma =p/\rho `$ being the pressure to density ratio. This study is dedicated to the final-stage emission of PBHs to investigate if they can be considered as candidates for extremely high energy cosmic rays (EHECR), beyond 100 EeV ($`10^{20}`$ eV). Observational data show that the cosmic ray flux seems to be curiously unaffected by the expected Greisen-Zatsepin-Kuz’min (GZK) cutoff (due to interaction with the 2.7K cosmological background above photoproduction threshold). The integrated emission of PBHs is estimated in the following sections for a volume of universe where predicted effects of this interaction are weak (i.e. for a radius close to the attenuation length $``$ 50 Mpc). ## 2 Individual emissions Hawking showed that black holes can radiate particles in a process qualitatively equivalent to $`e^+e^{}`$ pairs production in a strong electric field. When the hole temperature becomes greater that the quantum chromodymanics confinment scale ($`T>\mathrm{\Lambda }_{QCD}`$), i.e. some hundreds of MeV, emitted particles are fundamental constituents rather than composite hadrons . The EHECR production by PBHs (which are particularly affected by evaporation effects because of their low mass) has to be understood in such an approach. The emission spectrum for particles of energy $`Q`$ per unit of time $`t`$ is: $$\frac{\mathrm{d}^2N}{\mathrm{d}Q\mathrm{d}t}=\frac{\mathrm{\Gamma }_s}{h\left(exp\left(\frac{Q}{h\kappa /4\pi ^2c}\right)(1)^{2s}\right)}$$ (2) where contributions of angular velocity and electric potential have been neglected since the black hole discharges and finishes its rotation much faster than it evaporates . $`\kappa `$ is the surface gravity, $`s`$ is the spin of the emitted species and $`\mathrm{\Gamma }_s`$ is the absorption probability. In the general case, $`\mathrm{\Gamma }_s`$ is a function of $`Q`$, the particle mass $`m`$, the hole mass $`M`$, and the number of degrees of freedom of the species. Its value can only be computed by numerical approximations based on expansion in spherical harmonics of the scattering matrix . In the optical limit (i.e. 
$`Q\mathrm{}`$) , which is totally justified for energies considered in this work, $$\mathrm{\Gamma }_s\frac{27Q^2}{64\pi ^2(kT)^2}$$ (3) where $`T`$ is the ”temperature” defined by $$kT=\frac{hc^3}{16\pi ^2GM}10^4\left(\frac{1\mathrm{g}}{M}\right)\mathrm{EeV}$$ (4) In such a description, the black hole behaviour mimics a black body whose temperature increases when the mass decreases until it reaches the Planck limit where this theoretical description becomes unadapted. The time-evolution of the system depends on the emitted constituents’ degrees of freedom and is therefore based on the choice of a particle physics model. It is likely that new particles, absent from the standard model, are emitted when the black hole temperature becomes extremely high, but the general behaviour remains unchanged: all the emission above 100 EeV is nearly instantaneous. The mass loss rate of a PBH is : $$\frac{\mathrm{d}M}{\mathrm{d}t}\frac{(7.8d_{s=1/2}+3.1d_{s=1})10^{24}\mathrm{g}^3\mathrm{s}^1}{M^2}$$ (5) where $`d`$ is the mass-dependant number of degrees of freedom for the emitted particles of spin $`s`$. In the standard model, $`\mathrm{d}M/\mathrm{d}t7.9\times 10^{26}/M^2`$ above the top quark production threshold. It leads to $$\mathrm{d}t=\frac{1}{(7.8d_{s=1/2}+3.1d_{s=1})}\frac{h^3c^6}{(4\pi )^6G^3}\frac{d(kT)}{(kT)^4}$$ (6) or $$\mathrm{d}t_{}1.510^{15}\frac{\mathrm{d}(kT_{})}{(kT_{})^4}$$ (7) where $`t_{}=t/1\mathrm{s}`$ and $`kT_{}=kT/1\mathrm{E}\mathrm{e}\mathrm{V}`$. Since it has been checked that only particles emitted at $`kT5`$ EeV will contribute (within a few percent) to the flux of cosmic rays with energies beyond 100 EeV, the characteristic production time is $`\mathrm{\Delta }t4\times 10^{18}`$ s. As a comparison, the total evaporation time for a $`10^{15}`$ g black hole is of the order of the age of the universe. ## 3 Extremely high energy emission Taking into account formula (6) relating the temperature to the mass, the previous emission spectrum can be rewritten in its integral form per particle species above a threshold $`E_{th}`$: $$N=\frac{1}{(7.8d_{s=1/2}+3.1d_{s=1})}\frac{27h^2c^9}{8^6\pi ^8G^3}_{kT_i}^{kT_{Pl}}\frac{1}{(kT)^6}_{E_{th}}^{\mathrm{}}\frac{Q^2\mathrm{d}(kT)\mathrm{d}Q}{e^{Q/(kT)}(1)^{2s}}$$ (8) where $`T_i`$ and $`T_{Pl}`$ are the initial and Planck temperatures. It can be numerically expressed as: $$N=1.5610^{16}_{kT_i}^{kT_{Pl}}\frac{\mathrm{d}(kT_{})}{(kT_{})^3}_{E_{th}/(kT)}^{\mathrm{}}\frac{x^2\mathrm{d}x}{e^x(1)^{2s}}$$ (9) where $`x=Q/(kT)`$. Fig 1 shows that the Planck cutoff is effective for energies well beyond those of interest here. After their production, emitted quark and gluons fragment and produce a subsequent number of hadrons. Monte Carlo simulation codes tuned to reproduce experimental data obtained on colliders cannot be used because the energies considered here are several orders of magnitude greater than those available today. 
The multiplicity $`n_h`$ of charged hadrons produced in a jet of energy $`Q`$ is therefore estimated by means of the leading log QCD computation : $$n_h(Q)3\times 10^2e^{2.7\sqrt{ln(Q/\mathrm{\Lambda })}}+2$$ (10) To get the resulting hadron spectrum, Hill derived the following distribution: $$\frac{dn_h}{dz}10^1e^{2.7\sqrt{ln\frac{1}{z}}}\times \frac{(1z)^2}{z\sqrt{ln\frac{1}{z}}}$$ (11) The number of emitted hadrons above the threshold, by a PBH of temperature $`T`$, can then be written as: $$N_h=1.5610^{16}_{kT_i}^{kT_{Pl}}\frac{d(kT_{})}{(kT_{})^3}_{mc^2/(kT)}^{\mathrm{}}\frac{x^2dx}{e^x(1)^{2s}}_{E_{th}/(xkT)}^1\frac{\mathrm{d}n_h}{\mathrm{d}z}dz$$ (12) per particle species of mass $`m`$. The numerical computation has been compared to what is given by the empirical function $`dn_h/dz=(15/16)\times z^{3/2}(1z)^2`$, leading to a multiplicity which can be easily calculated analytically. Results are in agreement within an error of 12% which is certainly not the dominant uncertainty in this evaluation. Figure 2 illustrates the general behavior of the total hadronic multiplicity $`_{E_{th}/Q}^1\frac{dn_h}{dz}𝑑z`$ above a given threshold. The total number of emitted particles above a detector threshold $`E_{th}=100\mathrm{EeV}`$ can then be estimated by summing the direct flux (taking into account all the standard model degrees of freedom) of fundamental stable particles and the fragmentated flux resulting from the previous computation for coloured objects. The numerical result is $`F(100\mathrm{EeV})8.510^{14}`$ particles over the lifetime of a PBH. ## 4 Resulting flux above 100 EeV The derivation of the exact resulting spectrum on the Earth is a complete study by itself, well beyond the scope of this work. It is straightforward to demonstrate that the integrated emission goes as $`E^2`$, which seems quite difficult to conciliate with the cosmic-ray experimental data if energy-dependent confinement effects are ignored. The following section therefore aims at evaluating the orders of magnitude involved. To derive the resulting flux reaching the earth, PBHs have been considered as classical (non baryonic) cold dark matter clustered in galactic halos. The Milky Way mass distribution is therefore assumed to follow the simple law in spherical coordinates: $$\rho (R)=\rho _{}\frac{R_c^2+R_{}^2}{R_c^2+R^2}$$ (13) where $`R`$ is the distance between the considered PBHs and the Galactic Center, $`\rho _{}`$ is the local density of exploding PBHs, and $`R_c`$ is the core radius of the halo. The particle flux becomes: $$\left(\frac{\mathrm{d}N}{\mathrm{d}t}\right)_{galactic}=\rho _{}\times F\times J(R_H,R_c,R_{})$$ (14) where $$J(R_H,R_c,R_{})=\frac{1}{8\pi }_0^\pi _0^{R_H}R^2\frac{R_c^2+R_{}^2}{R_c^2+R^2}\frac{sin\varphi \mathrm{d}R\mathrm{d}\varphi }{R_{}^22RR_{}cos\varphi +R^2}$$ (15) $`R_H`$ being the total radius of the halo. Table 1 gives fluxes normalized to the average for extreme values of $`R_c`$ and $`R_H`$ (for $`R_{}=8`$ kpc): it shows a quite low dependance on the halo parameters. The extragalactic contribution is computed by assuming a standard galaxy distribution $`\rho _G0.01e^{\pm 0.4}h^3\mathrm{Mpc}^1`$ (with the Hubble parameter defined as $`H_0=h100`$ km.s<sup>-1</sup>.Mpc<sup>-1</sup>). 
The resulting flux is: $$\left(\frac{\mathrm{d}N}{\mathrm{d}t}\right)_{extragalactic}=K(F,R_H,R_c,R_{})\times \rho _{}\times \rho _G\times R_{GZK}$$ (16) where $`K(R_H,R_c,R_{})`$ is the average emission of a single galaxy (obtained by the previous method) and $`R_{GZK}50\mathrm{Mpc}`$ is the radius of a sphere ”unaffected” by the GZK cutoff. On such distances, it is not necessary to redshift energies within the expected accuracy of a few percent. Numerical results for average values of physical parameters show that the galactic contribution is nearly three orders of magnitude larger that the extra-galactic component, even assuming the highest galaxies number density and the upper Hubble parameter limits (h$``$1). As it only depends linearly on $`R_{GZK}`$, the accurate determination of this radius is also irrelevant. The total flux above 100 EeV is then: $$\left(\frac{\mathrm{d}n}{\mathrm{d}t}\right)_{PBH}=3.810^{23}\times \left(\frac{\rho _{}}{1\mathrm{year}^1\mathrm{pc}^3}\right)\mathrm{m}^2\mathrm{s}^1\mathrm{sr}^1$$ (17) Experimental data on EHECR show an integrated flux of the order of $`\left(\frac{\mathrm{d}n}{\mathrm{d}t}\right)_{exp}10^{16}\mathrm{m}^2\mathrm{s}^1\mathrm{sr}^1`$ . The required density of exploding PBHs near the earth to reproduce such a signal is then $`\rho _{}2.610^6\mathrm{year}^1\mathrm{pc}^3`$. ## 5 Discussion Direct observational constraints on the local PBH explosion rate $`\rho _{}`$ are quite difficult to obtain. A reliable search for short bursts of ultra high-energy gamma radiations from an arbirary direction have been performed using the CYGNUS air-shower array . No strong 1 second burst was observed and the resulting upper limit, based on the exhaustive analysis of a very fine binning of the sky, is in the range $`\rho _{}0.910^6\mathrm{year}^1\mathrm{pc}^3`$. Very similar results were derived by the Tibet and the AIROBIC collaborations . TeV gamma-rays have also been used to search for short time-scale coincidence events, thanks to the imaging atmospheric Cherenkov technique developped by the Whipple collaboration. The very high-energy gamma-ray bursts detected are compatible with the expected background, within $`\pm 1.7\sigma `$. The resulting upper limit obtained with 5 years of data , i.e. $`\rho _{}310^6\mathrm{year}^1\mathrm{pc}^3`$, is substantially better than the previous published results in the TeV range. All those limits are roughly compatible with the density required to generate the observed EHECR spectrum. At the opposite, low-energy ($`<0.5`$ GeV) cosmic-ray antiprotons detected by a BESS 13-hours ballon flight have been used to put a much more severe upper limit of $`\rho _{}210^2\mathrm{year}^1\mathrm{pc}^3`$, which could exclude PBHs as serious candidates for EHECR. This analysis is particularly promising since the authors have shown that the local PHB-antiproton flux can only be due to contributions from black holes that are very close to explosion, and exist within a few kpc away from the Solar system. Nevertheless, such data suffer from an important lack of statisics and from contamination effects due to interactions with the atmosphere. Future results from the AMS spectrometer on board the International Space Station will give a much more accurate antiproton spectrum in the 0.1-1 GeV range. Those data should allow a stringent upper limit (if not a positive detection) on nearby exploding PBHs. An entirely different approch is to study the diffuse gamma-ray background spectrum. 
The emission from PBHs over the lifetime of the Universe is integrated so as to evaluate the resulting particles and radiations. This method leads to $`\rho _{}10\mathrm{y}\mathrm{e}\mathrm{a}\mathrm{r}^1\mathrm{pc}^3`$ for clustered black holes. It should, anyway, be emphasized that such a study does not directly constrain $`\rho _{}`$. The resulting ”Page-Hawking bound” on $`\mathrm{\Omega }_{PBH}`$, derived to match the observed spectrum at 100 MeV, is converted into an upper limit on the initial number density of holes per logarithmic mass interval $`N_{PBH}`$ at $`M=M_{evap}`$ under assumptions on the Hubble parameter, on the relative matter density ($`\mathrm{\Omega }_M`$), on the equation of state of the Universe at the formation epoch, and on the gaussian distribution of initial density perturbations. This latter point is rather controvertial. The upper limit on $`\rho _{}`$ which can then be derived has to account for the large (possibly up to 8 orders of magnitude) uncertainties associated with clustering. Recent reviews on the detection of PBHs captured around massive objects show that, when the first astrophysical objects with masses of the order of the Jeans mass were forming, black holes haloes on the sub-galactic scale could have formed around old globular clusters, dark matter clusters or population III stars. This makes the use of 100 MeV gamma-rays a quite difficult way of ruling out an important local rate of PBH explosions, though future GLAST data should change the situation by a dramatic improvement in statistics and resolution. Furthermore, the first results from the AUGER Observatory will soon give high statistics samples of EHECR. With the PBH space distribution previously assumed, the resulting EHECR flux would be from 6.0 to 2.2 times higher in the Galactic Center direction than in the opposite direction, for an integrated observation angle from 10 to 90 degrees. After five years of operation, the AUGER observatory should collect up to 300 cosmic rays above 100 EeV. Such an anisotropy would be detectable if more than approximately 50% of them come from PBHs. Finally, some evidences for PBH signatures available nowadays should be noted. Studies of the BATSE 1B and 1C catalogs have shown that some gamma-ray bursts (GRBs) were consistent with a PBH evaporation origin at the quark-gluon phase transition. Characteristics of selected events are in remarkable agreement with the ”Fireball” PBH picture. The resulting (model dependent) limit is significantly lower than what is expected in the present work, and disfavours a PBH origin for EHECR. Nevertheless, new analysis of EGRET data gives some evidences for a gamma-ray halo ”glow” due to PBH emission. Those first tentative detections are very promising for further investigations on the subject. From the theoretical point of view, it should also be emphasized that results given in this paper are based on the standard particle physics model. The probable increase of degrees of freedom available when the black hole temperature exceeds energies currently available on colliders would modify the estimated fluxes, making the final explosion much more violent. This could validate PBHs as a good source candidate for a fraction of the observed high energy cosmic rays. Acknowledgments. I would like to thank Cecile Renault for very helpful discussions.
no-problem/9907/nucl-th9907121.html
ar5iv
text
# Is there 𝑛⁢𝑝 pairing in odd-odd N=Z nuclei? \[ ## Abstract The binding energies of even-even and odd-odd $`N=Z`$ nuclei are compared. After correcting for the symmetry energy we find that the lowest $`T=1`$ state in odd-odd $`N=Z`$ nuclei is as bound as the ground state in the neighboring even-even nucleus, thus providing evidence for isovector $`np`$ pairing. However, $`T=0`$ states in odd-odd $`N=Z`$ nuclei are several MeV less bound than the even-even ground states. We associate this difference with a pair gap and conclude that there is no evidence for isoscalar correlated pairs in $`N=Z`$ nuclei. \] Soon after the interpretation of superconductivity in terms of a condensate of strongly correlated electron pairs (Cooper pairs) by Bardeen, Cooper and Schrieffer (BCS) a similar pairing mechanism was invoked for the nucleus to explain, for example, the energy gap in even-even nuclei and the magnitudes of moments of inertia. For almost all known nuclei, i.e. those with $`N>Z`$, the “superfluid” state consists of neutron ($`nn`$) and/or proton ($`pp`$) pairs coupled to angular momentum zero and isospin T=1. However, for nuclei with $`N=Z`$ the near degeneracy of the proton and neutron Fermi surfaces (protons and neutrons occupy the same orbitals) leads to a second class of Cooper pairs consisting of a neutron and a proton ($`np`$). The $`np`$ pair can couple to angular momentum zero and isospin $`T=1`$ (isovector), or, since they are no longer restricted by the Pauli exclusion principle, they can couple to $`T=0`$ (isoscalar) and the angular momentum is $`J=1`$ or $`J=J_{max}`$ , but most commonly the maximum value . Charge independence of the nuclear force implies that for $`N=Z`$ nuclei, $`T=1`$ $`np`$ pairing should exist on an equal footing with $`T=1`$ $`nn`$ and $`pp`$ pairing. Whether there also exists strongly correlated $`T=0`$ $`np`$ pairs, has remained an open question. Early theoretical works discussed the competition between $`T=0`$ and $`T=1`$ pairing within the BCS framework. Recent works have focussed on the solutions of schematic (or algebraic) and realistic shell models , as well as on the properties of heavier $`N=Z`$ nuclei, and the effects of rotation. To date, there exists a wealth of experimental evidence in support of the existence of $`nn`$ and $`pp`$ pairs, but little or no evidence for $`np`$ pairing mainly because of the experimental difficulties in studying $`N=Z`$ nuclei. Nevertheless, following recent advances in the experimental techniques and considering the new possibilities that will become available with radioactive beams, there has been a revival of nuclear structure studies along the $`N=Z`$ line. In this letter we present an analysis<sup>*</sup><sup>*</sup>*In preparing this manuscript, a preprint (P.Vogel, Los Alamos preprint, nucl-th/980515) came to our attention describing a very similar analysis to that presented here and with similar conclusions. of experimental binding energies ($`BE`$) of nuclei along the $`N=Z`$ line and the relative excitation energies of the lowest $`T=0`$ and $`T=1`$ states in self-conjugate ($`N=Z`$, $`T_z=0`$) odd-odd nuclei. We have found evidence for the existence of strong $`T=1`$ $`np`$ pairing in $`N=Z`$ nuclei, but find no such evidence for $`T=0`$ $`np`$ pairing. Let us start by recalling that pairing effects can be isolated by studying differences in binding energies. 
Particularly, the difference $`BE_{eveneven}BE_{oddodd}\mathrm{\Delta }_p+\mathrm{\Delta }_n2\mathrm{\Delta }`$ (1) is used as a measure of the pair gap, $`\mathrm{\Delta }`$, for both protons and neutronsThere is usually a correction term due to the residual $`np`$ interaction. This term is of order $`20\text{MeV}/A`$ and we will not consider it here.. Implicit in Eq. (1) is the assumption that the ground states have the same isospin, which is the case for nuclei with $`NZ`$ since they are maximally aligned in isospace, i.e. $`T=T_z=\frac{1}{2}(NZ)`$. Equation (1) is also true when comparing $`T=0`$ states in even-even and odd-odd $`N=Z`$ nuclei, and the difference in binding energy, given by $`BE_{ee}(N,Z){\displaystyle \frac{(BE_{oo}(N1,Z1)+BE_{oo}(N+1,Z+1))}{2}},`$ (2) is shown in Fig. 1, where the binding energies are from Ref. . For comparison, the same quantities are given for nuclei with $`N=Z+4`$. Taking the average in Eq. (2) removes the smooth variations due to volume, surface, and Coulomb energies, and any remaining differences are then attributed to shell or pairing effects. The extra binding of the even-even systems is clearly seen in Fig. 1 and it follows the known $`1/A^{1/2}`$ dependence. This result shows that the $`T=0`$ states in odd-odd $`N=Z`$ nuclei behave like those in any other odd-odd nucleus. Assuming that the binding energy differences reflect differences in pairing energy then the extra $`n`$ and $`p`$ block the pairing to the same degree as any “standard” 2-quasiparticle state. Note, if the ground states of $`N=Z`$ even-even nuclei contained $`T=0`$ correlated pairs, the addition of a $`T=0`$ $`np`$ pair would not give a gap, and the average binding energy of the two odd-odd nuclei would be the same as the even-even neighbor. This suggests that correlated $`T=0`$ pairs do not contribute significantly to the pairing energy in $`N=Z`$ nuclei. Is it possible that only $`T=1`$ pairing is important for these $`N=Z`$ nuclei? If $`np`$ $`T=1`$ pairs form a correlated state, the lowest $`T=1`$ state in self-conjugate odd-odd nuclei should be as bound as that of the neighboring even-even ground state. An analysis similar to that used for $`T=0`$ states should provide the answer. However, in applying Eq. (1) or (2) to determine the binding energy difference we need to include a symmetry energy term because of the different isospins (i.e. $`T=1`$ in odd-odd $`N=Z`$ nuclei and $`T=0`$ in neighboring even-even $`N=Z`$ nuclei). A discussion on the symmetry term is given in refs. . To extract the symmetry energy ($`E_{sym}=BE_{sym}`$) the experimental binding energies of several nuclei in the range $`A=1064`$ were plotted, as shown in Fig. 2, after subtracting volume, surface, and Coulomb terms. (The surface, Coulomb, and symmetry terms have the opposite sign to the volume term.) They are plotted as a function of $`T(T+x)`$, for three cases: 1) $`x=4`$, corresponding to the SU(4) Wigner supermultiplet expression, 2) $`x=1`$, i.e. $`T(T+1)`$, and 3) $`x=0`$, giving a $`T^2`$ approximation. While any of these choices can be used, the $`T(T+1)`$ expression provides a better account of the experimental data, as discussed in Ref. . In our analysis we use a symmetry energy given by $`E_{sym}=\frac{75\text{MeV}}{A}T(T+1)`$ which represents an average neglecting the effects of shell structure and pairing. The binding energy difference for $`T=1`$ states in odd-odd $`N=Z`$ nuclei compared with $`T=0`$ ground states in neighboring even-even $`N=Z`$ nuclei is presented in Fig. 
3 (squares). If the only difference between the even-even ground state and the odd-odd $`T=1`$ state were the symmetry term, then the difference in binding energy is given by the upper solid line. That is, the symmetry energy of the $`T=1`$ state ($`\frac{75\text{MeV}}{A}T(T+1)=\frac{150\text{MeV}}{A}`$) subtracted from the binding energy of the even-even nucleus provides the correct reference to which the odd-odd $`T=1`$ states should be compared. It is also possible to use the even-even $`T=1`$ ($`T_z=1,1`$) isobaric analog states as a reference, rather than the global expression $`\frac{75\text{MeV}}{A}T(T+1)`$. After correcting for the Coulomb energy, the binding energies of the isospin triplet are very similar, often within a few hundred keV. The average binding energies of the even-even $`T=1`$ ($`T_z=1,1`$) isobaric analog states, relative to the even-even $`T=0`$ ground state, are also shown in Fig. 3 (dotted line). These values are extremely close to those of the corresponding $`T=1`$, $`T_z=0`$ state in the odd-odd nucleus. Since, (i) the binding energy difference between the $`T=1`$, $`T_z=0`$, (odd-odd) and $`T=0`$, $`T_z=0`$ (even-even) states is described by the symmetry energy term only, and (ii) the $`T=1`$ ($`T_z=1,1`$) state is the ground state of the even-even isobaric analog, then the binding energy difference ($`BE_{ee}(T=0)BE_{oo}(T=1)`$) cannot be associated with a difference in pairing. Rather, it is due to the difference in isospin for which the smooth overall behavior is given by the symmetry energy. These results indicate that the lowest $`T=1`$ state in a self conjugate odd-odd nucleus is as bound as the neighboring even-even $`N=Z`$ ground state (after correcting for the symmetry energy). In other words, there is no difference in pairing, and just as the addition of an $`nn`$ or $`pp`$ pair to an even-even nucleus does not block pair correlations, neither does the addition of an $`np`$ $`T=1`$ pair in $`N=Z`$ nuclei. However, as expected, adding a single $`n`$ or $`p`$ to the even-even core does reduce the pair energy and results in a binding energy difference in excess of the symmetry energy, as seen by the fact that the data points (stars in Fig. 3) for an odd nucleus ($`N=Z+1`$) lie higher than the symmetry energy expected for a T=1/2 nucleus (lower solid curve in Fig. 3). In view of the charge-independence of the nuclear force these results may not be too surprising; nevertheless they provide a strong argument in favor of the existence of full (i.e. $`nn`$, $`pp`$, and $`np`$) isovector pairing correlations in $`N=Z`$ nuclei. Finally, we consider the relative energies of the $`T=0`$ and $`T=1`$ states in odd-odd $`N=Z`$ nuclei. If there were no $`np`$ pairing of any type ($`T=0`$ or $`T=1`$) the $`T=1`$ state should lie above the $`T=0`$ state at an excitation energy given by the symmetry term. However, the analysis of the experimental data presented above shows strong evidence for the existence of $`T=1`$ $`np`$ pair correlations, and at the same time no evidence for $`T=0`$ correlated pairs. The $`T=1`$ states should then lie at a lower energy than that given by the symmetry term, and if the $`T=1`$ pairing energy were sufficiently large, the $`T=1`$ state may lie lower than the $`T=0`$ state. The experimental energy differences are shown in Fig. 4 along with the expected contribution from the symmetry energy. The energy separation between the states of different isospin is clearly less than that predicted by the symmetry term. 
This is consistent with the pairing arguments presented above, and suggests that whether the $`T=0`$ or $`T=1`$ state is lower depends largely on the relative magnitudes of the symmetry and pairing energies. We further note that while the near cancellation of the symmetry and pairing terms (for $`T=1`$ compared with $`T=0`$) appears to be accidental we can not rule out, at this time, a deeper physical origin. Assuming the reduced separation is only due to the effects of pairing then, in the language of the BCS model and taking the symmetry term into account, the $`T=0`$ state in the odd-odd $`N=Z`$ nucleus can be interpreted as a 2-quasiparticle excitation (“broken-pair” with seniority 2) relative to the $`T=1`$ correlated pair state. In complete analogy with Eq. (1) we have $`(BE_{T=1}BE_{sym})BE_{T=0}2\mathrm{\Delta }_{np},`$ (3) or, in terms of excitation energies, $`E_{sym}(E_{T=1}E_{T=0})2\mathrm{\Delta }_{np}.`$ (4) (Note, this is the difference between the lowest $`T=1`$ and $`T=0`$ state in the same N=Z odd-odd nucleus.) The effective gap ($`\mathrm{\Delta }_{np}`$), thus extracted, is presented in the insert to Fig. 4, where for comparison the result of a BCS calculation that includes $`nn`$, $`pp`$, and $`np`$ $`T=1`$ pairs is also shown. In this calculation we adopted standard single-particle levels from a spherical Nilsson potential and a pairing strength of $`20\text{MeV}/A`$. This figure illustrates that the magnitude of $`2\mathrm{\Delta }_{np}`$ extracted from experiment using Eq. (4) compares favorably with that obtained from the spectrum of single-particle levels. While the gap (difference in binding energy) is not necessarily related only to a pairing interaction , the agreement is remarkable. Due to the presence of shell gaps the simple BCS model gives a characteristic oscillation in $`\mathrm{\Delta }`$. In this calculation, the single-particle levels were truncated at $`N=Z=50`$, which led to an artificial quenching of $`\mathrm{\Delta }`$ at A=100. The reversal of the favored isospin from $`T=1`$ to $`T=0`$ at <sup>58</sup>Cu coincides with it being one $`np`$ pair above the $`N=Z=28`$ and, within the pairing interpretation given here, occurs because the shell gap reduces the magnitude of the $`T=1`$ pair gap. For heavier nuclei, $`A>60`$, the $`T=1`$ state is favored and we would expect that this is likely to remain the case until the $`N=Z=50`$ shell gap is reached, where for <sup>98</sup>In (N=Z=49) the ground state may well revert to $`T=0`$ once more. The competition between pairing and symmetry energy was also discussed in Ref. . In this work, semi-empirical fits to the binding energies suggested that for odd-odd $`N=Z`$ nuclei beyond the $`1f_{7/2}`$ shell, pairing correlations will result in $`T=1`$ ground states. In conclusion, we have argued that binding energy differences indicate that the lowest $`T=1`$ states in odd-odd $`N=Z`$ nuclei are as bound as their even-even neighbors, which provides strong evidence for the presence of isovector $`np`$ pairing. There is, however, no similar evidence to support the existence of $`np`$ isoscalar pair correlations. The intriguing switch from $`T=0`$ to $`T=1`$ ground states in odd-odd $`N=Z`$ nuclei arises from a subtle competition between the symmetry energy and isovector pairing. For $`A>40`$, $`T=1`$ pairing wins over the symmetry energy and the $`J=0^+`$ state becomes the ground state, except, possibly, near closed shells where the “collective” effects of pairing are expected to be reduced. 
Future experiments on $`N=Z`$ nuclei to determine the binding energies and the relative excitation energies of $`T=1`$ and $`T=0`$ states (in odd-odd nuclei), as well as studies of their high-spin rotational properties are necessary and will provide further tests of the role of pairing in $`N=Z`$ nuclei. This work was supported under DOE contract No. DE-AC03-76SF00098. We are grateful to A.Goodman for discussions on the use of BCS equations that include np pairs. One of us (AOM) would like to thank D.R.Bes, H.Sofia, N.Scoccola and O.Civitarese for a valuable discussion on this subject.
no-problem/9907/physics9907039.html
ar5iv
text
# Observation of Nonlinear Mode in a Cylindrical Fabry-Perot Cavity ## FIGURE CAPTIONS FIG 1. The cylindrical Fabry-Perot cavity, showing the coordinate system used and an incident Gaussian (in $`x`$) beam. FIG 2. Schematic of the experimental apparatus used to observe the spatial nonlinear mode. The D’s are detectors. FIG 3. Calculated linear absorption and $`n_2`$ in natural-abundance Rb vapor at $`80^{}\mathrm{C}`$ ($`N=1.5\times 10^{12}\mathrm{c}m^3`$), near the D2 transition at $`\lambda =780\mathrm{n}m`$, for circularly-polarized light. The arrow indicates the line used in the experiment, the Doppler-broadened $`{}_{}{}^{85}\mathrm{R}b`$, $`F=2`$ set of transitions. FIG 4. Power transmitted through the nonlinear cavity as a function of laser frequency, for a particular cavity length $`L`$ and beam power. Frequency is relative to the $`{}_{}{}^{85}\mathrm{R}b`$ line indicated in Fig. 3. FIG 5. Total power transmitted through the cavity as a function of laser frequency, for two different cavity lengths $`L`$. FIG 6. Transmitted intensity profiles of spatial nonlinear mode taken with the CCD camera, integrated over $`y`$, at the peaks of the nonlinear modes shown in Fig. 5.
no-problem/9907/physics9907032.html
ar5iv
text
# References Vestnik Leningradskogo Universiteta, No. 10, pp. 22-28, 1971 Study of the one-dimensional Schroedinger equation generated from the hypergeometric equation G.A. Natanzon Received 24 October 1970; published October 1971 Note: Translated from Russian by H.C. Rosu (November 1998) Original English Summary. \- A systematic method of constructing potentials, for which the one-variable Schroedinger equation can be solved in terms of the hypergeometric (HGM) function, is presented. All the potentials, obtained by energy-independent transformations of the HGM equation, are determined together with eigenvalues and eigenfunctions. A class of potentials derived from the confluent HGM equation is found by means of a limit process. To study theoretically the rotational and rotational-vibrational spectra of diatomic molecules, one often uses one-dimensional model potentials for which the solution can be expressed in terms of the HGM functions . The problem of the changes of variable in the HGM equation leading to the one dimensional Schroedinger equation has been studied several times in the past , but the quoted authors have considered only the case of potentials given in explicit form <sup>1</sup><sup>1</sup>1The existence of potentials $`U(x)`$, not explicitly depending on $`x`$, has been first posed in (see also ).. However, knowing the energy spectrum and the wavefunctions of the equation may prove useful for many applied problems whether or not the potential is given in explicit or implicit form. The general form of the transformations leading from the HGM equation to the Schroedinger equation is determined by the requirement ($`z^{^{}}=dz/dx`$) $$(z^{^{}})^2I(z)+\frac{1}{2}\{z,x\}=k^22MU(x),$$ (1) where $$I(z)=\frac{(1\lambda _0^2)(1z)+(1\lambda _1^2)z+(\mu ^21)z(1z)}{4z^2(1z)^2};$$ (2) and $`\{z,x\}`$ is the Schwartzian derivative of $`z(x)`$ with respect to $`x`$ $$\{z,x\}=\frac{z^{^{\prime \prime }}}{z^{^{}}}\left[\frac{z^{^{\prime \prime \prime }}}{z^{^{\prime \prime }}}\frac{3}{2}\frac{z^{^{\prime \prime }}}{z^{^{}}}\right];$$ (3) $`U(x)`$ is the potential function; $`M`$ is the reduced mass; $`E=\frac{k^2}{2M}`$ is the energy. When condition (1) is fulfilled the wavefunction $`\mathrm{\Psi }`$ is related to the HGM function as follows $$\mathrm{\Psi }[z(x)]=(z^{^{}})^{1/2}z^{\frac{\lambda _0+1}{2}}(1z)^{\frac{\lambda _1+1}{2}}F(\alpha ,\beta ,\gamma ;z).$$ (4) Here $$\{\begin{array}{cc}\lambda _0=\gamma 1,\hfill & \\ \lambda _1=\alpha +\beta \gamma ,\hfill & \\ \mu =\beta \alpha .\hfill & \end{array}$$ (5) We shall assume that $`z(x)`$ and therefore also $`\{z,x\}`$ do not depend on the energy $`E`$. 
In this case, comparing the lhs and rhs of Eq.(1) one concludes that the parameters $`\mu ^2`$, $`\lambda _0^2`$ and $`\lambda _1^2`$ are linear in $`k^2`$ $$\{\begin{array}{cc}1\mu ^2=ak^2f\hfill & \\ 1\lambda _p^2=c_pk^2h_p\hfill & p=0,1\hfill \end{array}$$ (6) and therefore $`z(x)`$ fulfills the differential equation $$\frac{(z^{^{}})^2R(z)}{4z^2(1z)^2}=1$$ $`(7)`$ where $$R(z)=a(z1)z+c_0(1z)+c_1z$$ $`(8a)`$ or in a slightly different form $$R(z)=az^2+b_0z+c_0=a(z1)^2+b_1(z1)+c_1.$$ $`(8b)`$ Introducing in $`\{z,x\}`$ the logarithmic derivatives of $`z^{^{}}`$ and $`z^{^{\prime \prime }}`$ that can be found using Eq.(7), one next gets the potential from Eq.(1) by means of Eq.(6) $$2MU[z(x)]=\frac{fz(z1)+h_0(1z)+h_1z+1}{R}+\left[a+\frac{a+(c_1c_0)(2z1)}{z(z1)}\frac{5}{4}\frac{\mathrm{\Delta }}{R}\right]\frac{z^2(1z)^2}{R^2}$$ $`(9)`$ where the discriminant $`\mathrm{\Delta }=b_p^24ac_p=(ac_1c_0)^24c_1c_0`$. Thus, one will get a semiparametric (i.e., including the constant of integration of the differential equation) family of potential curves, for one can always choose in the role of the three parameters, the scale factor, the origin of the coordinate $`x`$ and the origin of the energy $`E`$. For the Schroedinger solution given by Eq.(4) the variable $`z`$ is considered within the interval . The corresponding interval for the variable $`x`$ will be studied shortly. Since the case of the potential well of infinite depth is not of much interest, we shall ask the three-term quadratic polynomial $`R(z)`$ to have no zeros for $`0<z<1`$, that is we shall consider $$R(z)>0.$$ $`(10)`$ It follows from Eq.(7) that in the unit interval the function $`z(x)`$ is monotone so that the corresponding transformation is single valued. Eq.(10) implies <sup>2</sup><sup>2</sup>2For $`\mathrm{\Delta }>0,a>0`$ the coefficients $`b_0`$ and $`b_1`$ should be of the same sign, i.e., $`b_0b_1=(c_1c_0)^2a^2>0`$. $$c_p0$$ $`(11a)`$ $$a<(\sqrt{c_1}+\sqrt{c_0})^2.$$ $`(11b)`$ Integrating Eq.(7) gives a) $`\mathrm{\Delta }0`$ $$\pm 2x=\underset{p=0}{\overset{1}{}}(1)^p\left[\sqrt{c_p}\mathrm{ln}|b_p2\sqrt{c_p}t_p|b_p\frac{dt_p}{t_p^2a}\right]$$ $`(12a)`$ where $`t_p=\frac{\sqrt{r}\sqrt{c_p}}{zp}`$; b) $`\mathrm{\Delta }=0`$ $$\pm 2(xx_0)=\underset{p=0}{\overset{1}{}}(1)^p\sqrt{c_p}\mathrm{ln}|zp|.$$ $`(12b)`$ The transformation $`z(x)`$ can be obtained in explicit form only for a few particular values of the parameters $`a`$ and $`c_p`$. One should notice the case when the zeros of $`R`$ coincide with the singularities of the HGM equation. The resulting potentials have been already considered by various authors as follows a) Poeschl-Teller potentials ($`R=b_0z(1z)`$ or $`R=b_0`$): $$2Mb_0U(x)=\{\begin{array}{cc}f+1\frac{h_0+\frac{3}{4}}{\mathrm{sin}^2(x/\sqrt{b_0})}\frac{h_1+\frac{3}{4}}{\mathrm{cos}^2(x/\sqrt{b_0})}(13a)\hfill & \\ h_1+1+\frac{h_0+\frac{3}{4}}{\mathrm{sh}^2(x/\sqrt{b_0})}\frac{f+\frac{3}{4}}{\mathrm{ch}^2(x/\sqrt{b_0})}(13b)\hfill & \end{array}$$ b) Rosen-Morse potentials ($`R=c_0`$) $$2Mc_0U(x)=\frac{h_0+h_1+2}{2}+\frac{h_0h_1}{2}\mathrm{th}(x/\sqrt{c_0})\frac{1}{4}\frac{f}{\mathrm{ch}^2(x/\sqrt{c_0})}$$ $`(14)`$ c) Manning-Rosen potentials ($`R=az^2`$) $$2MaU(x)=\frac{f+h_1+2}{2}+\frac{fh_1}{2}\mathrm{cth}(x/\sqrt{a})\frac{1}{4}\frac{h_0}{\mathrm{sh}^2(x/\sqrt{a})}.$$ $`(15)`$ The potentials (13-15) have the following general properties: the isotopic shift can be described by means of some changes of the parameters $`f`$, $`h_0`$ and $`h_1`$. 
This property does not extend to the whole family of potential curves given by Eq.(9). The scale transformation of the coordinate $`x`$ leads in this case to an equation with five singular points, 0, 1, $`z_1`$, $`z_2`$, and $`\mathrm{}`$, where $`z_1`$ and $`z_2`$ are the zeros of $`R`$. Let us find now the interval where $`x`$ is defined for $`\mathrm{\Delta }0`$. For that, we first notice that the integral entering Eq. (12a) has no singularities, because from $`t_p=\sqrt{a}`$ or $$\sqrt{a+\frac{b_p}{zp}+\frac{c_p}{(zp)^2}}=\sqrt{a}+\frac{c_p}{zp}$$ $`\mathrm{\Delta }=0`$ would follow. This is why, performing $`zp`$ ($`t_pb_p/2\sqrt{c_p}`$) in Eq. (12), we get the limits $`\pm \mathrm{}`$ in $`x`$ for $`c_p0`$; if however one of the coefficients $`c_p`$ is nought then the corresponding choice of the coordinate origin for $`x`$ as well as of the sign in Eq.(12a) can be done such that $$0x<\mathrm{}.$$ Finally, the variable $`x`$ has a finite definition interval whenever both $`c_0`$ and $`c_1`$ are zero. The case when only one of the $`c_p`$ coefficients, for example $`c_0`$, is zero, is of special interest for the theory of two atom molecules, because for $`x0`$ ($`z0`$) the function $`2Mb_0U(x)`$ goes to infinity as $`(h_0+\frac{3}{4})z^1`$. For this potential, contrary to the Morse case, the Schroedinger equation is solved with regular boundary conditions for $`x=0`$ (see ). If $`h_0=\frac{3}{4}`$, the potential $`U(x)`$ is finite at the origin and for $`x0`$ ($`0z1`$) reads $$2MaU[z(x)]=\frac{f(z1)+h_1+\frac{3}{4}}{zz_1}+1+\frac{3}{4}\frac{(3z_11)(1z_1)}{(zz_1)^2}\frac{5}{4}\frac{z_1(1z_1)^2}{(zz_1)^3}$$ $`(16)`$ For $`x<0`$ it is natural to a priori determine the potential Eq.(16) to be symmetric $`U(x)=U(x)`$. Notice that for $$\frac{1}{4}\frac{9}{4}\frac{(1z_1)}{z_1}>(f+\frac{3}{4})(1z_1)>1$$ the potential Eq.(16) has two symmetric minima separated by a small barrier. Since the HGM solution which is irregular at $`z=1`$ has a $`(1z)^{\lambda _1}`$ ($`\lambda _1>0`$) behavior at that point, the integral $$_0^{\mathrm{}}\mathrm{\Psi }^2𝑑x=\frac{1}{4}_0^1Rz^{\lambda _01}(1z)^{\lambda _11}[F(\alpha ,\beta ,\gamma ;z)]^2𝑑z$$ $`(17)`$ is finite only if $`\alpha `$ (or $`\beta `$) is equal either to a negative integer or zero: $`\alpha =n`$. The eigenfunctions $`\mathrm{\Psi }_n^\pm `$ are determined by the conditions $$\frac{d\mathrm{\Psi }_n^+}{dx}|_{x=0}=0$$ $`(18a)`$ and $$\mathrm{\Psi }_n^{}(0)=0,$$ $`(18b)`$ leading to a finite integral Eq. (17). The energy spectrum is determined by $$p+\frac{1}{2}+\sqrt{h_1+1c_1k_p^2}=\sqrt{f+1ak_p^2},$$ $`(19)`$ where $`p=2n`$ for even levels and $`p=2n+1`$ for the odd ones, respectively. Let us clarify now under what conditions the potential Eq.(9) leads to a discrete spectrum. We shall consider $`c_10`$. For a discrete spectrum to occur, the function $`(z^{^{}})^{\frac{1}{2}}\mathrm{\Psi }`$ should go to $`\mathrm{}`$ when $`z0`$ at least as $`z^{\frac{1}{2}}`$, if one takes $`\mathrm{\Psi }`$ as the general solution of the Schroedinger equation. This condition is obviously true for $`c_00`$. If $`c_0=0`$ the existence of the discrete spectrum is possible only for positive values of $`h_0`$. The energy levels come from the equation $$2n+1=\sqrt{f+1ak_n^2}\sqrt{h_0+1c_0k_n^2}\sqrt{h_1+1c_1k_n^2}.$$ $`(20)`$ It is easy to see that the spectrum is bounded from above, if and only if both $`c_0`$ and $`c_1`$ are not zero, and therefore, it follows from Eq.(20) that the discrete part of the spectrum has only a finite set of energy levels. 
Only the potential curve Eq.(13a), for which $c_0=c_1=0$, has an infinity of discrete levels. The eigenfunctions of the discrete spectrum are of the form
$$\Psi_n=B_nz^{\frac{\lambda_0-1}{2}}(1-z)^{\frac{\lambda_1-1}{2}}(Rz')^{\frac{1}{2}}P_n^{(\lambda_1,\lambda_0)}(2z-1).$$ (21)
Here $P_n^{(\lambda_1,\lambda_0)}(2z-1)$ are the Jacobi polynomials, and $B_n$ is a normalization constant given by
$$B_n=\left[\left(\frac{c_1}{\lambda_1}+\frac{c_0}{\lambda_0}-\frac{a}{\mu}\right)\frac{\Gamma(\lambda_0+n+1)\,\Gamma(\lambda_1+n+1)}{n!\,\Gamma(\mu-n)}\right]^{-\frac{1}{2}}$$ (22)
(see the integrals 7.391 (1) and (5) in ). In particular, the eigenfunctions $\Psi_n^{(\pm)}$ corresponding to the potential Eq.(16) are obtained from Eq.(21) for $c_0=0$, $\lambda_0=\pm\frac{1}{2}$. We notice that the potential curve Eq.(13a) displays one more important property: the Jacobi polynomials for neighbouring energy levels are connected by simple recurrence relations (the parameters $\lambda_0$ and $\lambda_1$ do not depend on energy in this case), and the matrix element $\langle m|z|n\rangle$ is different from zero only for $m=n,\,n\pm1$.

To pass to the limit of the confluent HGM equation we introduce the notation
$$a=\frac{\sigma_2}{\tau^2},\qquad c_1=\frac{\sigma_2}{\tau^2}+\frac{\sigma_1}{\tau}+c_0\qquad\left(b_0=\frac{\sigma_1}{\tau}\right),$$ (23a)
$$f=\frac{g_2}{\tau^2},\qquad h_1=\frac{g_2}{\tau^2}+\frac{g_1}{\tau}+h_0$$ (23b)
and
$$z=\tau\zeta.$$ (24)
For $\tau\to0$ we have
$$\frac{(\zeta')^2R(\zeta)}{4\zeta^2}=1,$$ (25)
$$I(\zeta)=-\frac{\delta_2}{4}-\frac{\delta_1}{4\zeta}+\frac{1-\lambda_0^2}{4\zeta^2},$$ (26)
where now $R(\zeta)=\sigma_2\zeta^2+\sigma_1\zeta+c_0$ and
$$\delta_1=\lim_{\tau\to0}\left[\tau(\lambda_1^2-\mu^2)\right]=g_1-\sigma_1k^2,$$ (27a)
$$\delta_2=\lim_{\tau\to0}\left[\tau^2\mu^2\right]=g_2-\sigma_2k^2\geq0.$$ (27b)
For $\delta_2>0$ (for $\delta_2=0$ one has $\Psi=(\zeta')^{-1/2}\zeta^{1/2}Z_{\frac{\lambda_0}{2}}(\sqrt{\delta_1}\,\zeta^{1/2})$; see formula 2.162 (1a) in ) the wavefunction $\Psi$ is connected with the Whittaker function and the confluent series as follows:
$$\Psi=(\zeta')^{-1/2}M_{-\frac{\delta_1}{4\sqrt{\delta_2}},\frac{\lambda_0}{2}}(\sqrt{\delta_2}\,\zeta)$$ (28a)
or
$$\Psi=(\zeta')^{-1/2}(\sqrt{\delta_2}\,\zeta)^{\frac{\lambda_0+1}{2}}e^{-\sqrt{\delta_2}\,\zeta/2}\,F\!\left(\frac{\gamma}{2}+\frac{\delta_1}{4\sqrt{\delta_2}},\gamma;\sqrt{\delta_2}\,\zeta\right).$$ (28b)
The series terminates if
$$-\frac{\gamma}{2}-\frac{\delta_1}{4\sqrt{\delta_2}}=n,\qquad n=0,1,2,\ldots$$ (29)
The potential $U[\zeta(x)]$ takes the form
$$2MU[\zeta(x)]=\frac{g_2\zeta^2+g_1\zeta+h_0+1}{R}+\left[\frac{\sigma_1}{\zeta}-\sigma_2-\frac{5\Delta}{4R}\right]\frac{\zeta^2}{R^2},$$ (30)
where $\Delta=\sigma_1^2-4\sigma_2c_0$; here the function $\zeta(x)$ is given in the following implicit form:
$$\pm2(x-x_0)=\sqrt{R}+\sqrt{c_0}\,\ln\left|\sigma_1-2t\sqrt{c_0}\right|+\frac{\sigma_1}{2\sqrt{\sigma_2}}\,\ln\left|\frac{t+\sqrt{\sigma_2}}{t-\sqrt{\sigma_2}}\right|,$$ (31)
where $t=\frac{\sqrt{R}-\sqrt{c_0}}{\zeta}$.
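Since Eq.(31) is implicit, a direct numerical sanity check is useful. The sketch below (illustrative parameter values satisfying the positivity conditions quoted next) verifies that the map $x(\zeta)$ of Eq.(31) indeed obeys $dx/d\zeta=\sqrt{R}/2\zeta$, as required by Eq.(25):

```python
# Hedged check of Eq.(31) against Eq.(25); sample parameters only.
import numpy as np

s2, s1, c0 = 0.7, 1.3, 0.9          # sigma_2, sigma_1, c_0

def x_of_zeta(z):
    R = s2*z**2 + s1*z + c0
    t = (np.sqrt(R) - np.sqrt(c0))/z
    return 0.5*(np.sqrt(R)
                + np.sqrt(c0)*np.log(np.abs(s1 - 2*t*np.sqrt(c0)))
                + s1/(2*np.sqrt(s2))*np.log(np.abs((t + np.sqrt(s2))
                                                   /(t - np.sqrt(s2)))))

z = np.linspace(0.5, 5.0, 9)
num = (x_of_zeta(z*1.001) - x_of_zeta(z*0.999))/(0.002*z)   # central difference
exact = np.sqrt(s2*z**2 + s1*z + c0)/(2*z)
print(np.max(np.abs(num/exact - 1)))    # ~1e-6: the constant x_0 drops out
```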
It is supposed that $R(\zeta)$ has no zeros within $(0,\infty)$, that is,
$$\sigma_2\geq0,\qquad c_0\geq0,\qquad \sigma_1>-2\sqrt{c_0\sigma_2}.$$ (32)
If two of the three parameters $\sigma_1$, $\sigma_2$ and $c_0$ turn to zero, Eq.(31) is easy to solve with respect to $\zeta$, and the potential reduces in this case to: a) a spherically symmetric oscillator well (or barrier) for $\sigma_2=c_0=0$; b) a Morse potential for $\sigma_1=\sigma_2=0$; c) a Kratzer potential for $\sigma_1=c_0=0$. We remark that in the work an implicit potential occurs for which the Schroedinger equation is turned into a confluent HGM equation. That potential can be obtained from Eq.(30) if $\sigma_2=0$. The assumptions in may be considered equivalent to a transformation $\zeta(x)$ that does not depend on energy, and therefore the results of can easily be obtained in our framework.

Let us now study the definition interval of $x$. For $\zeta\to\infty$, $x$ goes to $\pm\infty$ as $\pm\frac{1}{2}\sqrt{\sigma_2}\,\zeta$ if $\sigma_2\neq0$, and as $\pm\sqrt{\sigma_1\zeta}$ if $\sigma_2=0$, $\sigma_1\neq0$, because in the latter case
$$\pm2(x-x_0)=\sqrt{R}+\frac{\sigma_1}{t}+\sqrt{c_0}\,\ln|\zeta t^2|$$ (33)
(for $\sigma_2=0$, $\sigma_1=0$ the variable $x$ varies from $-\infty$ to $+\infty$). For $\zeta\to0$, $x\to-\infty$ if $c_0\neq0$, and $x\to x_0$ if $c_0=0$.

For $\sigma_2\neq0$, the zero of the energy is fixed by the condition $U(\infty)=0$, i.e., $g_2=0$. In this case one gets for the energy spectrum the following equation:
$$2n+1=-\frac{\sigma_1}{2\sqrt{\sigma_2}}|k_n|-\frac{g_1}{2\sqrt{\sigma_2}\,|k_n|}-\sqrt{h_0+1-c_0k_n^2}.$$ (34)
From this it follows that: a) for $g_1\geq0$, $h_0+1\geq0$, there is no discrete spectrum, because
$$-\frac{\sigma_1}{2\sqrt{\sigma_2}}|k_n|-\sqrt{h_0+1-c_0k_n^2}\leq-\left[\frac{\sigma_1}{2\sqrt{\sigma_2}}+\sqrt{c_0}\right]|k_n|\leq0;$$ (35)
b) for $g_1\geq0$, $h_0+1<0$, a finite number of discrete levels is possible if $\sigma_1<0$; c) for $g_1<0$ the discrete spectrum has an infinite number of levels converging to zero.

For $\sigma_2=0$ the discrete levels are determined by the equation
$$1+h_0-c_0k_n^2=\left(2n+1+\frac{g_1-\sigma_1k_n^2}{2\sqrt{g_2}}\right)^2$$ (36)
and for $c_0\neq0$ the discrete spectrum may have only a finite number of energy levels (the case $\sigma_2=0$, $c_0=0$ corresponds to an infinite number of levels); the Morse-type case $\sigma_1=\sigma_2=0$ is cross-checked numerically at the end of this section.

The wavefunctions of the discrete spectrum are given in terms of the Laguerre polynomials as
$$\Psi_n=B_nR^{\frac{1}{2}}(\sqrt{\delta_2}\,\zeta)^{\frac{\lambda_0-1}{2}}e^{-\sqrt{\delta_2}\,\zeta/2}(\sqrt{\delta_2}\,\zeta')^{\frac{1}{2}}L_n^{(\lambda_0)}(\sqrt{\delta_2}\,\zeta).$$ (37)
Here the normalization factor $B_n$ reads
$$B_n=\left[\left(\frac{c_0}{\lambda_0}+\frac{\sigma_1}{\sqrt{\delta_2}}+(\lambda_0+2n+1)\frac{\sigma_2}{\delta_2}\right)\frac{\Gamma(\gamma+n)}{n!}\right]^{-\frac{1}{2}}.$$ (38)
The integrals required for its calculation can be obtained from $I_\nu$:
$$I_\nu=\int e^{-\sqrt{\delta_2}\,\zeta}\zeta^{\nu-1}\left[F(-n,\gamma;\sqrt{\delta_2}\,\zeta)\right]^2d\zeta$$ (39)
for $\nu=\gamma\pm1,\gamma$.

The potentials given by Eqs.(9) and (30) may find useful applications in the theory of diatomic molecules, in particular for the calculation of such important quantities as the Franck-Condon factors and anharmonic constants.
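As the promised cross-check: for $\sigma_1=\sigma_2=0$ Eq.(30) degenerates to the Morse-type curve $2Mc_0U=g_2e^{4x/\sqrt{c_0}}+g_1e^{2x/\sqrt{c_0}}+h_0+1$ (with $\zeta=e^{2x/\sqrt{c_0}}$ from Eq.(25)), whose levels follow from Eq.(36) in closed form. The sketch below (illustrative parameters and a grid setup of our own) compares them with a brute-force diagonalization:

```python
# Hedged cross-check of Eqs.(30) and (36) in the Morse limit.
import numpy as np
from scipy.linalg import eigh_tridiagonal

c0, g2, g1, h0 = 1.0, 1.0, -10.0, 0.0        # assumed sample values

def twoMU(x):                                # Eq.(30), sigma_1 = sigma_2 = 0
    zeta = np.exp(2*x/np.sqrt(c0))
    return (g2*zeta**2 + g1*zeta + h0 + 1)/c0

# closed form from Eq.(36):  1 + h0 - c0 k_n^2 = (2n+1 + g1/(2 sqrt(g2)))^2
for n in range(2):
    k2 = (1 + h0 - (2*n + 1 + g1/(2*np.sqrt(g2)))**2)/c0
    print(f"Eq.(36):  n={n}  k_n^2 = {k2:.6f}")

# grid spectrum of  -Psi'' + 2MU Psi = k^2 Psi  (eigenvalue is k^2 = 2ME)
x = np.linspace(-8.0, 2.5, 4000)
h = x[1] - x[0]
diag = 2/h**2 + twoMU(x)
off = -np.ones(len(x) - 1)/h**2
vals = eigh_tridiagonal(diag, off, select='i', select_range=(0, 1))[0]
print("grid diagonalization:", np.round(vals, 4))   # ~ -15, -3
```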
Acknowledgments

The author is deeply grateful to M. N. Adamov, Yu. N. Demkov, and I. V. Komarov for their critical remarks.
# Implementation of the Non-Linear Gauge into GRACE<sup>1</sup>

<sup>1</sup>Talk presented by K. Kato at 6th AIHENP, April 1999, University of Crete.

G. Bélanger<sup>1)</sup>, F. Boudjema<sup>1)</sup>, J. Fujimoto<sup>2)</sup>, T. Ishikawa<sup>2)</sup>, T. Kaneko<sup>3)</sup>, K. Kato<sup>4)</sup>, V. Lafage<sup>2)</sup>, N. Nakazawa<sup>4)</sup>, Y. Shimizu<sup>2)</sup>

1) LAPTH, Annecy-le-Vieux F-74941, France 2) KEK, Tsukuba, Ibaraki 305–0801, Japan 3) Meiji-Gakuin University, Totsuka, Yokohama 244–0816, Japan 4) Kogakuin University, Shinjuku, Tokyo 163–8677, Japan

Abstract

> A general non-linear gauge condition is implemented into GRACE, an automated system for the calculation of physical processes in high-energy physics. This new gauge fixing is used as a very efficient means of checking the results of large-scale evaluations in the standard model computed automatically. We report on some systematic test runs which have been performed for one-loop two-to-two processes to show the validity of the gauge check.

A major part of the theoretical predictions in high-energy physics is based on perturbation theory. However, the complexity increases rapidly as one moves to higher orders in perturbation theory, as when dealing with loop corrections or with many-body final states. In many instances calculations, if done by hand, become intractable and prone to error. Since perturbation theory is a well-established algorithm, it is possible to construct a system, or software, to perform these calculations automatically, and several such expert systems are in operation for high-energy physics. The Minami-Tateya group has developed the system named GRACE; its structure is depicted in the figure (Figure 1: GRACE system flow). The system can, in principle, deal with the perturbative series up to any order: all diagrams contributing to a process are generated automatically once the order of perturbation and the external particles are specified. However, owing to the handling of the loop integrals, practical calculations are for the moment restricted to tree and one-loop orders; two-loop calculations are possible only in some limited cases.

The GRACE system can work for any type of theory once the corresponding model file is implemented. Here the model file is a database which stores all component fields and interactions contained in the Lagrangian of the theory. Besides the model file, peripheral parts are sometimes required: for instance, if the structure of a vertex in the theory is absent from the GRACE tool box, its definition should be added. Likewise, a soft-correction factor, kinematics code, the interface to structure functions, the parameter-control section, and so forth may have to be supplied. The system is versatile enough to incorporate such new features, and the added components then become part of the next version of GRACE. This report is confined to calculations based on the so-called standard model; the extension to SUSY is presented in a separate talk.

In contrast to manual computation, the theoretical prediction from an automated system is obtained by invoking several commands at a terminal, with some elapsed time (which is often rather long) due, for instance, to numerical integrations over phase space. With an automated system it is especially difficult to judge the reliability of the final result, hence the need for a built-in automated function to confirm it. At tree level, the check of gauge invariance has been shown to be powerful.
Within GRACE the comparison is done between the unitary gauge and the covariant gauge, and we observe 15 (30) digits of agreement in double (quadruple) precision computation. In the one-loop case, the calculation in the unitary gauge does not work well. Within GRACE one of the problems in this gauge is that the library containing the various loop integrals is designed assuming that the numerator of the vector-particle propagator is $g^{\mu\nu}$. For instance, the library for vertex functions is implemented for numerators which are polynomials of 3rd order in the loop momentum, whereas 9th-order polynomials would have to be handled in the unitary gauge. This not only creates very large expressions but also introduces terms with large superficial divergences that eventually need to cancel precisely among many separate contributions.

The non-linear gauge is introduced to make the gauge check possible within GRACE. We take a generalized non-linear gauge-fixing condition for the standard model:
$$\begin{array}{rl}\mathcal{L}_{\mathrm{GF}}=&-\dfrac{1}{\xi_W}\left|\left(\partial_\mu-ie\tilde{\alpha}A_\mu-ig\cos\theta_W\tilde{\beta}Z_\mu\right)W^{+\mu}+\xi_W\dfrac{g}{2}\left(v+\tilde{\delta}H+i\tilde{\kappa}\chi_3\right)\chi^+\right|^2\\[2mm] &-\dfrac{1}{2\xi_Z}\left(\partial_\mu Z^\mu+\xi_Z\dfrac{g}{2\cos\theta_W}\left(v+\tilde{\epsilon}H\right)\chi_3\right)^2-\dfrac{1}{2\xi_A}\left(\partial_\mu A^\mu\right)^2\end{array}$$ (1)
We take $\xi_W=\xi_Z=\xi_A=1$ so that the numerator structure of the vector propagators is $g^{\mu\nu}$, as in the usual 't Hooft-Feynman gauge; any of the other parameters can then provide a gauge check, and the GRACE library for computing loop amplitudes works without any change. When $\tilde{\alpha}=\tilde{\beta}=\tilde{\delta}=\tilde{\kappa}=\tilde{\epsilon}=0$, the gauge reduces to the 't Hooft-Feynman gauge. For instance, when $\tilde{\alpha}=1$ ($\tilde{\beta}=-\tan^2\theta_W$), the $W^\pm\chi^\mp\gamma$ ($W^\pm\chi^\mp Z$) vertex disappears (a schematic illustration is given below). Though in the non-linear gauge we have new vertices in the ghost sector, e.g., ghost-ghost-vector-vector vertices, we can reduce the total number of diagrams and therefore speed up the calculation of many processes. The Feynman rules in the model file are of course revised to take into account the modifications due to the non-linear gauge. This includes all the vertices at tree level as well as all the appropriate counter-terms needed for any one-loop calculation. As a check on the new input file, we have confirmed that the results are unchanged from the original standard-model case when all non-linear gauge parameters are set to 0.

We have tested the system for several tree-level processes as well as for one-loop two-to-two processes. Here we report on the tests done at the one-loop level. The ultraviolet divergence is regularized by dimensional regularization, with space-time dimension $n=4-2\epsilon$. In the code generated by GRACE, the divergence is kept as a variable Cuv, which stands for $1/\epsilon-\gamma_E+\log4\pi$. As a first step in checking the system, we have compared, for a given random set of the gauge parameters, the results with Cuv $=0$ and with Cuv $\neq0$. Since the agreement is exact, the system passes the first diagnosis, renormalizability. We then proceed to check that the finite result is independent of the choice of the non-linear gauge parameters. Some of the processes are listed in the table below.
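Before turning to the numbers, a schematic way to see the vertex cancellation quoted above (a sketch in our own sign conventions, which may differ from those used in GRACE):

```latex
% Cross terms mixing A, W^\pm and \chi^\mp (schematic; M_W = gv/2).
% From the non-linear gauge-fixing term one finds a contribution of the form
\mathcal{L}_{\mathrm{GF}} \;\supset\;
   i e\,\tilde{\alpha}\, M_W\, A^\mu \left( W^+_\mu \chi^- - W^-_\mu \chi^+ \right),
% while the Higgs kinetic term |D_\mu \Phi|^2 supplies the same operator with
% coefficient -ie M_W.  The total W^\pm\chi^\mp\gamma coupling is therefore
% proportional to (1 - \tilde{\alpha}) and vanishes for \tilde{\alpha} = 1;
% the analogous Z cross term fixes \tilde{\beta} = -\tan^2\theta_W.
```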
The center-of-mass energy and the masses used in the computation are as follows: $W=500$ GeV, $M_Z=91.187$ GeV, $M_W=80.37$ GeV, $M_H=100$ GeV, $M_t=174$ GeV. For the regularization of the infrared divergences we introduce a fictitious photon mass, whose value in the computation is $\lambda=10^{-6}$ GeV. In the table we present the value of
$$\left(\frac{d\sigma^{1\text{-}loop}}{d\cos\theta}-\frac{d\sigma^{tree}}{d\cos\theta}\right)_{\theta=10^{\circ}\;(\cos\theta=0.985)}\;\propto\;2\,\mathrm{Re}\left(T^{loop}\,T^{tree\,*}\right),$$
where LG stands for the linear-gauge ('t Hooft-Feynman) case with Cuv=0, and NLG for the non-linear-gauge case with Cuv=1, $\tilde{\alpha}=\tilde{\beta}=1$, $\tilde{\delta}=\tilde{\kappa}=\tilde{\epsilon}=0$; the latter choice corresponds to the background-gauge formalism. The number of diagrams depends on the choice of the gauge-fixing condition, since some vertices are absent in particular gauges. The count of the total number of diagrams in the table refers to all possible diagrams, so with appropriate gauge choices this number may be reduced. The numbers of diagrams involving counter-term insertions and self-energy contributions are denoted by CT and SE, respectively.

| | Number of Diagrams | | | $d\sigma/d\cos\theta$ [pb] | |
| --- | --- | --- | --- | --- | --- |
| process | tree | 1-loop | (CT : SE) | LG | NLG |
| $e^+e^-\to t\bar{t}$ | 2 | 52 | (4 : 6) | -0.870876519 | -0.87087619 |
| $e^+e^-\to HZ$ | 1 | 119 | (3 : 4) | -0.03174046785 | -0.03174046785 |
| $e^+e^-\to W^-W^+$ | 3 | 152 | (5 : 5) | -0.9963368092 | -0.9964451605 |
| $t\bar{b}\to t\bar{b}$ | 6 | 268 | (12 : 14) | 33.75029132 | 33.75132514 |
| $W^+W^-\to t\bar{t}$ | 4 | 238 | (8 : 6) | -0.08938607492 | -0.08938607286 |
| $ZH\to t\bar{t}$ | 4 | 352 | (8 : 8) | 2.672194263 | 2.672194265 |
| $W^+\gamma\to t\bar{b}$ | 4 | 238 | (8 : 6) | -0.5998664910 | -0.5998663896 |
| $W^+Z\to t\bar{b}$ | 4 | 282 | (8 : 6) | -0.2216982981 | -0.2216940817 |
| $W^+H\to t\bar{b}$ | 4 | 283 | (8 : 6) | 0.04785521591 | 0.0478570236 |

We note that the agreement, and hence the gauge independence of the result, is excellent; the accuracy is limited only by that of the numerical evaluation of the loop integrals. Through detailed inspection of individual diagrams we have noted that, as expected, gauge independence requires different diagrams to combine in order to produce a gauge-parameter-independent result. In this sense the check by gauge invariance is a powerful diagnostic. The comparison is still in progress for other processes at one loop to establish the validity of the system. We can then proceed to more complicated cases, i.e., one-loop two-to-three or two-to-four processes, and confirm the results of large-scale computations by the non-linear gauge method presented here.

Acknowledgments

> The authors would like to thank the local organizing committee of AIHENP99 for the excellent organization. They also thank the colleagues in the Minami-Tateya group and the CPP collaboration in Japan, France, and Russia. This work is supported in part by the Ministry of Education, Science and Culture, Japan, under the Grant-in-Aid for Scientific Research Program No.10640282 and No.09649384.