# The Schweizer-Middleditch star revisited

## 1 Introduction

SN 1006 was the brightest supernova witnessed in recorded history. The estimated peak magnitude ($V = -9.5 \pm 1$; Clark & Stephenson 1977), the reported visibility for nearly two years, and the lack of a nearby OB association strongly suggest a Type Ia origin (SN Ia; Minkowski 1966). Almost all current models of Type Ia supernovae involve the nuclear explosion of a white dwarf induced by rapid mass accretion in a binary system. However, no stellar remnant of this supernova explosion, such as a pulsar or the remains of a companion star, has ever been conclusively identified. In 1980, Schweizer & Middleditch searched for just such a stellar remnant of SN 1006 and discovered a faint ($V = 16.7$) blue star $\sim 2.5'$ from the projected centre of the supernova remnant (SNR). They identified this object (now known as the Schweizer-Middleditch star, SM star or SM80) as a hot subdwarf sdOB star, and estimated its effective temperature $T_{\rm eff} = 38{,}500 \pm 4500$ K and surface gravity $\log g = 6.7 \pm 0.6$. From an estimate of the absolute magnitude, $M_V = 6.2 \pm 1.8$, Schweizer & Middleditch (1980) derived a distance to their subdwarf of 1.1 ($+1.4$, $-0.6$) kpc. Since a chance projection seemed unlikely, and the distance estimate was in rough agreement with the then existing estimates of the distance to the SNR itself, Schweizer & Middleditch (1980) suggested that their subdwarf might in fact be the remnant star, or at least associated with it. Savedoff & Van Horn (1982) later showed conclusively that the SM star could not be the remnant of the supernova itself, since the time required to cool to the observed effective temperature, $\sim 10^6$ years, is simply too long compared to the SNR age of $10^3$ years. However, this does not rule out the SM star as a stellar remnant of the donor star in a pre-SN Ia interacting binary system. Subsequent far-ultraviolet (far-UV) observations with IUE and HST/FOS revealed strong Fe II and Si II, III and IV lines superimposed on the continuum of the SM star (Wu et al. 1983; Fesen et al. 1988; Wu et al. 1993). The iron lines have symmetrical velocity profiles, broadened up to $\sim 8000$ km s$^{-1}$ FWHM. The Si features are asymmetric, redshifted and centred at a radial velocity of $\sim 5000$ km s$^{-1}$. These features have been used to estimate the mass of iron in the remnant and to map the positions of various shock regions. Importantly, though, the presence of redshifted lines from the supernova ejecta suggests that the SM star must lie behind the SNR, since they are assumed to originate in material moving away from us on the far side of the remnant. Measurements of the widths of these absorption lines, coupled with the angular size of the remnant, led Wu et al. (1993) to derive a lower limit to the SNR distance of 1.9 kpc. This contrasts strongly with the estimate of Willingale et al. (1995) of $0.7 \pm 0.1$ kpc, derived from modelling the X-ray emission detected in ROSAT PSPC observations. We were therefore motivated to re-observe and re-analyse the SM star in order to place tighter constraints on its distance, and hence on the distance to the SNR itself. Secondly, we learnt of the study by Wellstein et al. (1999), which suggests that the former donor star in an SN Ia progenitor system (an interacting binary) may subsequently appear as a low-mass hot subdwarf star.
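For orientation (our addition, not part of the original analysis), the quoted distance follows directly from the standard distance modulus; extinction is neglected in this sketch, which is why the central value lands slightly above 1.1 kpc:

```python
# Sketch: distance from apparent and absolute magnitude via the distance
# modulus, d = 10**((m - M + 5)/5) pc. Values from Schweizer & Middleditch (1980).
def distance_pc(m, M):
    """Distance in parsecs from apparent magnitude m and absolute magnitude M."""
    return 10 ** ((m - M + 5.0) / 5.0)

V, M_V = 16.7, 6.2           # SM star; extinction neglected in this sketch
print(distance_pc(V, M_V))   # ~1259 pc, within the quoted 1.1 (+1.4, -0.6) kpc range
```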
This new theoretical result re-opens the question first posed by Schweizer & Middleditch (1980) in the conclusion to their discovery paper: "Can one component of a binary system that forms a Type Ia supernova end up being a hot subdwarf or white dwarf?" In the light of Wellstein et al.'s recent work, we re-address this question.

## 2 Spectroscopy

The SM star was observed for a total of 4000 seconds on 1996 April 14 with the South African Astronomical Observatory's 1.9-m Radcliffe Telescope, the Unit spectrograph and the Reticon photon-counting system (RPCS). The RPCS has two arrays: one accumulates counts from the source, while the other records the sky background through an adjacent aperture. The target was observed for 2000 seconds through one aperture, then for a further 2000 seconds through the second aperture, in order to average out variations between the two light paths. The grating (number 6) was blazed to cover a wavelength range of $\sim 3700$ Å to 5200 Å with a resolution of $\sim 4$ Å. Flat fields were obtained at the start and end of the night, and wavelength calibration was provided by a CuAr lamp, which was observed before and after the target. A blue spectrophotometric standard (LTT 6248) was also observed. The reduced, calibrated spectrum is shown in Fig. 1.

## 3 High speed photometry

Recently, multi-periodic pulsations have been discovered in a number of subdwarf sdB stars (the EC14026 stars; Kilkenny et al. 1997). Both radial and non-radial modes are present, although the cause of these pulsations is not fully understood. Theoretical studies have shown that these oscillations may be excited by an opacity bump due to heavy-element ionization, giving rise to metal enrichment in the driving region (Charpinet et al. 1996). However, why pulsations are observed in some sdBs and not in others remains a mystery. We observed the SM star on 1999 September 4 with the South African Astronomical Observatory's 0.75-m telescope, together with the University of Cape Town's CCD photometer in high-speed mode, in order to search for pulsations. A $\sim 2600$-second light curve was obtained, consisting of 20-second exposures separated by essentially zero dead time. Four comparison stars were observed at the same time. The differential light curve is shown in Fig. 2. The SM star (star #8 in Fig. 2) shows no evidence of pulsations; the fluctuations in Fig. 2 are merely random noise. The amplitude spectrum (Fig. 3), which has been calculated out to the Nyquist frequency, also shows no evidence for pulsations. However, at $V \sim 16.7$ we are clearly unable to detect fluctuations below $\sim 0.05$ mag with this telescope. Many of the known sdB pulsators vary at the level of 0.001-0.05 mag, so clearly we cannot rule out low-level pulsations in this object. We suggest that it be re-observed on a larger telescope.

## 4 Analysis

### 4.1 Spectral analysis

The H Balmer series is visible in the calibrated optical spectrum (Fig. 1) up to H11. He I is detected at 4026 Å, 4144 Å, 4472 Å and marginally at 4922 Å. There is also a marginal detection of He II at 4686 Å. A grid of synthetic spectra derived from H and He line-blanketed NLTE model atmospheres (Napiwotzki 1997) was matched to the data to determine simultaneously the effective temperature, surface gravity and He abundance (see Heber et al. 1999). We find $T_{\rm eff} = 32{,}900$ K, $\log g = 6.18$ and $\log(N({\rm He})/N({\rm H})) = -1.7$.
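A minimal sketch of such a null test (ours, with placeholder noise in place of the real differential light curve): for evenly spaced 20-second exposures the Nyquist frequency is $1/(2 \times 20\,{\rm s}) = 25$ mHz, and any coherent pulsation would stand out as a peak above the noise floor of the amplitude spectrum.

```python
import numpy as np

# Sketch: amplitude spectrum of an evenly sampled differential light curve,
# assuming 20 s exposures with negligible dead time (cadence dt = 20 s).
dt = 20.0                                   # seconds per exposure
t = np.arange(0, 2600, dt)                  # ~2600 s run, as in the text
rng = np.random.default_rng(0)
mags = rng.normal(0.0, 0.02, t.size)        # placeholder noise; real data here

amp = np.abs(np.fft.rfft(mags - mags.mean())) * 2.0 / mags.size
freq = np.fft.rfftfreq(mags.size, d=dt)     # extends to Nyquist = 1/(2*dt)

print(f"Nyquist frequency: {freq[-1]*1e3:.1f} mHz")   # 25.0 mHz
print(f"Largest peak amplitude: {amp[1:].max():.4f} mag")
```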
While the formal statistical errors from the fitting procedure are relatively small ($1\sigma$: $\Delta T_{\rm eff} = 340$ K, $\Delta \log g = \Delta \log({\rm He/H}) = 0.1$ dex), systematics dominate the error budget; estimated by varying the spectral windows used for the profile fitting and the continuum setting, they are $\Delta T_{\rm eff} = \pm 1500$ K, $\Delta \log g = \pm 0.3$ dex and $\Delta \log(N({\rm He})/N({\rm H})) = \pm 0.3$ dex. These best-fit parameters are unchanged if H$_\epsilon$ is omitted from the fit (since it might be contaminated by Ca II). A more precise error estimate would, however, require repeat observations. We therefore find that both the temperature and the gravity are at the low end of the large ranges estimated by Schweizer & Middleditch (1980). With these parameters the SM star resembles an ordinary subdwarf B star close to the zero-age extended horizontal branch (ZAEHB).

### 4.2 Extinction

Using the Matthews & Sandage (1963) calibration, combined with our model-fit parameters, we estimate the colour excess $E(B-V) = 0.16 \pm 0.02$. From Whitford (1958) we then estimate the visual extinction $A_V = 3.0 \times E(B-V) = 0.48 \pm 0.06$. Schweizer & Middleditch measured the V magnitude from photoelectric photometry as $16.74 \pm 0.02$. We therefore take the reddening-corrected magnitude to be $V_0 = 16.26 \pm 0.07$.

### 4.3 Distance

Since bolometric corrections for hot subluminous stars are large and somewhat uncertain, we prefer not to use them for the distance determination. Instead we calculate the angular radius from the ratio of the observed (dereddened) flux at the effective wavelength of the V filter to the corresponding model flux. Assuming the canonical mass for hot subdwarf stars, $M = 0.5\,{\rm M}_\odot$, we determine the stellar radius from the gravity and finally derive the distance from the angular radius and the stellar radius. We obtain a distance of $d = 1485$ pc, which corresponds to an absolute magnitude of $M_V = 5.4$. However, the error on $\log g$ is large ($\pm 0.3$ dex), translating to $d = 1050$ pc for $\log g = 6.48$, or $d = 2100$ pc for $\log g = 5.88$. If the SM star has a much lower mass than usually assumed for these objects, as suggested by Wellstein et al. (1999), then the absolute magnitude will be fainter and the star correspondingly closer to us. For example, if $M = 0.2\,{\rm M}_\odot$ then we find $M_V = 6.4$ and $d = 940$ pc (assuming $\log g = 6.18$). If $M = 0.1\,{\rm M}_\odot$, then $M_V = 7.2$ and $d = 650$ pc.

## 5 Discussion

A new analysis of the Schweizer-Middleditch star, a hot subdwarf which lies along the same line of sight as the centre of the SN 1006 SNR, has allowed us to place tighter constraints on its atmospheric parameters and to re-assess its distance. Since Wellstein et al. (1999) have demonstrated that the remnant of the donor star in a pre-SN Ia binary system could appear as a hot subdwarf, albeit with an abnormally low mass, we can now re-address Schweizer & Middleditch's original question: is the SM star the stellar remnant of one component of the SN Ia progenitor binary? In order to begin answering this question, we need to convince ourselves that the SM star lies at the same distance as the SN 1006 SNR. Unfortunately, there is a large range in the SNR distance estimates quoted in the literature.
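The scalings quoted above follow from $R = \sqrt{GM/g}$ together with a fixed angular radius $\theta$, so that $d = R/\theta \propto \sqrt{M}\,10^{-\log g/2}$. A minimal sketch (ours, not from the paper) that reproduces the quoted numbers from the $M = 0.5\,{\rm M}_\odot$ baseline:

```python
import math

# Sketch: rescale the spectroscopic distance d = R/theta with R = sqrt(G*M/g).
# Baseline from the text: d = 1485 pc for M = 0.5 Msun, log g = 6.18.
d0, M0, logg0 = 1485.0, 0.5, 6.18

def rescaled_distance(M, logg):
    """Distance in pc, scaling as sqrt(M/M0) * 10**((logg0 - logg)/2)."""
    return d0 * math.sqrt(M / M0) * 10 ** ((logg0 - logg) / 2.0)

print(rescaled_distance(0.5, 6.48))  # ~1051 pc (log g at +0.3 dex)
print(rescaled_distance(0.5, 5.88))  # ~2098 pc (log g at -0.3 dex)
print(rescaled_distance(0.2, 6.18))  # ~939 pc  (low-mass donor-remnant case)
print(rescaled_distance(0.1, 6.18))  # ~664 pc  (text quotes 650 pc, after rounding)
```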
In Table 1 we list the various distance estimates to the SN 1006 SNR itself and the method used to obtain each. Early estimates, based for example on the historical record of its brightness (e.g. Minkowski 1966) and early models of the X-ray emission, gave distances $\sim 1$ kpc. Most of the more recent estimates, based on a variety of theoretical models or on measurements of, e.g., the expansion velocity or proper motion of optical filaments, place the SNR at a distance of $\sim 1.5$-$2.0$ kpc. The one glaring exception is the estimate of Willingale et al. (1995), $0.7 \pm 0.1$ kpc, based on an analysis of the ROSAT PSPC X-ray image of the SNR. We find the distance to the SM star to be $1050 < d < 2100$ pc, assuming that it is an ordinary hot subdwarf. If Willingale et al.'s distance estimate is correct, then the SM star would lie a long way behind the remnant. In order for it to lie within the remnant, it would have to be of unusually low mass. A mass of 0.1-0.2 ${\rm M}_\odot$ gives a distance compatible with Willingale et al.'s estimate, and in that scenario the SM star could indeed be a remnant of the donor star in an SN Ia progenitor system. However, if Willingale et al.'s SNR distance estimate is wildly inaccurate, and the more conservative estimates of $\sim 1.5$-$2.0$ kpc are correct, then the SM star cannot be a low-mass remnant of the donor star in a pre-SN Ia binary. In fact, there are two more compelling arguments against the SM star having any relation to SN 1006. Firstly, it is located $\sim 2.5'$ south of the projected centre of the remnant, and would have to possess a proper motion of $0.15''$ per year and a transverse velocity of $\sim 800$ km s$^{-1}$ to have reached its current location; the star simply does not possess such a motion or velocity. Secondly, the presence of redshifted metal absorption lines superimposed on the SM star's UV spectrum strongly indicates that the star lies behind the remnant, since these features almost certainly originate at a shock front on the remnant's far side. Confirmation of this may come from observations of other nearby objects with strong UV fluxes and generally featureless far-UV continua. Indeed, P. F. Winkler has an HST/STIS program to observe four such objects behind SN 1006 during Cycle 8 (two QSOs and two A0 stars; program ID 8244), one of which is even closer to the projected centre of SN 1006 than the SM star. These targets are not scheduled to be observed until June-July 2000, but the detection of the same redshifted features as seen in the SM star (and the non-detection of any additional features with separate velocities) would effectively rule out any exotic origin for these lines and confirm the location of the SM star behind the SN 1006 SNR. Thus, the SM star can only be the remnant of the donor star in a pre-SN Ia binary, such as might have produced SN 1006, if the following four criteria are fulfilled: (1) the star has an unusually low mass for a hot subdwarf ($\sim 0.1\,{\rm M}_\odot$); (2) the low distance estimate to the SN 1006 SNR of Willingale et al. (1995) is correct; (3) the redshifted metal lines seen in the SM star's far-UV spectrum originate somewhere other than on the far side of the SNR; and (4) the SM star has a high proper motion and transverse velocity. Unfortunately, at the time of writing, none of these conditions can convincingly be shown to be true.
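As a back-of-the-envelope check (ours), the $0.15''$ yr$^{-1}$ and $\sim 800$ km s$^{-1}$ figures follow from the $\sim 2.5'$ offset, the $\sim 10^3$ yr age of the remnant, and the standard transverse-velocity relation $v_t = 4.74\,\mu\,d$ with $\mu$ in arcsec yr$^{-1}$ and $d$ in pc:

```python
# Sketch: required proper motion and transverse velocity for the SM star
# to have travelled 2.5 arcmin from the SNR centre since AD 1006.
offset_arcsec = 2.5 * 60.0       # 2.5 arcmin offset
age_yr = 1000.0                  # approximate age of SN 1006 at observation
d_pc = 1100.0                    # Schweizer & Middleditch distance estimate

mu = offset_arcsec / age_yr      # arcsec per year
v_t = 4.74 * mu * d_pc           # km/s (standard conversion factor)
print(f"mu  = {mu:.2f} arcsec/yr")   # 0.15 arcsec/yr
print(f"v_t = {v_t:.0f} km/s")       # ~780 km/s, i.e. ~800 km/s
```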
However, the tighter constraint we have been able to place on the distance to the SM star in this analysis can now be used to place an upper limit on the distance to the SN 1006 SNR itself, and hence to constrain the models and methods used to estimate the distances of supernova remnants.

###### Acknowledgements.

MRB acknowledges the support of PPARC, UK. We thank Pete Wheatley (Leicester University) for generating the Fourier transform of the SM star's light curve.
# Thermal expansion in small metal clusters and its impact on the electric polarizability

## Abstract

The thermal expansion coefficients of $\mathrm{Na}_N$ clusters with $8 \le N \le 40$ and of $\mathrm{Al}_7$, $\mathrm{Al}_{13}^-$ and $\mathrm{Al}_{14}^-$ are obtained from ab initio Born-Oppenheimer LDA molecular dynamics. Thermal expansion of small metal clusters is considerably larger than in the bulk and is size-dependent. We demonstrate that the average static electric dipole polarizability of Na clusters depends linearly on the mean interatomic distance and only to a minor extent on the detailed ionic configuration when the overall shape of the electron density is enforced by electronic shell effects. The polarizability is thus a sensitive indicator of thermal expansion. We show that taking this effect into account brings theoretical and experimental polarizabilities into quantitative agreement.

preprint: TPR-99-22

Since electronic shell effects were put into evidence in small metallic systems, metal clusters have continuously attracted great interest both experimentally and theoretically. Besides technological prospects, one of the driving forces for this research has been the fundamental question of how matter develops from the atom to systems of increasing size, and how properties change in the course of this growth process. In some cases it has been possible to extract detailed information from experiments done at low temperatures and from the related theories. In many cases, however, a deeper understanding is complicated by the finite temperature present in most experiments due to the cluster production process. Whereas much theoretical information about finite-temperature effects in nonmetallic systems has been gained in recent years, little is known about them in metallic clusters. Here, sodium is a particularly interesting reference system because of its textbook metallic properties and the fact that it has been extensively studied within the jellium model. Aluminum, on the other hand, is of considerable technological interest. Some advances in studying temperature effects in metal clusters, including the ionic degrees of freedom, were made using phenomenological molecular dynamics, a tight-binding Hamiltonian, the Thomas-Fermi approximation or the Car-Parrinello method. Recently, it has also become possible to study sodium clusters of considerable size using ab initio Born-Oppenheimer local-spin-density molecular dynamics (BO-LSD-MD). In this work we report on the size dependence of a thermal property which is well known for bulk systems, namely the linear thermal expansion coefficient

$$\beta = \frac{1}{l}\frac{\partial l}{\partial T}. \qquad (1)$$

For crystalline sodium at room temperature it takes the value $71 \times 10^{-6}\,{\rm K}^{-1}$; for Al, $23.6 \times 10^{-6}\,{\rm K}^{-1}$. To date, however, it has not been known how small systems are affected by thermal expansion. At first sight, it is not even obvious how thermal expansion can be defined in small clusters. Whereas in the bulk there is no problem in defining the length $l$ appearing in Eq. (1), e.g. the lattice constant, it is less straightforward to choose a meaningful $l$ when many different ionic geometries must be compared to one another. For small metal clusters, the latter situation arises because of the many different isomers which appear at elevated temperatures.
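To make Eq. (1) concrete before applying it to clusters, a trivial finite-difference sketch (ours, with illustrative bulk-sodium numbers):

```python
# Sketch: linear thermal expansion coefficient from Eq. (1), beta = (1/l) dl/dT,
# approximated by a finite difference between two temperatures.
def beta(l1, T1, l2, T2):
    l_mean = 0.5 * (l1 + l2)
    return (l2 - l1) / (T2 - T1) / l_mean

# Bulk-sodium-like numbers (illustrative): the length grows ~71e-6 per K.
print(beta(1.0, 200.0, 1.0 + 71e-6 * 150.0, 350.0))  # ~7.1e-5 K^-1
```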
We have calculated the thermal expansion coefficients of $\mathrm{Na}_8$, $\mathrm{Na}_{10}$, $\mathrm{Na}_{12}$, $\mathrm{Na}_{14}$, $\mathrm{Na}_{20}$ and $\mathrm{Na}_{40}$ in BO-LSD-MD simulations. Results concerning isomerization processes in these simulations have been presented previously, and the BO-LSD-MD method is described in detail elsewhere. A meaningful length to use in Eq. (1) when it is applied to finite systems with similar overall deformation is the mean interatomic distance

$$l_{\rm miad} = \frac{1}{N(N-1)} \sum_{i,j=1}^{N} \left| \mathbf{R}_i - \mathbf{R}_j \right|, \qquad (2)$$

where the $\mathbf{R}_i$ are the positions of the $N$ atoms in the cluster. Obviously, $l_{\rm miad}$ measures the average "extension" of a cluster's ionic structure, and we calculated it for all configurations obtained in a BO-LSD-MD run. Two different methods were used to calculate $\beta$. First, we discuss the heating runs, in which the clusters were thermalized to a starting temperature and then heated linearly with a heating rate of 5 K/ps and a time step of 5.2 fs, with $l_{\rm miad}$ recorded after each time step. In this way, for $\mathrm{Na}_8$ the temperature range from about 50 K to 670 K was covered, corresponding to 24140 configurations; for $\mathrm{Na}_{10}$ from ca. 150 K to 390 K (9260 configurations); for $\mathrm{Na}_{14}$ from ca. 50 K to 490 K (17020 configurations); for $\mathrm{Na}_{20}$ from ca. 170 K to 380 K (8000 configurations); and for $\mathrm{Na}_{40}$ from ca. 200 K to 400 K (7770 configurations). Fig. 1 shows how $l_{\rm miad}$ changes with temperature for $\mathrm{Na}_8$ and $\mathrm{Na}_{10}$. Both curves show large fluctuations, as is to be expected for such small systems. However, one clearly sees a linear rise as the general trend. We therefore made linear fits to the data for each cluster in two ways (see the fitting sketch below). The first column in the left half of Table I gives the linear thermal expansion coefficients obtained by fitting the data in the temperature interval between 200 K and 350 K, i.e. around room temperature, where bulk sodium is usually studied. In order to allow an estimate of the statistical quality of the fits in view of the fluctuations, the second and third columns in the left half of Table I list the ratios of the fit parameters, i.e. the axis intercept $a$ and the slope $b$, to their standard deviations. It is clear from these results that thermal expansion in the small clusters is considerably larger than in the bulk. This can be understood as an effect of the increased surface-to-volume ratio in the finite systems. However, the expansion coefficient also depends strongly on the cluster size. This can even be seen directly from the different slopes in Fig. 1. As we show below, this size dependence has far-reaching consequences for the interpretation of experimental data, which are usually measured on hot clusters, e.g. the static electric polarizability. In addition to the values given in Table I, we calculated the expansion coefficient of $\mathrm{Na}_{12}$ with a different method. In two separate runs, the cluster was thermalized to temperatures of about 200 K and 350 K, and BO-LSD-MD was then performed for 5 ps at each temperature, i.e. without heating. From the average $l_{\rm miad}$ found in the two simulations, $\beta_{\mathrm{Na}_{12}} = 2.5\,\beta_{\rm bulk}$ was calculated. Thus the second method also leads to a $\beta$ larger than that of the bulk, i.e.
it confirms the results of the heating runs. The average thermal expansion coefficient for the full temperature range covered in each simulation is obtained from a fit to the complete set of data, shown as a dashed line in Fig. 1 for $\mathrm{Na}_8$ and $\mathrm{Na}_{10}$. This average is of interest because it covers several hundred K for each cluster, in the range of temperatures expected for clusters coming from the usual supersonic expansion sources. The right half of Table I lists these average expansion coefficients and their statistical deviations in the same way as before. As is to be expected, the values differ from the previous ones for the small clusters, because the expansion coefficient is influenced by which isomers are or become accessible at a particular temperature; i.e., especially at low temperatures it is temperature-dependent. In Fig. 1, for example, one sees from comparison with the dashed average line that for temperatures between 50 K and 100 K the thermal expansion is smaller than at higher temperatures. However, once the cluster has reached a temperature where it easily changes from one isomer to another, the thermal expansion coefficient becomes nearly independent of temperature. In the case of $\mathrm{Na}_8$, e.g., $\beta$ changes only by about 5% in the interval between 300 K and 670 K. Detailed previous investigations have shown that small clusters do not show a distinct melting transition. However, the largest cluster studied here, $\mathrm{Na}_{40}$, shows a phase transition above 300 K. At the melting point, the octupole and hexadecapole deformations of the electronic density sharply increase. If $l_{\rm miad}$ is a relevant indicator of structural changes, then melting should also be detectable from it. Indeed we find a noticeable increase in $l_{\rm miad}$ at 300 K, and fluctuation patterns similar to those in the multipole moments. In our simulation we could only determine the expansion coefficient for the solid phase, and it is given in the right half of Table I. As seen in Fig. 1, $\mathrm{Na}_8$ shows thermal expansion already at 50 K. This raises the question of the temperature at which the expansion actually starts, i.e. where anharmonic effects in the ionic oscillations begin to become important. In this context we note that one can compare the $l_{\rm miad}$ at T = 0 K found by extrapolation from the heating data to the $l_{\rm miad}$ actually found for the ground-state structure at T = 0 K. We have done this for $\mathrm{Na}_8$, $\mathrm{Na}_{10}$ and $\mathrm{Na}_{14}$, where the ground-state structures are well established. In all cases the differences between the two values were less than 1%. This indicates that anharmonic effects in Na clusters are important down to very low temperatures. Furthermore, the anharmonicities should also be observable in the heat capacities, where they will lead to deviations from the Dulong-Petit law. We have checked this and indeed found deviations from the Dulong-Petit value of between 8% ($\mathrm{Na}_{20}$) and 19% ($\mathrm{Na}_8$).
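A minimal reconstruction (ours) of the fitting pipeline referred to above: evaluate $l_{\rm miad}$ of Eq. (2) for every recorded configuration, then fit $l_{\rm miad}(T) = a + bT$ over a temperature window and form $\beta = (1/l)\,dl/dT$ at the window centre. The file names in the usage comment are hypothetical.

```python
import numpy as np

def l_miad(R):
    """Mean interatomic distance, Eq. (2): average over all pairs i != j.
    R is an (N, 3) array of atomic positions."""
    diff = R[:, None, :] - R[None, :, :]      # pairwise displacement vectors
    dist = np.linalg.norm(diff, axis=-1)      # (N, N) distance matrix
    N = len(R)
    return dist.sum() / (N * (N - 1))         # diagonal terms are zero

def expansion_coefficient(T, l, T_lo=200.0, T_hi=350.0):
    """Linear fit l(T) = a + b*T in [T_lo, T_hi]; beta = (1/l) dl/dT
    evaluated at the window centre."""
    m = (T >= T_lo) & (T <= T_hi)
    b, a = np.polyfit(T[m], l[m], 1)          # slope first, then intercept
    T_mid = 0.5 * (T_lo + T_hi)
    return b / (a + b * T_mid)

# Hypothetical usage: T and traj would come from a BO-LSD-MD heating run.
# T = np.load("temps.npy"); traj = np.load("positions.npy")   # (steps, N, 3)
# l = np.array([l_miad(R) for R in traj])
# print(expansion_coefficient(T, l))
```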
As an example of the considerable influence of thermal expansion on measurable physical properties we discuss the average static electric dipole polarizability $\alpha$, defined as one third of the trace of the polarizability tensor. It was one of the first observables from which the existence of electronic shell effects in metal clusters was deduced, and it has been measured for clusters of various sizes and materials. For Na clusters with up to eight atoms, the polarizability has also been calculated with different approaches. These calculations qualitatively reproduce the experimentally observed trends, but they all underestimate the measured values. We show that this discrepancy is to a large part due to the fact that the calculations were done for T = 0, whereas the measurements are made on clusters with temperatures of about 400 to 600 K. For various different isomers obtained in our heating runs for $\mathrm{Na}_8$ and $\mathrm{Na}_{10}$, we have calculated the polarizability from the derivative of the induced dipole moment with respect to the electric field (the finite-field method). Since highly unsymmetric isomers from the high-temperature part of the simulations were taken into account, the full tensor was computed by numerically applying the dipole field in the different directions in separate calculations. We have checked that the field strength used, $5 \times 10^{-5}\,e/a_0^2$, is large enough to give a numerically stable signal and small enough to remain in the regime of linear response. In Fig. 2 we plot the polarizabilities thus obtained versus $l_{\rm miad}$, and show three instances of ionic geometries for each cluster that demonstrate how different the structures actually are. Nevertheless, within a few percent the polarizabilities lie on a straight line. This shows that the average polarizability depends mainly and strongly on the mean interatomic distance, and only to a minor extent on details of the ionic configurations. Of course, the situation might be more complicated for clusters where the overall shape, i.e. the lowest terms in the multipole expansion of the valence electron density, is not stabilized by electronic shell effects. For the present clusters, however, the deformation induced by the electronic shell effects persists even at elevated temperatures. That $\alpha$ is less sensitive to the detailed ionic configuration than, e.g., the photoabsorption spectrum is understandable, because it is an averaged quantity. The dependence of the polarizability on the mean interatomic distance has the consequence that $\alpha$ also depends strongly on temperature. From Fig. 2 one deduces that an average bond-length increase of 1 $a_0$ in $\mathrm{Na}_8$ and $\mathrm{Na}_{10}$ leads to an increase in the polarizability of about 25 $\AA^3$. Thus, neglecting thermal expansion in T = 0 calculations leads to polarizabilities smaller than those measured on clusters coming from supersonic expansion sources. Of course, underestimates of the cluster bond lengths due to other reasons will also appear directly in the polarizability. With the Troullier-Martins pseudopotential, e.g., the BO-LSD-MD underestimates the dimer bond length by 4.5%, and it is to be expected that the situation is similar for the bond lengths of larger clusters. Taking this into account, one can proceed to calculate the polarizability for clusters with a temperature corresponding to the experimental one of about 500 K. In the experiments the clusters spend about $10^{-4}$ s in the deflecting field from which the polarizability is deduced, i.e. the experimental timescale is orders of magnitude larger than the timescale of the fluctuations in the mean interatomic distance (see Fig.
1). Thus the fluctuations are averaged over and can be neglected. From the average expansion coefficients we obtain a bond-length increase of 0.48 $a_0$ for $\mathrm{Na}_8$ and 0.87 $a_0$ for $\mathrm{Na}_{10}$ at 500 K, which in turn leads to increases in the polarizability of 12 $\AA^3$ and 23 $\AA^3$, respectively. The resulting polarizabilities of 130 $\AA^3$ for $\mathrm{Na}_8$ and 172 $\AA^3$ for $\mathrm{Na}_{10}$ compare favourably with the experimental values of $134 \pm 16\,\AA^3$ and $190 \pm 20\,\AA^3$. For all other cluster sizes, the two experiments give different values for the polarizability. From the present work it becomes clear that differences in the experimental temperatures might be the reason for these discrepancies. Therefore, an accurate measurement of the clusters' temperatures is necessary before further quantitative comparisons can be made. However, a detailed comparison with both experiments showed that the theoretical T = 0 polarizability of all isomers underestimates both experimental results. Thus, the increase in $\alpha$ brought about by thermal expansion will lead to better agreement between theory and experiment for all cluster sizes. Thermal expansion is also observed in aluminum clusters. For $\mathrm{Al}_7$ we performed 5 ps of BO-LSD-MD at each of the fixed temperatures 100 K, 300 K, 500 K and 600 K; for $\mathrm{Al}_{13}^-$ at 260 K, 570 K and 930 K; and for $\mathrm{Al}_{14}^-$ at 200 K, 570 K and 900 K, in analogy to the procedure for $\mathrm{Na}_{12}$. From the average $l_{\rm miad}$ at each temperature, we calculated the expansion coefficients $\beta_{\mathrm{Al}_7} = 1.3\,\beta_{\rm bulk}$, $\beta_{\mathrm{Al}_{13}^-} = 1.4\,\beta_{\rm bulk}$ and $\beta_{\mathrm{Al}_{14}^-} = 1.4\,\beta_{\rm bulk}$. It should be noted that with $\mathrm{Al}_{13}^-$ we have chosen an electronically as well as geometrically magic cluster, i.e. a particularly rigid one, and the fact that it also shows a larger expansion coefficient than the bulk is further evidence for the conclusion that the increased expansion coefficient is indeed a finite-size effect. A noteworthy difference between Al and Na is seen in the temperatures at which the expansion sets in. Whereas for Na this temperature is below 50 K, we observe that $\mathrm{Al}_{13}^-$ and $\mathrm{Al}_{14}^-$ show no expansion below 300 K. In summary, we have calculated thermal expansion coefficients for small metal clusters and demonstrated that thermal expansion in these systems is larger than in the bulk. For sodium, the dependence of the expansion coefficient on cluster size is non-monotonic. We showed that the average static electric dipole polarizability of clusters whose overall shape is fixed by electronic shell effects depends linearly on the mean interatomic distance. Thus, thermal expansion increases the static electric polarizability, and we demonstrated that taking this effect into account brings the theoretical values into close agreement with the experimental ones. We thank M. Brack and A. Rytkönen for clarifying discussions. J.A. acknowledges support from the Väisälä Foundation, S.K. from the Deutsche Forschungsgemeinschaft, and all authors from the Academy of Finland.
# Heat Transport in Turbulent Rayleigh-Bénard Convection

## Abstract

We present measurements of the Nusselt number $\mathcal{N}$ as a function of the Rayleigh number $R$ in cylindrical cells with aspect ratios $0.5 \le \Gamma \equiv D/d \le 12.8$ ($D$ is the diameter and $d$ the height). We used acetone with a Prandtl number $\sigma = 4.0$ for $10^5 \lesssim R \lesssim 4\times 10^{10}$. A fit of a power law $\mathcal{N} = \mathcal{N}_0 R^{\gamma_{\rm eff}}$ over limited ranges of $R$ yielded values of $\gamma_{\rm eff}$ from 0.275 near $R = 10^7$ to 0.300 near $R = 10^{10}$. The data are inconsistent with a single power law for $\mathcal{N}(R)$. For $R > 10^7$ they are consistent with $\mathcal{N} = a\sigma^{-1/12}R^{1/4} + b\sigma^{-1/7}R^{3/7}$ as proposed by Grossmann and Lohse for $\sigma \gtrsim 2$.

Since the pioneering measurements by Libchaber and co-workers of heat transport by turbulent gaseous helium heated from below, there has been a revival of interest in the nature of turbulent convection. In addition to the local properties of the flow, one of the central issues has been the global heat transport of the system, as expressed by the Nusselt number $\mathcal{N} = \lambda_{\rm eff}/\lambda$. Here $\lambda_{\rm eff} = qd/\Delta T$ is the effective thermal conductivity of the convecting fluid ($q$ is the heat-current density, $d$ the height of the sample, and $\Delta T$ the imposed temperature difference), and $\lambda$ is the conductivity of the quiescent fluid. Usually a simple power law

$$\mathcal{N} = \mathcal{N}_0 R^{\bar{\gamma}} \qquad (1)$$

was an adequate representation of the experimental data. Here $R = \alpha g d^3 \Delta T / \kappa\nu$ is the Rayleigh number, $\alpha$ the thermal expansion coefficient, $g$ the gravitational acceleration, $\kappa$ the thermal diffusivity, and $\nu$ the kinematic viscosity. Various data sets yielded exponent values $\bar{\gamma}$ from 0.28 to 0.31. Most recently, measurements over the unprecedented range $10^6 \lesssim R \lesssim 10^{17}$ were made by Niemela et al., and a fit of Eq. (1) to them gave $\bar{\gamma} = 0.309$; but even these data did not reveal any deviation from the functional form of Eq. (1). Competing theoretical models involving quite different physical assumptions made predictions of power-law behavior with exponents $\gamma$ in the same narrow range. Here we mention just two of them. A boundary-layer scaling theory which yielded $\gamma = 2/7 \approx 0.2857$ was an early favorite, at least for the experimentally accessible range $R \lesssim 10^{12}$. It was generally consistent with most of the available experimental results. More recently, a competing model based on the decomposition of the kinetic and thermal dissipation into boundary-layer and bulk contributions was presented by Grossmann and Lohse (GL); it predicts that the data measure an average exponent $\bar{\gamma}$ associated with a crossover from $\gamma = 1/4$ at small $R$ to a slightly larger $\gamma$ at much larger $R$. In the experimental range the effective exponent $\gamma_{\rm eff} \equiv d\ln\mathcal{N}/d\ln R$, which should be compared with the experimentally determined $\bar{\gamma}$, depends only very weakly on $R$ and has values fairly close to 2/7. Thus, it has not previously been possible to distinguish between the competing theories on the basis of the experimental data. Here we present new measurements of $\mathcal{N}(R)$ over the range $10^5 \lesssim R \lesssim 4\times 10^{10}$ for a Prandtl number $\sigma \equiv \nu/\kappa = 4.0$. Our data are of exceptionally high precision and accuracy. They are incompatible with the single power law Eq.
(1), and yield values of $\gamma_{\rm eff}$ which vary from 0.277 near $R = 10^7$ to 0.300 near $R = 10^{10}$. In particular, the results rule out the prediction $\gamma = 2/7$. For $R \gtrsim 10^7$ a good fit to our results can be obtained with the crossover function

$$\mathcal{N} = a\sigma^{-1/12}R^{1/4} + b\sigma^{-1/7}R^{3/7} \qquad (2)$$

proposed by GL for $\sigma \gtrsim 2$. We used two apparatuses. One was described previously; it could accommodate cells with a height up to $d = 3$ cm. The other was similar, except that its three concentric sections were lengthened by 20 cm to allow measurements with cells as long as 23 cm. In both, the cell top was a sapphire disk of diameter 10 cm. A high-density polyethylene sidewall of circular cross section and diameter $D$ close to 8.8 cm was sealed to the top and bottom by ethylene-propylene O-rings. Four walls, with lengths ranging from 0.70 to 17.4 cm, were used and yielded aspect ratios $\Gamma \equiv D/d = 12.8, 3.0, 1.0,$ and 0.5. The bottom plate had a mirror finish. It contained two thermistors, located 2.5 cm from the center and at 180° relative to each other. The fluid was acetone, whose physical properties are very well known. Measurements of the convective onset in thin cells yielded values of $R_c$ very close to 1708, confirming the reliability of the relevant fluid properties. Deviations from the Boussinesq approximation, which is usually assumed in theoretical work, are relatively small. From Eq. (8) of Ref. we estimated $x \simeq 1 + 0.0012\,\Delta T$ for the ratio of the temperature drops across the top and bottom boundary layers. On the other hand, the parameter $\mathcal{Q}$ defined by Busse, which is usually considered near the onset of convection, can be approximated by $\mathcal{Q} \simeq 0.05\,\Delta T$ and does reach significant values when $\Delta T$ exceeds, say, 10°C. Our temperature stability and resolution was 0.001°C or better. We measured and corrected for the conductances of the empty cells, and applied corrections for resistances in series with the fluid. The bath temperature was controlled in a feedback loop and usually fluctuated by no more than 0.001°C rms. The bottom-plate temperature was also held constant. Its fluctuations were typically determined by the fluid turbulence, but remained less than 0.1% of $\Delta T$. The temperatures of the two thermistors in the bottom plate could differ by up to 0.5% of $\Delta T$ due to the large-scale flow. The parameters $\mathcal{N}$ and $R$ were calculated using the average of the two measurements. When only one was used, exponents derived from the data typically differed by no more than $4\times 10^{-4}$. Usually $\Delta T$ was stepped in equal increments on a logarithmic scale, holding the mean temperature at 32.00°C. Results of our measurements in four cells of different $\Gamma$ are shown in Fig. 1. For each cell they cover about two decades of $R$, and collectively they span the range of $R$ from $10^5$ to $4\times 10^{10}$. Of interest here is the range $R \gtrsim 10^6$, where one might expect Eq. (1) to be relevant. There is a dependence of $\mathcal{N}(R)$ upon $\Gamma$, as already noted by others. For comparison, we also show in Fig. 1 the recent results of Chavanne et al. for $\sigma \simeq 0.8$ and $\Gamma = 0.5$ (plusses) and those of Ashkenazi and Steinberg for $\sigma = 1$ and a cell of square cross section with $\Gamma = 0.72$ (open triangles).
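For orientation (our sketch, with rough illustrative acetone properties rather than the calibrated values used in the experiment), this is how $R$ and $\mathcal{N}$ are formed from the measured quantities:

```python
# Sketch: Rayleigh and Nusselt numbers from cell geometry and fluid properties.
# Property values below are rough, illustrative numbers for acetone near room
# temperature, not the calibrated values used in the experiment.
g = 9.81                     # m/s^2

def rayleigh(alpha, kappa, nu, d, dT):
    """R = alpha*g*d^3*dT / (kappa*nu)."""
    return alpha * g * d**3 * dT / (kappa * nu)

def nusselt(q, d, dT, lam):
    """N = lambda_eff/lambda, with lambda_eff = q*d/dT (q: heat-current density)."""
    return (q * d / dT) / lam

alpha, kappa, nu, lam = 1.43e-3, 9.6e-8, 4.0e-7, 0.16   # SI units, approximate
print(f"sigma = {nu/kappa:.1f}")                          # Prandtl number ~4
print(f"R = {rayleigh(alpha, kappa, nu, d=0.087, dT=1.0):.3g}")  # ~2e8 for this cell
```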
There is good agreement with the former, considering the difference in $\sigma$. The latter are about a factor of 1.8 larger than our results; this difference seems too large to be attributed to the difference in $\sigma$ or the geometry, and remains unexplained. The solid line just above the open circles in the figure (more easily seen in Fig. 2) corresponds to the fit of Eq. (1) to the data of Liu and Ecke for $\sigma = 4$, $\Gamma \simeq 1$, and a cell with a square cross section. The agreement with our data is excellent, considering the difference in geometry. Figure 1 does not have enough resolution to reveal details of the data. We therefore use the early prediction $\gamma = 2/7$ as a reference, and show $\log_{10}(\mathcal{N}R^{-2/7})$ as a function of $\log_{10} R$ in Fig. 2. If the theory were correct, $\mathcal{N}R^{-2/7}$ should equal $\mathcal{N}_0$, i.e. be independent of $R$, over its range of applicability. If Eq. (1) is the right functional form but $\gamma$ differs from 2/7, the data should fall on straight lines with slopes equal to $\gamma - 2/7$. Our $\Gamma = 12.8$ data (solid squares) are at relatively small $R$, and one might not expect Eq. (1) to become applicable until $R$ is larger. The $\Gamma = 3.0$ data actually show slight curvature, but in any case would yield $\bar{\gamma} < 2/7$. The smaller-$\Gamma$ data are clearly curved, showing that Eq. (1) is not applicable for any value of $\gamma$. To make this conclusion more quantitative, we show in Fig. 3 effective local exponents $\gamma_{\rm eff}$ derived by fitting Eq. (1) to the data over various restricted ranges, each covering about half a decade of $R$. The fits yield values of $\gamma_{\rm eff}(R)$ which have a minimum near $R = 10^7$. Interestingly, $\gamma_{\rm eff}$ is, within our resolution, independent of $\Gamma$. Next we compare the predictions of GL with our data. These authors defined various scaling regimes in the $R$-$\sigma$ plane. For $\sigma \gtrsim 2$, they expect that crossover between their regions $I_u$ and $III_u$ should be observed. In that case, Eq. (2) is predicted to apply. One way to test this is to plot $y = \mathcal{N}/(R^{1/4}\sigma^{-1/12})$ as a function of $x = R^{5/28}\sigma^{-5/84}$. If the prediction is correct, the data should fall on a straight line $y = a + bx$, with $a$ and $b$ equal to the coefficients in Eq. (2). Our $\Gamma = 1.0$ data are shown in this parameterization as solid circles in Fig. 4. The solid line is a least-squares fit. One sees that the data are fitted extremely well. The coefficients are $a = 0.326$ and $b = 2.36\times 10^{-3}$, in good agreement with the coefficients estimated by GL on the basis of other experimental data. We note that in this analysis it was assumed that $\mathcal{N}$ is the appropriate variable to compare with Eqs. (1) and (2). However, one might argue that only the convective contribution $\mathcal{N} - 1$ should be considered. It turns out that replacing $\mathcal{N}$ with $\mathcal{N} - 1$ leads to the same conclusions. The open circles in Fig. 4 are obtained from our $\Gamma = 1$ data when $\mathcal{N} - 1$ is used. The dashed straight line is an excellent fit to the data and yields $a = 0.311$ and $b = 2.58\times 10^{-3}$. The logarithmic derivative $\gamma_{\rm eff}$ of Eq. (2) based on the analysis of $\mathcal{N}$ (rather than of $\mathcal{N} - 1$) is shown as a solid line in Fig. 3. For $R \gtrsim 10^7$ it agrees quite well with the values determined by local fits of Eq. (1) to the data.
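A minimal sketch (ours) of the linearized GL analysis just described: in the variables $y = \mathcal{N}\sigma^{1/12}R^{-1/4}$ and $x = R^{5/28}\sigma^{-5/84}$, Eq. (2) becomes the straight line $y = a + bx$, and its effective exponent is an analytic weighted mean of 1/4 and 3/7:

```python
import numpy as np

def gl_fit(R, N, sigma=4.0):
    """Least-squares fit of Eq. (2) linearized as y = a + b*x.
    R and N would be the measured arrays of Rayleigh and Nusselt numbers."""
    y = N * sigma**(1/12) / R**0.25
    x = R**(5/28) * sigma**(-5/84)
    b, a = np.polyfit(x, y, 1)
    return a, b

def gamma_eff(R, a, b, sigma=4.0):
    """d(ln N)/d(ln R) of Eq. (2): a term-weighted mean of 1/4 and 3/7."""
    t1 = a * sigma**(-1/12) * R**0.25
    t2 = b * sigma**(-1/7) * R**(3/7)
    return (0.25 * t1 + (3/7) * t2) / (t1 + t2)

# With the paper's coefficients, gamma_eff drifts slowly across the range:
for R in (1e7, 1e8, 1e9, 1e10):
    print(f"R = {R:.0e}: gamma_eff = {gamma_eff(R, 0.326, 2.36e-3):.3f}")
```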
It would be difficult to demonstrate that the small, seemingly systematic deviations which do exist are outside possible systematic errors in the measurements. For smaller $R$, one does not expect the theory to be applicable: GL estimate that their region $I_u$ has a lower boundary below which the Reynolds number $R_e \simeq 0.039\,R^{1/2}\sigma^{-5/6} \lesssim 50$. This occurs when $R \simeq 1.6\times 10^7$. Local fits of a power law to data for $\mathcal{N} - 1$ are shown in Fig. 5, where we also show, as a dashed line, $\gamma_{\rm eff}$ based on the dashed line in Fig. 4. We see that deviations from the prediction Eq. (2) already occur near $R = 10^8$, an order of magnitude higher than found in the analysis using $\mathcal{N}$. However, for $R \gtrsim 10^8$ the fit of the theory to the data is equally good regardless of whether $\mathcal{N}$ or $\mathcal{N} - 1$ is used. For large $\sigma$, the theory predicts crossover from $I_u$ to $III_u$ and Eq. (2). For somewhat smaller $\sigma \lesssim 1$, however, the crossover should be from $I_u$ to $IV_u$, and the prediction then reads

$$\mathcal{N} = a\sigma^{-1/12}R^{1/4} + bR^{1/3}. \qquad (3)$$

As a function of $\sigma$, GL estimate that the transition from Eq. (2) to Eq. (3) occurs near $\sigma = 2$, but there is some uncertainty in this value. Thus, in Fig. 6 we compared Eq. (3) with our results for $\Gamma = 1$ by plotting $y = \mathcal{N}/(R^{1/4}\sigma^{-1/12})$ as a function of $x = R^{1/12}\sigma^{1/12}$. If Eq. (3) were applicable, this should yield a straight line. As can be seen, the data deviate systematically from the fit; a transition from $I_u$ to $IV_u$ at $\sigma = 4$ is thus inconsistent with our data. In Fig. 7 we show the deviations $\delta\mathcal{N}/\mathcal{N}$ of the $\Gamma = 1.0$ data from fits of Eqs. (1) to (3). For Eq. (1) they are systematic, as already expected. For Eq. (2) the fit is nearly perfect; we conclude that our data are consistent with Eq. (2). For Eq. (3) the deviations are nearly as large as those for Eq. (1); thus Eq. (3) is not applicable to the data. A similar analysis of the $\Gamma = 0.5$ data over the range $10^9 \lesssim R \lesssim 4\times 10^{10}$ gives similar results. So far our work has concentrated on measurements of $\mathcal{N}(R)$ for acetone with $\sigma = 4.0$. We conclude that there is no significant range of $R$ over which the power law Eq. (1) is applicable, and that the crossover function Eq. (2) proposed by Grossmann and Lohse provides a good fit to the data for $R \gtrsim 10^7$, where the Reynolds number $R_e$ of the large-scale flow is expected to exceed about 50. Obviously a great deal of additional high-precision work remains to be done, and some of it is indeed under way. To provide further tests of the GL predictions, we started a systematic study of the Nusselt number $\mathcal{N}(R,\sigma)$ as a function of $\sigma$, but this work has encountered considerable obstacles. For acetone, $\sigma$ does not vary enough with $T$, and systematic errors in the properties of other suitable fluids, such as the alcohols, prevent a comparison until better measurements of $\lambda$ and $\alpha/\kappa\nu$ are made. Since $R_e$ is central to the GL theory, its determination is also an obvious area for further work. Finally, measurements of comparable accuracy and precision should be extended to the compressed gases, where $\sigma \simeq 0.7$, where more direct comparison with much previous work is possible, and where a different GL crossover function should pertain.
We are grateful to Siegfried Grossmann and Detlef Lohse for stimulating correspondence, and to David Cannell for assistance in establishing our temperature scale. This work was supported by the National Science Foundation through grant DMR94-19168.
# Experimental Investigation of Resonant Activation

## Abstract

We experimentally investigate the escape of a physical system from a metastable state over a fluctuating barrier. The system switches between two states under the electronic control of a dichotomous noise. We measure the escape time and its probability density function as a function of the correlation rate of the dichotomous noise in a frequency interval spanning more than four frequency decades. We observe resonant activation, namely a minimum of the average escape time as a function of the correlation rate. We detect two regimes in the study of the shape of the escape-time probability distribution: (i) a regime of exponential and (ii) a regime of non-exponential probability distribution.

The thermally activated escape of a particle over a potential barrier from a metastable state has been a classical problem since the seminal work of Kramers. It occurs in a wide variety of physical, biological and chemical systems. When the height of the potential barrier is itself a random variable, the dynamics of the escape from the metastable state is affected by the statistical properties of the random variable controlling the barrier height. In the presence of barrier fluctuations, rather counterintuitive phenomena may occur. A classical example is the phenomenon of resonant activation. The signature of resonant activation is observed when the average escape time of a particle from the metastable state exhibits a minimum as a function of the parameters of the barrier fluctuations. The original work of Doering and Gadoua triggered a large amount of theoretical activity. Analogic simulations of resonant activation have also been performed in a bistable system in the presence of multiplicative Gaussian or dichotomous noise. This letter aims to answer the following questions: (i) is resonant activation experimentally observable in a physical system? (ii) what does the probability density function of the escape time look like? We answer both questions by investigating the escape from a metastable state over a fluctuating barrier in a physical system: a tunnel diode biased in a strongly asymmetric bistable state in the presence of two independent sources of electronic noise. Specifically, we have a dichotomous noise source controlling the metastable potential and a Gaussian noise source mimicking the role of a thermal source. In this physical system we observe resonant activation, and we measure and characterize the probability density function (pdf) of the escape time, finding two distinct regimes. Our physical system is the series combination of a resistive network $R$ and a tunnel diode in parallel with a capacitor (in our case the sum of the diode capacitance and the input capacitance of the measuring instrument). This system is rather versatile and allows one to investigate the dynamics of a bistable or metastable system in the presence of noise. It has been used to investigate stochastic resonance and noise-enhanced stability. The equation of motion of the system is

$$\frac{dv_d}{dt} = -\frac{dU(v_d)}{dv_d} + \frac{1}{RC}v_n(t) \qquad (1)$$

with

$$U(v_d) = -\frac{V_B v_d}{RC} + \frac{v_d^2}{2RC} + \frac{1}{C}\int_0^{v_d} I(v)\,dv, \qquad (2)$$

where $R$ is the biasing resistor of the network, $C$ the capacitance in parallel with the diode, $V_{\rm rms}$ the amplitude of the Gaussian noise $v_n(t)$, $V_B$ the biasing voltage and $I(v)$ the nonlinear current-voltage characteristic of the tunnel diode. The associated effective potential is bistable.
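A minimal simulation sketch of Eqs. (1) and (2) (our reconstruction; the N-shaped $I(v)$ below is a placeholder rather than the measured 1N3149A characteristic, and the colored Gaussian noise $v_n(t)$ is modeled as an Ornstein-Uhlenbeck process with the quoted rms and correlation time):

```python
import numpy as np

# Sketch: Euler integration of Eq. (1), dv/dt = -U'(v) + v_n(t)/(RC), with
# U'(v) = (v - V_B)/(RC) + I(v)/C from Eq. (2).
R, C, V_B = 1100.0, 45e-12, 8.95        # circuit values quoted in the text
tau, tau_n = R * C, 68e-9               # tau ~ 49.5 ns, tau_n = 68 ns

def I(v):
    """Placeholder tunnel-diode characteristic (peak ~10 mA near 60 mV)."""
    return 10e-3 * (v / 0.06) * np.exp(1.0 - v / 0.06) + 1e-4 * np.exp(v / 0.1)

def first_escape_time(V_rms, v0=0.06, dt=1e-10, t_max=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    v, vn, t = v0, 0.0, 0.0
    while t < t_max:
        # Ornstein-Uhlenbeck update for the colored Gaussian noise v_n(t):
        vn += -vn * dt / tau_n + V_rms * np.sqrt(2.0 * dt / tau_n) * rng.standard_normal()
        v += (-(v - V_B) / tau - I(v) / C + vn / tau) * dt
        t += dt
        if v > 0.4:                     # escape threshold used in the text
            return t
    return None                         # no escape within t_max
```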
We control the parameters of our electronic network ($V_B$ and $R$) in such a way that one of the two wells is much deeper than the other. With such a strong asymmetry between the two wells, we are essentially dealing with a metastable state: the probability that the system, once escaped, returns to the metastable state during the experimental time is negligible. Using a digital electronic switch, we vary the series resistance of our metastable system between the two values $R_+$ and $R_-$. When the system switches from $R_+$ to $R_-$, two effects take place: the height of the barrier of the metastable state varies, and the intrinsic time scale of the system (the $RC$ term of Eqs. (1) and (2)) changes slightly. In our experiment we use a germanium tunnel diode 1N3149A, which has a nominal peak current $I_p = 10.0$ mA, peak voltage $V_p = 60$ mV, valley current $I_v = 1.3$ mA, valley voltage $V_v = 350$ mV and a rather short switching time (less than 50 ns). We perform our experiments with $V_B = 8.95$ V, $R_+ = 1100\,\Omega$, $R_- = 1080\,\Omega$ and $C = 45$ pF. The noise $v_n(t)$ is obtained from a digital pseudo-random generator: a stochastic Gaussian noise synthesized by a commercial source (the DS345 noise generator from Stanford Research Systems). The Gaussian noise $v_n(t)$ is added to the bias signal $V_B$ by an electronic adder based on a low-noise wide-band operational amplifier. At the output of the operational amplifier we measure the noise $v_n(t)$: it is Gaussian, with a spectral density that is flat at low frequency ($f < 1$ MHz), rises moderately (by approximately 7 dB) in the region $1 < f < 3$ MHz, and falls off quickly above the cut-off frequency ($f_{\rm cutoff} = 4.6$ MHz). Defining the correlation time of the Gaussian noise as the time at which the normalized autocorrelation function falls to $1/e$, we measure $\tau_n = 68$ ns. In addition to the Gaussian noise source, a dichotomous noise source is also present, obtained using the commercial chip MM5437 from National Semiconductor. An external clock drives the dichotomous noise source and allows us to control the noise correlation time over a wide range. The normalized autocorrelation function of this pseudorandom noise decreases linearly from 1 to zero over one clock period $1/f_c$; after this time it is equal to zero within the experimental errors. Defining the correlation time as before, we have $\gamma \equiv \tau_c^{-1} \simeq 1.3 \times f_c$. In our experiments the correlation rate $\gamma$ is varied in the range from $\gamma_{\rm min} = 18$ Hz to $\gamma_{\rm max} = 316228$ Hz, an experimental interval spanning more than four frequency decades. To investigate so wide a frequency range we have to overcome two conflicting experimental constraints: (i) we are forced to set the time constant of our system, $\tau_s \equiv RC$, to a low value satisfying the inequality $\gamma_{\rm max} \ll 1/\tau_s$; and (ii) we need a high value of $\tau_s$ to keep the ratio $\tau_n/\tau_s$ as low as possible, so as to conduct our experiments in the "white noise" limit of $v_n(t)$. The best compromise we found is $\tau_s \equiv R_+ C = 49.5$ ns. With this choice $1/\tau_s > 50\,\gamma_{\rm max}$ and $\tau_n/\tau_s \simeq 1.40$.
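A sketch of a clock-driven dichotomous (telegraph) noise with these statistics (ours; the hardware uses the MM5437 shift-register chip, replaced here by a software pseudo-random bit stream). Drawing a fresh random bit each clock period reproduces the linearly decaying autocorrelation described above:

```python
import numpy as np

def dichotomous_noise(f_c, duration, dt, seed=0):
    """Telegraph noise that can flip only on ticks of an external clock f_c,
    mimicking the clock-driven pseudo-random source; correlation rate
    gamma ~ 1.3 * f_c, as quoted in the text."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    t = np.arange(n) * dt
    tick = (t * f_c).astype(int)              # index of the current clock period
    bits = rng.integers(0, 2, tick[-1] + 1)   # one random bit per clock period
    return t, 2.0 * bits[tick] - 1.0          # values in {-1, +1}

# Example: a correlation rate gamma ~ 1.3 kHz needs f_c ~ 1 kHz.
t, x = dichotomous_noise(f_c=1e3, duration=0.1, dt=1e-6)
```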
These choices guarantee that we can investigate the resonant activation phenomenon over a wide frequency range, albeit in a regime of moderately colored noise. Under computer control we measure, for each value of the correlation time of the binary noise, $5\times 10^3$ escape times from the fluctuating metastable state. In each statistical realization, the system is put into the metastable state by an electronic switch. We define the escape time $T$ as the time interval between the setting of the system into the metastable state ($t = 0$) and the crossing of a voltage threshold by $v_d$. The selected voltage threshold ($v_d = 0.4$ V) ensures that the system is quite far from the starting well. The exact value of the threshold is not a critical parameter, because the escape from the well is fast (of the order of $\tau_s$). From the set of measured escape times $T$, we determine the average value $\langle T\rangle$ and the distribution $P(T)$. The first measurement is devoted to determining the Kramers rates as a function of the noise amplitude $V_{\rm rms}$ for the two metastable states in the absence of the dichotomous noise modulation. In our experiments we vary $V_{\rm rms}$ in the interval from 0.816 V to 1.00 V and observe that the logarithm of the average escape time is a linear function of the inverse noise intensity $1/V_{\rm rms}^2$ for the two states "+" and "-" of the metastable state. In other words, the measured values lie on straight lines, verifying the Kramers law in both cases. The two metastable states are characterized by different slopes of the two lines fitting the experimental data. These observations confirm that the shape of the metastable potential is different in the two states "+" and "-". The observation of a Kramers law is in agreement with the theoretical prediction for the escape from a metastable state both in the absence and in the presence of colored noise. Specifically, in a regime of colored noise $v_n(t)$, the expected functional form is still Kramers-like and can be written as $\langle T\rangle = C\exp[\Delta U_m \tau_s^2 / (V_{\rm rms}^2 \tau_n)]$. Here $C$ is a prefactor and $\Delta U_m$ is the measured potential-barrier height associated with a system characterized by a time constant $\tau_s$ and evolving in the presence of a colored noise with correlation time $\tau_n$. Theoretical studies predict that $\Delta U_m$ is a function of $\tau_n/\tau_s$. We verified that we are in a regime of colored noise by performing an experimental test. We investigate the average escape time as a function of the correlation rate of the dichotomous noise for two different values of the amplitude of the Gaussian noise: $V_{\rm rms} = 0.816$ V and $V_{\rm rms} = 0.852$ V. In the absence of switching between the two states, the average escape times from states "+" and "-" are $\langle T_+\rangle = 0.030$ s and $\langle T_-\rangle = 0.0021$ s for $V_{\rm rms} = 0.816$ V, and $\langle T_+\rangle = 0.0082$ s and $\langle T_-\rangle = 0.00081$ s for $V_{\rm rms} = 0.852$ V. In the presence of switching between the two states the behavior is different. In Fig. 1 we show the average escape time as a function of $\gamma$ in a log-log plot. We also indicate the values of the Kramers rates $\mu_+ = 1/\langle T_+\rangle$ and $\mu_- = 1/\langle T_-\rangle$ for the two sets of experiments.
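As a numeric aside (ours), the two limiting values that frame the resonant-activation curve follow directly from the measured $\langle T_\pm\rangle$: the slow-switching limit $(\langle T_+\rangle + \langle T_-\rangle)/2$ and the kinetic-approximation estimate $[(\mu_+ + \mu_-)/2]^{-1}$ near the minimum.

```python
# Sketch: limiting predictions for <T> from the no-switching escape times.
for V_rms, T_plus, T_minus in [(0.816, 0.030, 0.0021), (0.852, 0.0082, 0.00081)]:
    slow = 0.5 * (T_plus + T_minus)            # gamma << mu_plus
    mu_p, mu_m = 1.0 / T_plus, 1.0 / T_minus
    kinetic = 1.0 / (0.5 * (mu_p + mu_m))      # resonant-activation regime
    print(f"V_rms = {V_rms} V: slow limit {slow*1e3:.2f} ms, "
          f"kinetic estimate {kinetic*1e3:.2f} ms")
```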
In both cases the distinctive characteristics of resonant activation are observed: the average escape time initially decreases, reaches a minimum and then increases again as a function of $\gamma$. Specifically, for values of the correlation rate $\gamma$ less than $\mu_+$, the average escape time approaches the value $\langle T\rangle = (\langle T_-\rangle + \langle T_+\rangle)/2$ predicted by resonant activation theory. We refer to this regime as the regime of slow dynamics of the barrier height. For values of the correlation rate approximately ten times higher than $\mu_-$, the minimum value of the average escape time is observed; here resonant activation is fully effective. In this regime, the value of the average escape time is in both cases approximately 10% less than that expected from resonant activation theories in the kinetic approximation, namely $\langle T\rangle = [(\mu_+ + \mu_-)/2]^{-1}$. We have no definitive explanation for this discrepancy. One possibility is that it reflects the fact that in our experimental set-up the system switches between two states in a regime of colored Gaussian noise rather than in the "white noise" regime. Another is that it is related to the fact that the two states between which the system switches differ not only in their potential-barrier heights but also in the shape of the potential and in the time constant. For values of $\gamma$ higher than 10 kHz, the average escape time starts to increase with $\gamma$ and, at the highest values of $\gamma$, approaches the value associated with an effective potential characterized by an average barrier. This is the regime of fast dynamics of the barrier height. A comparison of the two sets of experiments in Fig. 1 also shows that the resonant-activation region becomes flatter when the noise intensity decreases; in our experiments the barrier heights are kept constant across the two series. This observation is complementary to the theoretical observation that the resonant-activation region becomes flatter when the barrier height is increased while the noise intensity is kept constant. For each value of the noise intensity $V_{\rm rms}^2$ and each value of the correlation rate we measure $P(T)$, the pdf of the escape time. In Fig. 2 we show all the pdfs measured with $V_{\rm rms} = 0.816$ V in a three-dimensional semi-logarithmic plot. The regimes of slow dynamics, resonant activation and fast dynamics of the dichotomous noise are clearly observed moving from left to right. Concerning the shape of the pdf, our results show that two distinct regimes are present. In particular, for values of $\gamma$ greater than $\mu_-$, $P(T)$ is well described by the exponential decay $P(T) = \exp[-T/\langle T\rangle]/\langle T\rangle$, whereas in the opposite regime the pdf is non-exponential. Hence, in the resonant-activation regime the pdf has an exponential shape. This experimental observation agrees with the numerical observation that, at resonant activation, the system escapes preferentially through the state with the lowest barrier. In other words, at resonant activation the system effectively experiences a single potential barrier, and this results in an exponential pdf. Exponential pdfs are also observed at higher values of $\gamma$, in the fast-dynamics regime.
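A short sketch (ours) of the exponentiality test used below: for samples from a single exponential the mean and the standard deviation coincide, while a half-and-half double-exponential mixture of the two Kramers processes, which describes the slow-switching limit, has a standard deviation exceeding its mean.

```python
import numpy as np

rng = np.random.default_rng(1)
T_plus, T_minus, n = 0.030, 0.0021, 5000    # measured values; 5e3 realizations

# Single-exponential samples (resonant-activation / fast regimes):
single = rng.exponential(T_minus, n)
# Slow regime: each realization starts in "+" or "-" with probability 1/2.
which = rng.integers(0, 2, n).astype(bool)
double = np.where(which, rng.exponential(T_plus, n), rng.exponential(T_minus, n))

for name, T in [("single", single), ("double", double)]:
    print(f"{name}: mean = {T.mean()*1e3:.2f} ms, std = {T.std()*1e3:.2f} ms")
# single: mean ~ std; double: std noticeably larger than the mean.
```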
The experimental detection of an exponential pdf for high values of $`\gamma `$ is in agreement with the simple description that, for the highest values of the correlation rate, the system experiences an effective potential with an average barrier. The final investigation of our study concerns the degree of non-exponential behavior observed when $`\gamma <\mu _-`$. To quantify the difference between the measured pdf and an exponential pdf, we compare the standard deviation of the escape time with its average value. For an exponential pdf the two observables coincide; hence a difference between them signals a non-exponential pdf. In Fig. 3 we show the average escape time together with its standard deviation as a function of $`\gamma `$, measured by setting $`V_{rms}=0.816`$ V . The two regimes are clearly seen. For high values of $`\gamma `$ the pdf is well described by an exponential function, and here the average escape time and its standard deviation essentially coincide. When the correlation rate is less than $`\mu _-`$ a deviation from the exponential form starts to emerge, and it becomes more pronounced for the lowest values of $`\gamma `$. In particular, for very low values of the correlation rate ($`\gamma <\mu _+<\mu _-`$) the pdf assumes the form of a double exponential $$P(T)=\frac{1}{2<T_+>}\mathrm{exp}[-T/<T_+>]+\frac{1}{2<T_->}\mathrm{exp}[-T/<T_->].$$ (3) The inset of Fig. 3 shows $`P(T)`$ measured when $`\gamma =13`$ Hz in a semi-logarithmic plot. The shape is clearly non-exponential. It is well approximated by Eq. (3) when the measured values of $`<T_+>`$ and $`<T_->`$ are used (solid line in the inset). In this regime the correlation rate is smaller than the Kramers rates of both states. This implies that the system essentially starts and ends each realization of the stochastic process in one of the two states with probability 1/2, and the overall pdf is given by Eq. (3).

In this letter we have presented an experimental study detecting resonant activation, and we have verified some theoretical results obtained in model systems for limit regimes of the correlation rate. Resonant activation is observed in spite of the fact that the Gaussian noise mimicking the presence of temperature is colored instead of being “white”. In our opinion, this result shows that the resonant activation phenomenon is quite robust and may be observed in a variety of physical systems. By investigating the pdf of the escape time, we detect a non-exponential shape of the pdf in the regime of low values of $`\gamma `$. This finding suggests that systems characterized by a non-exponential escape time probability distribution could be simply modeled in terms of a metastable system with a fluctuating barrier.

We wish to thank INFM, ASI (contract ARS-98-83) and MURST for financial support.
# Observational constraints on the spectral index of the cosmological curvature perturbation

## I Introduction

It is generally supposed that structure in the Universe originates from a primordial gaussian curvature perturbation, generated by slow-roll inflation. The spectrum $`𝒫(k)`$ of the curvature perturbation is the point of contact between observation and models of inflation. It is given in terms of the inflaton potential $`V(\varphi )`$ by (as usual, $`M_\mathrm{P}=2.4\times 10^{18}\text{GeV}`$ is the Planck mass, $`a`$ is the scale factor, $`H=\dot{a}/a`$ is the Hubble parameter, and $`k/a`$ is the wavenumber; we assume the usual slow-roll conditions $`M_\mathrm{P}^2|V^{\prime \prime }/V|\ll 1`$ and $`M_\mathrm{P}^2(V^{\prime }/V)^2\ll 1`$, leading to $`3H\dot{\varphi }\simeq -V^{\prime }`$) $$\frac{4}{25}𝒫(k)=\frac{1}{75\pi ^2M_\mathrm{P}^6}\frac{V^3}{V^{\prime 2}},$$ (1) where the potential and its derivatives are evaluated at the epoch of horizon exit $`k=aH`$. To work out the value of $`\varphi `$ at this epoch one uses the relation $$\mathrm{ln}(k_{\mathrm{end}}/k)\equiv N(k)=M_\mathrm{P}^{-2}\int _{\varphi _{\mathrm{end}}}^\varphi (V/V^{\prime })\text{d}\varphi ,$$ (2) where $`N(k)`$ is the number of $`e`$-folds from horizon exit to the end of slow-roll inflation. At the scale explored by the COBE measurement of the cosmic microwave background (cmb) anisotropy, $`N(k_{\mathrm{COBE}})`$ depends on the expansion of the Universe after inflation in the manner specified by Eq. (31) below. Given this prediction, the observed large-scale normalization $`𝒫^{1/2}\sim 10^{-5}`$ provides a strong constraint on models of inflation.

Taking that for granted, we are here interested in the scale-dependence of the spectrum, defined by the, in general, scale-dependent spectral index $`n`$; $$n(k)-1\equiv \frac{\text{d}\mathrm{ln}𝒫}{\text{d}\mathrm{ln}k}.$$ (3) According to most inflation models, $`n`$ has negligible variation on cosmological scales so that $`𝒫\propto k^{n-1}`$, but we shall also discuss an interesting class of models giving a different scale-dependence. From Eqs. (1) and (2), $$n-1=2M_\mathrm{P}^2(V^{\prime \prime }/V)-3M_\mathrm{P}^2(V^{\prime }/V)^2,$$ (4) and in almost all models of inflation, Eq. (4) is well approximated by $$n-1=2M_\mathrm{P}^2(V^{\prime \prime }/V).$$ (5) We see that the spectral index measures the shape of the inflaton potential $`V(\varphi )`$, being independent of its overall normalization. For this reason, it is a powerful discriminator between models of inflation.

The observational constraints on the spectral index have been studied by many authors, but a new investigation is justified for two reasons. On the observational side, the cosmological parameters are at last being pinned down, as is the height of the first peak in the spectrum of the cmb anisotropy. No study has yet been given which takes on board these observational developments while at the same time including the crucial influence of the reionization epoch on the peak height. On the theory side, it is known that the spectral index may be strongly scale-dependent if the inflaton has a gauge coupling, leading to what are called running-mass models. The quite specific, two-parameter prediction for the scale dependence of the spectral index in these models has not been compared with presently available data.

## II The observational constraints on the parameters of the $`\mathrm{\Lambda }`$CDM model

Observations of various types indicate that we live in a low density Universe, which is at least approximately flat .
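Before turning to the data, here is a small numerical sketch of how Eqs. (2) and (4) turn a potential into a spectral index. The quartic example potential and all parameter values are our own assumptions, chosen only to exercise the formulas; it is of the 'new inflation' form ($`p=4`$) in the language of Section III below.

```python
import numpy as np

# Slow-roll bookkeeping of Eqs. (2) and (4), in units M_P = 1. The sample
# potential V = V0 (1 + c phi^4), with c < 0, is our own assumption.
V0, c = 1e-12, -1.0
V   = lambda phi: V0 * (1 + c * phi**4)
dV  = lambda phi: V0 * 4 * c * phi**3
d2V = lambda phi: V0 * 12 * c * phi**2

def spectral_index(phi):
    """n = 1 + 2 V''/V - 3 (V'/V)^2   (Eq. 4)."""
    return 1 + 2 * d2V(phi) / V(phi) - 3 * (dV(phi) / V(phi)) ** 2

def efolds(phi, phi_end):
    """N(phi) = int_{phi_end}^{phi} (V/V') dphi   (Eq. 2), by quadrature."""
    xs = np.linspace(phi_end, phi, 2000)
    return np.trapz(V(xs) / dV(xs), xs)

phi_exit, phi_end = 0.05, 0.3       # field values, also purely illustrative
print(f"N = {efolds(phi_exit, phi_end):.1f}")    # ~49 e-folds
print(f"n = {spectral_index(phi_exit):.3f}")     # ~0.94
```

For this potential the output agrees with the analytic estimate $`n-1=-2(p-1)/[(p-2)N]=-3/N`$ that follows from Eq. (30) below with $`p=4`$.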
In the interest of simplicity we therefore adopt the $`\mathrm{\Lambda }`$CDM model, defined by the requirements that the Universe is exactly flat, and that the non-baryonic dark matter is cold with negligible interaction. Essentially exact flatness is predicted by inflation, unless one invokes a special kind of model, or special initial conditions. Also, there is no clear motivation to modify the cold dark matter hypothesis. (In particular, the rotation curves of dwarf galaxies may be compatible with cold dark matter .) We shall constrain the parameters of the $`\mathrm{\Lambda }`$CDM model, including the spectral index, by performing a least-squares fit to key observational quantities.

### A The parameters

The $`\mathrm{\Lambda }`$CDM model is defined by the spectrum $`𝒫(k)`$ of the primordial curvature perturbation, and the four parameters that are needed to translate this spectrum into spectra for the matter density perturbation and the cmb anisotropy. The four parameters are the Hubble constant $`h`$ (in units of $`100\text{km}\text{s}^{-1}\text{Mpc}^{-1}`$), the total matter density parameter $`\mathrm{\Omega }_0`$, the baryon density parameter $`\mathrm{\Omega }_\mathrm{b}`$, and the reionization redshift $`z_\mathrm{R}`$. As we shall describe, $`z_\mathrm{R}`$ is estimated by assuming that reionization occurs when some fixed fraction $`f`$ of the matter collapses. Within the reasonable range $`f\sim 10^{-4}`$ to $`1`$, the main results are insensitive to the precise value of $`f`$. The spectrum is conveniently specified by its value at a scale explored by COBE, and the spectral index $`n(k)`$. We shall consider the usual case of a constant spectral index, and the case of running mass models where $`n(k)`$ is given by a two-parameter expression. Since $`𝒫(k_{\mathrm{COBE}})`$ is determined very accurately by the COBE data (Eq. (16) below), we fix its value. Excluding $`z_\mathrm{R}`$ and $`𝒫(k_{\mathrm{COBE}})`$, the $`\mathrm{\Lambda }`$CDM model is specified by four parameters in the case of a constant spectral index, or by five parameters in the case of running mass inflation models.

### B The data

To compare the $`\mathrm{\Lambda }`$CDM model with observation, we take as our starting point a study performed a few years ago . We consider the same seven observational quantities as in the earlier work, since they still summarize most of the relevant data. Of these quantities, three are the cosmological quantities $`h`$, $`\mathrm{\Omega }_0`$, $`\mathrm{\Omega }_\mathrm{B}`$, which we are also taking as free parameters. The crucial difference between the present situation and the earlier one is that observation is beginning to pin down $`h`$ and $`\mathrm{\Omega }_0`$. Judging by the spread of measurements, the systematic error, while still important, is no longer completely dominant compared with the random error. At least at some crude level, it therefore makes sense to pretend that the errors are all random, and to perform a least-squares fit. The adopted values and errors are given in Table 1, and summarized below. In common with earlier investigations, we take the errors to be uncorrelated.

#### a Hubble constant

On the basis of observations that have nothing to do with large scale structure, it seems very likely that $`h`$ is in the range $`0.5`$ to $`0.8`$. We therefore adopt, at notionally the 2-$`\sigma `$ level, the value $`h=0.65\pm 0.15`$, corresponding to $`h=0.65\pm 0.075`$ at the notional 1-$`\sigma `$ level.
#### b The matter density

The case of the total density parameter $`\mathrm{\Omega }_0`$ is similar to that of the Hubble parameter. On the basis of observations that have nothing to do with large scale structure, it seems very likely that $`\mathrm{\Omega }_0`$ lies between $`0.2`$ and $`0.5`$, and we adopt at the notional 1-$`\sigma `$ level the value $`\mathrm{\Omega }_0=0.35\pm 0.075`$.

#### c The baryon density

As described for instance in , the baryon density parameter $`\mathrm{\Omega }_\mathrm{b}`$ has two likely ranges. At the 1-$`\sigma `$ level, these are estimated in to be $`\mathrm{\Omega }_\mathrm{b}h^2=.019\pm .002`$ and $`\mathrm{\Omega }_\mathrm{b}h^2=.007\pm .0015`$. We adopt the high $`\mathrm{\Omega }_\mathrm{b}`$ range, which is generally regarded as the most likely, though our conclusions would be much the same if we were to adopt the low range.

#### d The rms density perturbation at $`8h^{-1}\text{Mpc}`$

Primarily through the abundance of rich galaxy clusters, a useful constraint on the primordial spectrum is provided by the rms density contrast, in a comoving sphere with present radius $`R\sim 10h^{-1}\text{Mpc}`$, at redshift $`z=0`$ to a few. The constrained quantity is conventionally taken to be the present, linearly evolved rms density contrast at $`R=8h^{-1}\text{Mpc}`$, denoted by $`\sigma _8`$. A recent estimate based on low-redshift clusters gives at 1-$`\sigma `$ $$\sigma _8=\stackrel{~}{\sigma }_8\mathrm{\Omega }_0^{-0.47},$$ (6) $$\stackrel{~}{\sigma }_8=.560\pm .059.$$ (7) This constrains the primordial curvature perturbation on the scale $`k\sim k_8\equiv (8h^{-1}\text{Mpc})^{-1}`$.

#### e The shape parameter

The slope of the galaxy correlation function on scales of order $`1h^{-1}`$ to $`100h^{-1}\text{Mpc}`$ is conveniently specified by a shape parameter $`\stackrel{~}{\mathrm{\Gamma }}`$, defined by $$\stackrel{~}{\mathrm{\Gamma }}=\mathrm{\Gamma }-0.28(n_8^{-1}-1),$$ (8) $$\mathrm{\Gamma }=\mathrm{\Omega }_0h\mathrm{exp}(-\mathrm{\Omega }_\mathrm{B}-\mathrm{\Omega }_\mathrm{B}/\mathrm{\Omega }_0).$$ (9) (The quantity $`\mathrm{\Gamma }`$ determines, to an excellent approximation, the shape of the matter transfer function on scales $`k^{-1}\sim 1`$ to $`100h^{-1}\text{Mpc}`$, while the second term accounts for the scale dependence of the primordial spectrum. For definiteness, we evaluate $`n`$ at $`k=k_8`$, in the case that $`n`$ has significant scale dependence.) A fit reported in gives $`\stackrel{~}{\mathrm{\Gamma }}=.23`$ with a $`15\%`$ uncertainty at 2-$`\sigma `$. A more recent fit with more data gives $`\stackrel{~}{\mathrm{\Gamma }}=.20`$ to $`.25`$, depending on the assumed velocity dispersion, but with $`15\%`$ statistical uncertainty at the 1-$`\sigma `$ level. (See Table 3 of ; in the present context one should focus on the last three rows of that Table.) We therefore adopt $`\stackrel{~}{\mathrm{\Gamma }}=.23`$, with $`15\%`$ uncertainty at 1-$`\sigma `$.

#### f The COBE normalization of the spectrum

To a good approximation, the spectrum $`C_\ell `$ of the cmb anisotropy at large $`\ell `$ is sensitive to the primordial spectrum on the corresponding scale at the particle horizon, $$k(\ell ,\mathrm{\Omega }_0)=\frac{\ell }{x_{\mathrm{hor}}(\mathrm{\Omega }_0)},$$ (10) $$x_{\mathrm{hor}}\simeq 2H_0^{-1}\mathrm{\Omega }_0^{-1/2}\left(1+0.084\mathrm{ln}\mathrm{\Omega }_0\right).$$ (11) The COBE measurements cover the range $`2\le \ell \lesssim 30`$, and they constrain $`𝒫(k)`$ on the corresponding scales.
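As a quick numerical check of how Eqs. (6)–(9) tie the adopted parameter values together, the following sketch (ours) evaluates the cluster-abundance target $`\sigma _8`$ and the shape parameter at the central values quoted above; the helper names are assumptions, the formulas are the ones just given.

```python
import numpy as np

# Cluster-abundance target sigma_8 (Eqs. 6-7) and shape parameter
# Gamma-tilde (Eqs. 8-9), evaluated at the central parameter values.
def sigma8_target(Omega0, sigma8_tilde=0.560):
    return sigma8_tilde * Omega0 ** -0.47

def gamma_tilde(Omega0, h, Omega_b, n8):
    Gamma = Omega0 * h * np.exp(-Omega_b - Omega_b / Omega0)
    return Gamma - 0.28 * (1.0 / n8 - 1.0)

Omega0, h, n8 = 0.35, 0.65, 1.0
Omega_b = 0.019 / h**2                 # high baryon range, Omega_b h^2 = .019
print(f"sigma_8 target = {sigma8_target(Omega0):.2f}")                # ~0.92
print(f"Gamma-tilde    = {gamma_tilde(Omega0, h, Omega_b, n8):.3f}")  # ~0.19
# To be compared with the adopted Gamma-tilde = 0.23 +/- 15%.
```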
Instead of $`𝒫`$, it is usual in this context to consider a quantity $`\delta _H`$, which is of direct interest for studies of structure formation and is defined by $$\delta _H(k)\equiv \frac{2}{5}\frac{g(\mathrm{\Omega }_0)}{\mathrm{\Omega }_0}𝒫^{1/2}(k),$$ (12) $$g(\mathrm{\Omega }_0)\equiv \frac{5}{2}\mathrm{\Omega }_0\left(\frac{1}{70}+\frac{209\mathrm{\Omega }_0}{140}-\frac{\mathrm{\Omega }_0^2}{140}+\mathrm{\Omega }_0^{4/7}\right)^{-1}.$$ (13) The factor $`g/\mathrm{\Omega }_0`$, normalized to 1 at $`\mathrm{\Omega }_0=1`$, represents the $`\mathrm{\Omega }_0`$-dependence of the present, linearly evolved, density contrast after pulling out the scale-dependent transfer function and $`𝒫`$. Equivalently, $`a(\mathrm{\Omega })g(\mathrm{\Omega })`$ is the time-dependence of the density contrast after matter domination. According to the ordinary (as opposed to ’integrated’) Sachs-Wolfe approximation $$C_\ell =\frac{4\pi }{25}\int _0^{\mathrm{\infty }}\frac{\text{d}k}{k}j_\ell ^2\left(kx_{\mathrm{hor}}\right)𝒫(k).$$ (14) In the regime $`\ell \gg 1`$, it satisfies Eq. (10) because $`j_\ell ^2`$ peaks when its argument is equal to $`\ell `$. In the $`\mathrm{\Lambda }`$CDM model, the Sachs-Wolfe approximation is quite good in the COBE regime, but still the quality of the data justifies using the full (linear) calculation, given for instance by the output of the CMBfast package .

Consider first the case $`n=1`$ (scale-independent spectrum). In the Sachs-Wolfe approximation, the value of $`𝒫`$ obtained by fitting the COBE data is independent of the cosmological parameters $`h`$, $`\mathrm{\Omega }_0`$ and $`\mathrm{\Omega }_\mathrm{b}`$. Using instead the full calculation, a fit to the data by Bunn and White gives $$\delta _H=\mathrm{\Omega }_0^{-0.785-0.05\mathrm{ln}\mathrm{\Omega }_0}\stackrel{~}{\delta }_H,$$ (15) $$10^5\stackrel{~}{\delta }_H=1.94\pm 0.08.$$ (16) As expected, the corresponding spectrum of the curvature perturbation has only mild dependence on $`\mathrm{\Omega }_0`$ ($`𝒫\propto \mathrm{\Omega }_0^{0.03}`$).

Consider next the case of a scale-independent spectral index $`n\ne 1`$. Dropping an insignificant term quadratic in $`n-1`$, the fit of Bunn and White handles the $`n`$-dependence by assuming that Eq. (16) holds at a ’pivot’ scale $`k_{\mathrm{COBE}}`$ which is independent of $`\mathrm{\Omega }_0`$. (Keeping the quadratic term, the ’pivot’ scale at which Eq. (16) holds depends on $`n`$, but is still independent of $`\mathrm{\Omega }_0`$. A related fit by Bunn, Liddle and White keeps a cross-term in $`(n-1)`$ and $`\mathrm{\Omega }_0`$, which makes the ’pivot’ scale increase with $`\mathrm{\Omega }_0^{-1}`$, though not as strongly as in Eq. (18) below.) $$k_{\mathrm{COBE}}\simeq 6.6H_0.$$ (17) Insofar as the approximation Eq. (10) is valid, this corresponds to fixing $`C_\ell `$ at an $`\mathrm{\Omega }_0`$-dependent value of $`\ell `$, which is $`\ell =13`$ for $`\mathrm{\Omega }_0=1`$, and $`\ell =22`$ for our central value $`\mathrm{\Omega }_0=.35`$. In the case of a scale-independent $`n`$, an alternative fit is provided by the CMBfast package, which chooses $`𝒫(k)`$ to fit an $`n`$-independent best-fit value of $`C_{10}`$. As expected, the output of CMBfast is in good agreement with the Bunn-White fit.
Even better agreement is obtained using $$k_{\mathrm{COBE}}(\mathrm{\Omega }_0)\simeq 13.2/x_{\mathrm{hor}},$$ (18) which reduces to Eq. (17) for $`\mathrm{\Omega }_0=1`$. Insofar as Eq. (10) is valid, this $`\mathrm{\Omega }_0`$-dependent pivot for $`k`$ corresponds to an $`\mathrm{\Omega }_0`$-independent pivot for $`\ell `$, namely $`\ell =13`$. We are also interested in the scale-dependent $`n`$ predicted by the running-mass inflation models. However, as the range of scales explored by COBE corresponds to only $`\mathrm{\Delta }N\simeq 2`$, with the central values of $`\ell `$ the most important, we can take the variation of $`n`$ to be negligible on these scales.

Guided by these considerations, we have adopted three slightly different versions of the COBE normalization, chosen for convenience according to the context. When calculating $`\stackrel{~}{\mathrm{\Gamma }}`$ and $`\stackrel{~}{\sigma }_8`$, we in all cases fixed $`\delta _H`$ at the central value given by Eq. (16), at the Bunn-White pivot point $`k_{\mathrm{COBE}}`$. When calculating the height of the first peak in the cmb anisotropy, in the case of the running-mass model, we used Eq. (10), with $`\delta _H`$ again fixed at the central value given by Eq. (16) but now evaluated at the slightly more accurate pivot point $`k_{\mathrm{COBE}}(\mathrm{\Omega }_0)`$. Finally, when evaluating the peak height in the case of scale-independent $`n`$, we used a linear fit to the output of CMBfast. Explicit expressions for the peak height will be given after considering the effect of reionization.

#### g The peak height

The model under consideration predicts a peak in the cmb anisotropy at $`\ell \simeq 210`$ to $`230`$, and presently available data confirm the existence of a peak at about this position. We adopt as a crucial observational quantity $`\stackrel{~}{C}_{\mathrm{peak}}`$, defined as the maximum value of $$\stackrel{~}{C}_\ell \equiv \ell (\ell +1)C_\ell /2\pi .$$ (19) Presently available data give conflicting estimates of $`\sqrt{\stackrel{~}{C}_{\mathrm{peak}}}`$, with central values in the range 70 to $`90\mu \text{K}`$. We adopt $`(80\pm 10)\mu \text{K}`$, with the uncertainty taken to be at 1-$`\sigma `$.

### C Reionization

The effect of reionization on the cmb anisotropy is determined by the optical depth $`\tau `$. We assume sudden, complete reionization at redshift $`z_\mathrm{R}`$, so that the optical depth $`\tau `$ is given by $$\tau =0.035\frac{\mathrm{\Omega }_\mathrm{b}}{\mathrm{\Omega }_0}h\left(\sqrt{\mathrm{\Omega }_0(1+z_\mathrm{R})^3+1-\mathrm{\Omega }_0}-1\right).$$ (20) In previous investigations, $`z_\mathrm{R}`$ has been regarded as a free parameter, usually fixed at zero or some other value. In this investigation, we instead take on board the fact that $`z_\mathrm{R}`$ can be estimated, in terms of the parameters that we are varying plus assumed astrophysics. Indeed, it is usually supposed that reionization occurs at an early epoch, when some fraction $`f`$ of the matter has collapsed, into objects with mass very roughly $`M=10^6M_{\odot }`$. Estimates of $`f`$ are in the range $$10^{-4.4}\lesssim f\lesssim 1.$$ (21) In the case $`f\ll 1`$, the Press-Schechter approximation gives the estimate $$1+z_\mathrm{R}\simeq \frac{\sqrt{2}\sigma (M)}{\delta _\mathrm{c}g(\mathrm{\Omega }_0)}\mathrm{erfc}^{-1}(f)(f\ll 1).$$ (22) Here $`\sigma (M)`$ is the present, linearly evolved, rms density contrast with top-hat smoothing, and $`\delta _\mathrm{c}=1.7`$ is the overdensity required for gravitational collapse.
($`g`$ is the suppression factor of the linearly evolved density contrast at the present epoch, which does not apply at the epoch of reionization.) In the case $`f\sim 1`$, one can justify only the rough estimate $$1+z_\mathrm{R}\sim \frac{\sigma (M)}{g(\mathrm{\Omega }_0)}(f\sim 1).$$ (23) (This estimate is not very different from the one that would be obtained by using $`f=1`$ in Eq. (22).) In our fits, we fix $`f`$ at different values in the above range, and find that the most important results are not very sensitive to $`f`$, even though the corresponding values of $`z_\mathrm{R}`$ can be quite high.

### D The predicted peak height

The CMBfast package gives $`C_\ell `$, for given values of the parameters, with $`n`$ taken to be scale-independent. Following , we parameterize the CMBfast output at the first peak in the form $$\sqrt{\stackrel{~}{C}_{\mathrm{peak}}}=\sqrt{\stackrel{~}{C}_{\mathrm{peak}}^{(0)}}\left(\frac{220}{10}\right)^{\nu /2},$$ (24) where $$\nu \equiv a_n(n-1)+a_h\mathrm{ln}(h/0.65)+a_0\mathrm{ln}(\mathrm{\Omega }_0/0.35)+a_\mathrm{b}h^2(\mathrm{\Omega }_\mathrm{b}-\mathrm{\Omega }_{\mathrm{b}}^{(0)})-0.65f(\tau )\tau .$$ (25) $`\sqrt{\stackrel{~}{C}_{\mathrm{peak}}^{(0)}}`$ is the value of $`\sqrt{\stackrel{~}{C}_{\mathrm{peak}}}`$ evaluated with each term of $`\nu `$ equal to zero. The coefficients for the high choice $`\mathrm{\Omega }_\mathrm{b}^{(0)}h^2=0.019`$ are $`a_n=0.88`$, $`a_h=0.37`$, $`a_0=0.16`$, $`a_\mathrm{b}=5.4`$, and $`\sqrt{\stackrel{~}{C}_{\mathrm{peak}}^{(0)}}=77.5\mu \text{K}`$. The formula reproduces the CMBfast results to within 10% for a 1-$`\sigma `$ variation of the cosmological parameters $`h,\mathrm{\Omega }_0`$ and $`\mathrm{\Omega }_\mathrm{b}`$, and $`n_{\mathrm{COBE}}=1.0\pm 0.05`$. With the function $`f(\tau )`$ set equal to 1, the term $`-0.65\tau `$ is equivalent to multiplying $`\sqrt{\stackrel{~}{C}_{\mathrm{peak}}}`$ by the usual factor $`\mathrm{exp}(-\tau )`$. We use the following formula, which was obtained by fitting the output of CMBfast, and is accurate to a few percent over the interesting range of $`\tau `$; $$f=1-0.165\tau /(0.4+\tau ).$$ (26) For the running-mass model, we start with the above estimate for $`n=1`$, and adjust it using Eq. (10). Adopting the COBE normalization mentioned earlier, this adjustment is $$\frac{\sqrt{\stackrel{~}{C}_{\mathrm{peak}}}}{\sqrt{\stackrel{~}{C}_{\mathrm{peak}}^{(\mathrm{n}=1)}}}=\frac{\delta _H(k(\ell ,\mathrm{\Omega }_0))}{\delta _H(k_{\mathrm{COBE}}(\mathrm{\Omega }_0))}.$$ (27) In the case of constant $`n`$, this prescription corresponds to the previous one with $`a_n=0.91`$, in good agreement with the output of CMBfast.

## III Constant spectral index

### A The observational constraints

Most models of inflation make $`n`$ roughly scale-independent over the cosmologically interesting range. We therefore begin by considering the case that $`n`$ is exactly scale-independent. The resulting bound on $`n`$ is shown in Figure 1. In the left-hand panel we make the traditional assumption that reionization occurs at some fixed redshift $`z_\mathrm{R}`$. In the right-hand panel we make the more reasonable assumption that it occurs when some fixed fraction $`f`$ of the matter collapses, in a reasonable range $`10^{-4.5}<f<1`$. The bounds in the latter case are relatively insensitive to $`f`$, because the corresponding range of $`z_\mathrm{R}`$ is narrower; everywhere on the displayed curves, $`z_\mathrm{R}`$ is within (usually well within) the range $`8`$ to $`36`$.
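A minimal sketch of the reionization bookkeeping used above: it evaluates the optical depth of Eq. (20), the suppression function of Eq. (26), and the peak-height tilt of Eqs. (24)–(25) at the central parameter values. The helper names are ours; the coefficients are the ones quoted above.

```python
import numpy as np

# Optical depth for sudden, complete reionization at z_R (Eq. 20).
def tau_reion(z_R, Omega0=0.35, Omega_b=0.045, h=0.65):
    return 0.035 * (Omega_b / Omega0) * h * (
        np.sqrt(Omega0 * (1 + z_R) ** 3 + 1 - Omega0) - 1)

# Peak height of Eqs. (24)-(25) with the suppression f(tau) of Eq. (26),
# at the central values h = 0.65, Omega0 = 0.35, Omega_b h^2 = 0.019, so
# that only the (n - 1) and tau terms of nu survive.
def peak_height(n, z_R, a_n=0.88, C0=77.5):
    tau = tau_reion(z_R)
    f = 1 - 0.165 * tau / (0.4 + tau)
    nu = a_n * (n - 1) - 0.65 * f * tau
    return C0 * (220 / 10) ** (nu / 2)

for z_R in (8, 20, 36):
    print(f"z_R = {z_R:2d}: tau = {tau_reion(z_R):.3f}, "
          f"sqrt(C_peak) = {peak_height(n=1.0, z_R=z_R):.1f} muK")
```

For $`n=1`$ this gives roughly 74, 66 and 58 $`\mu `$K across the quoted $`z_\mathrm{R}`$ range, which illustrates why, with a peak height of $`80\pm 10\mu `$K, higher reionization redshifts pull the fit toward $`n>1`$.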
Details of the fit for $`z_\mathrm{R}=20`$ are given in Table 1. Practically the same fit is obtained if instead we fix $`f`$ at $`10^{-1.9}`$. The least-squares fits were performed with the CERN minuit package, and the quoted error bars invoke the usual parabolic approximation (i.e., they are the diagonal elements of the error matrix). The exact error bars given by the same package agree to better than 10%. For $`z_\mathrm{R}`$, our results are similar to those obtained in , but more precise because of improvements in our knowledge of the cosmological parameters; they are also similar to those obtained in , if we take the errors to be the ones given by the error matrix. (We do not know why the exact error bars in are about three times bigger, in conflict with both our work and that of .) After we completed this work, the BOOMeranG and MAXIMA measurements of the cmb anisotropy appeared, both of which extend to the second acoustic peak. Fits to these data seem again to give a similar constraint on $`n`$, but values for $`\mathrm{\Omega }_\mathrm{b}`$, $`\mathrm{\Omega }_\mathrm{c}`$ and $`h`$ outside our adopted 2-$`\sigma `$ range. At the time of writing, the new cmb data have not been included in a fit of the type that we are performing (i.e., with strong prior requirements on the cosmological parameters, as well as on the small-scale data $`\stackrel{~}{\sigma }_8`$ and $`\stackrel{~}{\mathrm{\Gamma }}`$).

### B Models of inflation giving $`n<1`$

Although the quality and quantity of data are insufficient for a proper statistical analysis, these bounds on $`n`$ are very striking when compared with theoretical expectations. These expectations are summarized in Tables 2 and 3 (the Tables exclude the running-mass models to be discussed later, a recently-proposed model giving $`(n-1)/2=2/N`$, and the ad hoc ’chaotic inflation’ potentials $`V\propto \varphi ^p`$, which give $`n-1=-(2+p)/(2N)`$ with a significant gravitational contribution to the cmb anisotropy), and we now discuss them beginning with the usual case $`n<1`$ (red spectrum). Details of the models and references are given in . The simplest prediction is for a potential of the form (in this expression, and in Eqs. (29) and (36) below, the remaining terms are supposed to be negligible, and $`V_0`$ is supposed to dominate, while cosmological scales leave the horizon) $$V=V_0-\frac{1}{2}m^2\varphi ^2+\mathrm{},$$ (28) leading to $`n-1=-2M_\mathrm{P}^2m^2/V_0`$. This is the form that one expects if $`\varphi `$ is a string modulus (Modular Inflation), or a pseudo-Goldstone boson (Natural Inflation), or the radial part of a massive field spontaneously breaking a symmetry (Topological Inflation). The vacuum expectation value (vev) of $`\varphi `$ in these models is expected to be of order $`M_\mathrm{P}`$ or less, while the potential Eq. (28) gives $`\varphi \gtrsim (1-n)^{-1/2}M_\mathrm{P}`$. Therefore, the present bound $`n\gtrsim 0.9`$ is already beginning to disfavor these models. The potential Eq. (28) may however give $`n`$ very close to 1 if the potential steepens after cosmological scales leave the horizon, for instance in an inverted hybrid inflation model. Of the remaining models of Table 2, those giving a red spectrum involve a potential basically of the form $$V=V_0\left(1+c\varphi ^p+\mathrm{}\right),$$ (29) with $`c`$ negative and $`p`$ not in the range $`1\le p\le 2`$. ( ’New’ inflation corresponds to $`p`$ an integer $`\ge 3`$, while mutated hybrid inflation models account for the rest of the range.
The logarithmic and exponential potentials in Table 2 may be regarded as the limits respectively $`p\to 0`$ and $`p\to \mathrm{\infty }`$.) With this form, the prediction is $$n-1=-\left(\frac{p-1}{p-2}\right)\frac{2}{N}.$$ (30) For the moment, we ignore the mild scale-dependence and set $`N=N_{\mathrm{COBE}}`$. Depending on the history of the Universe, $$N_{\mathrm{COBE}}\simeq 60-\mathrm{ln}(10^{16}\text{GeV}/V^{1/4})-\frac{1}{3}\mathrm{ln}(V^{1/4}/T_{\mathrm{reh}})+N_0.$$ (31) In this expression, $`T_{\mathrm{reh}}`$ is the reheat temperature, while the final contribution $`N_0`$ (negative in all reasonable cosmologies) encodes our ignorance about what happens between the end of inflation and nucleosynthesis.

Let us pause to discuss this ignorance. In the present context, we are defining $`T_{\mathrm{reh}}`$ as the temperature when the Universe first becomes radiation dominated after inflation. In the conventional cosmology, radiation domination persists until the present matter dominated era begins, long after nucleosynthesis. If this is the case, and if also slow-roll inflation gives way promptly to matter domination, as is the case in most models, then $`N_0=0`$. (In some inflation models, slow-roll is followed by an extended era of fast-roll, giving $`N_0`$ of order a few; for simplicity we ignore that possibility in the present discussion.) In this conventional case, $`N_{\mathrm{COBE}}`$ is largely determined by $`V_0^{1/4}`$, and hence by the model of inflation. It is certainly in the range $`32`$ to $`60`$ (the lower limit corresponding to $`V_0^{1/4}=100\text{GeV}`$) and much more likely in the range $`40`$ to $`60`$ (the lower limit corresponding to $`V_0^{1/4}\sim 10^{10}\text{GeV}`$ and $`T_{\mathrm{reh}}\sim 100\text{GeV}`$). However, the conventional cosmology need not be correct. In particular, the initial radiation-dominated era may give way to matter domination by a late-decaying particle, and most crucially there may be an era of thermal inflation during the transition. This unconventional cosmology, with its huge entropy dilution after inflation, is indeed demanded in many inflation models, if gravitinos created from the vacuum fluctuation persist to late times . Even one bout of thermal inflation will give $`N_0\simeq -10`$, and additional bout(s) cannot be ruled out. Thus, from the theoretical viewpoint, $`N_{\mathrm{COBE}}`$ can be anywhere in the range $`0`$ to $`60`$.

Let us discuss the prediction Eq. (30), excluding for simplicity the ranges $`0<p<1`$ and $`2<p<3`$ (recall that the straightforward ’new’ inflation models make $`p`$ an integer $`\ge 3`$). Taking the maximum value $`N_{\mathrm{COBE}}\simeq 60`$, we learn that $`n<0.93`$ for $`p=3`$ (the lowest prediction), and $`n<0.95`$ for $`p=4`$. Looking at the right-hand panel of Figure 1, we see that at the nominal 1-$`\sigma `$ level the former case is ruled out, though it is still allowed at the 2-$`\sigma `$ level. Stronger results hold if $`N_{\mathrm{COBE}}<60`$. Looking at things another way, a lower bound on $`n`$ gives a lower bound on $`N_{\mathrm{COBE}}`$, $$N_{\mathrm{COBE}}>\frac{p-1}{p-2}\frac{2}{1-n}.$$ (32) Even with present data, the 2-$`\sigma `$ result $`n\gtrsim .9`$ gives $`N_{\mathrm{COBE}}\gtrsim 40`$ for $`p=3`$, and $`N_{\mathrm{COBE}}\gtrsim 20`$ for $`p\gg 3`$. The scale dependence given by Eq.
(30) is $$\frac{\text{d}n}{\text{d}\mathrm{ln}k}=-\frac{1}{2}\left(\frac{p-2}{p-1}\right)\left(n-1\right)^2<0.$$ (33) Over the cosmological range of scales $`\mathrm{ln}(k/k_{\mathrm{COBE}})`$ is at most a few, and in particular $`\mathrm{ln}(8^{-1}h\text{Mpc}^{-1}/k_{\mathrm{COBE}})\simeq 4`$, corresponding to $$\mathrm{\Delta }n\equiv n_8-n_{\mathrm{COBE}}=-.02\left(\frac{p-2}{p-1}\right)\left(\frac{n-1}{0.1}\right)^2<0.$$ (34) Taking $`n=0.9`$ to saturate the present bound, this gives $`|\mathrm{\Delta }n|<0.02`$ with $`p\ge 3`$, and $`|\mathrm{\Delta }n|<0.04`$ with $`p\to 0`$. Even in the latter case, the change in $`n`$ is hardly significant with present data.

### C Models giving $`n>1`$

Known models giving $`n>1`$ (blue spectrum) are all of the hybrid inflation type. The simplest case is $`V=V_0+\frac{1}{2}m^2\varphi ^2`$; it gives the scale-independent prediction $`n-1=2M_\mathrm{P}^2m^2/V_0`$, which may be either close to 1 or well above 1. The other cases involve a potential of the form $`V=V_0\left(1+c\varphi ^p\right)`$ with positive $`c`$, and $`p`$ an integer $`\ge 3`$ or $`\le -1`$. There is a maximum (early-time) value for $`N`$, and the prediction $$n-1=\frac{p-1}{p-2}\frac{1}{N_{\mathrm{max}}-N}.$$ (35) Barring the fine-tuning $`N_{\mathrm{COBE}}\simeq N_{\mathrm{max}}`$, this gives $`n-1\lesssim 0.04`$, which is compatible with the observational bound. The scale-dependence of $`n`$ in these models is still given by Eqs. (33) and (34); it may be observationally significant only in the fine-tuned case $`N_{\mathrm{COBE}}\simeq N_{\mathrm{max}}`$, which we have not investigated.

## IV The running mass models

### A The potential

We have also done fits with the scale-dependent spectral index predicted in inflation models with a running inflaton mass . In these models, based on softly broken supersymmetry, one-loop corrections to the tree-level potential are taken into account by evaluating the inflaton mass-squared $`m^2(\mathrm{ln}Q)`$ at the renormalization scale $`Q\simeq \varphi `$. (The choice $`Q\simeq \varphi `$ is to be made in the regime where $`\varphi `$ is bigger than the relevant masses. When $`Q`$ falls below the relevant masses, $`m^2(Q)`$ becomes practically scale-independent (the mass ’stops running’). We have a running mass model if inflation takes place in the former regime, which happens in some interesting cases , including that of a gauge coupling.) $$V=V_0+\frac{1}{2}m^2(\mathrm{ln}Q)\varphi ^2+\mathrm{}.$$ (36) Over any small range of $`\varphi `$, it is a good approximation to take the running mass to be a linear function of $`\mathrm{ln}\varphi `$. This is equivalent to choosing the renormalization scale to be within the range, and then adding the loop correction explicitly, $$V=V_0+\frac{1}{2}m^2(\mathrm{ln}Q)\varphi ^2-\frac{1}{2}c(\mathrm{ln}Q)\frac{V_0}{M_\mathrm{P}^2}\varphi ^2\mathrm{ln}(\varphi /Q).$$ (37) The dimensionless quantity $`c`$ specifies the strength of the coupling. Let us discuss its likely magnitude, taking for definiteness $`Q=\varphi _{\mathrm{COBE}}`$. It has been shown that the linear approximation is very good over the range of $`\varphi `$ corresponding to horizon exit for scales between $`k_{\mathrm{COBE}}`$ and $`8h^{-1}\text{Mpc}`$. We shall want to estimate the reionization epoch, which involves a scale of order $`k_{\mathrm{reion}}^{-1}\sim 10^{-2}\text{Mpc}`$ (enclosing the relevant mass of order $`10^6M_{\odot }`$). Since only a crude estimate of the reionization epoch is needed, we shall assume that the linear approximation is adequate down to this ‘reionization scale’.
In other words, we assume that it is adequate for $`\varphi `$ between $`\varphi _{\mathrm{COBE}}`$ and $`\varphi _{\mathrm{reion}}`$, the subscripts denoting the value of $`\varphi `$ when the relevant scale leaves the horizon. Within this range, it is convenient to write Eq. (37) in the form $$V=V_0-\frac{1}{2}\frac{V_0}{M_\mathrm{P}^2}c\varphi ^2\left(\mathrm{ln}\frac{\varphi }{\varphi _{*}}-\frac{1}{2}\right),$$ (38) so that $$V^{\prime }=-\frac{V_0}{M_\mathrm{P}^2}c\varphi \mathrm{ln}\frac{\varphi }{\varphi _{*}}.$$ (39) In these expressions, the constants $`c`$ and $`\varphi _{*}`$ both depend on the renormalization scale $`Q`$, which can be chosen anywhere in the range corresponding to cosmological scales (say $`Q=\varphi _{\mathrm{COBE}}`$). The dimensionful constant $`\varphi _{*}`$ is related to the mass-squared by $$\mathrm{ln}(\varphi _{*}/Q)=\frac{m^2(Q)M_\mathrm{P}^2}{c(Q)V_0}-\frac{1}{2}.$$ (40) Note that the limit of no running, $`c\to 0`$, corresponds to finite $`c|\mathrm{ln}(\varphi /\varphi _{*})|`$, so that Eq. (38) in that limit gives back Eq. (36) with a constant mass.

In general, the point $`\varphi =\varphi _{*}`$ may be far outside the regime where the linear approximation Eq. (38) applies. However, in simple models the cosmological regime is sufficiently close to that point that the linear approximation is approximately valid there. In that case, we can trust Eq. (38) and its derivatives at $`\varphi =\varphi _{*}`$; since $`V^{\prime }`$ vanishes at that point, there are four clearly distinct models of inflation, as shown in Figure 2. The labeling (i), (ii), (iii) and (iv) is the one introduced in . In Models (i) and (ii), $`c`$ is positive and the potential has a maximum near $`\varphi _{*}`$, while in Models (iii) and (iv), $`c`$ is negative and there is a minimum. In Models (i) and (iv), $`\varphi `$ moves towards the origin, while in Models (ii) and (iii) the opposite is true. Even if Eq. (38) is not valid near $`\varphi =\varphi _{*}`$, this fourfold classification of models, according to the sign of $`c`$ and the direction of motion of $`\varphi `$, is still useful.

Let us discuss the likely magnitude of $`c`$, assuming that a single coupling dominates the loop correction. The value of $`c`$ is conveniently obtained from the well-known RGE for $`\text{d}m^2/\text{d}(\mathrm{ln}Q)`$. If a gauge coupling dominates, one finds $$\frac{V_0c}{M_\mathrm{P}^2}=\frac{2C}{\pi }\alpha \stackrel{~}{m}^2.$$ (41) Here, $`C`$ is a positive group-theoretic number of order 1, $`\alpha `$ is the gauge coupling, and $`\stackrel{~}{m}`$ is the gaugino mass. We see that if the loop correction comes from a single gauge coupling, $`c`$ is positive, corresponding to Model (i) or Model (ii). If a Yukawa coupling dominates, one finds (for negligible supersymmetry breaking trilinear coupling) $$\frac{V_0c}{M_\mathrm{P}^2}=\frac{D}{16\pi ^2}|\lambda |^2m_{\mathrm{loop}}^2,$$ (42) where $`D`$ is a positive constant counting the number of scalar particles interacting with the inflaton, $`m_{\mathrm{loop}}^2`$ is their common susy breaking mass-squared, and $`\lambda `$ is their common Yukawa coupling. In this case, $`c`$ can be of either sign. To complete our estimate of $`c`$, we need the gaugino or scalar mass. The traditional hypothesis is that soft supersymmetry breaking is gravity-mediated, and in the context of inflation this means that the scale $`M_\mathrm{S}`$ of supersymmetry breaking will be roughly $`V_0^{1/4}`$.
(As usual we are defining $`M_\mathrm{S}\equiv \sqrt{F}`$, where $`F`$ is the auxiliary field responsible for spontaneous supersymmetry breaking in the hidden sector. We also assume that there is no accurate cancelation in the formula $`V=|F|^2-3M_\mathrm{P}^2m_{3/2}^2`$, which is the case in most supersymmetric inflation models .) With gravity-mediated susy breaking, typical values of the masses are $`\stackrel{~}{m}^2\sim |m_{\mathrm{loop}}^2|\sim V_0/M_\mathrm{P}^2`$, which makes $`|c|`$ of order the coupling strength $`\alpha `$ or $`|\lambda |^2`$. At least in the case of a gauge coupling, one then expects $$|c|\sim 10^{-1}\text{ to }10^{-2}.$$ (43) In special versions of gravity-mediated susy breaking, the masses could be much smaller, leading to $`|c|\ll 1`$. In that case, the mass would hardly run, and the spectral index would be practically scale-independent. With gauge-mediated susy breaking, the masses could be much bigger; this would not lead to a model of inflation (unless the coupling is suppressed) because it would not satisfy the slow-roll requirement $`|c|\lesssim 1`$.

### B The spectrum and the spectral index

Using Eq. (2) we find $$s\mathrm{}e^{c\mathrm{\Delta }N(k)}=c\mathrm{ln}(\varphi _{*}/\varphi ),$$ (44) $$\mathrm{\Delta }N(k)\equiv N_{\mathrm{COBE}}-N(k)\simeq \mathrm{ln}(k/k_{\mathrm{COBE}}),$$ (45) where $`s`$ is an integration constant. (In an earlier paper we used $`\sigma \equiv se^{cN_{\mathrm{COBE}}}`$, but $`s`$ is more convenient.) Eq. (5) then gives $$\frac{n(k)-1}{2}=se^{c\mathrm{\Delta }N(k)}-c.$$ (46) Some lines of fixed $`n_{\mathrm{COBE}}`$ in the plane $`s`$ versus $`c`$ are shown in the left-hand panel of Figure 3. In order to evaluate Eq. (27), we also need the variation of $`\delta _H`$, which comes from integrating this expression, $$\frac{\delta _H(k)}{\delta _H(k_{\mathrm{COBE}})}=\mathrm{exp}\left[\frac{s}{c}\left(e^{c\mathrm{\Delta }N}-1\right)-c\mathrm{\Delta }N\right].$$ (47) We are mostly interested in cosmological scales between $`k_{\mathrm{COBE}}`$ and $`k_8`$, corresponding to $`0\lesssim \mathrm{\Delta }N\lesssim 4`$. In this range the scale-dependence of $`n`$ is approximately linear (taking $`|c|\lesssim 1`$), and the variation $`\mathrm{\Delta }n\equiv n_8-n_{\mathrm{COBE}}`$ is given approximately by $$\mathrm{\Delta }n\simeq 4\frac{\text{d}n}{\text{d}\mathrm{ln}k}\simeq 8sc.$$ (48) In contrast with the prediction Eqs. (33) and (34) of the earlier models we considered, $`\mathrm{\Delta }n`$ is positive. Also in contrast with those models, it is not tied to the magnitude of $`|n-1|`$, and (as we shall see) may be significant even with present data, for physically reasonable values of the parameters. In the right-hand panel of Figure 3, we show the branches of the hyperbola $`8sc=\mathrm{\Delta }n`$, for the reference value $`\mathrm{\Delta }n=0.04`$. Within the hyperbola, the scale-dependence of $`n`$ is probably too small to be significant with present data.

The spectral index Eq. (46) depends on the coupling $`c`$, which we already discussed, and the integration constant $`s`$. To satisfy the slow-roll conditions $`M_\mathrm{P}^2|V^{\prime \prime }/V|\ll 1`$ and $`M_\mathrm{P}^2(V^{\prime }/V)^2\ll 1`$, both $`c`$ and $`s`$ must be at most of order 1 in magnitude. Significant additional constraints on $`s`$ follow if we make the reasonable assumptions that the mass continues to run to the end of slow-roll inflation, and that the linear approximation remains roughly valid. Indeed, setting $`\mathrm{\Delta }N=N_{\mathrm{COBE}}`$, Eq.
(44) becomes $`s=e^{-cN_{\mathrm{COBE}}}c\mathrm{ln}(\varphi _{*}/\varphi _{\mathrm{end}})`$. Discounting the possibility that the end of inflation is very fine-tuned, to occur close to the maximum or minimum of the potential, this gives a lower bound $$|s|\gtrsim e^{-cN_{\mathrm{COBE}}}|c|.$$ (49) In the case of positive $`c`$ (Models (i) and (ii)), we also obtain a significant upper bound by setting $`\mathrm{\Delta }N=N_{\mathrm{COBE}}`$ in Eq. (46), and remembering that slow-roll requires $`|n-1|\lesssim 1`$; $$|s|\lesssim e^{-cN_{\mathrm{COBE}}}(c>0).$$ (50) In the simplest case, that slow-roll inflation ends when $`n-1`$ actually becomes of order 1, this bound becomes an actual estimate, $`|s|\sim e^{-cN_{\mathrm{COBE}}}`$. In the case of Models (i) and (iv), the mass may cease to run before the end of slow-roll inflation (but after cosmological scales leave the horizon, or the running mass model would not apply), at some point $`N_{\mathrm{run}}`$. In this somewhat fine-tuned situation, $`N_{\mathrm{COBE}}`$ in the above estimates should be replaced by $`N_{\mathrm{COBE}}-N_{\mathrm{run}}`$, which may be much less than $`N_{\mathrm{COBE}}`$. In the case of Model (iv), this leads to a weaker lower bound $$s\gtrsim |c|(c<0).$$ (51) In the case of Model (i) it leads to a weaker upper bound $$s\lesssim 1(c>0).$$ (52) In the left-hand panel of Figure 3, we show the bounds relevant to the choice of parameter ranges, i.e. the lower bound Eq. (49), the upper bound Eq. (50) and the weak lower bound Eq. (51).

### C The magnitude of the spectrum

Although it is not directly relevant for our investigation of the spectral index, we should mention the constraint on the running mass model that comes from the observed magnitude $`𝒫^{1/2}\sim 10^{-5}`$ of the spectrum. From Eq. (1), $$\frac{4}{25}𝒫=\frac{1}{75\pi ^2}\frac{V_0}{\varphi _{*}^2M_\mathrm{P}^2}\mathrm{exp}\left(\frac{2s}{c}\right)\frac{1}{|s|^2}.$$ (53) This prediction involves $`V_0`$ and $`\varphi _{*}`$, in addition to the parameters $`c`$ and $`s`$ that determine the spectral index. The simplest thing is to again assume gravity-mediated susy breaking, with the ultra-violet cutoff at the traditional scale around $`M_\mathrm{P}`$, and the same supersymmetry breaking scale during inflation as in the true vacuum, so that $`V_0^{1/4}\sim 10^{10}\text{GeV}`$. In this scenario, one expects $`|m^2(Q)|\sim V_0/M_\mathrm{P}^2`$ at $`Q\sim M_\mathrm{P}`$. As Stewart pointed out in the first paper on the subject, with this very traditional set of assumptions, Eq. (53) can give the correct COBE normalization, with $`|c|`$ in the physically favored range $`10^{-1}`$ to $`10^{-2}`$. (At the crudest level, one can verify this using the linear approximation Eq. (38) all the way up to $`\varphi \sim M_\mathrm{P}`$, corresponding to $`\mathrm{ln}(M_\mathrm{P}/\varphi _{*})\sim 1/c\sim 10`$ to $`100`$. Proper calculations using the RGE’s lead to the same conclusion.) It is remarkable that the most traditional set of assumptions can give a model with the correct COBE normalization, and, as we shall see, with a viable spectral index. If one relaxes these assumptions, there is much more freedom in choosing $`V_0`$ and $`\varphi _{*}`$. Such freedom may be very welcome, in coping with the difficulty of implementing inflation in the context of large extra dimensions .

### D Observational constraints on the running mass models

Extremizing with respect to all other parameters, we have calculated $`\chi ^2`$ in the $`s`$ vs.
$`c`$ plane and obtained contour levels for $`\chi ^2`$ equal to the minimum value plus $`2.41`$ and $`5.99`$ respectively, corresponding nominally to the 70% and 95% confidence levels in two variables. (The $`\chi ^2`$ function actually presents two nearly degenerate minima in the allowed region, one in the positive and one in the negative quadrant (Models (i) and (iii)), separated by a very low barrier, but we assume that the usual quadratic estimate of the probability content is not very far from the true value.) The allowed region is shown in the right-hand panel of Figure 3, for the case that reionization occurs when $`f\sim 1`$. For $`c=0`$ or $`s=0`$ the constant-$`n`$ result is recovered, with $`n-1=2s`$ or $`n-1=-2c`$ respectively; our plots give in this case a slightly larger allowed interval with respect to the two-sigma values of the previous section, owing to the mismatch between the statistical one-variable and two-variable 95% CL contours. This allowed region is not too different from the one that we estimated earlier , by imposing the crude requirement $`|n-1|<0.2`$ at both the COBE scale and the low scale corresponding to $`N=N_{\mathrm{COBE}}-10`$. (Note that in the earlier work we used the less convenient variable $`\sigma \equiv s\mathrm{exp}(cN_{\mathrm{COBE}})`$, instead of $`s`$.)

The allowed region for Models (ii) and (iv) lies inside the hyperbola corresponding to $`\mathrm{\Delta }n=.04`$, which means that their scale-dependence is hardly significant at the level of present data. In contrast, the allowed region for Models (i) and (iii) extends to $`\mathrm{\Delta }n\simeq 0.2`$, representing an extremely significant scale-dependence even with present data. To demonstrate this, we show in Figures 4 and 5 the allowed regions for Models (i) and (iii) in the $`n_8`$ versus $`n_{\mathrm{COBE}}`$ plane. In the case of Model (iii), the theoretical bounds on the parameters restrict the parameter space to a small corner of the allowed region, within which $`n`$ has negligible variation. In contrast, there is no significant theoretical restriction on the parameters in the case of Model (i), and $`n`$ has significant variation in a physically reasonable regime of parameter space. In both cases, a lower value of the fraction of collapsed matter $`f`$ just reduces the allowed region at large $`n`$, without significantly affecting the allowed scale-dependence of $`n`$.

In the case of Model (i), a further observational constraint comes from the requirement that the density perturbation on scales leaving the horizon at the end of inflation should be small enough to avoid dangerous black hole formation. The linear approximation is not adequate on such small scales, and one should instead evaluate the running mass using the RGE. The simplest assumption is that the RGE corresponds to a single gauge coupling, either with or without asymptotic freedom . The black hole constraint has been evaluated for these cases . The constraint amounts more or less to an upper bound on $`n_{\mathrm{COBE}}`$, typically in the range $`1.1`$ to $`1.3`$ depending on the choices of $`N_{\mathrm{COBE}}`$ and other parameters. Such a bound significantly reduces the allowed region of parameter space, but still leaves a region where $`n`$ has a strong variation.

## V Conclusion

In the context of the $`\mathrm{\Lambda }`$CDM model, we have evaluated the observational constraint on the spectral index $`n(k)`$.
This constraint comes from a range of data, including the height of the first peak in the cmb anisotropy, which we take to be $`80\pm 10\mu `$K (nominal 1-$`\sigma `$). Reionization is assumed to occur when some fixed fraction $`f`$ of the matter collapses, and the most important results are insensitive to this fraction in the reasonable range $`10^{-4}\lesssim f\lesssim 1`$.

We first considered the case that $`n`$ has negligible scale dependence, comparing the observational bound with the predictions of various models of inflation. A significant improvement in the 2-$`\sigma `$ lower bound, which may well occur with the advent of slightly better measurements of the cmb anisotropy, will become a serious discriminator between models of inflation. Even the present bound has serious implications if, as is very possible, late-time gravitino creation or some other phenomenon requires an era of thermal inflation after the usual inflation.

We also considered the running mass models of inflation, where the spectral index can have significant scale-dependence. Because of this scale dependence, it is in this case crucial to fix not the epoch of reionization, but the fraction $`f`$ of matter that has collapsed at that epoch. We presented results for the choice $`f=1`$ (corresponding to $`z_\mathrm{R}\simeq 13`$ if the spectral index has negligible scale-dependence), and for a perhaps more reasonable choice $`f=10^{-2.2}`$. In the running-mass models, the scale-dependent spectral index $`n(k)`$ is given by $`(n-1)/2=s\mathrm{exp}(c\mathrm{\Delta }N)-c`$, where $`\mathrm{\Delta }N=\mathrm{ln}(k/k_{\mathrm{COBE}})`$. The parameters in this expression can be of either sign, leading to four different models of inflation. Barring fine-tuning, one expects $`s`$ to be in the range $`|c|e^{-cN_{\mathrm{COBE}}}\lesssim |s|\lesssim e^{-cN_{\mathrm{COBE}}}`$. The parameter $`c`$ depends on the nature of the soft supersymmetry breaking, but in the simplest case of gravity-mediation it becomes a dimensionless coupling strength, presumably of order $`10^{-1}`$ to $`10^{-2}`$ in magnitude. Without worrying about the origin of the parameters $`c`$ and $`s`$, we have investigated the observational constraints on them. In the case $`c,s>0`$ (referred to as Model (i)) we find that $`n`$ can have a significant variation on cosmological scales, with $`n-1`$ passing through zero, signaling a minimum of the spectrum of the primordial curvature perturbation. In a future paper, we shall exhibit the possible effect of this scale-dependence on the cmb anisotropy, at and above the first peak.

## Acknowledgments

We thank Pedro Ferreira, Andrew Liddle and Martin White for useful discussions.
# Local Equivalence of Sacksteder and Bourgain Hypersurfaces

Maks A. Akivis and Vladislav V. Goldberg

2000 Mathematics Subject Classification. Primary 53A20, Secondary 14M99.

Keywords and phrases. Gauss mapping, varieties with degenerate Gauss mappings, hypercubic, Sacksteder, Bourgain.

Abstract. Finding examples of tangentially degenerate submanifolds (submanifolds with degenerate Gauss mappings) in a Euclidean space $`R^4`$ that are noncylindrical and without singularities is an important problem of differential geometry. The first example of such a hypersurface was constructed by Sacksteder in 1960. In 1995 Wu published an example of a noncylindrical tangentially degenerate algebraic hypersurface in $`R^4`$ whose Gauss mapping is of rank 2 and which is also without singularities. This example was constructed (but not published) by Bourgain. In this paper, the authors analyze Bourgain’s example, prove that, as was the case for the Sacksteder hypersurface, the singular points of the Bourgain hypersurface are located in the hyperplane at infinity of the space $`R^4`$, and prove that these two hypersurfaces are locally equivalent.

1. It is important to find examples of tangentially degenerate submanifolds in order to understand the theory of such manifolds. These examples prove the existence of tangentially degenerate submanifolds and help to illustrate the theory. The first known example of a tangentially degenerate hypersurface of rank 2 without singularities in $`R^4`$ was constructed by Sacksteder \[S 60\]. This example was examined from the differential-geometric point of view by Akivis in \[A 87\]. In particular, Akivis proved that the Sacksteder hypersurface has no singularities because they “went to infinity”. In the same paper, Akivis presented a series of examples generalizing Sacksteder’s example in $`R^4`$, constructed a new series of examples of three-dimensional hypersurfaces $`V^3\subset P^n(\mathbb{C})`$ of rank 2 whose focal surfaces are imaginary, and proved the existence of hypersurfaces of this kind. Note that more examples of tangentially degenerate submanifolds without singularities can be found in \[I 98, 99a, 99b\]. These examples are essentially based on Cartan’s classical hypersurfaces (see \[Ca 39\]). Mori \[M 94\] claims that he constructed “a one-parameter family of complete nonruled deformable hypersurfaces in $`R^4`$ with rank $`r=2`$ almost everywhere”. However, it follows immediately from his formulas that the hypersurfaces of his family are not merely ruled hypersurfaces but actually cylinders. Also, much progress in the study of tangentially degenerate submanifolds over the complex numbers has been made in \[GH 79\], \[L 99\], and \[AGL 00\]. In these papers and in the papers \[FW 95\], \[W 95\], and \[WZ 99\] one can find more examples of tangentially degenerate submanifolds over the complex numbers.

Recently Wu \[W 95\] published an example of a noncylindrical tangentially degenerate algebraic hypersurface in a Euclidean space $`R^4`$ which has a degenerate Gauss mapping but does not have singularities. This example was constructed (but not published) by Bourgain (see also \[I 98, 99a, 99b\]). In the present paper, we investigate Bourgain’s example from the point of view of the paper \[A 87\] (see also Section 4.7 of our book \[AG 93\]). In particular, we prove that, as was the case for the Sacksteder hypersurface, the Bourgain hypersurface has no singularities because they “went to infinity”.
It was precisely this analysis that suggested the idea that Bourgain’s and Sacksteder’s examples must be equivalent. Moreover, this analysis showed that the hypersurface constructed in these examples is torsal, i.e., it is stratified into a one-parameter family of plane pencils of straight lines. In addition, at the end of our paper we prove that the examples of Bourgain and Sacksteder are locally equivalent.

2. In Cartesian coordinates $`x_1,x_2,x_3,x_4`$ of the Euclidean space $`R^4`$, the equation of the Bourgain hypersurface $`B`$ is $$x_1x_4^2+x_2(x_4-1)+x_3(x_4-2)=0$$ (1) (see \[W 95\] or \[I 98, 99a, 99b\]). Equation (1) can be written in the form $$x_1x_4^2+(x_2+x_3)x_4-(x_2+2x_3)=0.$$ (2) Make in (2) the following admissible change of Cartesian coordinates: $$x_2+x_3\to x_2,x_2+2x_3\to x_3.$$ Then equation (2) becomes $$x_1x_4^2+x_2x_4-x_3=0.$$ (3) Introduce homogeneous coordinates in $`R^4`$ by setting $`x_i=\frac{z_i}{z_0},i=1,2,3,4`$. Then equation (3) takes the form $$f=z_1z_4^2+z_0z_2z_4-z_0^2z_3=0.$$ (4) Equation (4) defines a cubic hypersurface $`F`$ in the space $`\overline{R}^4=R^4\cup P_{\mathrm{\infty }}^3`$, which is the enlarged space $`R^4`$, i.e., the space $`R^4`$ enlarged by the hyperplane at infinity $`P_{\mathrm{\infty }}^3`$ (whose equation is $`z_0=0`$). Denote by $`A_\alpha ,\alpha =0,1,2,3,4,`$ fixed basis points of the space $`\overline{R}^4`$. Suppose that these points have constant normalizations, i.e., that $`dA_\alpha =0`$. An arbitrary point $`z\in \overline{R}^4`$ can be written in the form $`z=\sum _\alpha z_\alpha A_\alpha `$. We will take a proper point of the space $`\overline{R}^4`$ as the point $`A_0`$, and take points at infinity as the points $`A_1,A_2,A_3,A_4`$. Equation (4) shows that the proper straight line $`A_0A_4`$ defined by the equations $`z_1=z_2=z_3=0`$ and the plane at infinity defined by the equations $`z_0=z_4=0`$ belong to the hypersurface $`F`$ defined by equation (4).

We write the equations of the hypersurface $`F`$ in a parametric form. To this end, we set $$z_0=1,z_4=p,z_1=u,z_3=pv.$$ Then it follows from (4) that $$z_2=v-pu.$$ This implies that an arbitrary point $`z\in F`$ can be written as $$z=A_0+uA_1+vA_2+p(A_4-uA_2+vA_3).$$ (5) The parameters $`p,u,v`$ are independent nonhomogeneous parameters on the hypersurface $`F`$.

3. Let us find the singular points of the hypersurface $`F`$. Such points are defined by the equations $`\frac{\partial f}{\partial z_\alpha }=0`$. It follows from (4) that $$\{\begin{array}{cc}\frac{\partial f}{\partial z_0}=z_2z_4-2z_0z_3,\hfill & \\ \frac{\partial f}{\partial z_1}=z_4^2,\frac{\partial f}{\partial z_2}=z_0z_4,\frac{\partial f}{\partial z_3}=-z_0^2,\hfill & \\ \frac{\partial f}{\partial z_4}=2z_1z_4+z_0z_2.\hfill & \end{array}$$ (6) All these derivatives vanish simultaneously if and only if $`z_0=z_4=0`$. Thus the 2-plane at infinity $`\sigma =A_1A_2A_3`$ is the locus of singular points of the hypersurface $`F`$.

Consider a point $`B_0=A_0+pA_4`$ on the straight line $`A_0A_4`$. By (4), to the point $`B_0`$ there corresponds the straight line $`a(p)`$ in the 2-plane at infinity $`\sigma `$, and the equation of this straight line is $$p^2z_1+pz_2-z_3=0.$$ (7) The family of straight lines $`a(p)`$ depends on the parameter $`p`$, and its envelope is the conic $`C`$ defined by the equation $$z_2^2+4z_1z_3=0.$$ (8) The straight line $`a(p)`$ is tangent to the conic $`C`$ at the point $$B_1(p)=A_1-2pA_2-p^2A_3.$$ (9) Equation (9) is a parametric equation of the conic $`C`$. The point $$\frac{dB_1}{dp}=-2(A_2+pA_3)$$ (10) belongs to the tangent line to the conic $`C`$ at the point $`B_1(p)`$.
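These claims are easy to check by machine. The following sketch (our own, using sympy) verifies that the gradient of $`f`$ vanishes exactly on $`z_0=z_4=0`$, and that the line $`a(p)`$ of Eq. (7) touches the conic (8) at the point $`B_1(p)`$ of Eq. (9).

```python
import sympy as sp

# Symbolic verification of Eqs. (6)-(9) for the cubic f of Eq. (4).
z0, z1, z2, z3, z4, p = sp.symbols('z0 z1 z2 z3 z4 p')
f = z1*z4**2 + z0*z2*z4 - z0**2*z3                        # Eq. (4)

grad = [sp.diff(f, z) for z in (z0, z1, z2, z3, z4)]       # Eq. (6)
# Every partial derivative vanishes on the 2-plane z0 = z4 = 0:
assert all(g.subs({z0: 0, z4: 0}) == 0 for g in grad)

# B1(p) = A1 - 2p A2 - p^2 A3, in the coordinates (z1, z2, z3) on sigma:
B1 = {z1: 1, z2: -2*p, z3: -p**2}
line  = p**2*z1 + p*z2 - z3                                # Eq. (7)
conic = z2**2 + 4*z1*z3                                    # Eq. (8)
assert sp.simplify(line.subs(B1)) == 0                     # B1 lies on a(p)
assert sp.simplify(conic.subs(B1)) == 0                    # B1 lies on C
assert sp.simplify(sp.diff(line, p).subs(B1)) == 0         # envelope condition
print("checks passed")
```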
Consider the 2-planes $`\tau `$ spanned by the points $`B_0`$, $`B_1`$, and $`{\displaystyle \frac{dB_1}{dp}}`$. Such 2-planes are completely determined by the location of the point $`B_0`$ on the straight line $`A_0A_4`$, and they form a one-parameter family. All these 2-planes belong to the hypersurface $`F`$. In fact, represent an arbitrary point $`z`$ of the 2-plane $`\tau `$ in the form $$\begin{array}{cc}z\hfill & =\alpha B_0+\beta B_1-\frac{1}{2}\gamma \frac{dB_1}{dp}\hfill \\ & =\alpha A_0+\beta A_1+(\gamma -2p\beta )A_2+(p\gamma -p^2\beta )A_3+p\alpha A_4.\hfill \end{array}$$ (11) The coordinates of the point $`z`$ are $$z_0=\alpha ,\quad z_1=\beta ,\quad z_2=\gamma -2p\beta ,\quad z_3=p(\gamma -p\beta ),\quad z_4=p\alpha .$$ (12) Substituting these values of the coordinates into equation (4), one can see that equation (4) is identically satisfied. Thus the hypersurface $`F`$ is foliated into a one-parameter family of 2-planes $`\tau (p)`$ spanned by $`B_0`$, $`B_1`$, and $`{\displaystyle \frac{dB_1}{dp}}`$. In a 2-plane $`\tau (p)`$ consider a pencil of straight lines with center $`B_1`$. The straight lines of this pencil are defined by the point $`B_1`$ and the point $`B_2=A_2+pA_3+q(A_0+pA_4)`$. The straight lines $`B_1B_2`$ depend on two parameters $`p`$ and $`q`$. These lines belong to the 2-plane $`\tau (p)`$, and along with this 2-plane they belong to the hypersurface $`F`$. Thus they form a foliation on the hypersurface $`F`$. We prove that this foliation is a Monge-Ampère foliation. In the space $`\overline{R}^4`$, we introduce the moving frame formed by the points $$\{\begin{array}{cc}B_0=A_0+pA_4,\hfill & \\ B_1=A_1-2pA_2-p^2A_3,\hfill & \\ B_2=A_2+pA_3+qA_0+pqA_4,\hfill & \\ B_3=A_3,\hfill & \\ B_4=A_4.\hfill & \end{array}$$ (13) It is easy to prove that these points are linearly independent, and the points $`A_\alpha `$ can be expressed in terms of the points $`B_\alpha `$ as follows $$\{\begin{array}{cc}A_0=B_0-pB_4,\hfill & \\ A_1=B_1+2pB_2-p^2B_3-2pqB_0,\hfill & \\ A_2=B_2-pB_3-qB_0,\hfill & \\ A_3=B_3,\hfill & \\ A_4=B_4.\hfill & \end{array}$$ (14) Consider a displacement of the straight lines $`B_1B_2`$ along the hypersurface $`F`$. Suppose that $`Z`$ is an arbitrary point of this straight line, $$Z=B_1+\lambda B_2.$$ (15) Differentiating (15) and taking into account (14) and $`dA_\alpha =0`$, we find that $$dZ\equiv (2q\,dp+\lambda \,dq)B_0+\lambda \,dp(B_3+qB_4)\quad (\mathrm{mod}\ B_1,B_2).$$ (16) It follows from relation (16) that 1. A tangent hyperplane to the hypersurface $`F`$ is spanned by the points $`B_1,B_2,B_0`$ and $`B_3+qB_4`$. This hyperplane is fixed when the point $`Z`$ moves along the straight line $`B_1B_2`$. Thus the hypersurface $`F`$ is tangentially degenerate of rank 2, and the straight lines $`B_1B_2`$ form a Monge-Ampère foliation on $`F`$. 2. The system of equations $$\{\begin{array}{cc}2q\,dp+\lambda \,dq\hfill & =0,\hfill \\ \lambda \,dp\hfill & =0\hfill \end{array}$$ (17) defines singular points on the straight line $`B_1B_2`$, and on the hypersurface $`F`$ it defines torses. The system of equations (17) has a nontrivial solution with respect to $`dp`$ and $`dq`$ if and only if its determinant vanishes: $`\lambda ^2=0`$. Hence by (15), a singular point on the straight line $`B_1B_2`$ coincides with the point $`B_1`$. For $`\lambda =0`$, system (17) implies that $`dp=0`$, i.e., $`p=\text{const}`$. Thus it follows from (9) that the point $`B_1\in C`$ is fixed, and as a result, the torse corresponding to this constant parameter $`p`$ is a pencil of straight lines with the center $`B_1`$ located in the 2-plane $`\tau (p)`$ spanned by $`B_0,B_1,B_2`$.
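The claim that substituting (12) into (4) yields an identity can likewise be checked symbolically; a minimal sketch under the same assumptions as the previous one:

```python
# Check that the 2-plane tau(p) parametrized by (12) lies on F (sympy assumed).
import sympy as sp

alpha, beta, gamma, p = sp.symbols('alpha beta gamma p')

# Coordinates (12) of an arbitrary point of tau(p).
z0, z1, z2, z3, z4 = alpha, beta, gamma - 2*p*beta, p*(gamma - p*beta), p*alpha

# Equation (4) is satisfied identically in alpha, beta, gamma, p.
print(sp.expand(z1*z4**2 + z0*z2*z4 - z0**2*z3))   # 0
```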
3. All singular points of the hypersurface $`F`$ belong to the conic $`C\subset P_{\mathrm{}}^3`$ defined by equation (8). Thus if we consider the hypersurface $`F`$ in a Euclidean space $`R^4`$, then on $`F`$ there are no singular points in a proper part of this space. 4. The hypersurface $`F`$ considered in the proper part of a Euclidean space is not a cylinder since its rectilinear generators do not belong to a bundle of parallel straight lines. A two-parameter family of rectilinear generators of $`F`$ decomposes into a one-parameter family of plane pencils of parallel lines. 4. None of the properties 1–4 characterizes Bourgain’s hypersurfaces completely: they are necessary but not sufficient conditions for these hypersurfaces. The following theorem gives a necessary and sufficient condition for a hypersurface to be of Bourgain’s type. ###### Theorem 1 Let $`l`$ be a proper straight line of a Euclidean space $`R^4`$ enlarged by the plane at infinity $`P_{\mathrm{}}^3`$, and let $`C`$ be a conic in a $`2`$-plane $`\sigma \subset P_{\mathrm{}}^3`$. Suppose that the straight line $`l`$ and the conic $`C`$ are in a projective correspondence. Let $`B_0(p)`$ and $`B_1(p)`$ be two corresponding points of $`l`$ and $`C`$, and let $`\tau `$ be the $`2`$-plane passing through the point $`B_0`$ and tangent to the conic $`C`$ at the point $`B_1`$. Then (a) when the point $`B_0`$ is moving along the straight line $`l`$, the plane $`\tau `$ describes a Bourgain hypersurface, and (b) any Bourgain hypersurface can be obtained by this construction. Proof. The necessity (b) of the theorem hypotheses follows from our previous considerations. We prove the sufficiency (a) of these hypotheses. Take a fixed frame $`\{A_u\},u=0,1,2,3,4,`$ in the space $`R^4`$ enlarged by the plane at infinity $`P_{\mathrm{}}^3`$ as follows: its point $`A_0`$ belongs to $`l`$, the point $`A_4`$ is the point at infinity of $`l`$, and the points $`A_1,A_2`$, and $`A_3`$ are located in the $`2`$-plane at infinity $`\sigma `$ in such a way that a parametric equation of the straight line $`l`$ is $`B_0=A_0+pA_4`$, and the equation of $`C`$ has the form (9). The plane $`\tau `$ is defined by the points $`B_0,B_1`$, and $`\frac{dB_1}{dp}`$. The parametric equations of this plane have the form (12). Eliminating the parameters $`\alpha ,\beta ,\gamma `$, and $`p`$ from these equations, we return to the cubic equation (4) defining the Bourgain hypersurface $`B`$ in homogeneous coordinates. The method of construction of the Bourgain hypersurface used in the proof of Theorem 1 goes back to the classical methods of projective geometry developed by Steiner \[St 32\] and Reye \[R 66\]. 5. In conclusion we prove the following theorem. ###### Theorem 2 The Sacksteder hypersurface $`S`$ and the Bourgain hypersurface $`B`$ are locally equivalent, and the former is the standard covering of the latter. Proof. In a Euclidean space $`R^4`$, in Cartesian coordinates $`x_1,x_2,x_3,x_4`$, the equation of the Sacksteder hypersurface $`S`$ (see \[S 60\]) has the form $$x_4=x_1\mathrm{cos}x_3+x_2\mathrm{sin}x_3.$$ (18) The right-hand side of this equation is a function on the manifold $`M^3=R^2\times S^1`$ since the variable $`x_3`$ is cyclic. Equation (18) defines a hypersurface on the manifold $`M^3\times R`$. The circle $`S^1=R/2\pi Z`$ has a natural projective structure of $`P^1`$. In homogeneous coordinates $`(u,v)`$, the mapping $`S^1\to P^1`$ sends $`x_3`$ to $`(u,v)`$.
By removing the point $`\{v=0\}`$ from $`S^1`$, we obtain a 1-to-1 correspondence $$S^1\setminus \{v=0\}\to R^1.$$ (19) Now we can consider the Sacksteder hypersurface $`S`$ in $`R^4`$ or, if we enlarge $`R^4`$ by the plane at infinity $`P_{\mathrm{}}^3`$, in the space $`P^4`$. Next we show how, by applying the mapping $`S^1\to P^1`$, we can transform equation (18) of the Sacksteder hypersurface $`S`$ into equation (4) of the Bourgain hypersurface $`B`$. We write this mapping in the form $$x_3=2\mathrm{arctan}\frac{u}{v},\quad \frac{u}{v}\in R,\quad |x_3|<\pi .$$ (20) It follows from (20) that $$\{\begin{array}{cc}\frac{u}{v}=\mathrm{tan}\frac{x_3}{2},\hfill & \\ \mathrm{cos}x_3=\frac{1-\mathrm{tan}^2{\displaystyle \frac{x_3}{2}}}{1+\mathrm{tan}^2{\displaystyle \frac{x_3}{2}}}=\frac{v^2-u^2}{v^2+u^2},\hfill & \\ \mathrm{sin}x_3=\frac{2\mathrm{tan}{\displaystyle \frac{x_3}{2}}}{1+\mathrm{tan}^2{\displaystyle \frac{x_3}{2}}}=\frac{2uv}{v^2+u^2}.\hfill & \end{array}$$ (21) Substituting these expressions into equation (18), we find that $$x_4(u^2+v^2)=x_1(v^2-u^2)+2x_2uv,$$ i.e., $$(x_4+x_1)u^2+(x_4-x_1)v^2-2x_2uv=0.$$ (22) Make the change of variables $$z_1=x_4-x_1,\quad z_2=-2x_2,\quad z_3=-(x_1+x_4),\quad z_0=u,\quad z_4=v.$$ As a result, we reduce equation (22) to equation (4). It follows that the Sacksteder hypersurface $`S`$ defined by equation (18) is locally equivalent to the Bourgain hypersurface defined by equation (4). Note also that if the cyclic parameter $`x_3`$ varies over the entire real axis $`R`$, then we obtain the standard covering of the Bourgain hypersurface $`B`$ by means of the Sacksteder hypersurface $`S`$. Authors’ addresses: | M. A. Akivis | V. V. Goldberg | | --- | --- | | Department of Mathematics | Department of Mathematical Sciences | | Jerusalem College of Technology—Mahon Lev | New Jersey Institute of Technology | | Havaad Haleumi St., P. O. B. 16031 | University Heights | | Jerusalem 91160, Israel | Newark, N.J. 07102, U.S.A. | | E-mail address: akivis@avoda.jct.ac.il | E-mail address: vlgold@m.njit.edu |
# Measurement of the vector analyzing power in elastic electron-proton scattering as a probe of double photon exchange amplitudes ## Abstract We report the first measurement of the vector analyzing power in inclusive transversely polarized elastic electron-proton scattering at $`Q^2`$ = 0.1 (GeV/c)<sup>2</sup> and large scattering angles. This quantity should vanish in the single virtual photon exchange, plane wave impulse approximation for this reaction, and can therefore provide information on double photon exchange amplitudes for electromagnetic interactions with hadronic systems. We find a non-zero value of $`A`$=-15.4$`\pm `$5.4 ppm. No calculations of this observable for nuclei other than spin 0 have been carried out in these kinematics, and the calculation using the spin-orbit interaction from a charged point nucleus of spin 0 cannot describe these data. (SAMPLE Collaboration) The recent development and refinement of experimental methods for measurements of small (few parts per million, or ppm) parity violating effects in polarized electron scattering provides a new technique for further studies of the electromagnetic structure of hadrons and nuclei. We have exploited these methods for the first time to measure the small vector analyzing power in elastic electron-proton scattering at large scattering angles ($`130^{\circ }\le \theta _{lab}^e\le 170^{\circ }`$), corresponding to $`Q^2=0.1`$ (GeV/c)<sup>2</sup>. This parity conserving quantity is associated with transverse electron polarization, in contrast to the parity violating longitudinal (i.e. helicity dependent) asymmetry. It has been previously noted that transverse polarization effects will be suppressed by the relativistic boost factor 1/$`\gamma `$. Nevertheless, as demonstrated in this Letter, the development of the technology to measure small parity violating asymmetries, along with the ability to produce transversely polarized electron beams at high energies, now renders these transverse polarization effects amenable to measurement. The vector analyzing power is a time-reversal-odd observable which must vanish in first order perturbation theory, and can only arise in leading order from the interference of double photon exchange (second order) and single photon exchange amplitudes. Our observation of this quantity therefore demonstrates the viability of a new technique to access physics associated with the absorption of two virtual photons by a hadronic system. Thus, the study of vector analyzing powers provides another method to study double photon exchange processes that is complementary to virtual Compton scattering (VCS). VCS involves the coupling of one virtual and one real photon to a hadronic system, but in practice includes problematic Bethe-Heitler amplitudes associated with radiation of a real photon from the electron. Nonetheless, there is presently a great deal of interest in the exploitation of VCS to further probe the electromagnetic structure of hadrons and nuclei, and the vector analyzing power described here potentially offers an attractive alternative to access double photon exchange amplitudes. Using the apparatus for the SAMPLE experiment, a high statistics measurement of the parity violating asymmetry in inclusive elastic $`p(\stackrel{}{e},e^{})`$ scattering at the MIT/Bates Linear Accelerator Center, we have made measurements of the asymmetry in the elastic scattering of 200 MeV transversely polarized electrons from the proton at backward scattering angles.
This represents the first measurement of a vector analyzing power in polarized electron scattering from the proton at this high a momentum transfer. The vector analyzing power in electron-nucleus scattering results in a spin-dependent asymmetry, which can, for example, be generated by the interaction of the electron spin with the magnetic field seen by the electron in its rest frame. This spin-dependence in the scattering cross section $`\sigma (\theta )`$ can be written as $$\sigma (\theta )=\sigma _0(\theta )[1+A(\theta )\,𝐏\cdot \widehat{𝐧}],$$ (1) where $`\sigma _0(\theta )`$ is the spin-averaged scattering cross section, $`A(\theta )`$ is the vector analyzing power for the reaction, and $`𝐏`$ is the incident electron polarization vector (which is proportional to the spin vector operator $`𝐒`$). The unit vector $`\widehat{𝐧}`$ is normal to the scattering plane, and is defined through $`\widehat{𝐧}\equiv (𝐤\times 𝐤^{})`$/$`|𝐤\times 𝐤^{}|`$, where $`𝐤`$ and $`𝐤^{}`$ are wave vectors for the incident and scattered electrons, respectively. The scattering angle $`\theta `$ is found through $`\mathrm{cos}\theta =(𝐤\cdot 𝐤^{})`$/$`|𝐤||𝐤^{}|`$, and, in the Madison convention, is positive for the electron scattering toward the same direction as the transverse component of $`𝐤^{}`$. The beam polarization $`𝐏`$ can be expressed in terms of the number of beam electrons with spins parallel ($`m_s`$=+1/2) and antiparallel ($`m_s`$=-1/2) to $`\widehat{𝐧}`$, so that the measured asymmetry at a given scattering angle, $`ϵ(\theta )`$, is defined through $$ϵ(\theta )=\frac{\sigma _{\uparrow }(\theta )-\sigma _{\downarrow }(\theta )}{\sigma _{\uparrow }(\theta )+\sigma _{\downarrow }(\theta )}=A(\theta )P,$$ (2) where $`\sigma _{\uparrow ,\downarrow }(\theta )`$ is the differential cross section for $`m_s`$ = +1/2 and -1/2, respectively. Thus, with knowledge of the magnitude of the incident beam polarization $`P`$, measurement of $`ϵ(\theta )`$ can yield a determination of the vector analyzing power $`A(\theta )`$, which contains the underlying physics of the electron-nucleus interaction. The formalism and conventions reviewed here have been well established, and used extensively for “Mott” polarimeters which measure electron beam polarizations at low incident beam energies ($`\sim `$ 100 keV), where it is valid to assume the nucleus is simply a point charge. The “Mott” asymmetry for transversely polarized electrons scattering from a point nucleus of charge $`Ze`$ and spin 0 is calculated as $$A_{Mott}=\frac{4Z\alpha \beta }{\gamma }\left(\frac{\mathrm{csc}\theta \,\mathrm{ln}(\mathrm{csc}\frac{\theta }{2})}{\mathrm{csc}^4\frac{\theta }{2}-\beta ^2\mathrm{csc}^2\frac{\theta }{2}}\right),$$ (3) where $`\alpha `$ is the electromagnetic fine structure constant, $`\beta `$ and $`\gamma `$ are the usual relativistic kinematic quantities, and $`\theta `$ is the laboratory scattering angle defined above. The analyzing powers calculated via Eq. (3) for low energies using very high $`Z`$ targets are much larger than the vector analyzing power for electron proton scattering reported here, and this is commonly exploited as a means to measure the polarization of low energy electron beams. Such measurements, however, are not sensitive to the internal structure or spin of the hadronic system.
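To get a feeling for the size of Eq. (3) in the present kinematics, one can evaluate it numerically. The sketch below is illustrative only (constants and kinematics as quoted in the text); it shows that for $`Z=1`$ and a 200 MeV beam the point-nucleus analyzing power at backward angles is of order tens of ppm, i.e. the same order of magnitude as the measured asymmetry, even though Fig. 2 shows it does not describe the data.

```python
# Numerical evaluation of the point-nucleus analyzing power, Eq. (3).
import math

ALPHA = 1.0 / 137.036   # fine structure constant
M_E = 0.511             # electron mass, MeV

def a_mott(theta_deg, e_mev=200.0, z=1):
    """Eq. (3) for a spin-0 point charge Z at beam energy e_mev."""
    gamma = e_mev / M_E
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    th = math.radians(theta_deg)
    csc_half = 1.0 / math.sin(th / 2.0)
    num = (1.0 / math.sin(th)) * math.log(csc_half)
    den = csc_half**4 - beta**2 * csc_half**2
    return 4.0 * z * ALPHA * beta / gamma * num / den

# At the average SAMPLE angle the 1/gamma suppression leaves an effect
# of order tens of ppm.
print(1e6 * a_mott(146.1))   # result in ppm
```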
More recently, some level of nuclear structure has been taken into account in calculations of analyzing powers for high energy elastic scattering of transversely polarized electrons from heavy spin 0 nuclei at forward electron scattering angles, performed in the eikonal expansion and using finite charge densities for the nuclei. In these calculations, the non-zero analyzing powers were generated through the distortion of the electron waves in the Coulomb potential of the nuclear targets, providing the needed extension beyond single photon exchange to the distorted wave impulse approximation. To date, however, no such calculation exists for the scattering of transversely polarized electrons from nuclear targets of any spin other than 0 at any energy. In the SAMPLE kinematics, the electron energy of 200 MeV is much larger than the energies used for Mott polarimetry, the proton target has the smallest possible $`Z`$ so that Coulomb effects are at a minimum, and the electrons are scattered at large angles where magnetic effects are important. These facts, along with the spin 1/2 nature of the proton, imply that our measurement of the vector analyzing power will be sensitive to non-trivial electromagnetic structure of the proton not taken into account in previous theoretical treatments. The data we report are the result of an experiment performed at the MIT/Bates Linear Accelerator Center with a 200 MeV polarized electron beam of average current 40 $`\mu `$A incident on a 40 cm liquid hydrogen target. The scattered electrons were detected in a large solid angle ($`\sim `$ 1.5 sr), axially symmetric air Čerenkov detector consisting of 10 mirrors, each shaped to focus the Čerenkov light onto one of ten shielded photomultiplier tubes. This combination of large solid angle and high luminosity allows measurements of small asymmetries in a relatively short period of time. The data presented here were acquired in just two days of running under these conditions. Properties of the detector signals and beam have been described in detail in Refs. and , along with the method of asymmetry extraction and correction. Thus, here we report only the differences between the experimental running conditions for longitudinally polarized beams as used for parity violation measurements, and the transversely polarized beam used for the vector analyzing power measurements. The systematic errors associated with the asymmetries from each of the individual mirrors are the same for these measurements as for those in Refs. and , totaling 0.7 ppm, and are negligible compared with the overall statistical error of 5.4 ppm obtained for these measurements. The polarized laser light used on the bulk GaAs source crystal produces electron beams with longitudinal polarization; consequently, significant spin manipulation was required to orient the beam polarization transversely. This was achieved with a Wien filter, which contains electric and magnetic fields oriented perpendicular to each other and to the beam direction, and a set of beam solenoids. The Wien filter was positioned immediately downstream of the source anode, and was used to precess the electron spin away from the beam direction ($`\sim `$ 90$`^{\circ }`$ for these measurements). The beam solenoids were positioned near the first accelerating cavity in the beam line, and precessed the resulting transverse components of the beam polarization.
The combination of these beam line elements allowed the polarization direction to be chosen arbitrarily, and each element was calibrated such that the polarization direction is determined to $`\pm `$ 2$`^{\circ }`$. For the measurements reported here, two orthogonal transverse beam polarizations were used during two running periods: one with the polarization directed to beam right (which we denote $`\mathrm{\Phi }`$=0$`^{\circ }`$), and one with the polarization pointing up ($`\mathrm{\Phi }`$=90$`^{\circ }`$). The magnitude of the beam polarization was measured with a Møller apparatus positioned on the beam line, and averaged 36.3$`\pm `$1.8% during these measurements. Finally, to minimize false asymmetries and test for systematic errors, the electron beam polarization was manually reversed relative to all electronic signals, for both $`\mathrm{\Phi }`$=0$`^{\circ }`$ and $`\mathrm{\Phi }`$=90$`^{\circ }`$ running, with the insertion of a $`\lambda `$/2 plate in the laser beam. Thus, four separate sets of measurements were made: $`\mathrm{\Phi }`$=0$`^{\circ }`$, $`\lambda `$/2 IN and OUT, and $`\mathrm{\Phi }`$=90$`^{\circ }`$, $`\lambda `$/2 IN and OUT. The elastic scattering transverse asymmetry was determined for each of the 10 individual mirrors in each running configuration after correction for all effects, including beam polarization, background dilution, and radiative effects, as described in Refs. and . Although the geometry of this detector allowed for combining the asymmetries from individual mirrors positioned on opposite sides of the incident beam (via Eq. (2) and imposing the rotational invariance criterion $`A(\theta )=-A(-\theta )`$), we chose an alternative form of analysis wherein the full statistical information contained in the data set could be used to extract the vector analyzing power. Because the individual mirrors were positioned at varying azimuthal angles $`\varphi `$ relative to the polarization direction, the asymmetries measured in the mirrors should follow a sinusoidal dependence in this angle. The sinusoidal dependence in the azimuthal angle $`\varphi `$ is seen by rewriting Eq. (2) as $$ϵ(\theta ,\varphi )=A(\theta )P\mathrm{sin}(\varphi +\delta ),$$ (4) where $`\varphi `$ measures the angle of the polarization vector in the plane transverse to the beam direction, and the phase $`\delta `$ takes into account the direction of $`𝐏`$ relative to $`\widehat{𝐧}`$. Table 1 summarizes the polar ($`\theta `$) and azimuthal ($`\varphi `$) angles at the center of each individual mirror within the SAMPLE detector. As seen in Table 1, mirrors 4 and 5 have the same azimuthal angle relative to the polarization direction, but different polar angles relative to the incident beam direction. A separate analysis, however, indicated that the polar angle dependence of the asymmetry was negligible, allowing us to combine the asymmetries from these two mirrors (similarly for mirrors 6 and 7) into one asymmetry at the same azimuthal angle $`\varphi `$. The data set for each $`\mathrm{\Phi }`$ and $`\lambda `$/2 running configuration therefore consists of eight data points at varying $`\varphi `$ values, to which we perform a $`\chi ^2`$ minimization to a two parameter function via $$\chi _{d.o.f.}^2=\frac{1}{6}\sum _{i=1}^{8}[A_i^{meas}-(a\mathrm{sin}\varphi _i+b\mathrm{cos}\varphi _i)]^2/[\delta A_i^{meas}]^2,$$ (5) which is linear in the coefficients $`a`$ and $`b`$. Here $`A_i^{meas}`$ is the measured asymmetry at each azimuthal angle $`\varphi _i`$, corrected for all effects including beam polarization normalization (as suggested in Eq. (4)).
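Since Eq. (5) is linear in $`a`$ and $`b`$, the minimization reduces to weighted linear least squares. A schematic sketch follows (the mirror angles, asymmetries and errors below are invented placeholders, not the SAMPLE data); it also performs the amplitude and phase conversion discussed next:

```python
# Weighted linear least-squares fit of Eq. (5) and conversion to an
# amplitude and phase as in Eq. (6). All input values are placeholders.
import numpy as np

phi = np.radians([0.0, 45.0, 90.0, 135.0, 180.0, 225.0, 270.0, 315.0])
a_meas = np.array([-2.0, -12.0, -16.0, -10.0, 1.0, 11.0, 15.0, 12.0])  # ppm
err = np.full(8, 5.0)                                                  # ppm

# Minimizing Eq. (5) is an ordinary weighted linear problem in (a, b).
design = np.column_stack([np.sin(phi), np.cos(phi)]) / err[:, None]
(a, b), *_ = np.linalg.lstsq(design, a_meas / err, rcond=None)

amplitude = np.hypot(a, b)              # A of Eq. (6)
delta = np.degrees(np.arctan2(b, a))    # phase, degrees
chi2_dof = np.sum(((a_meas - a*np.sin(phi) - b*np.cos(phi)) / err)**2) / 6.0
print(amplitude, delta, chi2_dof)
```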
The coefficients $`a`$ and $`b`$ can then be converted into an amplitude and phase, i.e., $$A_{fit}=A\mathrm{sin}(\varphi +\delta )$$ (6) as in Eq. (4), where the amplitude $`A`$ gives the magnitude of the vector analyzing power, and the phase $`\delta `$ verifies the direction of the beam polarization and determines the overall sign of the analyzing power. The sinusoidal dependence just discussed is illustrated in Fig. 1, where the combined data for $`\mathrm{\Phi }=0^{\circ }`$ and $`\mathrm{\Phi }=90^{\circ }`$ are shown as a function of azimuthal angle, along with the best fit to the data according to the procedure outlined above. Here we have defined $`\varphi =0`$ to be at beam left, and have taken into account the 90$`^{\circ }`$ phase difference between the $`\mathrm{\Phi }=0^{\circ }`$ and $`\mathrm{\Phi }=90^{\circ }`$ polarization directions. For these combined data, the overall $`\chi ^2`$ per degree of freedom for the best fit was found to be 0.9, providing a 50% confidence level that the data follow this dependence. This should be compared, however, with the $`\chi ^2`$ per degree of freedom of 2.1 for a fit to $`A=0`$, which has a corresponding confidence level of 4% that the data are consistent with $`A=0`$. Even if we allow an overall offset in a fit to a constant dependence, we find an average of $`A=3.5\pm 3.7`$ ppm, with a $`\chi ^2`$ per degree of freedom of 1.9, and a corresponding confidence level of 7%. In Table 2 we summarize our results using this analysis procedure for the four independent running conditions. Note that the deduced magnitudes are all consistent within experimental errors, and the deduced phase changes by 180$`^{\circ }`$ upon the insertion or removal of the $`\lambda `$/2 plate as expected, and by 90$`^{\circ }`$ from one $`\mathrm{\Phi }`$ running configuration to the other. Combining these four independent measurements, we quote our final result: a vector analyzing power for elastic electron-proton scattering of -15.4$`\pm `$5.4 ppm at the average electron laboratory scattering angle of 146.1$`^{\circ }`$, corresponding to $`Q^2`$ = 0.1 (GeV/c)<sup>2</sup>. To demonstrate the precision to which this quantity has been determined relative to the original derivation of Mott, we plot this data point in Fig. 2 along with the prediction of Eq. (3) for a point nucleus of charge $`Z=1`$ and spin 0 as a function of electron laboratory scattering angle, covering the angular range accepted by the SAMPLE detector. The data reported here represent the first measurement of a vector analyzing power in polarized electron scattering at this high a momentum transfer. Our observation of this quantity demonstrates the viability of a new technique to access physics associated with double photon exchange, which may address some of the same physics issues as virtual Compton scattering measurements. We have also made measurements of the vector analyzing power in inclusive quasielastic electron-deuteron scattering, the results of which will be reported in a future letter. Further parity violation measurements at higher $`Q^2`$ values are planned from both hydrogen and deuterium targets, where high statistics transverse asymmetry data will also be taken. Thus, we hope that the results reported in this Letter will motivate theoretical calculations of vector analyzing powers in polarized electron scattering for hadronic systems with $`S\ne 0`$ and non-trivial electromagnetic structure, which will be necessary to interpret such measurements.
The efforts of the staff at MIT/Bates to provide high quality beam required for these measurements, and useful conversations with T.W. Donnelly, are gratefully acknowledged. This work was supported by NSF grants PHY-9870278 (Louisiana Tech), PHY-9420470 (Caltech), PHY-9420787 (Illinois), PHY-9457906/PHY-9229690 (Maryland), PHY-9733773 (VPI) and DOE cooperative agreement DE-FC02-94ER40818 (MIT/Bates) and contract W-31-109-ENG-38 (ANL).
# Mining for Metals in the Ly𝛼 Forest ## 1. Introduction Our understanding of the Ly$`\alpha `$ forest and its connection with the Intergalactic Medium (IGM) has undergone radical revision in recent years. The paradigm in which the IGM consists of discrete, isolated clouds has been revolutionised by a generation of hydrodynamical simulations (e.g. Hernquist et al 1996). These simulations show that the ‘bottom-up’ hierarchy of structure formation knits a complex but smoothly fluctuating cosmic web, consisting of filaments, knots and extensive ‘voids’. Although originally thought to be chemically pristine, it is now well established that a large fraction of the strongest Ly$`\alpha `$ absorbers exhibit some metal enrichment, most notably C IV (e.g. Cowie et al 1995). Here, we address two specific questions. Firstly, are the C IV absorbers that have thus far been detected just the tip of the iceberg, and can more sensitive spectra mine ever weaker systems? Secondly, to what H I column densities does the enrichment extend? This latter point has particularly poignant implications for the origin and transport mechanism of these metals. In-situ formation in a nearby galaxy could be responsible for the enrichment of its local IGM, explaining the presence of C IV in the relatively high column density Ly$`\alpha `$ clouds. However, low column density Ly$`\alpha `$ clouds (log$`N`$(H I) $`<`$ 14.0), which are associated with physically less dense regions, are found further from the sites of star formation. The presence of C IV in these clouds could be indicative of widespread metal enrichment, possibly by an early epoch of Population III star formation. This is an overview of work described in more detail in Ellison et al (2000). ## 2. C IV in High Column Density Ly$`\alpha `$ clouds With a S/N ratio in excess of 200 redward of Ly$`\alpha `$, our Keck/HIRES spectrum of Q1422+231 is one of the most sensitive currently available to search for C IV associated with the Ly$`\alpha `$ forest. Although this target has been extensively studied in the past, our data reveal several C IV systems that had not been previously detected in spectra of lower S/N. We undertake the standard procedure of fitting Voigt profiles to determine column densities, $`b`$-values and redshifts of the 34 detected C IV systems, associated mainly with Ly$`\alpha `$ clouds with log $`N`$(H I) $`>`$ 14.5. Previous studies of C IV absorbers have established a power law column density distribution of the form $`f(N)dN=BN^{\alpha }dN`$ and determined $`\alpha \simeq 1.5`$, complete down to log $`N`$(C IV) $`\simeq `$ 12.75 for $`z>3`$ (Songaila 1997). Below this limit, there is an apparent departure from the power law which could be due either to incompleteness or to a real turnover in the number density of C IV systems. We perform a maximum likelihood fit to our data points and determine a power law index $`\alpha =1.44\pm 0.05`$, in good agreement with previous estimates (see Figure 1). The column density limits at which previous studies have exhibited a departure from the power law are sketched, and clearly our data establish that this was due to incompleteness. The C IV systems from this single high quality spectrum are sufficient to show that the power law continues down to at least log $`N`$(C IV) $`\simeq `$ 12.3, below which the data points start to turn over. Again, this could be due either to incompleteness caused, for example, by a bias against weak C IV lines with large $`b`$-values, or to a real turnover in the column density distribution.
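For a pure power law $`f(N)N^{\alpha }`$ above a completeness limit, the maximum likelihood estimate of $`\alpha `$ has a simple closed form. The sketch below illustrates the estimator on simulated column densities (not the Q1422+231 line list):

```python
# Maximum likelihood power-law index above a completeness limit (illustration).
import numpy as np

rng = np.random.default_rng(1)
n_min = 10**12.75          # completeness limit, cm^-2
alpha_true = 1.44

# Draw N from f(N) dN = B N^(-alpha) dN for N > n_min (inverse transform).
u = rng.random(500)
n_civ = n_min * (1.0 - u)**(-1.0 / (alpha_true - 1.0))

# Closed-form ML estimator for this distribution, with its 1-sigma error.
alpha_hat = 1.0 + len(n_civ) / np.sum(np.log(n_civ / n_min))
alpha_err = (alpha_hat - 1.0) / np.sqrt(len(n_civ))
print(f"alpha = {alpha_hat:.2f} +/- {alpha_err:.2f}")
```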
By simulating C IV lines for the two lowest column density intervals in Figure 1, with $`b`$-values drawn at random from the observed distribution, we can estimate the incompleteness correction factor by determining the frequency with which these lines are recovered. Once this has been taken into account, there is no shortfall compared with the power law, showing that it continues at least down to log $`N`$(C IV) = 11.7, a factor of ten more sensitive than previous analyses. ## 3. Probing the Low Column Density Ly$`\alpha `$ Forest Direct detection of the C IV systems associated with low column density Ly$`\alpha `$ clouds (log $`N`$(H I) $`<`$ 14.0) is observationally very challenging due to the extreme weakness of the absorption. In the past, efforts have been made to overcome this problem by stacking together the regions where C IV absorption is expected in order to produce a high S/N composite spectrum (e.g. Lu et al 1998). In Ellison et al (1999), we showed how a random redshift offset between the Ly$`\alpha `$ line and its associated C IV feature could ‘smear’ out the stacked feature and consequently underestimate the amount of metals present. Instead, we favour the optical depth method developed by Cowie & Songaila (1998), which we have found is more robust against redshift offsets (see Ellison et al 2000 for a detailed discussion of this point). Briefly, the optical depth method consists of stepping through the spectrum and measuring the optical depth ($`\tau `$) of each Ly$`\alpha `$ pixel and its corresponding C IV. The results of this analysis are shown by the solid line in Figure 2 and are consistent with a constant level of C IV/H I (as shown by the dashed line) for optical depths down from $`\tau `$(Ly$`\alpha `$) $`\simeq `$ 100 over two orders of magnitude, below which $`\tau `$(C IV) flattens off to an approximately constant value. In order to interpret these results, simulated spectra were produced using the Ly$`\alpha `$ forest taken directly from the data, adding C IV with a given enrichment recipe and then analysing the synthetic spectrum with the optical depth technique. A total of 3 simulated spectra were created, all of which include a random redshift offset of 17 km s<sup>-1</sup> between Ly$`\alpha `$ and C IV and assume $`b`$(C IV) = 1/2 $`b`$(Ly$`\alpha `$). Spectrum ‘A’ includes only the directly detected C IV with no additional enrichment, and the optical depth analysis reveals that it is clearly C IV deficient in comparison with the data at almost all optical depths. Clearly, there is more C IV in the data than is accounted for in the 34 directly identifiable systems. Adding C IV to the log $`N`$(H I) $`>`$ 14.5 Ly$`\alpha `$ clouds at the detection limit of the spectrum (log $`N`$(C IV) = 12.0) can reproduce the results obtained for the data for $`\tau `$(Ly$`\alpha `$) $`>`$ 3 but not at lower optical depths (spectrum ‘B’). The data are more consistent with spectrum ‘C’, in which a constant C IV/H I ratio of $`2.6`$ was also included in clouds with log $`N`$(H I) $`<`$ 14.5. We investigated the possible limitations of our analysis due to such effects as contamination by other metal lines, errors in the continuum fit and scatter in the C IV/H I ratio. We conclude that overall this is a robust technique and that the limiting factor is likely to be the accuracy of the continuum fit in the Ly$`\alpha `$ forest regions, which could mimic the flattening of $`\tau `$(C IV) that we observe in the data at low H I optical depths.
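A bare-bones version of the pixel pairing at the heart of the optical depth method is sketched below (Python; the flux and wavelength arrays are placeholders, and the real analysis additionally treats saturated pixels, higher-order Lyman lines and contamination, none of which is attempted here):

```python
# Schematic pixel optical-depth method: pair each Ly-alpha pixel with the
# C IV pixel at the same redshift and take medians in bins of tau(Ly-alpha).
import numpy as np

LYA, CIV = 1215.67, 1548.20          # rest wavelengths, Angstroms

def optical_depths(wave, flux, n_bins=8):
    """wave must be sorted; flux is the continuum-normalised spectrum."""
    tau = -np.log(np.clip(flux, 1e-4, None))   # tau = -ln(transmission)
    z = wave / LYA - 1.0                       # redshift of each Ly-a pixel
    tau_civ = np.interp(CIV * (1.0 + z), wave, tau, left=0.0, right=0.0)
    edges = np.logspace(-1, 2, n_bins + 1)     # tau(Ly-a) from 0.1 to 100
    med = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = tau_civ[(tau >= lo) & (tau < hi)]
        med.append(np.median(sel) if sel.size else np.nan)
    return edges, np.array(med)

# Usage, with placeholder arrays `wave` and `flux`:
# edges, med_tau_civ = optical_depths(wave, flux)
```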
Nevertheless, we find that even in the high optical depth H I pixels (which will not be seriously affected by small continuum errors) the identified C IV systems are insufficient to account for all the measured absorption, and that there are clearly more metals in the IGM than we can currently detect. ### Acknowledgments. SLE is very grateful to the LOC for their generous financial assistance towards attending this conference. ## References Cowie, L. L. & Songaila, A. 1998, Nature, 394, 44 Cowie, L. L., Songaila, A., Kim, T.-S. & Hu, E. M. 1995, AJ, 109, 1522 Ellison, S. L., Lewis, G. F., Pettini, M., Chaffee, F. H. & Irwin, M. J. 1999, ApJ, 520, 456 Ellison, S. L., Songaila, A., Schaye, J. & Pettini, M. 2000, AJ, submitted Hernquist, L., Katz, N., Weinberg, D. H. & Miralda-Escudé, J. 1996, ApJ, 457, L51 Lu, L., Sargent, W. L. W., Barlow, T. A. & Rauch, M. 1998, preprint, astro-ph/9802189 Petitjean, P. & Bergeron, J. 1994, A&A, 283, 759 Songaila, A. 1997, ApJ, 490, L1
# The effect of strain on the adsorption of CO on Pd(100) ## I Introduction A number of recent experiments and calculations have shown that chemisorption causes surface strain and that straining a surface modifies the properties of adsorbed molecules. Webb, Lagally and their students demonstrated that the structure of a silicon surface can be changed by bending it. They worked with a (100) face of a thin silicon slab. This surface has steps separated by two kinds of terraces: on one, the dimer rows are parallel to the steps; on the other, they are perpendicular to the steps. These terraces give different low-energy electron diffraction (LEED) patterns. When the slab is bent, the relative LEED intensities change, because the size of these terraces is modified by strain. Ibach and his coworkers have demonstrated a “reciprocal” phenomenon: chemisorption on the surface of a very thin slab will cause it to bend. If adsorption causes strain, then it should be possible to affect adsorption by straining the surface. Menzel’s group implanted noble gas ions under a metal surface, causing local strain. They then showed that the strained surface has different adsorption properties. Other groups have strained a surface by growing a very thin metal film on a substrate of a different metal. If the two lattices are mismatched, but the growth is in registry, the atoms in the film are either stretched or compressed. This affects the chemisorption properties of the film. In this kind of experiment the chemisorption properties are modified by the strain and also by electronic effects caused by binding to a substrate made of a different metal. The two effects cannot be separated experimentally. Density functional calculations have been used to explore the effect of surface strain on the properties of the adsorbates. Ratsch, Seitsonen and Scheffler have shown that the activation energy for diffusion of a Ag atom on a Ag surface is affected by strain. Mattsson and Metiu have used this effect to show that periodic strain on a surface can order nanostructures nucleated on it and increase their size uniformity. Finally, Mavrikakis, Hammer and Nørskov calculated the change in the binding energy of O and CO and in the dissociation energy of CO when the Ru(0001) surface on which they are adsorbed is under strain. Ibach has published a thorough review of the effects of strain in epitaxy and surface reconstruction. Nørskov discusses the effect of strain in an excellent review of density functional studies of chemisorption systems relevant to catalysis. It is not surprising that straining a surface affects the chemistry taking place on it. Strain changes the distance between the surface atoms, and this must change the properties of the adsorbates. The only question is whether this change can be, at least in some cases, sufficiently large to matter. If this is so, one can envision the development of “elastochemistry” as a new subfield of physical chemistry. In this paper we study how the binding energy, the vibrational frequencies and the adsorption isotherm of CO on Pd(100) change with surface strain and coverage. We use generalized gradient density functional theory for the energy calculations, and Monte Carlo simulations and an analytical model for calculating the adsorption isotherm. The change in these quantities, caused by straining the surface, is sufficiently large to be measurable. In the case of the adsorption isotherm the coverage at a given gas pressure changes by almost two orders of magnitude.
This happens because relatively small changes in adsorption energy have large effects on the isotherm. To model the adsorption isotherm we need to know how the properties of the adsorbed molecules change with coverage. Ideally, we should calculate how these properties change when we change the clustering of CO on the surface. One would then need the energy of all possible clusters. This is beyond the current capability of the density functional method, which uses a periodic system with a unit cell of limited size. For this reason we have adopted the following strategy. We calculate the properties of adsorbed CO for 1/8, 1/2 and 1 monolayer. The 1/8-monolayer calculation is used to determine the adsorption energy of a molecule that does not interact with its neighbors. The half-monolayer results provide the interaction energy between the next-nearest-neighbors. The full monolayer one is used to extract the interaction energy between nearest-neighbors. A Hamiltonian based on these energies is then used in a grand-canonical Monte Carlo simulation to determine the adsorption isotherm. We have also tested the quasi-chemical approximation and found it to be inaccurate. ## II Methodology The density functional calculations reported here were performed with the Vienna program VASP. This uses periodic boundary conditions, a plane-wave basis set and fully nonlocal Vanderbilt-type ultrasoft pseudopotentials. The generalized gradient exchange-correlation energy is that of Wang and Perdew. The metal is represented by a four-layer slab bounded by an empty region whose thickness equals that of six Pd layers. When the coverage $`\rho `$ is high ($`\rho =1`$ and 1/2), we use the small supercell shown by the dashed lines in Fig. 1; the thick lines show the supercell for the lowest coverage ($`\rho =1/8`$). The CO molecules are placed on the top layer of the slab. The positions of the atoms in the bottom layer were fixed to coincide with those of bulk Pd, calculated with the same program. The distance between the bottom layer and the layer next to it is also fixed to the bulk value. The calculated equilibrium lattice constant of bulk Pd is $`a=3.961`$ Å. The atoms in the other layers are allowed to relax to the positions that give a minimum total energy. Brillouin-zone integrations have been performed on a grid of $`9\times 9\times 1`$ $`𝐤`$ points for the high-coverage case and on a grid of $`4\times 4\times 1`$ $`𝐤`$ points for the low-coverage case. The Methfessel-Paxton smearing is $`\sigma =0.3`$ eV. For high coverage we have tested convergence by performing calculations with a thicker slab and a denser $`𝐤`$ mesh. For low coverage, a $`4\times 4\times 1`$ $`𝐤`$-point mesh is the densest we can afford on a workstation with 1 GB of memory. The cut-off energy is 495 eV, which is a high value. ## III The properties of adsorbed CO, as a function of coverage and strain ### A Bulk Pd To determine the lattice constant of bulk Pd we used the supercells described in Fig. 1 and a slab having ten Pd layers. It is not necessary to use such a thick slab, but we had the results from calculations for another project. If the sample is unstrained, we allow the atoms to relax in all directions and obtain a lattice constant of 3.961 Å. This corresponds to a distance of 1.9805 Å between the (100) layers. We have also calculated the bulk structure of two samples, subject to a uniform strain of $`\pm 2\%`$ in a plane (the xy plane) perpendicular to the (100) direction.
In these calculations the $`x`$ and $`y`$ coordinates of the atoms are fixed, but the space between the layers is allowed to relax. In the sample with $`+2\%`$ strain the distance (in the z direction) between the layers is 1.9250 Å; if the strain is $`-2\%`$, this distance is 2.0457 Å. Increasing the distance between atoms in the $`xy`$ plane decreases the distance between the layers in the $`z`$ direction. ### B Strain and coverage dependence of adsorption energy All calculations involving adsorbed molecules are performed on a slab having $`4`$ Pd layers, surrounded by a vacuum whose dimension in the z-direction is equal to that occupied by $`6`$ Pd layers. The distance between the lowest two layers is fixed at the bulk value mentioned above (this distance depends on the state of strain of the surface). The top two layers are allowed to relax. We calculate the energy $`E_s`$ of the slabs constrained in this way for 0 and $`\pm 2`$% strain. The properties of the chemisorbed CO molecules are calculated by placing 1 ML, 1/2 ML and 1/8 ML CO on top of the Pd slab. Since these calculations are extremely time consuming, we have investigated only the bridge binding sites, which have the lowest binding energy when the surface is not under strain. It is possible that other sites are occupied with a finite probability, when the system is in thermal equilibrium. Also, the CO molecules may prefer a different binding site, when the surface is under strain. These possibilities have not been considered in our calculations. We also ignore the fact that at high coverage the CO molecules might change their binding site. The energy of the CO covered slab is denoted by $`E_{tot}`$, and depends on CO coverage and on surface strain. The adsorption energy $`\overline{\epsilon }_a`$ is then given by $`\overline{\epsilon }_a=(E_{tot}-E_s-nE_{CO})/n.`$ Here $`E_{CO}=-14.73`$ eV is the total energy of a free CO molecule, $`n`$ is the number of CO molecules in a supercell and $`E_s`$ is the energy of the Pd slab, without CO on it. For the supercells used in our calculations, $`n=2`$ for 1 ML of CO and $`n=1`$ for 1/2 ML or 1/8 ML (see Fig. 1). The adsorption energies calculated with this equation, for different CO coverages and strains, are given in Table I. For 1/2 ML and no strain, the adsorption energy of $`-1.918`$ eV is consistent with that reported in Ref. , which is $`-1.92`$ eV. The adsorption energy $`\overline{\epsilon }_a`$ defined above includes the lateral interactions between the molecules. Such a definition is often used when the dependence of the desorption rate on coverage is modelled. As we shall see later, this definition does not work well; we prefer the one given when we define Model II (see Section D). ### C The dependence of the vibrational frequencies of CO on coverage and strain The dependence of the vibrational frequencies of chemisorbed CO on strain is of interest for two reasons. They are measurable, and therefore the predictions made here can be tested by experiment. Furthermore, the change of these frequencies with strain will affect the chemical potential of the chemisorbed molecules and therefore will influence the adsorption isotherm. A thorough study of these effects would have to perform a complete phonon analysis of the system. Such calculations are possible, in principle, but they require too much computer power, especially for the case of low coverage, to be attempted here.
Since the adsorption isotherm is dominated by the chemisorption energy and by the lateral interactions, small errors in evaluating the vibrational energies are not likely to affect our conclusions. Because of these considerations, we calculate the vibrational frequencies by giving small displacements to the O and C atoms, away from their equilibrium positions, while keeping the metal atoms fixed. The displacements along the $`x`$ and $`y`$ directions are $`\pm 0.05`$ Å and $`\pm 0.1`$ Å, and those along the $`z`$ direction are $`\pm 0.01`$ Å and $`\pm 0.02`$ Å. We calculate the forces exerted on the C and O atoms, for all combinations of shifted coordinates, and fit them to a form linear in the displacements. This allows us to calculate the force constants and then the vibrational frequencies. The results are given in Table II, for different coverages and strains. In Fig. $`4`$ we show the amplitudes of four vibrations in the $`xy`$ plane, as seen by looking down towards the surface. The highest frequency, $`\omega _6`$, is the CO stretch. In the absence of strain, at a coverage of 1/2 ML, our result (1878 cm<sup>-1</sup>) is close to that calculated by Eichler and Hafner (1887 cm<sup>-1</sup>). They also report a frequency of 417 cm<sup>-1</sup> for the carbon-metal stretch; our calculations give 341 cm<sup>-1</sup>. This discrepancy is not sufficiently large to affect the chemisorption isotherm. We suspect that this is due to the fact that we use more k points in our calculation. The frequency $`\omega _6`$ is easiest to measure. Our calculations predict that either a compressive or a tensile strain that changes the lattice constant by two percent will increase $`\omega _6`$ by an amount measurable by infrared spectroscopy. The largest change, of 29 cm<sup>-1</sup>, is at the lowest coverage (1/8 ML). ### D The interaction between the adsorbed CO molecules To calculate the adsorption isotherm we need to know the energy of the interaction between molecules. A very precise treatment of this problem would have to determine the energy of dimers, trimers, and so on. This would require an enormous amount of computer power and would be an overkill at this stage in our research. For this reason we have studied two approximate models. In Model I we assume that the effect of the interaction between the adsorbates is to make the adsorption energy depend on coverage. This is obviously a mean field model; in reality the desorption energy is different for different local configurations. For each strain, the adsorption energy at the three coverages used here is given in Table I. For other coverages we interpolate between these values. In Model II we partition the total energy into an adsorption energy and the energy of interaction between the adsorbed molecules. First we assume that at a coverage of 1/8 ML there are no interactions between the CO molecules. Therefore, $`\epsilon _a={\displaystyle \frac{E_t(1/8)-E_s-n(1/8)E_{CO}}{n(1/8)}}`$ gives the adsorption energy of a CO molecule. Here $`E_t(1/8)`$ is the total energy of the slab with 1/8 ML CO on it, $`E_s`$ is the total energy of the slab without CO, $`E_{CO}`$ is the energy of the gaseous CO and $`n(1/8)`$ is the number of CO molecules in the supercell when the CO coverage is 1/8 ML. Next, we assume that the adsorption energy of one molecule is independent of coverage. This means that we attribute the change in adsorption energy at higher coverages to the interaction between molecules.
When the coverage is 1/2 ML we assume that the total energy is the slab energy, plus the adsorption energy of the molecules in the supercell, plus the interaction between all the next-nearest-neighbors (nnn) in the supercell. Thus, we can calculate the interaction $`\epsilon _{nnn}`$ between the nnn-pairs from the equation $`\epsilon _{nnn}={\displaystyle \frac{E_t(1/2)-E_s-n(1/2)E_{CO}-n_a(1/2)\epsilon _a}{n_{nnn}(1/2)}}.`$ Here $`n_a`$ is the number of adsorbed CO molecules in the unit cell. The total energy of a CO monolayer can then be used to calculate the interaction energy between the nearest neighbors (nn). We assume that the total energy of the CO-covered surface is the energy of the slab, plus the adsorption energy of the CO molecules, plus the interaction energy of all the nn and nnn pairs in the supercell. As a consequence, the interaction energy $`\epsilon _{nn}`$ between the nearest neighbors is given by $`\epsilon _{nn}={\displaystyle \frac{E_t-E_s-n_a(1)E_{CO}-n_{nnn}(1)\epsilon _{nnn}-n_a(1)\epsilon _a}{n_{nn}(1)}}.`$ The values of $`\epsilon _{nn}`$ and $`\epsilon _{nnn}`$ calculated from these equations, for the strained and the unstrained surfaces, are given in Table III. We now have all the elements needed for writing an effective Hamiltonian to be used in the statistical mechanical theory of the adsorption isotherm: the vibrational frequencies of the molecule, the adsorption energy, the energy of interaction between nearest neighbors and that between next-nearest neighbors. ## IV The calculation of the adsorption isotherm We are now ready to calculate the coverage of the adsorbed layer in equilibrium with a gas of CO molecules. This calculation is performed in two ways. The first method uses the equilibrium condition: the chemical potential of a CO molecule in the gas is equal to the chemical potential of an adsorbed CO molecule. To calculate the latter we use the quasichemical approximation. The second method uses Grand Canonical Monte Carlo simulations, with the chemical potential of the gas calculated from the equations provided by Statistical Mechanics for ideal gases. ### A The quasichemical approximation The quasichemical approximation gives for the chemical potential of CO on Pd $$\mu _a=\epsilon _a+2\epsilon _{nn}+\sum _{i=1}^{6}\left[\frac{\hbar \omega _i}{2}+k_BT\mathrm{ln}(1-e^{-\hbar \omega _i/k_BT})\right]+k_BT\mathrm{ln}\left[\left(\frac{1-\rho }{\rho }\right)^3\left(\frac{\rho -a}{1-\rho -a}\right)^2\right],$$ (2) where $`\rho `$ is the coverage of CO (number of molecules divided by the number of lattice sites). The symbol $`a`$ stands for $`a=2\rho (1-\rho )/(\beta +1)`$ with $`\beta =\sqrt{1-4\rho (1-\rho )(1-e^{-\epsilon _{nn}/k_BT})}.`$ Since the pressure is low, we can use the ideal gas expression for the chemical potential of the gas: $$\mu _g=k_BT\mathrm{ln}[Pf(T)]+\frac{\hbar \omega _0}{2}+k_BT\mathrm{ln}(1-e^{-\hbar \omega _0/k_BT}),$$ (3) with $$f(T)=\frac{1}{k_BT}\left(\frac{2\pi \hbar ^2}{mk_BT}\right)^{3/2}\frac{\hbar ^2}{2Ik_BT}.$$ (4)
The zero point vibrational energy of the molecule in the gas and on the surface is included in the chemical potential. By making the chemical potential for the adsorbed CO, \[Eq. (2)\], equal to the chemical potential of the gas \[Eq. (3)\], we obtain a relationship between surface coverage $`\rho `$ and gas pressure $`P`$ (the adsorption isotherm). Since we know the dependence of the parameters on strain, we can calculate the adsorption isotherm for a strain of $`\pm 2`$%, at $`300`$ K. We first consider Model I, which assumes that $`\epsilon _{nn}=\epsilon _{nnn}=0`$ and includes the effect of the interactions between the molecules in the adsorption energy $`\overline{\epsilon }_a`$. The magnitude of $`\overline{\epsilon }_a`$ is given in Table I, for different coverages and strains. The values at other coverages are obtained by interpolation. For the vibrational frequencies we use the mean of the values given in Table II. The adsorption isotherm for Model I is calculated by setting $`\mu _g`$ given by Eq. (3) equal to $`\mu _a`$ given by Eq. (2), with $`ϵ_{nn}=0`$ and $`ϵ_a`$ replaced with $`\overline{ϵ}_a`$. This gives us an equation connecting the gas pressure to the coverage (the adsorption isotherm). The dependence of coverage on the gas pressure obtained with this method is plotted in Fig. 3(a), for 0 and $`\pm 2`$ % strains, at $`T=300`$ K. In the next calculation we use Model II, which assumes that the adsorption energy $`ϵ_a`$ is independent of coverage, that the nearest-neighbors interact with the energy $`ϵ_{nn}`$, and that $`ϵ_{nnn}=0`$. The isotherm is obtained by making $`\mu _a=\mu _g`$. The resulting adsorption isotherm is plotted in Fig. 3(b). It is clear that in both models the strain has a very large effect on chemisorption. For example, the Fig. 3(b) shows that at $`10^6`$ Torr the CO coverage on the unstrained surface is $`0.64`$ ML. Stretching the metal to increase the lattice constant by $`2`$% changes the coverage to $`0.92`$ ML. Another way to look at this is to note that a coverage of 0.7 is achieved at roughly $`1.2\times 10^8`$ Torr if the lattice constant is increased by $`2`$%, at $`1.2\times 10^6`$ Torr if the surface is not strained, and at $`10^3`$ Torr if the lattice constant is decreased by $`2`$%. These effects are large enough to be easily detected experimentally, if one could find a convenient way to strain the surface in ultra-high vacuum. ### B Monte Carlo simulation We use grand-canonical Monte Carlo (MC) calculations to determine the coverage of CO on the Pd(100) surface, at given gas pressure and temperature. The methodology is presented well in an excellent book by Frenkel and Smit so there is no need to discuss it here. The Hamiltonian consists of the adsorption energy $`\epsilon _a`$ multiplied by the number of adsorbed molecules, plus the pairwise interaction energy $`\epsilon _{nn}`$ multiplied by the number distinct nearest-neigbor pairs, plus the energy $`\epsilon _{nnn}`$ multiplied by the number of distinct next-nearest-neighbor pairs, plus the adsorption energy, plus the vibrational energies of the adsorbate. The chemical potential of the gas is calculated from Eq. (3). We use $`1000\times 1000`$ lattice sites. The results of the simulations are plotted in Fig. 4 for $`T=300`$ K. As in the calculations performed in the previous section, we find that strain substantially affects the surface coverage. 
For example, the simulations give, at a pressure of $`10^{-4}`$ Torr and a temperature of 300 K, a coverage of $`0.70`$ for $`+2\%`$ strain, $`0.66`$ in the absence of strain, and $`0.57`$ for $`-2\%`$ strain. The effect is much stronger if the quasi-chemical approximation is used (see Fig. 3(b)), but we attribute this to inaccuracies in Model II. The discrepancy between this model and the Monte Carlo simulation is so great that we have to discount Model II as insufficiently accurate. We suspect that this is due to the neglect of the repulsive, next-nearest-neighbor interactions, which makes the coverage given by Model II larger than in reality (here the Monte Carlo simulation is the “reality”). Model I is so far off from both the quasichemical approximation and the Monte Carlo simulation that it should be completely discarded. This is a pity since, had it been correct, this model would have allowed a very simple interpretation of the experiments. ## V Summary We have studied how uniform strain in the plane parallel to the (100) surface of palladium affects the properties of chemisorbed CO. The effect on the vibrational frequencies is small: it is comparable to or less than that observed for the molecule located in different binding sites. The shift in the stretch frequency of CO is sufficiently large to be detected by infrared spectroscopy. The effect on the binding energy at low coverage is larger. We found the weakest binding when the surface is compressed (i.e. for $`-2`$% strain). Releasing the compressive strain raises the binding energy by 0.025 eV. Going to a tensile strain of $`2\%`$ increases the binding energy by another 0.027 eV. From the equilibrium condition ($`\mu _g`$ given by Eq. (3) is equal to $`\mu _a`$ given by Eq. (2)) we see that the logarithm of the equilibrium pressure is proportional to the adsorption energy. This means that small changes in the adsorption energy lead to large changes in the equilibrium pressure. While the quasi-chemical approximation is not quantitatively accurate, this particular prediction is reliable. It is indeed confirmed by the Monte Carlo simulations. It is interesting to speculate on the broader consequences of these findings. Ibach’s experiments showed that chemisorption induces surface strain, which results in the bending of the thin slab on which the adsorption was performed. This means that in all equilibrium studies we ought to take strain as an additional thermodynamic variable and stress as its conjugate variable. It follows that all equilibria will depend on the strain state of the surface. Moreover, the outcome of transformations performed at constant strain will be different from that of transformations taking place at constant stress, just as transformations at constant volume differ from those at constant pressure. This will affect the adsorption isotherm and chemical equilibrium at the surface.
Based on our calculations one would expect desorption to occur from the regions compressed by the sound wave. Moreover, chemical equilibrium on the surface is likely to be affected. Indeed, Krishner and Lichtman have shown that passing sound through a surface causes desorption of the molecules adsorbed on it. Inoue et al. have demonstrated that sound affects the yield of catalytic reactions. Prompted by these experiments, King et al. have performed careful ultra-high vacuum experiments in which the interplay between sound and chemisorbed molecules can be studied in a clean and controlled environment. They found that exposing a Pt surface to sound can change the rate of CO oxidation. By using photo-electron microscopy, Kelling, Cerasari, Rotermundt, Ertl and King have shown that sound affects the reaction mostly by modifying CO adsorption at a given CO pressure. In most “sonochemistry”, sound acts by heating and compressing the system. This is not the case in the surface science experiments mentioned above, which have ruled out heating effects. The effect of sound on chemical reactions is very hard to explain by conventional arguments: the sound frequency is very low and excites long-wavelength phonons of very low energy. It is unlikely that these can affect the rate of chemical processes. For this reason we speculate that perhaps it is the strain induced by the sound wave that plays a role in the process. The fact that the presence of sound does not affect the activation energy of the oxidation reaction but influences the CO coverage supports this view. We expect that strain effects also play a role in supported catalysts, especially those using small clusters. When such a cluster is deposited on a support it suffers two modifications: a charge rearrangement and a distortion of its geometry (strain). It is therefore possible that a part of the change in catalytic activity caused by depositing clusters on an inert support comes from the strain. ###### Acknowledgements. We thank Nick Blake, Ross Larsen and Professor Jurgen Hafner for help with the computations. We are grateful to Jens Norskov for many useful discussions in the early stages of this work. This work has been supported by AFOSR F49620-98-1-0366, by NSF (through CDA96-01954), and by Silicon Graphics Inc.
# Mirror Neutrinos and the Early Universe ## 1 Lorentz Group: Full and Exact The Exact Parity Model (EPM) sees the ordinary particle sector reflected, darkly, in a mirror sector.<sup>1,2,3</sup> It is a phenomenologically acceptable extension of the Standard Model (SM) of particle physics which displays invariance under Improper Lorentz Transformations (Parity and Time-Reversal invariance). Remarkably, the invariance or otherwise of microphysical laws and the physical vacuum under the full Lorentz Group is still an open question, despite the $`V-A`$ character of weak interactions. Almost as a byproduct, the EPM furnishes a unified solution to the solar and atmospheric neutrino problems; it can also easily accommodate the LSND result.<sup>3</sup> In this talk I will review the resolution of the neutrino anomalies within the EPM. I will also discuss how the oscillation driven relic neutrino asymmetry amplification mechanism<sup>4</sup> ensures consistency with Big Bang Nucleosynthesis (BBN).<sup>5</sup> Consider your favourite parity-violating Lagrangian $`\mathcal{L}(\psi )`$ which is invariant under gauge group $`G`$. Here I take $`\mathcal{L}`$ to be the Lagrangian of the minimal SM augmented by right-handed neutrinos and nonzero neutrino masses. I also go to the region of parameter space where the see-saw mechanism operates, so the ordinary left-handed neutrinos are naturally very light Majorana particles. For every ordinary field $`\psi `$, introduce a mirror or parity partner $`\psi ^{\prime }`$. All ordinary fields are parity-doubled, including gauge and Higgs bosons. The new Lagrangian $$\mathcal{L}(\psi ,\psi ^{\prime })=\mathcal{L}(\psi )+\mathcal{L}(\psi ^{\prime })$$ (1) is a parity-invariant extension of $`\mathcal{L}`$ with gauge group $`G\otimes G`$. The ordinary and mirror sectors couple by gravitation only at this stage. It is clear, therefore, that the parity-invariant theory has the same particle phenomenology as the original. It is also immediately obvious that the mirror sector is astrophysically and cosmologically dark. At least part of the “missing mass” in the universe may be in the form of mirror gas, mirror stars and the like. It is important to realise that the physical equivalence of the ordinary and mirror sectors at the microscopic level does not inevitably imply equivalent macrophysics. Some of the dark matter can be mirror matter without requiring the universe to at all stages consist of an equal mixture of ordinary and mirror particles. For specific mechanisms see Ref.<sup>6</sup>. In general, the ordinary and mirror sectors will also interact non-gravitationally. The interaction Lagrangian $`\mathcal{L}_{\mathrm{int}}(\psi ,\psi ^{\prime })`$ is the sum of all renormalisable, gauge and parity-invariant terms which couple the $`\psi `$ to the $`\psi ^{\prime }`$. In the EPM based on the SM, these terms are proportional to<sup>3</sup> $$F^{\mu \nu }F_{\mu \nu }^{\prime },\quad \varphi ^{\dagger }\varphi \,\varphi ^{\prime \dagger }\varphi ^{\prime },\quad \overline{\ell }_L\varphi (\nu _L^{\prime })^c+\overline{\ell }_R^{\prime }\varphi ^{\prime }(\nu _R)^c,\quad \overline{\nu }_R\nu _L^{\prime }+\overline{\nu }_L^{\prime }\nu _R,$$ (2) where $`F^{\mu \nu }`$ is the field strength tensor of the weak hypercharge gauge field, $`\varphi `$ is the Higgs doublet, $`\ell _L`$ is a left-handed lepton doublet while $`\nu _R`$ is a right-handed neutrino. The primed fields are mirror partners. Thus $`\nu _L^{\prime }`$ is the mirror partner of $`\nu _R`$, and they are both gauge singlets. The leptonic terms have suppressed family indices. Each term in Eq. (2) is multiplied by an a priori arbitrary parameter.
The full Lagrangian of the EPM is $$\mathcal{L}(\psi ,\psi ^{\prime })=\mathcal{L}(\psi )+\mathcal{L}(\psi ^{\prime })+\mathcal{L}_{\mathrm{int}}(\psi ,\psi ^{\prime }).$$ (3) The parameters controlling the strength of the first two terms in Eq. (2), which induce photon–mirror-photon and Higgs–mirror-Higgs mixing respectively, are constrained to be small by BBN. The construction above ensures invariance under the non-standard parity transformation $`\psi \stackrel{P^{\prime }}{\leftrightarrow }\psi ^{\prime }`$. By decomposing the standard $`CPT`$ operator via $`CPT\equiv P^{\prime }T^{\prime }`$, we see that non-standard time reversal under $`T^{\prime }`$ also follows. The full Lorentz Group is a symmetry of Eq. (3). It is straightforward to check that a large region of Higgs potential parameter space admits a $`P^{\prime }`$ symmetric vacuum.<sup>2</sup> For your amusement, observe that the Exact Parity construction is quite similar to supersymmetry in that both involve extensions of the Proper Lorentz Group and both require particle doubling. These “orthogonal” possibilities are summarised in Fig. 1. ## 2 Mirror Neutrino Solution to the Solar and Atmospheric Neutrino Problems The last two terms in Eq. (2) cause ordinary and mirror neutrinos to mix. For full details about the mass matrix, its diagonalisation and the see-saw mechanism see Ref.<sup>3</sup>. It will suffice here to present a fairly model-independent discussion. Suppose some mechanism, for instance the see-saw as considered above, produces three light ordinary neutrinos and three light mirror neutrinos. Mixing will also be present. In the region of parameter space where interfamily mixing is small, three pairs of parity-eigenstate and therefore maximally-mixed ordinary and mirror neutrinos are produced. The pairwise maximal mixing is enforced by the unbroken parity symmetry. The mass eigenstate neutrinos of family $`\alpha `$, where $`\alpha =e,\mu ,\tau `$, are given by $$|\nu _{\alpha \pm }\rangle =\frac{|\nu _\alpha \rangle \pm |\nu _\alpha ^{\prime }\rangle }{\sqrt{2}}.$$ (4) This mass eigenstate pattern is identical to the pseudo-Dirac case as far as terrestrial experiments are concerned, because the mirror neutrinos are effectively sterile. The EPM is in part an explicit theory of light sterile neutrinos, with the important additional feature of pairwise maximal mixing with the corresponding ordinary neutrino. The $`\mathrm{\Delta }m^2`$ values are in general arbitrary parameters. The atmospheric neutrino data can be explained by $`\nu _\mu \to \nu _\mu ^{\prime }`$ oscillation with $`\mathrm{\Delta }m_{\mu \mu ^{\prime }}^2=10^{-3}-10^{-2}`$ eV<sup>2</sup>. The observed $`\pi /4`$ mixing angle is a successful prediction of the EPM. Most of the solar neutrino data can be analogously explained by maximal $`\nu _e\to \nu _e^{\prime }`$ oscillations, with $`\mathrm{\Delta }m_{ee^{\prime }}^2=4\times 10^{-10}-10^{-3}`$ eV<sup>2</sup> (the Homestake data point is lower than the EPM expectation). Neutral current measurements will further test these hypotheses or rule them out. The LSND results can be accommodated by switching on the appropriate amount of small interfamily mixing.<sup>3</sup> ## 3 Early Universe Cosmology Theories with light sterile or mirror neutrinos give rise to interesting early universe cosmology.
It has been shown that ordinary-sterile (or ordinary-mirror) neutrino oscillations can dynamically amplify the CP asymmetry of the primordial plasma through the production of large relic neutrino-antineutrino asymmetries or chemical potentials.<sup>4,5</sup> The $`\alpha `$-like asymmetry $`L_{\nu _\alpha }`$ is defined by $$L_{\nu _\alpha }=\frac{n_{\nu _\alpha }-n_{\overline{\nu }_\alpha }}{n_\gamma },$$ (5) where $`n_f`$ is the number density of species $`f`$. The oscillation-driven amplification mechanism can generate asymmetries as high as about $`3/8`$. This should be compared to the known baryon asymmetry which is at the $`10^{-10}`$ level. A $`\nu _\alpha \to \nu _\beta ^{\prime }`$ ordinary-mirror mode will generate a large asymmetry prior to the BBN epoch provided the oscillation parameters satisfy<sup>5</sup> $`10^{-10}\lesssim \mathrm{sin}^22\theta _{\alpha \beta ^{\prime }}\lesssim \mathrm{few}\times 10^{-4}\left({\displaystyle \frac{\mathrm{eV}^2}{|\mathrm{\Delta }m_{\alpha \beta ^{\prime }}^2|}}\right)^{\frac{1}{2}},`$ $`\mathrm{\Delta }m_{\alpha \beta ^{\prime }}^2<0,\quad |\mathrm{\Delta }m_{\alpha \beta ^{\prime }}^2|\gtrsim 10^{-4}\mathrm{eV}^2.`$ (6) The small vacuum mixing angle required means that only interfamily $`\nu _\alpha \to \nu _\beta ^{\prime }`$ $`(\alpha \ne \beta )`$ modes can drive this phenomenon. Large neutrino asymmetries have two important consequences. First, they suppress ordinary-mirror oscillations and hence mirror neutrino production in the early universe. Such a suppression mechanism is very welcome, because successful BBN will not be achieved if the mirror sector is in thermal equilibrium with the ordinary sector during the BBN epoch. The doubling in the expansion rate of the universe that would result is not compatible with light element abundance observations. Second, a $`\nu _e`$-$`\overline{\nu }_e`$ asymmetry of the correct magnitude will appreciably affect the neutron-proton interconversion rates and hence also the primordial helium abundance. This can be conveniently quantified by quoting an effective number of relativistic neutrino flavours $`N_{\nu ,\mathrm{eff}}`$ during BBN. Depending on the sign, an $`e`$-like asymmetry can increase or decrease $`N_{\nu ,\mathrm{eff}}`$ from its canonical value of 3. A full analysis within the EPM of neutrino asymmetry generation and implications for BBN can be found in Ref.<sup>5</sup>. I can provide only a very brief summary here. The mirror neutrino suppression mechanism relies on the effective potential for the $`\nu _\alpha \to \nu _\beta ^{\prime }`$ mode in the primordial plasma. For temperatures between the muon annihilation epoch and BBN it is given by $$V_{\mathrm{eff}}=\sqrt{2}G_Fn_\gamma \left[L^{(\alpha )}-L^{\prime (\beta )}-A_\alpha \frac{T^2}{M_W^2}\frac{p}{\langle p\rangle }\right],$$ (7) where $`G_F`$ is the Fermi constant, $`M_W`$ is the $`W`$-boson mass, $`A_\alpha `$ is a numerical factor, and $`p`$ is the neutrino momentum or energy with $`\langle p\rangle \simeq 3.15T`$ being its thermal average. The effective asymmetry $`L^{(\alpha )}`$ is given by $$L^{(\alpha )}=L_{\nu _\alpha }+L_{\nu _e}+L_{\nu _\mu }+L_{\nu _\tau }+\eta $$ (8) where $`\eta `$ is a small term due to the baryon and electron asymmetries. If the effective asymmetry is large, then the large matter potential will suppress the associated oscillation mode.
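As a quick numerical aid (our addition, not from the original text), the growth criteria of Eq. (6) are easy to scan over parameter space; the factor of 4 standing in for “few” below is our own guess:

```python
def generates_asymmetry(delta_m2_eV2, sin2_2theta, few=4.0):
    """Rough check of the asymmetry-growth conditions of Eq. (6).

    delta_m2_eV2 : mass-squared difference in eV^2 (must be negative)
    sin2_2theta  : vacuum mixing, sin^2(2 theta)
    """
    if delta_m2_eV2 >= 0.0:          # Eq. (6) requires Delta m^2 < 0
        return False
    if abs(delta_m2_eV2) < 1e-4:     # and |Delta m^2| >~ 1e-4 eV^2
        return False
    upper = few * 1e-4 / abs(delta_m2_eV2) ** 0.5
    return 1e-10 < sin2_2theta < upper

# e.g. an eV^2-scale interfamily splitting with tiny mixing qualifies:
print(generates_asymmetry(-10.0, 1e-6))   # True
```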
Under the conditions discussed above, a small angle $`\nu _\alpha \to \nu _\beta ^{\prime }`$ $`(\alpha \ne \beta )`$ mode will generate a brief period of exponential growth of $`L_{\nu _\alpha }`$ at a critical temperature $`T_c`$ given approximately by $`T_c\simeq 16[(|\mathrm{\Delta }m_{\alpha \beta ^{\prime }}^2|/\mathrm{eV}^2)\mathrm{cos}2\theta _{\alpha \beta ^{\prime }}]^{1/6}`$ MeV. Consider the parameter space region $$m_{\nu _{\tau \pm }}\gg m_{\nu _{\mu \pm }}\gg m_{\nu _{e\pm }}$$ (9) with the $`\nu _\alpha `$-$`\nu _\alpha ^{\prime }`$ splittings relatively small. Let the interfamily mixing angles be small. Focus on the $`\tau `$-$`\mu `$ subsystem, set $`\mathrm{\Delta }m_{\mu \mu ^{\prime }}^2`$ to the atmospheric anomaly range, and set $`\mathrm{\Delta }m_{\tau \tau ^{\prime }}^2=0`$ for simplicity. The $`\nu _\tau \to \nu _\mu ^{\prime }`$ mode satisfies the conditions for generating a large $`L_{\nu _\tau }`$. Considered by itself, this generates a large $`\mu \mu ^{\prime }`$-like effective asymmetry $`L^{(\mu )}-L^{\prime (\mu )}`$ and so suppresses the maximal $`\nu _\mu \to \nu _\mu ^{\prime }`$ mode. This prevents the $`\nu _\mu ^{\prime }`$ from thermally equilibrating. However, it turns out that the $`\nu _\mu \to \nu _\mu ^{\prime }`$ mode tries to destroy its effective $`\mu \mu ^{\prime }`$-like asymmetry. For a given $`\mathrm{\Delta }m_{\mu \mu ^{\prime }}^2`$, there is a region of $`(\mathrm{\Delta }m_{\tau \mu ^{\prime }}^2,\mathrm{sin}^22\theta _{\tau \mu ^{\prime }})`$ parameter space in which $`L^{(\mu )}-L^{\prime (\mu )}`$ is not efficiently destroyed, and another region in which it is. To analyse this complicated dynamics, one must solve a set of coupled Quantum Kinetic Equations. The outcome is displayed in Fig. 2. The three solid lines correspond to $`\mathrm{\Delta }m_{\mu \mu ^{\prime }}^2=10^{-3},10^{-2.5},10^{-2}`$ eV<sup>2</sup> from bottom to top. Above the solid line, $`\nu _\mu ^{\prime }`$ production via $`\nu _\mu \to \nu _\mu ^{\prime }`$ is negligible. Below the line, the $`\nu _\mu ^{\prime }`$ is brought into thermal equilibrium. The dot-dashed line refers to $`\nu _\mu ^{\prime }`$ production by the asymmetry-creating mode $`\nu _\tau \to \nu _\mu ^{\prime }`$. To the left of the line, $`N_{\nu ,\mathrm{eff}}<3.6`$. The BBN “bound” of $`3.6`$ was chosen for illustrative purposes only. It is interesting that the $`\mathrm{\Delta }m_{\tau \mu ^{\prime }}^2`$ values required for consistency with BBN are compatible with a $`\nu _\tau `$ mass in the hot dark matter range (shaded band). A lengthy analysis allows one to also estimate the $`e`$-like asymmetry that gets generated if the $`e`$ family couples non-negligibly with the more massive families.<sup>5</sup> It turns out that the mass-squared difference between the first and second families, denoted $`\mathrm{\Delta }m_{\mathrm{small}}^2`$ in Ref.<sup>5</sup>, plays an important role. The results for $`\delta N_{\nu ,\mathrm{eff}}\equiv N_{\nu ,\mathrm{eff}}-3`$ depend on the sign of the asymmetry generated, which unfortunately cannot be predicted at present because it depends on the unknown initial values of the asymmetries.<sup>4</sup> The point is simply that an unknown mechanism, not associated with light neutrino oscillations, may generate nonzero asymmetries at some much earlier epoch. While the final magnitudes of the asymmetries around the time of BBN are insensitive to the initial values provided the latter are small, the same is not true for the overall signs of the asymmetries. The signs could also be affected by spatial inhomogeneities in the baryon asymmetry.<sup>7</sup> The convenient quantity $`\delta N_{\nu ,\mathrm{eff}}`$ will be greater or less than zero depending on the sign of $`L_{\nu _e}`$.
The results show that $`\delta N_{\nu ,\mathrm{eff}}`$ gets to the $`\pm 1`$ regime for $`\mathrm{\Delta }m_{\mathrm{small}}^2`$ values in the eV<sup>2</sup> to few eV<sup>2</sup> range. Interestingly, this puts the $`\nu _e`$-$`\nu _\mu `$ mass-squared difference in the LSND regime. Improved primordial abundance measurements are needed before definite conclusions can be drawn about the cosmologically favoured first-second family mass splitting in the EPM. At this meeting, A. Dolgov<sup>8</sup> argued that the oscillation-driven neutrino asymmetry amplification mechanism cannot generate asymmetries as high as claimed in Refs.<sup>4,5</sup>. I do not agree. Space limitations prevent discussion of these issues here. See Ref.<sup>9</sup> for comments on the criticisms of Dolgov et al. ## 4 Conclusions The Mirror or Exact Parity Model solves the solar and atmospheric neutrino problems in a cosmologically consistent way. It is also compatible with the LSND result. The role of neutrino oscillations prior to and during BBN is quite remarkable because of the relic neutrino asymmetry amplification phenomenon. If they exist, light mirror neutrinos will make life and the universe even more interesting! ## Acknowledgments Warmest thanks to Rachel Jeannerot, Goran Senjanovic and Alexei Smirnov for organising this interesting meeting. I acknowledge lively discussions during the meeting with Z. Berezhiani, A. Dolgov, K. Enqvist, K. Kainulainen, S. Pastor and A. Sorri on topics relevant to this paper. I would like to thank Nicole Bell, Roland Crocker, Pasquale Di Bari, Robert Foot, Keith Lee, Paolo Lipari, Maurizio Lusignoli and Yvonne Wong for their collegiality and their insights. This work was supported by the Australian Research Council. ## References Unfortunately, space does not permit full referencing here. The numbers appearing after the references below correspond to the appropriate citations contained in Ref.<sup>5</sup>.
1. 1.
2. 3.
3. 4.
4. 24; 25; 26; 28; 29; P. Di Bari, P. Lipari and M. Lusignoli, hep-ph/9907548, Int. J. Mod. Phys. (in press); P. Di Bari, these proceedings.
5. R. Foot and R. R. Volkas, hep-ph/9904336, Phys. Rev. D (in press).
6. E. W. Kolb, D. Seckel and M. S. Turner, Nature 314, 415 (1985); H. M. Hodges, Phys. Rev. D 47, 456 (1993); V. Berezinsky and A. Vilenkin, hep-ph/9908257; Z. Berezhiani, these proceedings.
7. P. Di Bari, hep-ph/9911214.
8. A. Dolgov, these proceedings.
9. P. Di Bari and R. Foot, hep-ph/9912215; A. Sorri, hep-ph/9911366.
# On embeddings into toric prevarieties ## Introduction This note is concerned with embeddability and non-embeddability into toric prevarieties. Włodarczyk has shown that a normal variety $`Y`$ admits a closed embedding into a toric variety $`X`$ if and only if every pair $`y_1,y_2\in Y`$ is contained in a common affine open neighbourhood. If one drops the latter condition there still exists a closed embedding into some toric prevariety $`X`$. This means that $`X`$ may be non-separated. The above embedding result has the following refinement: Supposing that $`Y`$ is $`\mathbb{Q}`$-factorial, one can choose $`X`$ to be simplicial and of affine intersection. In other words, $`X`$ is $`\mathbb{Q}`$-factorial and the intersection of any two affine open subsets of $`X`$ is again affine (see and ). Embeddings into toric prevarieties of affine intersection can for example be used to obtain global resolutions of coherent sheaves on $`Y`$ (see ). The purpose of this note is to find out whether or not every normal variety $`Y`$ can be embedded into a toric prevariety $`X`$ which is simplicial or of affine intersection. Unfortunately, the answer is negative: We provide examples of normal surfaces that admit neither embeddings into toric prevarieties of affine intersection nor into simplicial ones. Together with Włodarczyk’s embedding result, these counterexamples imply that there are toric prevarieties of affine intersection that cannot be embedded into simplicial toric prevarieties of affine intersection. An analogous statement is easily seen to hold in the category of toric varieties, since there are complete toric varieties with trivial Picard group (see for example ). ## 1. Non-projective complete surfaces Here we briefly discuss some properties of complete normal surfaces which are non-projective. First, we have to fix notation. Throughout, we will work over an uncountable algebraically closed ground field $`k`$. The word prevariety refers to an integral $`k`$-scheme of finite type. A variety is a separated prevariety, and a surface is a 2-dimensional variety. By a toric prevariety we mean a prevariety $`X`$ arising from a system of fans in some lattice $`N`$ in the sense of . For $`k=\mathbb{C}`$ this notion precisely yields the normal prevarieties $`X`$ endowed with an effective regular action of a torus $`T`$ with an open orbit. The following facts are well known (compare , theorem 4.2, and ): ###### Proposition 1.1. Let $`S`$ be a normal surface and denote by $`s_1,\mathrm{\dots },s_n\in S`$ its non-$`\mathbb{Q}`$-factorial singularities. 1. The surface $`S`$ is quasi-projective if and only if the points $`s_1,\mathrm{\dots },s_n`$ have a common affine neighbourhood. 2. The surface $`S`$ admits a closed embedding into a toric variety if and only if every pair $`s_i,s_j`$ lies in a common affine neighbourhood. In view of these statements, the simplest possible candidate for a normal variety without closed embeddings into toric prevarieties of affine intersection is a normal surface having precisely two non-$`\mathbb{Q}`$-factorial singularities $`s_1,s_2\in S`$. In fact we will work with surfaces of this type. Let $`S`$ be a complete normal surface. There is a projective reduction $`r:S\to S^{\mathrm{prj}}`$ of $`S`$, which means that the morphism $`r`$ is universal with respect to morphisms to projective schemes (see ). Since $`𝒪_{S^{\mathrm{prj}}}\to r_{*}(𝒪_S)`$ is necessarily bijective, the projective scheme $`S^{\mathrm{prj}}`$ is normal and the fibres of $`r`$ are connected.
If $`S^{\mathrm{prj}}`$ is a curve, any Weil divisor $`E`$ on $`S`$ has a decomposition $$E=E^{\mathrm{vert}}+E^{\mathrm{hor}}$$ into the part $`E^{\mathrm{vert}}`$ consisting of those prime cycles of $`E`$ that are contained in the fibres of $`r`$ and the part $`E^{\mathrm{hor}}`$ which consists of the remaining prime cycles. A Weil divisor $`E`$ on $`S`$ is called vertical if $`E=E^{\mathrm{vert}}`$ holds, and it is called horizontal if $`E=E^{\mathrm{hor}}`$. For a Weil divisor $`D`$ on a variety $`Y`$ and $`y\in Y`$, we denote by $`D_y`$ the Weil divisor obtained from $`D`$ by omitting all prime cycles not containing the point $`y`$. The following statement will be crucial for non-embeddability: ###### Lemma 1.2. Let $`S`$ be a complete normal surface with precisely two non-$`\mathbb{Q}`$-factorial singularities $`s_1`$, $`s_2\in S`$. Assume that $`S^{\mathrm{prj}}`$ is a curve and that every vertical Weil divisor on $`S`$ is $`\mathbb{Q}`$-Cartier. If $`E`$ is a $`\mathbb{Q}`$-Cartier divisor on $`S`$ with $`E_{s_2}\ge 0`$ and $`E^{\mathrm{hor}}\ne 0`$, then $`E_{s_1}^{\mathrm{hor}}`$ is not effective. ###### Proof. Assume that $`E_{s_1}^{\mathrm{hor}}\ge 0`$ holds. Since $`E`$ and $`E^{\mathrm{vert}}`$ are $`\mathbb{Q}`$-Cartier, so is their difference $`E^{\mathrm{hor}}`$. Consider the decomposition $$E^{\mathrm{hor}}=E_{+}^{\mathrm{hor}}+E_{-}^{\mathrm{hor}}$$ into positive and negative parts. Since $`s_1`$ and $`s_2`$ are not contained in $`E_{-}^{\mathrm{hor}}`$, the surface $`S`$ is $`\mathbb{Q}`$-factorial near $`E_{-}^{\mathrm{hor}}`$. So $`E_{-}^{\mathrm{hor}}`$ is $`\mathbb{Q}`$-Cartier. Hence $`E_{+}^{\mathrm{hor}}`$ is also $`\mathbb{Q}`$-Cartier. As $`E^{\mathrm{hor}}\ne 0`$, there must be at least one horizontal Cartier divisor $`C\subseteq S`$. Consider the invertible sheaf $`\mathcal{L}=𝒪_S(C)`$. Obviously, $`\mathcal{L}`$ is $`S^{\mathrm{prj}}`$-generated on $`S\setminus C`$. Since $`C\to S^{\mathrm{prj}}`$ is finite, we can apply , Theorem 1.1 (over affine open subsets $`\mathrm{Spec}(R)\subseteq S^{\mathrm{prj}}`$), and deduce that $`\mathcal{L}^n`$ is $`S^{\mathrm{prj}}`$-generated for some $`n>0`$. Consequently, the homogeneous spectrum $$S^{\prime }=\mathrm{Proj}(r_{*}\mathrm{Sym}(\mathcal{L}))$$ is a projective $`S^{\mathrm{prj}}`$-scheme, hence projective. But $`\mathcal{L}`$ is ample on the generic fibre of $`r:S\to S^{\mathrm{prj}}`$, contradicting the universal property of the projective reduction. ∎ ###### Remark 1.3. Surfaces as above really exist, compare , 2.5. Here the assumption that the ground field $`k`$ is uncountable comes in. We will also make use of the following elementary fact (a toy illustration is given below): ###### Lemma 1.4. Let $`f:Y\to X`$ be a morphism of integral normal prevarieties. Given a Cartier divisor $`D`$ on $`X`$ such that the preimage $`E:=f^{*}(D)`$ exists as a Cartier divisor, decompose $`D=\sum _i\lambda _iD_i`$ and $`E=\sum _j\mu _jE_j`$ into prime cycles. If there is a component $`E_j`$ with $`\mu _j<0`$, then there is a component $`D_i`$ with $`\lambda _i<0`$ and $`f(E_j)\subseteq D_i`$. ###### Proof. Assume there is no such $`D_i`$. Decompose $`D=D_{+}+D_{-}`$ into positive and negative parts. Then we have $`E_j\not\subseteq f^{-1}(D_{-})`$. Hence the restriction of $`E`$ to $`Y\setminus f^{-1}(D_{-})`$ is not effective. On the other hand, the restriction of $`D`$ to $`X\setminus D_{-}`$ is effective, contradiction. ∎ ## 2. Non-embeddability Recall that a scheme $`X`$ is separated if the diagonal morphism $`\mathrm{\Delta }:X\to X\times X`$ is a closed embedding. A weaker condition is that $`\mathrm{\Delta }`$ is affine. In this situation we say that $`X`$ is of affine intersection. To check this property it suffices to find an open affine covering $`X=\bigcup _iX_i`$ such that each intersection $`X_i\cap X_j`$ is affine.
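As an aside, here is a toy illustration of lemma 1.4 (our addition, not part of the original argument). Consider the squaring map on the affine line and a principal divisor with one negative component: $$f:\mathbb{A}^1\to \mathbb{A}^1,\qquad f(z)=z^2,\qquad D=\mathrm{div}\left(\frac{x}{x-1}\right)=[0]-[1].$$ Then $$E=f^{*}(D)=\mathrm{div}\left(\frac{z^2}{z^2-1}\right)=2\,[0]-[1]-[-1],$$ and the two components with negative multiplicity, $`[1]`$ and $`[-1]`$, are both mapped by $`f`$ into the unique negative component $`[1]`$ of $`D`$, exactly as the lemma predicts.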
Clearly, if $`X`$ is of affine intersection, every subscheme is so. ###### Theorem 2.1. Let $`S`$ be a complete normal surface with precisely two non-$`\mathbb{Q}`$-factorial singularities $`s_1,s_2\in S`$. Assume that the projective reduction $`S^{\mathrm{prj}}`$ is a curve and that all irreducible components of the fibres of $`r:S\to S^{\mathrm{prj}}`$ are $`\mathbb{Q}`$-Cartier. Let $`f:S\to X`$ be a morphism to a toric prevariety $`X`$. If $`X`$ is simplicial or of affine intersection, then there is a morphism $`\stackrel{~}{f}:S^{\mathrm{prj}}\to X`$ with $`f=\stackrel{~}{f}\circ r`$. Let us record the following evident ###### Corollary 2.2. The surface $`S`$ is neither embeddable into a toric prevariety of affine intersection nor into a simplicial toric prevariety. Proof of theorem 2.1. First we treat the case that $`X`$ is of affine intersection. Let $`T`$ denote the acting torus of $`X`$. There is a unique $`T`$-orbit $`B\subseteq X`$ such that $`f(S)\subseteq \overline{B}`$ holds. Note that $`\overline{B}`$ is again a toric prevariety of affine intersection. Replacing $`X`$ by $`\overline{B}`$ we may assume that $`f(S)`$ hits the open $`T`$-orbit. Set $`x_1:=f(s_1)`$ and $`x_2:=f(s_2)`$. For $`i=1,2`$ let $`X_i\subseteq X`$ be the affine $`T`$-stable neighborhoods containing $`Tx_i`$ as closed orbits, respectively. Set $`N:=\mathrm{Hom}(k^{*},T)`$ and let $`M:=\mathrm{Hom}(N,\mathbb{Z})`$. Then we have $$X_i=X_{\sigma _i}=\mathrm{Spec}(k[\sigma _i^{\vee }\cap M])$$ for certain cones $`\sigma _i`$ in the lattice $`N`$. Since $`X`$ is of affine intersection, the set $`X_{12}:=X_1\cap X_2`$ is of the form $`X_{12}=X_{\sigma _{12}}`$ for some common face $`\sigma _{12}`$ of $`\sigma _1`$ and $`\sigma _2`$. Hence there is a linear form $`u\in M`$ with $$u\in \sigma _1^{\vee }\quad \text{and}\quad \sigma _{12}=\sigma _1\cap u^{\perp }.$$ Since $`f(S)`$ hits the open $`T`$-orbit, all characters $`\chi ^m`$, $`m\in M`$, admit pullbacks $`f^{*}(\chi ^m)`$ as rational functions. In particular $`\chi ^u`$ has a pullback $`\psi `$ with respect to $`f`$, and the principal divisor $`\mathrm{div}(\chi ^u)`$ has the principal divisor $`\mathrm{div}(\psi )`$ as pullback. Since $`\chi ^u`$ is defined on $`X_1`$, we have $$\mathrm{div}(\psi )_{s_1}\ge 0.$$ We claim that no component of $`\mathrm{div}(\psi )_{s_1}`$ contains the point $`s_2\in S`$. Otherwise, let $`E\subseteq S`$ be such a component. Then there exists a $`T`$-stable component $`D_1\subseteq X_1`$ of $`\mathrm{div}(\chi ^u)\cap X_1`$ containing $`f(E)\cap X_1`$. This component corresponds to an edge $`\varrho _1\subseteq \sigma _1`$. Note that by , p. 61 we have $`\varrho _1\not\subseteq u^{\perp }`$. Take the closure $`D:=\overline{D_1}`$ in $`X`$, set $`D_2:=D\cap X_2`$, and let $`\varrho _2\subseteq \sigma _2`$ be the edge corresponding to $`D_2\subseteq X_2`$. Then $`s_2\in E`$ implies $$x_2=f(s_2)\in f(E)\cap X_2\subseteq D_2.$$ Thus $`D\cap X_2`$ is non-empty, hence $`D_{12}:=D\cap X_{12}`$ is a $`T`$-stable Weil divisor in $`X_{12}=X_{\sigma _{12}}`$. Let $`\varrho _{12}\subseteq \sigma _{12}`$ be the edge corresponding to $`D_{12}\subseteq X_{12}`$. Since $`D_{12}`$ is induced by $`D_1`$ and $`D_2`$ as well, we conclude $`\varrho _1=\varrho _{12}=\varrho _2`$. Thus we have $`\varrho _1\subseteq \sigma _{12}\subseteq u^{\perp }`$, contradicting $`\varrho _1\not\subseteq u^{\perp }`$. As a consequence of our claim, $`\mathrm{div}(\psi )_{s_1}`$ is an effective $`\mathbb{Q}`$-Cartier divisor. According to lemma 1.2, the divisor $`\mathrm{div}(\psi )_{s_1}`$ is vertical. Next, we show that for every principal divisor $`D=\mathrm{div}(\chi ^m)`$, $`m\in M`$, its pullback to $`S`$ is vertical. It suffices to check this for a set of generators of $`M`$, for example $`M\cap \sigma _2^{\vee }`$. Assume that there is some $`m\in M\cap \sigma _2^{\vee }`$ such that $`E:=f^{*}(D)`$ is not vertical.
Decompose $`D=\sum _i\lambda _iD_i`$ and $`E=\sum _j\mu _jE_j`$ into prime cycles. Since $`D_{x_2}\ge 0`$, we also have $`E_{s_2}\ge 0`$. By lemma 1.2 there must be some horizontal component $`E_j`$ containing $`s_1`$ with $`\mu _j<0`$. Lemma 1.4 tells us that there is some component $`D_i`$ with $`\lambda _i<0`$ and $`f(E_j)\subseteq D_i`$. In particular we have $`x_1\in D_i`$. Let $`\varrho _i\subseteq \sigma _1`$ be the edge corresponding to the $`T`$-invariant Weil divisor $`D_i\cap X_1`$. Since $`x_2\notin D_i`$ we have $`\varrho _i\not\subseteq \sigma _{12}`$. Hence the edge $`\varrho _i\subseteq \sigma _1`$ is not contained in $`u^{\perp }`$, and $`D_i`$ occurs with positive multiplicity in $`\mathrm{div}(\chi ^u)_{x_1}`$. On the other hand, we already have seen that $`\mathrm{div}(\psi )_{s_1}`$ is vertical. Consequently, $`E_j\subseteq f^{-1}(D_i)`$ is also vertical, contradiction. We have shown that $`\mathrm{div}(f^{*}(\chi ^m))`$ is vertical for all $`m\in M`$. As a consequence, the induced morphism $`f^{-1}(T)\to T`$ is constant along the generic fibre of $`r:S\to S^{\mathrm{prj}}`$. By the rigidity lemma (see for example , 1.5), all fibres of $`r`$ are mapped to points under $`f:S\to X`$. Now , 8.11.1 gives a morphism $`\stackrel{~}{f}:S^{\mathrm{prj}}\to X`$ with $`f=\stackrel{~}{f}\circ r`$. This proves the theorem for toric prevarieties of affine intersection. It remains to treat the case that $`X`$ is simplicial. According to , Section 1, there is a toric prevariety $`X^{\prime }`$ of affine intersection and a local isomorphism $`g:X\to X^{\prime }`$. As we have seen, $`g\circ f`$ is constant along the fibres of $`r:S\to S^{\mathrm{prj}}`$. Since the fibres of $`r`$ are connected, this implies that $`f`$ is constant along the fibres of $`r`$, and we obtain the desired factorization as above.
# Non-equilibrium Surface Growth and Scalability of Parallel Algorithms for Large Asynchronous Systems ## 1 Introduction Dynamic Monte Carlo (MC) simulations are invaluable tools for investigating the evolution of complex systems. For a wide range of systems it is plausible to assume (and in rare cases it is possible to derive) that attempts to update the state of the system form a Poisson process. The basic notion is that time is continuous, and the discrete events (update attempts) occur instantaneously. The state of the system remains constant between events. It is worthwhile to note that the standard random-sequential update schemes (easily implementable on serial computers) produce this dynamics for “free”: the waiting-time distribution for the attempts to update each subsystem or component is geometrical and approaches the exponential distribution in the large-system limit. This uniquely characterizes the Poisson process. The parallel implementation of these dynamic MC algorithms belongs to the class of parallel discrete-event simulation, which is one of the most challenging areas in parallel computing \[Fuji\]. The numerous applications range from the natural sciences and engineering to computer science and queueing networks. For example, in lattice Ising models the discrete events are spin-flip attempts, while in queueing systems they are job arrivals. The difficulty of parallel discrete-event simulations is that update attempts are not synchronized by a global clock. In fact, the traditional dynamic MC algorithms were long believed to be inherently serial, i.e., in spin language, the corresponding algorithm was thought to be able to update only one spin at a time. However, Lubachevsky presented an approach for parallel simulation of these systems \[Luba\] without changing the underlying Poisson process. Applications include modeling of cellular communication networks \[GLNW\], particle deposition \[LPR\], and metastability and hysteresis in kinetic Ising models \[KNR\]. In a distributed massively parallel scheme each processing element (PE) carries a subsystem of the full system. The parallel algorithm must concurrently advance the Poisson streams corresponding to each subsystem without violating causality. This requires the concept of local simulated time, as well as a synchronization scheme. Intuitively it is clear that systems with short-range interactions contain a “substantial” amount of parallelism. For the “conservative” approach \[Luba\], the efficiency of the algorithm is simply the fraction of PEs that are guaranteed to attempt the update without breaking causality. The rest of the PEs must idle. ## 2 Time Horizon Evolution and Efficiency Modeling We consider a $`d`$-dimensional hypercubic regular lattice topology where the underlying physical system has only nearest-neighbor (nn) interactions (e.g., Glauber spin-flip dynamics) and periodic boundary conditions. The scalability analysis is made for the “worst-case” scenario in which each PE hosts a single site (e.g., one spin) of the underlying system. While this may be the only scenario for a special-purpose computer with extremely limited local memory, on architectures with relatively large memory one PE can host a block of sites. This substantially increases the efficiency, bringing it to the level of practical applicability \[KNR\]. In the basic parallel scheme \[Luba\], each PE generates its own local simulated time for the next update attempt. The set of local times $`\{\tau _i(t)\}_{i=1}^{L^d}`$ constitutes the simulated time horizon.
Here, $`L`$ is the linear size of the lattice ($`L^d`$ is the number of PEs), and $`t`$ is the index of the simultaneously performed parallel steps. Initially, $`\tau _i(0)=0`$ for every site. At each parallel time step, only those PEs for which the local simulated time is not greater than the local simulated times of their nn can attempt the update and increment their local time by an exponentially distributed random amount, $`\eta _i(t)`$. Without loss of generality we take $`\langle \eta _i(t)\rangle =1`$. The other PEs idle. Due to the continuous nature of the random simulated times, for $`t>0`$ the probability of equal-time updates for any two sites is of measure zero. The comparison of the nn simulated times and idling if necessary enforces causality. Since at worst the PE with the absolute minimum simulated time makes progress, the algorithm is free from deadlock. For this basic conservative scheme, the theoretical efficiency (ignoring communication overheads) is simply the fraction of non-idling PEs. This corresponds to the density of local minima of the simulated time horizon. Note that the evolution of the simulated time horizon is completely independent of the underlying model (except for its topology) and can be written as: $$\tau _i(t+1)=\tau _i(t)+\left[\prod _{j\in D_i^{\mathrm{nn}}}\mathrm{\Theta }\left(\tau _j(t)-\tau _i(t)\right)\right]\eta _i(t).$$ (1) Here $`D_i^{\mathrm{nn}}`$ is the set of nearest neighbors of site $`i`$, and $`\mathrm{\Theta }(\cdot )`$ is the Heaviside step function. The evolution of the simulated time horizon is clearly analogous to an irreversibly growing and fluctuating surface. There are two important quantities to study. The first is the density of local minima, $`\langle u(t)\rangle _L`$, in particular its asymptotic (or steady-state) value and finite-size effects. It corresponds directly to the efficiency of the algorithm. The second is the surface width, $`w^2(t)=(1/L^d)\sum _{i=1}^{L^d}\left[\tau _i(t)-\overline{\tau }(t)\right]^2`$, where $`\overline{\tau }(t)=(1/L^d)\sum _{i=1}^{L^d}\tau _i(t)`$. It describes the macroscopic roughness of the time horizon and has important consequences for actual implementations \[GSS\] (e.g., optimal buffer size for a statistics-collecting network \[Luba\]). For $`d=1`$ we showed \[KTNR\] by coarse-graining and direct simulation of (1) that the evolution of the simulated time horizon belongs to the KPZ universality class \[KPZ\]. Our simulation confirmed that before reaching the steady state, $`w^2(t)\sim t^{2\beta }`$ with $`\beta \simeq 1/3`$ \[Fig. 1(a)\]. At the same time the density of local minima, $`\langle u(t)\rangle _L`$, decreases monotonically with time towards a long-time asymptotic limit well separated from zero \[Fig. 1(b)\]. The steady state is governed by the Edwards-Wilkinson Hamiltonian \[EW\], and the stationary width scales as $`w^2\sim L^{2\alpha }`$, where $`\alpha =1/2`$ is the roughness exponent. This guarantees that the coarse-grained landscape is a simple random-walk surface; the local slopes are short-range correlated. Thus, the density of local minima is non-zero. The non-zero density of local minima is a universal characteristic of this class \[KTNR, TKSZ\]. Further, its steady-state finite-size effects can be written as $`\langle u\rangle _L=\langle u\rangle _\infty +\mathrm{const}/L`$. The $`𝒪(1/L)`$ correction in $`d=1`$ appears to be rather robust for periodic boundary conditions \[TKSZ\]. The extrapolated value for the efficiency is $`\langle u\rangle _\infty =0.2464(1)`$ \[Fig. 1(c)\].
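The update rule (1) is simple enough to simulate directly; below is a minimal $`d=1`$ sketch (our illustration; the system size and step count are arbitrary choices). Because ties have probability zero after the first step, the `<=` comparison matches the “not greater than” rule stated above.

```python
import numpy as np

# Minimal d=1 simulation of Eq. (1): evolve the simulated time horizon and
# measure the density of local minima (= utilization) and the width w^2(t).
L, steps = 1000, 1000
rng = np.random.default_rng(1)
tau = np.zeros(L)                                     # tau_i(0) = 0

for t in range(steps):
    left, right = np.roll(tau, 1), np.roll(tau, -1)   # periodic boundaries
    active = (tau <= left) & (tau <= right)           # local minima update
    tau[active] += rng.exponential(1.0, size=active.sum())  # <eta> = 1
    u = active.mean()                                 # density of local minima
    w2 = np.mean((tau - tau.mean()) ** 2)             # surface width w^2(t)

print(f"utilization ~ {u:.3f} (approaches ~0.25 for large L), w^2 ~ {w2:.1f}")
```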
In higher dimensions we observe the same qualitative behavior as for $`d=1`$. The surface roughens and saturates for any finite system, as seen in Fig. 2(a,c). Simultaneously, the density of local minima decreases monotonically towards its asymptotic ($`t\to \infty `$) finite-size value \[Fig. 2(b,d)\]. Again, the steady-state density of local minima appears to be well separated from zero. For $`d=2`$, $`\langle u\rangle _\infty \simeq 0.12`$, and for $`d=3`$, $`\langle u\rangle _\infty \simeq 0.075`$. The $`\langle u\rangle _\infty \sim 𝒪(1/K)`$ behavior appears to be rather general \[GSS\], where $`K=2d`$ is the number of nearest neighbors on a regular lattice. Similar to the $`d=1`$ case, corrections to scaling are very strong, both for the surface width and the density of local minima. While for $`d=1`$ we were able to simulate large systems ($`L\sim 10^3`$) to obtain the KPZ scaling exponents and the steady-state finite-size behavior of $`\langle u\rangle _L`$, in higher dimensions the relatively small system sizes prevented us from extracting the scaling behavior of the width and the finite-size effects of the density of local minima. We conjecture that the simulated time horizon exhibits KPZ-like evolution in higher dimensions as well. While the scalability of the parallel scheme for random topology has been investigated earlier \[GSS\], the underlying mechanism for regular lattices (macroscopic roughening) was only recently pointed out \[KTNR\]. The major implication is that while the algorithm is scalable, and the width of the simulated time horizon saturates for any finite number of PEs, the width diverges in the limit of an infinite number of PEs. Thus, in an actual implementation the programmer has to take some actions to handle statistics collection. ## 3 Summary and Outlook The analogy between the evolution of the simulated time horizon and non-equilibrium surface growth illustrates that when devising parallel algorithms, the programmer has to be very much concerned with the morphological properties of the associated surface. To fully describe the evolution of the simulated time horizon and the efficiency of the algorithm, one must focus on both the long wave-length behavior (the macroscopic width) and short-distance properties (the density of local minima). While most previous surface studies focus on the long wave-length properties, physics at short distances is just as challenging, has real applications, and may reveal universal features \[KTNR, TKSZ\]. ## Acknowledgments We thank B. D. Lubachevsky, A. Weiss, and S. Das Sarma for useful discussions. We acknowledge the support of DOE through SCRI-FSU, NSF-MRSEC at UMD, and NSF through Grant No. DMR-9871455.
# Tailoring of vibrational state populations with light-induced potentials in molecules ## Abstract We propose a method for achieving highly efficient transfer between the vibrational states in a diatomic molecule. The process is mediated by strong laser pulses and can be understood in terms of light-induced potentials. In addition to describing a specific molecular system, our results show how, in general, one can manipulate the populations of the different quantum states in double well systems. Quantum control of molecular processes has recently opened new possibilities for steering chemical processes, and also for understanding the multistate quantum dynamics of molecular systems. With strong and short laser pulses one can create quantum superpositions of vibrational and continuum states, i.e., wave packets, which often propagate like classical objects under the influence of molecular electronic potentials. Typically in spectroscopy one uses laser light to induce population transfer between individual vibrational states, and describes the process using Franck-Condon factors, perturbation theory and a very truncated Hilbert space for the molecular wave functions. If the laser light comes in as a very short pulse we expect to couple many vibrational states because of the broad spectrum. However, in this Letter we show that it is possible to start from a single vibrational state, and end up selectively in another single vibrational state, with very high efficiency, even when one uses strong and fast laser pulses. We have previously discussed how one can transfer the vibrational ground state population of one electronic state to the vibrational ground state of another electronic state. This process was called adiabatic passage in light-induced potentials (APLIP). These potentials, which depend on time due to the time dependence of the laser pulse envelopes, provide a useful description of the process (see e.g. Ref. and references therein). Here we use the same kind of light-induced potentials to describe and understand transfer processes between excited vibrational states. However, we will see that the process is more complicated than the simple adiabatic following assumed in APLIP. To demonstrate the new process we have chosen the same Na dimer potentials, shown in Fig. 1(a), as in our previous study. The three electronic states are coupled in a ladder formation by two laser pulses. Instead of describing the system in terms of the vibrational states and Franck-Condon factors, we consider only the electronic state wave functions, $`\mathrm{\Psi }_i(R,t)`$, $`i=X,A,\mathrm{\Pi }`$. If we apply the rotating wave approximation, we can ‘shift’ the $`X^1\mathrm{\Sigma }_g^+`$ and $`2^1\mathrm{\Pi }_g`$ state potentials in energy by the corresponding laser photons. Then we obtain the situation shown in Fig. 1(b), after a suitable redefinition of the energy zero point. The resonances between the electronic state potentials now become curve crossings.
The evolution of the three wave functions $`\mathrm{\Psi }_i(R,t)`$ is given by the time-dependent Schrödinger equation with the Hamiltonian $$H=-\frac{\hbar ^2}{2m}\frac{\partial ^2}{\partial R^2}+𝒰(R,t)$$ (1) where $`R`$ is the internuclear separation, $`m`$ is the reduced mass of the molecule, and the electronic potentials and couplings are given by $$𝒰(R,t)=\left[\begin{array}{ccc}U_X(R)& \hbar \mathrm{\Omega }_1(t)& 0\\ \hbar \mathrm{\Omega }_1(t)& U_A(R)+\hbar \mathrm{\Delta }_1& \hbar \mathrm{\Omega }_2(t)\\ 0& \hbar \mathrm{\Omega }_2(t)& U_\mathrm{\Pi }(R)+\hbar (\mathrm{\Delta }_1+\mathrm{\Delta }_2)\end{array}\right].$$ (2) Here $`U_X(R)`$, $`U_A(R)`$, and $`U_\mathrm{\Pi }(R)`$ are the three potentials, $`\mathrm{\Delta }_1`$ and $`\mathrm{\Delta }_2`$ are the detunings of the two pulses from the lowest points of the potentials \[dashed lines in Fig. 1(a)\], and $`\mathrm{\Omega }_1(t)=\mu _{XA}E_1(t)/\hbar `$, $`\mathrm{\Omega }_2(t)=\mu _{A\mathrm{\Pi }}E_2(t)/\hbar `$ are the two Rabi frequencies. We have assumed for simplicity that the two dipole moments are independent of $`R`$ and we have used Gaussian pulse shapes, $`\mathrm{\Omega }_i(t)=\mathrm{\Omega }\mathrm{exp}\{-[(t-t_i)/T]^2\}`$, $`i=1,2`$. The light-induced potentials are obtained by diagonalising the potential term (2). For these potentials the curve crossings become avoided crossings. It is easy to see from Fig. 1(b) that in the absence of the pulses (or when they both are weak) one of the light-induced states corresponds, at low energies, to the double well structure formed by the $`X^1\mathrm{\Sigma }_g^+`$ and $`2^1\mathrm{\Pi }_g`$ electronic state potentials. In Fig. 1(b) this corresponds to the lower part of the two solid curves and we will call this the active eigenstate. In Ref. we showed that, if the pulses are applied in a counterintuitive order ($`t_1>t_2`$), the double well structure will disappear as the bottom of the (initially empty) well on the right moves up and vanishes. (In Fig. 1(b) this is seen to be because of a repulsion between the $`2^1\mathrm{\Pi }_g`$ and $`A^1\mathrm{\Sigma }_u^+`$ states for $`\mathrm{\Delta }<0`$.) After this the left well broadens and moves to the right. Finally the double well structure is re-established as the pulses reduce in intensity. If we now consider that the ground vibrational state of the left well is initially populated, that population would follow the light-induced potential and be transformed smoothly into the ground state wave function of the right well. This is APLIP, and the smooth change in the total wave function is demonstrated in the contour plot in Fig. 2(a). If we choose $`\mathrm{\Delta }_1=\mathrm{\Delta }_2\equiv \mathrm{\Delta }`$, the bottoms of the two wells are on the same level initially and finally \[as in Fig. 1(b)\]. The sign of $`\mathrm{\Delta }`$ determines the evolution of the light-induced potential. For negative $`\mathrm{\Delta }`$ we obtain the APLIP situation, but for positive $`\mathrm{\Delta }`$ the right well drops down at first, instead of disappearing. In this case the APLIP situation is not obtained. One is tempted to associate the APLIP process with STIRAP, and consider the active light-induced potential as a dark state, which, because of the counterintuitive pulse order, remains uncoupled from the other states. It is true that during the APLIP process the intermediate state ($`A^1\mathrm{\Sigma }_u^+`$) population remains very small.
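Concretely, the light-induced potentials are just the $`R`$-by-$`R`$ eigenvalues of the dressed matrix (2). The following is a minimal sketch of that diagonalisation (our illustration only: the toy harmonic curves stand in for the real Na dimer potentials, and all numbers are schematic):

```python
import numpy as np

hbar = 1.0                                 # schematic units throughout
R = np.linspace(2.0, 12.0, 400)            # internuclear separation grid
U_X  = 0.50 * (R - 3.1)**2                 # toy stand-ins for the real
U_A  = 0.40 * (R - 3.6)**2 + 2.0           # X, A and Pi potentials
U_Pi = 0.50 * (R - 4.5)**2

def light_induced_potentials(Omega1, Omega2, Delta1, Delta2):
    """Eigenvalues of the dressed potential matrix (2) at every R."""
    pots = np.empty((R.size, 3))
    for k in range(R.size):
        U = np.array([
            [U_X[k],      hbar*Omega1,          0.0        ],
            [hbar*Omega1, U_A[k] + hbar*Delta1, hbar*Omega2],
            [0.0,         hbar*Omega2, U_Pi[k] + hbar*(Delta1 + Delta2)],
        ])
        pots[k] = np.linalg.eigvalsh(U)    # sorted adiabatic surfaces
    return pots

# Evaluating this on a time grid, with Omega_i(t) Gaussian as in the text,
# traces how the double-well structure deforms during the pulse sequence.
adiabatic = light_induced_potentials(0.3, 0.5, -0.1, -0.1)
```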
However, the process is not the same as STIRAP, because the strength of the atom-light coupling, i.e., the Rabi frequency, becomes much larger than the vibrational spacing; it is not possible to isolate only a few energy levels. Furthermore, in a STIRAP process there would not be the smooth transport of the wave packet seen in Fig. 2(a), because only two simple vibrational eigenfunctions would be involved. STIRAP is also considered to be an adiabatic process, which is not strictly the case for Fig. 2. We can see this by considering the initial (and final) vibrational states of the active light-induced potential, which match the two original electronic potentials as shown in Fig. 2(b). The lowest states correspond to the individual vibrational states of the original potentials. In STIRAP we should start with the lowest state on the left, and we would expect it to evolve adiabatically into the lowest state on the right, though not in the smooth way seen for the APLIP process in Fig. 2(a). However, if the evolution were truly adiabatic the population would remain in the left well; by definition of adiabatic following it cannot change state. This is because, for the example shown in Fig. 2(b), as the right well rises up, and the right-hand-well wave function with it, there is a diabatic crossing of the energy states which allows the population to remain in the left well as the right well vanishes. As long as the barrier between the two wells is thick enough, the crossing of vibrational eigenenergies remains diabatic (uncoupled) and the wave packet (left-hand well vibrational ground state) can be channelled from left to right (APLIP). There always has to be at least one such diabatic crossing, but if there is only one, it could come at the start, or the end, of the time evolution depending on whether the initial or final vibrational state is lower in energy. Thus the interesting processes mediated by the light-induced potentials are different from STIRAP, and they cannot be obtained by perfect adiabatic following. Figure 2(b) also displays the initial and final situation if we now switch to a positive detuning. We choose $`\hbar \mathrm{\Delta }=2010`$ cm<sup>-1</sup> for the time evolution shown in Fig. 3(a), where $`\hbar \mathrm{\Omega }=733`$ cm<sup>-1</sup> and $`t_1-t_2=T=5.5`$ ps. We denote the vibrational quantum number of the states in the electronic potential by $`\nu `$, and indicate with the terms “left well” and “right well” whether these states correspond to the initial state ($`X^1\mathrm{\Sigma }_g^+`$), or the final state ($`2^1\mathrm{\Pi }_g`$), respectively. We transfer the $`\nu =0`$ state of the left well into the $`\nu =1`$ state of the right well. The process is very efficient: the occupation probability of the right well ($`2^1\mathrm{\Pi }_g`$ state) at the end is 94 %. Clearly the crucial moment is around $`t=20`$ ps where the original single-peak distribution extends to the right and forms temporarily a four-peaked distribution over the combined well system. Also, around $`t=25`$ ps the four-peak structure compresses into the two-peak structure located in the right well. In order to understand the process, we have calculated the time dependence of the eigenenergies of the active light-induced potential \[seen for initial and final time in Fig. 2(b)\]. The figure shows clearly that if the evolution were fully adiabatic, no change of $`\nu `$ could occur during the time evolution because there are no crossings of eigenenergies.
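The eigenenergies referred to here can be obtained by representing the nuclear Hamiltonian on a grid for the instantaneous light-induced potential and diagonalising it; tracking the lowest eigenvalues against time then yields the avoided-crossing diagram. A finite-difference sketch (our illustration, in schematic units; the double-well shape below is invented):

```python
import numpy as np

def vibrational_levels(V, dR, m=1.0, hbar=1.0, nlevels=6):
    """Lowest eigenenergies of -hbar^2/(2m) d^2/dR^2 + V(R) on a grid."""
    kin = hbar**2 / (2.0 * m * dR**2)
    H = (np.diag(V + 2.0 * kin)
         - kin * (np.eye(V.size, k=1) + np.eye(V.size, k=-1)))
    return np.linalg.eigvalsh(H)[:nlevels]

R = np.linspace(2.0, 12.0, 400)
toy_double_well = 0.005 * (R - 3.0)**2 * (R - 7.0)**2   # invented shape
print(vibrational_levels(toy_double_well, R[1] - R[0]))
```

Repeating this diagonalisation for the active light-induced potential at each time step reproduces the pattern of avoided crossings that are passed diabatically when the pulses are weak and adiabatically when they are strong.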
The absence of true crossings also means that no change of well is possible either. However, we do see several avoided crossings between the eigenenergies, and the point is that when the pulses are weak, these crossings are passed diabatically, and when the pulses are strong they are passed adiabatically. In our example, Fig. 3(b), the second lowest state corresponds initially, and finally, to the lowest vibrational state of the left well (i.e., the state populated initially in our calculation), as may be confirmed by inspecting Fig. 2(b). During the time evolution the wave function passes the first two avoided crossings diabatically, but for $`t>20`$ ps it follows nearly adiabatically the fourth eigenstate. Just before $`t=30`$ ps it moves diabatically onto the third eigenstate, which asymptotically corresponds to the $`\nu =1`$ state of the right well \[see again Fig. 2(b)\]. In Fig. 4 we show the change at the crucial moment around $`t=21`$ ps. The transition from a narrow single-well wave function into a wide combined two-well wave function takes place very quickly. A fascinating aspect is that the evolution of the total wave function still follows the fourth vibrational eigenstate of the active light-induced potential. A similar adiabatic following happens also near $`t=25`$ ps, when the third and the fourth eigenstate again approach each other. The example given here is not unique. By allowing $`\mathrm{\Delta }_1\ne \mathrm{\Delta }_2`$ we have a very good handle for choosing which single-well states are paired into a combined-well state. In Fig. 5 we show two other examples. They demonstrate that the process does not require the initial state on the left to be $`\nu =0`$, and that the process can also be used to change $`\nu `$ while keeping the initial and final well the same. Even in the region where both pulses are strong, the adiabatic following of a single vibrational eigenstate is not complete. Small oscillating contributions appear in the final state. The contour plots tend to emphasise these oscillations. For example, in the particular case of Fig. 3(a), only about 2 % probability of diabatic transfer creates these oscillations. The examples shown here for tailoring the vibrational state populations have two advantages over the APLIP process. Firstly, having $`\mathrm{\Delta }>0`$ (a region in which APLIP does not work) moves the central frequencies of the two laser pulses away from each other. Secondly, the required intensities are clearly smaller, as we mainly need to shift the potentials but not to deform them strongly. Of course, the discussion in Ref. regarding the role of electronic states outside our truncated three-state system, and of the rotational states, still applies. We have used Gaussian pulse shapes, but with other pulse shapes it might be possible to achieve even better control of the adiabaticity at different avoided crossings between the eigenstates. In this Letter we have shown how, even with strong and short laser pulses, one can still perform very selective and yet efficient tailoring of vibrational state populations in molecules. If we were to describe the process using the vibrational state basis of the three electronic potentials, the treatment would involve complicated transfer processes between a large number of states. Our presentation shows that the process can be understood very well in terms of the light-induced potentials and their time-dependent vibrational eigenstates.
Furthermore, this description also allows one to identify how the system parameters should be set in order to achieve specific outcomes. The main ingredient is establishing the right balance between the adiabatic following of the time-dependent vibrational eigenstates and nonadiabatic transfer between them at avoided crossings. We have discussed the situation in a molecular multistate framework, but in the light-induced potential description the relevant process really takes place within a general two-well potential structure. Thus our observations hold also for any two-well structure where the barrier height and the well depths can be controlled time-dependently in a similar manner. This work was supported by the Academy of Finland, Project no. 43336.
# Prospects of Variable Star Research by Future Space Missions ## 1. Introduction Many advantages are present when going to space. The most exploited one has been the access to wavelengths not transmitted through the atmosphere. For photometry near the visible part of the spectrum, there are many benefiting aspects: no atmosphere means no absorption bands of $`H_2O`$ and $`O_3`$, no scintillation, no emission or diffusion from the sky (dark sky), and no variable extinction. Furthermore, depending on the satellite orbit, it may be possible to carry out long continuous monitoring (although eclipses by the Moon or the Earth may occur) or to achieve global sky coverage. As a comparative example, the COROT team (see section 6) studied fictive Earth-based projects (cf. Baglin 1997a) and showed that even an immoderate project of eight 8 m-class telescopes distributed around the Earth would not surpass their small space mission. The drawbacks of space projects are their complexity, cost, aging, limited operation time, and many technical constraints. Space missions are fragile from both technical and political points of view. The global budgets of ESA and NASA in local currencies have been relatively flat for about 8 years, but they may be subject to high volatility according to changes in policy. ## 2. Main types of missions Three main types of missions will be presented: one performing astrometry, one oriented towards asteroseismology and one aiming at detecting telluric planets. Many of these projects lead to massive photometric surveys and thus have very valuable scientific returns, as discussed by Paczyński (1997). Then, an overview of the distinctive features of some specific satellites will be given. ### 2.1. Astrometric missions After the pioneering mission Hipparcos, projects performing astrometric measurements are flourishing. Thanks to the advance of technology (CCDs, interferometry), the jump in astrometric precision is of 2-3 orders of magnitude, thus making a new mission very attractive. The accuracies reached by these projects are now quoted in $`\mu `$as (micro-arcseconds), cf. Table 1. The interest for variable stars is threefold. First, several proposed missions scan the whole sky and perform photometry. They will thus deliver, as Hipparcos did, huge amounts of photometric measurements, achieving magnitude completeness. As the scanning law is optimised for the astrometric purpose, the time series have a semi-regular sampling which produces aliasing. Second, the parallax information is a stringent constraint for testing stellar models when used in asteroseismology (cf. Baglin 1997b, Favata 1999). In the case of Hipparcos for instance, Matthews et al. (1999) showed a comparison of asteroseismologic parallax versus trigonometric parallax. On the other hand, absolute luminosities or masses derived from parallaxes can be used as starting points of the seismologic models. Third, parallaxes are also of fundamental importance for calibrating distance indicators such as Cepheids, RR Lyrae stars and Miras. All the missions presented will pin down many of these objects. ### 2.2. Asteroseismologic missions Several unsuccessful attempts have been made to measure solar-like oscillations from Earth; for such measurements the precision to be reached in luminosity determination is as demanding as a few ppm, and furthermore a high frequency resolution is required, which imposes very long continuous monitoring.
As previously remarked, scintillation is a limiting factor which is magnitude independent as long as photon noise does not dominate. With space missions these requirements can be met. ### 2.3. Planet detection missions Among the several methods of detecting planets, the most famous is the measurement of the periodic variation in radial velocity that a planet induces on its host star. Another promising technique, shown to be more appropriate for the discovery of telluric planets, is the detection of the decrease in luminosity during an eclipsing transit of the planet. In this case the planet-star system must be in a favourable configuration for the observer, so large numbers of stars must be observed to achieve valuable statistics. ## 3. The GAIA Mission The goal of the GAIA (http://astro.estec.esa.nl/SA-general/Projects/GAIA/) satellite is to describe our Galaxy and its history by making an inventory of 1% of its stars (down to magnitude V=20) in terms of their kinematical and physical properties. However, the impact of GAIA will go well beyond knowledge of the Galaxy; it will pervade many domains of astronomy such as stellar astronomy, general relativity and cosmology. GAIA should not be seen merely as a super-Hipparcos; it carries several instruments, among them one performing astrometry that reaches a precision of 4 $`\mu `$as at magnitude $`V=12`$ and 10 $`\mu `$as at magnitude $`V=15`$. These measurements yield parallaxes and proper motions. On board, GAIA has a spectrometer which measures radial velocities (with a precision of 3-10 km/s for $`V<17`$) to complete the 3-dimensional velocity field. A provisional photometric system of 11 intermediate bands was specially designed to determine the temperature, gravity and metallicity of the observed stars. GAIA will produce photometric time series of about 100 measurements spread over 5 years. The precision is magnitude dependent (cf. Grenon et al. 1999); for a single transit it is of the order of 0.05 mag at $`V=17`$ in the band F51 (centered at 570 nm, 90 nm wide). The time sampling is semi-regular as a result of the scanning law of the satellite. Eyer & Cuypers (these proceedings) are studying the expected numbers of variable stars. Although the estimates are preliminary, GAIA might detect a few thousand Cepheids and about 90 000 RR Lyrae stars. GAIA will permit the study of stellar pulsations as a function of the physical parameters, especially metallicity. ## 4. FAME FAME (http://aa.usno.navy.mil/FAME/) stands for Full-sky Astrometric Mapping Explorer. FAME will observe 40 million stars and will perform a Galactic survey complete to magnitude $`V=15`$. The astrometric precision will be $`<`$ 50 $`\mu `$as for mag $`V<9`$ and $`<`$ 500 $`\mu `$as for mag $`V<15`$. It will collect photometric measurements in four bands of the Sloan system (Fukugita et al. 1996). The announced photometric precision is 1.6 mmag at mag 9 and 25 mmag at mag 15 in the astrometric filter. The scanning law will determine the time sampling. The rotation period of the satellite is 40 minutes, and the length of the mission is two and a half years. The satellite will furnish thousands of measurements per star, with a semi-regular sampling grouped in sequences of 9-31-9 etc. minutes. ## 5. DIVA The DIVA (`http://www.aip.de/groups/DIVA/`, Double Interferometer for Visual Astrometry) satellite is a project of the German space agency (DLR). DIVA will reach an astrometric precision of 150 $`\mu `$as at $`V=9`$ and 5 mas at $`V=15`$.
DIVA will have broad-band and narrow-band systems. The photometric precision is under study. The scanning law is very similar to that of Hipparcos, with a rotation period of 2 hours. ## 6. The COROT Mission The COROT (`http://www.astrsp-mrs.fr/www/pagecorot.html`, COnvection and ROtation) satellite has two major goals. First, it was designed for asteroseismology, focusing on 5 principal fields for very precise and continuous monitoring of 5 months each. The noise level is 0.6 ppm for G-type stars over 5 days. The long time base makes possible a high accuracy in frequency determination (0.1 $`\mu `$Hz), necessary for the detailed seismic investigation of stellar structure proposed for COROT. A planet detection program using the transit method was added; this second goal requires surveying large samples of stars (30 000 to 60 000) with a photometric precision of 0.1 % over 16 minutes of integration time in order to detect telluric planets. This survey is carried out in dense fields down to mag 15. Two-color information will be available for $`\sim `$ 1/3 of the targets to establish the achromaticity of the phenomenon. There is also an exploratory program in which COROT will measure 50 to 80 solar-like stars with $`V<9`$ at a precision better than 2.4 ppm. ## 7. MONS MONS (`http://www.obs.aau.dk/MONS/`, Measuring Oscillations in Nearby Stars) is a project of the Danish Small Satellite Program, which has already launched one successful satellite (Ørsted). MONS will observe about 20 solar-like stars over two years as its primary scientific objective; it will also measure $`\delta `$ Scuti stars, roAp stars and samples of stars of all types as a secondary objective. Its eccentric orbit with high apogee (Molniya orbit) permits access to a large part of the sky. It will measure changes in colour ratio. Oscillations with amplitudes of 1-10 ppm will be detected (cf. Kjeldsen et al. 1999). Furthermore, MONS will obtain science data from the Star Imagers designed for attitude control; they will permit measurement of about 900 stars, 700 with the Imager SI 1 and 200 with the Imager SI 2. The accuracy will be about 11 ppm (SI 1) and 26 ppm (SI 2) at magnitude $`V=6`$. ## 8. MOST MOST (`http://www.astro.ubc.ca/MOST/`, Microvariability and Oscillations of STars) is a funded project and the first satellite of the Canadian Space Agency. MOST will observe several (3-5) solar-like stars over 40 days. It will also observe roAp, Wolf-Rayet and $`\delta `$ Scuti stars. The photometric precision for a $`V=6`$ star is better than a few ppm for 10 days of integration. ## 9. Kepler Although the Kepler project (`http://www.kepler.arc.nasa.gov/`) was not selected, owing to questions about the ability to perform the photometry in space, its scientific case was rated very highly. The Kepler team is planning to resubmit the project. Its goal is to “explore the structure and diversity of planetary systems” by performing differential photometry. It will monitor 100 000 main-sequence stars in the magnitude range from 9th to 14th with an accuracy of 0.002 % (including instrument noise, shot noise and stellar variability) for a star of $`V=12`$. The mission would point continuously at a single field in Cygnus for 4 years. Because only data for preselected stars are saved, all objects to be monitored must be pre-specified. The team will probably entertain “guest observing” for additional interesting objects like variable stars.
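A useful rule of thumb behind the frequency-resolution figures quoted above is that the resolution of a continuous run is roughly the inverse of its duration. The sketch below applies this to the run lengths quoted for COROT, MOST and Kepler; the exact resolution of a given mission depends on its window function, so these are order-of-magnitude checks only.

```python
# Frequency resolution of an uninterrupted run is roughly 1/T; this checks
# the 0.1 muHz figure quoted for the 5-month COROT runs.
def freq_resolution_muhz(run_days):
    return 1.0e6 / (run_days * 86400.0)

for label, days in [("COROT long run, 5 months", 150),
                    ("MOST run, 40 days", 40),
                    ("Kepler field, 4 years", 4 * 365)]:
    print(f"{label}: ~{freq_resolution_muhz(days):.3f} muHz")
```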
For asteroseismological purposes, Kepler will also incorporate a subset of a few hundred brighter stars observed with a shorter integration time. ## 10. Conclusion Many of these projects might be selected for realisation in the near future. The projects performing astrometry will provide deep Galactic surveys and thus an inventory of Galactic variable stars. The asteroseismological projects have common goals, but they use different ingenious techniques and have diverse by-products. The future space projects will open new domains in variable star studies, so precise predictions are delicate. Since some of the presented missions are already funded, we can be confident that, after projects like Hipparcos and the microlensing surveys, the domain of variable stars will continue to evolve strongly. ## References Baglin, A. 1997a, CNES, COR-SP-0-83-PROJ Baglin, A. 1997b, in The First Results of Hipparcos and Tycho (JD 14 of the XXIII General Assembly of the IAU), ed. C. Turon, 555 Favata, F. 1998, in The First MONS Workshop: Science with a Small Space Telescope, eds. H. Kjeldsen, T.R. Bedding, 89 Fukugita, M., Ichikawa, T., Gunn, J.E., Doi, M., Shimasaku, K., Schneider, D.P. 1996, AJ, 111, 1748 Grenon, M., Jordi, C., Figueras, F., Torra, J. 1999, to be published Kjeldsen, H., Bedding, T.R., Christensen-Dalsgaard, J. 1999, Document No. MONS-99/01 Matthews, J.M., Kurtz, D.W., & Martinez, P. 1999, ApJ, 511, 422 Paczyński, B. 1997, in Variable Stars and the Astrophysical Returns of the Microlensing Surveys (12th IAP Astrophysics Colloquium), eds. R. Ferlet, J.-P. Maillard & B. Raban, Editions Frontières, 357
# Comment on “Relativistic Effects of Light in Moving Media with Extremely Low Group Velocity” Leonhardt and Piwnicki have presented an interesting analysis of how to use a flowing dielectric fluid to generate a so-called “optical black hole”. Unfortunately, there is a subtle misinterpretation in the analysis regarding these “optical black holes”. While it is clear that “optical black holes” can certainly exist as theoretical constructs, and while the experimental prospects for actually building them in the laboratory are excellent, the particular model geometries they consider are in fact not black holes at all. Leonhardt and Piwnicki consider a vortex geometry, where the dielectric fluid swirls in a purely azimuthal direction around a straight linear core; there is no radial motion into the core in their models, and this is enough to prevent the formation of trapped surfaces and event horizons. This observation depends only on the fact that the effective metric can globally be cast in the ADM-like form $$[g_{\mathrm{eff}}]_{\mu \nu }=\left(\begin{array}{cc}-[c_{\mathrm{eff}}^2-g_{ab}v_{\mathrm{eff}}^av_{\mathrm{eff}}^b]& -[v_{\mathrm{eff}}]_i\\ -[v_{\mathrm{eff}}]_j& [g_{\mathrm{eff}}]_{ij}\end{array}\right).$$ (1) In the acoustic geometry, the constant-time 3-space metric $`[g_{\mathrm{eff}}]_{ij}`$ is particularly simple (it is the identity matrix), the effective velocity $`[v_{\mathrm{eff}}]_i`$ equals the fluid velocity, and $`c_{\mathrm{eff}}`$ is just the local speed of sound. In the non-relativistic limit of the non-dispersive moving-medium optical geometry, the effective velocity $`[v_{\mathrm{eff}}]_i`$ equals the fluid velocity adjusted by the Fresnel drag correction, while $`c_{\mathrm{eff}}=c/n`$ is just the local speed of light, and the 3-metric acquires $`O(v^2)`$ corrections. In the extreme-dispersion model the optical geometry is more complicated, but the effective velocity is still proportional to the fluid velocity. These technical complications do not affect the key issue: an ergo-region certainly forms once the norm of this effective velocity exceeds the local effective propagation speed ($`v_{\mathrm{eff}}>c_{\mathrm{eff}}`$), but provided that $`c_{\mathrm{eff}}`$ remains positive, a trapped surface will form only if the inward normal component of the effective velocity exceeds the local effective propagation speed. Since there is no inward velocity in either of the vortex models, there is no possibility of forming an event horizon. This result is generic and does not depend on any optical-acoustic analogy. What Leonhardt and Piwnicki actually do is demonstrate that the vortex geometries they write down possess an unstable circular photon orbit, very similar in its qualitative properties to the unstable circular photon orbit that occurs at $`r=3M`$ in the Schwarzschild geometry (when the Schwarzschild geometry is written down in Schwarzschild coordinates). Photons/light-rays/null geodesics with high angular momentum, higher than some critical value which depends on the details of the fluid velocity profile, either (1) come in from spatial infinity and return to spatial infinity without ever crossing the unstable photon orbit, or (2) emerge from the vortex core, never get past the unstable photon orbit, and subsequently fall back to the vortex core. Photons/light-rays/null geodesics with lower than critical angular momentum come in from spatial infinity, cross the unstable photon orbit, and eventually crash into the vortex core.
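The distinction drawn above between an ergo-region and a trapped surface can be illustrated numerically. The sketch below uses simple 1/r velocity profiles, with and without a radial drain; these profiles are chosen for convenience and are not the specific models of Leonhardt and Piwnicki.

```python
import numpy as np

# Ergo-region: |v_eff| > c_eff somewhere.  Trapped surface: the *inward
# radial* component of v_eff exceeds c_eff somewhere.  Profiles below are
# illustrative 1/r falloffs, not the Leonhardt-Piwnicki models.
c_eff = 1.0                          # effective propagation speed (set to 1)
r = np.linspace(0.1, 5.0, 500)       # radius in units of the core scale

def classify(v_r, v_phi):
    speed = np.hypot(v_r, v_phi)
    has_ergo_region = bool((speed > c_eff).any())
    has_trapped_surface = bool((-v_r > c_eff).any())
    return has_ergo_region, has_trapped_surface

# Pure vortex, v_phi ~ 1/r, no radial inflow: ergo-region, but no horizon.
print(classify(np.zeros_like(r), 2.0 / r))    # (True, False)

# Adding a radial drain, v_r ~ -1/r, produces a trapped surface as well.
print(classify(-2.0 / r, 2.0 / r))            # (True, True)
```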
Photons whose angular momentum is just fractionally less than critical may appear to “hover” near the unstable photon orbit. When Leonhardt and Piwnicki use the phrase “Schwarzschild radius”, however, it is the radius of this unstable circular photon orbit they are referring to, and their usage of the phrases “Schwarzschild radius” and “event horizon” has nothing to do with the sense in which these terms are defined in general relativity. Despite this technical issue, which causes problems for the two particular toy models they discussed, it is clear that the basic idea is fine: it certainly is possible to form “optical black holes”, but only by adding an inward radial component to the vortex flow. Alternatively, one could think of a nozzle that accelerates a low-compressibility dielectric fluid to (effectively) superluminal velocities. Any region of superluminal effective velocity $`v_{\mathrm{eff}}`$ will be an ergo-region, and any surface for which the inward normal component of this effective velocity is superluminal will be a trapped surface. If successful, this technique will be able to probe aspects of semiclassical quantum gravity, such as the existence of Hawking radiation. Because the effective metric is not constrained by the Einstein equations, one probes only the kinematic aspects of how quantum fields react to being placed on a curved background geometry, not the dynamical questions of just how quantum matter feeds into the Einstein equations to generate real spacetime curvature. Though “effective metric” techniques are limited in this sense, they are still a tremendous advance over the current state of affairs. Matt Visser Washington University in Saint Louis Saint Louis, Missouri 63130-4899 e-mail: visser@kiwi.wustl.edu homepage: http://www.physics.wustl.edu/~visser PACS numbers: 42.50.Gy, 04.20.-q Received 2 Feb 2000; 31 May 2000
# Exclusion Limits on the WIMP-Nucleon Cross-Section from the Cryogenic Dark Matter Search ## Abstract The Cryogenic Dark Matter Search (CDMS) employs Ge and Si detectors to search for WIMPs via their elastic-scattering interactions with nuclei while discriminating against interactions of background particles. CDMS data give limits on the spin-independent WIMP-nucleon elastic-scattering cross-section that exclude unexplored parameter space above 10 GeV c<sup>-2</sup> WIMP mass and, at $`>84`$% CL, the entire 3$`\sigma `$ allowed region for the WIMP signal reported by the DAMA experiment. preprint: CWRU-P5-00/UCSB-HEP-00-01 Extensive evidence indicates that a large fraction of the matter in the universe is nonluminous, nonbaryonic and “cold”: nonrelativistic at the time matter began to dominate the energy density of the universe. Weakly Interacting Massive Particles (WIMPs) are an excellent candidate for nonbaryonic, cold dark matter. Minimal supersymmetry provides a natural WIMP candidate in the form of the lightest superpartner, with a typical mass $`M\sim 100`$ GeV c<sup>-2</sup>. WIMPs are expected to have collapsed into a roughly isothermal, spherical halo within which the visible portion of our galaxy resides. WIMPs scatter off nuclei via the weak interaction, potentially allowing their direct detection. The expected spectrum of recoil energies (the energy given to the recoiling nucleus during the interaction) is exponential with a characteristic energy of a few to tens of keV. The expected event rate is model-dependent but is generically 1 kg<sup>-1</sup> d<sup>-1</sup> or lower. This Letter reports new exclusion limits on the spin-independent WIMP-nucleon elastic-scattering cross-section from the Cryogenic Dark Matter Search (CDMS). The rate of rare WIMP-nucleon interactions is constrained by extended exposure of detectors that discriminate WIMP-induced nuclear recoils from electron recoils caused by interactions of background particles. The ionization yield $`Y`$ (the ratio of ionization production to recoil energy in a semiconductor) of a particle interaction differs greatly for nuclear and electron recoils. CDMS detectors measure phonon and electron-hole pair production to determine the recoil energy and ionization yield of each event. The data discussed here were obtained with two types of detectors, Berkeley Large Ionization- and Phonon-mediated (BLIP) and Z-sensitive Ionization- and Phonon-mediated (ZIP) detectors. For both types, the drift field for the ionization measurement is supplied by radially-segmented electrodes on the faces of the disk-shaped crystals. In BLIP detectors, phonon production is determined from the detector's calorimetric temperature change. In ZIP detectors, athermal phonons are collected to determine phonon production and xy-position. Detector performance is discussed in detail elsewhere. Photons cause most bulk electron recoils, while low-energy electrons incident on the detector surfaces cause low-$`Y`$ electron recoils in a thin surface layer (“surface events”). Neutron, photon, and electron sources are used to determine efficiencies for discrimination between nuclear recoils and bulk or surface electron recoils. Above 10 keV, CDMS detectors reject bulk electron recoils with $`>`$ 99% efficiency and surface events with $`>`$ 95% efficiency. CDMS detectors that sense athermal phonons provide further surface-event rejection based on the differing phonon pulse shapes of bulk and surface events.
This phonon-based surface-event rejection alone is $`>`$ 99.7% efficient above 20 keV. The 1-cm-thick, 7-cm-diameter detectors are stacked 3 mm apart with no intervening material. This close packing enables the annular outer ionization electrodes to shield the disk-shaped inner electrodes from low-energy electron sources on surrounding surfaces. The probability that a surface event will multiply scatter is also increased. The low rate of WIMP interactions necessitates operation at a site with a low flux of background particles. CDMS detectors are operated beneath an overburden of 16 meters water equivalent, which stops the hadronic component of cosmic-ray air showers and reduces the muonic component by a factor of 5. A custom, radiopure extension to a modified Oxford S-400 dilution refrigerator provides a low-background 20 mK volume. Several layers of shielding surround the cryostat. Outermost is a $`>`$ 99.9% efficient plastic-scintillator veto to detect muons and thus allow rejection of muon-coincident particles. Inside the veto, a 15-cm-thick lead shield reduces the background photon flux by a factor of 1000. A 1-cm-thick shield made of ancient lead provides additional photon shielding inside the cryostat. Samples of all construction materials were screened to ensure low radioactive contamination. The measured event rate below 100 keV due to photons is roughly 60 keV<sup>-1</sup> kg<sup>-1</sup> d<sup>-1</sup> overall and 2 keV<sup>-1</sup> kg<sup>-1</sup> d<sup>-1</sup> anticoincident with the veto. Neutrons with energies capable of producing keV nuclear recoils are produced by muons interacting inside and outside the veto (“internal” and “external” neutrons, respectively). The dominant, low-energy ($`<50`$ MeV) component of these neutrons is moderated by a 25-cm thickness of polyethylene between the outer lead shield and the cryostat. However, high-energy external neutrons may punch through the moderator. A simulation of these neutrons assumes the production spectrum given in the literature and propagates the neutrons through the shield to the detectors. The accuracy of the simulation's propagation of neutrons is confirmed by the excellent agreement of the simulated and observed recoil-energy spectra due to veto-coincident and calibration-source neutrons. A large fraction of the external neutrons are vetoed: $`\sim `$40% due to neutron-scintillator interactions and an unknown fraction due to associated hadronic showers. This unknown fraction, combined with a factor of $`\sim `$4 uncertainty in the production rate, makes it difficult to predict accurately the absolute flux of unvetoed external neutrons. However, normalization-independent predictions of the simulation, such as the relative rates of single and multiple scatters, the relative rates in Si and Ge detectors, and the shapes of the nuclear-recoil spectra, are insensitive to reasonable changes in the neutron spectrum. Two data sets are used in this analysis: one consisting of 33 live days taken with a 100 g Si ZIP detector between April and July 1998, and another taken later with Ge BLIP detectors. The Si run yields a 1.6 kg d exposure after cuts. The total rate of low-energy electron surface events is 60 kg<sup>-1</sup> d<sup>-1</sup> between 20 and 100 keV. Four nuclear recoils are observed in the Si data set. Based on a separate electron calibration, the upper limit on the expected number of unrejected surface events is 0.26 events (90% CL). These nuclear recoils also cannot be due to WIMPs.
Whether their interactions with target nuclei are dominated by spin-independent or spin-dependent couplings, WIMPs yielding the observed Si nuclear-recoil rate would cause an unacceptably high number of nuclear recoils in the Ge data set discussed below. Therefore, the Si data set, whose analysis is described elsewhere, measures the unvetoed neutron background. Between November 1998 and September 1999, 96 live days of data were obtained using 3 of the four 165 g Ge BLIP detectors. One detector is discarded because it displays a high rate of veto-anticoincident low-energy electron surface events, 230 kg<sup>-1</sup> d<sup>-1</sup> as compared to 50 kg<sup>-1</sup> d<sup>-1</sup> for the other detectors (10 to 100 keV). This detector suffered additional processing steps that may have contaminated its surface and damaged its electrodes. Data-quality, nuclear-recoil acceptance, and veto-anticoincidence cuts reduce the exposure (mass $`\times `$ time) by 45%. To take advantage of the close packing, the analysis is restricted to events fully contained in the inner electrodes, reducing the exposure further by a factor of 2.5 to yield a final Ge exposure of 10.6 kg d. Figure 1 shows a plot of ionization yield vs. recoil energy for the Ge data set. Bulk electron recoils lie at ionization yield $`Y\sim 1`$. Low-energy electron events form a distinct band at $`Y\sim 0.75`$, leaking into the nuclear-recoil acceptance region below 10 keV. Figure 2 displays the recoil-energy spectrum of unvetoed nuclear recoils in the Ge data set. Only single scatters (events triggering a single detector) are shown; the WIMP multiple-scatter rate is negligible. An analysis threshold of 10 keV, well above the trigger thresholds, is imposed. This choice reduces the data set's sensitivity but simplifies the analysis by rendering low-energy electron misidentification negligible. The nuclear-recoil efficiency is determined using calibration-source neutrons, and its stability is monitored using veto-coincident neutrons. Thirteen unvetoed nuclear recoils are observed in the 10.6 kg d exposure between 10 and 100 keV; this rate is similar to that expected for the WIMP signal claimed by the DAMA experiment. However, much evidence indicates that the CDMS nuclear recoils are caused by neutrons rather than WIMPs. Figure 3 displays a scatter plot of ionization yields for multiple scatters. The observation of 4 Ge multiple-scatter nuclear recoils is the primary evidence for the neutron interpretation. It is highly unlikely that these events are misidentified low-energy electron events: Figures 1 and 3 demonstrate excellent separation of low-energy electron events from nuclear recoils, and an analysis using events due to electrons emitted by the contaminated detector yields an upper limit of 0.05 misidentified multiple-scatter low-energy electron events (90% CL). All other pieces of evidence are also consistent with the neutron interpretation. First, the 4 nuclear recoils observed in the Si data set cannot be interpreted as WIMPs or surface events. Second, there is reasonable agreement between the predictions of the Monte Carlo simulation and the relative observed numbers of 4 Ge multiple scatters, 4 Si single scatters, and 13 Ge single scatters. Normalizing the simulation by the 17 total Ge nuclear-recoil events yields 2.7 expected Si single scatters and 1.3 expected Ge multiple scatters. Statistically, the expected neutron background should result in a less likely combination of Ge single scatters, Ge multiple scatters, and Si single scatters 23% of the time.
Finally, a Kolmogorov-Smirnov test indicates that the deviation between the observed and simulated nuclear-recoil spectral shapes would be larger in 28% of experiments. The 90% CL excluded region for the WIMP mass $`M`$ and the spin-independent WIMP-nucleon elastic-scattering cross-section $`\sigma `$ is derived using an extension of the approach of Feldman and Cousins. The above arguments require accounting for the component of the observed Ge single scatters (with energies $`E_i`$) that is due to the unvetoed neutron flux $`n`$. This flux is constrained by the number $`N_\mathrm{m}`$ of multiple scatters in Ge and the number $`N_{\mathrm{Si}}`$ of nuclear recoils in Si. To determine the 90% CL excluded region in the plane of $`M`$ and $`\sigma `$ alone, the parameter $`n`$ is projected out. For a grid of physically allowed values of $`M`$, $`\sigma `$, and $`n`$, the expected distribution of the likelihood ratio $`R=\mathcal{L}(E_i,N_\mathrm{m},N_{\mathrm{Si}}|\sigma ,M,\stackrel{~}{n})/\widehat{\mathcal{L}}`$ is calculated by Monte Carlo simulation in order to determine the critical value $`R_{90}`$ such that 90% of the simulated experiments have $`R>R_{90}`$. Here $`\stackrel{~}{n}`$ is the value of $`n`$ that maximizes the likelihood $`\mathcal{L}`$ for the given parameters $`M`$ and $`\sigma `$ and the observations, and $`\widehat{\mathcal{L}}`$ is the maximum of the likelihood over any physically allowed set of parameters. The WIMP-nucleon cross-section $`\sigma `$ is converted to a WIMP-nucleus cross-section assuming $`A^2`$ scaling with target nuclear mass; this scaling is valid for the currently favored models of supersymmetric WIMPs. The 90% CL region excluded by the observed data set, with ratio $`R_{\mathrm{data}}`$, consists of all parameter space for which $`R_{\mathrm{data}}\le R_{90}`$. Figure 4 displays the lower envelope of the points excluded for all values of $`n`$. Because all the nuclear recoils may be neutron scatters, $`\sigma =0`$ is not excluded. This limit excludes new parameter space for WIMPs with $`M>`$ 10 GeV c<sup>-2</sup>, some of which is allowed by supersymmetry. The data are compatible with the DAMA/NaI-0 exclusion limit based on pulse-shape analysis. However, these data exclude, at $`>84\%`$ CL, the entire region allowed at 3$`\sigma `$ by the DAMA/NaI-1 to 4 annual modulation signal. This region, given by the $`v_0=220`$ km s<sup>-1</sup> curve in Figure 4a of the DAMA annual-modulation paper, is used because it is determined solely from the annual modulation signal. The data presented here also exclude the analogous 2$`\sigma `$ allowed region for DAMA/NaI-1 to 2 at $`>96`$% CL. Non-$`A^2`$ scaling, although without theoretical support, might allow the two results to be compatible. We thank Paul Luke of LBNL for his advice regarding surface-event rejection. We thank the engineering and technical staffs at our respective institutions for invaluable support. This work is supported by the Center for Particle Astrophysics, an NSF Science and Technology Center operated by the University of California, Berkeley, under Cooperative Agreement No. AST-91-20005, by the National Science Foundation under Grant No. PHY-9722414, by the Department of Energy under contracts DE-AC03-76SF00098, DE-FG03-90ER40569, DE-FG03-91ER40618, and by Fermilab, operated by the Universities Research Association, Inc., under Contract No. DE-AC02-76CH03000 with the Department of Energy.
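For readers who want to experiment with the likelihood-ratio ordering described above, here is a deliberately simplified sketch. It replaces the full recoil-energy likelihood with a one-bin Poisson counting model and invented numbers; only the logic of profiling out the nuisance neutron flux and extracting $`R_{90}`$ from simulated experiments follows the description in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
flux_grid = np.linspace(0.01, 30.0, 300)   # nuisance grid: expected neutron counts
mu_grid = np.linspace(0.0, 30.0, 61)       # grid of expected WIMP counts

def logL(n_obs, mu):
    # Poisson log-likelihood, dropping the constant log(n_obs!) term
    return n_obs * np.log(mu) - mu

def R(n_obs, mu_wimp):
    profiled = logL(n_obs, mu_wimp + flux_grid).max()        # best flux for this mu
    global_best = logL(n_obs, mu_grid[:, None] + flux_grid[None, :]).max()
    return np.exp(profiled - global_best)

mu_wimp = 5.0                              # trial WIMP expectation to test
sims = rng.poisson(mu_wimp + 8.0, 2000)    # pseudo-experiments, true flux = 8
R90 = np.quantile([R(n, mu_wimp) for n in sims], 0.10)   # 90% of sims have R > R90
print("exclude this (M, sigma) grid point if R_data <=", R90)
```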
# Long-term photometry of the Wolf-Rayet stars WR 137, WR 140, WR 148, and WR 153 Based on observations collected at the National Astronomical Observatory Rozhen, Bulgaria ## 1 Introduction Photometric studies of Wolf-Rayet (WR) stars during the past decades (e.g. Moffat & Shara 1986; Lamontagne & Moffat 1987; van Genderen et al. 1987; Balona et al. 1989; Robert et al. 1989; Gosset et al. 1990; Antokhin et al. 1995; Marchenko et al. 1998a; Marchenko et al. 1998b) have revealed light variations of several per cent (up to 0.1 mag) on time-scales typically of days. WR stars are generally believed to be evolved Population I stars, descendants of Of-type stars (Maeder 1996). They exhibit strong, dense winds (mass loss rates of $`10^{-5}`$ to $`10^{-4}`$ $`M_{\odot }`$ yr<sup>-1</sup>) which, in most cases, hide the stellar surface. The wind flow is time-dependent. Moffat et al. (1988, 1994) and Robert (1994) discovered the existence of small, outward-moving wind condensations, which they called propagating blobs. Unlike most O-type stars, the continuum light of many WR stars originates from a layer in the dense wind ($`\tau =1`$), a “pseudo-photosphere” (van Genderen et al. 1987), which could be inhomogeneous because of dynamical wind instabilities. The brightness variations of some WR stars have proved to be periodic and are possibly due to binary or rotation effects. Core (photospheric) eclipses as well as atmospheric eclipses have been observed. The latter are characterized by only one V-shaped minimum in the light curve, caused by the atmospheric eclipse of an O-type star by the WR star's extended wind (Lamontagne et al. 1996). Random light variations are common in WR stars and are often superimposed on the regular (binary) variations, increasing the “noise” and sometimes even totally disturbing the underlying regular light variations. Marchenko et al. (1998b) suggested that random light variations (light scatter) may be caused by short-lived, core-induced, multimode fluctuations propagating in the wind. Other causes of variability, such as radial pulsations (Maeder 1985), non-radial pulsations (Vreux 1985, Antokhin et al. 1995, Rauw et al. 1996) and axial rotation (Matthews & Moffat 1994), have been proposed for WR stars. Occasional “eclipses” caused by dust formation in late-type WC stars have been studied by Veen et al. (1998). WR 137 is a well-known dust maker (Williams 1997; Marchenko et al. 1999). However, little is known about the long-term light variations of that star, and its binary status is still uncertain. WR 140 is another repeating dust maker (Williams 1997). Its orbit is well determined. Because of its high eccentricity ($`e=0.84`$), the strongest wind interaction occurs at periastron passage. During the last periastron passage in 1993, WR 140 received much attention and was studied at wavelengths from the X-ray to the radio. However, only a few photometric studies in the optical have been carried out so far, and the long-term behaviour of the star is not known. Both WR 137 and WR 140 are included in the infrared study by Williams et al. (1987a) and reported to have dust shells. WR 148 is a good candidate for a WR + c (WR plus compact companion) binary. There is some controversy about the light variations concerning the period and the shape of the light curve. Marchenko et al. (1998a) were not able to detect the 4.31 d binary period in the HIPPARCOS photometry, otherwise well known from ground-based observations (Marchenko et al. 1996).
The very “noisy” light curve and unusually broad minimum need further investigation. WR 153 is a quadruple system (Massey 1981), containing a WN + O and an O + O system, or two WN + O pairs (Panov & Seggewiss 1990). During the past years several photometric studies have been carried out, yet the light variability of the two pairs could not always be unambiguously separated (Lamontagne et al. 1996). Our aim is to try to resolve some of these controversial questions. ## 2 Observations In 1991 we started a long-term photometric study of the four WR stars at the National Astronomical Observatory Rozhen, Bulgaria, using the 60 cm telescope and the single-channel, photon-counting UBV photometer. The photometric equipment has been used for many years and has proved to be very stable (cf. Panov et al. 1982). Table 1 lists the comparison and check stars used. Generally, a 20″ diaphragm and an integration time of 10 s were used. Each measurement consists of four consecutive integration cycles. An observing cycle was arranged in the following way: Sky - Comp - WR - Comp - Sky, and was repeated 3 to 5 times, depending on the quality of the night. A separate measurement of the comparison star against the check star, obtained in the same way, was made before or after the WR star observation. A nightly mean was then calculated from the 3 to 5 individual measurements. The standard error of the nightly mean is 0.003 - 0.005 mag in most cases. Reduction of the data took into account dead-time effects, atmospheric extinction, and the transformation into the standard $`UBV`$ system. In the following tables we present the magnitudes in the standard system; for WR 137, WR 140, and WR 148 the data are magnitude differences in the sense: comparison star minus WR star. The contributions $`\delta (B-V)`$ and $`\delta (U-B)`$ of the emission lines to the respective colours are taken from Pyper (1966) and included in Table 1 (last two columns). No corrections for emission lines have been applied to our data. However, it does seem possible to distinguish continuum-light from emission-line variations by comparing the light in the $`UBV`$ passbands. Many WR stars show subtle short- and long-term variations, as can be seen in the case of WR 148 (Sect. 3.3). It is therefore preferable to use the same photometric equipment for long-term studies, as this reduces possible systematic effects caused by slightly different passbands or response curves. ## 3 Discussion ### 3.1 WR 137 WR 137 = HD 192641 (WC7 + ?) has been studied in the infrared (IR), and peaks in brightness were reported in 1984.5 and in 1997, probably caused by heated dust (Williams 1997). The dust emission has recently been directly imaged in the IR at two epochs with the Hubble Space Telescope by Marchenko et al. (1999). The IR maxima repeat with a $`\sim `$13 yr period, suggesting a possible binary origin, as found for other periodic WR dust makers. WR 137 was discovered to be a spectroscopic binary by Annuk (1995). However, Underhill (1992) did not find any evidence for binary motion in her data. Therefore, the binary status of WR 137 remains uncertain. Marchenko & Pikhun (1992) published a long-term photometric study covering 1958 - 1989, but it is based on photographic plates and the accuracy is insufficient to reveal light variations below a few per cent. Our photometry is presented in Table 2 and the light curves are shown in Fig. 1.
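The differential reduction described in Sect. 2 amounts to averaging repeated (comparison minus WR) magnitude differences within a night. A minimal sketch, with made-up count rates and with the sky, dead-time and extinction corrections omitted, is:

```python
import math
import statistics

def mag_diff(counts_comp, counts_wr):
    """Magnitude difference in the sense comparison star minus WR star."""
    return -2.5 * math.log10(counts_comp / counts_wr)

# Four cycles of a fictitious night (invented count rates for illustration);
# sky subtraction, dead-time and extinction corrections are omitted here.
cycles = [(81200, 47010), (81050, 46900), (81300, 47150), (80950, 46880)]
diffs = [mag_diff(c, w) for c, w in cycles]
mean = statistics.mean(diffs)
sem = statistics.stdev(diffs) / math.sqrt(len(diffs))  # standard error of the mean
print(f"nightly mean = {mean:.3f} mag, s.e. = {sem:.3f} mag")
```

With count rates of this order, the standard error of the nightly mean comes out at the few-mmag level, consistent with the 0.003 - 0.005 mag quoted above.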
We searched for periodicities using the procedure of Lafler & Kinman (1965) in the period range from 1 d to 100 d, but no period could be found. The only photometric variations we see in our data are random light variations with amplitudes of 0.02 mag (peak to peak) in $`V`$ within each observing season and up to 0.03 mag (peak to peak) when different years are compared. (The peak-to-peak amplitude of the full data set is, however, 0.05 mag in $`B`$ and 0.07 mag in $`U`$.) During 1991-1998, 22 measurements of the check star HD 192987 were obtained. The mean values (N = 22) of the magnitude differences (HD 192538 minus HD 192987) and their standard deviations are $`\mathrm{\Delta }V=0.002\pm 0.008`$ mag and $`\mathrm{\Delta }B=0.088\pm 0.009`$ mag. The scatter in Fig. 1 is greater than the observational error ($`5\sigma `$ in $`B`$!) and therefore probably contains real erratic variations with small amplitudes. In 1997, when the last IR peak was observed (Williams 1997), no photometric effect can be seen apart from small-amplitude random variations. Their origin should lie in the continuum, as the plots in Fig. 2 suggest: there are correlations ($`r=0.58`$ between $`B`$ and $`V`$, and $`r=0.64`$ between $`U`$ and $`B`$) among the light curves in the three passbands, which would be difficult to explain by variability of the emission lines. The origin of the small-amplitude random continuum variations of WR 137 is possibly related to dynamical wind instabilities, resulting in temperature effects at the “pseudo-photospheric” level. ### 3.2 WR 140 WR 140 = HD 193793 (WC7 + O4-5) is another periodic dust maker. Williams et al. (1978, 1987a, 1987b, 1990) and Williams (1997) reported variations in the IR, revealing brightenings in 1977, 1985, and 1993, which they attributed to the formation of dust grains in the WR 140 wind with a period of 7.94 yr. The re-occurrence of the heated dust has been interpreted as due to wind-wind interaction in a binary system. Earlier spectroscopic studies failed to reveal the binary motion; however, a re-analysis of previously published radial velocities using the 7.94 yr IR period led to a successful determination of the orbit (Williams et al. 1987c). It was found that the grain formation coincides with the periastron passage (PP) in the system (actually occurring shortly before PP). This discovery was later confirmed by Moffat et al. (1987) and now provides the basic model for studies of WR 140. Williams et al. (1990) and van der Hucht et al. (1991) reported variability of WR 140 at X-ray, UV, IR and radio wavelengths. Our photometry of WR 140 is presented in Table 3 and the light curves are shown in Fig. 3. From Fig. 3, there is clear evidence for a dip in the light in 1993, between orbital phases $`\sim `$0.9 and 1.1. The dip is seen in all passbands and should therefore be due to continuum light attenuation. The amplitude of the “eclipse” in the $`V`$ passband is 0.03 mag. Two remarkable features should be mentioned. The first is the very broad shape of the light minimum, assuming a smooth trend between the yearly data. After 1993, the light gradually increased, reaching the “pre-eclipse” level in 1997 or even 1998. If the “eclipse” is caused by an obscuration of the star(s) by the wind, the light curves strongly suggest that a dust envelope was built up around the WR star by the wind-wind interaction at the PP and was gradually dispersed in the following years.
The second is that the amplitude of the eclipse evidently increases towards shorter wavelengths (Fig. 3). In the lower panel of Fig. 3, the variation of the colour $`\mathrm{\Delta }(U-V)`$ is shown, which is in the sense that the colour of WR 140 gets redder when its light is attenuated. This conclusion follows readily from the magnitudes of the comparison star HD 193888, which are: $`V=8.54`$, $`B-V=0.07`$, $`U-B=0.25`$, and $`U-V=0.32`$. The amplitude of the colour variation of WR 140 in $`U-V`$ is 0.04 mag (again Fig. 3). As is well known, the main source of opacity, (non-relativistic) electron scattering, has no effect on colours. In this case, therefore, electron scattering alone is not sufficient, and an additional opacity source must be introduced, possibly Rayleigh (or Mie) scattering by small carbon dust particles. Occasional “eclipses” have been observed in the carbon-rich late-type WC stars WR 103, WR 113, and WR 121 (for a history of “eclipses” see Veen et al. 1998). In those cases the “eclipses” were caused by occasional dust formation in the line of sight. Although dust formation in the winds of late-type WC stars is now well established, the problem of grain condensation in the very hostile environments where the grains are believed to form remains unsolved. Clearly, a trigger is needed to start the grain formation. In the case of WR 140, this could be the shock compression in the colliding winds at PP. We assume that the fading of WR 140 shortly after PP is due to dust condensation in the wind of the WC star. After the condensation ceases, the star brightens again because the dust is blown away and gradually dispersed. The “eclipses” studied by Veen et al. (1998) have typical amplitudes of several tenths of a magnitude and last from several days up to a month. In contrast, the amplitude of the light dip in WR 140 is much smaller, and the recovery of brightness lasts several years. This implies a continuing supply of dust (expanding from the PP production + new?), even 2 – 3 years after PP. If there is new dust, this would be really surprising, since the trigger seems no longer to be effective. Following the procedure of Veen et al. (1998, using their equations (5), (6), and (7)) and taking the terminal velocity $`v_{\mathrm{\infty }}=2900`$ km s<sup>-1</sup> from Eenens & Williams (1994), we obtain for the distance $`R_{\mathrm{cc}}`$ of the dust-formation region from the WC star in WR 140: $`R_{\mathrm{cc}}\approx 300\,000R_{\odot }`$. This is only a rough estimate, but it is much larger than the corresponding distances for all the “eclipses” studied by Veen and co-workers. It is also much larger than the radius of the shell of WR 140 obtained by Williams et al. (1987a), $`R_{\mathrm{shell}}=1490R_{\odot }`$. Taking a carbon particle density of $`1.85`$ g cm<sup>-3</sup>, we obtain for the dust-mass production rate (per unit area) the value $`\dot{M}_d=2\times 10^{-13}`$ kg m<sup>-2</sup> s<sup>-1</sup>. These results should be taken with caution because of the small amplitude of the “eclipse” in WR 140 and possible deviations from the model used (e.g. continued supply of dust after PP). WR 140 was observed photometrically during the PP in 1977 by Fernie (1978), but no changes of brightness were found, most likely because of the low precision of those data. Like WR 137, WR 140 also shows small-amplitude, day-to-day random light variations (amplitudes up to 0.02 mag) in addition to the eclipse variation.
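A simple order-of-magnitude cross-check on this distance (not the Veen et al. formalism itself) is the travel time of wind-borne dust to $`R_{\mathrm{cc}}`$:

```python
# Order-of-magnitude cross-check: dust moving with the WC wind at v_inf
# covers the quoted R_cc ~ 300 000 R_sun in roughly two years, comparable
# to the slow post-periastron recovery of the light curve.
V_INF = 2900.0            # km/s, terminal velocity (Eenens & Williams 1994)
R_SUN = 6.957e5           # km
YEAR = 3.156e7            # s

r_cc = 3.0e5 * R_SUN      # km
print(f"travel time to R_cc: {r_cc / V_INF / YEAR:.1f} yr")   # ~2.3 yr
```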
Figure 4 shows the correlations of the random light variations in $`UBV`$, indicating that they are likely due to continuum rather than emission-line variations (similar to WR 137, Fig. 2). Dynamical wind instabilities could be the origin, as in WR 137. Moffat & Shara (1986) suggested a 6.25 d period for the light variations they observed in WR 140; this period, however, does not fit our data. Our observations during 1991 - 1998 cover 90% of the orbit. It remains to be seen whether the forthcoming PP in 2001 will repeat the light curve observed so far. ### 3.3 WR 148 WR 148 (= HD 197406, WN8 + c?) is a single-line spectroscopic binary, possibly hosting a compact companion. The star has been studied by Bracher (1979), who determined the orbital period as $`P=4.3174`$ d and also found light variations with the same period and an amplitude of 0.04 mag in $`V`$. Further spectroscopic studies by Moffat & Seggewiss (1979, 1980) revealed an unusually low mass function for the system, later confirmed by Drissen et al. (1986): f(m) = 0.28 $`M_{\odot }`$. WR 148 also has an exceptionally large distance from the galactic plane, $`z=500-800`$ pc (Moffat & Isserstedt 1980; Dubner et al. 1990). Smith et al. (1996) found that WR 148 is a WN8 star. The low mass function and high $`z`$ value led Moffat & Seggewiss (1980) to advance the idea that WR 148 harbours a compact companion, the product of a supernova explosion some 5 Myr ago. In their model, the companion orbits within the WR envelope. As the companion orbits the WR star, the projected envelope density varies. This is the origin of the light variations of WR 148, because electron scattering occurs in this envelope. Photometric studies by Antokhin (1984), Moffat & Shara (1986), and Marchenko et al. (1996) confirmed the light variations found by Bracher (1979), with an amplitude of 0.03 mag in $`V`$, and also pointed to the very “noisy” appearance of the light curve. (With the ephemeris of Drissen et al. (1986), the light minimum occurs at phase zero, with the WR star in front.) Marchenko et al. (1996) noted the unusually wide-shaped light minimum, quite different from other known WR + O systems with atmospheric eclipses and a V-shaped light minimum (Lamontagne et al. 1996). For WR 148, Marchenko et al. suggested that the secondary light arises from an extended hot cavity in the WR envelope, near the companion, which is ionized by X-rays. According to Marchenko et al., the rather weak X-ray source observed in WR 148 (Pollock et al. 1995) may be explained by the hot X-ray cavity being locally embedded in the WR envelope. Presently, the evolutionary status of WR 148 remains unclear, and the companion could be either a B2-B4 III-V star or a relativistic object (as deduced from the mass function, Marchenko et al. 1996). Our photometry is presented in Table 4 and the light curves are shown in Fig. 5, plotted with the ephemeris of Drissen et al. (1986). From Fig. 5 it is apparent that our 1993 light curves are similar to those published by Moffat & Shara (1986). The minimum occurs at phase zero. The 1994 light curves, however, show a remarkable change in shape and mean light level. Random light variations, already noted in other works, could well contribute to the disturbance of the light-curve shape, but it is unlikely that they would change the mean light. Furthermore, the long-term changes in mean light appear to be correlated in $`U`$, $`B`$, and $`V`$ (Fig. 5); therefore, they too should be due to changes in the continuum light.
There is thus strong evidence for a long-term variation of the mean light. Although the time base is too short to be certain, there are indications that the long-term variation is periodic, possibly with a cycle of about 4 years. Marchenko et al. (1998b) pointed to a possible “overall brightening” of WR 148 in 1994 and 1995. As shown in Fig. 5, in 1993 the mean light was in fact some 0.05 mag higher than in 1994. This long-term variation completely masks the short-term binary variations if the whole data set is depicted in one plot; we therefore plot the data separately for each year in Fig. 5. In terms of the model of Marchenko et al. (1996), the long-term light variations of WR 148 could be due to variations in the size of the hot X-ray cavity. Further conclusions at this time seem premature. A comment should be made on the observation at JD 2449585.4, phase = 0.57 (the companion approximately in front), which strongly deviates from the regular light curve of 1994. As we can exclude observational errors as the reason for this measurement, it must have some astrophysical origin. For instance, an accretion event onto a compact companion could be invoked to explain this flare-like burst. Flickering and flaring of WR 148 on different time-scales have been reported by Antokhin & Cherepashchuk (1989), Zhilyaev et al. (1995) and Khalack & Zhilyaev (1995). Matthews et al. (1992) looked for flares in the WR star EZ Canis Majoris (WR 6 = HD 50896, WN5) and reported one flare event. Flare-type activity of EZ CMa was also observed by Duijsens et al. (1996). This star is in many respects similar to WR 148, e.g. showing light variations with a 3.77 d period, long-term changes in the light curve, and a possible WR + c binary status (Firmani et al. 1980; Balona et al. 1989; Duijsens et al. 1996). ### 3.4 WR 153 WR 153 (= HD 211853 = GP Cep) is a quadruple system (Massey 1981) with orbital periods of 6.6884 d (pair A, WR + O) and 3.4696 d (pair B, WR + O or O + O). Earlier spectroscopic studies by Hiltner (1945) and Bracher (1968) revealed radial velocity variations due to binary motion with a period of 6.68 d. Panov & Seggewiss (1990) reanalysed Hiltner's velocity data and found evidence for two WR stars, one in each pair. WR 153 has been observed photometrically by Hjellming & Hiltner (1963), Stepien (1970), Moffat & Shara (1986), Panov & Seggewiss (1990), and Annuk (1994), all detecting eclipses with both periods, 6.6884 d and 3.47 d. Finally, Annuk (1994) refined the second period to 3.4696 d, in agreement with the velocity data of Massey (1981). However, in the recent analysis of WR star light curves by Lamontagne et al. (1996), the 3.47 d variation of pair B could not be unambiguously extracted from their data. Our photometry of WR 153 is presented in Table 5, and the light curves are shown in Fig. 6a and Fig. 6b, folded with the 6.6884 d and 3.4696 d periods, respectively. From Fig. 6, our data are consistent with the ephemerides of Massey (1981) and Annuk (1994). Since the true shape of both light curves is unknown, no allowance is made for the 3.47 d period in Fig. 6a, where it is superimposed on the 6.69 d light variations. In Fig. 6b, the data points around the 6.69 d period minimum (at phases from 0.96 to 0.13 in Fig. 6a) have been removed. The light curve with the 6.69 d period (pair A) is probably due to an atmospheric eclipse (only one, V-shaped light minimum!). In pair B, two light minima are seen, due to a core eclipse in that pair.
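For reference, a minimal string-length period search in the spirit of Lafler & Kinman (1965), applied here to fake data with an unequal double-wave curve like that of pair B, illustrates how such periods, and their half-period aliases, are recovered:

```python
import numpy as np

# String-length period search: fold the light curve on trial periods and
# minimize the summed magnitude jumps between phase-adjacent points.
def string_length(t, mag, period):
    order = np.argsort((t / period) % 1.0)
    m = mag[order]
    return np.abs(np.diff(m)).sum() + abs(m[0] - m[-1])

def best_period(t, mag, trials):
    return trials[np.argmin([string_length(t, mag, p) for p in trials])]

# Fake sparse photometry of a 3.4696 d double-wave curve with unequal minima:
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 400.0, 150))
P = 3.4696
mag = (0.015 * np.cos(2 * np.pi * t / P)
       + 0.020 * np.cos(4 * np.pi * t / P)
       + rng.normal(0.0, 0.004, t.size))
trials = np.linspace(1.0, 10.0, 20000)
print(best_period(t, mag, trials))   # close to 3.47 d; an equal-minima curve
                                     # would instead favour the 1.735 d alias
```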
Independent evidence can be obtained from the HIPPARCOS photometry. We performed a period search using 122 data points from HIPPARCOS. The analysis was carried out with the PERIBM procedure, developed at the Astronomical Institute of the University of Vienna (latest version from ftp://dsn.astro.univie.ac.at./pub/PERIOD98/current/). It clearly shows peaks at $`f_1=0.5763641`$ d<sup>-1</sup>, corresponding to a 1.735 d period, and at $`f_2=0.1495867`$ d<sup>-1</sup>, corresponding to the 6.69 d period. The 1.735 d period is exactly half of the 3.47 d period; it shows up in the amplitude spectrum because of the double-wave light curve (two eclipses in pair B), consistent with our ground-based photometry. Moffat & Shara (1986) also deduced that pair A has a single minimum at phase 0.00 ($`P=6.69`$ d) and pair B a double minimum at phases 0.00 and 0.50 ($`P=3.47`$ d). ## 4 Conclusions Our photometry of WR 137 reveals only small-amplitude ($`\sim `$ 0.03 mag in the $`V`$ passband) random light variations. No periodicity could be found. These variations should be attributed to the continuum, and they are probably due to dynamical wind instabilities. WR 140 exhibited remarkable light variations: a shallow light dip is seen in all passbands shortly after the periastron passage in 1993. The light attenuation lasted until 1997 or even 1998, probably because a dust envelope was built up around the WR star by the wind-wind interaction at periastron passage and was only gradually dispersed. From the wavelength dependence of the light attenuation, we find strong evidence for Rayleigh (Mie-like) scattering contributing to the opacity, in addition to electron scattering. For WR 148, our photometric study confirms the 4.317364 d light variation, but reveals occasional scatter disturbing the shape of the light curve. On one occasion, we see a flare-like event at a phase when the companion is in front. Our photometry reveals long-term variations of the mean light and, possibly, of the amplitude of the regular variation. There is some evidence that the long-term light variation is periodic, and a 4-year cycle cannot be ruled out. Our photometry of WR 153 is consistent with the quadruple model for this star, and both the 6.6884 d and the 3.4696 d periods are seen in the light. In pair A (6.6884 d period) we find evidence for an atmospheric eclipse, in agreement with other works, while in pair B (3.4696 d period) the eclipse is probably photospheric (a core eclipse). ###### Acknowledgements. This research project was supported by the Deutsche Forschungsgemeinschaft DFG, grant 436 BUL 113/88/0. In addition, M. Altmann is grateful to the DFG for grant Bo 779/21. K.P. Panov kindly acknowledges the support of the Bulgarian National Science Foundation, grant F 826. Thanks are due to J.S.W. Stegert for his participation in the observations. The authors thank the referee, Dr. A.F.J. Moffat, for his fruitful suggestions. In this research we made use, with pleasure, of the SIMBAD database in Strasbourg.
# The Morphology, Color, and Gas Content of Low Surface Brightness Galaxies ## 1. Introduction The importance of low surface brightness (LSB) galaxies in the local universe has recently been emphasized by a study of O'Neil & Bothun (2000), which extended the known surface brightness distribution of galaxies in the local universe. Their result is a flat surface brightness distribution function from the Freeman value of 21.65 $`\pm `$ 0.30 to the survey limit of 25.0 B mag arcsec<sup>-2</sup>, more than 10$`\sigma `$ away (Figure 1a). This indicates that the majority of galaxies, and potentially the majority of baryons in the local universe, are contained in gravitational potentials only dimly lit by the embedded galaxy. Realizing that most galaxies are optically diffuse, it becomes extremely important to understand these systems if we wish to understand galaxy formation and evolution as a whole. With this in mind, I undertake here a brief review both of our current understanding of LSB galaxy properties and of some popular ideas about the formation of these enigmatic systems. ## 2. What We (think) We Know about LSB Galaxies LSB galaxy colors: Contrary to what was first believed, the colors of LSB galaxies range across the entire high surface brightness (HSB) galaxy spectrum, including what may be the bluest galaxy known (UGC 12695, with $`U-I=0.2`$) as well as some fairly red systems ($`B-V>1.0`$) (Figure 1b) (O'Neil et al. 1997a, 1997b, 1998). Although it currently appears that LSB galaxy colors do not quite extend to the extremely red colors found in some HSB galaxies, this is likely due more to small-number statistics than to an actual lack of extremely red LSB systems. Gas-to-luminosity ratio of LSB galaxies: The gas-to-luminosity ratio (M<sub>HI</sub>/L<sub>B</sub>) of LSB galaxies spans an extremely large range, from fairly low (M<sub>HI</sub>/L<sub>B</sub> = 0.1 $`M_{\odot }/L_{\odot }`$) through what may be the highest-M<sub>HI</sub>/L<sub>B</sub> galaxies known (\[OBC97\] N9-2, M<sub>HI</sub>/L<sub>B</sub> = 46 $`M_{\odot }/L_{\odot }`$). Additionally, if any trend can be seen between the galaxies' color and gas content, it is that M<sub>HI</sub>/L<sub>B</sub> may increase with redder color (Figure 2a) (O'Neil, Bothun, & Schombert 2000, OBS from now on). Tully-Fisher relation: Although previous studies have shown LSB galaxies to follow a slightly broadened version of the standard Tully-Fisher (T-F) relation defined by HSB galaxies (Zwaan et al. 1995), a recent study of over 40 LSB galaxies found no significant correlation between LSB galaxy velocity widths and absolute magnitudes, with only 40% of the sample falling within 1$`\sigma `$ of the previously defined LSB T-F relation (Figure 2b). At the least, then, there is a significant population of LSB galaxies which do not adhere to the T-F relation (OBS). Rotation curves: The rotation curves of LSB galaxies have been shown to rise more slowly than those of similar HSB galaxies. Using 'standard' values for the stellar mass-to-luminosity ratio, as taken from HSB galaxies ($`\mathrm{\Upsilon }_{\ast }`$ = 1 – 3), this leads to the conclusion that many LSB galaxies have a baryonic mass fraction up to 3$`\times `$ less than HSB galaxies of the same velocity width (i.e. Swaters et al. 2000; Van Zee et al. 1998; de Blok & McGaugh 1997). ## 3. What Are LSB Galaxies? The faded version of HSB galaxies? No.
LSB galaxies often have both very blue colors and very low metallicities ($`B-V<0.2`$, Z $`<`$ 0.01 $`Z_{\odot }`$), precluding the possibility that LSB galaxies are primarily composed of an old stellar population. As a caveat, though, it should be noted that a number of very red LSB galaxies have now been found, and these could be faded HSB galaxies. If that is correct, however, all other LSB galaxies (i.e. those which do not have extremely red colors) would remain to be explained, as well as why there would be two separate populations of LSB galaxies (O'Neil et al. 1997a, 1997b). “Stretched out” HSB galaxies? No. Current theories describing LSB galaxies as extending further into their dark matter haloes than similar HSB galaxies predict that LSB galaxies will follow either a universal T-F relation or one which is unique at each $`\mu `$(0). These theories therefore cannot account for the galaxies of OBS which fall well off the T-F relation with no correlation between $`\mu `$(0) and the residual error. Additional problems with the models can (depending on which models are considered) include difficulty matching the observed shape of LSB galaxy rotation curves and an inability to allow for high gas fraction, red LSB galaxies (i.e. Dalcanton et al. 1997; McGaugh & de Blok 1998; Avila-Reese & Firmani 2000; McGaugh 1999). A completely new type of galaxy? No. Although this idea could justify ignoring LSB galaxies when determining theories of galaxy formation and evolution, no evidence has been seen for LSB galaxies being anything but a continuation of the HSB galaxy spectrum. There is a smooth transition between LSB and HSB galaxies in surface brightness and complete overlap in LSB and HSB galaxy colors, scale lengths, masses, luminosities, etc. (i.e. Bell & de Blok 2000; O'Neil et al. 1997a, 1997b). Galaxies with a different stellar population? Maybe. Although this is not a popular idea, as an IMF which depends on galaxy properties (i.e. surface density) adds complication to models of galaxy evolution, this theory has not yet been disproved. The gas density of LSB galaxies is typically at or below the nominal threshold for star formation, as set by the Toomre criterion (i.e. Van Zee et al. 1998; de Blok, McGaugh, & Van der Hulst 1996). With this in mind, it would be surprising if LSB galaxies' IMF were not at least somewhat affected by their low density. Additionally, recent HST WFPC-2 studies of three nearby LSB dE galaxies failed to find evidence for a significant number of red giants ($`<`$ 13 per 10 pc<sup>2</sup>, as opposed to the hundreds per 10 pc<sup>2</sup> typically found in HSB galaxies; O'Neil et al. 1999). Using these two ideas, the low gas density and the lack of evidence for significant numbers of giant-branch stars, we can construct a toy model wherein no stars greater than 2 $`M_{\odot }`$ are allowed to form. When this is done, not only are LSB galaxy colors, gas fractions, etc. readily matched, but it is also remarkably easy to form both red and blue galaxies which do not follow the canonical T-F relation (see OBS). Additionally, the addition of a large number of small stars to any galaxy dramatically increases the galaxy's stellar mass-to-luminosity ratio ($`\mathrm{\Upsilon }_{\ast }`$) and can dramatically decrease the total amount of dark matter needed in LSB systems (Swaters et al. 2000; OBS). Although these models are admittedly extremely oversimplified, they pave the way for further studies of this idea, and currently appear to be the best theory going.
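The direction of the $`\mathrm{\Upsilon }_{\ast }`$ argument can be checked with a toy integration. The sketch below combines a Salpeter-like IMF with the textbook $`LM^{3.5}`$ main-sequence scaling for an unevolved population; both scalings are standard approximations chosen for illustration and are not the models of OBS.

```python
import numpy as np

# Toy stellar mass-to-light ratio for a zero-age population: Salpeter-like
# IMF (dN/dM ~ M^-2.35) with L ~ M^3.5.  Truncating the IMF at 2 Msun
# removes almost all the light but little of the mass, so Upsilon soars.
def upsilon(m_low=0.1, m_up=100.0, slope=-2.35):
    m = np.logspace(np.log10(m_low), np.log10(m_up), 100000)
    imf = m ** slope
    mass = np.trapz(imf * m, m)          # total stellar mass (solar units)
    light = np.trapz(imf * m ** 3.5, m)  # total luminosity (solar units)
    return mass / light

print(upsilon())             # full IMF: Upsilon << 1
print(upsilon(m_up=2.0))     # truncated at 2 Msun: Upsilon of order unity
```

For an unevolved population this exaggerates the effect, but it illustrates why a bottom-heavy IMF can substitute stellar mass for dark matter in LSB rotation-curve fits.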
## References Avila-Reese & Firmani 2000, RevMexAA, 36, 1 Becker et al. 1988, A&A, 203, 21 Bell & de Blok 2000, MNRAS, 311, 668 Bothun, Sullivan, & Schommer 1982, AJ, 87, 725 Dalcanton et al. 1997, ApJ, 482, 659 Davies 1990, MNRAS, 244, 8 de Blok & McGaugh 1997, MNRAS, 290, 533 de Blok, McGaugh, & Van der Hulst 1996, MNRAS, 283, 18 de Blok, Van der Hulst, & McGaugh 1996, BAAS, 189, 8402 de Blok, Van der Hulst, & Bothun 1995, MNRAS, 274, 235 de Jong 1996, A&A, 313, 46 Matthews & Gallagher 1997, AJ, 114, 1899 McGaugh 1999, in Galaxy Dynamics, ed. Merritt et al. (San Francisco: ASP) McGaugh & de Blok 1998, ApJ, 499, 41 O’Neil & Bothun 2000, ApJ, preprint O’Neil, Bothun, & Schombert 2000, AJ, 119, 136 (OBS) O’Neil, Bothun, & Impey 1999, AJ, 118, 1618 O’Neil et al. 1998, AJ, 116, 657 O’Neil, Bothun, & Cornell 1997a, AJ, 113, 1212 O’Neil et al. 1997b, AJ, 114, 2448 Phillipps et al. 1987, MNRAS, 229, 505 Schombert et al. 1995, AJ, 110, 2067 Swaters, Madore, & Trewhella 2000, ApJL, preprint Van Zee, Skillman, & Salzer 1998, ApJ, 497, L1 Zwaan et al. 1995, MNRAS, 273, L35
no-problem/0002/astro-ph0002090.html
ar5iv
text
# Measuring the Size of the Vela Pulsar’s Radio Emission Region ## 1. Theoretical Background Waves from a pointlike source observed through a scattering medium will suffer random phase changes. If the phase changes are much larger than 1 radian, the observer will receive radiation from many Fresnel zones, and the scattering is said to be “strong”. In this case the electric field at the plane of the observer is the sum of the electric fields from many lines of sight, with differing random phases (Goodman 1985). The net electric field is the result of a random walk. The electric field is thus drawn from a Gaussian distribution. Its square modulus, the intensity, is drawn from an exponential distribution (Scheuer 1968). The region from which the observer receives radiation is known as the scattering disk. Scattering changes phases in the Fresnel zones, and thus acts somewhat like a lens. If the source is resolved by this “lens”, the observed intensity is an incoherent sum from each part of the source. For a source of small but finite size, the resulting distribution of intensity is the sum of 3 exponentials. The scales of the smaller exponentials are approximately the size of the source along either direction on the sky, in units of the linear resolution of the scattering disk (Gwinn et al. 1998). Figure 1 shows examples of the resulting intensity distributions for a point source, and for a small but resolved source. When the source is resolved, the lowest intensities are absent. ## 2. Observations We compare the observed distribution of intensity with theoretical models to find the size of the Vela pulsar. The Vela pulsar is a favorable object for such observations because it is strong and heavily scattered. Observations at decimeter wavelengths easily capture many independent scintles in time and frequency. We observe the source interferometrically, rather than with a single dish, to avoid interference and the effects of the substantial noise baseline seen in single-dish observations. Details of the observations are described elsewhere (Gwinn et al. 2000a). Figure 1 shows an example of the observed distribution of correlated flux density on the short Tidbinbilla-Parkes baseline for the Vela pulsar. We find a size of $`340\pm 80`$ km for the data shown in the figure. Noise affects the distribution shown in Figure 1 strongly. Like finite source size, noise reduces the number of points at small amplitude. Noise can be measured accurately from observations of quasars, blank sky, or between pulses. Its effects can then be removed. The effects of changes in spectral structure on noise from digitization can also be calculated (Gwinn et al. 2000b). Several effects other than noise can also affect the observed distribution. Among these are correlator saturation, shot noise, pulse-to-pulse variability, and gain variations. These can be either calculated theoretically, measured from observations, or inferred from the distribution of intensity. Gwinn et al. (2000a) discuss these effects in detail. ## 3. Modulation Index The fact that source size affects the distribution of intensity in scintillation has long been known. (“Stars twinkle, planets do not.”) The modulation index, $`m=\sqrt{<I^2>-<I>^2}/<I>`$, quantifies the effect (Salpeter 1967; Cohen, Gundermann, & Harris 1967). For a point source $`m=1`$; for an extended source $`m<1`$, with smaller modulation $`m`$ for a larger source, other factors being equal. 
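The effect of a small but finite source size on the intensity statistics can be illustrated with a few lines of simulation. The sketch below is ours and is only schematic: a point source in strong scattering gives exponentially distributed intensities, while a resolved source is modeled as an incoherent sum of a few independent speckle patterns (the weights are illustrative, not fitted to the Vela data); the sum suppresses the lowest intensities and drives the modulation index below unity, as in Figure 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Point source in strong scattering: intensity is exponential, m = 1.
i_point = rng.exponential(1.0, n)

# Toy resolved source: incoherent sum of independent speckle patterns.
# The resulting density is a sum of exponentials, with the lowest
# intensities suppressed. Weights below are illustrative only.
weights = [0.8, 0.15, 0.05]
i_res = sum(rng.exponential(w, n) for w in weights)

def mod_index(i):
    """m = sqrt(<I^2> - <I>^2) / <I>."""
    return np.sqrt(np.mean(i**2) - np.mean(i)**2) / np.mean(i)

print("m, point source    :", round(mod_index(i_point), 3))  # ~1.00
print("m, resolved source :", round(mod_index(i_res), 3))    # < 1
print("P(I < 0.05)        : %.3f vs %.4f" %
      ((i_point < 0.05).mean(), (i_res < 0.05).mean()))
```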
Single-dish observers used measurements of the modulation index to infer source sizes before the advent of radio interferometry, and this technique remains standard at low frequencies (Hewish, Readhead, & Duffett-Smith 1974; Hajivassiliou 1992). However, it is more subject to scintillation shot noise, and less immune to systematic effects, than a direct comparison of distribution functions. A finite observation necessarily samples a finite number of scintles. Averages over this sample approximate the statistical averages $`<I^2>`$ and $`<I>`$. Because the nearly-exponential distribution falls off rapidly at high intensity, these sums (particularly $`<I^2>`$) are dominated by the relatively rare scintles with the highest intensities. On the other hand, the effects of source structure are most important at the lowest intensities, where the number of scintillations is large, but the contribution to $`<I>`$ and $`<I^2>`$ is small. Thus, direct estimation of the modulation index is relatively insensitive to source size and relatively more sensitive to scintle shot noise than a direct comparison of the forms of distribution functions. Correlator saturation also affects the modulation index strongly, because its effects are largest at high intensity. Moreover, since the observable is a single number, rather than a distribution, it is more difficult to know what effects are playing significant roles. Interestingly, Roberts & Ables (1982) measured the modulation index, as well as the characteristic time and frequency scales of scintillation, in their classic study of scattering of southern-hemisphere pulsars. They report a modulation index of $`0.97\pm 0.03`$ for the Vela pulsar at 18 cm wavelength, and of $`0.90\pm 0.02`$ at 9 cm wavelength. Interpolation between these values is consistent with our results quoted above. Notably, Roberts & Ables find that the modulation index is smaller at shorter observing wavelengths, suggesting that the source size is greater. This conclusion is surprising from the standpoint of the standard radius-to-frequency mapping. (Note, however, that these measurements are of size rather than emission height.) The larger inferred size might reflect the more complicated pulse profile of this pulsar at shorter wavelengths (Kern et al. 2000). On the other hand, it might also reflect systematic effects; at short wavelengths the scintles have wide bandwidths but the source remains quite strong, so that correlator saturation should become more serious. In contrast, self-noise and gain variations might be expected to be more important at lower frequencies. Observations of the full distribution of intensity in scintillation, as a function of wavelength, should indicate the origin of this variation of modulation index. ## References Cohen, M.H., Gundermann, E.J., & Harris, D.E. 1967, ApJ, 150, 767 Goodman, J.W. 1985, Statistical Optics, New York: Wiley Gwinn, C.R., Britton, M.C., Reynolds, J.E., Jauncey, D.L., King, E.A., McCulloch, P.M., Lovell, J.E.J., & Preston, R.A. 1998, ApJ, 505, 928 Gwinn, C.R., Britton, M.C., Reynolds, J.E., Jauncey, D.L., King, E.A., McCulloch, P.M., Lovell, J.E.J., Flanagan, C.S., & Preston, R.A. 2000, ApJ, in press Gwinn, C.R., Britton, M.C., Carlson, B., Dougherty, S., Del Rizzo, D., Reynolds, J.E., Jauncey, D.L., McCulloch, P.M., Hirabayashi, H., Kobayashi, H., Murata, Y., & Edwards, P.G. 2000, in preparation Hajivassiliou, C.A. 
1992, Nature, 355, 232 Hewish, A., Readhead, A.C.S., & Duffett-Smith, P.J. 1974, Nature, 252, 657 Kern, J., Hankins, T., & Rankin, J. 2000, these proceedings Roberts, J.A., & Ables, J.G. 1982, MNRAS, 201, 1119 Salpeter, E.E. 1967, ApJ, 147, 433 Scheuer, P.A.G. 1968, Nature, 218, 920
no-problem/0002/cond-mat0002138.html
ar5iv
text
# Modelling an Imperfect Market ## I Introduction Financial markets usually consist of trades of commodities and currencies. However, one can easily find cases in other types of human endeavour that parallel the activities observed in financial markets. For example, a politician may select the standpoints in his/her platform in exchange for votes in the coming election. Also, scientists may trade ideas in order to generate citations. However, the most easily quantifiable marketplace is the financial one, where the existence of money allows a direct measure of value. There have been several proposals to model such markets. A recent review was presented by Farmer. Bak, Paczuski and Shubik have proposed a model where they consider the price fluctuations of a single product traded by many agents, all of which use the same strategy. In that model fat tails and anomalous Hurst exponents appear when global correlations between agents are introduced. An alternative model, presented by Lux and Marchesi, is driven by exogenous fluctuations in “fundamental” values of a single good, with induced non-Gaussian fluctuations in the price assessment of this good arising from switching of strategies (trend followers and fundamentalists) by the agents. Below we give a few considerations that led us to the development of a model where the dynamics arises solely from the interaction of agents trying to exchange several types of goods. Financial markets exhibit a dynamical behavior that, even in the absence of production, allows people to become either wealthy or poor. If in these markets there were some sort of equilibrium, e.g. due to complete rationality of the players, this would not be possible. In order to create wealth or bankruptcy, people have to outsmart each other. This means that each trader attempts to buy as cheaply and sell as expensively as possible. This demands that an agent should find a seller who sells at a low price, and later another one who is willing to buy the same product at a higher one. Thus different agents price the same product differently. In this work we demonstrate that it is possible to devise a simple model for such a non-equilibrium market. We call it the Fat Cat (FC) model, as it functions on greed (each agent buys to optimize his own assets) and creates a market with large fluctuations, i.e. fat tails. In Sect. II we present this model and show examples of its dynamical evolution. We show that it leads to an ever fluctuating market. In the following Sect. III, the time series generated by this model is analyzed and it is shown that it exhibits persistency in the fluctuations of the wealth of individual agents. In the final section we summarize the work and present suggestions for generalizing the model to let individual agents evolve their individual trading strategies. ## II Description of the FC model Consider a market with $`N_{ag}`$ agents, each initially having a stock of $`N_{un}`$ units of products selected among $`N_{pr}`$ different types of products. In a previous work we have shown that, for the case of agents having memory of past transactions, such a market spontaneously selects one of the products as the most suitable means of exchange. The product so chosen acts as money in the sense that it is accepted even when the agent does not need it, because through the memory of past requests of products the agent knows it is in high demand, and therefore it will be useful to have it to trade for other products. 
The selection of a product as an accepted means of exchange is not indefinite: after some time another one substitutes it as the favorite currency. The time scale for these currency substitutions is large when measured in the number of exchanges between individual agents. In the present model we consider that each of the agents initially has $`N_{mon}`$ units of money. According to the discussion in the preceding paragraph, in principle money could be viewed as one of the products, but here we consider it a separate entity in order to quantify prices. The $`N_{un}`$ products given initially to each agent are selected randomly. Later, during the time evolution of the system, each agent $`i`$ has, at each time step, an amount of money $`M(i),i=1,\mathrm{},N_{ag}`$, and a stock of the different products $`j`$, $`S(i,j)`$, where $`j=1,\mathrm{},N_{pr}`$. Since the model uses money as the means of exchange, agents assign different prices to the different products in their possession. The prices of the different items in the stock of agent $`i`$, $`P(i,j)`$, are taken initially to be integers uniformly distributed in the interval $`[1,5]`$. We have verified that the evolution of the system does not depend on this particular choice. How do we picture such a market? We may imagine antique collectors trying to buy objects directly from each other, using their own estimates for the prices of the different stock. When two agents meet, one of them, the buyer, checks the seller’s price list, and compares it with his own price list. We have chosen the antique collector market as an example because few other markets show spatial price fluctuations at such a high level. The decision to buy or not, and the changes in the value of the agents’ products, are given by some strategy, which for now we assume is the same for all agents in the system. Among all the products that the buyer considers to be possible buys (having a price set by the seller which is lower than the one he would sell the same product for), he will single out the best one, and will then attempt to buy it. But, if the buyer finds no products that he considers good buys, the seller will consider that he has overpriced his goods and will as a consequence tend to lower his prices. At the same time, the buyer will think that his price estimate was too low, and as a consequence raise his price estimate. This is the basis for our computer simulation for such a market place. In it, we assume that at each time step the following procedure takes place: 1. Buyer ($`b`$) and seller ($`s`$) are selected at random among the $`N_{ag}`$ agents. If the seller has no products to offer, then another seller is chosen. 2. The buyer selects the product $`j`$ in the seller’s stock which maximizes $`P(b,j)-P(s,j)`$ (i.e. his profit). The corresponding $`j`$ we call $`j_{bb}`$ (best buy). 3. If the buyer does not have enough money (i.e. if $`M(b)<P(b,j_{bb})`$), we go back to the first step, choosing a new pair of agents. 4. If the buyer has enough money we proceed. If $`P(s,j_{bb})<P(b,j_{bb})`$, the transaction is performed at the seller’s price. This means that we adjust: $`S(b,j_{bb})\to S(b,j_{bb})+1`$, $`S(s,j_{bb})\to S(s,j_{bb})-1`$, $`M(b)\to M(b)-P(s,j_{bb})`$, $`M(s)\to M(s)+P(s,j_{bb})`$. 5. If $`P(s,j_{bb})\ge P(b,j_{bb})`$, the transaction is not performed. In this case, the seller lowers his price by one unit, $`P(s,j_{bb})\to \mathrm{max}(P(s,j_{bb})-1,0)`$, and the buyer raises his price by one unit, $`P(b,j_{bb})\to P(b,j_{bb})+1`$. 
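The rules above are complete enough to be run directly. The following sketch is our minimal implementation of one encounter, with illustrative parameter values; for simplicity it skips to the next encounter when the chosen seller has an empty stock, rather than redrawing a seller as in rule 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AG, N_PR, N_MON, N_UN = 100, 10, 500, 100   # illustrative values

M = np.full(N_AG, N_MON)                        # money M(i)
S = rng.multinomial(N_UN, [1/N_PR]*N_PR, N_AG)  # stock S(i, j)
P = rng.integers(1, 6, (N_AG, N_PR))            # prices P(i, j) in [1, 5]

def step():
    """One encounter, following rules 1-5 above."""
    b, s = rng.choice(N_AG, 2, replace=False)   # rule 1: buyer and seller
    if not S[s].any():
        return                                  # (simplified rule 1)
    offered = np.flatnonzero(S[s])
    jbb = offered[np.argmax(P[b, offered] - P[s, offered])]  # rule 2
    if M[b] < P[b, jbb]:                        # rule 3: cannot afford it
        return
    if P[s, jbb] < P[b, jbb]:                   # rule 4: trade, seller's price
        S[b, jbb] += 1; S[s, jbb] -= 1
        M[b] -= P[s, jbb]; M[s] += P[s, jbb]
    else:                                       # rule 5: adjust estimates
        P[s, jbb] = max(P[s, jbb] - 1, 0)
        P[b, jbb] += 1

for _ in range(100_000):
    step()
print("total money conserved:", M.sum() == N_AG * N_MON)
```

Iterating step() and recording the agents’ wealth reproduces the kind of time series analyzed in Sect. III; total money is conserved because rule 4 only transfers it between buyer and seller.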
We see that, according to these rules, buyer and seller decide on a transaction based only on their local information, i.e. their estimates of the prices for the different products they possess. These prices are always non-negative integers. Also note that since, as defined in step 3 above, the price offered by the buyer cannot be higher than the amount of money he has, we are not allowing the agents to get into debt. Further, the model tends to equilibrate large price differences, according to step 5, but induces price differences when buyer and seller agree on the price of the most tradeable product. This non-equilibrating step is essential to induce dynamics in a model like the present one, where all agents follow precisely the same strategy. Without it the system would freeze into a state where all agents agree on all prices. The rules given above are just one possible set of rules for transactions. We have found other sets that lead to a behavior qualitatively similar to the one shown below for the present rules. <sup>§</sup><sup>§</sup>§A quote from Marx is appropriate here: “These are my principles. If you do not like them, I have others”. We now show that, under this set of local rules, the distribution of wealth organizes itself into a dynamically stable pattern, and the same phenomenon takes place with the prices. One should emphasize that there is not an accepted market value for the products. Indeed, due to the price adjustments performed in unsuccessful encounters, the prices never reach equilibrium, and different agents may assign different prices to the same product. In Fig. 1 we illustrate this point, showing the price assigned by two different agents to the same product, as a function of time. Time is defined here in terms of the number of encounters between agents, and one time unit is the average time between events where a given agent acts as a buyer. We note that during a considerable fraction of the time there is a relatively large difference between the prices assigned by the agents. This shows that there is a margin for making profit in such a market, i.e. arbitrage is possible. We have verified that the average market price of a good fluctuates with a Hurst exponent of $`0.5`$. In Fig. 2 we show, for the same time interval, an example of the evolution of key quantities in the model associated with Agent 1 in Fig. 1. The total wealth of an agent $`i`$ is the amount of money plus the value of all goods in the agent’s possession: $$w(i)=M(i)+G(i).$$ (1) Here the value of product $`j`$ is defined as the average of what all agents consider its value to be: $$P_{ave}(j)=\frac{1}{N_{ag}}\underset{i=1}{\overset{N_{ag}}{\sum }}P(i,j),$$ (2) and the value of all of agent $`i`$’s goods, $`G(i)`$, is then defined as $$G(i)=\underset{j}{\sum }S(i,j)P_{ave}(j).$$ (3) We note that there are considerable fluctuations in the wealth of this agent. The study of these fluctuations is essential to the understanding of the properties of the model, and we develop this in the following section. ## III Fluctuations in the FC model In order to quantify the fluctuations in wealth, we show in Fig. 3 the RMS fluctuations of the wealth of a selected agent as a function of time. The figure illustrates that the wealth fluctuations can be characterized by a Hurst exponent $`H\approx 0.7`$. We have examined variants of both model parameters and rules to check the stability of this result. We found it to be stable, as long as one keeps greed in the model. 
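For readers who want to reproduce this kind of analysis, the sketch below shows one simple way to estimate a Hurst exponent from a wealth time series, via the scaling of the RMS increments; it is our illustration of the method behind Fig. 3, not the authors’ actual analysis code.

```python
import numpy as np

def hurst_rms(w, lags=(1, 2, 4, 8, 16, 32, 64, 128)):
    """Estimate H from the scaling of RMS wealth differences,
    sigma(dt) = <(w(t+dt) - w(t))^2>^(1/2) ~ dt**H."""
    sig = [np.sqrt(np.mean((w[dt:] - w[:-dt])**2)) for dt in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(sig), 1)
    return slope

# Sanity check on an uncorrelated synthetic series: a plain random walk
# has H = 0.5, so persistent wealth fluctuations should give H > 0.5.
rng = np.random.default_rng(3)
walk = np.cumsum(rng.normal(size=200_000))
print("H(random walk) ~", round(hurst_rms(walk), 2))   # ~0.5
```

With such an estimator in hand, the robustness checks described next are straightforward to repeat on wealth series generated by the trading rules of Sect. II.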
For example, if one reduces the number of product types to only $`N_{pr}=2`$ (keeping $`N_{ag}=100`$, $`N_{mon}=500`$ and $`N_{un}=100`$ initially), the Hurst exponent remains unchanged although the scaling regime shrinks. Similarly, setting $`N_{un}=2`$ (with $`N_{pr}=100`$, $`N_{ag}=100`$ and $`N_{mon}=500`$), resembling antique dealing where each agent owns a few of many possible products, also leaves the Hurst exponent unchanged. On the other hand, if greed is removed from the model, e.g. if the buyer selects a product at random from the seller’s store, without consideration of the profit margin, the Hurst exponent drops to $`0.5`$, signaling that no correlations develop in such a case. Thus, the optimization of product selection expressed by step 2 in the procedure is closely related to the persistent fluctuations seen in our model. We have checked that other optimization procedures, e.g. selecting the cheapest product or the product the buyer has the least of in stock, also give similar persistent fluctuations. By contrast, random selection reflects an economy where different products do not interact significantly with each other, and where our market of $`N_{pr}`$ different types of products nearly decouples into $`N_{pr}`$ different markets. With random selection, the only interaction between products is indirect; it appears due to the constraint that the agents’ money take only non-negative values. Overall, the random strategy gives a less fluctuating market where agents agree more on prices. Greed indeed makes our model world both richer and more interesting (which is not to say better). We now try to quantify these wealth fluctuations. Fig. 4 displays the changes in the value of $`w`$ for several time intervals $`\mathrm{\Delta }t`$. The three curves are histograms of wealth changes for respectively $`\mathrm{\Delta }t=10`$, $`\mathrm{\Delta }t=100`$ and $`\mathrm{\Delta }t=1000`$. One observes fairly broad distributions with a tendency to asymmetry, with a larger probability for large losses than for large gains. Similar skewness is seen in real stock market data. Furthermore, the tails are outside the Gaussian regime. In the upper panel of Fig. 5, this is investigated further by plotting the histograms as a function of the logarithmic changes in $`w`$, $`r_{\mathrm{\Delta }t}=\mathrm{log}_2w(t+\mathrm{\Delta }t)-\mathrm{log}_2w(t)`$ (log-returns), for the same three time intervals. In that figure we have collapsed the curves onto each other by rescaling them using the Hurst exponent $`H=0.69`$, consistent with the one found in Fig. 3. For the two short time intervals the collapse is nearly perfect, even in the non-Gaussian fat tails. For larger time intervals the distribution changes from a steep power law or stretched exponential, to an exponential, and finally becomes Gaussian for very large intervals (not shown as it is very narrow on the scale of this figure). We think it is interesting to notice that our model is consistent with the empirical observation that in real markets the probability for large negative fluctuations is larger than that for large positive ones. In the lower panel of Fig. 5 we examine in detail the fluctuations for $`\mathrm{\Delta }t=1`$, and compare the fat tails with truncated power law decays $`P(r)\propto r^{-4}\mathrm{exp}(-|r|/R)`$, which for such small time intervals are nearly symmetrical. The $`1/r^4`$ is consistent with the fat tail observed in 5-minute interval trading of stocks. 
The cut-off size $`R=0.8`$ corresponds to cut-offs when price changes are about a factor 2 from the original price, a regime which is not addressed in the short-time trading analysis just cited. We stress that our analysis of fat tails includes a wide distribution of wealth; thus large relative changes of wealth are presumably mostly associated with poor agents. Thus the seemingly good fit to fat tails observed for stock market fluctuations in the 1000 largest US companies may be coincidental. ## IV Summary and Discussion The presence of fat tails and Hurst exponents larger than 0.5 in the distribution of monetary value appears to be a characteristic of real markets. The present model is, as was the case with the previous version, qualitatively consistent with these features. We stress that here, as for the previous model, we are not including any development of strategy by the agents that might force the emergence of cooperativity. A more important difference from game-theoretic models is that the minority game, as well as the evolving Boolean network of Paczuski, Bassler and Corral, evolves on the basis of a global reward function. With the present model we would like to open the way for models which evolve with imperfect information, preferably in a form which allows direct comparison with financial data. The present model does this, and in a setting where there are many products and thus possibilities for making arbitrage along different “coordinates”. Compared to models with fat tails or persistency arising from boom-burst cycles, such as the trend-enhancing model of Delong or the trend-following model of Lux and Marchesi, the present model discusses anomalous scaling in a market where no agent has precise knowledge of the global or average value of a product. There is only local optimization of utility (estimated market value), and all trades are done locally without the effects of a global information pool. This imperfect information gives a possibility for arbitrage and opens the way for a dynamic and evolving market. The model we propose is for a market composed of agents, goods and money. We have demonstrated that such a market easily shows persistent fluctuations of wealth, and seen that this persistency is closely related to having a market with several products which influence each other’s trading. As seen in Fig. 2, a wealth increase of an agent is associated with active trading of a few products. This may be understood as follows: when the number of options a seller presents is small, it is hard for the buyer to find a bargain, so if a transaction is performed it will probably be at a good price for the seller. The simpler model of Ref. had persistency in the fluctuations in the demand for different products, whereas, as mentioned above, the persistence here is in the fluctuations in the wealth of the different agents. It is interesting to mention that the Hurst exponents in the two models take very similar values, and do so for a wide variety of the respective parameters. There is a kind of duality in that in the simpler model a product increased in value when it was held by relatively few agents, whereas in the present model the agents increase their wealth by specializing in a few products. The setup proposed here, with agents and products with individual local prices, also allows for different individual strategies of the agents. 
For example, both the choice between the greedy and the random strategy under step 2 and the particular adjustment of values defined under steps 4–5 could be defined differently from agent to agent. One may accordingly have different strategies for different agents, and each agent could change its strategy in order to improve its performance. The evolution of these strategies would then become an inherent part of the dynamics. This opens the way for the evolution of strategies as part of the financial market, and will be discussed in a separate publication. We thank F.A. Oliveira and H.N. Nazareno for warm hospitality and the I.C.C.M.P. for support during our stay in Brasília. R.D., A.H. and S.R.S. thank NORDITA for support during our stay in Copenhagen. R.D. and S.R.S. thank MCT/FINEP/CNPq (PRONEX), contract number 41.96.0886.00, for partial financial support.
no-problem/0002/astro-ph0002242.html
ar5iv
text
# The RR Lyrae U Com as a test for nonlinear pulsation models ## 1 Introduction Variable stars play a key role in many astrophysical problems, since their pulsation properties depend on stellar parameters, and therefore they can supply valuable and independent constraints on a wide range of current evolutionary predictions. In particular, the empirical evidence found long ago in Magellanic Cepheids of the correlation between period and luminosity was the starting point of a major theoretical and observational effort aimed at using variable stars as standard candles to estimate cosmic distances. The current literature still hosts a lively debate on the intrinsic accuracy of the Cepheid distance scale (Bono et al. 1999; Laney 2000) and on the use of RR Lyrae variables to evaluate the distance (and the age) of Galactic globulars (Caputo 1998; Gratton 1998, G98). Theoretical insights into the problem of radial stellar pulsations came from the linearization of the local conservation equations governing the dynamical instability of stellar envelopes. Linear, nonadiabatic models typically supply accurate pulsation periods and plausible estimates (necessary conditions) of the modal stability of the lowest radial modes. However, a proper treatment of radial pulsations does require the solution of the full system of hydrodynamic equations, including a nonlocal and time-dependent treatment of turbulent convection (TC) to account for the coupling between radial and convective motions (Castor 1968; Stellingwerf 1982, S82). The development of nonlinear, convective hydrocodes (S82; Gehmeyr 1992; Bono & Stellingwerf 1994, BS; Wuchterl & Feuchtinger 1998) gave the opportunity to provide plausible predictions of the properties of radial variables, and in particular of the topology of the instability strip, as well as of the time behavior of both light and radial velocity curves. This new theoretical scenario made it possible to investigate, for the first time, the dependence of pulsation amplitudes and Fourier parameters on stellar mass, luminosity and effective temperature (see e.g. Kovacs & Kanbur 1997; Brocato et al. 1996; Feuchtinger 2000, F20). However, all these investigations dealt with parameters related to the light curve, whereas nonlinear computations supply much more information, namely the detailed predictions of the light variation along a full pulsation cycle. Therefore, the direct comparison between observed and predicted light curves appears as a key test only partially exploited in the current literature (Wood, Arnold, & Sebo 1997, WAS). In order to perform a detailed test of the predictive capability of our nonlinear, convective models we focused our attention on the photometric data collected by Heiser (1996, H96) for the field first-overtone ($`RR_c`$) variable U Com. The reason for this choice rests on the detailed coverage of the U, B, and V light curves, as well as on the characteristic shape of the light curve, with a well-defined bump close to the luminosity maximum. This secondary feature provides a tight observational constraint to be nailed down by theory. Since the period of the variable strongly depends on the structural parameters (mass, luminosity, and radius) of the pulsator, the question arises whether nonlinear pulsation models account for the occurrence of similar pulsators and, if so, how precisely the observed light curves can be reproduced by theoretical predictions. 
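To give a feeling for how tightly the period ties these parameters together, the sketch below evaluates a classical pulsation relation, the van Albada & Baker (1971) formula, at the best-fit parameters quoted in Sect. 2. We quote it only as an indicative stand-in: the paper itself uses the BCCM pulsation relation, and the first-overtone/fundamental period ratio of about 0.745 is likewise a commonly adopted approximation.

```python
import numpy as np

def logP_fund(logL, mass, teff):
    """Fundamental-mode period (log10 of days) from the classical
    van Albada & Baker (1971) relation; used here only as an
    indicative stand-in for the BCCM relation adopted in the text."""
    return 11.497 + 0.84*logL - 0.68*np.log10(mass) - 3.48*np.log10(teff)

# Best-fit parameters for U Com quoted in Sect. 2:
P_F = 10**logP_fund(logL=1.607, mass=0.60, teff=7100.0)
P_FO = 0.745 * P_F   # approximate first-overtone / fundamental period ratio
print(f"P_F ~ {P_F:.3f} d,  P_FO ~ {P_FO:.3f} d  (observed: 0.290 d)")
```

With these coefficients, a 50 K change in $`T_e`$ alone shifts the predicted period by roughly 2%, which illustrates why the models of Sect. 2 are conveniently organized into iso-period sequences.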
In §2 we present the comparison between theory and observations, while in §3 we discuss the calibration of the TC model. Finally, in §4 we briefly outline the observables which can further validate this theoretical scenario. ## 2 Comparison between theory and observations On the basis of spectroscopic measurements Fernley & Barnes (1997, FB97) estimated for U Com a metallicity $`[Fe/H]=-1.25\pm 0.20`$, while Fernley et al. (1998, FB98) found a negligible interstellar extinction ($`E(B-V)=0.015\pm 0.015`$). According to this empirical evidence and to a well-established evolutionary scenario, we expect for a metal-poor RR Lyrae a stellar mass of the order of 0.6 $`M_\odot `$ and a luminosity ranging from log $`L/L_\odot `$=1.6 to 1.7. At the same time, pulsation predictions for double-mode pulsators suggest similar mass values (Cox 1991; Bono et al. 1996). As a consequence, we computed a sequence of nonlinear models at fixed chemical composition (Y=0.24, Z=0.001) and pulsation period $`P_{FO}\approx 0.29`$ d. Along such an iso-period sequence the individual models were constructed at a fixed mass value ($`M/M_\odot `$= 0.60), while both the luminosity and the effective temperature were changed according to the pulsation relation given by Bono et al. (1997, BCCM). Both linear and nonlinear models were computed by adopting the input physics and physical assumptions already discussed in BS, BCCM, and in Bono, Marconi & Stellingwerf (1999). According to S82, current models were computed by assuming a vanishing efficiency of turbulent overshooting in the region where the superadiabatic gradient attains negative values. This means that the convective flux can only attain positive or vanishing values ($`F_c\ge 0`$). In the next section we show that the assumption adopted in our previous investigations (i.e. that $`F_c`$ can attain both positive and negative values) marginally affects the topology of the instability strip, but the predicted light curves are somewhat at variance with the empirical ones. The top panels of Fig. 1 show that at fixed stellar mass ($`M/M_\odot `$=0.60), the double peak feature appears in models characterized by luminosities approximately equal to log $`L/L_\odot \approx 1.61`$, and effective temperatures ranging from 6950 to 7150 K. These models also present B amplitudes in reasonable agreement with empirical estimates ($`A_B=0.64`$ mag). The bottom panels of Fig. 1 display the light curves of models along the iso-period sequence constructed by adopting a fixed effective temperature ($`T_e=7100`$ K) but different assumptions on stellar mass and luminosity. A glance at these curves shows that the luminosity amplitude is mainly governed by the stellar mass, whereas the shape of the light curve is only marginally dependent on this parameter. We find that the best fit to the observed B light curve is obtained for $`M/M_\odot `$=0.6, log $`L/L_\odot `$=1.607, $`T_e=7100`$ K, $`P_{FO}=0.290`$ d, together with a distance modulus $`(m_B-M_B)=11.01`$ mag. The fit, though not perfect, appears rather satisfactory, thus supplying substantial support to the predictive power of the adopted theoretical scenario. On the basis of this finding, we are now interested in testing the accuracy of theoretical predictions in different photometric bands. Fig. 2 shows, from left to right, the comparison between predicted light curves (solid lines) and empirical data (open circles) in the U, B, V, and K bands respectively. The comparison was performed by adopting the same distance modulus, i.e. 
by neglecting the interstellar extinction, and the agreement between theory and observations seems even better than for the B light curve, thus suggesting that nonlinear models can account for luminosity amplitudes, a long-standing problem of pulsation theory. Not surprisingly, we also find that the time-averaged colors predicted by our model appear, within the current uncertainty on both reddening and photometry, in very good agreement with empirical estimates (see Table 1). This result supports the conclusion that nonlinear models, at least in this case, can constrain stellar colors by best fitting the light curve in a single photometric band. At the same time, this agreement suggests that the pulsational constraints on the temperature of the pulsator, as derived from the B light curve, are consistent with the theoretical light curves in the other photometric bands. | TABLE 1. U Com: theoretical and empirical colors<sup>a</sup> | | | | | --- | --- | --- | --- | | Color (mag) | Theory | Observ. $`E\left(B-V\right)=0`$ | Observ. $`E\left(B-V\right)=0.015`$ | | $`<U>-<B>`$ | $`0.06\pm 0.01`$ | $`0.11\pm 0.02`$ | $`0.09\pm 0.02`$ | | $`<B>-<V>`$ | $`0.23\pm 0.01`$ | $`0.21\pm 0.02`$ | $`0.19\pm 0.02`$ | | $`<V>-<K>`$ | $`0.77\pm 0.01`$ | $`0.81\pm 0.07`$ | $`0.77\pm 0.07`$ | <sup>a</sup> Empirical estimates are based on photometric data collected by H96 and by FB97. Theoretical colors refer to the best fit model, and the errors were estimated by assuming an uncertainty of 50 K in the temperature of this model. However, we note that on the basis of both the period and the shape of the B light curve we predicted the effective temperature, the intrinsic luminosity, and in turn the distance modulus of this object. The plausibility of the theoretical constraints can be further tested by comparing them with independent evaluations available in the literature. We find that the effective temperature predicted by nonlinear models, $`T_e=7100\pm 50`$ K, is in remarkable agreement with the empirical temperature, $`T_e=7100\pm 150`$ K, derived by adopting the true intensity mean color $`(<V>-<K>)_0=0.77\pm 0.07`$ provided by FB97 and the CT relation by Fernley (1989). The same outcome applies by assuming E(B−V)=0: indeed $`(<V>-<K>)=0.81\pm 0.07`$ gives $`T_e=7050\pm 150`$ K, while the semi-empirical estimate provided by H96 suggests $`T_e=7250\pm 150`$ K. We also note that the effective gravity of the best fit model (log $`g\approx 3.0`$) is in very good agreement with both the photometric estimate obtained by H96 (log $`g=3.1\pm 0.2`$) and the spectroscopic measurements for field RR Lyrae variables provided by Clementini et al. (1995) and by Lambert et al. (1996). As far as the distance modulus is concerned, Fig. 3 shows the comparison of our pulsational estimates (filled circles) with empirical and theoretical U Com absolute magnitudes obtained by adopting different methods. The top and the bottom panel refer to absolute magnitudes based on visual and NIR magnitudes respectively. The data plotted in this figure show that our estimates are in satisfactory agreement with distances based on the Baade-Wesselink method, on the statistical parallax method, on Hipparcos trigonometric parallaxes and proper motions, and on the RR Lyrae K-band PL relation. However, one also finds that the distance determination based on HB evolutionary models constructed by including the most recent input physics (Cassisi et al. 
1999, C99) seems to overestimate the distance modulus by approximately 0.18 mag when compared with the current pulsation determination. A disagreement between evolutionary and pulsation predictions concerning the luminosities of RR Lyrae stars was brought out by Caputo et al. (1999), and more recently by Castellani et al. (2000), who found that up-to-date He-burning models seem too bright when compared with Hipparcos absolute magnitudes. The data plotted in Fig. 3 confirm this discrepancy between evolutionary and pulsational predictions, possibly due to an overluminosity of HB models. Part of this discrepancy might be due to the higher temperature of U Com when compared with the mean temperature of the RR Lyrae gap ($`T_e=6800`$ K) adopted in evolutionary estimates. Finally, we also constructed several sequences of models by increasing or decreasing the metal abundance by a factor of two. We find that the bump becomes more (less) evident at lower (higher) metal contents, and that this change does not allow us to obtain a good fit between theory and observations. This result is in satisfactory agreement with the spectroscopic estimate by FB97 and supports the conclusion that the shape of the light curve can also be used to constrain the $`RR_c`$ metallicity (Bono, Incerpi, & Marconi 1996). ## 3 Calibration of the TC model The first set of models we constructed for fitting the empirical light curves was characterized by an unpleasant feature: the peak of the bump was, in contrast with the empirical evidence, brighter than the ”true” luminosity maximum. According to Bono & Stellingwerf (1993) the bump along the rising branch presents a strong dependence on the free parameters adopted in the TC model. However, in the calibration of the TC model suggested by BS both the eddy viscosity and the diffusion scale lengths (see their eqs. 4 and 9) were scaled to the value of the mixing length parameter. We performed several numerical experiments by changing, along each sequence, only one of the three free parameters. As a result, we find that full-amplitude models constructed by adopting plausible changes of the free parameters do not simultaneously account for the pulsation amplitude, the shape of the light curve, and the temperature width of the first-overtone instability region. Due to the lack of a self-consistent theory of time-dependent, nonlocal, convective transport, current investigations were mainly aimed at calibrating the free parameters adopted for treating the coupling between pulsation and convection (Yecko et al. 1998; F20). This is not a trivial effort, since the observables and the comparison between theory and observations are affected by the thorny problem of the transformation into the observational plane and/or by deceptive systematic errors in reddening and distance estimates. In order to overcome some of these difficulties, F20 calibrated the TC model by performing a detailed comparison between theoretical and observed luminosity amplitudes of field RR Lyrae variables. On the basis of the fine tuning of both the mixing length and the turbulent viscosity length, F20 found that the Fourier parameters of fundamental-mode light curves agree with observational data. The same outcome did not apply to $`RR_c`$ variables: indeed the predicted values appear, at fixed period, smaller than the empirical ones. 
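As a brief aside on the photometric quantities compared in Table 1 above: the means denoted $`<U>`$, $`<B>`$, $`<V>`$, $`<K>`$ are intensity means, i.e. averages taken in flux over the pulsation cycle and then converted back to magnitudes. The sketch below shows this standard conversion; the toy light curve is ours and is not the U Com data.

```python
import numpy as np

def intensity_mean_mag(phase, mag):
    """Intensity-weighted mean magnitude over one pulsation cycle:
    convert magnitudes to fluxes, average over phase, convert back."""
    flux = 10.0 ** (-0.4 * np.asarray(mag))
    mean_flux = np.trapz(flux, phase) / (phase[-1] - phase[0])
    return -2.5 * np.log10(mean_flux)

# Toy sinusoidal light curve, for illustration only:
phase = np.linspace(0.0, 1.0, 201)
mag = 16.7 + 0.3 * np.sin(2.0 * np.pi * phase)
print(round(intensity_mean_mag(phase, mag), 3))  # slightly brighter than 16.7
```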
A detailed comparison with the convective structure of the RR Lyrae models constructed by F20 is not possible because he adopted a convective flux limiter in the turbulent source function and in the convective enthalpy flux, and neglected both the turbulent pressure and the turbulent overshooting. As a consequence, we decided to test the dependence of full amplitude models on the last two ingredients. Interestingly enough, we find that the models constructed by assuming a vanishing overshooting efficiency satisfy the empirical constraints: the bump is dimmer than the luminosity maximum, the luminosity amplitudes attain values similar to the observed ones, and the temperature width of the region in which the first overtone is unstable agrees with the BCCM findings for cluster $`RR_c`$ variables. Fig. 4 shows the B light curve of two models constructed by adopting the same input parameters, but different assumptions on the efficiency of turbulent overshooting. The simultaneous change in the pulsation amplitude and in the shape of the light curve is noteworthy. The dependence of the convective structure on both overshooting and turbulent pressure will be investigated in a forthcoming paper. ## 4 Discussion and Conclusions The comparison between theory and observations, namely of the period and the shape of the B light curve of U Com, allowed us to supply tight constraints on structural parameters such as stellar mass, effective temperature, and gravity, as well as on the distance of this variable. We also found that the occurrence of a well-defined bump close to the luminosity maximum can be safely used to constrain the metallicity of this object and to calibrate the TC model adopted for handling the coupling between convection and pulsation. The approach adopted in this investigation seems quite promising since it relies only on nonlinear, convective models and on stellar atmosphere models. In fact, the best fit model to the empirical data was found by constructing sequences of iso-period models in which the stellar mass, the luminosity and the effective temperature were changed not according to HB models but according to the pulsation relation (BCCM). The comparison between theory and observations shows that both the structural parameters and the distance are in very good agreement with estimates available in the literature. No evidence for a systematic discrepancy was found in the pulsation estimates, thus supporting the conclusion that individual fits to light curves can supply independent and firm constraints on the actual parameters and distances of variable stars. This finding confirms the results of a similar analysis of an LMC bump Cepheid by WAS. Accurate radial velocity data for U Com are not available in the literature and therefore we could not constrain the accuracy of the velocity variation along the pulsation cycle. The three radial velocity points collected by FB98 agree quite well with the predicted curve. However, the radial velocity curve is a key observable for constraining the consistency of the adopted TC model (F20), and therefore new spectroscopic measurements of U Com would be of great relevance for assessing the predictive impact of nonlinear, convective models. Theoretical observables of the best fit model discussed in this paper, as well as the radius and radial velocity variations, are available upon request from the authors. It is a pleasure to thank M. 
Groenewegen for providing us with the U Com distances based on the reduced parallax method and on the modified Lutz-Kelker correction, as well as for insightful discussions on their accuracy. We are indebted to R. Garrido for sending us radial velocity data and to T. Barnes for useful suggestions on current data. We also acknowledge an anonymous referee for useful suggestions that improved the readability of the paper. This work was supported by MURST (Cofin98) under the project ”Stellar Evolution”. Partial support by ASI and CNAA is also acknowledged.
no-problem/0002/gr-qc0002046.html
ar5iv
text
# Inertia in the Structure of Four-dimensional Space A.M. Gevorkian<sup>1</sup><sup>1</sup>1e-mail: dealing51@hotmail.com , R. A. Gevorkian\* Institute of Physical Research, 378410, Ashtarak-2, Republic of Armenia \*Institute of Radiophysics and Electronics, 378410, Ashtarak-2, Republic of Armenia Abstract 1. Following Riemann, Minkowski and Einstein, equations of the inert field are, for the first time, found geometrically in covariant form. 2. In the weak-field approximation, the Law of Inertia in a material space (as opposed to absolute space) is obtained for the first time. A consequence is the formulation of Mach’s principle. 3. Analogous to Einstein’s expression for the gravitational field, $`c=c_0\left(1-{\displaystyle \frac{2GM}{c^2r}}\right)`$, Compton’s formula is obtained for the inert field: $`r={\displaystyle \frac{h}{mc}}\left(1-l_0\mathrm{cos}\theta \right)`$. 4. For the first time transcendental equations are obtained, one of whose solutions corresponds to the value of the magnetic charge of Dirac, or to the fine-structure constant. Minkowski’s work for the first time reduced the solution of physical problems to geometrical problems. 1. Such physical theories as kinematics and the theory of inertial systems are obtained on the basis of flat space through the equation $$R_{iklm}=0$$ (1) for three-dimensional Euclidean as well as for four-dimensional pseudo-Euclidean spaces. 2. $$G_{ik}=R_{ik}-\frac{1}{2}Rg_{ik}$$ (2) is presented as an object of four-dimensional pseudo-Riemannian geometry (Einstein) and is determined by the formula $$K=\frac{G_{ik}\xi ^i\xi ^k}{\xi ^i\xi _i}$$ (3) where $`K`$ is the mean scalar curvature of the three-dimensional subspace which is orthogonal to an arbitrarily specified vector $`\xi ^i`$. The tensor $`G_{ik}`$ is called the conservative Hilbert-Einstein tensor and plays a fundamental role in the structure of the field equations of the General Theory of Relativity (GTR). Let us discuss the formula $$G_{ik}=0$$ (4) Einstein predicted that this equation might be a basis for a more general physical theory in case it described electromagnetic fields along with gravitational fields. It is true that this equation is insufficient for describing all physical fields. It describes only gravitational fields outside of the source. From the point of view of geometry this equation is a necessary but not a sufficient condition for flat space. For a precise description of flat space we will try to introduce, along with equation (4), one more geometric equation. The starting point of the unified field theory<sup>2</sup><sup>2</sup>2The inert field is included with the field of gravitation. is the fact that equation (4) alone is insufficient for describing all phenomena. Therefore the problem arises of finding a mathematical object which, after being added to $`G_{ik}`$, will allow us to obtain the structure of a space describing the physical situation completely. Gravitation can be described either by introducing the gravitational field into flat space, or by discussing a geometrical object in curved four-dimensional space without the gravitational field. The latter description is the geometric interpretation of gravitation. Hence, if the deformation which creates the curvature is excluded, we will obtain flat space. 
*Mathematical Part* Let us introduce the tensors $`\overline{G}_{ik}`$, $`P_{ik}`$ in the following way: $$G_{ik}+\frac{\eta _{ik}-g_{ik}}{\xi ^i\xi _i}=P_{ik}$$ $$\overline{G}_{ik}-\frac{\eta _{ik}-g_{ik}}{\xi ^i\xi _i}=P_{ik}$$ (5) $`\overline{G}_{ik}`$ is the counter-tensor of $`G_{ik}`$, which describes the counter-space relative to the space determined by $`G_{ik}`$. $`\eta _{ik}`$, $`g_{ik}`$ are the metric tensors of the flat and the curved space respectively, and $`\xi ^i`$ is the initially specified vector. Geometrically, (5) means that two Einstein spaces $`\overline{G}`$ and $`G`$ are called mutually counter if the following is realized: in each point of space the ratio of the mean scalar curvatures of the three-dimensional subspaces orthogonal to the initially specified vector $`\xi ^i`$ depends only on the modulus of that vector, and both spaces have a common coordination. $$\frac{K}{\overline{K}}=\frac{e_ie^i}{\xi ^i\xi _i}$$ (6) where $`e_i`$ is the unit vector of the vector $`\xi _i`$. The mean scalar curvature of the three-dimensional subspace is determined by formula (3), and therefore (6) is written as $$\xi ^i\xi _i\,G_{ik}\xi ^i\xi ^k=e_ie^i\,\overline{G}_{ik}\xi ^i\xi ^k$$ (7) Hence we can determine $$G_{ik}-2\frac{g_{ik}}{\xi ^i\xi _i}=\overline{G}_{ik}-2\frac{\eta _{ik}}{\xi ^i\xi _i}$$ $$G_{ik}+\frac{\eta _{ik}-g_{ik}}{\xi ^i\xi _i}=\overline{G}_{ik}-\frac{\eta _{ik}-g_{ik}}{\xi ^i\xi _i}$$ (8) Let us introduce the operator $`()^n`$ which transforms space into counter-space: the tensor $`G_{ik}`$ goes over into the tensor $`\overline{G}_{ik}`$, and the sign is changed into the opposite one. An even number of counter-operations transforms the space into itself, and an odd number of counter-operations transforms the space into the counter-space. If the left side is denoted as the tensor $`P_{ik}`$ and the right side as the counter-tensor $`\left(-1\right)P_{ik}`$, then equation (8) can be written as $$P_{ik}=\left(-1\right)P_{ik}$$ (9) Theorem 1. For the space to be flat it is necessary and sufficient that the tensor $`G_{ik}`$ and the counter-tensor $`\overline{G}_{ik}`$ be equal. Proof. Necessity: from the equation system (5) we obtain $$G_{ik}-\overline{G}_{ik}+2\frac{\eta _{ik}-g_{ik}}{\xi ^i\xi _i}=0$$ (10) Sufficiency: if $`G_{ik}-\overline{G}_{ik}=0`$ then $`\eta _{ik}-g_{ik}=0`$ and the space is flat; and vice versa, if $`\eta _{ik}-g_{ik}=0`$, then $`G_{ik}=\overline{G}_{ik}`$. The theorem is proved. From the system (5) follows the equation $$G_{ik}+\overline{G}_{ik}=2P_{ik}$$ (11) If $`\overline{G}_{ik}=0`$, then we obtain the already familiar structure of the Einstein equation, if only we substitute $`2P_{ik}`$ by the tensor of energy, impulse and tension. Theorem 2. For the space to be flat, it is necessary and sufficient that $`P_{ik}=0`$. Equation (5) is then written as $$G_{ik}+\frac{\eta _{ik}-g_{ik}}{\xi ^i\xi _i}=0$$ (12) In the example of spherical symmetry, equation (12) has a flat solution (the Minkowski metric). Consequence: for flat space we have $$G_{ik}-\overline{G}_{ik}=0$$ $$G_{ik}+\overline{G}_{ik}=0$$ Hence, for the space to be flat, it is necessary and sufficient that $$G_{ik}=0,\qquad \overline{G}_{ik}=0$$ (13) i.e. that space and counter-space coincide and be equal to zero. Thus the sufficiency condition $`\overline{G}_{ik}=0`$ is added to the necessity condition $`G_{ik}=0`$ of flat space. If we look at equation (11) from the mathematical point of view, then the tensor $`2P_{ik}`$ is a source of curvature, or a source of deformation. However!.. 
Expression (10) can be decomposed in two ways into two equivalent pairs of equations: $$G_{ik}=-2\frac{\eta _{ik}-g_{ik}}{\xi ^i\xi _i},\text{ }\overline{G}_{ik}=0\text{ (a) (first way)}$$ $$\overline{G}_{ik}=2\frac{\eta _{ik}-g_{ik}}{\xi ^i\xi _i},\text{ }G_{ik}=0\text{ (b) (second way)}$$ (14) *Physical Part* The first pair of equations (14,a) describes the internal problem of gravitation, or the external problem of inertia, and the second pair of equations (14,b) vice versa. Based on the above, the full problem of gravitation can be formulated: $$G_{ik}=0$$ $$G_{ik}=-2\frac{\eta _{ik}-g_{ik}}{\xi ^i\xi _i}$$ (15) where the first equation is the external problem, and the second equation is the internal problem. In the right side of the second equation of system (15), the tensor of energy, impulse and tension is replaced by $`T_{ik}=-2(\eta _{ik}-g_{ik})/(\xi ^i\xi _i)`$, a tensor of geometric origin (the magnitude of the deformation of space). The solution of system (15) will be sought for spherical symmetry, i.e. for a centrally symmetric field. The solution of the first equation of (15) is the famous Schwarzschild solution, expressed by the metric $$ds^2=\left(1-\frac{r_{gr}}{r}\right)c^2dt^2-r^2\left(\mathrm{sin}^2\theta d\phi ^2+d\theta ^2\right)-\frac{dr^2}{1-r_{gr}/r}$$ (16) while the second equation of system (15), which describes the gravitational field inside the source, is expressed (calculated by components of the tensor) by the system of equations $$e^{\lambda }\left(\frac{1}{r}\frac{\partial \lambda }{\partial r}-\frac{1}{r^2}\right)+\frac{1}{r^2}=0$$ $$\frac{\partial \lambda }{\partial t}=0$$ (17) where $$g_{00}=\mathrm{exp}\left(\lambda \right)$$ $$g_{33}=-\mathrm{exp}\left(-\lambda \right)$$ The solution is given by the metric $$ds^2=\left(1-\frac{r}{r_k}\right)c^2dt^2-r^2\left(\mathrm{sin}^2\theta d\phi ^2+d\theta ^2\right)-\frac{dr^2}{1-r/r_k}$$ (18) For small distances it reduces to the metric of ordinary pseudo-Euclidean geometry. The remaining constant is determined from the condition that the internal and the external solutions coincide at the border. If in the Schwarzschild solution that constant is expressed as the gravitational radius and is a geometrical measure of the active gravitational mass, then for (18) that constant represents the inert radius and expresses the geometrical measure of the active inert mass. From the condition of coincidence at the border of the internal and the external solutions, we have $$1-\frac{r_{gr}}{r}=1-\frac{r}{r_k},\text{ or }r_{gr}r_k=r^2=r_0^2,\text{ }\frac{Gm_{gr}}{c^2}r_k=r_0^2$$ If the constant $`r_0`$ is the Planck length (expressed through the natural constants), then for $`r_k`$ we obtain the expression $$r_k=\frac{\hbar }{mc}$$ Mass is introduced into physics with different notions: first, as inert resistance, the passive inert mass $`m_{PI}`$; and second, as a coupling constant which shows how strongly the gravitational field $`\phi `$ affects the body. This coupling constant is the passive gravitational mass $`m_{PG}`$ (the mass of a test body). And finally, the third notion is the active gravitational mass $`m_{AG}`$, as gravitational charge or as the intensity of the source of the gravitational field. The first two notions are included in the equation of motion: $$m_{PI}\frac{d^2x^i}{dt^2}=-m_{PG}\frac{d\phi }{dx^i}$$ The third notion is included in the gravitational potential. 
The principle of equivalence assumes the universal law $$m_{in}=m_{gr},\qquad m_{PI}=m_{PG}\text{ (c)}$$ Note that in formulas where the metric coefficients represent potentials of specific force fields, the gravitational radius enters, which is determined by the active gravitational mass ($`m_{AG}`$). In the Schwarzschild solution these are the potentials of the gravitational field. The metric coefficients of the counter-space are interpreted as potentials of the inert field. The mass $`m_{AI}`$, which enters the Compton radius, is the fourth notion of mass. It is the center of the dispersion of light and enters the structure of the metric tensor of the counter-space $`\overline{g}_{ik}`$. The fourth notion of mass is presented as the active inert mass. This was noted by Sciama. The equality between the active gravitational and the active inert masses, which are present in formulas (16) and (18) respectively, is formulated as the active principle of equivalence: $$m_{gr}=m_{in}$$ As a consequence, $`r_0=\left({\displaystyle \frac{G\hbar }{c^3}}\right)^{1/2}`$ is the formula expressing the fundamental meaning of length, provided that $`r_{gr}=r_{in}=r_0`$. From the condition of coincidence at the border of the internal and the external solutions, (16) and (18) coincide at the point $`r_0`$ and are expressed as $$ds^2=\left(1-\frac{m}{m_0}\right)c^2dt^2-r^2\left(\mathrm{sin}^2\theta d\phi ^2+d\theta ^2\right)-\frac{dr^2}{1-m/m_0}$$ (19) where $`m_0=\left({\displaystyle \frac{\hbar c}{G}}\right)^{1/2}`$ is the fundamental meaning of mass. Thus, if the Schwarzschild metric describes the external gravitational field with source $`r_{gr}`$, then the metric (18) describes the external inert field with source $`r_k`$. The gravitational and the inert field are linked through the active principle of equivalence in the counter-projection. Let us consider particular cases. 1. Approximation of a weak field. The expression for the component $`g_{00}`$ of the metric tensor $$g_{00}=1-\frac{r}{r_k}$$ describes the potential of the field, and the force is determined by the formula $$F=mc^2\frac{dg_{00}}{dr},\text{ }F=-\frac{mc^2}{\hbar /Mc}=-\frac{Mmc^3}{\hbar }\frac{\stackrel{}{r}_0}{\left|r_0\right|}$$ (20) where $`M`$ is the active inert mass, $`m`$ is the mass of the test body, and $`\stackrel{}{r}_0`$ is the unit vector. In formula (20) the force $`F`$ does not depend on the distance, which means that in the formation of that force all components and fragments of the Universe participate equally. And if the Universe is homogeneous and isotropic, then, due to $`M=0`$ (the effective mass), this force is $`F=0`$. Newton’s first law is obtained: the theory of inertial systems. (By the way, in the same approximation the Law of Terrestrial Gravitation is obtained from Schwarzschild’s solution.) The forces of inertia are so familiar that it is very uncommon to think about such a problem. Despite this, a lot has been written and said about it. Here we would like to discuss the sources of the inertial forces. According to Newton, the source of the inertial forces is absolute space. This means that absolute space can influence matter, while the opposite influence from the matter is excluded. Thus, an interaction between absolute space and material bodies does not exist. This rather abstract point of view corresponds to the interpretation of physical interactions on the basis of the field theory. Therefore, a result of the proposed theory is that the forces of inertia acting on a body depend on the physical qualities of space and of all the members filling the space. 
Basically, Mach's principle is formulated thus: the inert qualities of an object are determined by the distribution of mass-energy in the whole space. On the basis of Mach's principle and the active principle of equivalence, as well as on the basis of Riemann's idea that the geometry of space corresponds to physics and plays an essential role in it, we were able to build this theory. Rotation of a body relative to the system of static stars is equivalent to the rotation of the stars around the body. In both cases the relative motion is the same. This is relativity according to Mach and Berkeley. Let us imagine a body located inside a massive sphere filled with the members of the Metagalaxy. If the body suddenly acquires an acceleration, then this acceleration is equivalent to an acceleration of the whole Universe relative to the body. In this process all members of the Universe participate equally, regardless of their distance. If the Universe is homogeneous and isotropic, then such an acceleration means, for the body, a distortion of local isotropy (rotation) or of local homogeneity (uniform motion). Forces of inertia arise. Mach proposed a viewpoint similar to this ideology. A system is examined in the Universe in which a large amount of matter exists at great distances. Centrifugal forces occur due to real rotation around a real rotation axis, and thus the local isotropy is distorted in the system of the Universe. The Coriolis forces occurring in a rotating system of coordinates are real forces and are created by the rotation of the whole Universe around the considered body. 2. Let us discuss the particular case when the metric is given on a three-dimensional sphere and is determined by the formula: $$ds^2=r^2\left(\mathrm{sin}^2\theta d\phi ^2+d\theta ^2\right)$$ (21) This is equivalent to the motion of a three-dimensional spherical surface in the radial direction in the four-dimensional pseudo-Riemannian space (18): $$\left(1-\frac{r}{r_k}\right)c^2dt^2-\frac{dr^2}{1-r/r_k}=0$$ or $$r=r_k\left(1-\mathrm{cos}\theta \right),\text{ }r=\frac{h}{mc}\left(1-\mathrm{cos}\theta \right)$$ (22) Compton's formula for the dispersion of light on charged particles is recovered if $`{\displaystyle \frac{v}{c}}=\mathrm{cos}\theta `$; here $`r_k={\displaystyle \frac{h}{mc}}`$ is the Compton, or inertial, radius. In the Schwarzschild solution this particular case leads to Einstein's formula , obtained before the creation of GTR: $$c=c_0\left(1-\frac{2GM}{c^2r}\right)$$ (23) The observed Compton effect on charged particles is an expression of the active inert mass. For large $`m`$ this effect is not observed because of its small value.
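A small numerical sketch of the Compton relation (22) may be helpful; it evaluates the wavelength shift $`r_k(1-\mathrm{cos}\theta )`$ for an electron (standard constants assumed; the scattering angles are arbitrary examples):

```python
import math

h = 6.626e-34      # Planck constant, J s
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s

r_k = h / (m_e * c)   # Compton radius of the electron, ~2.43e-12 m

def compton_shift(theta_deg):
    """Wavelength shift for scattering angle theta, eq. (22)."""
    return r_k * (1.0 - math.cos(math.radians(theta_deg)))

for theta in (0, 90, 180):
    print(f"theta = {theta:3d} deg -> shift = {compton_shift(theta):.3e} m")
# the backscatter (180 deg) shift is 2*r_k; it is negligible for large masses
```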
In the Schwarzschild metric, when $`r\to r_{gr}`$, all processes on the body are "frozen" in relation to the external observer, and a collapse occurs which creates a frozen body; it sends no signals to the surrounding environment and interacts with the external world only through its static gravitational field. Such a formation is called a gravitational black hole, or a gravitational collapse. In metric (18), when $`r\to r_k`$, all processes on the body are frozen in relation to the internal observer. A frozen inert black hole is formed, which interacts only through its inert field. The source of the inert field is the equivalent mass of the energy of the gravitational field. Analogously, the source of the gravitational field is the equivalent mass of the energy of the inert field. Suppose $`r\to r_{gr},r_k`$; then the body lies within the gravitational as well as the inertial radius. Such a situation occurs only at the point $`r_0=\left({\displaystyle \frac{G\hbar }{c^3}}\right)^{1/2}`$, which is the Planck length. This point $`r_0`$ appears in the metric of the four-dimensional pseudo-Riemannian space as an absolute and special point. A body so determined may be considered both blind and deaf. If the gravitational and inert fields are created by spherical bodies, then their full mass is expressed as $$m=\frac{4\pi }{c^3}\int P_0^0r^2𝑑r$$ (24) For the gravitational field the tensor is determined by the component $$P_0^0=\frac{2r_0}{r^3}$$ (25) and for the field of inertia by $$\overline{P}_0^0=\frac{2}{rr_0}$$ (26) which determine the limits of integration. The dead particle with radius $`r_0=\left({\displaystyle \frac{G\hbar }{c^3}}\right)^{1/2}`$ and the corresponding mass creates no excitation in the space. The latter remains relativistically flat, although the space is, as it were, filled with dead mass (blind and deaf particles). Such virtual particles are the foundations on which real particles are born, by localizing equivalent energy in a given volume of space through exciting it. That is why it is assumed that during the formation of the particle the limits of integration are changed from $`1`$ to $`r`$ on the one side, and from $`1`$ to $`1/r`$ on the other. Thus, $$m=\int _{1/r}^{r}\frac{2r_0}{r^3}r^2𝑑r,\text{ with }x=\frac{r_0}{r}$$ for the source of the gravitational field $$m=\int _{1/x}^{x}\frac{2\,dx}{x}=2\mathrm{ln}\left|x^2\right|$$ (27) and for the source of the inert field $$\overline{m}=\int _{1/x}^{x}x𝑑x=\frac{1}{2}\left(x^2-\frac{1}{x^2}\right)$$ (28) Using the active principle of equivalence again, we obtain $$x^4-1=4\left|x^2\right|\mathrm{ln}\left|x^2\right|$$ (29) A transcendental equation is obtained, whose roots are $$\left|x_0\right|^2=1,\text{ }x_1=x_0\alpha ^{1/4}$$ and their reciprocal values. (A numerical sketch of the roots of (29) is given after the reference list below.) We think that $`x_1^4=j`$ is the value of the Dirac monopole; if the electric charge $`e^2=1`$, then $`2j^2=\alpha ^2`$ is the value of the fine-structure constant. To conclude, we would like to mention that the physical consequences of this work, including cosmological ones, are very interesting and will be discussed in a separate article. Literature 1. H. Minkowski, The Principle of Relativity, Dover Publications, New York (1952) 2. A. Einstein, Théorie unitaire du champ physique, Ann. Inst. H. Poincaré, 1, 1-24 (1930) 3. A. Einstein, Ann. Phys., 49, 769 (1916) 4. K. Schwarzschild, Sitzungsber. Preuss. Akad. Wiss., s. 424 (1916) 5. D. W. Sciama, Roy. Astron. Soc. Monthly Notices, 113, 34 (1953) 6. I. Newton, Philosophiae Naturalis Principia Mathematica, University of California Press, p. 546 (1966) 7. E. Mach, The Science of Mechanics, 2nd ed., Open Court Publ. Co. (1893) 8. A. Einstein, Ann. Phys., 35, 898 (1911)
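As promised above, here is a minimal numerical sketch of the roots of the transcendental equation (29). Substituting $`y=x^2`$ turns it into $`y^2-1=4y\mathrm{ln}y`$; besides the exact root $`y=1`$, the remaining roots come in reciprocal pairs (the bracketing intervals below were chosen by inspection):

```python
import math
from scipy.optimize import brentq

# Eq. (29) with y = x^2:  f(y) = y^2 - 1 - 4 y ln(y) = 0
def f(y):
    return y * y - 1.0 - 4.0 * y * math.log(y)

print(f(1.0))                 # y = 1, i.e. |x_0|^2 = 1, is an exact root

# One can verify that f(1/y) = -f(y)/y^2, so nontrivial roots pair up as y and 1/y:
y_hi = brentq(f, 2.0, 20.0)   # root above 1
y_lo = brentq(f, 0.01, 0.5)   # its reciprocal partner
print(y_hi, y_lo, y_hi * y_lo)  # the product is 1 to machine precision
```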
# Cosmic Equation of State, Quintessence and Decaying Dark Matter

Institute: ESO, Schwarzchildstrasse 2, 85748 Garching b. München, Germany. Present address: 03, impasse de la Grande Boucherie, F-67000 Strasbourg, France.

(Received ……; accepted ……)

## Abstract

If CDM particles decay and their lifetime is comparable to the age of the Universe, they can modify its equation of state. By comparing the results of numerical simulations with high-redshift SN-Ia observations, we show that this hypothesis is consistent with present data. Fitting the simplest quintessence models with constant $`w_q`$ to the data leads to $`w_q\lesssim -1`$. We show that a universe with a cosmological constant or quintessence matter with $`w_q\geq -1`$ and a decaying Dark Matter has an effective $`w_q<-1`$ and fits SN data better than stable CDM or quintessence models with $`w_q>-1`$.

There are at least two motivations for the existence of a Decaying Dark Matter (DDM). If R-parity in SUSY models is not strictly conserved, the LSP, which is one of the best candidates for DM, can decay to Standard Model particles (Banks et al. 1995). Violation of this symmetry is one of the many ways of providing neutrinos with a very small mass and a large mixing angle. Another motivation is the search for sources of Ultra High Energy Cosmic Rays (UHECRs) (see Yoshida & Dai 1998 for a review of their detection, and Blandford 1999 and Bhattacharjee & Sigl 1998 for conventional and exotic sources, respectively). In this case, DDM must be composed of ultra-heavy particles with $`M_{DM}\sim 10^{22}`$–$`10^{25}`$ eV. In a recent work (Ziaeepour 2000) we have shown that the lifetime of UHDM (Ultra Heavy Dark Matter) can be relatively short, i.e. $`\tau \sim 10`$–$`100\tau _0`$, where $`\tau _0`$ is the age of the Universe. Here we compare the prediction of this simulation for the Cosmic Equation of State (CES) with the observation of high-redshift SN-Ia. For details of the simulation we refer the reader to Ziaeepour (2000). In summary, the decay of UHDM is assumed to be like the hadronization of two gluon jets. The decay remnants interact with cosmic backgrounds, notably the CMB, IR, and relic neutrinos, lose their energy, and leave a high-energy background of stable species, i.e. $`e^\pm `$, $`p^\pm `$, $`\nu `$, $`\overline{\nu }`$, and $`\gamma `$. We solve the Einstein-Boltzmann equations to determine the energy spectrum of the remnants. The results of Ziaeepour (2000) show that in a homogeneous universe, even the short lifetime mentioned above cannot explain the observed flux of UHECRs. The clumping of DM in the Galactic Halo, however, limits the possible age/contribution. These parameters are degenerate and we cannot separate them. For simplicity, we assume that CDM is entirely composed of DDM and limit the lifetime. Fig. 1 shows the evolution of the energy density $`T^{00}(z)\equiv \rho (z)`$ at low and medium redshifts in a flat universe with and without a cosmological constant. As expected, the effect of DDM is more significant in a matter-dominated universe, i.e. when $`\mathrm{\Lambda }=0`$. For a given cosmology, the lifetime of the DDM is the only parameter that significantly affects the evolution of $`\rho `$. For the same lifetime, the difference between the $`M_{DM}=10^{12}`$ eV and $`M_{DM}=10^{24}`$ eV cases is only $`0.4\%`$. Consequently, in the following we neglect the effect of the DM mass.
For the same cosmological model and initial conditions, if the DM decays, the matter density at $`z=0`$ is smaller than when it is stable, because the decay remnants remain highly relativistic even after losing part of their energy. Their density dilutes more rapidly with the expansion of the Universe than CDM and decreases the total matter density. Consequently, the relative contribution of the cosmological constant increases. This process mimics a quintessence model, i.e. a changing cosmological constant (Peebles & Ratra 1988; Zlatev, Wang & Steinhardt 1999; see Sahni & Starobinsky 1999 for a recent review). However, the equation of state of this model has an exponent $`w_q<-1`$, in contrast with the prediction of scalar-field models with a positive potential (see the appendix for an approximate analytical proof). The most direct way to determine the cosmological densities and the equation of state is the observation of SN-Ia as standard candles. It is based on the measurement of the apparent magnitude of the maximum of the SN lightcurve (Perlmutter et al. 1997, 1999; Riess et al. 1998). After correction for various observational and intrinsic variations, like the K-correction, the width-luminosity relation, reddening, and Galactic extinction, it is assumed that their magnitude is universal. Therefore the difference in apparent magnitude is related only to the difference in distance, and consequently to the cosmological parameters. The apparent magnitude of an object, $`m(z)`$, is related to its absolute magnitude $`M`$ by: $$m(z)=M+25+5\mathrm{log}D_L$$ (1) where $`D_L`$ is the Hubble-constant-free luminosity distance: $$D_L=\frac{(z+1)}{\sqrt{|\mathrm{\Omega }_R|}}𝒮\left(\sqrt{|\mathrm{\Omega }_R|}\int _0^z\frac{dz^{\prime }}{E(z^{\prime })}\right)$$ (2) $$𝒮(x)=\begin{array}{cc}\mathrm{sinh}(x)\hfill & \mathrm{\Omega }_R>0\text{,}\hfill \\ x\hfill & \mathrm{\Omega }_R=0\text{,}\hfill \\ \mathrm{sin}(x)\hfill & \mathrm{\Omega }_R<0\text{.}\hfill \end{array}$$ (3) $`E(z)`$ $`=`$ $`{\displaystyle \frac{H(z)}{H_0}}.`$ (4) $`H^2(z)`$ $`=`$ $`{\displaystyle \frac{8\pi G}{3}}T^{00}(z)+{\displaystyle \frac{\mathrm{\Lambda }}{3}}.`$ (5) Here we only consider flat cosmologies.
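To make the distance relations concrete, here is a minimal sketch of eqs. (1)–(5) for the flat case ($`\mathrm{\Omega }_R=0`$, $`𝒮(x)=x`$). The $`E(z)`$ below is a simple stable-CDM plus $`\mathrm{\Lambda }`$ form chosen only for illustration; in our fits $`H(z)`$ instead comes from the simulated $`T^{00}(z)`$, and the constant absorbing $`M`$, the $`+25`$ term and the $`H_0`$ dependence is treated as a free parameter (the value 23.9 below is merely indicative):

```python
import numpy as np
from scipy.integrate import quad

def E(z, Omega_M=0.3, Omega_L=0.7):
    """H(z)/H_0 for a flat, stable-CDM universe (illustration only)."""
    return np.sqrt(Omega_M * (1.0 + z)**3 + Omega_L)

def D_L(z, **kw):
    """Hubble-constant-free luminosity distance, eq. (2) with S(x) = x."""
    integral, _ = quad(lambda zp: 1.0 / E(zp, **kw), 0.0, z)
    return (1.0 + z) * integral

def m_apparent(z, script_M=23.9, **kw):
    """Eq. (1), with M + 25 and the 5 log10(c/H_0) factor folded into script_M."""
    return script_M + 5.0 * np.log10(D_L(z, **kw))

for z in (0.1, 0.5, 1.0):
    print(f"z = {z:.1f}:  D_L = {D_L(z):.3f},  m = {m_apparent(z):.2f}")
```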
We use the published results of the Supernova Cosmology Project (Perlmutter et al. 1999) for the high-redshift supernovae and the Calan-Tololo sample (Hamuy et al. 1996) for the low-redshift ones, and compare them with our simulation. From these data sets we eliminate the 4 SNe with the largest residuals and stretch factors, as explained in Perlmutter et al. (1999) (i.e. we use the objects used in their fit B). A minimum-$`\chi ^2`$ fit is applied to the data to extract the parameters of the cosmological models. In all fits described in this letter we consider $`M`$ as a free parameter and minimize the $`\chi ^2`$ with respect to it. Its variation in our fits stays within the acceptable range of $`\pm 0.17`$ (Perlmutter et al. 1997). We have restricted our calculation to a range of parameters close to the best fit of Perlmutter et al. (1999), i.e. $`2.38\times 10^{-11}\le \rho _\mathrm{\Lambda }\equiv \frac{\mathrm{\Lambda }}{8\pi G}\le 3.17\times 10^{-11}`$ eV<sup>4</sup>. The reason why we use $`\rho _\mathrm{\Lambda }`$ rather than $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is that the latter quantity depends on the equation of state and on the lifetime of the Dark Matter. The range of $`\rho _\mathrm{\Lambda }`$ given here is equivalent to $`0.6\le \mathrm{\Omega }_\mathrm{\Lambda }^{eq}\le 0.8`$ for a stable CDM and $`H_0=70`$ km s<sup>-1</sup> Mpc<sup>-1</sup> (we use the $`\mathrm{\Omega }_\mathrm{\Lambda }^{eq}`$ notation to distinguish this quantity from the real $`\mathrm{\Omega }_\mathrm{\Lambda }`$). Fig. 2 shows the residuals of the best fit to the DDM simulation. Although up to the $`1`$-$`\sigma `$ uncertainty all models with stable or decaying DM with $`5\tau _0\le \tau \le 50\tau _0`$ and $`0.68\le \mathrm{\Omega }_\mathrm{\Lambda }^{eq}\le 0.72`$ are compatible with the data, a decaying DM with $`\tau \sim 5\tau _0`$ systematically fits the data better than a stable DM with the same $`\mathrm{\Omega }_\mathrm{\Lambda }^{eq}`$. Models with $`\mathrm{\Lambda }=0`$ are ruled out at more than the $`99\%`$ confidence level. In fitting the results of the DM decay simulation to the data we have directly used equation (5), without defining any analytical form for the evolution of $`T^{00}(z)`$. This is not the way data are usually fitted to cosmological models (Perlmutter et al. 1997; Riess et al. 1998; Garnavich et al. 1998). Consequently, we have also fitted an analytical model to the simulation for $`z<1`$, as this is the redshift range of the available data. It includes a stable DM and a quintessence matter. Its evolution equation is: $$H^2(z)=\frac{8\pi G}{3}(T_{st}^{00}+\mathrm{\Omega }_q(z+1)^{3(w_q+1)}).$$ (6) The term $`T_{st}^{00}`$ is obtained from our simulation when the DM is stable. In addition to CDM, it includes a small contribution from hot components, i.e. the CMB and relic neutrinos. For a given $`\mathrm{\Omega }_\mathrm{\Lambda }^{eq}`$ and $`\tau `$, the quintessence term is fitted to $`T^{00}-T_{st}^{00}+\frac{\mathrm{\Lambda }}{8\pi G}`$. (Footnote 2: The exact equivalent model should have the same form as (7). However, it is easy to verify that in this case the minimization of $`\chi ^2`$ has a trivial solution with $`w_q=-1`$, $`\mathrm{\Omega }_q=0`$. Only one equation remains for non-trivial solutions and it depends on both $`w_q`$ and $`\mathrm{\Omega }_q`$; consequently, there is an infinite number of solutions. The model we have used here generates a very good equivalent model to DDM, with less than $`2\%`$ error. Because the CDM and quintessence terms are not fitted together, $`\mathrm{\Omega }`$ is not exactly $`1`$.) The results of this fit are $`\mathrm{\Omega }_q`$ and $`w_q`$, which characterize an equivalent quintessence model for the corresponding DDM. The analytical model fits the simulation extremely well, and the absolute value of the relative residuals is less than $`0.2\%`$. Results for models within the $`1`$-$`\sigma `$ distance of the best fit are summarized in Table 1. In the next step, we fit an analytical model to the SN-Ia data. Its evolution equation is the following: $$H^2(z)=\frac{8\pi G}{3}((1-\mathrm{\Omega }_q)(z+1)^3+\mathrm{\Omega }_q(z+1)^{3(w_q+1)}).$$ (7) The aim of this exercise is to compare the DDM-equivalent quintessence models with the data. Fig. 3 shows the $`\chi ^2`$ of these fits as a function of $`w_q`$ for various values of $`\mathrm{\Omega }_q`$. The reason for using $`\chi ^2`$ rather than confidence levels is that it directly shows the goodness-of-fit. Since with the available data all relevant models are compatible up to $`1`$-$`\sigma `$, the error analysis is less important than the goodness-of-fit and its behavior in the parameter space.
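Schematically, the fits to eq. (7) proceed as in the following sketch, where the supernova arrays are small illustrative stand-ins (the real sample is described above) and the additive constant containing $`M`$ is minimized analytically at each grid point:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative stand-ins for the SN-Ia sample (redshift, magnitude, error):
z_sn = np.array([0.02, 0.10, 0.38, 0.50, 0.83])
m_sn = np.array([15.0, 18.6, 21.9, 22.6, 23.9])
sig = np.full(z_sn.size, 0.2)

def mu(z, Omega_q, w_q):
    """5 log10(D_L) for the flat model of eq. (7)."""
    E = lambda zp: np.sqrt((1 - Omega_q) * (1 + zp)**3
                           + Omega_q * (1 + zp)**(3 * (w_q + 1)))
    I, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return 5.0 * np.log10((1.0 + z) * I)

def chi2(Omega_q, w_q):
    model = np.array([mu(z, Omega_q, w_q) for z in z_sn])
    w = 1.0 / sig**2
    offset = np.sum(w * (m_sn - model)) / np.sum(w)   # best-fit constant absorbing M
    return np.sum(w * (m_sn - model - offset)**2)

for w_q in (-0.8, -1.0, -1.2):
    print(f"w_q = {w_q:5.2f}:  chi2 = {chi2(0.7, w_q):.2f}")
```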
The models presented in Fig. 3 have the same $`\mathrm{\Omega }_q`$ as the equivalent quintessence models obtained from DDM and listed in Tab. 1. These latter models are shown too. In spite of the statistical closeness of all the fits, the systematic tendency of the minimum of $`\chi ^2`$ toward $`w_q<-1`$ when $`\mathrm{\Omega }_q<0.75`$ is evident. The minimum for models with $`\mathrm{\Omega }_q>0.75`$ has $`w_q>-1`$, but the fit is worse than in the former cases. Among the DDM models, the one with $`\mathrm{\Omega }_q=0.71`$ is very close to the best fit of the (7) models with the same $`\mathrm{\Omega }_q`$. Regarding errors, however, all these models except $`\mathrm{\Omega }_q=0.8`$ are $`1`$-$`\sigma `$ compatible with the data. One has to remark that $`\mathrm{\Omega }_q`$ and $`w_q`$ are not completely independent (see Footnote 2), and models with smaller $`\mathrm{\Omega }_q`$ and smaller $`w_q`$ have even smaller $`\chi ^2`$. In fact, the best fit corresponds to $`\mathrm{\Omega }_q=0.5`$, $`w_q=-2.6`$ with $`\chi ^2=61.33`$. The rejection of these models, however, is based on physical grounds. In fact, if the quintessence matter is a scalar field, to make such a model not only must its potential be negative, but its kinetic energy must also be comparable to the absolute value of the potential, and this is in contradiction with the very slow variation of the field. In addition, these models are unstable against perturbations. It is however possible to make models with $`w_q<-1`$, but they need an unconventional kinetic term (Caldwell 1999). These results are compatible with the analysis performed by Garnavich et al. (1998). However, based on the null energy condition (Wald 1984), they only consider models with $`w_q\geq -1`$. This condition should be satisfied by non-interacting matter and by the total energy-momentum tensor. As our example of a decaying matter shows, a component, or an equivalent component, of the energy-momentum tensor can have $`w_q<-1`$ when interactions are present. In conclusion, we have shown that a flat cosmological model including a decaying dark matter with $`\tau \sim 5\tau _0`$ and a cosmological constant or a quintessence matter with $`w_q\lesssim -1`$ at $`z<1`$ and $`\mathrm{\Omega }_q\sim 0.7`$ fits the SN-Ia data better than models with a stable DM or $`w_q>-1`$. The effect of a decaying dark matter on the Cosmic Equation of State (CES) is a distinctive signature that can hardly be mimicked by other phenomena, e.g. conventional sources of Cosmic Rays. It is an independent means of verifying the hypothesis of a decaying UHDM. In fact, if a decaying DM affects the CES significantly, it must be very heavy. Our simulation of a decaying DM with $`M\sim 10^{12}`$ eV and $`\tau =5\tau _0`$ leads to an over-production, by a few orders of magnitude, of the $`\gamma `$-ray background at $`E\sim 10^9`$–$`10^{11}`$ eV with respect to the EGRET observations (Sreekumar et al. 1998). Consequently, such a DM must have a lifetime much longer than $`5\tau _0`$. However, in this case it cannot leave a significant effect on the CES. The only other alternative for making a quintessence term in the CES with $`w_q`$ slightly smaller than $`-1`$ is a scalar field with a negative potential. Nevertheless, as most quintessence models originate from SUSY, the potential should be strictly positive. Even if a negative potential or unconventional models are not prohibitive, they rule out a large number of candidates. The present SN-Ia data are too scarce to distinguish with high precision between the various models.
However, our results are encouraging and give the hope that SN-Ia observations will help to better understand the nature of the Dark Matter, in addition to the cosmology of the Universe. Appendix: Here we use an approximate solution of (5) to find an analytical expression for the equivalent quintessence model of a cosmology with DDM and a cosmological constant. To a good precision, the total density of such models can be written as: $$\frac{\rho (z)}{\rho _c}\approx \mathrm{\Omega }_M(1+z)^3\mathrm{exp}\left(\frac{\tau _0-t}{\tau }\right)+\mathrm{\Omega }_{Hot}(1+z)^4+\mathrm{\Omega }_M(1+z)^4\left(1-\mathrm{exp}\left(\frac{\tau _0-t}{\tau }\right)\right)+\mathrm{\Omega }_\mathrm{\Lambda }.$$ (8) We assume a flat cosmology, i.e. $`\mathrm{\Omega }_M+\mathrm{\Omega }_\mathrm{\Lambda }=1`$ (ignoring the hot part). $`\rho _c`$ is the present critical density. If the DM is stable and we neglect the contribution of HDM, the expansion factor $`a(t)`$ is: $$\frac{a(t)}{a(\tau _0)}=\left[\frac{(B\mathrm{exp}(\alpha (t-\tau _0))-1)^2}{4AB\mathrm{exp}(\alpha (t-\tau _0))}\right]^{\frac{1}{3}}\equiv \frac{1}{1+z}.$$ (9) $`A`$ $`\equiv `$ $`{\displaystyle \frac{\mathrm{\Omega }_\mathrm{\Lambda }}{1-\mathrm{\Omega }_\mathrm{\Lambda }}},`$ (10) $`B`$ $`\equiv `$ $`{\displaystyle \frac{1+\sqrt{\mathrm{\Omega }_\mathrm{\Lambda }}}{1-\sqrt{\mathrm{\Omega }_\mathrm{\Lambda }}}},`$ (11) $`\alpha `$ $`\equiv `$ $`3H_0\sqrt{\mathrm{\Omega }_\mathrm{\Lambda }}.`$ (12) Using (9) as an approximation for $`\frac{a(t)}{a(\tau _0)}`$ when the DM decays slowly, (8) takes the following form: $`{\displaystyle \frac{\rho (z)}{\rho _c}}`$ $`\approx `$ $`\mathrm{\Omega }_M(1+z)^3C^{-\frac{1}{\alpha \tau }}+\mathrm{\Omega }_{Hot}(1+z)^4+\mathrm{\Omega }_M(1+z)^4(1-C^{-\frac{1}{\alpha \tau }})+\mathrm{\Omega }_\mathrm{\Lambda }.`$ (13) $`C`$ $`\equiv `$ $`{\displaystyle \frac{1}{B}}\left(1+{\displaystyle \frac{4A}{(1+z)^3}}-\sqrt{\left(1+{\displaystyle \frac{4A}{(1+z)^3}}\right)^2-1}\right).`$ (14) For a slowly decaying DDM, $`\alpha \tau \gg 1`$, and (13) becomes: $`{\displaystyle \frac{\rho (z)}{\rho _c}}`$ $`\approx `$ $`\mathrm{\Omega }_M(1+z)^3+\mathrm{\Omega }_{Hot}(1+z)^4+\mathrm{\Omega }_q(1+z)^{3\gamma _q},`$ (15) $`\mathrm{\Omega }_q(1+z)^{3\gamma _q}`$ $`\equiv `$ $`\mathrm{\Omega }_\mathrm{\Lambda }(1+{\displaystyle \frac{\mathrm{\Omega }_M}{\alpha \tau \mathrm{\Omega }_\mathrm{\Lambda }}}z(1+z)^3\mathrm{ln}C).`$ (16) Equation (16) is the definition of the equivalent quintessence matter. After its linearization: $$w_q\equiv \gamma _q-1\approx \frac{\mathrm{\Omega }_M(1+4A)(1-\sqrt{2A})}{3\alpha \tau \mathrm{\Omega }_\mathrm{\Lambda }B}-1.$$ (17) It is easy to see that in this approximation $`w_q<-1`$ if $`\mathrm{\Omega }_\mathrm{\Lambda }>\frac{1}{3}`$.
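For orientation, the following sketch evaluates the approximate exponent (17) for a few illustrative lifetimes (the $`H_0\tau `$ values are chosen arbitrarily; $`\tau \sim 5\tau _0`$ corresponds to $`H_0\tau `$ of order 5):

```python
import math

def w_q(Omega_L=0.7, H0_tau=5.0):
    """Approximate equivalent exponent, eq. (17); H0_tau = H_0 * tau."""
    Omega_M = 1.0 - Omega_L
    A = Omega_L / (1.0 - Omega_L)                             # eq. (10)
    B = (1 + math.sqrt(Omega_L)) / (1 - math.sqrt(Omega_L))   # eq. (11)
    alpha_tau = 3.0 * math.sqrt(Omega_L) * H0_tau             # eq. (12) times tau
    return (Omega_M * (1 + 4 * A) * (1 - math.sqrt(2 * A))
            / (3 * alpha_tau * Omega_L * B)) - 1.0

for t in (5.0, 10.0, 50.0):
    print(f"H0*tau = {t:5.1f} -> w_q = {w_q(H0_tau=t):.4f}")
# w_q < -1 whenever Omega_L > 1/3, approaching -1 for long lifetimes
```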
# Heliospheric, Astrospheric, and Interstellar Lyman-𝛼 Absorption Toward 36 Oph

Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.

## 1 Introduction

In 1996, high quality spectra of the very nearby star $`\alpha `$ Cen taken by the Goddard High Resolution Spectrograph instrument on board the Hubble Space Telescope (HST) yielded a serendipitous detection of neutral hydrogen in the outer heliosphere (Linsky & Wood, 1996, hereafter LW96), thereby providing a unique new way to observationally study the structure and internal properties of the heliosphere. Previous studies of heliospheric neutral hydrogen had relied primarily on Lyman-$`\alpha `$ backscatter measurements, which are mostly sensitive to very nearby H I (Bertaux et al., 1985; Quémerais et al., 1995). In the $`\alpha `$ Cen data, the heliospheric H I produces an absorption signature in the stellar Lyman-$`\alpha `$ line that allows it to be detected despite being highly blended with absorption from the local interstellar medium (LISM). LW96 demonstrated that the properties of the heliospheric H I inferred from the data are consistent with the predictions of heliospheric models (Baranov & Malama, 1993, 1995; Pauls, Zank, & Williams, 1995; Zank et al., 1996; Zank, 1999). Gayley et al. (1997) confirmed this result by directly comparing the data with the H I absorption predicted by the models, and they also found evidence that H I in the "astrosphere" of $`\alpha `$ Cen is also accounting for some of the non-LISM absorption observed in the Lyman-$`\alpha `$ line. The $`\alpha `$ Cen line of sight lies $`52^{\circ }`$ from the upwind direction of the interstellar flow through the heliosphere. On the upwind side, the heliospheric H I absorption is expected to be dominated by neutral hydrogen in the "hydrogen wall" (or "H-wall" for short), located between the heliopause and the solar bow shock. In this region, interactions with solar wind protons heat and decelerate the plasma component of the interstellar wind. Charge exchange with this plasma then heats, compresses, and decelerates the neutral hydrogen as well. The high temperature and decelerated velocity of the H I in the hydrogen wall both play important roles in allowing absorption from this material to be detectable despite being blended with an LISM absorption component of much larger column density. The high temperature broadens the H-wall absorption substantially, and the deceleration shifts the absorption away from the LISM absorption, resulting in a noticeable excess of Lyman-$`\alpha `$ absorption on the red side of the line. The deceleration should be largest in the upwind direction, so the Lyman-$`\alpha `$ signature of the H-wall should become even easier to detect if one looks closer to the upwind direction than the $`\alpha `$ Cen line of sight $`52^{\circ }`$ away. Thus, we searched for another target for HST that would be much closer to the upwind direction. We chose the nearby ($`d=5.5`$ pc) K0 V star 36 Oph A (=HD 155886), which is only $`12^{\circ }`$ from the upwind direction. Our main goals in observing this star are to confirm the detection of the solar hydrogen wall claimed in the analysis of the $`\alpha `$ Cen data, and to provide additional constraints on the properties of the H-wall.
## 2 Observations and Data Reduction

On 1999 October 10, we observed 36 Oph A with the Space Telescope Imaging Spectrograph (STIS) instrument on HST. The STIS instrument is described in detail by Kimble et al. (1998) and Woodgate et al. (1998). We observed the $`1160`$–$`1357`$ Å spectral region with the high-resolution E140H grating and the $`0.2^{\prime \prime }\times 0.2^{\prime \prime }`$ aperture, and we used two exposures with the E230H grating and the $`0.1^{\prime \prime }\times 0.2^{\prime \prime }`$ aperture to observe the $`2430`$–$`2943`$ Å spectral range. The former region contains the Lyman-$`\alpha `$ line at 1216 Å that is of primary interest to us, while the latter region contains interstellar lines of Mg II and Fe II. These metal lines provide important information on the properties of the LISM that can be used to constrain the LISM H I Lyman-$`\alpha `$ absorption, making it easier to separate from the heliospheric absorption. The data were reduced using the STIS team's CALSTIS software package written in IDL (Lindler, 1999). The reduction included the assignment of wavelengths using calibration spectra obtained during the course of the observations. A substantial amount of scattered light was clearly present in the saturated core of the Lyman-$`\alpha `$ line. This flux was removed from the data using the `ECHELLE_SCAT` routine in the CALSTIS package. A geocoronal Lyman-$`\alpha `$ emission feature in the middle of the interstellar H I absorption was fitted with a Gaussian, which was then removed by subtracting the fitted Gaussian from the data. The centroid of the Gaussian is at $`-26.6`$ km s<sup>-1</sup>, which agrees very well with the expected location of $`-26.4`$ km s<sup>-1</sup> based on the Earth's projected velocity toward the star at the time of observation. This confirms the accuracy of our wavelength calibration.
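The geocoronal removal step can be sketched as follows (the profile below is simulated for illustration; with the real data, `flux` would be the extracted, scattered-light-corrected spectrum around the saturated H I core):

```python
import numpy as np
from scipy.optimize import curve_fit

v = np.linspace(-60.0, 10.0, 400)                 # heliocentric velocity, km/s
rng = np.random.default_rng(1)

def gaussian(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

# Simulated airglow feature sitting on the zero-flux saturated core:
flux = gaussian(v, 0.8, -26.4, 4.0) + rng.normal(0.0, 0.02, v.size)

popt, _ = curve_fit(gaussian, v, flux, p0=[1.0, -25.0, 3.0])
flux_clean = flux - gaussian(v, *popt)            # subtract the fitted Gaussian
print(f"fitted centroid: {popt[1]:.1f} km/s")     # ~ -26.4, Earth's projected velocity
```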
## 3 Data Analysis

### 3.1 Single Component Fits

Besides H I Lyman-$`\alpha `$ at 1215.670 Å, there are only four interstellar absorption features apparent in our data that can be accurately measured: D I Lyman-$`\alpha `$ $`\lambda `$1215.339, Fe II $`\lambda `$2600.173, and the Mg II h & k lines at 2803.531 Å and 2796.352 Å, respectively. The rest wavelengths quoted above are vacuum wavelengths. These lines are all shown in Figure 1. None of the lines in Figure 1 shows any asymmetry that would indicate the presence of more than one absorption component, so all are fitted with a single component. The dotted lines in the figure are the fits before correction for instrumental broadening, and the thick solid lines that fit the data are the fits after the instrumental broadening correction is applied. The line spread functions used in these corrections were taken from Sahu et al. (1999). The oscillator strengths assumed in our fits are from Morton (1991). The parameters of the fits are listed in Table 1. In order to constrain the Mg II fits as much as possible, the two Mg II lines were fitted simultaneously, with both lines forced to have the same column density ($`N`$), Doppler parameter ($`b`$), and central velocity ($`v`$). We were concerned about the possibility that much of the residual flux present below the Mg II absorption lines may be scattered light. Unfortunately, there is currently no scattered light correction for E230H data like there is for E140H spectra (see above). We therefore tried fits with much of the possible scattered light flux subtracted from the spectrum to see the degree to which the fit parameters were affected. The uncertainties quoted in Table 1 include the systematic uncertainties we estimate due to the uncertain scattered light correction. Both LISM studies and in situ measurements of LISM material within the solar system suggest that the speed of the incoming interstellar wind is about 26 km s<sup>-1</sup> (Witte et al., 1993; Lallement et al., 1995). However, the LISM flow vector appears to be different for lines of sight toward the Galactic center, with a faster speed of 29.4 km s<sup>-1</sup>. Thus, it has been proposed that a different cloud lies in this direction, the so-called G cloud (Lallement & Bertin, 1992). If this interpretation is correct, the Sun must be very near the edge of the Local Interstellar Cloud (LIC), since all the LISM absorption detected by LW96 toward $`\alpha `$ Cen (which is roughly in the Galactic center direction) is at the expected velocity of the G cloud and none is at the velocity predicted for that line of sight by the LIC vector (see also Lallement et al., 1995). The outlines of the LIC and the G cloud in the Galactic plane are shown in Figure 2, as are the projected locations of the Sun, $`\alpha `$ Cen, and 36 Oph. The LIC outline is from Redfield & Linsky (2000). The G cloud shape is not well known, so the contour shown in Figure 2 is only a crude but plausible representation of this cloud. The velocity vectors of the LIC and G clouds are also shown in the figure. With Galactic coordinates of $`l=358^{\circ }`$, $`b=7^{\circ }`$, 36 Oph lies almost directly toward the Galactic center, very near the upwind directions of both the LIC and G cloud vectors. The flow velocities predicted for the 36 Oph line of sight by these two vectors are $`-25.1`$ and $`-28.4`$ km s<sup>-1</sup>, respectively. The LISM velocity measured toward 36 Oph agrees very well with the latter velocity (see Table 1), so we conclude that all of the LISM absorption we detect is from the G cloud. Inspection of Figure 1 reveals that there is no visible absorption component centered at the LIC velocity. Thus, our findings are very similar to those of LW96. However, the evidence for higher flow velocities toward the Galactic center, and the close proximity of the edge of the LIC, is particularly striking in our data, because even the projected LISM velocities that we directly observe are larger in magnitude than the 26 km s<sup>-1</sup> speed that has been measured for the LIC. We now try to estimate how far the Sun is from the edge of the LIC. We first need to estimate an upper limit for the H I column density, and the best way to do this is to determine upper limits for the D I and/or Mg II column densities and then calculate the H I upper limit based on the known abundance ratios in the LIC. For both D I and Mg II we repeat the fits shown in Figure 1, but with added LIC absorption components centered on the known LIC velocity and with Doppler parameters consistent with previous observations of LIC material ($`b_{\mathrm{D}\mathrm{I}}=8.2`$ km s<sup>-1</sup> and $`b_{\mathrm{Mg}\mathrm{II}}=3.0`$ km s<sup>-1</sup>). We experiment with different LIC/G cloud column density ratios to see at what point we believe the fits become unacceptable. One might think that the Mg II lines would provide the best constraints, because they are narrower and more optically thick than D I. However, the D I/Mg II ratio is larger in the LIC than in the G cloud by about a factor of 4 (see below).
Based on experiments with different D I fits, we estimate that the D I column density of the LIC is no more than 10% that of the G cloud, corresponding to an upper limit of $`N_{\mathrm{D}\mathrm{I}}<9\times 10^{11}`$ cm<sup>-2</sup>. Note that this corresponds to a LIC contribution of 2.5% to the Mg II lines. For LIC contributions greater than this, the $`\chi _\nu ^2`$ values of the fits become significantly worse, and the G cloud D I absorption becomes more and more blueshifted away from the expected velocity of the G cloud. (If we force it to be at the expected velocity, the quality of the fit degrades even more.) The LIC D/H value is $`1.5\times 10^{-5}`$ (Linsky, 1998; Linsky & Wood, 1998), so $`N_{\mathrm{H}\mathrm{I}}<6\times 10^{16}`$ cm<sup>-2</sup> for the LIC. Assuming a density of $`n_{\mathrm{H}\mathrm{I}}=0.10`$ cm<sup>-3</sup> (Linsky et al., 2000), the upper limit for the distance to the edge of the LIC is $`d_{edge}<0.19`$ pc for the 36 Oph line of sight, consistent with the $`d_{edge}=0.05`$ pc value suggested by the LIC model of Redfield & Linsky (2000) (see Fig. 2). The LIC material is moving toward the Sun with a speed of $`25.1`$ km s<sup>-1</sup> along this line of sight, so we will reach the edge in $`t_{edge}<7400`$ yrs. The Doppler parameters ($`b`$) listed in Table 1 are related to the temperature, $`T`$, and the nonthermal velocity, $`\xi `$, by the following equation: $$b^2=0.0165\frac{T}{A}+\xi ^2,$$ (1) where $`b`$ and $`\xi `$ are in units of km s<sup>-1</sup>, and $`A`$ is the atomic weight of the element in question. In Figure 3, the measured Doppler parameters of the Mg II, Fe II, and D I lines are used with equation (1) to derive curves of $`\xi `$ vs. $`T`$, with error bars. The shaded region in the figure is the area of overlap for these curves, which identifies the temperature and $`\xi `$ values of the observed LISM material. Our measured temperature and nonthermal velocity based on this analysis are $`T=5900\pm 500`$ K and $`\xi =2.2\pm 0.2`$ km s<sup>-1</sup>. This temperature is consistent with the $`T=5400\pm 500`$ K temperature observed toward $`\alpha `$ Cen, supporting the assertion of LW96 that the G cloud temperature is somewhat cooler than the LIC temperature of $`T=8000\pm 1000`$ K (Dring et al., 1997; Piskunov et al., 1997; Wood & Linsky, 1998). However, the nonthermal velocity toward 36 Oph appears to be somewhat larger than that observed toward $`\alpha `$ Cen ($`\xi =1.25\pm 0.25`$ km s<sup>-1</sup>).
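With two or more species, equation (1) is linear in the unknowns $`T`$ and $`\xi ^2`$ and can be solved directly. The sketch below uses two illustrative Doppler parameters (stand-ins for the Table 1 values, chosen to be consistent with the quoted $`T`$ and $`\xi `$):

```python
import numpy as np

# (atomic weight A, Doppler parameter b in km/s); the b values are illustrative
species = {"D I": (2.0, 7.3), "Mg II": (24.3, 3.0)}

# eq. (1): b^2 = 0.0165*T/A + xi^2 is linear in (T, xi^2)
A_mat = np.array([[0.0165 / A, 1.0] for A, b in species.values()])
rhs = np.array([b**2 for A, b in species.values()])

T, xi2 = np.linalg.solve(A_mat, rhs)
print(f"T ~ {T:.0f} K, xi ~ {np.sqrt(xi2):.2f} km/s")   # ~5900 K, ~2.2 km/s
```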
The relative abundances of D I, Mg II, and Fe II suggested by the column densities in Table 1 are consistent with the $`\alpha `$ Cen measurements. These results suggest that Mg and Fe are less depleted in the G cloud than in the LIC. For example, the D I/Mg II ratio is $`0.9\pm 0.4`$ toward 36 Oph and $`1.2\pm 0.2`$ toward $`\alpha `$ Cen, but is $`\sim 4`$ for the LIC (LW96; Dring et al., 1997; Piskunov et al., 1997), with the apparent exception of the Sirius line of sight, which has a D I/Mg II ratio of about 1.7 (Lallement et al., 1994; Bertin et al., 1995). In Figure 4, we present our single component fit to the H I Lyman-$`\alpha `$ line, combined with the D I Lyman-$`\alpha `$ fit from Figure 1. The parameters of the H I fit are provided in Table 1. In constructing this fit, we started with an assumed stellar Lyman-$`\alpha `$ profile based on a broadened version of the observed Mg II k line profile. We then performed an initial fit, altered the assumed stellar profile to improve the quality of the fit, and then attempted another fit. Yet another iteration of this process was required before arriving at the fit in Figure 4. This Lyman-$`\alpha `$ fitting technique has been used in many past analyses (LW96; Piskunov et al., 1997; Wood & Linsky, 1998). The argument for using the Mg II line as a starting point is that the Mg II k and Lyman-$`\alpha `$ lines are both highly optically thick chromospheric lines that have similar shapes in the solar spectrum. It should be stated, however, that for single component fits it actually matters little what initial model is used for the stellar Lyman-$`\alpha `$ profile. The initial profile is altered significantly before the final best fit is determined, and the final fit parameters are therefore independent of the initial assumptions about the shape of the stellar profile. This was demonstrated clearly in the $`\alpha `$ Cen analysis, where the Lyman-$`\alpha `$ lines of both members of the $`\alpha `$ Cen binary system were independently fitted with single absorption components and the derived interstellar parameters were the same (LW96). The residuals of the single component H I fit in Figure 4 suggest some minor systematic discrepancies, but the quality of the fit is not too bad. However, the main problem with the fit is that the velocity and Doppler parameter of the H I absorption are completely inconsistent with those of the other lines, D I in particular (see Table 1). The central velocity of H I is $`-25.9`$ km s<sup>-1</sup>, as opposed to the $`-28.4`$ km s<sup>-1</sup> average velocity observed for the other lines. The Doppler parameter of H I (14.34 km s<sup>-1</sup>) suggests temperatures of about 12,000 K, compared with the $`\sim 6000`$ K temperature determined mostly from D I (see Fig. 3). If a curve were plotted in Figure 3 for H I, it would be a nearly vertical line centered at about 12,000 K, which actually lies off the right edge of the figure, emphasizing just how discrepant H I is relative to D I and the other lines. These problems with H I are very reminiscent of those found for the $`\alpha `$ Cen line of sight (LW96). Collectively they represent the primary piece of evidence for a hydrogen wall contribution to the Lyman-$`\alpha `$ absorption toward both $`\alpha `$ Cen and 36 Oph, because the only way to resolve the discrepancies is to add a second absorption component to the H I fit, with properties that turn out to be consistent with those expected for the solar H-wall. The H I Lyman-$`\alpha `$ line of $`\alpha `$ Cen exhibits a $`+2.2`$ km s<sup>-1</sup> velocity discrepancy and about a $`+3000`$ K temperature discrepancy relative to the other lines (LW96). The 36 Oph discrepancies are in the same direction, but are even larger ($`+2.5`$ km s<sup>-1</sup> and $`+6000`$ K). This is consistent with our expectations, since the H-wall closer to the upwind direction should be more decelerated and may be a bit hotter (see, e.g., Zank et al., 1996). Since the LIC velocity is redshifted relative to the G cloud velocity, one might wonder whether LIC H I absorption might be responsible for the velocity discrepancy between H I and the other lines. In Figure 4 we show the LIC absorption associated with the $`N_\mathrm{H}<6\times 10^{16}`$ cm<sup>-2</sup> upper limit derived above (dotted line). Essentially all of the LIC absorption is well within the saturated core of the line, so the LIC will not have any observable effect on the Lyman-$`\alpha `$ line.
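For reference, the opacity profile entering such absorption-component fits is a Voigt function; the sketch below computes exp(−τ) for H I Lyman-$`\alpha `$ using the LISM parameters quoted later in the text (the atomic constants are the standard Lyman-$`\alpha `$ values; convolution with the STIS line spread function and the iterated stellar profile are omitted):

```python
import numpy as np
from scipy.special import wofz

def lya_tau(v, N, b, v0, f=0.4164, lam0=1215.67e-8, gamma=6.265e8):
    """H I Lyman-alpha optical depth vs. velocity (km/s).
    N in cm^-2, b in km/s; f, lam0 (cm) and gamma (s^-1) are line constants."""
    a = gamma * lam0 / (4.0 * np.pi * b * 1.0e5)     # damping parameter
    x = (v - v0) / b
    H = wofz(x + 1j * a).real                        # Voigt function H(a, x)
    tau0 = 1.497e-2 * N * f * lam0 / (b * 1.0e5)     # (pi e^2/m_e c)/sqrt(pi) prefactor
    return tau0 * H

v = np.linspace(-300.0, 300.0, 1201)
tau = lya_tau(v, N=10**17.85, b=10.11, v0=-28.4)
attenuation = np.exp(-tau)                           # multiplies the stellar profile
print(f"central optical depth ~ {tau.max():.1e}")    # heavily saturated core
```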
### 3.2 The Bisector Technique

Although single component H I fits are unique and well constrained, this is not necessarily the case for two component fits, as LW96 demonstrated in the $`\alpha `$ Cen analysis. Thus, before attempting a two component fit, we try to constrain the LISM H I column density by other means. Wood, Alexander, & Linsky (1996) measured an LISM column density toward $`ϵ`$ Indi by determining the amount of absorption in the wings of the Lyman-$`\alpha `$ line, based on the assumption that the far wings of the stellar Lyman-$`\alpha `$ profile should be centered on the rest frame of the star, as is the case for the Sun. This "bisector technique" only works when there is a substantial wavelength difference between the LISM absorption and the center of the stellar Lyman-$`\alpha `$ line, creating a situation where there is more absorption in one wing than in the other. This induces an apparent line shift in the wings, the magnitude of which depends on the LISM column density. This analysis technique could not be tried in the $`\alpha `$ Cen analysis, since there is no significant shift between the stellar emission and the LISM absorption, but in our 36 Oph data there is about a 30 km s<sup>-1</sup> shift between the two. The bisector technique relies on an accurate stellar radial velocity. The radial velocity listed for 36 Oph in the literature is $`-1`$ km s<sup>-1</sup> (Hirshfeld, Sinnott, & Ochsenbein, 1996). However, all the chromospheric lines in our spectra, including Mg II h & k and the O I triplet at 1300 Å, are centered on $`+1.0`$ km s<sup>-1</sup>. Thus, we believe this is a more likely value for the radial velocity, and in any case a far more likely value for the centroid of the Lyman-$`\alpha `$ wings. Since 36 Oph A is a member of a binary system, we speculate that perhaps orbital motion has changed the stellar velocity from the systemic value of $`-1`$ km s<sup>-1</sup>. Based primarily on the centroids of the Cl I $`\lambda `$1351.657 and O I\] $`\lambda `$1355.598 lines, which can be measured very accurately because of the very narrow widths of these features ($`FWHM\sim 0.04`$ Å), we assume a velocity of $`+1.0\pm 0.2`$ km s<sup>-1</sup> for the star. Since other chromospheric lines in our spectra have this centroid velocity, including Mg II h & k, it is reasonable to assume that the far wings of Lyman-$`\alpha `$, which are formed at the base of the chromosphere, will have it too. Figure 5 shows how the analysis technique works. First, we fit polynomials to the wings of the observed Lyman-$`\alpha `$ line, interpolating over the D I line. Working from these fits (thick solid lines in Fig. 5a), we can derive what the wings of the stellar Lyman-$`\alpha `$ line would look like assuming different values of the H I column density. The results for five different column densities are shown in Figure 5a. In Figure 5b we display the bisectors of these wings for various values of $`\mathrm{log}N_\mathrm{H}`$. As the column density increases, the bisectors become more and more blueshifted, due to the fact that the LISM is absorbing more in the blue wing of the line than in the red wing. Ideally, a vertical bisector centered at the radial velocity of the star should result when the correct value of $`\mathrm{log}N_\mathrm{H}`$ is assumed, but in practice this does not happen, owing to uncertainties in the polynomial fits to the data and in measuring the location of the bisectors.
Nevertheless, in Figure 5b we identify with a solid line the bisector that we believe best matches the stellar radial velocity of $`+1.0`$ km s<sup>-1</sup>, which corresponds to $`\mathrm{log}N_\mathrm{H}=17.84`$. Column densities in the range $`\mathrm{log}N_\mathrm{H}=17.7`$–$`18.0`$ all produce bisectors with average velocities within about $`\pm 1`$ km s<sup>-1</sup> of the expected velocity, so we settle on a value of $`\mathrm{log}N_\mathrm{H}=17.85\pm 0.15`$ as our best estimate of the LISM H I column density toward 36 Oph. This range of columns yields deuterium-to-hydrogen ratios in the range $`\mathrm{D}/\mathrm{H}=(1.0`$–$`2.0)\times 10^{-5}`$, so we quote a value of $`\mathrm{D}/\mathrm{H}=(1.5\pm 0.5)\times 10^{-5}`$. This is an encouraging result, since it is consistent with the mean value $`\mathrm{D}/\mathrm{H}=(1.5\pm 0.1)\times 10^{-5}`$ for the LIC (Linsky, 1998; Linsky & Wood, 1998). The error bars on the G cloud D/H value are too large to tell whether it is actually identical to the LIC value, but apparently the G cloud does not have a drastically different D/H value from the LIC.

### 3.3 Multi-Component Fits

In Figure 6a, we present a two component fit to the Lyman-$`\alpha `$ line, where the dotted line represents the LISM absorption and the dashed line represents absorption from the solar hydrogen wall. The LISM component is forced to have the average velocity observed for the other LISM lines ($`v=-28.4`$ km s<sup>-1</sup>) and a Doppler parameter derived from the $`T`$ and $`\xi `$ values measured from the other lines ($`b=10.11`$ km s<sup>-1</sup>). Only a single two component fit is shown in Figure 6, but in practice we performed many fits, assuming different stellar Lyman-$`\alpha `$ profiles that allowed the LISM H I column density to vary within the $`\mathrm{log}N_\mathrm{H}=17.7`$–$`18.0`$ range allowed by the bisector analysis. This experimentation allowed us to better determine the best fit parameters and their uncertainties, which are listed in Table 1. The quality of the two component fit is certainly better than that of the one component fit in Figure 4, based on both the residuals shown in the figures and the $`\chi _\nu ^2`$ values listed in Table 1. More importantly, in the two component fit the D I and H I lines are self-consistent. The H-wall temperature derived from the Doppler parameter of the two component fit is $`T=49,000\pm 9000`$ K, compared with $`T=29,000\pm 5000`$ K toward $`\alpha `$ Cen. It is worth noting, however, that the meaning of these measured temperatures is not entirely clear. The absorption component fits to the Lyman-$`\alpha `$ line use Voigt functions for the opacity profiles, which amounts to an implicit assumption that the velocity distributions of each component are Maxwellian, but neutrals in the outer heliosphere are far from equilibrium and therefore do not have Maxwellian distributions (Baranov, Izmodenov, & Malama, 1998; Lipatov, Zank, & Pauls, 1998; Müller, Zank, & Lipatov, 2000). Furthermore, the absorption components are integrated over a long line of sight through the heliosphere, resulting in even more complex velocity distributions. There is also a serious problem with the central velocity of the H-wall absorption inferred from the two-component fit.
Heliospheric models all suggest substantial decelerations within the hydrogen wall in the upwind direction (Baranov & Malama, 1993, 1995; Pauls et al., 1995; Zank et al., 1996; Baranov et al., 1998; Lipatov et al., 1998; Müller et al., 2000), meaning that the H-wall absorption should be significantly redshifted relative to the projected velocity of the LIC for that line of sight, $`-25.1`$ km s<sup>-1</sup>. The measured velocity, $`v=-26.7\pm 0.3`$ km s<sup>-1</sup>, is only slightly redshifted relative to the G cloud absorption, and it is actually blueshifted relative to the LIC velocity. Figure 6a illustrates why this is the case. Non-LISM absorption exists on both the blue and red sides of the H I absorption feature, and the heliospheric component cannot be greatly redshifted away from the LISM absorption, because it must also account for the excess absorption on the blue side. The only way the heliospheric absorption can have decelerations consistent with the models is for there to be yet another absorption component that accounts for the non-LISM absorption on the blue side of the line. Gayley et al. (1997) found this to be the case for the $`\alpha `$ Cen data as well when they made direct comparisons between the data and the H I absorption predicted by heliospheric models. Gayley et al. (1997) interpreted this to mean that the "astrosphere" of $`\alpha `$ Cen was responsible for the blueshifted non-LISM H I absorption. We propose that our Lyman-$`\alpha `$ data are also contaminated by astrospheric absorption. Figure 2 shows the velocity vector of 36 Oph, which is computed from the radial velocity and proper motion information in Hirshfeld et al. (1996). From this 32.5 km s<sup>-1</sup> stellar vector and the known 29.4 km s<sup>-1</sup> G cloud vector, we can determine the interstellar wind vector in the rest frame of 36 Oph (dotted arrow in Fig. 2). We find that the wind speed relative to the star is 40 km s<sup>-1</sup> and that the line of sight toward the Sun is $`\theta =134^{\circ }`$ from the upwind direction, meaning we are looking at the downwind portion of 36 Oph's astrosphere. Much of the heliospheric H I in downwind directions is formed by charge exchange between LISM neutrals and solar wind protons inside the heliopause, whereas the H-wall H I observed in upwind directions is created by charge exchange between LISM neutrals and heated LISM protons outside the heliopause. Another difference is that in the upwind direction the H I in the H-wall is decelerated, but models suggest that in downwind directions the H I gas is accelerated relative to the LISM flow. One consequence of this is that heliospheric absorption will be redshifted relative to the LISM absorption in all directions. Likewise, astrospheric absorption will always be blueshifted. Izmodenov, Lallement, & Malama (1999) claim to have detected redshifted heliospheric absorption in a downwind direction toward the star Sirius, consistent with these theoretical expectations. Therefore, it is not unreasonable to suppose that astrospheric material downwind from 36 Oph is responsible for the non-LISM absorption on the blue side of the Lyman-$`\alpha `$ absorption feature. Thus, in Figure 6b we present a three component fit to the Lyman-$`\alpha `$ line, the parameters of which are listed in Table 1. Because they are so highly blended, the three components must be constrained somehow to produce a unique fit. The LISM component is constrained in the same way as in the two component fit.
Furthermore, we force the heliospheric and astrospheric absorption components to have velocities roughly consistent with the model predictions. Heliospheric models suggest an average deceleration of roughly 40% within the hydrogen wall in the upwind direction, meaning that the projected velocity toward 36 Oph should be $`V_\mathrm{H}=-0.6V_0\mathrm{cos}\theta `$, where $`V_0=26`$ km s<sup>-1</sup> is the LISM flow speed for the Sun and $`\theta =12^{\circ }`$ is the angle of the line of sight relative to the upwind direction. Thus, $`V_\mathrm{H}=-15`$ km s<sup>-1</sup> for the heliospheric absorption. In downwind directions, the heliospheric models suggest accelerations of about 30% for the H I. Assuming this is the case for 36 Oph, the predicted velocity of the astrospheric absorption is then $`V_\mathrm{H}=V_{rad}+(1.3V_0\mathrm{cos}\theta )`$, where $`V_{rad}=+1.0`$ km s<sup>-1</sup> is the stellar radial velocity, $`V_0=40`$ km s<sup>-1</sup> is the LISM flow speed relative to the star, and $`\theta =134^{\circ }`$ is the angle of the solar line of sight relative to the upwind direction. Thus, $`V_\mathrm{H}=-35`$ km s<sup>-1</sup> for the astrospheric absorption. We arbitrarily assume uncertainties of $`\pm 5`$ km s<sup>-1</sup> for both the estimated heliospheric and astrospheric velocities.
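The two projected velocities follow from simple geometry; a minimal sketch of the arithmetic (the 40% and 30% scalings are the model-based factors quoted above):

```python
import math

def heliospheric_v(V0=26.0, theta_deg=12.0):
    """Decelerated (40%) H-wall velocity projected on the line of sight."""
    return -0.6 * V0 * math.cos(math.radians(theta_deg))

def astrospheric_v(V_rad=1.0, V0=40.0, theta_deg=134.0):
    """Accelerated (30%) downwind gas, shifted by the stellar radial velocity;
    cos(134 deg) < 0, so the result is blueshifted."""
    return V_rad + 1.3 * V0 * math.cos(math.radians(theta_deg))

print(f"heliospheric: {heliospheric_v():+.1f} km/s")   # ~ -15
print(f"astrospheric: {astrospheric_v():+.1f} km/s")   # ~ -35
```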
Analogous to the procedure used in the two component fit, we perform many fits varying the LISM column density and the heliospheric/astrospheric velocities within the allowed error bars, which allows us to determine the best fit parameters and uncertainties quoted in Table 1. Since the heliospheric component no longer has to account for the excess absorption on both the red and blue sides of the Lyman-$`\alpha `$ line, its column density and Doppler parameter are significantly lower than for the two component model (see Table 1). The column density ($`\mathrm{log}N_\mathrm{H}=14.6\pm 0.3`$) is now consistent with that measured toward $`\alpha `$ Cen ($`\mathrm{log}N_\mathrm{H}=14.74\pm 0.24`$), as is the temperature inferred from the Doppler parameter once the quoted uncertainties are considered ($`T=38,000\pm 8000`$ K for 36 Oph, $`T=29,000\pm 5000`$ K for $`\alpha `$ Cen). The astrospheric component has a column density ($`\mathrm{log}N_\mathrm{H}=14.7\pm 0.3`$) and Doppler parameter ($`b=26.0\pm 3.0`$ km s<sup>-1</sup>) very similar to those of the heliospheric component, with the Doppler parameter implying a temperature of $`41,000\pm 10,000`$ K. We would have expected temperatures somewhat higher than this for a downwind line of sight, based on the H I temperatures predicted by the hydrodynamic models (e.g., Müller et al., 2000). However, a direct comparison between the data and the absorption predicted by the models is needed to see whether they are truly inconsistent. The average LISM density toward 36 Oph is $`n_\mathrm{H}=0.03`$–$`0.06`$ cm<sup>-3</sup>. This is lower than the $`n_\mathrm{H}\sim 0.1`$ cm<sup>-3</sup> densities typically observed toward the nearest stars, including $`\alpha `$ Cen (Linsky et al., 2000). One possible explanation is that hot interstellar material without any neutral H occupies part of the line of sight, either in the foreground of the observed absorption, between the LIC and G clouds, or beyond the G cloud, which would mean that 36 Oph lies outside the G cloud. However, if our detection of astrospheric H I is valid, the LISM around 36 Oph must necessarily contain H I, meaning 36 Oph must be within the G cloud. Furthermore, the existence of substantial G cloud material toward $`\alpha `$ Cen, which is only 1.3 pc away, suggests that a large gap between the LIC and G clouds toward 36 Oph is unlikely (see Fig. 2). Thus, the most likely explanation for the low average density is a negative density gradient within the G cloud toward 36 Oph. Wood & Linsky (1998) used similar reasoning to infer a density gradient within the LIC toward 40 Eri.

## 4 Summary

We have analyzed absorption features observed in HST/STIS spectra of the 5.5 pc line of sight to 36 Oph A. Our findings are summarized as follows:

1. Only one LISM absorption component is seen toward 36 Oph, with a velocity consistent with the flow vector of the G cloud. No absorption whatsoever is detected from the LIC, meaning the edge of the LIC must be very close. Because the line of sight is very close to the upwind directions of both the G and LIC clouds, the velocity of $`-28.4`$ km s<sup>-1</sup> that we observe is very close in magnitude to the actual flow speed of the observed material. The fact that this projected speed is greater than the 26 km s<sup>-1</sup> flow speed of the LIC provides particularly strong evidence for the existence of faster G cloud material in that direction.

2. We estimate that the LIC D I column density must be less than $`9\times 10^{11}`$ cm<sup>-2</sup> to explain why no LIC absorption is detected. This corresponds to $`N_{\mathrm{H}\mathrm{I}}<6\times 10^{16}`$ cm<sup>-2</sup>, based on the known LIC D/H ratio ($`1.5\times 10^{-5}`$). Assuming a density of $`n_{\mathrm{H}\mathrm{I}}=0.1`$ cm<sup>-3</sup>, the distance to the edge of the LIC is $`d_{edge}<0.19`$ pc. The Sun will reach the edge in $`t_{edge}<7400`$ yrs, based on the LIC velocity vector.

3. The temperature ($`T=5900\pm 500`$ K) and relative abundances of D I, Mg II, and Fe II (e.g., $`\mathrm{D}\mathrm{I}/\mathrm{Mg}\mathrm{II}=0.9\pm 0.4`$) are within the error bars of the measurements made for the $`\alpha `$ Cen line of sight, which also samples G cloud material (LW96). These measurements suggest that the G cloud has a slightly lower temperature and less Mg and Fe depletion than the LIC. However, the nonthermal velocity toward 36 Oph, $`\xi =2.2\pm 0.2`$ km s<sup>-1</sup>, is higher than the $`\alpha `$ Cen measurement ($`\xi =1.25\pm 0.25`$ km s<sup>-1</sup>).

4. Using the Lyman-$`\alpha `$ bisector technique first described by Wood et al. (1996), we estimate a column density of $`\mathrm{log}N_\mathrm{H}=17.85\pm 0.15`$ toward 36 Oph and a deuterium-to-hydrogen ratio for the G cloud of $`\mathrm{D}/\mathrm{H}=(1.5\pm 0.5)\times 10^{-5}`$. The latter quantity is consistent with the LIC value of $`\mathrm{D}/\mathrm{H}=(1.5\pm 0.1)\times 10^{-5}`$.

5. The average G cloud density toward 36 Oph is $`n_\mathrm{H}=0.03`$–$`0.06`$ cm<sup>-3</sup>, significantly less than the $`n_\mathrm{H}\sim 0.1`$ cm<sup>-3</sup> densities typically observed toward the nearest stars, including $`\alpha `$ Cen within the G cloud. The most likely interpretation is that the G cloud H I density decreases toward 36 Oph.

6. The H I Lyman-$`\alpha `$ absorption line is redshifted and suggests a much higher temperature than the other lines. LW96 used a very similar discrepancy toward $`\alpha `$ Cen to argue for the presence of absorption from heated, decelerated material in the solar hydrogen wall. Our results strongly support this interpretation.
In two component fits to the H I Lyman-$`\alpha `$ line, the solar hydrogen wall component is not decelerated relative to the projected LIC velocity, in serious contradiction with theoretical expectations, so we propose that absorption from astrospheric material around 36 Oph is responsible for some of the absorption on the blue side of the line. Thus, our best model of the Lyman-$`\alpha `$ absorption has three components: LISM (G cloud), heliospheric, and astrospheric.

In the three component fits, the temperature and column density of the heliospheric absorption ($`T=38,000\pm 8000`$ K and $`\mathrm{log}N_\mathrm{H}=14.6\pm 0.3`$) are consistent with the measurements made for the $`\alpha `$ Cen line of sight.

We would like to thank the anonymous referee for several helpful suggestions. Support for this work was provided by NASA through grant number GO-07262.01-99A from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555.
# The frequency content of $`\delta `$ Sct stars as determined by photometry

## 1. Introduction

$`\delta `$ Sct variables are now a well-defined class of stars. They are located on or just above the zero-age main sequence, in the lowest part of the classical instability strip. $`\delta `$ Sct stars have masses between 1.5 and 2.5 M$`_{\odot }`$ and they are close to the end of the core hydrogen burning phase (Breger & Pamyatnykh 1998). The presence of convective zones and related phenomena such as convective overshooting makes them very interesting objects for the understanding of stellar evolution. The investigation of the pulsational properties of pre-main sequence stars has allowed their instability strip in the H-R diagram to be defined (Marconi & Palla 1998), and some pre-main sequence stars showing $`\delta `$ Sct variability have been discovered (Kurtz & Muller 1999). Photometric monitoring is the most widely practiced approach to studying the properties of $`\delta `$ Sct stars, and several stars have been investigated in depth. However, the spectroscopic approach tells us that many modes that are not photometrically detectable are actually excited. Rotation is an important factor in increasing the number of excited modes. The observed frequencies lie between 5 and 35 cd<sup>-1</sup> and multiperiodicity is very common; only a few stars show monoperiodic behaviour above the current limit of the amplitude detectable from the ground, i.e., $`\sim `$1 mmag. The observed modes are in the domain of pressure ($`p`$) modes; there is no observational evidence that gravity ($`g`$) modes are excited in $`\delta `$ Sct stars, even if some candidates have been suggested. At the moment, $`g`$-modes seem to be present only in the $`\gamma `$ Dor stars, which in turn do not show $`p`$-modes. The great observational effort made by several teams has given us a well-defined phenomenological picture of $`\delta `$ Sct variability. This contribution summarizes the results obtained in past years by means of extended photometric time series. The paper is structured as follows:

* 2. The best candidates for asteroseismological studies
  + 2.1 FG Vir: 24 independent modes
  + 2.2 XX Pyx: strong and rapid amplitude changes
  + 2.3 4 CVn: presence of combination terms and amplitude variations
  + 2.4 Comparison between FG Vir, XX Pyx and 4 CVn
* 3. Other stars studied by the Merate Group
  + 3.1 44 Tau: variable amplitude and recurrent ratio 0.77
  + 3.2 BH Psc: variable amplitude and rich pulsational content
  + 3.3 V663 Cas: growth of new modes
  + 3.4 The help of the spectroscopic approach
* 4. Monoperiodic pulsators
* 5. $`\delta `$ Sct stars in binary systems
* 6. $`\delta `$ Sct stars in open clusters
* 7. The frequency content of high amplitude $`\delta `$ Sct stars
* 8. Summing-up and Conclusions
* 9. The future: the exportation of the results on galactic stars to extragalactic research

How the photometric results can be used to identify modes (i.e., to classify the oscillations in terms of the quantum numbers $`n`$, $`\ell `$ and $`m`$) is discussed by Garrido (2000). Some improvements, both observational and theoretical, are probably necessary to make new, substantial steps forward. A better connection between theory and observation will allow us to really progress in the asteroseismology of these stars.

## 2. The best candidates for asteroseismological studies

The studies and related papers on $`\delta `$ Sct stars follow a recurrent paradigm in the determination of the frequency content.
In most cases a first solution, often wrongly considered a “good” solution, is obtained on the basis of a few fragmented nights. At this stage, the observations are hardly useful for a significant analysis. However, since the complicated light behaviour of these stars always leaves some unclear facts, the same authors or another team plan a second run. As a consequence, it becomes apparent that the solution proposed in the first paper was indeed preliminary. A very interesting result often obtained during this refinement is the detection of variations in amplitude and/or frequency of the excited modes. Therefore, more observations are requested and, in general, they are never sufficient … We can however obtain more and more satisfactory results from the observational work on $`\delta `$ Sct stars by delving deeper and deeper into this process, even if it demands ever greater effort. The three cases reported below are probably the best examples that the $`\delta `$ Sct community can offer.

### 2.1. FG Vir: 24 independent modes

FG Vir can be considered a cornerstone in the development of our knowledge of $`\delta `$ Sct stars. After a few nights of observations by López de Coca et al. (1984), it was studied first by Mantegazza, Poretti, & Bossi (1994) on the basis of a single-site campaign carried out at the European Southern Observatory, Chile. These authors proposed seven certain frequencies and a possible eighth one; they also claimed the presence of undetected terms, owing to the relatively high level of noise in some parts of the power spectrum. A subsequent multisite campaign (Breger et al. 1998) confirmed the seven frequencies, demonstrating how a single-site observing run can be successful in detecting the main components of a multiperiodic pulsator. However, the eighth, small-amplitude frequency was an alias; the misidentification originated from the combination of the spectral window with the noise. A search for the presence of the hitherto undetectable terms was undertaken and the number of frequencies increased to at least 24. However, new campaigns on this star are considered necessary to improve the frequency resolution, and a time baseline of several months is required. The light curves obtained in a multisite campaign are of high quality and bear witness to the efforts made by $`\delta `$ Sct researchers to improve both the quality and the quantity of the data; Fig. 1 shows the very dense $`v`$ and $`y`$ light curves obtained on a baseline spanning 20 days, a subset of the 1995 campaign. Considering all the frequencies now known in the light curve, it seems that the pulsation of FG Vir is much more stable than that of XX Pyx and 4 CVn: the amplitude variability is very limited and the frequency values seem to be stable over a baseline of decades. The frequencies are mainly distributed in two subgroups: the first ranges from 9.2 to 11.1 cd<sup>-1</sup>, the second from 19.2 to 24.2 cd<sup>-1</sup>. An isolated peak is found at 16.1 cd<sup>-1</sup> and a few between 28.1 and 34.1 cd<sup>-1</sup>. In contrast with the 22 independent frequencies found, only 2 combination terms have been detected. It should be noted that the identification was accepted at an amplitude S/N limit of 4.0 for an independent term and 3.5 for a combination frequency; Kuschnig et al. (1997) supply a theoretical basis for these assumptions. The combination terms are related to the $`f_1`$ term, which has an amplitude 5 times larger than the other ones.
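The acceptance criterion just described is easy to mechanize. The sketch below is our own illustration, not the authors' code: it classifies periodogram peaks with the S/N thresholds of 4.0 (independent) and 3.5 (combination), where a combination is any match to $`f_i\pm f_j`$ of already-accepted terms; allowing $`f_i=f_j`$ also catches harmonics such as 2$`f_1`$.

```python
import itertools

def classify_peaks(peaks, noise, tol=0.02):
    """peaks: (frequency [c/d], amplitude [mmag]) pairs from a frequency
    analysis; noise: local noise amplitude in the residual spectrum."""
    independent, combinations = [], []
    for f, a in sorted(peaks, key=lambda p: -p[1]):        # strongest first
        base = [fi for fi, _ in independent]
        is_combo = any(abs(f - (fi + s * fj)) < tol
                       for fi, fj in itertools.product(base, base)
                       for s in (+1, -1))
        if is_combo and a / noise >= 3.5:
            combinations.append((f, a))
        elif not is_combo and a / noise >= 4.0:
            independent.append((f, a))
    return independent, combinations
```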
The $`f_1`$ term also displays an asymmetric light-curve shape, since its 2$`f_1`$ harmonic is also observed ($`R_{21}=A_{2\mathrm{f}}/A_\mathrm{f}=0.04`$). Photometric measurements are adequate to perform mode identification by means of the phase shifts in different colours. Viskum et al. (1998) and Breger et al. (1999a) agree that the dominant mode can be identified with $`\ell `$=1 and the 12.15 cd<sup>-1</sup> mode with the radial fundamental. Considering the sophisticated pulsational modeling proposed by Breger et al. (1999a), FG Vir looks like a very good candidate for matching theory and observations.

### 2.2. XX Pyx: strong and rapid amplitude changes

Our knowledge of XX Pyx has grown rapidly in recent years, following the paradigm described above. Its variability was discovered with the Whole Earth Telescope (Handler et al. 1996), and the first solution of the light curve (based on 116.7 hours of photometry) was already very good: 7 frequencies between 27.01 and 38.11 cd<sup>-1</sup> were unambiguously found. However, a second campaign was planned to match the requirements of stellar seismology. Not only was the number of frequencies increased to 13 (Handler et al. 1997), but, as noted above, the possibility of comparing different observing seasons immediately revealed the variability of the amplitudes. The three dominant modes change their photometric amplitude within one month at certain times, while the amplitudes can remain constant at other times (Handler et al. 1998); Fig. 2 shows the behaviour of these modes. The investigation of the nature of these variations considered various hypotheses: an oblique pulsator, precession of the pulsational axis, beating of closely spaced frequencies. However, none of them satisfactorily explains the amplitude changes. It is important to note that no change in the pulse shape of the $`f_1`$ mode seems to accompany the amplitude variations, while changes are expected in the case of frequency beating. Similarly, evolutionary effects, binarity, and magnetic fields cannot explain the period changes. The distribution of the frequencies is clearly shifted toward high values. A 2$`f_1`$ harmonic term is observed; Handler et al. (1996) suggested the presence of very small combination terms as a representation of nonlinearities originating in the outer part of the star’s envelope, but they are very close to the significance limit. As a last step, a pulsational model of XX Pyx was undertaken, but a unique solution could not be proposed (Pamyatnykh et al. 1998). The presence of a frequency spacing of $`\sim 26\mu `$Hz was used to analyze different possibilities, but the seismic modeling was unable to match the observed frequencies, since the mean departures exceed the mean observational frequency error by at least one order of magnitude. Unfortunately, this star is very faint ($`V`$=11.5), and line-profile variations, which could help in the mode identification, are very difficult to measure with the required accuracy. An improvement is expected from the results of a new multisite campaign (Arentoft et al. 2000), perpetuating the observational paradigm of $`\delta `$ Sct stars.

### 2.3. 4 CVn: presence of combination terms and amplitude variations

Several campaigns were organized on this bright ($`V`$=6.1) star, allowing the determination of a set of reliable frequencies on nine occasions over 30 years. They are mostly free from the 1 cd<sup>-1</sup> alias problem and are accurate to 0.001 cd<sup>-1</sup>. Breger et al.
(1999b) propose a solution of the light curve composed of 18 independent frequencies and 16 combination terms $`f_i\pm f_j`$; the residual rms is thereby decreased to the level expected for the noise. The frequency content of 4 CVn is characterized by the grouping of all 18 independent frequencies in the interval 4.749–8.595 cd<sup>-1</sup>. In the 1996 campaign, 5 have an amplitude larger than 9 mmag; 5 others have an amplitude between 6.4 and 3.2 mmag. After a single peak at 1.6 mmag, the other terms (also including the combination terms) have amplitudes smaller than 1 mmag. However, the main result of the survey of 4 CVn is the large variability of the amplitudes. This is surely not an effect of the noise distribution, since the most relevant changes also occur in the largest-amplitude terms. Fig. 3 shows 6 cases of such variations: note the decrease in the amplitude of the 5.05 cd<sup>-1</sup> term and the corresponding increase in that of the 8.59 cd<sup>-1</sup> term. Other terms, such as the 6.98 cd<sup>-1</sup> term, show a more stable amplitude. It is quite impossible to discern a rule in the behaviour of the amplitudes. A complete discussion of the amplitude changes in the light curves of 4 CVn is given by Breger (2000a). The presence of a large number of combination terms is another important aspect; they flank the set of independent frequencies. The sums $`f_i+f_j`$ are more numerous than the differences $`f_i-f_j`$: this should be a selection effect, since a signal at low frequencies is more difficult to detect owing to the higher level of noise there. The pulsational modes of 4 CVn have also been investigated for period variability: some results have been obtained (Breger & Pamyatnykh 1998), but the matter has to be investigated further, probably by also considering the high-amplitude $`\delta `$ Sct stars (Szeidl 2000). The lack of a well-defined trend in the observed changes is the main difficulty to face when proposing an explanation. From a methodological point of view, it is important to note the significant contribution of the measurements performed by the 0.75 m Automatic Photometric Telescope “Wolfgang” located at Washington Camp in Arizona, USA (Breger & Hiesberger 1999). The data are of excellent quality (standard deviation 3 mmag in $`B`$ and $`V`$ light), even if a variation in the brightness difference between the two comparison stars (about 2 mmag) was observed. It coincides with the boundary between the two subsets (each lasting 4 weeks) into which the measurements are separated by a gap of two weeks. According to the authors, the sudden variation is probably a consequence of instrumental problems of the APT and is not due to variation of the comparison stars. The solution of this kind of problem can in the future ensure the collection of a large quantity of good-quality data by means of robotic telescopes, a new frontier for the study of $`\delta `$ Sct stars.

### 2.4. Comparison between FG Vir, XX Pyx and 4 CVn

Figure 4 shows the frequency content of the well-studied pulsators described in this section; the combination terms are omitted. At first glance it appears that their content is completely different: high frequencies only for XX Pyx, low frequencies only for 4 CVn, two groups of intermediate frequency values for FG Vir. The obvious conclusion is that $`\delta `$ Sct stars are very complicated pulsators and that a general recipe cannot be used to predict their mode excitation.
As regards the similarities between $`\delta `$ Sct stars, it should be noted that the frequency content of 4 CVn matches very closely that of HD 2724 (Mantegazza & Poretti 1999): at least 7 frequencies have almost the same value. Among the high-amplitude terms of 4 CVn, only the 5.05 cd<sup>-1</sup> term has no correspondence in HD 2724. The combination frequencies, very numerous in 4 CVn, are not seen in the light curve of HD 2724. This can be due both to the smaller amplitude of the modes excited in HD 2724 and to the lack of a powerful tool such as a multisite campaign, not yet exploited on HD 2724. The physical parameters of the two stars are very similar, too. So it seems that we have the possibility to group stars, and not only to observe a different behaviour for every star. The variations of the amplitudes are a further complication, even if we can argue that the disappearance or damping of some terms can enhance, or make discernible, other modes, increasing the number of known frequencies. The cause of the amplitude variability is not clear: it can either be intrinsic or originate from the beating of two close frequencies. In both XX Pyx and 4 CVn, intrinsic damping is considered by the respective authors the more satisfactory explanation, since the beating hypothesis could not match some observations.

## 3. Other stars studied by the Merate Group

In our effort to reveal the pulsational behaviour of $`\delta `$ Sct stars, the Merate group regularly performed campaigns on selected objects. At the beginning (the second half of the eighties) there was no well-studied object, and the main goal of our studies was to monitor different variables and try to solve the light curves as satisfactorily as possible. As a matter of fact, the history of the worldwide observations of FG Vir started with our campaign in 1992 (Mantegazza et al. 1994). Our contribution to the field was ensured by the extensive campaigns carried out at Merate (as a rule one target was observed for several weeks) and at ESO, in both cases using a 0.5-m telescope. Breger (2000b) reports the list of our campaigns. Here we would like to discuss some interesting cases.

### 3.1. 44 Tau: variable amplitude and recurrent ratio 0.77

This star has an important place in the development of the studies performed by the Merate group. The intensive survey carried out in 1989 allowed us to verify for the first time the variability of the amplitude of some modes; this fact drove a change in our approach to the study of $`\delta `$ Sct stars. It was realized that a few nights on an object cannot constitute a significant improvement in the knowledge of the pulsational behaviour, and therefore the planning of an observational campaign must satisfy critical requirements such as frequency resolution (i.e., a long baseline), alias damping (long nights or, better, multisite observations), evaluation of observational errors and spectral window effects, … Several datasets were available on 44 Tau, and a comparison among them led to the detection of amplitude variations (Poretti, Mantegazza, & Riboni 1992). However, this bright star ($`V`$=5.5) was considered a good target by other teams as well; Akan (1993) and Park & Lee (1995) provided other intensive datasets, and the verification of the amplitude variations was an important goal of their studies. Figure 5 summarizes the results: the three papers quoted above supply the three values around 1990.
The 6.90 cd<sup>-1</sup> term is always the dominant one, even if a slight decrease has been observed in recent years, corresponding to an increase in the amplitude of the 7.01 cd<sup>-1</sup> term. Note also the similar behaviour of the 9.12 and 9.56 cd<sup>-1</sup> terms and the stability of the other ones. The general look of Fig. 5 suggests a real variation in the amplitudes rather than the effect of uncertainties in the amplitude determination. The frequencies detected in the light curve of 44 Tau also emphasize how complicated it is to type the modes. In several old papers the 0.77 ratio between two modes was considered a clear fingerprint of pulsation in the fundamental and first overtone radial modes, as happens in high-amplitude $`\delta `$ Sct stars. However, several examples of ratios in the range 0.76–0.78 can be found among the seven terms: 6.90/8.96, 7.01/8.96, 6.90/9.12, 7.01/9.12, 7.30/9.56, 8.96/11.52 … As a consequence, it is quite evident that nonradial modes can also show the same ratio, even in a small set of terms (only seven here), and that trying to find two frequencies showing a 0.77 ratio in order to type them as radial modes is naive and misleading. As a final remark, it should be noted that 44 Tau is probably a unique case in the $`\delta `$ Sct scenario owing to its very small $`v\mathrm{sin}i`$ value, i.e., 4 km s<sup>-1</sup>. In a certain sense, it proves that multimode pulsation can be found in a slow rotator or in a pole-on star.

### 3.2. BH Psc: variable amplitude and rich pulsational content

BH Psc was observed by our group in 1989, 1991, 1994 and 1995 at the European Southern Observatory, Chile. The analysis of the first two campaigns showed a very complex light variability resulting from the superposition of more than 10 pulsation modes with frequencies between 5 and 12 cd<sup>-1</sup> and semi-amplitudes between 17 and 3 mmag (Mantegazza, Poretti, & Zerbi 1995). The fit left a high r.m.s. residual, 2.3 times greater than that measured between the two comparison stars. A second photometric campaign was carried out in October and November 1994; we hoped to reveal more terms and to check the stability of their amplitudes. The analysis of the new data allowed us to single out 13 frequencies (Mantegazza, Poretti, & Bossi 1996). They are concentrated in the region 5–11.5 cd<sup>-1</sup>: this distribution is slightly broader than that observed for 4 CVn, but the two can be considered very similar. However, more terms should be present, with amplitudes below 3 mmag, since a good 15% of the variance could not be explained by the detected terms and the noise. Moreover, the standard deviation around the mean value was considerably higher in 1991 than in 1994 (26 mmag against 18 mmag); considerations about the amplitudes of the modes confirmed that the pulsation energy in 1994 was lower than in 1991. BH Psc constitutes another example of a $`\delta `$ Sct star showing strong amplitude variations. Figure 6 shows the fit of the 13-term solution to the $`B`$ measurements on some nights: undetected terms are the most probable explanation for the systematic deviations which appear on some occasions. Following the paradigm discussed in Sect. 2 and looking at Fig. 6, we were at the point where deeper observations were needed, and we organized a multisite campaign in September and October 1995.
The analysis of the 1174 $`B`$ and 1251 $`V`$ measurements strengthens the hypothesis of a strong amplitude variation, probably as rapid as in the case of XX Pyx (Mantegazza et al., in preparation).

### 3.3. V663 Cas: growth of new modes

The solution of the light curve of V663 Cas $`\equiv `$ HD 16439 $`\equiv `$ SAO 4710 was not an easy task. At the beginning it was considered a monoperiodic pulsator with $`f_1`$=17.13 cd<sup>-1</sup> (Sedano, Rodríguez, & López de Coca 1987), but an observing campaign in the winter of 1988–89 revealed a second peak at 10.13 cd<sup>-1</sup> (Mantegazza & Poretti 1990). The amplitude of this second peak is only 3 mmag, so we considered that this term was not detected by Sedano et al. owing to their limited dataset. The panels in the left column of Fig. 7 show the detection of these terms in the 1988–89 dataset. The analysis of the original data allowed us to identify the $`f_1`$ term (top panel). Then the frequency of this term (but not its amplitude or phase) was introduced as a known constituent (k.c.) in the subsequent iteration, in which the $`f_2`$ term was detected (middle panel). Once more, these two terms were introduced as k.c.’s: in this case no significant third term was revealed (bottom panel). Still following the paradigm of the study of $`\delta `$ Sct stars, a second intensive campaign was performed in the winter of 1994–95 (Poretti, Mantegazza, & Bossi 1996). The results of the frequency analysis are shown in the panels of the right column of Fig. 7. After the easy detection of the $`f_1`$ term (top panel), the second spectrum looks different: the structure at about 10 cd<sup>-1</sup> is more complex than the one observed in the 1988–89 dataset (compare the two middle panels). Indeed, after considering the $`f_2`$ term, a new $`f_3`$ term was revealed at 10.48 cd<sup>-1</sup>, and the noise around 18 cd<sup>-1</sup> was higher than in the 1988–89 dataset (bottom panel). Since the time baseline, the number, and the standard deviation of the measurements are the same for the two observing seasons, this fact cannot be related to a sampling problem. Moreover, the amplitude of the $`f_1`$ term slightly decreased from 15.9 to 14.0 mmag in $`B`$ light. By comparing the two observing seasons we can discern the growth of new modes in the light curve of V663 Cas.

### 3.4. The help of the spectroscopic approach

It is quite evident that every star adds a brick with which to build our castle of knowledge about $`\delta `$ Sct pulsations, and results on one star have consequences for the analysis of the next one. However, mode identification from photometric data always leaves some uncertainties. For this reason we turned toward spectroscopy: independent constraints on mode typing and the detection of low and high degree $`\ell `$ modes can be obtained from the analysis of the time series of the individual pixels defining the line profiles. Moreover, trying to fill the gap between the output of the observations and the inputs for a reliable pulsational model, we tried to discriminate between different modes through a direct fit of pulsational models to spectroscopic and photometric data. Such a synergic approach is described by Mantegazza (2000).
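The "known constituent" scheme used for V663 Cas above generalizes to a simple recipe: fix the frequencies found so far, refit all amplitudes and phases, and search the residuals again. A minimal sketch follows (our own illustration, with a brute-force least-squares amplitude spectrum standing in for a production period-search code):

```python
import numpy as np

def amplitude_spectrum(t, y, freqs):
    """Least-squares semi-amplitude at each trial frequency (t in d, f in c/d)."""
    amps = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        A = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t),
                             np.ones_like(t)])
        c, *_ = np.linalg.lstsq(A, y, rcond=None)
        amps[i] = np.hypot(c[0], c[1])
    return amps

def prewhiten(t, y, trial_freqs, nterms=3):
    """Iterative frequency search with 'known constituents' (k.c.):
    each accepted frequency is held fixed while all amplitudes and
    phases are refitted, then the residuals are searched again."""
    found = []
    for _ in range(nterms):
        cols = [np.ones_like(t)]
        for f in found:
            cols += [np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
        A = np.column_stack(cols)
        c, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ c
        spec = amplitude_spectrum(t, resid, trial_freqs)
        found.append(trial_freqs[np.argmax(spec)])
    return found
```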
## 4. Monoperiodic pulsators

It is a generally accepted concept that detecting as many frequencies as possible in the light curves is mandatory in order to make progress in the field of asteroseismology. Only in this way is it possible to compare the values predicted by theoretical models with the observed ones. Consequently, the observation of monoperiodic stars has been put aside in the last decade. However, even these variables display a wide variety of intriguing behaviour.

#### Beta Cas:

It is probably the best known monoperiodic pulsator, also included in the secondary target list of the MONS satellite. Riboni, Poretti, & Galli (1994) proved that a constant value of the frequency cannot fit the times of maximum brightness collected since 1965; on the other hand, a constant value (9.897396 cd<sup>-1</sup>) can fit all the datasets since 1983. Hence, a probable variation of the period occurred between 1965 and 1983. Meanwhile, the amplitude has remained stable ($`\mathrm{\Delta }V`$ = 0.03 mag) over a 25-year baseline, since no significant variation could be inferred. Considering the value of the period (0.101 d), this behaviour is typical of a high-amplitude $`\delta `$ Sct star rather than a small-amplitude one.

#### AZ CMi:

The light curve of AZ CMi is asymmetrical ($`R_{21}`$=0.13 in $`V`$ light; Poretti 2000) and very stable in shape and amplitude ($`\mathrm{\Delta }V`$=0.055 mag); hints of a possibly decreasing amplitude need more observations to be confirmed. $`B`$ and $`V`$ photometry is available: the measured phase shift is $`\varphi _{B-V}-\varphi _V`$ = –9$`\pm `$9 degrees. Since it is negative, such a value slightly favours a nonradial mode, but the error bar hampers a definite identification (Poretti et al. 1996). The examples reported above seem to suggest that monoperiodic stars are simple and stable pulsators, even if the ambiguity between radial and nonradial modes cannot be solved. However, the analysis of other cases complicates the scenario a little:

#### BF Phe:

All the measurements available on this star are well fitted by the single term $`f`$=16.0166 cd<sup>-1</sup> (Poretti et al. 1996). The reality of a secondary peak was discussed and ultimately rejected. The amplitude observed in 1991 in $`V`$ light is very similar to that observed in 1993 in $`y`$ light. However, in $`b`$ light the amplitude observed in 1989 (25 mmag) is larger than that observed in 1993 (16 mmag), and this discrepancy cannot be explained by errors in the measurements; unfortunately, BF Phe was measured in 1989 in $`b`$ light only. Hence, this monoperiodic pulsator shows an amplitude that changed by a factor of 1.7 in two years, if we consider that the amplitude was the same in 1991 and 1993.

#### 28 And:

Rodríguez et al. (1993) demonstrated that 28 And $`\equiv `$ GN And is a monoperiodic pulsator. They carried out a frequency analysis on the datasets available in the literature, and when the main frequency is prewhitened the resulting periodograms do not show any trace of another significant peak. New $`uvby`$ photometric data were acquired in 1996 (Rodríguez et al. 1998); the observed amplitude of the light curves is very small compared with any previous dataset. More precisely, the amplitude was about 19 times smaller than that observed five years before. In this new dataset a second term was detected; however, the reality of this term is not well established, since it is only marginally significant (S/N=4.1, and other smaller peaks are visible in the power spectrum). Further observations are required to verify whether 28 And will display a pulsational behaviour similar to that of V663 Cas, with new frequencies slowly growing to a detectable level.
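The β Cas timing analysis mentioned above rests on the classical O−C method: each observed time of maximum is assigned an integer cycle count from a trial frequency, and the residuals against the computed times reveal period changes. A minimal sketch (the epoch and input times below are placeholders, not real measurements):

```python
import numpy as np

def o_minus_c(t_max, t0, freq):
    """O-C residuals (days) for observed times of maximum light.
    A constant frequency is adequate if the residuals show no trend;
    a period change appears as a break or a parabola in the O-C diagram."""
    t_max = np.asarray(t_max)
    E = np.round((t_max - t0) * freq)      # integer cycle counts
    return E, t_max - (t0 + E / freq)

# beta Cas frequency quoted above; the times are hypothetical HJDs of maximum
E, oc = o_minus_c([2445000.1234, 2447500.4567], t0=2445000.1234,
                  freq=9.897396)
```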
The cases of genuinely monoperiodic $`\delta `$ Sct stars such as BF Phe and (probably) 28 And provide observational evidence that in these pulsators, too, the amplitudes of the excited modes are affected by variations. Moreover, a multiperiodic pulsator such as V663 Cas can show a different pulsational content in different seasons, greatly complicating the observational investigation. As a matter of fact, we proved that changing amplitudes are not a prerogative of pulsators with a large number of excited modes (as observed in XX Pyx, 4 CVn and 44 Tau). It is also possible to find a $`\delta `$ Sct pulsator displaying a single period with a very small amplitude: the light curve of HD 19279 has a full amplitude of only 4.2 mmag (Mantegazza & Poretti 1993). Moreover, the detection of a single frequency in the light curve is not an indication of radial pulsation for $`\delta `$ Sct stars of low amplitude. Table 1 lists the physical parameters as determined by means of Hipparcos parallaxes and Strömgren photometry: as can be noticed, these stars display a large variety of pulsational constants $`Q`$ and, hence, different excited modes.

## 5. $`\delta `$ Sct stars in binary systems

In principle, the study of $`\delta `$ Sct stars belonging to binary systems can supply useful clues for understanding the physics of the pulsation driving. The complications introduced by the companion are much less important than the additional measurable parameters we can attain. Lampens & Boffin (2000) provide a review of $`\delta `$ Sct stars in stellar systems. Here we would like to draw attention to a few binary systems:

#### $`\theta ^2`$ Tau:

It is a component of a wide binary ($`P`$=140.728 d) and there is no interaction between the two stars. However, it is a well-studied pulsator. Breger et al. (1989) determined five frequencies, very close to each other (from 13.23 to 14.62 cd<sup>-1</sup>), without regular spacing among them. Such a very close (much closer than in the case of 4 CVn), isolated multiplet is unusual in the solution of light curves. The amplitudes of the modes seem to be very stable, even if recently Zhiping, Aiying, & Dawei (1997) claimed to have evidence of amplitude variations; further campaigns could be useful to clarify the matter.

#### $`\theta `$ Tuc:

This double-lined spectroscopic binary is located near the South Celestial Pole, which makes it very suitable for long-term monitoring. The orbital period is 7.1036 d; a regular spacing is observed among the 10 detected frequencies, which lie in the interval 15.8–20.3 cd<sup>-1</sup> (Paparó et al. 1996). Low-frequency terms have been observed, and they are ascribed to the orbital period. Mode identification was performed by Sterken (1997) using the Strömgren photometry: the results are not conclusive for most of the frequencies, but the $`f`$=20.28 cd<sup>-1</sup> term is reconcilable with a radial mode. De Mey, Daems, & Sterken (1998) confirmed this result, and they also found that the system has a very low mass ratio, $`q`$=0.09. The possibility that the regular spacing is related to the slow rotation of the $`\delta `$ Sct star deserves further attention. De Mey et al. (1998) also found $`1.7<R_1<2.7R_{\odot }`$ for the radius $`R_1`$ of the $`\delta `$ Sct star.
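The rotation argument that follows is a one-line check: assuming synchronous rotation, the equatorial velocity is $`v=2\pi R/P`$. With the orbital period and the radius limits just quoted (our own sketch):

```python
import math

R_SUN_KM = 6.957e5                        # solar radius in km
P_s = 7.1036 * 86400.0                    # assumed rotation period, in seconds

for R in (1.7, 2.7):                      # radius limits from De Mey et al.
    v = 2 * math.pi * R * R_SUN_KM / P_s
    print(f"R = {R} R_sun -> v_eq = {v:.0f} km/s")   # 12 and 19 km/s
```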
A rotational period of 7.1036 d is not plausible, since it would imply 12 $`<v_1<`$ 19 km s<sup>-1</sup> for the rotational velocity $`v_1`$, a value irreconcilable with the observed $`v_1\mathrm{sin}i`$ of 35 km s<sup>-1</sup> (Mantegazza 2000); even higher values are reported in the literature, see Sterken (1997).

#### AB Cas:

AB Cas is an eclipsing binary with $`P_{\mathrm{orb}}`$=1.367 d; the primary component is also a monoperiodic pulsator with $`P_{\mathrm{puls}}`$=0.058 d (Fig. 8); there is no connection between the two periods (Rodríguez et al. 1998). The light and colour curves are quite normal for this variable: perhaps its main peculiarity is its monoperiodicity. It should be noted that the pulsational period is among the shortest observed in $`\delta `$ Sct stars. The pulsation seems to disappear when the secondary star transits across the disk of the primary: this is due to the large depth of the primary eclipse. However, when the curve due to binarity is removed, the pulsation is visible in the residuals. In fact, during the primary eclipse we can still see about 20% of the disk of the pulsating star. Unfortunately, the observational uncertainties do not allow one to discriminate between radial and nonradial modes by using the changes in amplitude caused by the progress of the eclipse. Other considerations suggest a fundamental radial mode.

#### 57 Tau:

Paparó et al. (2000) indicated 57 Tau as a pulsator showing gravity and pressure modes simultaneously. However, the discovery of its duplicity (Kaye 1999) explains the low-frequency terms as an orbital effect ($`P_{\mathrm{orb}}`$=2.4860 d); the first example of a pulsator combining the $`\gamma `$ Dor (gravity) and $`\delta `$ Sct (pressure) modes has therefore not yet been found. The promising case of BI CMi (Mantegazza & Poretti 1994) is still awaiting closer investigation. As a concluding remark on this subject, the connection between pulsation and duplicity does not seem to be well exploited in current research on $`\delta `$ Sct stars. It is quite obvious that simultaneous photometry and spectroscopy of a pulsator in an eclipsing system can help mode typing. One of the main difficulties met in the combined use of photometric and spectroscopic data on $`\delta `$ Sct stars (the synergic approach described in Sect. 3.4) is to put constraints on the inclination angle. The possibility of determining this angle and the rotational period (as obtained from the orbital solution, assuming synchronous rotation and equatorial orbits) greatly simplifies mode identification by reducing the number of admissible modes. Lampens & Boffin (2000) report on many other candidates for studying the connection between duplicity and pulsation.

## 6. $`\delta `$ Sct stars in open clusters

$`\delta `$ Sct stars belonging to an open cluster offer a very good opportunity to delve deep into the models, thanks to the more precise information available on metallicity, distance and age. The common origin of all the stars allows closer comparisons between the results found for each of them. Interesting results were found by Hernández (1998) by analyzing the data collected by the STEPHI observational network on $`\delta `$ Sct stars located in the Praesepe cluster. Unfortunately, all these stars are very complicated pulsators and it is very difficult to define a clear picture of their frequency content. Alvarez et al. (1998) discussed a set of frequencies for two of them, BQ and BW Cnc.
In particular, BW Cnc displays two pairs of very close frequencies; inspection of the light curves showed how two close frequencies can disappear or be misidentified when the frequency resolution is insufficient. These authors recommend long time baselines for campaigns on $`\delta `$ Sct stars, i.e., much longer than 1 week. Also considering the results described below, this requirement has to be considered mandatory for further work. BQ Cnc is a binary system, but only three frequencies were identified; one of them could be a $`g`$-mode, and further investigation is required. Interesting results on the Praesepe pulsating stars were also obtained by Arentoft et al. (1998), who successfully used defocussed CCD images. The number of detected frequencies was limited, but the possibility of using the Praesepe metallicity allows reliable physical stellar parameters to be derived. Peña et al. (1998) also re-analyzed photometric time series of $`\delta `$ Sct stars in Praesepe, proposing mode identifications on the basis of the physical properties of the cluster. Unfortunately, the uncertainties due to the effects of rotation and convective overshooting somewhat limit these conclusions. The STACC network monitored some northern open clusters (NGC 7245, NGC 7062, NGC 7226, NGC 7654) searching for $`\delta `$ Sct stars (Viskum et al. 1997). As a result, they found that the fraction of these variables is much lower than among field stars and in other open clusters. This observational fact suggests that some parameters govern the selection of the pulsation excitation, and future efforts should be undertaken to discover what they are.

## 7. The frequency content of high amplitude $`\delta `$ Sct stars

After examining the complex phenomenological scenario of low amplitude $`\delta `$ Sct stars, it is interesting to verify what happens in the domain of the high amplitude $`\delta `$ Sct stars (HADS). Rodríguez et al. (1996) analyzed the multicolour data (Strömgren and Johnson systems) for monoperiodic HADS. The results indicate that all these stars, both Pop. I and Pop. II objects, are fundamental radial pulsators. This conclusion was obtained on the basis of the phase shifts and amplitude ratios between light and colour variations. Moreover, Rodríguez (1999) analyzed all the reliable photometric datasets of a selected sample of monoperiodic HADS to study the stability of the light curves. In this manner more than 22000 measurements were scrutinized for seven stars (ZZ Mic, EH Lib, BE Lyn, YZ Boo, SZ Lyn, AD CMi, DY Her). The conclusion was that no significant long-term amplitude changes occurred for any of these stars. The impression is that the HADS are very stable fundamental radial pulsators, with a few exceptions represented by double-mode HADS. Can such an idyllic scenario survive the observational effort made in the last few years? It seems it does, since a large homogeneity was found by Morgan, Simet, & Bargenquast (1998) in analyzing the HADS contained in the OGLE database. We note here that the subgroup with an anomalous $`\varphi _{31}`$ parameter seems to be an artifact (Poretti, in preparation). However, we can single out a few interesting cases among the HADS.

#### V1162 Ori:

Arentoft & Sterken (2000) discuss the time series collected on V1162 Ori. After a period break and a significant decrease in amplitude (about 50%; Hintz, Joner, & Kim 1998), they also found that the period was no longer valid in early 1998 and that the period changed again during March–April 1998.
The latter change was accompanied by an increase in amplitude of the order of 10%; such phenomenology calls to mind the behaviour of some low-amplitude $`\delta `$ Sct stars. A new campaign is in progress to clarify the reasons for these changes.

#### V974 Oph:

The case of V974 Oph is more intriguing. This faint ($`V`$=11.8) variable was observed twice at ESO (July 1987 and April 1989). It was first considered a HADS similar to V1719 Cyg (Poretti & Antonello 1988), but the second run clearly demonstrated that it is a multiperiodic star: at least 4 independent frequencies were detected, ranging from 5.23 to 6.66 cd<sup>-1</sup>. Three of them have a half-amplitude larger than 0.04 mag; harmonic and combination terms are also observed. The latter result ruled out the simple model of a binary star composed of two HADS. The strong changes in the light-curve shape are the largest observed in the amplitude of a HADS (Poretti 2000). Perhaps V974 Oph can be considered a link between the multiperiodic low amplitude $`\delta `$ Sct stars and the stable fundamental radial HADS pulsators.

## 8. Summing-up and Conclusions

In our travels through the observational scenario we met many objects which teach us something about the pulsation of $`\delta `$ Sct stars. The high frequencies shown by XX Pyx, the intermediate frequencies displayed by FG Vir and the low frequencies found in the case of 4 CVn lead toward the idea that the mode excitation is really complex and unpredictable. In this respect, the close similarity between 4 CVn and HD 2724 is comforting. Probably the multiperiodic behaviour of the high-amplitude $`\delta `$ Sct star V974 Oph can also be seen as a simplification of the phenomenology, demonstrating that we can observe multimode excitation even when a large pulsational energy is involved. To complete the similarities in the opposite direction, we observe monoperiodic pulsators with very small amplitudes (HD 19279 and $`\beta `$ Cas). The nature of the excited mode should be investigated in order to understand whether a selection effect acts in these low-amplitude monoperiodic stars and how they differ from a star such as AZ CMi, which displays an asymmetrical light curve. The amplitude variations are observed in a large variety of stars, both multiperiodic (XX Pyx, 4 CVn, 44 Tau) and monoperiodic (28 And and BF Phe). The observation of mode growth in the light curve of a relatively simple pulsator such as V663 Cas clarifies what can happen in much more complicated ones: some modes can be damped and then re-excited. There are some observational facts which make this explanation preferable, even if the model of beating between two close frequencies with similar, constant amplitudes cannot be ruled out. In this context the continuous surveys of $`\delta `$ Sct stars which will be performed by space missions could greatly improve the situation; in the case of intrinsic variations, the time-scale of such variations could be determined. The sharper focus of the phenomenological scenario is not the only result. The effort made by observers in recent years has allowed us to detect a large number of frequencies in stars such as FG Vir, XX Pyx, 4 CVn and BH Psc. By means of multisite campaigns we are able to detect terms with amplitudes less than 1 mmag; under these conditions, ground-based observations can successfully complement space missions.
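The campaign-planning requirements stressed throughout this paper reduce, to first order, to a frequency-resolution rule of thumb: two frequencies separated by $`\mathrm{\Delta }f`$ need a baseline of roughly $`1.5/\mathrm{\Delta }f`$ days to be separated cleanly (the 1.5 safety factor is a common convention, not a value taken from this paper):

```python
def baseline_needed(f1, f2, safety=1.5):
    """Rough time baseline (days) needed to resolve two frequencies (c/d)."""
    return safety / abs(f1 - f2)

# e.g. the 6.90 and 7.01 c/d terms of 44 Tau:
print(f"{baseline_needed(6.90, 7.01):.0f} d")   # ~14 d: one week is not enough
```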
The mode identification techniques are based both on the phase shifts and amplitude ratios of the light and colour curves and on the synergic approach performed by considering spectroscopic curves. Their full exploitation should guarantee an important role for our research in stellar physics.

## 9. The future: the exportation of the results on galactic stars to extragalactic research

There is always a bit of confusion about the taxonomy of short-period pulsating stars: in the Galaxy they are called $`\delta `$ Sct stars if Pop. I objects and SX Phe stars if Pop. II objects. The old definitions “dwarf Cepheids” or RRs or AI Vel stars are currently avoided in the dedicated literature, but they turn up again in papers on extragalactic research. However, the definition currently used by stellar researchers, i.e., high-amplitude $`\delta `$ Sct stars (HADS), seems to be applied irrespective of which Population they belong to, and we are coming back to old definitions with new names. Moreover, the phenomenology previously described warns us that the separation between high- and low-amplitude $`\delta `$ Sct stars is not so obvious and that amplitude cannot be considered a good physical criterion for separating variable stars. Mateo, Hurley-Keller, & Nemec (1998) reported on the discovery of 20 pulsating stars in the Carina dwarf Spheroidal Galaxy. Their periods range from 0.048 d to 0.077 d and their amplitudes are below 0.30 mag in $`V`$ light. Poretti (1999) improved the period values first determined by Mateo et al. (1998), obtaining a clearer picture of the Period–Luminosity–Metallicity relationships (Fig. 9). The observed stars are monoperiodic and their periods are usually shorter than the periods observed in galactic HADS. An extended effort was made to understand the properties of the Fourier parameters of galactic HADS (Morgan et al. 1998), particularly to use them as a mode discriminant. In the case of an external galaxy the approach to the analysis of HADS light curves is the opposite: the Period–Luminosity relationships allow us to separate the pulsation modes in a straightforward way. We then have a tool to investigate the differences thus identified, again using the Fourier decomposition. In this direction, the results described by Poretti (1999) are quite preliminary, but we can be confident that in the near future we will be able to expand the horizon of HADS studies to extragalactic topics.

##### Acknowledgments.

The author wishes to thank M. Breger, G. Handler, M. Paparó, E. Rodríguez, and C. Sterken for comments on the first draft, L. Mantegazza for useful discussions, and J. Vialle for checking the English form.

## References

Akan, C. 1993, A&A, 278, 150
Alvarez, M., Hernández, M.M., Michel, E., et al. 1998, A&A, 340, 149
Arentoft, T., Handler, G., Shobbrook, R.R., et al. 2000, in ASP Conf. Ser., The impact of large-scale surveys on pulsation star research, ed. D. Kurtz & L. Szabados (San Francisco: ASP), in press
Arentoft, T., Kjeldsen, H., Nuspl, J., et al. 1998, A&A, 338, 909
Arentoft, T., & Sterken, C. 2000, A&A, in press
Breger, M. 2000a, MNRAS, in press
Breger, M. 2000b, this volume
Breger, M., Garrido, R., Huang, L., Jiang, S.-y., Guo, Z.-h., Freuh, M., & Paparó, M. 1989, A&A, 214, 209
Breger, M., Handler, G., Garrido, R., et al. 1999b, A&A, 349, 225
Breger, M., & Hiesberger, H. 1999, A&AS, 135, 547
Breger, M., & Pamyatnykh, A.A. 1998, A&A, 332, 958
Breger, M., Pamyatnykh, A.A., Pikall, H., & Garrido, R. 1999a, A&A, 341, 151
Breger, M., Zima, W., Handler, G., Poretti, E., Shobbrook, R.R., et al. 1998, A&A, 331, 271
De Mey, K., Daems, K., & Sterken, C. 1998, A&A, 336, 527
Garrido, R. 2000, this volume
Handler, G., Breger, M., Sullivan, D.J., van der Peet, A.J., Clemens, J.S., et al. 1996, A&A, 307, 529
Handler, G., Pamyatnykh, A.A., Zima, W., Sullivan, D.J., Audard, N., & Nitta, A. 1998, MNRAS, 295, 377
Handler, G., Pikall, H., O’Donoghue, D., Buckley, D.A.H., Vauclair, G., et al. 1997, MNRAS, 286, 303
Hernández, M.M. 1998, Ph.D. Thesis
Hintz, E., Joner, M.D., & Kim, C. 1998, PASP, 110, 689
Kaye, A.B. 1999, IBVS 4617
Kurtz, D.W., & Muller, M. 1999, MNRAS, 310, 1071
Kuschnig, R., Weiss, W.W., Gruber, R., Bely, P.Y., & Jenkner, H. 1997, A&A, 328, 544
Lampens, P., & Boffin, H.M.J. 2000, this volume
López de Coca, P., Garrido, R., Costa, V., & Rolland, A. 1984, IBVS 2465
Mantegazza, L. 2000, this volume
Mantegazza, L., & Poretti, E. 1990, A&A, 230, 91
Mantegazza, L., & Poretti, E. 1993, A&A, 274, 811
Mantegazza, L., & Poretti, E. 1994, A&A, 284, 66
Mantegazza, L., & Poretti, E. 1999, A&A, 348, 139
Mantegazza, L., Poretti, E., & Bossi, M. 1994, A&A, 287, 95
Mantegazza, L., Poretti, E., & Bossi, M. 1996, A&A, 308, 847
Mantegazza, L., Poretti, E., & Zerbi, F.M. 1995, A&A, 268, 369
Marconi, M., & Palla, F. 1998, ApJ, 507, L141
Mateo, M., Hurley-Keller, D., & Nemec, J. 1998, AJ, 115, 1856
Morgan, S., Simet, M., & Bargenquast, S. 1998, Acta Astronomica, 48, 509
Pamyatnykh, A.A., Dziembowski, W., Handler, G., & Pikall, H. 1998, A&A, 141, 150
Paparó, M., Rodríguez, E., McNamara, B.J., Kolláth, Z., Rolland, A., Gonzalez-Bedolla, S.F., Jiang, S.-y., & Zhiping, L. 2000, A&AS, in press
Paparó, M., Sterken, C., Spoon, H.W.W., & Birch, P.V. 1996, A&A, 315, 400
Park, N.-K., & Lee, S.-W. 1995, AJ, 109, 774
Peña, J.H., Peniche, R., Hobart, M.A., et al. 1998, A&AS, 129, 9
Poretti, E. 1999, A&A, 343, 385
Poretti, E. 2000, in NATO-ASI, Variable Stars as Essential Astrophysical Tools, ed. C. Ibanoǧlu (Kluwer Academic Publishers, The Netherlands), p. 227
Poretti, E., & Antonello, E. 1988, A&A, 199, 191
Poretti, E., Mantegazza, L., & Bossi, M. 1996, A&A, 312, 912
Poretti, E., Mantegazza, L., & Riboni, E. 1992, A&A, 256, 113
Riboni, E., Poretti, E., & Galli, G. 1994, A&AS, 108, 55
Rodríguez, E. 1999, PASP, 111, 709
Rodríguez, E., Claret, A., Sedano, J.L., García, J.M., & Garrido, R. 1998, A&A, 340, 196
Rodríguez, E., Rolland, A., López de Coca, P., Garrido, R., & Mendoza, E.E. 1993, A&A, 273, 473
Rodríguez, E., Rolland, A., López de Coca, P., & Martín, S. 1996, A&A, 307, 539
Rodríguez, E., Rolland, A., López-Gonzáles, M.J., & Costa, V. 1998, A&A, 338, 905
Sedano, J.L., Rodríguez, E., & López de Coca, P. 1987, IBVS 3122
Sterken, C. 1997, A&A, 325, 563
Szeidl, B. 2000, this volume
Viskum, M., Hernández, M.M., Belmonte, J.A., & Frandsen, S. 1997, A&A, 328, 158
Viskum, M., Kjeldsen, H., Bedding, T.R., et al. 1998, A&A, 335, 549
Zhiping, L., Aiying, Z., & Dawei, Y. 1997, PASP, 109, 217
# The Future of Radio Astronomy: Options for Dealing with Human Generated Interference

## 1. Telescope Sensitivity

Moore’s law for the growth of computing power with time (i.e., a doubling every 18 months) is often quoted as being vitally important for the success of the next generation of radio telescopes (working at cm wavelengths) such as the Square Kilometre Array (SKA). It is worth noting that radio astronomy has enjoyed a Moore’s law of its own, with an exponential improvement in sensitivity over time, as shown in Figure 1. In fact the doubling time is approximately 3 years, and the growth has been in progress since 1940, giving an overall improvement in sensitivity of $`10^5`$. However, we are approaching the fundamental limits of large mechanical dishes and the noise limits of broad band receiver systems. We need to look to other means of extending this growth into the future. Why do we want to be on this exponential growth curve? Fields of research continue to produce scientific advances while they maintain an exponential growth in some fundamentally limiting parameter. For radio astronomy, sensitivity is definitely fundamentally limiting. An interesting question is whether there are other parameters for which an exponential growth could be maintained for a period of time. The first and most obvious point about exponential growth is that it cannot be sustained indefinitely. Can we maintain it for some time into the future? There are 2 basic ways to stay on the exponential curve: 1) spend more money, 2) take advantage of technological advances in other areas.

### 1.1. International Mega Science Projects

International cooperation is now needed for dramatic improvements in sensitivity, because no one country can afford to achieve them alone. ALMA, the Atacama Large MM Array being developed by the USA, Europe and Japan, is an example. It will cost around $US700M spread over 1999-2007 and will provide an unprecedented opportunity to study redshifted molecular lines. The Square Kilometre Array (SKA), which will work at centimetre wavelengths, is likely to be a collaboration of 10 or more nations, spending $US500M over 2008 to 2015. There are 2 basic approaches to funding such large projects: 1) user pays, where member countries pay for a slice of the time, or 2) the member countries build the facility and make it openly accessible to all. Optical astronomy has moved very much down path 1), as the Keck, VLT and Gemini type facilities demonstrate. Radio astronomy has traditionally followed path 2), as have projects like CERN. For future facilities there may be pressure to move more into the user-pays regime, possibly bringing about substantial change in the dynamics of the radio astronomy community. In the past, considerable extensibility was attained by each country learning from the last one to build a telescope. That evolution is very clear from Figure 1. Since we are likely to be moving to internationally funded projects, that path to extensibility is much more restricted, and designers must think very carefully about designing extensibility into the next generation of telescopes.

### 1.2. Extensibility Through Improved Technologies

For a given telescope, past and present extensibility has been achieved by improvements in 3 main areas.

System Temperature: Reber started out with a 5000 K system temperature. Modern systems now run at around 20 K, meaning that if everything else were kept constant, Reber’s telescope would now be 250 times more sensitive than when first built. There are possibilities of some improvements in the future, but nothing like what was possible in the past.
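These improvement factors follow from the radiometer equation, in which the rms noise scales as $`T_{sys}/\sqrt{\mathrm{\Delta }\nu \tau }`$. A small sketch (our own illustration; the beam term is counted as a survey-speed gain of $`\sqrt{N}`$ at fixed total observing time):

```python
import math

def sensitivity_gain(tsys_old, tsys_new, bw_factor=1.0, n_beams=1):
    """Relative sensitivity gain: rms noise ~ Tsys / sqrt(bandwidth * time)."""
    return (tsys_old / tsys_new) * math.sqrt(bw_factor) * math.sqrt(n_beams)

print(sensitivity_gain(5000, 20))                  # ~250x: Reber -> modern Tsys
print(sensitivity_gain(20, 20, bw_factor=500))     # ~22x: a 500x wider band
```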
Band Width: Telescopes like the GBT (Green Bank Telescope), having bandwidths some 500 times greater than Reber’s, will give factors of 20–25 improvement in sensitivity. Some future improvements will be possible, but again they will not be as large as in the past.

Multiple Beams: Whether in the focal or aperture plane, multiple beam systems provide an excellent extensibility path, allowing vastly deeper surveys than were possible in the past. Although multiple beam systems have been used for a number of experiments in the past, the full potential of this approach is yet to be exploited. A notable example that has made a stride forward in this direction is the Parkes L band system (Staveley-Smith et al. 1996, PASA, 13, 243). The fully sampled focal plane phased array system being developed at NRAO by Fisher and Bradley highlights the likely path for the future.

Using these three methods, a small telescope like the Parkes 64m has remained on the exponential curve and at the forefront of scientific discovery for 35 years (shown in Figure 1 in 1962 and in 1997). Other telescopes have of course undergone similar evolution; we only highlight Parkes as an example. The scope for continuing this evolution looks good for the next decade, but beyond that more collecting area will be needed. A 64 beam system installed in 2010 would allow Parkes to stay on the curve for some time. The technology to do this is probably only 3-4 years away from a realisable system, making it possible to jump well ahead of the curve. Putting a 100 beam system on Arecibo by 2005 is possible and would allow Arecibo to jump way out in front of the curve, as it did when first built in 1964. The relevance of Moore’s law in this context is that, if it continues to hold true for the next 1-2 decades, it will provide the necessary back end computational power to realise the gains possible with multiple beams.

## 2. Key Technologies Driving Future Developments

HEMT receivers, which are wide band, cheap, small, reliable, low noise systems with many elements.

Focal plane arrays giving large, fully sampled fields of view will allow rapid sky coverage for survey applications and great flexibility for targetted observations, including novel possibilities for calibration and interference excision.

Interference rejection, allowing passive use of the spectrum outside the bands allocated to passive uses. High dynamic range linear systems, coupled with high temperature superconducting or photonic filters, will allow use of the spectrum between communication signals. Adaptive techniques may allow some cochannel experiments, by removing the undesired signals so that astronomy signals can be seen behind them.

More computing capacity may result in much more of the system being defined in software rather than hardware. This may lead to a very different expenditure structure, where software is a capital expense and computing hardware is considered a consumable or running cost.

Fibre/photonic based beamforming and transmission of recorded signals will revolutionise bandwidths and signal quality, especially for high resolution science.

Software radio and smart antenna techniques will allow great flexibility in signal processing and signal selection.

## 3. Summary of New Facilities

## 4. Interference Sources and Spectrum Management

It is important to be clear about what we mean when we talk about interference.
Radio astronomers make passive use of many parts of the spectrum legally allocated to communication and other services. As a result, many of the unwanted signals are entirely legal and legitimate. We will adopt the working definition that interference is any unwanted signal getting into the receiving system. If future telescopes like the SKA are developed with sensitivities up to 100 times greater than present sensitivities, it is quite likely that current regulations will not provide the necessary protection. There is also a range of experiments (e.g. redshifted hydrogen or molecular lines) which require use of the whole spectrum, but only from a few locations and at particular times, suggesting that a very flexible approach may be beneficial. Other experiments require very large bandwidths in order to have enough sensitivity. Presently only 1-2% of the spectrum in the metre and centimetre bands is reserved for passive uses such as radio astronomy. In the millimetre band, much larger pieces of the spectrum are available for passive use, but the existing allocations are not necessarily at the most useful frequencies.

### 4.1. Terrestrial Sources of Interference

Interference can arise from a wide variety of terrestrial sources, including communications signals and services, electric fences, car ignitions, computing equipment, domestic appliances and many others. All of these are regulated by national authorities and the ITU (International Telecommunications Union). In the case of Australia, there is a single communications authority for the whole country and therefore for the whole continent. As a result there is a single database containing information on the frequency, strength, location, etc. of every licensed transmitter. This makes negotiations over terrestrial spectrum use simpler in principle.

### 4.2. Space & Air Borne Sources of Interference

Radio astronomy could deal with most terrestrial interfering signals by moving to a remote location, where the density and strength of unwanted signals is greatly reduced. However, with the increasing number of space borne telecom and other communications systems in low (and mid) Earth orbits, a new class of interference mitigation challenges is arising - radio astronomy can run, but it can’t hide! There are several new aspects introduced to the interference mitigation problem by this, including rapid motion of the transmitter, more strong transmitters in dish sidelobes and possibly in the primary beam, and different spectrum management challenges. There is an upside to the space borne communication systems, in that they help to develop the technology that makes space VLBI possible, which leads to the greatest possible resolution. A classic example of the problems that can arise is provided by the Iridium mobile communications system, which has a constellation of satellites transmitting signals to every point on the surface of the Earth. Unfortunately in this case there is some leakage into the passive band around 1612 MHz, with signal levels up to $`10^{11}`$ times as strong as signals from the early universe.

### 4.3. Radio Quiet Reserves

Radio quiet reserves have been employed in a number of places, with Green Bank being a notable success. For future facilities such as the SKA and ALMA, the opportunity exists to set radio quiet reserve planning in motion a decade before the instruments are actually built.
Radio quiet reserves of the future may take advantage not only of spatial and frequency orthogonality to human generated signals, but also time, coding and other means of multiplexing. These later parameters may be particularly important for obtaining protection from space borne undesired signals, a number of which illuminate most of the Earths surface. ## 5. Radio Wavelength Fundamentals Undesired interfering signals and astronomy signals can differ (be orthogonal) in a range of parameters: $``$ Frequency $``$ Time $``$ Position $``$ Polarisation $``$ Distance $``$ Coding It is extremely rare that interfering and astronomy signals do not possess some level of orthogonality in this 6 dimensional parameter space. We therefore need to develop sufficiently flexible back end systems to take advantage of the orthogonality and separate the signals. This is of course very similar to the kinds of problems faced by mobile communication services, which are being addressed with smart antennas and software radio technologies. ## 6. Interference Excision Approaches There is no silver bullet for detecting weak astronomical signals in the presence of undesired human generated signals. Spectral bands allocated for passive use, provide a vital window, which cannot be achieved in any other way. There are a range of techniques that can make some passive use of other bands possible and in general these need to be used in combined or complimentary way. Screening to prevent signals entering the primary elements of receivers. Front end filtering (possibly using high temperature super conductors) to remove strong signals as soon as they enter the signal path. High dynamic range linear receivers to allow appropriate detection of both astronomy (signals below the noise) and interfering signals. Notch filters (digital or ananlog) to excise particularly bad spectral regions. Decoding to remove multiplexed signals. Blanking of period or time dependent signals is a very succesful but simple case of this more general approach. Calibration to provide the best possible characterisation of interfering and astronomy signals. Cancellation of undesired signals, before correlation using adaptive filters and after taking advantage of phase closure techniques (Sault et al. 1997) Adaptive beam forming to steer nulls onto interfering sources. Conceptually, this is equivalent to cancellation, but it provides a way of taking advantage of the spatial orthogonality of astronomy and interfering signals. ### 6.1. Adaptive Systems Of all the approaches listed above, the nulling or cancellation systems (may be adaptive) are the most likely to permit the observation of weak astronomy signals that are coincident in frequency with undesired signals. These techniques have been used extensively in military, communications, sonar, radar, medicine and others (Widrow & Stearns 1985, Haykin 1995). Radio astronomers have not kept pace with these developments and in this case need to infuse rather diffuse technology in this area. A prototype cancellation system developed at NRAO (shown in Figure 2) has demonstrated 70dB of rejection on the lab bench and 30dB of rejection on real signals when attached to the 140 foot at Green Bank (Barnbaum & Bradley 1998). Adaptive Nulling systems are being prototyped by NFRA in the Netherlands. However their application in the presence of real radio astronomy signals is yet to be demonstrated and their toxicity to the weak astronomy signals needs to be quantified. 
The best prospect for doing this in the near future is recording baseband data from exisiting telescope, containing both interferring and astronomy signals (Bell et al. 1999). A number of algorithms can then be implemented is software and assessed relative to each other. ## 7. The Telecommunications Revolution We cannot (and dont want to) impede this revolution, but we can try to minimise its impact on passive users of the radio spectrum and maximise the benefits of technological advances. The deregulation of this industry has had some impact on the politics. Major companies now play a prominent (dominate ?) role in the ITU (International Telecommunications union). Protection of the bands for passive use must therefore addressed and promoted by government. ## 8. Spectrum Pricing There may be some novel ways in which spectrum pricing could evolve in order to provide incentives for careful use of a precious resource. Radio astronomy and other passive users cannot in general afford commercial rates and therefore need government support. One possibility would be to have a green tax which could be used to fund interference management and research. Such strategies do not come without cost. While the long term economic cost may be relatively small, upfront R&D costs to an individual company may compromise their competitiveness. This issue must therefore be addressed at national or international policy level. Unlike many other environmental resource use problems, spectrum over use is both reversible and possible to curtail. This leads to certain political advantages because politicians like to have problems which they can solve and this is a more soluble problem than many other environemntal problems. ## 9. Remedies Siting Radio Telescopes Choose remote sites with natural shielding helps but doesn’t protect against satellite interference. Establish radio quiet zones, using National government regulations. This is easier for fixed than mobile transmitters. Far side of moon or L2 Lagrangian point are naturally occurring radio quite zones but are very expensive to use. OECD Mega Science Forum: Task Force on Radio Astronomy The goals of the OECD mega science forum are complimentary to IAU efforts, providing a path for top down influence of governments which would otherwise not be possible. This task force aims to promote constructive dialog between: regulatory bodies, international radio astronomy community, telecommunications companies, and government science agencies. It will investigate three approaches favoured by Megascience Forum: Technological solutions, regulation, and radio quiet reserves. Environmental Impact In the meter and cm band $`<`$1% is allocated to passive use! 99% already used, resource use has been extravagant. Almost all the spectrum at wavelenghts $`>`$ 1cm are now polluted, and situation is rapidly deteriorating at shorter wavelengths. It is reversable and sheareable in more creative ways! in contrast to most other pollution problems. ## 10. Funding of radio astronomy University based radio astronomy research in the USA has suffered relative to other wavelengths for 2 reasons: 1) The centralised development at NRAO has made it difficult for many universities to remain involved in technical developments. 2) Space based programs in infrared, optical, UV, X-ray, and Gamma ray bands have a rather different funding structure, where access to research funds is based on successful observing proposals. 
Radio astronomy has no access to such funds and therefore is a relatively uneconomical pursuit for astronomers. This method of funding is being taken up in other countries and radio astronomy needs to find a way to join the scheme. More globally, radio astronomy has suffered relative to other wavelengths, because data acquisition, reduction and analysis is unnecessarily complex. Researchers have to spend a lot more effort in data processing than other areas of astronomy. For example, in many other bands, fully calibrated data lands on the researchers desk a few days after the observations were taken. Radio astronomy needs to finds ways to move into this regime (as Westerbork have done to some degree), but at the same time preserve the vast flexibility that can be derived from measuring the electric field at the aperture. ## 11. Conclusions The possibilities for the future of radio astronomy are good, but there are some challenges issues for the community to consider and address: $``$ Whole of radio spectrum needed for redshifted lines $``$ About 2% of spectrum is reserved for passive use by regulation so must develop other approaches $``$ We cannot (and don’t want to) impede the telecommunications revolution $``$ Radio astronomy has low credibility until we use advanced techniques $``$ Essential to influence government policy $``$ Astronomers should have a uniform position $``$ Threatening language doesn’t help $``$ Is interference harmful? ## References Barnbaum, c. & Bradley, R., 1998, AJ, 116, 2598. Bell J. F., et al. 1999 ”Software radio telescope: interference mitigation atlas and mitigation strategies”, in Perspectives in Radio Astronomy: Scientific Imperatives at cm and m Wavelengths (Dwingeloo: NFRA), Edited by: M.P. van Haarlem & J.M. van der Hulst. Haykin, S., 1995, “Adaptive Filter Theory” Prentice Hall. Sault, B., Ekers, R., Kewley, L. 1997 “Cross-correlation approaches to interference mitigation” Sydney SKA workshop http://www.atnf.csiro.au/SKA/WS/ Smolders, A. B., 1999, “Phased-array system for the next generation of radio telescopes”, in Perspectives in Radio Astronomy: Scientific Imperatives at cm and m Wavelengths (Dwingeloo: NFRA), Edited by: M.P. van Haarlem & J.M. van der Hulst. Staveley-Smith L. et al. 1996, PASA, 13, 243 ”The Parkes 21cm Multibeam Receiver”. An overview of the science and overall system design of the multibeam receiver. Widrow, B. & Stearns, S., 1985, “Adaptive Signal Processing” Prentice Hall Interference Mitigation Web pages http://www.atnf.csiro.au/SKA/intmit/ AN SKA web site http://www.atnf.csiro.au/SKA/
no-problem/0002/math-ph0002002.html
ar5iv
text
# References CERN-TH/2000-013 math-th/0001— SOLUTIONS OF $`D_\alpha `$ = 0 FROM HOMOGENEOUS INVARIANT FUNCTIONS F. Buccella <sup>1</sup><sup>1</sup>1On leave of absence from Dipartimento di Scienze Fisiche, Università di Napoli. Theoretical Physics Division, CERN CH - 1211 Geneva 23 Abstract We prove that the existence of a homogeneous invariant of degree $`n`$ for a representation of a semi-simple Lie group guarantees the existence of non-trivial solutions of $`D_\alpha =0`$: these correspond to the maximum value of the square of the invariant divided by the norm of the representation to the $`n^{th}`$ power. CERN-TH/2000-013 January 2000 The search of solutions of the equation: $$D_\alpha =0$$ (1) is a necessary tool for the classification of non-trivial supersymmetric vacua . Several years ago a sufficient condition was proposed for the validity of (1), namely the existence of an invariant $`F(z)`$, such that: $$(\frac{\delta F}{\delta z_a})_{z_a}=kz_a^{}$$ (2) with $`k0`$ . On the basis of absence of known counterexamples the condition has been conjectured to be also necessary and the proof has been given by Procesi and Schwarz . Here we want to show that a sufficient condition for the existence of non-trivial solutions of (1) is the existence of a homogeneous invariant $`F^n(z)`$ of degree $`n`$. The proof goes as follows. Be $`F(z_a)`$ a homogeneous invariant of degree $`n`$; let us consider the function: $$G(z_a,z_a^{})=\frac{F(z_a)F(z_a^{})}{N(z_a)^n}$$ (3) where: $$N(z_a)=\mathrm{\Sigma }_az_az_a^{}.$$ (4) Let us look for the maxima of $`G`$, which is real, positive and homogeneous of degree 0 in the domain: $$\frac{1}{2}N(z_a)1.$$ (5) A maximum certainly exists, since $`G`$ is a continuous function of the real variables $`x_a=\frac{z_a+z_a^{}}{2}`$ and $`y_a=\frac{z_az_a^{}}{2i}`$ defined on a closed and limited set (Weierstrass theorem). Since $`G`$ is a homogeneous function of degree 0, it is constant along the intersection of the radii, which start from the origin, with the set defined by (5). Therefore the maximum will occur also on an internal point $`z_a^0`$ of the domain, where: $$\frac{\delta G}{\delta z_a}_{z_a=z_a^0}=\left(\frac{\frac{\delta F}{\delta z_a}F(z_a^{})N(z_a)nz_a^{}F(z_a)F(z_a^{})}{N(z_a)^{n+1}}\right)_{z_a=z_a^0}$$ (6) should vanish. At that point, $`F(z_a)`$ and $`F(z_a^{})`$ are different from zero, since the maximum of a positive function, which is not identically zero, is positive. $`N(z)`$ is also positive, so from (6) we get: $$\frac{\delta F}{\delta z_a}_{z_a=z_a^0}=kz_a^0$$ (7) with $`k=\frac{nF(z_a)}{N(z_a)}0`$, which guarantees the vanishing of the $`D`$ term for $`z_a^{()}=z_a^{0()}`$. To give some application of the theorem just shown, let us consider an irreducible representation $`\varphi `$ of a semi-simple Lie group $`G`$. We may decompose the symmetric product of two $`\varphi `$’s along irreducible representations of $`G`$: $$(\varphi \times \varphi )_S=\mathrm{\Sigma }_l\varphi _l$$ (8) The number of independent quartic invariants, bilinear in $`\varphi `$ and its complex conjugate $`\varphi ^{}`$, is given by the number of terms in the r.h.s. of (8). More precisely they are: $$I_l=N(\varphi \times \varphi )_l.$$ (9) Also the invariant: $$I_\alpha =N(\varphi ^{}\times \varphi )_{adjoint},$$ (10) which is proportional to $`D_\alpha D^\alpha `$, is a combination of the $`I_l`$’s. 
If the sum in (8) has only one term, there is only one independent quartic invariant, bilinear in $`\varphi `$ and $`\varphi ^{}`$, $`I_\alpha `$ is proportional to $`N(\varphi )^2`$ and there is no non-trivial solution to (1). In that case we can conclude that we cannot write an invariant, which depends only on $`\varphi `$. Let us now consider three cases, where there is a cubic invariant built in terms of an irreducible complex representation of a simple group: according to the theorem just shown, there will be solutions of (1). We shall consider the 6 of $`SU(3)`$, the 15 of $`SU(6)`$ and the 27 of $`E(6)`$ and their symmetric products: $`(6\times 6)_S`$ $`=`$ $`15+\overline{6}`$ $`(15\times 15)_S`$ $`=`$ $`105+\overline{15}`$ $`(27\times 27)_S`$ $`=`$ $`351+\overline{27}`$ (11) So there are two independent quartic invariants bilinear in the 6 (or the 15 or the 27) and in the $`\overline{6}`$ (or the $`\overline{15}`$ or the $`\overline{27}`$), and one has: $`18N(6\times 6)_{\overline{6}}`$ $`+`$ $`15N(6\times \overline{6})_8=8N(6)^2`$ $`9N(15\times 15)_{\overline{15}}`$ $`+`$ $`12N(15\times \overline{15})_{35}=8N(15)^2`$ $`15N(27\times 27)_{\overline{27}}`$ $`+`$ $`9N(27\times \overline{27})_{78}=2N(27)^2`$ (12) where the second terms in the l.h.s.’s of (12) are just proportional to $`D_\alpha D^\alpha `$. The two terms in the l.h.s.’s of (12) have the intriguing property that one vanishes when the other takes its maximum. The vanishing of the second term when the second takes its maximum is a consequence of the theorem we have just shown. In fact, when $`\varphi _a`$ is on a critical orbit of an irreducible representation $`\varphi `$, the invariants: $$\frac{N(6\times 6)_{\overline{6}}}{N(6)^2}\mathrm{and}\frac{N(6\times 6\times 6)_1}{N(6)^3}$$ are both proportional to $`(C_{\phi _a\phi _a\phi _a^{}}^{66\overline{6}})^2`$, which implies that, if $`\varphi _a`$ is a maximum for the first one, it is a maximum for the second one as well. (The implication in the opposite direction does not hold for a non-critical orbit, since in that case $`N(\varphi \times \varphi )_\varphi ^{}`$ receives contributions also from $`\varphi ^{}\varphi _a^{}`$.) A similar property holds for the 15 of $`SU(6)`$ and the 27 of $`E(6)`$. The $`D`$ term vanishes in the $`SO(3)`$, $`Sp(6)`$ or $`F(4)`$ invariant direction respectively. From the other side the first terms in (12) vanish in the $`SU(2)`$, $`SU(4)\times SU(2)`$ or $`SO(10)`$ invariant direction, respectively. This is not surprising, because, when $`\phi _a`$ correspond to the maximal weight of the representation, as in the cases just mentioned, $`(\phi _a\times \phi _a)`$ has components only along the representation with the highest maximal weight. The necessity of the existence of an invariant for the existence of solutions of (1), already proved in , implies that no invariant can be constructed with a representation unable to supply non-trivial solutions to (1). It applies to the case with only one term in the r.h.s. of (8) and $`D_\alpha D^\alpha `$ proportional to $`N(\varphi )^2`$, but also to cases with more than one term present in (8). As an example, let us consider the 16 spinorial representation of $`SO(10)`$, for which: $`(16\times 16)_S=126+10`$ $`32N(16\times 16)_{10}+16N(16\times \overline{16})_{45}=5N(16)^2.`$ (13) While the maximum of the second term in the l.h.s. 
of (13) in the $`SU(5)`$ invariant direction corresponds to the vanishing of the first term , the maximum of the first term in the $`SO(7)`$ invariant direction corresponds to a minimum, but not to a vanishing value for $`D_\alpha D^\alpha `$. In fact no $`SO(10)`$ invariant can be built only with a 16. For semi-simple groups we shall consider the bifundamental $`(N,\overline{M})`$ representation of $`SU(N)\times SU(M)`$. For $`M=N`$, the existence of the invariant: $$ϵ_{\beta _1\mathrm{}\beta _N}^{\gamma _1\mathrm{}\gamma _N}\varphi _{\alpha _1}^{\beta _1}\mathrm{}\varphi _{\gamma _N}^{\beta _N}$$ (14) implies the existence of a solution of (1), namely the singlet under the sum of the two $`SU(N)`$. If $`N>M`$ the contribution $`\phi _M^2`$ to $`D_\alpha D^\alpha `$ from $`SU(N)`$ cannot vanish, since $$\frac{(D_\alpha D^\alpha )_{SU(N)}}{g_N^2}>\frac{(D_\alpha D^\alpha )_{SU(M)}}{g_M^2}$$ (15) and the r.h.s. of (15) is non-negative: in fact no invariant can be built in that case. For $`N=3,M=2`$ one has $$(\varphi \times \varphi )_s=(6,3)+(\overline{3},1)$$ (16) and $`N(\varphi \times \varphi )_{(\overline{3},1)}`$ takes its maximum in the direction where the contribution to $`D_\alpha D^\alpha `$ from $`SU(2)`$ (but not from $`SU(3)`$) vanishes. Acknowledgements Instructive and inspiring discussions with Prof. S. Ferrara are gratefully acknowledged.
no-problem/0002/nlin0002047.html
ar5iv
text
# Deriving 𝑁-soliton solutions via constrained flows ## 1. Introduction In recent years much work has been devoted to the constrained flows of soliton equations (see, for example, \[1-7\]). It was shown in \[1-3\] that (1+1)-dimensional soliton equation can be factorized by $`x`$\- and $`t`$-constrained flow which can be transformed into two commuting $`x`$\- and $`t`$-finite-dimensional integrable Hamiltonian systems. The Lax representation for constrained flows can be deduced from the adjoint representation of the auxiliary linear problem for soliton equations . By means of the Lax representation, the standard method in \[8-10\] enables us to introduce the separation variables for constrained flows \[11-15\] and to establish the Jacobi inversion problem \[13-15\]. Finally, the factorization of soliton equations and separability of the constrianed flows allow us to find the Jacobi inversion problem for soliton equations \[13-15\]. By using the Jacobi inversion technique , the $`N`$-gap solutions in term of Riemann theta functions for soliton equations can be obtained, namely, the constrained flows can be used to derive the $`N`$-gap solution for soliton equations. It has been believed that the constrained flows can also been used directly to derive the $`N`$-soliton solutions for soliton equations. However this case remains a challenging problem. It is well known that trere are several methods to derive the $`N`$-soliton solution of soliton equations, such as the inverse scattering method, the Hirota method, the dressing method, the Darboux transformation, etc. (see, for example, \[18-20\] and references therein). In present paper, we propose a method to construct directly $`N`$-soliton solution from two commuting $`x`$\- and $`t`$-constrained flows. We will illustrate the method by KdV equation. The method can be applied to other soliton equations. ## 2. Constrained flows We first recall the constrained flows and factorization of soliton equations by using KdV equation. Let consider the Schr$`\ddot{\text{o}}`$dinger spectral problem $$\varphi _{xx}+u\varphi =\lambda \varphi .$$ $`2.1`$ The KdV hierarchy associated with (2.1) can be written in infinite-dimensional integrable Hamiltonian system \[18-20\] $$u_{t_n}=_x\frac{\delta H_n}{\delta u},n=1,2,\mathrm{},$$ $`2.2`$ where $$\frac{\delta H_n}{\delta u}=L^nu,L=_x^2+4u2_x^1u_x,_x^1_x=_x_x^1=1.$$ $`2.3`$ The well known KdV equation reads $$u_t6uu_x+u_{xxx}=0.$$ $`2.4`$ For KdV equation (2.4), the time evolution equation of $`\varphi `$ is given by $$\varphi _t=4\lambda \varphi _x+2u\varphi _xu_x\varphi .$$ $`2.5`$ The compatibility condition of (2.1) and (2.5) gives rise to (2.4). It is known that $$\frac{\delta \lambda }{\delta u}=\varphi ^2.$$ $`2.6`$ The constrained flows of the KdV hierarchy consists of the equations obtained from the spectral problem (2.1) for $`N`$ distinct real numbers $`\lambda _j`$ and the restriction of the variational derivatives for the conserved quantities $`H_{k_0}`$ (for any fixed $`k_0`$) and $`\lambda _j`$ \[2-4\] $$\varphi _{j,xx}+u\varphi _j=\lambda _j\varphi _j,j=1,\mathrm{},N,$$ $`2.7a`$ $$\frac{\delta H_{k_0}}{\delta u}\underset{j=1}{\overset{N}{}}\alpha _j\frac{\delta \lambda _j}{\delta u}=0.$$ $`2.7b`$ The system (2.7) is invariant under all the KdV flows (2.2). 
For $`k_0=0`$, in order to obtain $`N`$-soliton solution, we take $$\lambda _j<0,\zeta _j=\sqrt{\lambda _j},\alpha _j=4\zeta _j,j=1,\mathrm{},N,$$ one gets from (2.7b) $$u=4\underset{j=1}{\overset{N}{}}\zeta _j\varphi _j^2=4\mathrm{\Phi }^T\mathrm{\Theta }\mathrm{\Phi },$$ $`2.8`$ where $$\mathrm{\Phi }=(\varphi _1,\mathrm{},\varphi _N)^T,\mathrm{\Theta }=diag(\zeta _1,\mathrm{},\zeta _N),\mathrm{\Lambda }=diag(\lambda _1,\mathrm{},\lambda _N).$$ By substituting (2.8), (2.7a) becomes $$\varphi _{j,xx}+4\underset{i=1}{\overset{N}{}}\zeta _i\varphi _i^2\varphi _j=\lambda _j\varphi _j,j=1,\mathrm{},N,$$ or equivalently $$\mathrm{\Phi }_{xx}=\mathrm{\Lambda }\mathrm{\Phi }+4\mathrm{\Phi }\mathrm{\Phi }^T\mathrm{\Theta }\mathrm{\Phi }.$$ $`2.9`$ After inserting (2.8), (2.5) reads $$\mathrm{\Phi }_t=4\mathrm{\Lambda }\mathrm{\Phi }_x+8\mathrm{\Phi }_x\mathrm{\Phi }^T\mathrm{\Theta }\mathrm{\Phi }8\mathrm{\Phi }\mathrm{\Phi }^T\mathrm{\Theta }\mathrm{\Phi }_x.$$ $`2.10`$ The compatibility of (2.7), (2.10) and (2.4) ensures that if $`\mathrm{\Phi }`$ satisfies two compatible systems (2.9) and (2.10), simultaneously, then $`u`$ given by (2.8) is a solution of KdV equation (2.4), namely, the KdV equation (2.4) is factorized by the $`x`$-constrained flow (2.9) and $`t`$-constrained flow (2.10). The Lax representation for the constrained flows (2.9) and (2.10), which can be deduced from the adjoint representation of the spectral problem (2.1) by using the method in , is given by $$Q_x=[\stackrel{~}{U},Q],$$ where $`\stackrel{~}{U}`$ and the Lax matrix $`Q`$ are of the form $$\stackrel{~}{U}=\left(\begin{array}{cc}0& 1\\ \lambda +4\mathrm{\Phi }^T\mathrm{\Theta }\mathrm{\Phi }& 0\end{array}\right),M=\left(\begin{array}{cc}A(\lambda )& B(\lambda )\\ C(\lambda )& A(\lambda )\end{array}\right),$$ $$A(\lambda )=2\underset{j=1}{\overset{N}{}}\frac{\zeta _j\varphi _j\varphi _{j,x}}{\lambda \lambda _j},B(\lambda )=1+2\underset{j=1}{\overset{N}{}}\frac{\zeta _j\varphi _j^2}{\lambda \lambda _j},$$ $$C(\lambda )=\lambda +2\mathrm{\Phi }^T\mathrm{\Theta }\mathrm{\Phi }2\underset{j=1}{\overset{N}{}}\frac{\zeta _j\varphi _{j,x}^2}{\lambda \lambda _j}.$$ Then $`\frac{1}{2}TrM^2(\lambda )=A^2(\lambda )+B(\lambda )C(\lambda )`$, which is a generating function of integrals of motion for the system (2.9) and (2.10), gives rise to $$A^2(\lambda )+B(\lambda )C(\lambda )=\lambda 2\underset{j=1}{\overset{N}{}}\frac{F_j}{\lambda \lambda _j},$$ where $`F_j,j=1,\mathrm{},N,`$ are $`N`$ independent integrals of motion for the systems (2.9) and (2.10) $$F_j=\varphi _{j,x}^2+(\lambda _j2\underset{i=1}{\overset{N}{}}\zeta _i\varphi _i^2)\varphi _j^2+2\underset{kj}{}\frac{\zeta _k(\varphi _{j,x}\varphi _k\varphi _j\varphi _{k,x})^2}{\lambda _j\lambda _k},j=1,\mathrm{},N.$$ $`2.11`$ ## 3. Deriving $`N`$-soliton solution In order to constructing $`N`$-soliton solution, we have to set $`F_j=0`$. It follows from (2.9) that $$\frac{\varphi _{j,x}\varphi _k\varphi _j\varphi _{k,x}}{\lambda _j\lambda _k}=_x^1(\varphi _j\varphi _k).$$ $`3.1`$ Then one gets $$F_j=\varphi _{j,x}^2+(\lambda _j2\underset{i=1}{\overset{N}{}}\zeta _i\varphi _i^2)\varphi _j^22\underset{k=1}{\overset{N}{}}\zeta _k(\varphi _{j,x}\varphi _k\varphi _j\varphi _{k,x})_x^1(\varphi _j\varphi _k)=0,j=1,\mathrm{},N.$$ $`3.2`$ The integrals of motion $`F_j`$ can be used to reduce the order of system (2.9). 
By multiplying (2.9) by $`\varphi _j`$ and adding it to (3.2), one obtains $$\varphi _j[\varphi _{j,x}2\underset{k=1}{\overset{N}{}}\zeta _k\varphi _k_x^1(\varphi _j\varphi _k)]_x+\varphi _{j,x}[\varphi _{j,x}2\underset{k=1}{\overset{N}{}}\zeta _k\varphi _k_x^1(\varphi _j\varphi _k)]=0,j=1,\mathrm{},N,$$ which results to $$\varphi _{j,x}2\underset{k=1}{\overset{N}{}}\zeta _k\varphi _k_x^1(\varphi _j\varphi _k)=\gamma _j\varphi _j,\gamma _j=\gamma _j(t),j=1,\mathrm{},N,$$ or equivalently $$\mathrm{\Phi }_x=\mathrm{\Gamma }\mathrm{\Phi }+2_x^1(\mathrm{\Phi }\mathrm{\Phi }^T)\mathrm{\Theta }\mathrm{\Phi },$$ $`3.3`$ where $`\mathrm{\Gamma }=diag(\gamma _1,\mathrm{},\gamma _N).`$ Set $$R=2_x^1(\mathrm{\Phi }\mathrm{\Phi }^T)\mathrm{\Theta },$$ $`3.4`$ Eq. (3.3) can be rewritten as $$\mathrm{\Phi }_x=\mathrm{\Gamma }\mathrm{\Phi }+R\mathrm{\Phi }.$$ $`3.5`$ Notice that $$2\mathrm{\Phi }\mathrm{\Phi }^T=R_x\mathrm{\Theta }^1,\mathrm{\Theta }R=R^T\mathrm{\Theta },$$ $`3.6`$ it follows from (3.4) and (3.5)that $$R_x=2_x^1(\mathrm{\Phi }_x\mathrm{\Phi }^T+\mathrm{\Phi }\mathrm{\Phi }_x^T)\mathrm{\Theta }$$ $$=2_x^1(\mathrm{\Gamma }R_x+RR_xR_x\mathrm{\Gamma }+R_xR)=\mathrm{\Gamma }RR\mathrm{\Gamma }+R^2.$$ $`3.7`$ We now show that $$\gamma _j^2=\lambda _j,\text{or}\mathrm{\Gamma }^2=\mathrm{\Lambda }.$$ $`3.8`$ In fact, it is found from (3.5), (3.6) and (3.7) that $$\mathrm{\Phi }_{xx}=\mathrm{\Gamma }\mathrm{\Phi }_x+R\mathrm{\Phi }_x+R_x\mathrm{\Phi }=\mathrm{\Gamma }^2\mathrm{\Phi }+(\mathrm{\Gamma }RR\mathrm{\Gamma }+R^2)\mathrm{\Phi }+R_x\mathrm{\Phi }$$ $$=\mathrm{\Gamma }^2\mathrm{\Phi }+2R_x\mathrm{\Phi }=\mathrm{\Gamma }^2\mathrm{\Phi }+4\mathrm{\Phi }\mathrm{\Phi }^T\mathrm{\Theta }\mathrm{\Phi },$$ which together with (2.9) leads to (3.8). Therefore, we can take $`\mathrm{\Gamma }=\mathrm{\Theta }`$, (3.5) and (3.7) can be rewritten as $$\mathrm{\Phi }_x=\mathrm{\Theta }\mathrm{\Phi }+R\mathrm{\Phi },$$ $`3.9`$ and $$R_x=\mathrm{\Theta }RR\mathrm{\Theta }+R^2,$$ $`3.10`$ $$2\mathrm{\Phi }\mathrm{\Phi }^T=R_x\mathrm{\Theta }^1=\mathrm{\Theta }R\mathrm{\Theta }^1R+R^2\mathrm{\Theta }^1.$$ $`3.11`$ To solve (3.9), we first consider the linear system $$\mathrm{\Psi }_x=\mathrm{\Theta }\mathrm{\Psi }.$$ $`3.12`$ It is easy to see that $$\mathrm{\Psi }=(c_1(t)e^{\zeta _1x},\mathrm{},c_N(t)e^{\zeta _Nx})^T.$$ $`3.13`$ Take the solution of (3.9) to be of the form $$\mathrm{\Phi }=\mathrm{\Psi }M\mathrm{\Psi },$$ then $`M`$ has to satisfy $$M_x=\mathrm{\Theta }M+M\mathrm{\Theta }R+RM.$$ $`3.14`$ Comparing (3.14) with (3.10), one finds $$M=\frac{1}{2}R\mathrm{\Theta }^1=_x^1(\mathrm{\Phi }\mathrm{\Phi }^T).$$ $`3.15`$ So we have $$\mathrm{\Phi }=(IM)\mathrm{\Psi }=[I_x^1(\mathrm{\Phi }\mathrm{\Phi }^T)]\mathrm{\Psi },$$ $`3.16`$ which leads to $$\mathrm{\Psi }=\underset{n=0}{\overset{\mathrm{}}{}}M^n\mathrm{\Phi }.$$ $`3.17`$ By using (3.15) and (3.17), it is found from that $$_x^1(\mathrm{\Psi }\mathrm{\Psi }^T)=_x^1\underset{n=0}{\overset{\mathrm{}}{}}\underset{l=0}{\overset{n}{}}M^l\mathrm{\Phi }\mathrm{\Phi }^TM^{nl}$$ $$=_x^1\underset{n=0}{\overset{\mathrm{}}{}}\underset{l=0}{\overset{n}{}}M^lM_xM^{nl}=\underset{n=1}{\overset{\mathrm{}}{}}M^n.$$ $`3.18`$ Set $$V=(v_{ij})=_x^1(\mathrm{\Psi }\mathrm{\Psi }^T),v_{ij}=\frac{c_i(t)c_j(t)}{\zeta _i+\zeta _j}e^{(\zeta _i+\zeta _j)x}.$$ $`3.19`$ One obtain $$(I+V)\mathrm{\Phi }=\mathrm{\Psi },\text{or}\mathrm{\Phi }=(IM)\mathrm{\Psi }=(I+V)^1\mathrm{\Psi }.$$ $`3.20`$ By inserting (3.9) and (3.11), (2.10) becomes $$\mathrm{\Phi }_t=[4\mathrm{\Theta 
}^34\mathrm{\Theta }^2R+8(\mathrm{\Theta }+R)\mathrm{\Phi }\mathrm{\Phi }^T\mathrm{\Theta }8\mathrm{\Phi }\mathrm{\Phi }^T\mathrm{\Theta }(\mathrm{\Theta }+R)]\mathrm{\Phi }$$ $$=4\mathrm{\Theta }^3\mathrm{\Phi }4R\mathrm{\Theta }^2\mathrm{\Phi }.$$ $`3.21`$ Let $`\mathrm{\Psi }`$ satisfy the linear system $$\mathrm{\Psi }_t=4\mathrm{\Theta }^3\mathrm{\Psi },$$ $`3.22`$ then $$\mathrm{\Psi }=(c_1(t)e^{\zeta _1x},\mathrm{},c_N(t)e^{\zeta _Nx})^T,c_i(t)=\beta _je^{4\zeta _j^3t},j=1,\mathrm{},N.$$ $`3.23`$ We now show that $`\mathrm{\Phi }`$ determined by (3.20) and (3.23) satisfy (3.21). In fact, we have $$\mathrm{\Phi }_t=(I+V)^1V_t(I+V)^1\mathrm{\Psi }+(I+V)^1\mathrm{\Psi }_t$$ $$=4\mathrm{\Theta }^3\mathrm{\Phi }4M\mathrm{\Theta }^3\mathrm{\Phi }4(IM)V\mathrm{\Theta }^3\mathrm{\Phi }$$ $$=4\mathrm{\Theta }^3\mathrm{\Phi }8M\mathrm{\Theta }^3\mathrm{\Phi }=4\mathrm{\Theta }^3\mathrm{\Phi }4R\mathrm{\Theta }^2\mathrm{\Phi }.$$ Therefore $`\mathrm{\Phi }`$ given by (3.20) and (3.23) satisfies (2.9) and (2.10) simultaneously and $`u=4\mathrm{\Phi }^T\mathrm{\Theta }\mathrm{\Phi }`$ is the solution of KdV equation (2.4). It is easy to show that this solution is just the $`N`$-soliton solution. Notice that $$2_x(\mathrm{\Psi }^T\mathrm{\Phi })=2\mathrm{\Psi }^T\mathrm{\Theta }\mathrm{\Phi }+2\mathrm{\Psi }^T(\mathrm{\Theta }+R)\mathrm{\Phi }$$ $$=4\mathrm{\Phi }^T(I+V)(IM)\mathrm{\Theta }\mathrm{\Phi }=4\mathrm{\Phi }^T\mathrm{\Theta }\mathrm{\Phi },$$ namely $$u=2_x\underset{i=1}{\overset{N}{}}c_i(t)e^{\zeta _ix}\varphi _i.$$ $`3.24`$ Formulas (3.20), (3.23) and (3.24) are just that obtained from the Gelfand-Levintan-Marchenko equation for determining the $`N`$-soliton solution for KdV equation and finally results to the well-known expression for $`N`$-soliton solution of KdV equation (2.4) $$u=2_x^2\text{ln}(\text{det}(I+V)).$$ ## 4. Conclusion The factorization of the KdV equation into two compatible $`x`$\- and $`t`$-constrained flows enables us to derive directly the $`N`$-soliton solution via the $`x`$\- and $`t`$-constrained flows. The method presented in this paper can be applied to other soliton equations for directly obtaining $`N`$-soliton solutions from constrained flows. ## Acknowledgments This work was supported by the Chinese Basic Research Project ”Nonlinear Sciences”. ## References 1. Cao Cewen, 1991, Acta math. Sinica, New Series 17, 216. 2. Zeng Yunbo, 1991, Phys. Lett. A 160, 541. 3. Zeng Yunbo, 1994, Physica D 73, 171. 4. Zeng Yunbo and Li Yishen, 1993, J. Phys. A 26, L273. 5. Antonowicz M and Rauch-Wojciechowski S, 1992, Phys. Lett. A 171, 303. 6. Ragnisco O and Rauch-Wojciechowski S, 1992, Inverse Problems 8, 245. 7. Ma W X and Strampp W, 1994, Phys. Lett. A 185, 277. 8. Sklyanin E K, 1989, J. Soviet. Math. 47, 2473. 9. Kuznetsov V B, 1992, J. Math. Phys. 33, 3240. 10. Sklyanin E K, 1995, Theor. Phys. Suppl. 118, 35. 11. Eilbeck J C, Enol’skii V Z, Kuznetsov V B and Tsiganov A V, 1994, J. Phys. A 27, 567. 12. Kulish P P, Rauch-Wojciechowski S and Tsiganov A V,1996, J. Math. Phys. 37, 3463. 13. Zeng Yunbo, 1997, J. Math. Phys. 38, 321. 14. Zeng Yunbo, 1997, J. Phys. A 30, 3719. 15. Zeng Yunbo and Ma W X, 1999, Physica A 274(3-4), 1. 16. Dubrovin B A, 1981, Russian Math. Survey 36, 11. 17. Its A and Matveev V, 1975, Theor. Mat. Fiz. 23, 51. 18. Ablowitz M and Segur H, 1981, Solitons and the inverse scattering transform (Philadelphia, PA:SIAM). 19. Newell A C, 1985, Soliton in Mathematics and Physics (Philadelphia, PA:SIAM). 20. 
Faddeev L D and Takjtajan L A, 1987, Hamiltonian methods in the theory of solitons (Berlin: Springer).
no-problem/0002/astro-ph0002214.html
ar5iv
text
# REIONIZATION AND THE ABUNDANCE OF GALACTIC SATELLITES ## 1. Introduction Cold Dark Matter (CDM) models with scale-invariant initial fluctuation spectra define an elegant and well-motivated class of theories with marked success at explaining most properties of the local and high-redshift universe. One of the few perceived problems of such models is the apparent overprediction of the number of satellites with circular velocities $`v_c1030\mathrm{km}\mathrm{s}^1`$ within the virialized dark halos of the Milky Way and M31 (Klypin et al. 1999a, hereafter KKVP99; Moore et al. 1999; see also Kauffmann et al. 1993 and Gonzales et al. 1998). The most recent of these investigations are based on dissipationless simulations, so the problem may be rephrased as a mismatch between the expected number of dark matter subhalos orbiting within the Local Group and the observed number of satellite galaxies. In order to overcome this difficulty, several authors have suggested modifications to the standard CDM scenario. These include (1) reducing the small-scale power by either appealing to a specialized model of inflation with broken scale invariance (e.g., Kamionkowski & Liddle 1999) or substituting Warm Dark Matter for CDM (e.g., Hogan 1999), and (2) allowing for strong self-interaction among dark matter particles, thereby enhancing satellite destruction within galactic halos (e.g., Spergel and Steinhardt 1999; but see the counter-argument by Miralda-Escudé 2000). In this paper, we explore a more conservative solution. Many authors have pointed out that accretion of gas onto low-mass halos and subsequent star formation are inefficient in the presence of a strong photoionizing background (Ikeuchi 1986; Rees 1986, Babul & Rees 1992; Efstathiou 1992; Shapiro, Giroux, & Babul 1994; Thoul & Weinberg 1996; Quinn, Katz, & Efstathiou 1996). Motivated by these results, we investigate whether the abundance of satellite galaxies can be explained in hierarchical models if low-mass galaxy formation is suppressed after reionization, and we propose that the observable satellites correspond to those halos that accreted a substantial amount of gas before reionization. This solution is similar in some ways to the idea that supernova feedback ejects gas from low-mass halos (Dekel & Silk 1986), but it offers a natural explanation for why some satellite galaxies at each circular velocity survive, while most are too dim to be seen. In addition, the reionization solution seems physically inevitable, while the feedback mechanism may be inadequate in all but the smallest halos (Mac Low & Ferrara 1998). Our approach to calculating the satellite abundance uses an extension of the Press-Schecter (1974) formalism to predict the mass accretion history of galactic halos and a simple model for orbital evolution of the accreted substructure. For our analysis, we adopt a flat CDM model with a non-zero vacuum energy and the following parameters: $`\mathrm{\Omega }_m=0.3,\mathrm{\Omega }_\mathrm{\Lambda }=0.7,h=0.7,\sigma _8=1.0`$, where $`\sigma _8`$ is the rms fluctuation on the scale of $`8h^1`$ Mpc, $`h`$ is the Hubble constant in units of $`100\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$, and $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ are the density contributions of matter and the vacuum respectively, in units of the critical density. ## 2. Method ### 2.1. 
Modeling the Subhalo Distribution Because N-body simulations that resolve low-mass subhalos are computationally expensive, we develop an approximate analytic model for the accretion history and orbital evolution of satellite halos within a typical Milky Way-size dark halo. Our general procedure is to use the extended Press-Schechter method (Bond et al. 1991; Lacey & Cole 1993, hereafter LC93) to construct the subhalo accretion history for each galactic halo and then to determine which accreted subhalos are dragged to the center due to dynamical friction and which are tidally destroyed. The subhalos that survive at $`z=0`$ are used to construct the final velocity function of satellite halos. We assume that the density profile of each halo is described by the NFW form (Navarro, Frenk, & White 1997): $`\rho _{\mathrm{NFW}}(x)x^1(1+x)^2`$, where $`x=r/r_s`$, and $`r_s`$ is a characteristic inner radius. Given a halo of mass $`M_{\mathrm{vir}}`$ at redshift $`z`$, the model of Bullock et al. (1999) supplies the typical $`r_s`$ value and specifies the profile completely. The circular velocity curve, $`v^2(r)GM(r)/r`$, peaks at the radius $`r_{\mathrm{max}}2.16r_\mathrm{s}`$. Throughout this paper, we use $`v_c`$ to refer to the circular velocity at $`r_{\mathrm{max}}`$. We begin by constructing mass growth and halo accretion histories for an ensemble of galaxy-sized dark matter halos. This is done using the merger tree method of Somerville & Kolatt (1999, hereafter SK99). We modified the method slightly to require that at each time step the number of progenitors in the mass range of interest be close to the expected average. This modification significantly improves the agreement between the generated progenitor mass function and the analytic prediction. We start with halos of mass $`M_{\mathrm{vir}}=1.1\times 10^{12}h^1M_{}`$, at $`z=0`$, corresponding to $`v_c=220\mathrm{km}\mathrm{s}^1`$, and trace satellite accretion histories back to $`z=10`$ using time steps chosen as described in SK98. We track only accreted halos that are more massive than $`M_m=7\times 10^6h^1M_{}`$, which corresponds to $`v_c10\mathrm{km}\mathrm{s}^1`$ at $`z=10`$ and $`v_c4\mathrm{km}\mathrm{s}^1`$ at $`z=0`$, and treat the accretion of halos smaller than $`M_m`$ as diffuse mass growth. For each step back in time, the merger tree provides a list of progenitors. The most massive progenitor is identified with the galactic halo and the rest of the progenitors (with $`M>M_m`$) are recorded as accreted substructure. We are left with a record of the mass growth for the galactic halo, $`M_{\mathrm{vir}}(z)`$, as well as the mass of each accreted subhalo, $`M_a`$, and the redshift of its accretion, $`z_a`$. For the results presented below, we use 300 ensembles of formation histories for galactic host halos. Our results do not change if the number of ensembles is increased. Each subhalo is assigned an initial orbital circularity, $`ϵ`$, defined as the ratio of the angular momentum of the subhalo to that of a circular orbit with the same energy $`ϵJ/J_c`$. We choose $`ϵ`$ by drawing a random number uniformly distributed from 0.1 to 1.0, an approximation to the distribution found in cosmological simulations (Ghigna et al. 1998). Our results are not sensitive to the choice of the above range. To determine whether the accreted halo’s orbit will decay, we use Chandrasekhar’s formula to calculate the decay time, $`\tau _{DF}^c`$, of the orbit’s circular radius $`R_c`$, as outlined in Klypin et al. (1999b). 
The circular radius is defined as the radius of a circular orbit with the same energy as the actual orbit. Each subhalo is assumed to start at $`R_c=0.5R_{\mathrm{vir}}(z_a)`$, where $`R_{\mathrm{vir}}(z_a)`$ is the virial radius of the host halo at the time of accretion. This choice approximately represents typical binding energies of subhalos in simulations. The amplitude of the resulting velocity function is somewhat sensitive to this choice. If we adopt $`R_c=0.75R_{\mathrm{vir}}(z_a)`$ (less bound orbits), the amplitude increases by $`50\%`$. A change of this magnitude will not affect our conclusions, discussed in §4. Once $`\tau _{DF}^\mathrm{c}`$ is known, the fitting formula of Colpi et al. (1999) provides the appropriate decay time for the given circularity $$\tau _{DF}=\tau _{DF}^\mathrm{c}ϵ^{0.4}.$$ (1) If $`\tau _{DF}`$ is smaller than the time left between $`z_a`$ and $`z=0`$, $`\tau _{DF}t_0t_a`$, then the subhalo will not survive until the present, and it is therefore removed from the list of galactic subhalos. If $`\tau _{DF}`$ is too long for the orbit to have decayed completely ($`\tau _{DF}>t_0t_a`$), we check whether the subhalo would have been tidally disrupted. We assume that the halo is disrupted if the tidal radius becomes smaller than $`r_{\mathrm{max}}`$. Such a situation means either a drastic reduction in the measured $`v_c`$ (quickly pushing $`v_c`$ below our range of interest) or that the halo (and any central baryonic component) should become unstable and rapidly dissolve (Moore, Katz & Lake 1996). The tidal radius, $`r_t`$, is determined at the pericenter of the orbit at $`z=0`$, where the tides are the strongest. For an orbit that has decayed to a final circular radius $`R_c^0`$, the pericenter at this time, $`R_p`$, is given by the smaller of the roots of the bound orbit equation (e.g., van den Bosch et al. 1999) $$\left(\frac{R_c^0}{r}\right)^2\frac{1}{ϵ^2}=\frac{2[\mathrm{\Phi }(R_c^0)\mathrm{\Phi }(r)]}{ϵ^2v^2(R_c^0)}.$$ (2) Here, $`\mathrm{\Phi }(r)=4.6v_c^2\mathrm{ln}(1+x)/x`$ is the potential of the host halo. Once $`R_p`$ is known, we determine $`r_t`$ as outlined in Klypin et al. (1999b). If $`r_tr_{\mathrm{max}}`$ we declare the subhalo to be tidally destroyed and remove it from the list of galactic subhalos. Accreted halos that survive ($`\tau _{DF}>t_0t_a`$ and $`r_t>r_{\mathrm{max}}`$) are assumed to preserve the $`v_c`$ they had when they were accreted. The resulting velocity function of surviving subhalos within the virial radius of the host ($`200h^1\mathrm{kpc}`$ at $`z=0`$), averaged over all merger histories, is shown by the thin solid line in Figure 1. The error bars reflect the measured dispersion among different merger histories. The thick straight line is the best fit to the corresponding velocity function measured in the cosmological N-body simulation of KKVP99. The upper dashed lines show the velocity function if the effects of dynamical friction and/or tidal disruption are ignored. The analytic model reproduces the N-body results remarkably well. We have not tuned any parameters to obtain this agreement, although we noted above that plausible changes in the assumed initial circular radius $`R_c`$ could change the analytic prediction by $`50\%`$. The good agreement suggests that our analytic model captures the essential physics underlying the N-body results. 
An interesting feature of the model is that the subhalos surviving at $`z=0`$ are only a small fraction of the halos actually accreted, most of which are destroyed by tidal disruption. We discuss implications of this satellite destruction in §4. ### 2.2. Modeling Observable Satellites The second step in our model is to determine which of the surviving halos at $`z=0`$ will host observable satellite galaxies. The key assumption is that after the reionization redshift $`z_{\mathrm{re}}`$, gas accretion is suppressed in halos with $`v_c<v_T`$. We adopt a threshold of $`v_T=30\mathrm{km}\mathrm{s}^1`$, based on the results of Thoul & Weinberg (1996), who showed that galaxy formation is suppressed in the presence of a photoionizing background for objects smaller than $`30\mathrm{km}\mathrm{s}^1`$. This threshold was shown to be insensitive to the assumed spectral index and amplitude of the ionizing background (a similar result was found by Quinn et al. 1996). Shapiro et al. (1997; see also Shapiro & Raga 2000) and Barkana & Loeb (2000) have suggested that very low mass systems ($`v_c10\mathrm{km}\mathrm{s}^1`$) could lose the gas they have already accreted after reionization occurs, but we do not consider this possibility here. The calculation of the halo velocity function in §2.1 is approximate but straightforward, and we have checked its validity by comparing to N-body simulations. Determining which of these halos are luminous enough to represent known dwarf satellites requires more uncertain assumptions about gas cooling and star formation. We adopt a simple model that has two free parameters: the reionization redshift $`z_{\mathrm{re}}`$ and the fraction $`f=M(z_{\mathrm{re}})/M_a`$ of a subhalo’s mass that must be in place by $`z_{\mathrm{re}}`$ in order for the halo to host an observable galaxy. The value of $`z_{\mathrm{re}}`$ is constrained to $`z_{\mathrm{re}}5`$ by observations of high-$`z`$ quasars (e.g., Songaila et al. 1999) and to $`z_{\mathrm{re}}50`$ by measurements of small-angle CMB anisotropies, assuming typical ranges for the cosmological parameters (e.g., Griffiths, Barbosa, & Liddle 1999). The value of $`f`$ is constrained by the requirement that observable halos have mass-to-light ratios in the range of observed dwarf satellites. For the subset of (dwarf irregular) satellite galaxies with well-determined masses, the mass-to-light ratios span the range $`M/L_V530`$ (Mateo 1998); dwarf spheroidals have similar $`M/L_V`$, but with a broader range and larger observational uncertainties. We can estimate $`M/L_V`$ for model galaxies by assuming that they accrete a baryon mass $`fM_a(\mathrm{\Omega }_b/\mathrm{\Omega }_m)`$ before $`z_{\mathrm{re}}`$ and convert that accreted gas with efficiency $`ϵ_{}`$ into a stellar population with mass-to-light ratio $`M_{}/L_V`$, obtaining $$\left(\frac{M}{L_V}\right)=f^1\left(\frac{\mathrm{\Omega }_m}{\mathrm{\Omega }_b}\right)\left(\frac{M_{}}{L_V}\right)ϵ_{}^1F_o.$$ (3) The factor $`F_o`$ is the fraction of the halo’s virial mass $`M_a`$ that lies within its final optical radius (which may itself be affected by tidal truncation). For the $`M/L_V`$ values quoted above, the optical radius is typically $`2\mathrm{k}\mathrm{p}\mathrm{c}`$ (Mateo 1998), and representative mass profiles of surviving halos imply $`F_o(2\mathrm{k}\mathrm{p}\mathrm{c})0.5`$; however, this factor must be considered uncertain at the factor of two level. 
The value of $`ϵ_{}`$ is also uncertain because of the uncertain influence of supernova feedback, but by definition $`ϵ_{}1`$. Adopting a value $`M_{}/L_V0.7`$ typical for galactic disk stars (Binney & Merrifield 1998), $`\mathrm{\Omega }_m/\mathrm{\Omega }_b7`$ (based on $`\mathrm{\Omega }_m=0.3,`$ $`h=0.7`$, and $`\mathrm{\Omega }_bh^20.02`$ from Burles & Tytler ), $`ϵ_{}=0.5`$, and $`F_o=0.5`$, we obtain $`(M/L_V)5f^1`$. Matching the mass-to-light ratios of typical dwarf satellite galaxies then implies $`f0.3`$. With the uncertainties described above, a range $`f0.10.8`$ is plausible, and the range in observed $`(M/L_V)`$ could reflect in large part the variations in $`f`$ from galaxy to galaxy. Values of $`f0.1`$ would imply excessive mass-to-light ratios, unless the factor $`F_o`$ can be much smaller than we have assumed. In sum, the two parameters that determine the fraction of surviving halos that are observable are $`z_{\mathrm{re}}`$ and $`f`$, with plausible values in the range $`z_{\mathrm{re}}550`$ and $`f0.10.8`$. For a given subhalo of mass $`M_a`$ and accretion redshift $`z_a`$, we use equation (2.26) of LC93 to probabilistically determine the redshift $`z_f`$ when the main progenitor of the subhalo was first more massive than $`M_f=fM_a`$. We associate the subhalo with an observable galactic satellite only if $`z_fz_{\mathrm{re}}`$. ## 3. Results Figure 2 shows results of our model for the specific choices of $`z_{\mathrm{re}}=8`$ and $`f=0.3`$. The thin solid line is the velocity function of all surviving subhalos at $`z=0`$, reproduced from Figure 1. The thick line and shaded region shows the average and scatter in the expected number of observable satellites ($`z_f>z_{\mathrm{re}}`$). The solid triangles show the observed satellite galaxies of the Milky Way and M31 within radii of $`200h^1\mathrm{kpc}`$ from the centers of each galaxy (note that all results are scaled to a fiducial volume $`1h^3\mathrm{Mpc}^3`$). We see that the theoretical velocity function for visible satellites is consistent with that observed, and that the total number of observable systems is $`10\%`$ of the total dark halo abundance. The reason for this difference is that most of the halos form after reionization. The observable satellites are within halos that formed early, corresponding to rare, high peaks in the initial density field. Other combinations of $`z_{\mathrm{re}}`$ and $`f`$ can provide similar results because the fraction of mass in place by the epoch of reionization is larger if reionization occurs later. We find that the following pairs of choices also reproduce the observed galactic satellite velocity function: $`(f,z_{\mathrm{re}})=(0.1,12)`$;$`(0.2,10)`$;$`(0.4,7)`$;$`(0.5,5)`$. It is encouraging that successful parameter choices fall in the range of naturally expected values. However, our model fails if reionization occurs too early ($`z_{\mathrm{re}}\text{ }>\text{ }12`$), since the value of $`f`$ required to reproduce the observed velocity function would imply excessively large $`M/L_V`$ ratios. We conclude that the observed abundance of satellite galaxies may be explained in the CDM scenario if galaxy formation in low-mass halos is suppressed after the epoch of reionization, provided that $`z_{\mathrm{re}}\text{ }<\text{ }12`$. ## 4. Discussion The suppression of gas accretion by the photoionizing background offers an attractive solution to the dwarf satellite problem. 
The physical mechanism seems natural, almost inevitable, and requires no fine tuning of the primordial fluctuation spectrum or properties of the dark matter. Although, in principle, a similar explanation could be obtained with supernova feedback, reionization naturally explains why most subhalos are dark, while the fraction $`10\%`$ that accreted a substantial fraction of their mass before $`z_{\mathrm{re}}`$ remain visible today. With feedback, it is not obvious why any dwarf satellites would survive, and certainly not the specific number observed. Photoionization also naturally explains why the discrepancy with CDM predictions appears at $`v_c30\mathrm{km}\mathrm{s}^1`$, rather than at a higher or lower $`v_c`$. In contrast, studies of the feedback mechanism suggest that it is difficult to achieve the required suppression of star formation in halos with $`v_c\text{ }>\text{ }15\mathrm{km}\mathrm{s}^1`$ (Mac Low & Ferrara 1998). In order to obtain a reasonable $`M/L`$ ratio for satellite galaxies in this scenario, the reionization redshift must be relatively low, $`z_{\mathrm{re}}\text{ }<\text{ }12`$. For higher $`z_{\mathrm{re}}`$, it is hard to understand how dwarf galaxies with $`v_c<30\mathrm{km}\mathrm{s}^1`$ could have formed at all, though a blue power spectrum or non-Gaussian primordial fluctuations could help, or perhaps dwarfs could form by fragmentation within larger proto-galaxies. Currently, the reionization redshift is constrained only within the rather broad range $`z_{\mathrm{re}}550`$, but it might be determined in the future by CMB experiments or by spectroscopic studies of luminous high-$`z`$ objects. A clear prediction that distinguishes this model from models with suppressed small-scale power or self-interacting dark matter is that there should be a large number of low-mass subhalos associated with the Milky Way and similar galaxies. If we assume that the model presented in §2.2 applies in all cases, then the observed dwarf satellites should be just the low $`M/L`$ tail of the underlying population. For example, in our fiducial case (Figure 2, $`z_{\mathrm{re}}=8`$, $`f=0.3`$), reducing $`f`$ by a factor of 3 — corresponding to an increase in the average $`M/L`$ by a factor of 3 — raises the predicted number of galaxies by a factor of 6. Reducing $`f`$ (increasing $`M/L`$) by a factor of 7 raises the predicted number of satellites by a factor of 10, and accounts for $`98\%`$ of the dark subhalos. Large area, deep imaging surveys may soon be able to reveal faint dwarf satellites that lie below current detection limits. Dark halo satellites may also be detectable by their gravitational influence. For example, it may be possible to detect halo substructure via gravitational lensing. Mao & Schneider (1998) point out that at least in the case of the quadruply imaged QSO B 1422 + 231, substructure may be needed in order to account for the observed flux ratios. It is also plausible that a large number of dark satellites could have destructive effects on disk galaxies (Toth & Ostriker 1992; Ibata & Razoumov 1998; Weinberg 1998). Moore et al. (1999) used a simple calculation based on the impulse approximation to estimate the amount of heating that the subclumps in their simulations would produce on a stellar disk embedded at the halo center. They concluded that this type of heating is not problematic for galaxies like the Milky Way, but that the presence of galaxies with no thick disk, such as NGC 4244, may be problematic. 
Our results indicate that the variation in halo formation histories is substantial, suggesting that thin disks could occur in recently formed systems with little time for significant heating to have occurred, but a detailed investigation of this subject is certainly warranted. It is tempting to identify some of these subhalos with the High Velocity Clouds (HVCs) (Braun & Burton 1999; Blitz et al. 1998). In our scenario, it would be possible to account for some of these objects provided we modify the model of §2.2. We currently assume that there is no accretion after reionization, and that most of the gas that accretes before $`z_{\mathrm{re}}`$ is converted into stars. However, it may be that accretion starts again at low redshift, after the level of the UV background drops (e.g., Babul & Rees 1992; Kepner, Babul & Spergel 1997), and these late-accreting systems might form stars inefficiently and retain their gas as HI. If this is the case, HVCs may be associated with subhalos that are accreted at late times. We find that, on average, $`60\%`$ of surviving subhalos fall into the host halo after $`z=1`$, and $`40\%`$ fall in after $`z=0.5`$. Another prediction is that there should be a diffuse stellar distribution in the Milky Way halo associated with the disruption of many galactic satellites (see Figure 1). If we assume that the destroyed subhalos had the same stellar content as the surviving halos, then we can estimate the radial density profile of this component by placing the stars from each disrupted halo at the apocenter of its orbit, where they would spend most of their time. This calculation yields a density profile $`\rho _{}(r)r^\alpha `$, with $`\alpha =2.5\pm 0.3`$, extending from $`r10100h^1\mathrm{kpc}`$, which is roughly consistent with the distribution of known stellar halo populations such as RR Lyrae variables (e.g., Wetterer & McGraw 1996). The normalization of the profile is more uncertain, but for the parameters $`z_{\mathrm{re}}=8`$ and $`f=0.3`$, and with the assumption that each halo with $`z_f>z_{\mathrm{re}}`$ has a mass in stars of $`M_{}=f(\mathrm{\Omega }_b/\mathrm{\Omega }_0)ϵ_{}M_a`$, we find that the stellar mass of the disrupted component is $`M_{}5\times 10^8h^1M_{}`$. This diffuse distribution could make up a large fraction of the stellar halo, perhaps all of it. Observationally, it may be difficult to distinguish a disrupted population from a stellar halo formed by other means, but perhaps phase space substructure may provide a useful diagnostic (e.g., Johnston 1998; Helmi et al. 1999). This disrupted population would not be present in models with suppressed small scale power or warm dark matter. However, it would be expected in the self-interacting dark matter scenario. In this case, the distribution would probably extend to a larger radius because dark matter interactions would disrupt the satellite halos further out. There are other problems facing the CDM hypothesis, such as the possible disagreement between the predicted inner slopes of halo profiles and the rotation curves of dwarf and LSB galaxies (Moore et al. 1999; Kravtsov et al. 1998; Flores & Primack 1994; Moore 1994). The mechanism proposed here does not solve this problem, though more complicated effects of gas dynamics and star formation might do so. We have shown that one of the problems facing CDM can be resolved by a simple gas dynamical mechanism. 
If this solution is the right one, then the dark matter structure of the Milky Way halo resembles a scaled-down version of a typical galaxy cluster, but most of the low-mass Milky Way subhalos formed too late to accrete gas and become observable dwarf galaxies. We thank Andrew Gould and Jordi Miralda-Escudé for useful discussions. This work was supported in part by NASA LTSA grant NAG5-3525 and NSF grant AST-9802568. Support for A.V.K. was provided by NASA through Hubble Fellowship grant HF-01121.01-99A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
no-problem/0002/astro-ph0002246.html
ar5iv
text
# 𝐼⁢𝑍 photometry of L dwarfs and the implications for brown dwarf surveys ## 1 Introduction Kirkpatrick et al. (1999) have recently identified and classified the first large sample of L dwarfs (objects cooler than M-dwarfs) from the 2-micron all sky survey (2MASS). The optical spectra of these objects are characterised by the disappearance of the TiO and VO bands which dominate late M dwarfs, and their replacement by metallic Hydrides and neutral alkali metals. This would be expected to affect the optical colours one observes for such objects. This is important as many surveys for low mass objects (specifically brown dwarfs) in clusters have relied on colours such as $`VI`$ (e.g. Stauffer, Hamilton & Probst 1994), $`RI`$ (e.g. Jameson & Skillen 1989, Hambly et al. 1999), and $`IZ`$ (e.g Pinfield et al. 1997, Cossburn et al. 1997, Zapatero Osorio et al. 1999). The $`IZ`$ colour has been particularly favoured of late, as it has been found to be an excellent method of picking late cluster M dwarfs from the field. The lowest mass object so far detected in the Pleiades using this technique is the $``$ L1V dwarf Roque 25 which was recently identified by Martin et al. (1998). It is therefore useful to see if the $`IZ`$ colour would remain useful as a method of picking out later $`L`$ dwarfs in cluster fields. For example at the distance of the Hyades an L3V dwarf would have $`I19`$ based on the pseudo-photometry (derived from flux calibrated spectra) presented by Kirkpatrick et al. (1999). This is easily obtained with 2-m class telescopes, and therefore an optical survey at $`I`$ and $`Z`$, at which wavelengths much larger fields of view are generally available than the near infrared $`JHK`$ bands, would appear to be an excellent way of finding such objects. In order to address this question of applicability to cluster surveys, as well as the more general question of the use of $`IZ`$ as a temperature indicator we have carried out $`I_{\mathrm{Harris}}`$ (hereafter $`I_H`$) and $`Z_{RGO}`$ photometry of a small number of L dwarfs ranging in spectral type from L0.5V to L6V. This paper presents the results of that photometry, and discusses the results in the context set out above. ## 2 Observations Our observations were obtained using the 1.0-m Jacobus Kapteyn Telescope (JKT), La Palma on the night of 1999 December 19. The photometric conditions were excellent, with relative humidity always below 5% (giving good stability, especially in the $`Z`$ band) and no cirrus or other cloud. The lunar phase was near full, however this does not affect the $`I`$ and $`Z`$ bands as much as the bluer optical bands. Through the course of the night the seeing was reasonable, slowly varying between 1.1 and 1.3 arcsec FWHM. This was sufficient to adequately resolve the two of our program objects (2MASSW J0147334+345411 and 2MASSs J0850359+105716) that are close ($`2`$ arcsec) binaries. Each object in the programme was observed three times through each filter, with an integration time of 600 seconds per observation (except for the faintest object, 2MASSs J0850359+105716, where the integrations were each 1200 seconds). The standard field PG0918 (Landolt 1992) was observed between each object (i.e. roughly once per 90 minutes) throughout the night. The CCD employed was a SITE 2048x2048 pixel array, and the filters used were $`I_H`$ and $`Z_{RGO}`$. These filters are typically the ones used for the majority of $`IZ`$ surveys of clusters so far carried out. 
They have been calibrated using a TEK 1024×1024 CCD for a sample of Landolt (1992) standards and M dwarfs by Cossburn et al. (2000). A comparison of the measured quantum efficiency curves of the two CCDs shows that they are identical to within $`2`$% over the range 6000–10000 Å. The standards and M dwarfs from Cossburn et al. (2000) are therefore directly applicable to the newer SITE detector. It is important to note that the Cossburn et al. (2000) calibration is based on an assigned $`I-Z`$ colour of 0.0 for an unreddened A0 star, and that this is different to the Gunn $`z`$ and Sloan $`z^{\prime }`$ systems. Data reduction was carried out in the usual manner, using a combination of twilight fields and the median of the programme frames in the appropriate bands to flat field and defringe the data. The results of our photometry are presented in Table 1. Errors were estimated from the dispersion of the measurements of each object. One object, 2MASSW J0918382+213406, has a very bright star roughly 1 arcminute to the SW, making background estimation difficult due to scattered light. This is reflected in the greater photometric errors for this object compared to the fainter 2MASSW J0913032+184150. ## 3 Discussion In Figure 1 we plot $`I_H-Z_{RGO}`$ (hereafter $`I-Z`$) versus spectral type (Kirkpatrick et al. 1999) for our sample (circles). Also plotted as crosses are the M-dwarf data of Cossburn et al. (2000) and their observation of DENIS-P J1228.2-1547, which Kirkpatrick et al. (1999) classify as L5V. Based purely on their M-dwarf sample, Cossburn et al. (2000) made the reasonable claim that $`I-Z`$ was a good temperature indicator down to $`2000`$ K (M9). However, from the figure we see that between $`\sim `$L1V and $`\sim `$L5V the $`I-Z`$ colour of the L dwarfs is bluer than that of the late M dwarfs, and in fact overlaps with the mid-M dwarfs. $`I-Z`$ is therefore not a good temperature indicator in the range $`0.5`$–$`1`$. The reason for the relatively blue $`I-Z`$ colour of the early–mid L dwarfs can be understood by examining the spectra of Kirkpatrick et al. (1999). A comparison of the L0V spectra with (for example) the L3V spectra shows that the TiO and VO band strengths are much reduced in the L3V objects. These opacity sources are especially strong between 7800–8000 Å (VO) and 8400–8600 Å (TiO), i.e. in the region where the $`I`$ band filter transmission is greatest. Overlaying the spectra normalized at the pseudo-continuum point at $`8250`$ Å shows that the regions longward of $`8600`$ Å (where the $`Z`$ band transmission is greatest) overlap well, but that there is a significant flux excess for the L3V object shortward of this wavelength. Therefore such objects will appear bluer than earlier objects in $`I-Z`$. For objects later than $`\sim `$L5V, $`I-Z`$ again becomes a reasonable temperature indicator. Examination of the spectra shows that this is simply due to the extremely cool temperature giving a very steep spectral slope through $`I`$ and $`Z`$, which ‘overwhelms’ the effect of the lack of opacity in the $`I`$ band. As stated previously, $`I-Z`$ has recently become the favoured colour for cluster brown dwarf searches, often with considerable success (e.g. Cossburn et al. 1997, Zapatero Osorio et al. 1999). However, from Figure 1 it appears that if we wish to find cluster L dwarfs (which compared with field objects have the advantage of known distance, age and metallicity, making mass derivations via a comparison with isochrones feasible), the $`I-Z`$ colour would not be appropriate.
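The blueward shift just described can be reproduced with simple synthetic photometry: convolve a flux-calibrated spectrum with the two passbands and form the colour relative to the A0 zero point. The following sketch illustrates the arithmetic only; the Gaussian passbands and power-law spectra are toy stand-ins, not the real $`I_H`$ and $`Z_{RGO}`$ curves or the Kirkpatrick et al. (1999) spectra.

```python
import numpy as np

def synthetic_colour(wave, flux, centre_i=8100.0, centre_z=9500.0, width=500.0):
    """Toy I-Z colour: mean flux through two Gaussian passbands
    (stand-ins for the real I_Harris and Z_RGO transmission curves)."""
    t_i = np.exp(-0.5 * ((wave - centre_i) / width) ** 2)
    t_z = np.exp(-0.5 * ((wave - centre_z) / width) ** 2)
    f_i = np.trapz(flux * t_i, wave) / np.trapz(t_i, wave)
    f_z = np.trapz(flux * t_z, wave) / np.trapz(t_z, wave)
    return -2.5 * np.log10(f_i / f_z)

wave = np.linspace(6500.0, 10500.0, 4000)      # wavelength grid, Angstrom
a0 = np.ones_like(wave)                        # flat stand-in for an A0 star
cool = (wave / 8000.0) ** 6                    # steep red toy continuum
dip = np.where((wave > 7800.0) & (wave < 8600.0), 0.5, 1.0)
late_m = cool * dip                            # with VO/TiO-like absorption

zp = synthetic_colour(wave, a0)                # enforces I-Z = 0.0 for A0
print("banded (late-M-like) I-Z :", synthetic_colour(wave, late_m) - zp)
print("band-free (L-like)  I-Z :", synthetic_colour(wave, cool) - zp)
# Removing the 7800-8600 A opacity raises the I-band flux, so the
# band-free spectrum comes out bluer in I-Z, as seen for L1V-L5V dwarfs.
```

The same machinery, fed with the actual filter curves and flux-calibrated spectra, would reproduce the pseudo-photometry referred to in Sect. 1.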
To confirm this conclusion, in Figure 2 we plot the combined $`I`$, $`I-Z`$ diagram for the four fields containing our L dwarfs. It is apparent that only the earliest and the latest L dwarf would be identified from this diagram as a potentially interesting object, the middle two objects overlapping with the background objects (which may be distant M dwarfs, even more distant M giants, or extragalactic objects) as would be expected from their bluer colours. It is therefore apparent that $`I`$, $`Z`$ searches for L dwarfs in clusters (and the field) will only be sensitive to objects earlier than $`\sim `$L1V and later than $`\sim `$L5V. Using the objects listed in Kirkpatrick et al. (1999) which have measured parallaxes to define an absolute $`I`$ magnitude versus spectral type relation for the L dwarfs, this indicates that $`I-Z`$ will not be a useful colour in the absolute magnitude range $`M_I\sim 15`$–$`17`$. Assuming a Pleiades age of $`120`$ Myr and using an extension (Baraffe, priv. comm.) of the Lyon Group absolute magnitude–mass relationship presented by Baraffe et al. (1998), this implies that for that cluster objects with masses lower than $`0.04M_{\odot }`$ will be difficult to pick out with $`I-Z`$ photometry. This is consistent with the lowest mass (spectroscopically confirmed) object so far found using that technique (Roque 25), which has a spectral type of $`\sim `$L1V ($`M\sim 0.04M_{\odot }`$) (Martin et al. 1998). For older clusters such as the Hyades and Praesepe (age $`1`$ Gyr), objects in the range $`0.075`$–$`0.06M_{\odot }`$ will be difficult to detect using the $`I-Z`$ colour. This is consistent with the lowest mass objects detected in Praesepe with this technique having $`I\sim 21.5`$ (Pinfield et al. 1997, Magazzu et al. 1998), corresponding to $`M_I\sim 15`$. ## 4 Conclusions We have presented observations that show that the $`I-Z`$ colour is not a good temperature indicator for L dwarfs between L1V and L5V, these objects having $`I-Z`$ colours which overlap with those of mid–late M dwarfs. We attribute this to the decreased blanketing of the $`I`$ band flux in L dwarfs due to the absence of strong TiO and VO bands in their spectra. This imposes limits on the use of $`I-Z`$ as an indicator of very cool objects in cluster brown dwarf searches at around $`0.04M_{\odot }`$ for the Pleiades and $`0.075M_{\odot }`$ for Praesepe and the Hyades. For objects of lower mass than this, near-infrared ($`JHK`$) surveys should be considered. ## Acknowledgements Data reduction for this paper was carried out on the Liverpool John Moores STARLINK node. The JKT is operated by the ING on behalf of the UK Particle Physics and Astronomy Research Council (PPARC) at the ORM Observatory, La Palma. We are pleased to thank Rachel Curran and the technical staff of the ING for their assistance at the telescope. We gladly acknowledge the postscript genius of Andrew Newsam for the addition of error bars to Fig. 2. LH acknowledges a PPARC research studentship.
no-problem/0002/cond-mat0002044.html
ar5iv
text
# Band-theoretical prediction of magnetic anisotropy in uranium monochalcogenides \[ ## Abstract Magnetic anisotropy of the uranium monochalcogenides US, USe and UTe is studied by means of fully-relativistic spin-polarized band structure calculations within the local spin-density approximation. It is found that the size of the magnetic anisotropy is fairly large ($`\sim `$10 meV per formula unit), which is comparable with experiment. This strong anisotropy is discussed in view of a pseudo-gap formation, whose crucial ingredients are the exchange splitting of the U $`5f`$ states and their hybridization with the chalcogen $`p`$ states ($`f`$–$`p`$ hybridization). An anomalous trend in the anisotropy is found in the series (US$`>`$USe$`<`$UTe) and interpreted in terms of competition between localization of the U $`5f`$ states and the $`f`$–$`p`$ hybridization. It is the spin-orbit interaction on the chalcogen $`p`$ states that plays an essential role in enlarging the strength of the $`f`$–$`p`$ hybridization in UTe, leading to the anomalous systematic trend in the magnetic anisotropy. \] Magnetic moments in solids originate in the spin and orbital components of electrons. Since the electronic states responsible for the magnetism are normally localized in a particular atomic region, the moments can be regarded as site-selective quantities. Experimental techniques such as x-ray magnetic circular dichroism (XMCD), combined with the so-called spin and orbital sum rules, provide such separable information on the spin and orbital magnetic moments of ferro- and ferri-magnets. X-ray magnetic scattering can also give us similar information. Usually, $`5f`$ electrons play the major role in the magnetism of uranium compounds. Since the spin-orbit interaction (SOI) of the $`5f`$ electrons is relatively large, the size of the $`5f`$ orbital moment is often expected to be greater than that of its spin counterpart. Unlike the $`4f`$ orbitals in rare-earth element systems, the $`5f`$ states are more or less extended and may be affected by environmental effects, hybridization and the crystal field. Thus, the magnetic moment in $`5f`$ systems must be strongly material-dependent. Magnetic anisotropy is another fundamental quantity in magnetism, which is often even more important for applications. However, the magnetic anisotropy energy is usually too small to be evaluated reliably from first principles, and furthermore its microscopic origins are not yet clearly understood. It is, therefore, quite interesting to study such issues in magnetism by using a state-of-the-art band-theoretical technique. The uranium monochalcogenides US, USe and UTe have the NaCl-type cubic crystal structure and show ferromagnetic order at the Curie temperatures 177 K, 160 K and 104 K, respectively. It is well known that the monochalcogenides show strong magnetic anisotropy, where the \[111\] (\[001\]) direction is the easy (hard) axis. Interestingly, the saturation magnetic moment depends on the magnetization axis: the largest moment is found along the easy axis, the smallest along the hard axis. The total magnetic moment per uranium atom increases from sulfide through telluride with increasing lattice constant. As far as the sulfide is concerned, the $`5f`$ electrons are considered to be itinerant, as deduced from photoemission and other experiments. In the present study, the mechanism of the magnetic anisotropy in the uranium monochalcogenides is investigated by first-principles calculations and an anomalous trend in the size of the anisotropy in the series is predicted.
Our method is based on the local spin-density approximation (LSDA) to density functional theory. The one-electron Kohn-Sham equations are solved self-consistently by using an iterative scheme of the full-potential linear augmented plane wave method in a scalar-relativistic fashion. We include SOI in a second-variational step at every self-consistent-field iteration. The improved tetrahedron method proposed by Blöchl is used for the Brillouin-zone integration. More details about our methods and calculated results are published elsewhere. Figure 1 shows the calculated fully-relativistic spin-polarized band structure of ferromagnetic US with the \[111\] magnetization. Shallow-core U $`6p`$ states form $`j=1/2`$ and $`3/2`$ bands split by the large SOI. S $`3s`$ bands are located just above the U $`6p_{3/2}`$ bands and have relatively large dispersion. S $`3p`$ bands are situated from 7 to 3 eV below the Fermi energy ($`\epsilon _\mathrm{F}`$). Dispersive bands appearing just below $`\epsilon _\mathrm{F}`$ are made mostly of U $`d`$ states. Relatively narrow U $`5f`$ bands are pinned around $`\epsilon _\mathrm{F}`$. It is found that the largest contribution to the magnetic moments comes from the U $`5f`$ states, as expected. Calculated $`5f`$ moments of US are listed in Table I for the three magnetization axes \[001\], \[110\] and \[111\]. The spin moments hardly change with the magnetization axis, while the orbital moments show a strong dependence on the axis. Consequently, the total moments depend strongly on the magnetization axis, in qualitatively good agreement with experiment. Compared with the experimental data, the spin moment is slightly overestimated while the orbital moment is underestimated. Origins of the discrepancy in the magnetic moments have already been discussed in the previous Hartree-Fock-type study. The total-energy differences show that the \[111\] direction is the lowest in energy while \[001\] is the highest. The difference of the total energy between the easy and hard axes is nothing but the magnetic anisotropy energy. The size of the magnetic anisotropy is comparable with experimental data and is about $`10^3`$ times larger than that of $`3d`$ transition metals. Let us discuss the mechanism of the magnetic anisotropy and its relation to the magnetic moments. Because of the large SOI of the $`5f`$ electrons, the $`j=5/2`$ and $`7/2`$ states are roughly well separated and the occupied states are composed mostly of the $`j=5/2`$ states. The $`j_z`$-projected density of states (DOS) of the $`j=5/2`$ states is plotted in Fig. 2. One can easily note that a clear pseudo-gap is formed at $`\epsilon _\mathrm{F}`$ for the \[111\] magnetization, while it becomes much less apparent for the \[001\] direction. The relative stability of the \[111\] magnetization must come from the existence of this pseudo-gap. In the case of \[111\], the plus and minus $`j_z`$ components are well separated. This is the exchange splitting of the bands, and the pseudo-gap is considered to be a sort of exchange gap. Only a small amount of mixing can be seen in the $`j_z=\pm 3/2`$ states. On the other hand, the pseudo-gap is almost destroyed in the case of \[001\] because of the larger mixing in the $`j_z=\pm 3/2`$ and $`j_z=\pm 1/2`$ states. By counting the number of occupied electrons in each partial DOS, one can get the occupation of each $`j_z`$ state. For \[111\] the $`\pm j_z`$ states are well exchange-polarized, but for \[001\] the occupations of the $`j_z=\pm 1/2`$ bands are almost equal due to the mixing. In order to get a more intuitive insight into the occupations, a charge-density plot of the $`5f`$ electrons around the U atom is depicted in Fig. 3.
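Before turning to the charge densities of Fig. 3, note that the occupation numbers just quoted are obtained by integrating each $`j_z`$-projected partial DOS up to the Fermi energy. A minimal sketch of that bookkeeping, with made-up Gaussian peaks standing in for the calculated partial DOS:

```python
import numpy as np

energy = np.linspace(-2.0, 2.0, 4001)     # eV, measured from E_F = 0

def dos_peak(e0, width=0.15):
    """Toy partial-DOS peak (unit weight) centred at e0."""
    return np.exp(-0.5 * ((energy - e0) / width) ** 2) / (width * np.sqrt(2 * np.pi))

def occupation(pdos, e_fermi=0.0):
    """Electron count in one j_z channel: integral of its partial DOS up to E_F."""
    mask = energy <= e_fermi
    return np.trapz(pdos[mask], energy[mask])

# [111]-like case: exchange-split channels with a pseudo-gap at E_F
print("occupied channel:", occupation(dos_peak(-0.5)))   # ~1, filled
print("empty channel   :", occupation(dos_peak(+0.5)))   # ~0, empty
# [001]-like case: f-p mixing puts weight of each channel on both sides
# of E_F, so the occupations approach each other and the pseudo-gap is lost
mixed = 0.5 * (dos_peak(-0.5) + dos_peak(+0.5))
print("mixed channel   :", occupation(mixed))            # ~0.5
```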
In the \[111\] magnetization, the $`5f`$ electrons tend to point toward the neighboring U atoms; a hexagonal brim stretching out in the (111) U plane and a triangular bell around the \[111\] direction are clearly formed. As a result, the $`5f`$ electrons have less weight along the neighboring-S directions, so that they can reduce the hybridization with the S $`p`$ states and gain exchange splitting. The small mixing found in $`j_z=\pm 3/2`$ can be understood from the fact that the $`j_z=\pm 3/2`$ orbitals extend toward the S atoms in polar angle. In the \[001\] magnetization, on the other hand, the $`5f`$ electrons again try to point along the U–U bonds, i.e. the twelve ⟨110⟩ directions. In the $`xy`$ plane a gentle depression is formed along the nearest-neighbor S directions, because the $`j_z=\pm 5/2`$ states, which spread in the $`xy`$ plane, can avoid mixing with the S $`p`$ states by using their azimuthal degrees of freedom. However, no depression is seen in the \[$`00\pm 1`$\] directions. The $`j_z=\pm 1/2`$ states extending along the $`z`$ direction have no way to avoid the mixing with the S $`p`$ states because of the lack of azimuthal degrees of freedom in their $`m=0`$ component. This hybridization of the $`j_z=\pm 1/2`$ states with the neighboring S $`p`$ orbitals destroys the pseudo-gap in the corresponding partial band and makes the \[001\] magnetization unfavorable. This is the most important mechanism of the magnetic anisotropy found in the present study, and it leads to a very interesting variation in the uranium monochalcogenide series, as we shall discuss below. The variation of the calculated spin and orbital magnetic moments for USe and UTe, as well as for US, is shown in Fig. 4. The spin and orbital moments increase as one goes from sulfide through telluride. This increase of the moments can be attributed to the narrowing of the U $`5f`$ bands due to the increase of the lattice constant. Based on this picture of the $`5f`$-electron nature, one might expect a simple trend in the magnetic anisotropy. From sulfide to telluride, the lattice constant increases and the $`5f`$ band width is reduced. Accordingly the magnetic moments are enhanced, approaching the atomic limit. The magnetic anisotropy might therefore show a monotonic decrease from sulfide through telluride with increasing free-atom nature. But the reality is not so simple: the calculated magnetic anisotropy energy, plotted in Fig. 5, shows an anomalous behavior, a decrease followed by an increase, with a minimum appearing at USe. Recalling that the strong magnetic anisotropy in the present compounds originates in the hybridization between the U $`5f`$ and chalcogen $`p`$ states ($`f`$–$`p`$ hybridization), differences in the chalcogen $`p`$ states may provide a clue to understanding this anomalous feature. Hybridization depends upon the spatial extension of the relevant orbitals via transfer integrals. Surprisingly, the tails of the chalcogen $`p`$ orbitals are almost the same if we consider the nearest-neighbor distance between the U and chalcogen atoms. The Te $`5p_{3/2}`$ orbital has the longest tail of the chalcogens, but the difference is not so large. In addition to the orbital extension, the orbital energy is another factor that determines the hybridization. Generally, the orbital-energy difference between the U $`5f`$ and chalcogen $`p`$ states becomes smaller when going from sulfur through tellurium. Because of the large SOI, the Te $`5p_{3/2}`$ state rises and its orbital-energy difference from the U $`5f`$ decreases substantially.
Therefore, the $`f`$–$`p`$ hybridization becomes very strong in the case of the telluride. Basically, from sulfide through telluride, the U $`5f`$ electrons tend to be more localized due to the enlargement of the lattice constant. But the $`f`$–$`p`$ hybridization, which makes the \[001\] magnetization unfavorable, increases rapidly from selenide to telluride, resulting in the large magnetic anisotropy of UTe. It should be noted that experimental data on the magnetic anisotropy have not been available for USe and UTe. Experimental efforts to examine our prediction are strongly desired. In conclusion, we have carried out fully-relativistic LSDA calculations for the uranium monochalcogenides US, USe and UTe. The magnetic anisotropy can be well described by the LSDA calculations. The pseudo-gap formation stabilizes the \[111\] magnetization. We have emphasized the important role of the $`j_z=\pm 1/2`$ states, which significantly disfavor the \[001\] magnetization. The calculated magnetic moments show a monotonic increase in the series with increasing lattice constant. On the other hand, the magnetic anisotropy shows an anomalous systematic trend. This can be understood by considering the hybridization of the U $`5f`$ states with the chalcogen $`p`$ states, especially the $`p_{3/2}`$ components. In any case, SOI not only on the U sites but also on the chalcogen sites is essential for the interesting magnetic behavior of the uranium chalcogenides. We thank Takeo Jo for invaluable discussions. This work was supported in part by Japan Science and Technology Corporation.
no-problem/0002/physics0002004.html
ar5iv
text
# Condensation of microturbulence-generated shear flows into global modes ## Numerical results. — We discuss the results of turbulence simulations of the three-dimensional electrostatic drift Braginskii equations with isothermal electrons (a subset of the equations of Ref. ) for two different cases: (a) the predominant instability is the resistive ballooning mode with the nondimensional parameters $`\alpha _d=0.2`$, $`ϵ_n=0.08`$, $`q=5`$, $`\tau =1`$, $`\eta _i=1`$, $`\widehat{s}=1`$; (b) there is a significant contribution from ITG modes with $`\alpha _d=0.4`$, $`\eta _i=3`$ and the other parameters as in (a). The radial domain width in terms of the resistive ballooning scale length, $`L_0`$, was $`24L_0`$ in (a) and $`48L_0`$ in (b); the width $`L_\theta `$ perpendicular to $`𝐫`$ and $`𝐁`$ was $`24L_0`$ \[only for (a)\], $`192L_0`$, $`384L_0`$, and $`768L_0`$ (the corresponding tokamak minor radius is $`a=L_\theta q/(2\pi )`$). For a definition of these parameters and units see Refs. The parameters of the largest domain are consistent with the physical parameters $`R=3`$ m, $`a=1.5`$ m, $`L_n=12`$ cm, $`q_0=3.2`$, $`n=3.5\times 10^{19}`$ m<sup>-3</sup>, $`Z_{\mathrm{eff}}=4`$, $`B_0=3.5`$ T, and for (a) $`T=100`$ eV, $`L_0=5.1`$ mm, $`\rho _s=0.58`$ mm and for (b) $`T=200`$ eV, $`L_0=3.6`$ mm, $`\rho _s=0.82`$ mm. The perpendicular grid step size was $`\mathrm{\Delta }=0.19L_0`$ (a) and $`\mathrm{\Delta }=0.38L_0`$ (b). Parallel to the magnetic field, 12 points per poloidal connection length were sufficient due to the large parallel scales of the ballooning modes. The largest runs had a grid of $`128\times 4096\times 12`$. The dependence of the average $`(0,0)`$ shear flow energy density on the domain size, $`L_\theta `$, is compared for the two cases in Fig. 1. In contrast to case (b), the shear flows in (a) are apparently not condensed into the $`(0,0)`$ mode, since its energy density decreases proportional to $`1/L_\theta \propto 1/a`$, as is expected when a given shear flow energy density is distributed equally among an increasingly dense set of modes. The $`k_\theta `$ spectrum of the flow velocity, $`v=v_\theta =\partial _r\varphi `$, for the $`L_\theta =768L_0`$ runs \[for case (a) see Fig. 2\] exhibits a rise at low $`k_\theta `$ associated with the shear flows, distinct from the microturbulence fluctuations at $`k_\theta \sim 1`$. The square amplitude of the $`m=0`$ mode in cases (a) and (b) is $`0.3`$ and $`7`$ times, respectively, that of the remaining ($`m\ne 0`$) shear flows, suggesting strong condensation for (b). In both cases, the typical poloidal scale length of the $`m\ne 0`$ shear flows is roughly a factor $`10`$ greater than the scales of the turbulence. Failure of the computational domain to accommodate the scales of the uncondensed shear flows in case (a) results in an overestimate of the shear flow amplitude, and hence in an underestimate of the anomalous transport. The particle flux for $`L_\theta =L_r=24L_0`$ was found to be $`25\%`$ lower than for $`L_\theta =768L_0`$. ## Analytic model. — As the first ingredient of a qualitative model for the poloidal shear flow spectra, we calculate the linear dispersion relation for finitely elongated shear flows. For clarity, in the linear electrostatic vorticity equation (with the plasma parameters absorbed into the units), $$\nabla _{\perp }^2(\partial _t+\gamma )\varphi +\nabla _{\parallel }^2\varphi =0,$$ (1) we neglect temperature fluctuations, parallel ion velocity, drift effects, curvature and magnetic fluctuations.
These effects can lead to a real frequency (e.g., geodesic acoustic modes) and to a coupling to parallel sound waves or Alfvén waves. The dissipative effects are the flow damping $`\gamma `$ due to ion dissipation, assumed independent of the wavenumber, and the damping due to the resistive electron response. As we will see below, for a potential condensation of the shear flows into global modes only a small region around a certain radial wavenumber, $`k_0`$, is important, which we set to one in (1) since its absolute value is not important. Because of the large poloidal wavelengths of the flows, we approximate $`\nabla _{\perp }^2\approx -k_0^2=-1`$ and obtain the dispersion relation $$\omega _{\mathrm{lin}}=-i\left(\gamma +k_{\parallel }^2\right),\qquad k_{\parallel }=\frac{m}{q(r)}-n.$$ (2) The damping by the parallel resistive electron response is weak if either $`m=n=0`$ holds, or $`r`$ is near a resonant surface defined by $`m-nq(r_{mn})=0`$. Focusing on a thin region around $`r=r_0`$ we obtain $`k_{\parallel }\approx m\alpha _0(r-r_{mn})`$, $`\alpha _0=q^{\prime }(r_0)/q(r_0)^2`$. Hence the resistive flow damping is proportional to $`m^2`$, which is the reason for the poloidal elongation of the flows, i.e., their low mode numbers. In reaction to a shear flow, the microturbulence may in turn influence the flows via the Reynolds stress or the Stringer–Winsor mechanism due to poloidal pressure asymmetries. Restricting ourselves to linear response theory, we assume a (coherent) flow amplification rate $`g(k_r)`$ depending only on the radial wavenumber $`k_r`$, because of the large poloidal correlation lengths of the shear flows. With the (incoherent) random forcing, $`f`$, representing the effect of the turbulence fluctuations, the equation for the flow amplitude takes the form of a Langevin equation, $$\partial _tv=-i\omega _{\mathrm{lin}}v+g(k_r)v+f.$$ (3) From (3) we obtain a relation between the mean square spectra of the flows and the forcing in frequency space, $$\overline{|\widehat{v}|^2}=\frac{\overline{|\widehat{f}|^2}}{\omega ^2+(i\omega _{\mathrm{lin}}-g(k_r))^2}.$$ (4) Assuming that $`\overline{|\widehat{f}|^2}`$ is independent of $`𝐤,\omega `$ (white noise), the integration of (4) over $`\omega `$ yields the relation between the mean square flow amplitude at an instant of time and the forcing, $$\overline{|v|^2}=\frac{\overline{|\widehat{f}|^2}\pi }{|i\omega _{\mathrm{lin}}-g(k_r)|}.$$ (5) The flow intensity (5) replaces the Bose distribution in the BEC case. Both functions tend to infinity when the amplification (stimulated emission) terms approach the damping (absorption) terms. As long as every mode is net-damped at a rate independent of the system size, the energy density stored in (0,0) modes must decrease inversely with the system size, since the total shear flow energy is distributed among an increasingly dense set of modes. However, analogous to the thermodynamic theory of the BEC, when the continuous flow spectrum is unable to hold the shear flow energy for a non-zero minimum net-damping rate and given random forcing, the nonlinear flow amplification term must adjust so that the remaining part of the flow energy is excited in the form of the most weakly damped modes, which are (0,0) modes. Hence, to demonstrate the possibility of condensation, it has to be shown that the flow amplitude in $`m\ne 0`$ modes stays finite when the net-damping rate of the $`m=0`$ modes tends to zero, in the limit of infinite system size or, equivalently, in the approximation of a continuous poloidal mode spectrum.
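As a side check, relation (5) can be verified directly by integrating the Langevin equation (3) with white-noise forcing. A minimal sketch (purely imaginary $`\omega _{\mathrm{lin}}`$, so the net damping rate is $`a=i\omega _{\mathrm{lin}}-g(k_r)`$; the numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_square_flow(a, sigma, dt=0.01, n_steps=500_000):
    """Euler-Maruyama integration of dv/dt = -a*v + f (Eq. 3) with net
    damping a = i*omega_lin - g(k_r) > 0 and white-noise forcing of strength sigma."""
    v, acc, n = 0.0, 0.0, 0
    for step in range(n_steps):
        v += -a * v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if step > n_steps // 10:        # discard the initial transient
            acc += v * v
            n += 1
    return acc / n

a, sigma = 0.5, 1.0
print("simulated <v^2>       :", mean_square_flow(a, sigma))
print("white-noise prediction:", sigma**2 / (2 * a))  # the analogue of Eq. (5)
# The amplitude diverges as a -> 0, mirroring the Bose-distribution analogy.
```

(The analytic value sigma^2/(2a) corresponds to pi*|f|^2/a in the text's Fourier convention.)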
It is sufficiently general to assume that $`g(k_r)`$ has a maximum at $`k_r=k_0`$ of the order of the turbulence wavenumbers and is parabolic near that maximum, $`g(k_r)=g_0-g_1(k_r-k_0)^2`$. The amplification terms will nearly cancel the damping terms only for wavenumbers near $`k_0`$, which justifies the approximation $`k_r\approx k_0`$ made in the derivation of (2). For the following analysis we shift the $`k_r`$ spectrum of the flows so that $`k_0=0`$. With $`ik_r=\partial _r`$, the operator in the denominator of (5), $$(\gamma -g_0)-g_1\partial _r^2+\left(m\alpha _0(r-r_{mn})\right)^2,$$ (6) is the quantum mechanical Hamiltonian of the harmonic oscillator. Its eigenvalue for a mode with the “quantum numbers” $`(m,r_{mn},l)`$, $`l\in \{0,1,2,\mathrm{\dots }\}`$, is $`\omega _l=\gamma -g_0+2\sqrt{g_1}\left|\alpha _0m\right|(l+1/2).`$ The sum over $`l`$ of the eigenmode contributions to (5) at fixed $`(m,r_{mn})`$ results in a logarithmic divergence, which stems from the infinitely broad random forcing spectrum and infinitely fast turbulence response. Hence, we cut off $`\omega _l`$ at an appropriate $`\omega _\mathrm{c}`$ depending on the turbulence. The sum is then approximated by an integral over $`l`$. The resulting amplitude associated with each pair $`(m,r_{mn})`$ is $$\overline{|v|^2}(m,r_{mn})=\frac{\overline{|\widehat{f}|^2}\pi }{2\sqrt{g_1}|\alpha _0m|}\mathrm{ln}\frac{\omega _\mathrm{c}}{\omega _0}$$ (7) with $`\omega _0=\gamma -g_0+\sqrt{g_1}|\alpha _0m|<\omega _\mathrm{c}`$. The density of rational surfaces is $`|\alpha _0m|`$ for given $`m`$. Approximating the sum over all $`\overline{|v|^2}(m,r_{mn})`$ contributions to (5) with $`m\ne 0`$ by an integral (which becomes exact for infinite system size), the total instantaneous energy density of the flow modes with $`m\ne 0`$, $$\overline{|v|^2}_{m\ne 0}=\int _{-m_\mathrm{c}}^{m_\mathrm{c}}\left|\alpha _0m\right|\,\overline{|v|^2}(m,r_{mn})\,dm,$$ (8) is obtained, where the integration interval is limited by the cutoff $`m_\mathrm{c}`$ defined by $`\omega _0(m=m_\mathrm{c})=\omega _\mathrm{c}`$. With the minimum net-damping rate $`\mathrm{\Omega }=\omega _0(m=0)=\gamma -g_0`$ we obtain $$\overline{|v|^2}_{m\ne 0}=\frac{\pi \overline{|\widehat{f}|^2}}{2|\alpha _0|g_1}\left[\omega _\mathrm{c}-\mathrm{\Omega }\left(1+\mathrm{ln}\frac{\omega _\mathrm{c}}{\mathrm{\Omega }}\right)\right].$$ (9) This expression converges to a finite value for $`\mathrm{\Omega }\to 0`$. On the other hand, because the energy density of the $`m=0`$ modes, which tends to infinity for $`\mathrm{\Omega }\to 0`$ \[the integral over $`k_r`$ of (5) does not exist for $`i\omega _{\mathrm{lin}}-g_0=0`$\], has to be finite, we always have $`\mathrm{\Omega }>0`$. If the turbulence saturation requires a higher flow level than (9) at $`\mathrm{\Omega }\to 0`$, the description of the system by a continuum of poloidal mode numbers breaks down, the flow energy which cannot be received by the $`m\ne 0`$ modes condenses into $`m=0`$ modes, and simultaneously $`\mathrm{\Omega }\to 0`$. In a similar manner, it can be shown that in the limit of infinite system size the $`m=0`$ condensate is completely contained in the $`n=0`$ modes. Furthermore, there is no condensation of the radial wave numbers, but the $`k_r`$ spectrum becomes arbitrarily narrow around the point of weakest net-damping for large system size. Strictly speaking, condensation is unprovable by numerical studies, due to the restriction to finite system sizes. However, the validity of the individual parts of the model can be checked in the simulations.
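One such check can already be done on the model itself: the saturation of (9) as $`\mathrm{\Omega }\to 0`$, against the unbounded growth of the $`m=n=0`$ contribution, is easily tabulated. A sketch with arbitrary illustrative parameters:

```python
import numpy as np

f2, g1, alpha0, omega_c = 1.0, 1.0, 0.1, 1.0   # arbitrary illustrative values

def energy_m_nonzero(Omega):
    """Eq. (9): total m != 0 shear-flow energy at minimum net damping Omega."""
    return np.pi * f2 / (2 * abs(alpha0) * g1) * (
        omega_c - Omega * (1.0 + np.log(omega_c / Omega)))

def energy_m_zero(Omega, kmax=50.0):
    """(0,0)-mode energy: k_r integral of Eq. (5) with net damping
    Omega + g1*k_r^2 (k_r measured from the most weakly damped wavenumber)."""
    kr = np.linspace(-kmax, kmax, 200001)
    return np.trapz(np.pi * f2 / (Omega + g1 * kr**2), kr)

for Omega in (0.3, 0.1, 0.01, 1e-4):
    print(f"Omega = {Omega:g}: m!=0 energy = {energy_m_nonzero(Omega):6.3f},"
          f"  (0,0) energy = {energy_m_zero(Omega):8.1f}")
# The m != 0 branch saturates at pi*f2*omega_c / (2|alpha0| g1), while the
# (0,0) branch grows like 1/sqrt(Omega): excess flow energy must condense.
```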
The localization of the $`m\ne 0`$ shear flows on resonant surfaces is obvious in a plot of the flow spectra versus radius (Fig. 3). The total flow amplitude associated with each $`(m,r_{mn})`$ quantum number in Eq. (7) (Fig. 4) has a much weaker slope than $`k_\theta ^{-2}\propto m^{-2}`$ for $`k_\theta ,m\to 0`$. Therefore the integral over all $`m\ne 0`$ shear flows (8) is expected to be finite in the limit of infinite system size (even if the estimate (7) for the individual amplitudes should be quantitatively wrong). Furthermore, the integral is reasonably well approximated by the corresponding sum in the finite system. Consequently, the infinite system will have approximately the same ratio of $`m=0`$ flow amplitude to $`m\ne 0`$ flow amplitude. Finally, we note that the numerical studies agree with the above analytical prediction that the flow condensate exhibits a strong peaking in $`k_r`$ for sufficiently large system size. ## Conclusions and consequences. — It has been shown numerically that in general the shear flows controlling the turbulence are not only $`(0,0)`$ modes but rather consist of a spectrum of poloidal mode numbers. The $`(m,n)\ne (0,0)`$ flows differ from drift waves or convective cells by their large poloidal \[10 times larger than the turbulence (Fig. 2)\] and parallel scale lengths, while their perpendicular scale length is similar to that of the turbulence. These shear flows are localized in the vicinity of resonant surfaces (Fig. 3). In the limit of large system size, a non-zero (0,0)-mode amplitude develops only if the shear flows undergo a condensation into these modes, analogous to Bose–Einstein condensation. Several features predicted by the analytic model have been reproduced by the numerical simulations. Due to the cancelling of damping and amplification terms, the $`(0,0)`$ flow condensate is practically undamped. This means, in quantum mechanical language, that the rate of absorption and incoherent re-emission, the “collision” rate, vanishes. Hence, far-ranging interactions or ordering effects might be mediated via the shear flow condensate (but not by the uncondensed flows, which suffer collisions and are pinned to resonant surfaces). As a consequence, in the simple system used here the $`k_r`$ spectra become arbitrarily narrow. Since the flows depend on the distribution of rational surfaces and mode numbers, to accurately model the shear flow system in numerical studies, care has to be taken not to introduce spurious resonant surfaces or modes, e.g., by parallel extension of the flux tube. Remarkably, it can be shown that increasing the flux tube length does not lead to the correct limit of large system sizes, since, e.g., for an infinitely long flux tube, condensation into $`(0,0)`$ modes cannot occur. Up to now, in flux-tube-based turbulence computations the shear flows were implicitly assumed to be global modes. With domain widths too small for the large poloidal scales of the continuous part of the flow spectrum, the flows appear to have zero poloidal and toroidal mode number. Such modes do not experience the resistive damping, which would reduce the flow amplitude in a full system. Hence, the simulations tend to overestimate the total flow amplitude, which may therefore exert an artificially strong stabilizing effect on the turbulence. To avoid an underestimate of the transport, flux tube simulations have to be checked for influences of a finite poloidal scale length of the flows. The author would like to thank Dr. D. Biskamp for valuable discussions.
no-problem/0002/nucl-ex0002014.html
ar5iv
text
# Background Studies for the Neutral Current Detector Array in the Sudbury Neutrino Observatory ## 1 Introduction Using heavy water as its target, the Sudbury Neutrino Observatory (SNO) is the first solar neutrino detector capable of determining both the flux of electron neutrinos and the total flux of all active neutrinos from the Sun. Photomultiplier tubes are used to detect the Čerenkov light associated with charged-current (CC) neutrino interactions. The total flux of solar neutrinos is determined from the neutral-current (NC) dissociation of the deuteron and the subsequent detection of the free neutron. The SNO collaboration is pursuing two completely independent methods of measuring the NC signal. One involves the addition of a salt to absorb the free neutron on <sup>35</sup>Cl. The second method is the deployment of an array of discrete <sup>3</sup>He-filled proportional counters which detect the neutrons from the NC interaction via the <sup>3</sup>He(n,p)<sup>3</sup>H reaction. Photodisintegration of the deuteron, and alphas from the decay chains of <sup>238</sup>U and <sup>232</sup>Th in the proportional counters, are the most significant backgrounds to the <sup>3</sup>He method. Radioassay techniques and underground studies provide first results on the backgrounds in the Neutral-Current Detectors (NCD). An in-situ background test experiment has been designed to measure the photodisintegration background from the NCD array prior to its installation in SNO. ## 2 Neutral-Current Detection via Neutron Capture on <sup>3</sup>He Construction of an array of <sup>3</sup>He-filled proportional counters is nearing completion. With an estimated efficiency of 37%, the expected number of neutron capture events on the NCD array is about 1700 events/year at the flux predicted by standard solar models. The use of two independent methods for the detection of the NC and CC signals allows their separation in real time and will help to determine signal and background events simultaneously. ### 2.1 Proportional Counter Signals Neutrons from the NC interaction of neutrinos with the D<sub>2</sub>O thermalize and then capture via <sup>3</sup>He(n,p)<sup>3</sup>H in the proportional counters. They produce a 573 keV proton and a 191 keV triton that leave ionization tracks in the proportional counters. A number of different backgrounds contribute to the overall event rate. About 1000–10000 alpha particles per day are expected from the decay chains of <sup>232</sup>Th and <sup>238</sup>U. They enter the detection volume from the wall of the proportional counters and create a continuous background that underlies the <sup>3</sup>He(n,p)<sup>3</sup>H peak. Pulse shape analysis of the digitized proportional counter signals can be used to distinguish between neutron capture, alphas, and other ionization events. The <sup>3</sup>He gas contains tritium, which is of concern if the concentration is high enough to lead to random coincidences and to pileup of tritium decay pulses. <sup>3</sup>H decays deposit an average of 6 keV/event in the proportional counters. Low temperature purification techniques are used to reduce <sup>3</sup>H in the <sup>3</sup>He fill gas. The sensitivity of the <sup>3</sup>He counters and their stringent background criteria make it essential to minimize all spurious pulses, including signals induced by high voltage. The phenomenon of microscopic surface discharges and techniques for their reduction have been described elsewhere.
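Two of the numbers in this section are tied together by simple arithmetic: the full-energy capture peak is the summed proton and triton energy, and the quoted event rate follows from the NC neutron production rate of about 4600 n/yr (see Sec. 4 below) and the 37% capture efficiency. A trivial check:

```python
e_proton, e_triton = 573, 191          # keV, from 3He(n,p)3H
print("full-energy capture peak:", e_proton + e_triton, "keV")   # 764 keV

nc_neutrons = 4600                     # n/yr produced in the D2O (Sec. 4)
efficiency = 0.37                      # NCD array neutron-capture efficiency
print("expected NCD capture events:", round(nc_neutrons * efficiency), "/yr")  # ~1700
```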
Gamma rays with an energy greater than 2.2 MeV contribute to the photodisintegration background. ## 3 Photodisintegration Background Gamma rays above the deuteron binding energy can break apart the deuteron in the heavy water and produce neutrons that are indistinguishable from the NC signal. In the SNO detector the main sources of the photodisintegration background are <sup>238</sup>U and <sup>232</sup>Th in the water, gammas from the photomultipliers, and ($`\alpha `$, p$`\gamma `$) and ($`\alpha `$, n$`\gamma `$) reactions on the PMTs and their support structure. <sup>238</sup>U, <sup>232</sup>Th, and cosmogenically activated <sup>56</sup>Co in the nickel bodies of the Neutral-Current Detectors add to the photodisintegration background. Čerenkov light from associated gammas and radioassay techniques can be used to estimate these backgrounds. Results from radioassays indicate that the neutron background from the NCD array will contribute about 130 neutrons/year in the main detector volume, i.e. about 2.8% of the expected neutrons from the NC interaction. ### 3.1 In-Situ Background Measurement The photodisintegration background from the Neutral Current Detector array can be assessed with the Construction Hardware In-Situ Monitoring Experiment (CHIME) prior to the array’s installation in SNO. Observation of Čerenkov light from the CHIME background source will allow a quantitative estimate of the NCD-originated photodisintegration background. The CHIME source consists of seven close-packed Neutral-Current Detectors. The construction materials and procedures are identical to those in the NCD array. The CHIME is negatively buoyant and will be deployed along the central vertical axis of the SNO detector using a specifically designed winding mechanism. The expected deployment date for the CHIME background source is summer 2000. ## 4 Neutron Background to the NC Signal The backgrounds to the NC signal rate of 4600 n/yr expected in standard solar models can be summarized in terms of the total number of neutrons produced in the detector. The photodisintegration background contributes an estimated 1600 n/yr to the internal D<sub>2</sub>O backgrounds. The neutron background due to spallation reactions of cosmic-ray muons (about 6000 n/yr) can be completely discriminated by the Čerenkov signal associated with each muon. Reactor and terrestrial $`\overline{\nu _e}`$ do not contribute more than 50 n/yr. The NCD array itself produces less than 230 n/yr in the D<sub>2</sub>O. The total detector-internal background to the NC signal is estimated to be less than 1900 n/yr. Determination of the photodisintegration and the additional external backgrounds is therefore essential for an accurate measurement of the NC interaction rate.
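The budget of this section can be collected in a few lines; the rates are the ones quoted above, with muon spallation listed separately because it is removed event-by-event by its Čerenkov tag:

```python
nc_signal = 4600                          # n/yr, standard-solar-model NC rate

internal = {                              # n/yr produced in the D2O (upper limits)
    "photodisintegration (internal D2O)":  1600,
    "NCD-array originated":                 230,
    "reactor + terrestrial anti-neutrinos":  50,
}
vetoed = {"cosmic-ray muon spallation": 6000}   # tagged by Cerenkov light

total = sum(internal.values())            # 1880 n/yr, consistent with < 1900 n/yr
print(f"detector-internal background: {total} n/yr "
      f"({100 * total / nc_signal:.0f}% of the NC signal)")
```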
no-problem/0002/hep-ph0002268.html
ar5iv
text
# Transversity Distribution and Polarized Fragmentation Function from Semi-inclusive Pion Electroproduction ## 1 Introduction Deep inelastic charged lepton scattering off a transversely polarized nucleon target is an important tool to further study the internal spin structure of the nucleon. While a lot of experimental data on the longitudinal spin structure of the nucleon has been collected over the last 10 years, the study of its transverse spin structure is just about to begin. Only a very limited number of preliminary experimental results is available up to now: * measurements of the nucleon structure function $`g_2(x)`$ at CERN \[smcg2\] and SLAC \[e154g2–bosted99\], * a first measurement of a single target-spin asymmetry for pions produced in lepton scattering off longitudinally polarized protons at HERMES \[avakian\], * a first study of hadron azimuthal distributions in DIS of leptons off a transversely polarized target at SMC \[bravar99\]. A quark of a given flavour is characterized by three twist-2 parton distributions. The quark number density distribution $`q(x,Q^2)`$ has been studied now for decades and is well known for all flavors. The helicity distribution $`\mathrm{\Delta }q(x,Q^2)`$ was only recently measured more accurately for $`u`$ and $`d`$ quarks \[HERMESdeltaq\] and is still essentially unknown for $`s`$ quarks. The third parton distribution, known generally as the ‘transversity distribution’ and denoted $`\delta q(x,Q^2)`$, characterizes the distribution of the quark’s transverse spin in a transversely polarized nucleon. For non-relativistic quarks, where boosts and rotations commute, $`\delta q(x)=\mathrm{\Delta }q(x)`$. Since quarks in the nucleon are known to be relativistic, the difference between the two distributions will provide further information on their relativistic nature. The transversity distribution does not mix with gluons under QCD evolution, i.e. even if the transversity and helicity distributions coincide at some scale, they will be different at $`Q^2`$ values higher than that. The chiral-odd nature of transversity distributions makes their experimental determination difficult; up to now no experimental information on $`\delta q(x,Q^2)`$ is available. It cannot be accessed in inclusive deep inelastic scattering (DIS) due to chirality conservation; it decouples from all hard processes that involve only one quark distribution (or fragmentation) function (see e.g. \[jaffe97b\]). This is in contrast to the case of the chiral-even number density and helicity distribution functions, which are directly accessible in inclusive lepton DIS. In principle, transversity distributions can be extracted from cross section asymmetries in polarized processes involving a transversely polarized nucleon. The corresponding asymmetry can be expressed through a flavor sum involving products of two chiral-odd transversity distributions in the case of hadron-hadron scattering, while in the case of semi-inclusive DIS (SIDIS) a chiral-odd quark distribution function always appears in combination with a chiral-odd quark fragmentation function. These fragmentation functions can in principle be measured in $`e^+e^-`$ annihilation. The transversity distribution was first discussed by Ralston and Soper \[ralston\] in doubly transverse polarized Drell-Yan scattering. Its measurement is one of the main goals of the spin program at RHIC \[rhic\]. An evaluation of the corresponding asymmetry $`A_{TT}`$ was carried out \[omartin\] by assuming the saturation of Soffer’s inequality \[soffer95\] for the transversity distribution.
The maximum possible asymmetry at RHIC energies was estimated to be $`A_{TT}=1÷2\%`$. At smaller energies, e.g. for a possible fixed-target hadron-hadron spin experiment HERA-$`\vec{N}`$ \[heran\] ($`\sqrt{s}\sim 40`$ GeV), the asymmetry is expected to be higher. In semi-inclusive deep inelastic lepton scattering off transversely polarized nucleons there exist several methods to access transversity distributions; all of them can in principle be realized at HERMES. One of them, namely twist-3 pion production \[jaffe-ji93\], uses longitudinally polarized leptons, and a double-spin asymmetry is measured. The other methods do not require a polarized beam; they rely on polarimetry of the scattered transversely polarized quark: * measurement of the transverse polarization of $`\mathrm{\Lambda }`$’s in the current fragmentation region \[baldracchini82–jaffe96\], * observation of a correlation between the transverse spin vector of the target nucleon and the normal to the two-meson plane \[jaffe97b; jaffe97a\], * observation of the Collins effect in quark fragmentation through the measurement of pion single target-spin asymmetries \[collins93–mauro\]. The HERMES experiment \[Spectr\] has excellent capabilities to investigate semi-inclusive particle production. Taking the measurement of the Collins effect as an example, it will be shown in the following that HERMES will be capable of extracting both the transversity distribution and the chiral-odd fragmentation function, simultaneously and with good statistical precision. ## 2 Single Target-Spin Asymmetry in Pion Electroproduction A complete analysis of polarized SIDIS with non-zero transverse momentum effects in both the quark distribution and fragmentation functions was performed in the framework of the quark-parton model in \[kotzinian95\] and in the field theoretical framework of QCD in \[mulders96\]. An important ingredient of this analysis is the factorization property, which was proven for $`k_T`$-integrated functions and can reasonably be assumed for $`k_T`$-dependent functions \[mulders96\]. In the situation that the final state polarization is not considered, two quark fragmentation functions are involved: $`D_1^q(z,z^2k_T^2)`$ and $`H_1^{\perp q}(z,z^2k_T^2)`$. Here $`k_T`$ is the intrinsic quark transverse momentum and $`z`$ is the fraction of the quark momentum transferred to the hadron in the fragmentation process. The ‘polarized’ fragmentation function $`H_1^{\perp q}`$ allows for a correlation between the transverse polarization of the fragmenting quark and the transverse momentum of the produced hadron. It may be non-zero because time reversal invariance is not applicable in a decay process, as was first discussed by Collins \[collins93\]. Since quark transverse momenta cannot be measured directly, integrals over $`k_T`$ (with suitable weights) are defined to arrive at experimentally accessible fragmentation functions: $$z^2\int d^2k_T\,D_1^q(z,z^2k_T^2)\equiv D_1^q(z)$$ (1) is the familiar unpolarized fragmentation function, normalized by the momentum sum rule $`\sum _h\int dz\,zD_1^{q\to h}(z)=1`$. Correspondingly, the polarized fragmentation function is obtained as $$z^2\int d^2k_T\left(\frac{k_T^2}{2M_h^2}\right)H_1^{\perp q}(z,z^2k_T^2)\equiv H_1^{(1)q}(z),$$ (2) where the superscript $`(1)`$ indicates that an originally $`k_T`$-dependent function was integrated over $`k_T`$ with the weight $`\frac{k_T^2}{2M_h^2}`$. Here, $`M_h`$ is the mass of the produced hadron $`h`$.
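The content of the weighted moments (1) and (2) is easy to see with a toy $`k_T`$ profile. The sketch below assumes a Gaussian $`k_T`$ dependence (the same functional form as Eq. (6) further on) purely for illustration; the closed-form values it checks against follow from standard Gaussian integrals.

```python
import numpy as np

# Toy Gaussian k_T profile, an assumption for illustration only:
# F(z, k_T) = F0 * (R^2/(pi z^2)) * exp(-R^2 k_T^2)
z, R, M_h, F0 = 0.4, 0.8, 0.13957, 1.0         # R in GeV^-1, M_h = pion mass
k = np.linspace(0.0, 20.0, 200001)             # |k_T| grid, GeV
profile = F0 * R**2 / (np.pi * z**2) * np.exp(-(R * k) ** 2)

d2k = 2.0 * np.pi * k                          # d^2k_T = 2 pi k dk
unweighted = z**2 * np.trapz(profile * d2k, k)                        # Eq. (1)-type
weighted = z**2 * np.trapz(profile * k**2 / (2 * M_h**2) * d2k, k)    # Eq. (2)-type

print("unweighted moment:", unweighted, " (analytic: F0 =", F0, ")")
print("weighted moment  :", weighted, " (analytic:", F0 / (2 * M_h**2 * R**2), ")")
# The k_T^2/(2 M_h^2) weight converts the k_T-dependent function into the
# single number per z that the measured asymmetries are sensitive to.
```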
To facilitate access to transversity and polarized fragmentation functions from SIDIS, single-spin asymmetries may be formed through integration of the polarized cross section over $`P_h`$, the transverse momentum of the final hadron, with appropriate weights. In the particular case of an unpolarized beam and a transversely polarized target the following weighted asymmetry provides access to the quark transversity distribution via the Collins effect \[kotmul97\]: $$A_T(x,y,z)\equiv \frac{\int d\varphi ^{\ell }\int d^2P_h\,\frac{|P_h|}{zM_h}\,\mathrm{sin}(\varphi _s^{\ell }+\varphi _h^{\ell })\,\left(d\sigma ^{\uparrow }-d\sigma ^{\downarrow }\right)}{\int d\varphi ^{\ell }\int d^2P_h\,(d\sigma ^{\uparrow }+d\sigma ^{\downarrow })}.$$ (3) Here $`\uparrow `$ ($`\downarrow `$) denotes target up (down) transverse polarization. The azimuthal angles are defined in the transverse space, giving the orientation of the lepton plane ($`\varphi ^{\ell }`$) and the orientation of the hadron plane ($`\varphi _h^{\ell }=\varphi _h-\varphi ^{\ell }`$) or spin vector ($`\varphi _s^{\ell }=\varphi _s-\varphi ^{\ell }`$) with respect to the lepton plane. The angles are measured around the z-axis, which is defined by the momenta $`q`$ and $`P`$ of the virtual photon and the target nucleon, respectively. The raw asymmetry (3) can be estimated \[kotmul97\] using $$A_T(x,y,z)=P_TD_{nn}\frac{\sum _qe_q^2\,\delta q(x)\,H_1^{(1)q}(z)}{\sum _qe_q^2\,q(x)\,D_1^q(z)},$$ (4) where $`P_T`$ is the target polarization and $`D_{nn}=(1-y)/(1-y+y^2/2)`$ is the transverse spin transfer coefficient. The magnitude of the asymmetry depends on the unknown functions $`\delta q(x)`$ and $`H_1^{(1)q}(z)`$. ## 3 Transversity Distribution and Polarized Fragmentation Function No experimental data are available on any of the transversity distributions $`\delta q(x)`$, while their behaviour under QCD evolution is theoretically well established \[ArtruMek\]. An example of the leading order evolution of the proton structure functions $`g_1(x,Q^2)=\frac{1}{2}\sum _ie_i^2\mathrm{\Delta }q_i(x,Q^2)`$ and $`h_1(x,Q^2)=\frac{1}{2}\sum _ie_i^2\delta q_i(x,Q^2)`$ is shown in Fig. 1. It was assumed that $`h_1^p(x)`$ coincides with $`g_1^p(x)`$ at the scale $`Q_0^2=0.4`$ GeV<sup>2</sup>, and both functions were evolved to the scale $`Q^2=10`$ GeV<sup>2</sup>. The evolution was performed using the programs from \[kumanog1; kumanoh1\] for $`g_1`$ and $`h_1`$, respectively. The important conclusion, which was already discussed earlier (see e.g. \[scopetta\]), follows that with increasing $`Q^2`$ the two functions become more and more different at decreasing $`x`$, while at large $`x`$ the difference remains quite small. Results from two independent measurements indicate that the polarized fragmentation function $`H_1^{(1)q}(z)`$ may be non-zero: i) azimuthal correlations measured between particles produced from opposite jets in $`Z`$ decay at DELPHI \[delphi2\], and ii) the single target-spin asymmetry measured for pions produced in SIDIS of leptons off a longitudinally polarized target at HERMES \[avakian\]. The approach of \[kotmul97\] is adopted to estimate the possible value of $`H_1^{(1)q}(z)`$. Collins \[collins93\] suggested the following parameterization for the analyzing power in transversely polarized quark fragmentation: $$A_C(z,k_T)\equiv \frac{|k_T|\,H_1^{\perp q}(z,z^2k_T^2)}{M_h\,D_1^q(z,z^2k_T^2)}=\frac{M_C|k_T|}{M_C^2+k_T^2},$$ (5) with $`M_C\sim 0.3÷1.0`$ GeV being a typical hadronic mass.
Choosing a Gaussian parameterization for the quark transverse momentum dependence in the unpolarized fragmentation function, $$D_1^q(z,z^2k_T^2)=D_1^q(z)\frac{R^2}{\pi z^2}\mathrm{exp}(-R^2k_T^2),$$ (6) leads to $$H_1^{(1)q}(z)=D_1^q(z)\frac{M_C}{2M_h}\left(1-M_C^2R^2\int _0^{\mathrm{}}dx\,\frac{\mathrm{exp}(-x)}{x+M_C^2R^2}\right).$$ (7) Here $`R^2=z^2/b^2`$, and $`b^2`$ is the mean-square momentum the hadron acquires in the quark fragmentation process. In the following the parameter settings $`M_C=0.7`$ GeV and $`b^2=0.25`$ GeV<sup>2</sup> are used, because they are consistent \[kotzinian99\] with the single target-spin asymmetry measured at HERMES \[avakian\]. They are also compatible with the analysis of \[delphi2\], as can be seen by evaluating the ratio $$R(z_{min})=\frac{\int _{z_{min}}^1dz\,H_1^{\perp }(z)}{\int _{z_{min}}^1dz\,D_1(z)},$$ (8) where $`H_1^{\perp }(z)`$, in contrast to Eq. (2), is the unweighted polarized fragmentation function used in \[delphi2\]: $$z^2\int d^2k_T\,H_1^{\perp q}(z,z^2k_T^2)\equiv H_1^{\perp q}(z).$$ (9) The BKK parameterization \[bkk\] was used to estimate the integral over the unpolarized fragmentation function $`D_1(z)`$. The values obtained for the ratio, $`R(0.1)=0.048`$ and $`R(0.2)=0.070`$, are to be compared to the experimental result \[delphi2\]: $`0.063\pm 0.017`$. ## 4 Projected Statistical Accuracy and Systematics A full analysis to extract transversity and polarized fragmentation functions through (4) requires one to take into account all quark flavours contributing to the measured asymmetry. According to calculations with the HERMES Monte Carlo program HMC, the fraction of positive pions originating from the fragmentation of a struck u-quark ranges, depending on the value of $`x`$, between $`70`$ and $`90\%`$ for a proton target and is only slightly smaller for a deuteron target. Therefore, in a first analysis, the assumption of $`u`$-quark dominance in the $`\pi ^+`$ production cross-section appears to be reasonable. This is supported by the sum rule for T-odd fragmentation functions recently derived in \[schaefer\]. These authors concluded that contributions from non-leading parton fragmentation, like $`d\to \pi ^+`$, are severely suppressed for all T-odd fragmentation functions. Consequently, the assumption of $`u`$-quark dominance was used to calculate projections for the statistical accuracy in measuring the asymmetry $`A_T^{\pi ^+}(x)`$. The expected statistics for scattering at HERMES of unpolarized leptons off a transversely polarized target (proton or deuteron options are under consideration) will consist of about seven million reconstructed DIS events. The standard definition of a DIS event at HERMES is given by the following set of kinematic cuts ($`Q^2`$ and $`\nu `$ are the photon’s virtuality and laboratory energy, $`x=Q^2/2M\nu `$ is the Bjorken scaling variable, $`y=\nu /E`$ is the fractional photon energy, and $`W`$ is the c.m. energy of the photon-nucleon system; $`E=27.5`$ GeV at HERMES): $`Q^2>1`$ GeV<sup>2</sup>, $`W>2`$ GeV, $`0.02<x<0.7`$, $`y<0.85`$. An additional cut $`W^2>10`$ GeV<sup>2</sup> was introduced to improve the separation of the struck-quark fragmentation region. An average target polarization of $`P_T=75\%`$ is used for the analysis.
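The $`x`$-integral in Eq. (7) is an exponential integral, so the $`z`$-dependent factor entering the simplified asymmetries below, $`H_1^{(1)u}(z)/D_1^u(z)`$, has a closed form. A sketch evaluating it for the parameter settings just quoted (the pion mass is used for $`M_h`$; this is our reading of Eq. (7), not code from the original analysis):

```python
import numpy as np
from scipy.special import exp1          # E1(a) = int_a^inf exp(-t)/t dt

M_C, b2, M_h = 0.7, 0.25, 0.13957       # GeV, GeV^2, GeV (pion mass assumed)

def collins_ratio(z):
    """H_1^(1)(z) / D_1(z) from Eq. (7) with R^2 = z^2 / b^2, using
    int_0^inf exp(-x)/(x + a) dx = exp(a) * E1(a)."""
    a = M_C**2 * z**2 / b2
    return (M_C / (2 * M_h)) * (1.0 - a * np.exp(a) * exp1(a))

for z in (0.2, 0.4, 0.6, 0.8):
    print(f"z = {z:.1f}:  H1^(1)/D1 = {collins_ratio(z):.2f}")
# a*exp(a)*E1(a) -> 1 for large a, so the ratio is suppressed at high z.
```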
Considering only $`u`$-quarks, the expression for the asymmetry (4) reduces to the simple form $$A_T(x,y,z)=P_TD_{nn}\frac{\delta u(x)}{u(x)}\frac{H_1^{(1)u}(z)}{D_1^u(z)}$$ (10) for a proton target, and $$A_T(x,y,z)=\left(1-\frac{3}{2}\omega _D\right)P_TD_{nn}\frac{\delta u(x)+\delta d(x)}{u(x)+d(x)}\frac{H_1^{(1)u}(z)}{D_1^u(z)}$$ (11) for a deuteron target. Here $`\omega _D=0.05\pm 0.01`$ is the probability of the deuteron to be in the $`D`$-state. To simulate a measurement of $`A_T`$, the approximation $`\delta q(x)=\mathrm{\Delta }q(x)`$ could be used in view of the relatively low $`Q^2`$ values at HERMES, in accordance with the above discussion. The Gehrmann-Stirling parameterization in leading order \[gehrstir\] was taken for $`\mathrm{\Delta }q(x)`$ and the GRV94LO parameterization \[grv94lo\] for $`q(x)`$. The $`Q^2`$ evolution of the quark distributions was neglected, and $`Q^2=2.5`$ GeV<sup>2</sup> was taken as an average value for the HERMES kinematical region. The HERMES Monte Carlo program HMC was used to account for the spectrometer acceptance. The following cuts were applied to the kinematic variables of the pion ($`x_F=2p_L/W`$, where $`p_L`$ is the longitudinal momentum of the hadron with respect to the virtual photon in the photon-nucleon c.m.s., and $`z=E_h/\nu `$, where $`E_h`$ is the energy of the produced hadron): $`x_F>0`$, $`z>0.1`$, $`P_h>0.05`$ GeV. The simulated data were divided into 5×5 bins in ($`x,z`$). The expectations for the asymmetry $`A_T^{\pi ^+}(x)`$, as would be measured by HERMES using a proton target, are presented in Fig. 2a in different intervals of the pion variable $`z`$. The projected accuracies for the asymmetry were estimated according to $$\delta A_T=\left\langle \left(\frac{P_h}{zm_\pi }\mathrm{sin}(\varphi _s^{\ell }+\varphi _h^{\ell })\right)^2\right\rangle ^{\frac{1}{2}}\frac{1}{\sqrt{N_\pi }},$$ (12) where $`N_\pi `$ is the total number of measured positive pions after kinematic cuts, and $`\langle \dots \rangle `$ means averaging over all accepted events. In this way the product of the transversity distribution and the ratio of the fragmentation functions, $$K(x,z)=\delta u(x)\frac{H_1^{(1)u}(z)}{D_1^u(z)},$$ (13) as well as the projected statistical accuracy for a measurement of this function, were calculated and are shown in Fig. 2b, again for the case of a proton target. The factorized form of expression (10) with respect to the variables $`x`$ and $`z`$ allows the simultaneous reconstruction of the shapes of the two unknown functions $`\delta u(x)`$ and $`H_1^{(1)u}(z)/D_1^u(z)`$, while the relative normalization cannot be fixed without a further assumption. As was discussed above, the transversity distribution $`\delta q(x)`$ conceivably coincides with the helicity distribution $`\mathrm{\Delta }q(x)`$ at small values of $`Q^2`$, where the relativistic effects are expected to be small. According to Fig. 1 the differences are smallest in the region of intermediate and large values of $`x`$. Hence the assumption $$\delta q(x_0)=\mathrm{\Delta }q(x_0)$$ (14) at $`x_0=0.25`$ was made to resolve the normalization ambiguity. The experimental data then consist of 25 measured values of the function $`K(x_i,z_j)`$, as opposed to 9 unknown function values: 4 values for $`\delta u(x_i)`$ and 5 values for $`H_1^{(1)u}(z_j)/D_1^u(z_j)`$, where the indices $`i`$ and $`j`$ enumerate the experimental intervals in $`x`$ and in $`z`$, respectively.
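Because $`K(x_i,z_j)`$ is, up to noise, a rank-1 (outer-product) array, that over-determination can be exploited with any rank-1 factorization; the SVD provides the least-squares one. The following schematic illustration on synthetic data is a stand-in for the full $`\chi ^2`$ fit described next, with the normalization pinned at one $`x`$ bin in the spirit of Eq. (14):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic truth: K_ij = a_i * b_j, a ~ delta u(x_i), b ~ H1^(1)u/D1^u at z_j
a_true = np.array([0.45, 0.38, 0.28, 0.18, 0.09])
b_true = np.array([1.9, 1.6, 1.3, 1.0, 0.7])
K = np.outer(a_true, b_true) * (1.0 + 0.05 * rng.standard_normal((5, 5)))

# Best rank-1 approximation (Eckart-Young): leading SVD component
U, s, Vt = np.linalg.svd(K)
a_fit = np.abs(U[:, 0]) * np.sqrt(s[0])    # abs() fixes the overall sign ambiguity
b_fit = np.abs(Vt[0]) * np.sqrt(s[0])

# Resolve the remaining scale ambiguity as in Eq. (14): pin a at one x bin
scale = a_true[2] / a_fit[2]               # stand-in for delta q(x0) = Delta q(x0)
a_fit, b_fit = a_fit * scale, b_fit / scale
print("delta-u shape, fit :", np.round(a_fit, 3))
print("delta-u shape, true:", a_true)
```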
The standard procedure of $`\chi ^2`$ minimization was applied to reconstruct the values of both $`\delta u(x)`$ and $`H_1^{(1)u}(z)/D_1^u(z)`$ and to evaluate their projected statistical accuracies expected for a real measurement at HERMES. The results are shown in Figs. 3a,b, respectively. In an analogous way, the consideration of the deuteron asymmetry (11) allows the evaluation of the projected statistical accuracies for a measurement of the functions $`\delta u(x)+\delta d(x)`$ and $`H_1^{(1)u}(z)/D_1^u(z)`$ (see Figs. 4a,b, respectively). The projected statistical accuracy is considerably worse than that for the proton target; this is caused mainly by the expected smaller value of the asymmetry (11), which in turn is due to the lower value of $`(\delta u(x)+\delta d(x))/(u(x)+d(x))`$ compared to $`\delta u(x)/u(x)`$. Two sources of systematic uncertainty arising from approximations used in the analysis were investigated. To evaluate the contribution of the normalization assumption (14), the relative difference between the transversity distribution $`\delta u(x,Q^2)`$ and the helicity distribution $`\mathrm{\Delta }u(x,Q^2)`$ was studied as a function of $`x`$ and $`Q^2`$ in the HERMES kinematics. Starting from $`\delta u(x)=\mathrm{\Delta }u(x)`$ at the scale $`Q_0^2=0.4`$ GeV<sup>2</sup>, both functions were evolved to higher values of $`Q^2`$. The results are shown in Fig. 5 and allow the conclusion that the relative difference is small for $`x`$ above $`0.2÷0.3`$; the corresponding systematic uncertainty is at the level of $`2÷5`$%. A larger contribution to the systematic uncertainty originates from the above-mentioned ‘contamination’ of $`\pi ^+`$ production by quark flavors other than $`u`$, when assuming $`u`$-quark dominance in the analysis. The $`x`$ and $`z`$ dependence of this contamination was evaluated with HMC. Both contributions were added linearly; the resulting total projected systematic uncertainties on the extraction of the transversity distribution $`\delta u(x)`$ and the fragmentation function ratio $`H_1^{(1)u}(z)/D_1^u(z)`$, as would be measured using a proton target, are shown as hatched bands in Figs. 3a,b, as a function of $`x`$ and $`z`$, respectively. The same procedure for a deuteron target yields the projected systematic uncertainties for $`\delta u(x)+\delta d(x)`$ and $`H_1^{(1)u}(z)/D_1^u(z)`$, shown as hatched bands in Figs. 4a,b, respectively. ## 5 Conclusions In conclusion, the HERMES experiment using a transversely polarized proton target will be capable of measuring simultaneously, and with good statistical precision, the shapes of the u-quark transversity distribution $`\delta u(x)`$ and of the ratio of the fragmentation functions $`H_1^{(1)u}(z)/D_1^u(z)`$. The normalization can be fixed under the assumption that in the HERMES $`Q^2`$ range the transversity distribution is well described by the helicity distribution at large $`x`$. Using a deuteron target, information on $`\delta u(x)+\delta d(x)`$ will be available, but with considerably less statistical accuracy compared to a measurement of $`\delta u(x)`$ from a proton target. The systematic uncertainty of the method proposed in this paper is dominated by the assumption of $`u`$-quark dominance. ## Acknowledgements We thank Klaus Rith for discussions which led to the development of the method described in this paper, and the HERMES Collaboration for the kind permission to use their Monte Carlo program.
We are indebted to Ralf Kaiser for the careful reading of the manuscript.
no-problem/0002/astro-ph0002035.html
ar5iv
text
# Gamma-Ray Bursts as a Probe of the Very High Redshift Universe ## Introduction There is increasingly strong evidence that gamma-ray bursts (GRBs) are associated with star-forming galaxies and occur near or in the star-forming regions of these galaxies. These associations provide indirect evidence that at least the long GRBs detected by BeppoSAX are a result of the collapse of massive stars. The discovery of what appear to be supernova components in the afterglows of GRBs 970228 and 980326 provides direct evidence that at least some GRBs are related to the deaths of massive stars, as predicted by the widely-discussed collapsar model of GRBs. If GRBs are indeed related to the collapse of massive stars, one expects the GRB rate to be approximately proportional to the star-formation rate (SFR). ## GRBs as a Probe of Star Formation Observational estimates indicate that the SFR in the universe was about 15 times larger at a redshift $`z\sim 1`$ than it is today. The data at higher redshifts from the Hubble Deep Field (HDF) in the north suggest a peak in the SFR at $`z\sim 1`$–2, but the actual situation is highly uncertain. However, theoretical calculations show that the birth rate of Pop III stars produces a peak in the SFR in the universe at redshifts $`16\lesssim z\lesssim 20`$, while the birth rate of Pop II stars produces a much larger and broader peak at redshifts $`2\lesssim z\lesssim 10`$. Therefore one expects GRBs to occur out to at least $`z\sim 10`$ and possibly $`z\sim 15`$–20, redshifts that are far larger than those expected for the most distant quasars. Consequently GRBs may be a powerful probe of the star-formation history of the universe, and particularly of the SFR at VHRs. In Figure 1, we have plotted the SFR versus redshift from a phenomenological fit to the SFR derived from submillimeter, infrared, and UV data at redshifts $`z<5`$, and from a numerical simulation at redshifts $`z\gtrsim 5`$. These simulations indicate that the SFR increases with increasing redshift until $`z\sim 10`$, at which point it levels off. The smaller peak in the SFR at $`z\sim 18`$ corresponds to the formation of Population III stars, brought on by cooling by molecular hydrogen. Since GRBs are detectable at these VHRs and their redshifts may be measurable from the absorption-line systems and the Ly$`\alpha `$ break in the afterglows, if the GRB rate is proportional to the SFR, then GRBs could provide unique information about the star-formation history of the VHR universe. More easily but less informatively, one can examine the GRB peak photon flux distribution $`N_{GRB}(P)`$. To illustrate this, we have calculated the expected GRB peak flux distribution assuming (1) that the GRB rate is proportional to the SFR (this may underestimate the GRB rate at VHRs, since it is generally thought that the initial mass function will be tilted toward a greater fraction of massive stars at VHRs because of less efficient cooling due to the lower metallicity of the universe at these early times), (2) that the SFR is that given in Figure 1, and (3) that the peak photon luminosity distribution $`f(L_P)`$ of the bursts is independent of $`z`$. There is a mis-match of about a factor of three between the $`z<5`$ and $`z\gtrsim 5`$ regimes. However, estimates of the star formation rate are uncertain by at least this amount in both regimes. We have therefore chosen to match the two regimes smoothly to one another, in order to avoid creating a discontinuity in the GRB peak flux distribution that would be entirely an artifact of this mis-match.
For a peak luminosity function $`f(L_P)`$ and for $`dL_P/d\nu \propto \nu ^{-\alpha }`$, the observed GRB peak flux distribution $`N_{GRB}(P)`$ is given by the following convolution integral: $$N_{GRB}(P)=\mathrm{\Delta }T_{obs}\int _0^{\mathrm{\infty }}R_{GRB}(P|L_P)f[L_P]dL_P,$$ (1) where $`\mathrm{\Delta }T_{obs}`$ is the length of time of observation, $`D(z)`$ is the comoving distance, $$R_{GRB}(P|L_P)\propto \frac{R_{SF}(z)}{1+z}\frac{dV(z)}{dz}\left|\frac{dz(P|L_P)}{dP}\right|,$$ (2) $`R_{SF}(z)`$ is the local co-moving SFR at $`z`$, $`dV(z)/dz`$ is the differential comoving volume, and the redshift $`z=z(P|L_P)`$ is fixed by the flux-luminosity relation $`L_P=4\pi D^2(z)(1+z)^\alpha P`$. The left panel of Figure 2 shows the number $`N_{\ast }(z)`$ of stars expected as a function of redshift $`z`$ (i.e., the SFR, weighted by the co-moving volume, and time-dilated) for an assumed cosmology $`\mathrm{\Omega }_M=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ (other cosmologies give similar results). The solid curve corresponds to the star-formation rate in Figure 1. The dashed curve corresponds to an alternative derivation of the star-formation rate. This figure shows that $`N_{\ast }(z)`$ peaks sharply at $`z\approx 2`$ and then drops off fairly rapidly at higher $`z`$, with a tail that extends out to $`z\sim 12`$. The rapid rise in $`N_{\ast }(z)`$ out to $`z\approx 2`$ is due to the rapidly increasing volume of space. The rapid decline beyond $`z\approx 2`$ is due almost completely to the “edge” in the spatial distribution produced by the cosmology. In essence, the sharp peak in $`N_{\ast }(z)`$ at $`z\approx 2`$ reflects the fact that the SFR we have taken is fairly broad in $`z`$, and consequently, the behavior of $`N_{\ast }(z)`$ is dominated by the behavior of the co-moving volume $`dV(z)/dz`$; i.e., the shape of $`N_{\ast }(z)`$ is due almost entirely to cosmology. The right panel in Figure 2 shows the cumulative distribution $`N_{\ast }(>z)`$ of the number of stars expected as a function of redshift $`z`$. The solid and dashed curves have the same meaning as in the left panel. This figure shows that $`\sim 40\%`$ of all stars have redshifts $`z>5`$. The upper panel of Figure 3 shows the predicted peak photon flux distribution $`N_{GRB}(P)`$. The solid curve assumes that all bursts have a peak (isotropic) photon luminosity $`L_P=10^{58}`$ ph s<sup>-1</sup>. However, there is now overwhelming evidence that GRBs are not “standard candles.” Consequently, we also show in Figure 3, as an illustrative example, the convolution of this same SFR and a logarithmically flat photon luminosity function $`f(L_P)`$ centered on $`L_P=10^{58}`$ ph s<sup>-1</sup>, and having widths $`\mathrm{\Delta }L_P/L_P=10`$, 100 and 1000 (the seven bursts with well-determined redshifts and published peak isotropic photon luminosities have a mean peak photon luminosity and sample variance $`\langle \mathrm{log}L_P\rangle =58.1\pm 0.7`$; the actual luminosity function of GRBs could well be even wider). The middle panel of Figure 3 shows the predicted cumulative peak photon flux distribution $`N_{GRB}(>P)`$ for the same luminosity functions. For the SFR that we have assumed, we find that, if GRBs are assumed to be “standard candles,” the predicted peak photon flux distribution falls steeply throughout the BATSE and HETE-2 regime, and therefore fails to match the observed distribution, in agreement with earlier work.
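As an illustration of how Eqs. (1) and (2) are put to work, the following schematic sketch (Python/NumPy) evaluates the cumulative counts $`N_{GRB}(>P)`$. It assumes a toy smoothly matched SFR, a log-flat luminosity function of width $`\mathrm{\Delta }L_P/L_P=100`$, $`\alpha =1`$, and $`H_0=65`$ km s<sup>-1</sup> Mpc<sup>-1</sup>; all of these are stand-ins rather than the inputs used for Figure 3, and the overall normalization is arbitrary.

```python
import numpy as np

# Flat Lambda-CDM with Omega_M = 0.3, Omega_L = 0.7; H0 = 65 km/s/Mpc is an assumption.
c_km, H0 = 2.998e5, 65.0
DH = c_km / H0                                   # Hubble distance in Mpc
MPC_CM = 3.086e24
z = np.linspace(1e-3, 20.0, 4000)
Ez = np.sqrt(0.3 * (1 + z) ** 3 + 0.7)
D = DH * np.cumsum(1.0 / Ez) * (z[1] - z[0])     # comoving distance D(z), Mpc
dVdz = 4 * np.pi * D**2 * DH / Ez                # differential comoving volume

def R_SF(zz):
    # toy SFR: rises to z ~ 2, stays roughly flat beyond (stand-in for Figure 1)
    return 0.1 * (1 + zz) ** 2.5 / (1 + ((1 + zz) / 3.0) ** 2.5)

alpha = 1.0                                      # assumed photon spectral index
geom = 4 * np.pi * (D * MPC_CM) ** 2 * (1 + z) ** alpha   # L_P = geom(z) * P

# cumulative burst rate inside redshift z: integral of R_SF/(1+z) dV/dz
cum = np.cumsum(R_SF(z) / (1 + z) * dVdz) * (z[1] - z[0])

# log-flat luminosity function of width Delta L_P/L_P = 100, centered on 1e58 ph/s
logL = np.linspace(57.0, 59.0, 41)
w = np.ones_like(logL) / logL.size

for P in (0.2, 1.0, 6.0, 20.0):                  # peak flux in ph cm^-2 s^-1
    zmax = np.interp(10**logL / P, geom, z)      # z out to which a burst exceeds P
    print(f"P = {P:5.1f}  ->  N(>P) ~ {np.dot(w, np.interp(zmax, z, cum)):.3e}")
```

Working with the cumulative form avoids evaluating the Jacobian $`|dz/dP|`$ of Eq. (2) explicitly; differentiating $`N(>P)`$ numerically recovers $`N(P)`$.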
In fact, we find that a photon luminosity function spanning at least a factor of 100 is required in order to obtain semi-quantitative agreement with the principal features of the observed distribution; i.e., a roll-over at a peak photon flux of $`P\approx 6`$ ph cm<sup>-2</sup> s<sup>-1</sup> and a slope above this of about $`-3/2`$. This implies that there are large numbers of GRBs with peak photon number fluxes below the detection threshold of BATSE and HETE-2, and even of Swift. The lower panel of Figure 3 shows the predicted fraction of bursts with peak photon number flux $`P`$ that have redshifts $`z>5`$, for the same luminosity functions. This panel shows that a significant fraction of the bursts near the Swift detection threshold will have redshifts $`z>5`$. ## Conclusions We have shown that, if many GRBs are indeed produced by the collapse of massive stars, one expects GRBs to occur out to at least $`z\sim 10`$ and possibly $`z\sim 15`$–20, redshifts that are far larger than those expected for the most distant quasars. GRBs therefore give us information about the star-formation history of the universe, including the earliest generations of stars. The absorption-line systems and the Ly$`\alpha `$ forest visible in the spectra of GRB afterglows can be used to trace the evolution of metallicity in the universe, and to probe the large-scale structure of the universe at very high redshifts. Finally, measurement of the Ly$`\alpha `$ break in the spectra of GRB afterglows can be used to constrain, or possibly measure, the epoch at which re-ionization of the universe occurred, using the Gunn-Peterson test. Thus GRBs and their afterglows may be a powerful probe of the very high redshift ($`z\gtrsim 5`$) universe.
no-problem/0002/hep-th0002081.html
ar5iv
text
# 1 Introduction to the N=2 Superconformal Algebras ## 1 Introduction to the N=2 Superconformal Algebras ### 1.1 The N=2 superconformal algebras The N=2 superconformal algebras were discovered in the seventies independently by Ademollo et al. and by Kac. The first authors derived the algebras for physical purposes, in order to define supersymmetric strings, whereas Kac derived them for mathematical purposes along with his classification of Lie superalgebras. In modern notation these algebras read $$\begin{array}{ccccccc}[L_m,L_n]\hfill & =& (m-n)L_{m+n}+\frac{c}{12}(m^3-m)\delta _{m+n,0},\hfill & & [H_m,H_n]\hfill & =& \frac{c}{3}m\delta _{m+n,0},\hfill \\ [L_m,G_r^\pm ]\hfill & =& \left(\frac{m}{2}-r\right)G_{m+r}^\pm ,\hfill & & [H_m,G_r^\pm ]\hfill & =& \pm G_{m+r}^\pm ,\hfill \\ [L_m,H_n]\hfill & =& -nH_{m+n}\hfill & & & & \\ \{G_r^{-},G_s^+\}\hfill & =& \multicolumn{5}{c}{2L_{r+s}-(r-s)H_{r+s}+\frac{c}{3}(r^2-\frac{1}{4})\delta _{r+s,0}.}\end{array}$$ (1.1) The bosonic generators $`L_n`$ and $`H_n`$ correspond to the stress-energy tensor (Virasoro generators) and to the U(1) current, respectively. The fermionic generators $`G_r^\pm `$, with conformal weight 3/2, correspond to the two fermionic currents. Depending on the modings of the algebra generators $`G_r^\pm `$ and $`H_n`$ one distinguishes three N=2 algebras: the Neveu-Schwarz, the Ramond and the twisted N=2 algebras. For the Neveu-Schwarz N=2 algebra the two fermionic generators $`G_r^\pm `$ are half-integer moded and the U(1) generators $`H_n`$ are integer moded, i.e. $`r\in 𝐙+1/2,n\in 𝐙`$. For the Ramond N=2 algebra $`G_r^\pm `$ and $`H_n`$ are integer moded, i.e. $`r\in 𝐙,n\in 𝐙`$, and for the twisted N=2 algebra $`G_r^+`$ is integer moded whereas $`G_s^{-}`$ and $`H_n`$ are half-integer moded, i.e. $`r\in 𝐙,s\in 𝐙+1/2,n\in 𝐙+1/2`$. The Neveu-Schwarz, the Ramond and the twisted N=2 superconformal algebras are not the whole story, however. In 1990 Dijkgraaf, Verlinde and Verlinde presented the Topological N=2 superconformal algebra, which is the symmetry algebra of Topological Conformal Field Theory in two dimensions. This algebra can be obtained from the Neveu-Schwarz N=2 algebra by ‘twisting’ the stress-energy tensor, adding to it the derivative of the U(1) current, a procedure known as the ‘topological twist’. The Topological N=2 algebra reads $$\begin{array}{ccccccc}[ℒ_m,ℒ_n]\hfill & =& (m-n)ℒ_{m+n},\hfill & & [ℋ_m,ℋ_n]\hfill & =& \frac{c}{3}m\delta _{m+n,0},\hfill \\ [ℒ_m,𝒢_n]\hfill & =& (m-n)𝒢_{m+n},\hfill & & [ℋ_m,𝒢_n]\hfill & =& 𝒢_{m+n},\hfill \\ [ℒ_m,𝒬_n]\hfill & =& -n𝒬_{m+n},\hfill & & [ℋ_m,𝒬_n]\hfill & =& -𝒬_{m+n},\hfill \\ [ℒ_m,ℋ_n]\hfill & =& \multicolumn{5}{c}{-nℋ_{m+n}+\frac{c}{6}(m^2+m)\delta _{m+n,0},}\\ \{𝒢_m,𝒬_n\}\hfill & =& \multicolumn{5}{c}{2ℒ_{m+n}-2nℋ_{m+n}+\frac{c}{3}(m^2+m)\delta _{m+n,0},}\end{array}m,n\in \text{Z}.$$ (1.2) The fermionic generators $`𝒢_n`$ and $`𝒬_n`$, with conformal weights 2 and 1, respectively, and the generators $`ℋ_n`$ are integer moded, i.e. $`n\in 𝐙`$. The existence of two fermionic zero modes for the Ramond and for the Topological N=2 algebras allows one to classify the states in the corresponding Verma modules into two sectors: the $`(+)`$ and $`(-)`$ sectors for the Ramond states, with $`G_0^{-}`$ and $`G_0^+`$ interpolating between them, and the $`𝒢`$ and $`𝒬`$ sectors for the topological states, with $`𝒬_0`$ and $`𝒢_0`$ interpolating between them. The two sectors are not the complete description of the states, however, since there exist indecomposible states in the Verma modules which do not belong to any of the sectors.
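As a consistency check on the brackets of the Topological N=2 algebra as written in (1.2), the following short script (Python, exact rational arithmetic) verifies the graded Jacobi identity on randomly chosen mode triples. The bracket table is transcribed from (1.2); the central charge is set to an arbitrary fixed value, which suffices because it enters only linearly.

```python
from fractions import Fraction
import random

C = Fraction(9)                      # fixed central charge; it enters only linearly
PAR = {"L": 0, "H": 0, "G": 1, "Q": 1, "c": 0}
ORD = {"L": 0, "H": 1, "G": 2, "Q": 3}

def bracket(a, b):
    """Graded bracket of basis elements of the Topological N=2 algebra, Eq. (1.2)."""
    (ta, m), (tb, n) = a, b
    if ta == "c" or tb == "c":
        return {}                    # the central element commutes with everything
    if ORD[ta] > ORD[tb]:            # reduce to the cases listed in (1.2):
        s = 1 if PAR[ta] * PAR[tb] else -1      # [a,b] = -(-1)^{|a||b|}[b,a]
        return {k: s * v for k, v in bracket(b, a).items()}
    d = 1 if m + n == 0 else 0
    if (ta, tb) == ("L", "L"): return {("L", m + n): Fraction(m - n)}
    if (ta, tb) == ("L", "H"): return {("H", m + n): Fraction(-n),
                                       ("c", 0): C * Fraction(m * m + m, 6) * d}
    if (ta, tb) == ("L", "G"): return {("G", m + n): Fraction(m - n)}
    if (ta, tb) == ("L", "Q"): return {("Q", m + n): Fraction(-n)}
    if (ta, tb) == ("H", "H"): return {("c", 0): C * Fraction(m, 3) * d}
    if (ta, tb) == ("H", "G"): return {("G", m + n): Fraction(1)}
    if (ta, tb) == ("H", "Q"): return {("Q", m + n): Fraction(-1)}
    if (ta, tb) == ("G", "Q"): return {("L", m + n): Fraction(2),
                                       ("H", m + n): Fraction(-2 * n),
                                       ("c", 0): C * Fraction(m * m + m, 3) * d}
    return {}                        # {G,G} = {Q,Q} = 0

def brk(x, y):
    """Bracket extended bilinearly to vectors stored as {basis: coefficient}."""
    out = {}
    for ka, va in x.items():
        for kb, vb in y.items():
            for k, v in bracket(ka, kb).items():
                out[k] = out.get(k, 0) + va * vb * v
    return {k: v for k, v in out.items() if v != 0}

random.seed(1)
for _ in range(500):
    a, b, c = [(random.choice("LHGQ"), random.randint(-4, 4)) for _ in range(3)]
    lhs = brk({a: 1}, brk({b: 1}, {c: 1}))       # [a,[b,c]]
    sgn = (-1) ** (PAR[a[0]] * PAR[b[0]])
    rhs = brk(brk({a: 1}, {b: 1}), {c: 1})       # [[a,b],c] + (-1)^{|a||b|}[b,[a,c]]
    for k, v in brk({b: 1}, brk({a: 1}, {c: 1})).items():
        rhs[k] = rhs.get(k, 0) + sgn * v
    assert {k: v for k, v in rhs.items() if v != 0} == lhs
print("graded Jacobi identity holds on all sampled mode triples")
```

The same harness, with the table replaced by the brackets of (1.1), checks the Neveu-Schwarz and Ramond cases.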
In the case of the Ramond N=2 algebra the indecomposible states are called ‘no-helicity’ states and in the case of the Topological N=2 algebra they are called ‘no-label’ states. ### 1.2 Relations between the N=2 superconformal algebras The Neveu-Schwarz and the Ramond N=2 algebras are connected through the spectral flows. Namely, the even $`(𝒰_\theta )`$ and the odd $`(𝒜_\theta )`$ spectral flows transform the Neveu-Schwarz and the Ramond N=2 algebras into each other for $`\theta \in 𝐙+1/2`$. However, they do not map highest weight (h.w.) vectors into h.w. vectors, except in particular cases, and they deform the Verma modules very much, so that these are not isomorphic. The Neveu-Schwarz and the Topological N=2 algebras are connected through the topological twists $`T_W^\pm `$. Under these, only the topological h.w. vectors annihilated by $`𝒢_0`$ are transformed into h.w. vectors of the Neveu-Schwarz N=2 algebra, and the Verma modules are deformed in a similar way as by the action of the spectral flows for $`\theta =\pm 1/2`$. Thus the corresponding Verma modules are not isomorphic. The topological twists $`T_W^\pm `$ consist of the modification of the stress-energy tensor by adding the derivative of the U(1) current: $`T(z)\to T(z)\pm \frac{1}{2}\partial H(z)`$. As a result the conformal weights (spins) of the fermionic fields are modified by $`\pm 1/2`$, which automatically produces a shift of $`\pm 1/2`$ in the modings of these generators: the spin-3/2 fermionic generators $`G_r^+,G_r^{-}`$, with $`r\in 𝐙+1/2`$, are transformed into the spin-2 and spin-1 fermionic generators $`𝒢_n`$ and $`𝒬_n`$, respectively, with $`n\in 𝐙`$. In addition, $`𝒬(z)`$ has the properties of a BRST current, $`𝒬_0`$ being the BRST charge. The topological nature of the resulting algebra manifests itself through the BRST-exactness of the stress-energy tensor: $`ℒ_m=\frac{1}{2}\{𝒬_0,𝒢_m\}`$. When this occurs, the correlators of the fields of the superconformal field theory do not depend on the two-dimensional metric, as is well known in the literature. Finally, the Ramond and the Topological N=2 algebras are connected through the composition of the spectral flows with $`\theta \in 𝐙+1/2`$ and the topological twists. For $`\theta =\pm 1/2`$ one finds exact isomorphisms between the states of these algebras level by level. The Verma modules of the Ramond and of the Topological N=2 algebras are therefore isomorphic. ## 2 Representation Theory of the N=2 Superconformal Algebras ### 2.1 Historical overview Let us now briefly review the most important developments concerning the representation theory of the N=2 superconformal algebras. One can distinguish two periods of remarkable activity. In the first period, from 1985 until 1988, the determinant formulae for the Neveu-Schwarz, the Ramond and the twisted N=2 algebras were written down by various authors, unitarity of the representations was analyzed, and some singular vectors (called simply null vectors) were computed. Also some embedding diagrams were presented and the (even) spectral flows interpolating between the Neveu-Schwarz and the Ramond N=2 algebras were written down. During the period from 1989 until 1993 there was not much activity in the representation theory of the N=2 algebras. However, two developments took place which were of crucial importance. On one side the chiral rings were discovered and with them one realized the necessity to analyze the chiral representations of the N=2 algebras.
On the other side the Topological N=2 algebra was written down and its importance for string theory was also realized. In the second period of remarkable activity, from 1994 until nowadays, there have been several important findings regarding the states in the Verma modules of the N=2 algebras: the discovery of two-dimensional singular spaces, the discovery of subsingular vectors and the discovery of indecomposible states. In addition an (almost) complete classification of embedding diagrams was presented for the Neveu-Schwarz N=2 algebra, and the determinant formulae for the Topological N=2 algebra were computed, as well as the determinant formulae for the chiral representations of the Topological, the Neveu-Schwarz and the Ramond N=2 algebras. Moreover, the odd spectral flows were written down, which are believed to provide the complete set of automorphisms for the N=2 algebras. Furthermore, recently a powerful tool has been developed for the study of the representations of any Lie algebra or superalgebra, the so-called ‘adapted ordering method’. This method has so far been applied successfully to the N=2 algebras and to the Ramond N=1 algebra. In what follows we will say a few words about all these new results and developments. ### 2.2 The odd spectral flow The odd spectral flow $`𝒜_\theta `$, acting on the generators of the Neveu-Schwarz and the Ramond N=2 algebras, reads $$\begin{array}{ccccccc}\hfill 𝒜_\theta L_m𝒜_\theta ^{-1}& =& L_m+\theta H_m+\frac{c}{6}\theta ^2\delta _{m,0},\hfill & & & & \\ \hfill 𝒜_\theta H_m𝒜_\theta ^{-1}& =& -H_m-\frac{c}{3}\theta \delta _{m,0},\hfill & & & & \\ \hfill 𝒜_\theta G_r^+𝒜_\theta ^{-1}& =& G_{r-\theta }^{-},\hfill & & & & \\ \hfill 𝒜_\theta G_r^{-}𝒜_\theta ^{-1}& =& G_{r+\theta }^+,\hfill & & & & \end{array}$$ (2.3) with $`𝒜_\theta ^{-1}=𝒜_\theta `$ (it is therefore an involution). $`𝒜_\theta `$ and the even (usual) spectral flow $`𝒰_\theta `$ are quasi-mirror symmetric under $`H_m\to -H_m`$, $`G_r^\pm \to G_r^{\mp }`$, $`\theta \to -\theta `$. However, $`𝒜_\theta `$ generates $`𝒰_\theta `$ and consequently it is the only fundamental spectral flow, as one can see from the composition rules $$𝒰_{\theta _2}𝒰_{\theta _1}=𝒰_{(\theta _2+\theta _1)},𝒜_{\theta _2}𝒜_{\theta _1}=𝒰_{(\theta _2-\theta _1)},$$ (2.4) $$𝒜_{\theta _2}𝒰_{\theta _1}=𝒜_{(\theta _2-\theta _1)},𝒰_{\theta _2}𝒜_{\theta _1}=𝒜_{(\theta _2+\theta _1)}.$$ (2.5) $`𝒜_\theta `$ is believed to provide the complete set of automorphisms of the N=2 algebras (for $`\theta \in 𝐙`$), whereas $`𝒰_\theta `$ provides only ‘half’ of them. ### 2.3 Indecomposible singular vectors The indecomposible singular vectors of the Ramond N=2 algebra were overlooked until very recently (they do not exist for the Neveu-Schwarz N=2 algebra). In fact, they were discovered first for the Topological N=2 algebra, where they were called ‘no-label’ singular vectors. Shortly afterwards some examples were presented for the Ramond N=2 algebra under the name ‘no-helicity’ singular vectors. At first sight, just by inspecting the anticommutator of the fermionic zero modes, one realizes that indecomposible singular vectors are allowed to exist by the Topological (and Ramond) N=2 algebra. Namely, from $`\{𝒢_0,𝒬_0\}=2ℒ_0`$ one deduces that for non-zero conformal weight $`\mathrm{\Delta }\ne 0`$ all the states can be decomposed into linear combinations of $`𝒢_0`$-closed states and $`𝒬_0`$-closed states (i.e.
states annihilated by $`𝒢_0`$ and states annihilated by $`𝒬_0`$): $$|\chi \rangle =\frac{1}{2\mathrm{\Delta }}(𝒢_0𝒬_0|\chi \rangle +𝒬_0𝒢_0|\chi \rangle ).$$ (2.6) For zero conformal weight $`\mathrm{\Delta }=0`$, however, there is no such decomposition and the states can be annihilated either by one of the fermionic zero modes, or by both, or by none of them. The latter are the indecomposible ‘no-label’ states. In the case of the Ramond N=2 algebra, from the anticommutator $`\{G_0^+,G_0^{-}\}=2L_0-\frac{c}{12}`$ one deduces that the ‘no-helicity’ indecomposible states are allowed for $`\mathrm{\Delta }=\frac{c}{24}`$. Indecomposible singular vectors are also allowed by the analysis of maximal dimensions (see later). They actually exist already at levels 1 and 2, and recently it has been proved that they must necessarily exist. The argument goes as follows. One analyzes two curves in the parameter space of singular vectors. Each curve corresponds to two different families of singular vectors. At some discrete intersection points, however, the four singular vectors reduce to three (two of them coincide). But the rank of the inner product matrix is upper semi-continuous, and therefore another singular vector must exist. By analyzing the dimensions of the singular vectors involved, one finally deduces that the new singular vector must be indecomposible. ### 2.4 New determinant formulae The determinant formulae for the Topological N=2 algebra have been computed for generic (standard) Verma modules, for chiral Verma modules and for no-label Verma modules. In addition, determinant formulae have been computed for the chiral Verma modules of the Neveu-Schwarz and of the Ramond N=2 algebras, as well as for the no-helicity Verma modules of the Ramond N=2 algebra (as a straightforward derivation of the results for the no-label Verma modules of the Topological N=2 algebra). The generic Verma modules of the Topological N=2 algebra are built on $`𝒢_0`$-closed and/or $`𝒬_0`$-closed h.w. vectors (annihilated either by $`𝒢_0`$ or by $`𝒬_0`$). For conformal weight $`\mathrm{\Delta }\ne 0`$ there are two h.w. vectors at the bottom of the Verma modules (one is $`𝒢_0`$-closed and the other $`𝒬_0`$-closed), giving rise to the two sectors: the $`𝒢`$-sector and the $`𝒬`$-sector. For $`\mathrm{\Delta }=0`$, however, there is only one h.w. vector (plus one singular vector at level zero) at the bottom of the Verma modules. The chiral Verma modules of the Topological N=2 algebra are built on chiral h.w. vectors annihilated by both $`𝒢_0`$ and $`𝒬_0`$. Therefore they have only one h.w. vector at the bottom, which has zero conformal weight $`\mathrm{\Delta }=0`$. They are incomplete Verma modules that can be realized as quotient modules of a generic Verma module with $`\mathrm{\Delta }=0`$ divided by the submodule generated by its level-zero singular vector. Similarly, the chiral Verma modules of the Ramond N=2 algebra are built on h.w. vectors annihilated by both $`G_0^+`$ and $`G_0^{-}`$, with $`\mathrm{\Delta }=\frac{c}{24}`$. The chiral (anti-chiral) Verma modules of the Neveu-Schwarz N=2 algebra, in turn, are built on chiral (anti-chiral) h.w. vectors annihilated by $`G_{-1/2}^+`$ ($`G_{-1/2}^{-}`$), satisfying $`\mathrm{\Delta }=\frac{h}{2}`$ ($`\mathrm{\Delta }=-\frac{h}{2}`$), where $`h`$ is the U(1) charge. Finally, no-label (no-helicity) Verma modules are built on no-label (no-helicity) h.w. vectors. At the bottom they consist of one h.w.
vector plus three singular vectors at level zero, obtained by the action of $`𝒢_0`$ and $`𝒬_0`$ ($`G_0^+`$ and $`G_0^{-}`$) on the no-label (no-helicity) h.w. vector. No-label (no-helicity) Verma modules appear as submodules inside generic Verma modules, the bottom of these submodules consequently consisting of four singular vectors. Apart from no-label (no-helicity) submodules, in generic Verma modules of the Topological (Ramond) N=2 algebra one can find another three types of submodules, taking into account the size and the shape of the bottom of the submodule. In chiral Verma modules, however, one finds only one kind of submodule. ### 2.5 Discovery of subsingular vectors Subsingular vectors are null vectors, but not h.w. vectors, that become singular (i.e. h.w. vectors) after the quotient of the Verma module by a submodule. That is, they become singular by setting one singular vector to zero. This implies that they are located outside that submodule, since otherwise they would go away after the quotient. Subsingular vectors for the Neveu-Schwarz, the Ramond and the Topological N=2 algebras were reported for the first time in 1996–1997. For the twisted N=2 algebra subsingular vectors were presented only recently. The discovery of subsingular vectors for the N=2 algebras was as follows. From the chiral determinant formulae for the Neveu-Schwarz, the Ramond and the Topological N=2 algebras one deduces the existence of singular vectors in the chiral Verma modules that are not singular in the complete (generic) Verma modules before the quotient that gives rise to the chiral Verma modules. These are therefore subsingular vectors in the generic Verma modules. Many explicit examples of subsingular vectors of this type (i.e. becoming singular in the chiral Verma modules) were presented for the Neveu-Schwarz, the Ramond and the Topological N=2 algebras. For the twisted N=2 algebra subsingular vectors were found by analyzing the Verma modules using the ‘adapted ordering method’ (see later). ### 2.6 Discovery of two-dimensional singular vector spaces The two-dimensional singular vector spaces were discovered first for the Neveu-Schwarz N=2 algebra by Dörrzapf in 1994. In particular, some conditions were found that guarantee the existence of two-dimensional spaces of uncharged singular vectors. These conditions can be written as the simultaneous vanishing of two functions: $`ϵ_r^+(h,c)=ϵ_r^{-}(h,c)=0`$, where $`h`$ is the U(1) charge of the h.w. vector of the Verma module. The corresponding uncharged singular vectors are located in Verma modules where two charged and one uncharged singular vectors intersect (at different levels) in such a way that the charged singular vectors are the primitive ones and the two-dimensional uncharged singular space is secondary to the two charged singular vectors. These two-dimensional spaces are spanned by a tangent space of vanishing surfaces corresponding to singular vectors. Once the ‘general formula’ for singular vectors vanishes, one gets the two-dimensional singular space in the tangent space. A straightforward extension of these results to the Topological N=2 algebra was presented subsequently. As a result, four types of topological singular vectors were found for which two-dimensional spaces may exist. The extension of these results to the Ramond N=2 algebra is also straightforward since the corresponding Verma modules are isomorphic to the Verma modules of the Topological N=2 algebra.
For the twisted N=2 algebra two-dimensional singular spaces were found using the adapted ordering method. In this case, however, the two-dimensional singular spaces are of a different nature than the ones corresponding to the other N=2 algebras: they are spanned by two primitive singular vectors instead of two secondary singular vectors. ### 2.7 N=2 Embedding diagrams In 1995 Dörrzapf presented a complete classification of embedding diagrams for the Neveu-Schwarz N=2 algebra. He proved that the relative charge $`q`$ of all singular vectors (not only the primitive ones) satisfies $`|q|\le 1`$, correcting many earlier diagrams in the literature, and he presented many more diagrams than previously known. These results have not been improved so far, although we know that they must be improved, because subsingular vectors were assumed not to exist, being discovered one year afterwards. The embedding diagrams of the Neveu-Schwarz N=2 algebra can be carefully adapted to provide embedding diagrams for the Topological and for the Ramond N=2 algebras. One has to take into account, however, that many singular vectors of these algebras do not correspond to singular vectors of the Neveu-Schwarz N=2 algebra but to null descendants of singular vectors, or even to subsingular vectors (for example, indecomposable no-label and no-helicity singular vectors of the Topological and of the Ramond N=2 algebras always correspond to subsingular vectors of the Neveu-Schwarz N=2 algebra). ## 3 The Adapted Ordering Method The ‘adapted ordering method’ can be applied to most Lie algebras and superalgebras. It allows: i) to determine maximal dimensions for a given type of singular vector space, ii) to rule out the existence of certain types of singular vectors (with dimension zero), iii) to identify all singular vectors by only a few coefficients, iv) to spot subsingular vectors, v) to obtain easily product expressions of singular vector operators in order to compute secondary singular vectors (or to decide whether they vanish), vi) to set the basis for constructing embedding diagrams. The method originates (in rudimentary form) from a procedure developed by Kent for the analytically continued Virasoro algebra. The analytical continuation is not necessary, however, for the adapted ordering method. The key idea of this method is to find a suitable ordering for the terms of the singular vectors, i.e. a criterion to decide which of two terms is the bigger one, for example between the terms at level 4 and charge 1: $`G_{-2}^+L_{-2}`$ and $`G_{-1}^+H_{-2}L_{-1}`$. The ordering must be adapted to a subset of terms $`𝒞_{l,q}^A\subset 𝒞_{l,q}`$, where $`𝒞_{l,q}`$ is the set of all possible terms at level $`l`$ with charge $`q`$. The crucial point of this method is the following. The complement of $`𝒞_{l,q}^A`$ is the ordering kernel $`𝒞_{l,q}^K=𝒞_{l,q}/𝒞_{l,q}^A`$, and its size puts a limit on the dimension of the corresponding singular vector space. Namely, if the ordering kernel $`𝒞_{l,q}^K`$ has $`n`$ elements, then there are at most $`n`$ linearly independent singular vectors $`\mathrm{\Psi }_{l,q}`$, at level $`l`$ with charge $`q`$, in a given Verma module. In other words, the singular vectors $`\mathrm{\Psi }_{l,q}`$ span a singular space that is at most $`n`$-dimensional. As a result, if $`𝒞_{l,q}^K=\varnothing `$ then there are no singular vectors of type $`\mathrm{\Psi }_{l,q}`$ (the singular space has dimension zero).
Therefore we need to find a suitable, clever ordering in order to obtain the smallest possible kernel. Furthermore, the coefficients with respect to the terms of the ordering kernel uniquely identify a singular vector. This implies that just a few (one, two, …) coefficients completely determine a singular vector, no matter its size. As a consequence one can easily find product expressions for descendant singular vectors and set the basis to construct embedding diagrams. The adapted ordering method has been applied to the Topological, the Neveu-Schwarz and the Ramond N=2 algebras, and to the twisted N=2 algebra. The maximal dimensions of the existing types of singular vectors have been found to be one or two, with the exception of some types of singular vectors in ‘no-label’ and ‘no-helicity’ Verma modules, for which the maximal dimension has been found to be three. For the Topological and the Ramond N=2 algebras the only existing types of singular vectors (primitive as well as secondary), distinguished by the relative charge $`q`$ and the annihilation properties under the fermionic zero modes, turned out to be: twenty types in generic Verma modules, with $`|q|\le 2`$, nine types in no-label and no-helicity Verma modules, with $`|q|\le 2`$, and four types in chiral Verma modules, with $`|q|\le 1`$. These results had been conjectured previously. For the Neveu-Schwarz N=2 algebra one obtained $`|q|\le 1`$, in agreement with the results described in subsection 2.7. As we pointed out before, in the case of the twisted N=2 algebra the application of the adapted ordering method has led to the discovery of subsingular vectors and two-dimensional singular spaces for this algebra. ## 4 Final Remarks In spite of the progress made in the last six years, the representation theory of the N=2 superconformal algebras is not finished yet. It remains to classify the subsingular vectors and to complete the classification of embedding diagrams. Several of the techniques that have been used for the analysis of these algebras can be easily transferred to the analysis of other Lie algebras and superalgebras. This holds especially for the adapted ordering method, which has already been applied successfully to the Ramond N=1 superconformal algebra, leading to the discovery of two-dimensional singular spaces and subsingular vectors, which do not exist for the Neveu-Schwarz N=1 superconformal algebra. Acknowledgements I am very grateful to Prof. Alexander Belavin for the invitation to participate in the Conference on Conformal Field Theory and Integrable Models, at the Landau Institute, and for his hospitality. I would also like to thank the organizers of the 6th International Wigner Symposium, at Bogazici University, for the invitation to participate.
no-problem/0002/astro-ph0002072.html
ar5iv
text
# VLBI differential astrometry at 43 GHz ## 1 Introduction One of the trends in Very-Long-Baseline Interferometry (VLBI) is to augment the angular resolution of the observations in search of a more detailed view of the inner structure of extragalactic radio sources. This is effectively carried out either by observing at millimeter wavelengths (mm-VLBI) or, at cm-wavelengths, by combining ground telescopes with antennas in space. The correct interpretation of these high-resolution observations is of much relevance since they map the morphology of highly variable regions close to the central engine of AGNs. However, multi-epoch analyses directed to understand the dynamical behavior of these inner regions critically depend on the alignment of the images: no solid conclusions can be extracted without an accurate source component (i.e. core) identification. In particular, VLBI reveals that cm-wavelength components break up into complex structures with multiple features at mm-wavelengths. These compact mm-features show a strong variability, which may be the result of phenomena only seen so far in numerical simulations (Gómez et al. 1995). For a meaningful physical understanding of those compact features, a detailed knowledge of the (absolute) kinematics of the region is crucial. It is therefore highly desirable to extend precision differential phase-delay astrometry to mm-wavelengths. In this Letter we demonstrate the feasibility of using phase-delay differential astrometry at 43 GHz. We have selected the pair of sources QSO 1928+738 and BL 2007+777, $`\sim `$5° apart, with flat spectra, high flux densities, and rich structures at 43 GHz. The astrometric analysis of these data shows the advantages and possibilities of mm-wavelength differential astrometry. ## 2 Observations and Maps We observed the radio sources QSO 1928+738 and BL 2007+777 at 43 GHz on 1999 January 3 from 15:00 to 23:30 UT. We used the complete Very Long Baseline Array (VLBA) recording in mode 256-8-2 in left circular polarization to achieve a recording bandwidth of 64 MHz. We interleaved observations of QSO 1928+738 and BL 2007+777 using integration times between 40 and 130 seconds on each source to make total cycle time durations between 110 and 300 seconds (antenna slew time was 15 seconds). The data were correlated at the National Radio Astronomy Observatory (NRAO, Socorro, NM, USA). Detections were found for both sources at all stations but Hancock, presumably due to severe weather conditions at this site. We made manual phase calibration, visibility amplitude calibration (using system temperatures and gain curves from each antenna), and fringe fitting at the correlation position with the NRAO Astronomical Image Processing System (AIPS). For astrometric purposes, we further processed the data in AIPS (tasks mbdly and cl2hf) to obtain, for each baseline and epoch, estimates of the group delay, phase-delay rate, and fringe phase at a reference frequency of 43,185 MHz. For the astrometric analysis presented in Sect. 3 we discarded data from the Saint Croix and Pie Town stations, which showed unacceptable scatter in the observables. For mapping purposes, we transferred the data into the Caltech imaging program difmap (Shepherd et al. 1995). We performed several iterations of self-calibration in phase and gain. We present the resulting hybrid maps in Fig. 1. The 43 GHz map of QSO 1928+738 displays several jet components extending southwards.
All these features appear blended together as only one or two components in previous maps at cm-wavelengths (Guirado et al. 1995, hereafter G95; Ros et al. 1999, hereafter R99), and in those obtained with space VLBI (Murphy et al. 1999). The brightest knot, labeled as Q1, is probably a jet component, unless the source is two-sided, but it is compact and well defined. Thus, it constitutes an appropriate reference point for relative astrometry at a single epoch. However, this component would not be a suitable reference point for a multi-epoch comparison of the relative separation between the two sources, as it is likely to move and evolve in brightness and shape over time. The 43 GHz map of BL 2007+777 represents a significant improvement in the knowledge of the inner structure of this source (see Fig. 1). The brightest knot seen earlier at cm-wavelengths (Guirado et al. 1998, hereafter G98; R99) breaks up into at least three new features. The kinematic nature of component Q2, almost as bright as the easternmost component Q1, is of much interest. The brightest feature of BL 2007+777 at cm-wavelengths, a blending of all components seen within 1 mas of the origin in our 43 GHz map, has been taken as a reference point for astrometry (G95; G98; R99); moreover, this feature has been considered stationary for multi-epoch astrometry analyses. Accordingly, should component Q2 be a travelling knot, the reference point selected at cm-wavelengths is likely not stable over time, and part of the previous astrometry results must be revised. ## 3 Astrometry Analysis A goal of this research has been to calibrate the limitations of our standard astrometry procedure at 43 GHz, as well as to study the potential precision of the astrometric data at this frequency. Therefore, the data-reduction procedure for the 43 GHz observation deliberately followed the same steps as those used for the 5 GHz (G95) and 8.4 GHz observations (G98; R99). We briefly go over each step of this analysis: For our 43 GHz data, (i) we predicted, via a precise theoretical model of the geometry of the array and the propagation medium, the number of cycles of phase between consecutive observations of the same source, to permit us to “connect” the phase delay (e.g. Shapiro et al. 1979; G95; R99); (ii) we defined as reference points in the 43 GHz images of the two radio sources the maximum of the brightness distribution (components Q1 in the maps of QSO 1928+738 and BL 2007+777; see Fig. 1) and subtracted the contribution of the structure of the radio sources, with respect to the reference points selected, from the phase delays; (iii) we formed the differenced phase delays by subtracting the residual (observed minus theoretical values) phase delay of BL 2007+777 from that of the previous observation of QSO 1928+738; and (iv) we estimated the relative position of the reference points from a weighted-least-squares analysis of the differenced residual phase delays. For this analysis we used an improved version of the program VLBI3 (Robertson 1975). In step (i), the geometry of our theoretical model (set of antenna coordinates, Earth-orientation parameters, and source coordinates) was consistently taken from IERS (IERS 1998 Annual Report, 1999). The theoretical model also accounted for the effect of the propagation medium on the astrometric observables.
We modeled the ionospheric delay by using total electron content (TEC) data from GPS-based global ionospheric maps generated at the epoch of our observations by the Center for Orbit Determination in Europe (Schaer et al. 1998). We followed the geometric corrections described in Klobuchar (1975) and Ros et al. (2000). We modeled the tropospheric zenith delay at each station as a piecewise-linear function characterized by values specified at epochs one hour apart. We calculated a priori values at these nodes from local surface temperature, pressure, and humidity, based on the model of Saastamoinen (1973). The antenna elevations were always higher than 20° at all stations; this allowed us to use the dry and wet Chao mapping functions (Chao 1974) to determine the tropospheric delay at non-zenith elevations for each observation at each site. We estimated the tropospheric zenith delay at the nodes of each station, along with the relative position of the sources, from a weighted-least-squares analysis. ## 4 Results and Discussion From the astrometric analysis described in Sect. 3, we obtain the following J2000.0 coordinates of QSO 1928+738 minus those of BL 2007+777 at 43 GHz: $`\mathrm{\Delta }\alpha =0^h\mathrm{\hspace{0.17em}37}^m\mathrm{\hspace{0.17em}42}\text{.}^s503443\pm \mathrm{\hspace{0.17em}0}\text{.}^s000026`$ and $`\mathrm{\Delta }\delta =-3^{\circ }\mathrm{\hspace{0.17em}54}^{\mathrm{\prime }}\mathrm{\hspace{0.17em}41}\text{.}^{\prime \prime }677208\pm \mathrm{\hspace{0.17em}0}\text{.}^{\prime \prime }000056`$, where the quoted uncertainties are overall standard errors (see Table 1), nearly twofold smaller than the standard errors corresponding to previous determinations at 5 and 8.4 GHz. From the comparison of the results of the sensitivity analysis displayed in Table 1 with similar sensitivity analyses at lower frequencies (G95; R99), we see that the improvement in precision comes from (i) the small contribution to the standard errors of the reference point identification in the map (dominated by image noise), as a consequence of the improved resolution of the maps and of the lack of ambiguity in selecting the components acting as reference, and (ii) the negligible contribution of the ionosphere, which scales down by a factor of 25 with respect to its contribution at 8.4 GHz. As occurs at cm-wavelengths, the quoted standard errors of the relative position are dominated by the uncertainties of the fixed parameters of the astrometric model (entries 3 to 10 of Table 1) and, in particular, by the uncertainties of the coordinates of the reference source, as expected for objects with a large angular separation (notice that this error is not frequency dependent). The comparison and interpretation of the relative position estimate at 43 GHz with previously reported estimates at lower frequencies will be postponed to a later publication, where the comparison will be made in great detail. The postfit residuals of the differenced phase delays corresponding to all baselines included in our analysis are shown in Fig. 2. Note the scale of the plots, ±23 ps, corresponding to ±1 phase cycle. The average root-mean-square (rms) of the postfit residuals is 2 ps, less than one tenth of the phase cycle at 43 GHz.
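The equivalence between delay and phase used here and in the comparison below is simple to make explicit; the following few lines (Python) convert rms delay residuals to equivalent phase at each observing frequency, using the 15, 9 and 2 ps values discussed in the next paragraph:

```python
# Convert rms delay residuals to equivalent phase: delay * frequency = phase cycles.
bands = {"5 GHz": (5.0, 15e-12), "8.4 GHz": (8.4, 9e-12), "43 GHz": (43.185, 2e-12)}
for band, (f_ghz, rms_s) in bands.items():
    phase_deg = rms_s * f_ghz * 1e9 * 360.0      # rms delay expressed in degrees
    cycle_ps = 1e12 / (f_ghz * 1e9)              # one phase cycle in picoseconds
    print(f"{band:>8}: rms {rms_s * 1e12:4.0f} ps = {phase_deg:5.1f} deg "
          f"(1 cycle = {cycle_ps:5.2f} ps)")
```

The script reproduces the figures quoted in the text: about 27°, 27° and 31° at 5, 8.4 and 43 GHz, respectively, and a 23.2 ps cycle at the 43,185 MHz reference frequency, consistent with the ±23 ps scale of Fig. 2.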
At this level of precision, the absence of systematic effects validates both the astrometric model, based on IERS standards, and the propagation medium procedures for mm-wavelength VLBI astrometry (at least for cycle times, source separations, weather conditions, and antenna elevations similar to those presented in this paper). To calibrate the quality of our procedure, we compared the residuals of the differenced phase delays with similar residuals corresponding to observations at 8.4 GHz and 5 GHz made in the past (G95, G98). The rms of the residuals is about 15, 9, and 2 ps for the data sets at 5, 8.4, and 43 GHz, respectively, which, expressed in equivalent phase, yield postfit residuals of roughly 30 degrees at each of the three frequencies. This similarity of the rms expressed in phase at all the observed frequencies demonstrates not only that the phase connection process is feasible at 43 GHz, but also that it is of no less quality than at lower frequencies. Likely, the most important contributors to the scatter of the phases at 43 GHz are the unmodeled variations of the refractivity of the neutral atmosphere. From the average rms of 2 ps of the phase residuals of Fig. 2, and assuming uncorrelated contributions from the antennas forming each interferometric pair, the average uncertainty for the single-site phase delay is $`\sqrt{2}`$ ps. This uncertainty is in good agreement with the predictions of water vapor fluctuations on time scales of 100 seconds ($`\sim `$2 ps) based on refractivity patterns described by Kolmogorov turbulence (Treuhaft & Lanyi 1987). The importance of our result translates to VLBI phase-referencing mapping. This technique (see e.g. Lestrade et al. 1990) relies completely on the behavior of the phase of the reference source (usually a strong radio emitter) to predict the phase of the target source (usually a weak radio emitter). Beasley & Conway (1995) provide useful expressions for the maximum cycle time for phase-referencing with the VLBA. Under good weather conditions, an average antenna elevation of 40°, and with the requirement that the rms phase between scans is $`<`$90°, the maximum cycle time at 43 GHz is $`\sim `$100 s. This estimate should be shortened if atmospheric spatial variations from different lines of sight are considered. Actually, the situation is more favorable. For sources separated by 5° on the sky, high antenna elevations, and good weather conditions, our results show that (i) the rms of the phases is below 90° throughout the experiment and does not seem to be substantially dependent on the cycle times used during our observation (100–300 s); and (ii) the expected average uncertainty in interpolating the phases of one source to the epoch of the other is $`\sim `$30° in the differenced phase. This value is not larger than the usual phase errors in phase-reference mapping at cm-wavelengths (e.g. Lestrade et al. 1990). Therefore, with the proper cycle times and nearby calibrator sources, diffraction-limited VLBI phase-reference images at 43 GHz should be possible. Our observations have shown that VLBI differential astrometry at 43 GHz provides high-precision relative positions. At this frequency, the astrometric precision is nearly equivalent to the resolution of the maps, and the reference point selected in the source structure might be associated with the core. This makes 43 GHz differential astrometry an ideal technique to trace unambiguously the kinematics of the inner regions of extragalactic radio sources. ###### Acknowledgements.
We thank Patrick Charlot for a constructive refereeing of the paper and Walter Alef for his valuable comments. We thank Jon Romney for his efforts during the correlation. This work has been supported by the Spanish DGICYT grant PB96-0782. The National Radio Astronomy Observatory is operated by Associated Universities, Inc., under a cooperative agreement with the National Science Foundation.
no-problem/0002/cond-mat0002137.html
ar5iv
text
# Vortex state in a doped Mott insulator ## I Introduction Nature of the ground state as a function of doping remains one of the recurring unresolved issues in the theory of high-$`T_c`$ cuprate superconductors. The problem is partly due to formidable difficulties related to the theoretical description of doped Mott insulators and partly due to experimental hurdles in accessing the normal state properties in the $`T\to 0`$ limit because of the intervening superconducting order. Probes that suppress superconductivity and reveal the properties of the underlying ground state are therefore of considerable value. So far only pulsed magnetic fields in excess of $`H_{c2}`$ and impurity doping beyond the critical concentration have been used towards this goal. Here we argue that vortex core spectroscopy performed using a scanning tunneling microscope (STM) can provide new insights into the nature of the ground state in cuprates. We analyze the existing experimental data and conclude that they imply a strongly correlated “normal” ground state, presumably derivable from a doped Mott insulator. We then develop a theoretical framework for the problem of tunneling in the vortex state of such a doped Mott insulator. In the vortex core the superconducting order parameter is locally suppressed to zero, and the region within a coherence length $`\xi `$ from its center can, to a first approximation, be thought of as normal. Spectroscopy of the vortex core therefore provides information on the normal state electronic excitation spectrum in the $`T\to 0`$ limit. More accurately, the core spectroscopy reflects the spectrum in the spatially non-uniform situation where the order parameter amplitude rapidly varies in response to the singularity in the phase imposed by the external magnetic field. In order to extract useful information regarding the underlying ground state from such measurements, a detailed understanding of the vortex core physics is necessary. So far the problem has been addressed using the weak coupling approach based on the Bogoliubov-de Gennes theory generalized to the $`d`$-wave symmetry of the order parameter, and semiclassical calculations. The early theoretical debate focused on the existence or absence of the vortex core bound states. This debate, now resolved in favor of the absence of any bound states in the pure $`d_{x^2-y^2}`$ state, has somewhat eclipsed the possibly more important issues related to the nature of the ground state in cuprates. The body of work based on mean field, weak coupling calculations yields results for the local density of states in the vortex core which exhibit two generic features: (i) the coherence peaks (occurring at $`E=\pm \mathrm{\Delta }_0`$ in the bulk) are suppressed, with the spectral weight transferred to (ii) a broad featureless peak centered around zero energy. Here we wish to emphasize the heretofore little appreciated fact that these features are qualitatively inconsistent with the existing experimental data on cuprate superconductors. STM spectroscopy on Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> (BSCCO) at 4.2 K indicates a “pseudogap” spectrum in the vortex core, with the spectral weight from the coherence peaks at $`\pm \mathrm{\Delta }_0\sim 40`$ meV transferred to high energies, and no peak whatsoever around $`E=0`$. Recent high resolution data on the same compound confirmed these findings down to 200 mK and found evidence for weak bound states at $`\pm 7`$ meV.
Experiments on YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> (YBCO) also indicate low energy bound states, but are somewhat more difficult to interpret because of the high zero-bias conductance of unknown origin appearing even in the absence of magnetic field. The fundamental discrepancy between the theoretical predictions and the experimental findings strongly suggests that models based on a simple weak coupling theory break down in the vortex core. The pseudogap observed in the core hints that the underlying ground state revealed by the local suppression of the superconducting order parameter is a doped Mott insulator and not a conventional metal. Taking into account the effects of strong correlations appears to be necessary to consistently describe the physics of the vortex core. Conversely, studying the vortex core physics could provide information essential for understanding the nature of the underlying ground state in cuprates. The first step in this direction was taken by Arovas et al., who proposed that within the framework of the SO(5) theory vortex cores could become antiferromagnetic (AF). They found that such AF cores can be stabilized at low $`T`$, but only in the close vicinity of the bulk AF phase. In contrast, experimentally the pseudogap in the core is found to persist into the overdoped region. More recently, microscopic calculations within the same model revealed electronic excitations in such AF cores with behavior roughly resembling the experimental data. Quantitatively, however, these spectra exhibit asymmetric shifts in the coherence peaks (related to the fact that the spin gap in the AF core is no longer tied to the Fermi level) not observed experimentally. These discrepancies suggest that generically the cores will not exhibit true AF order. Finally, these previous approaches are still of the Hartree-Fock-Bogoliubov type and cannot be expected to properly capture the effects of strong correlations. Here we consider a model for the vortex core based on a version of the U(1) gauge field slave boson theory formulated recently by Lee. Originally proposed by Anderson, the slave boson theory was formulated to describe strongly correlated electrons in the CuO<sub>2</sub> planes of the high-$`T_c`$ cuprates. Various versions of this theory have been extensively discussed in the literature. Interest in spin-charge separated systems revived recently due to the realization that they provide a natural description of the pseudogap phenomenon observed in the underdoped cuprates. The common ingredient in these theories is the “splintering” of the electron into quasiparticles carrying its spin and charge degrees of freedom. Within the theories based on the Hubbard and $`t`$-$`J`$ models this splintering is formally implemented by the decomposition of the electron creation operator $$c_{i\sigma }^{\mathrm{\dagger }}=f_{i\sigma }^{\mathrm{\dagger }}b_i$$ (1) into a fermionic spinon $`f_{i\sigma }`$ and a bosonic holon $`b_i`$. The local constraint of single occupancy, $`b_i^{\mathrm{\dagger }}b_i+f_{i\sigma }^{\mathrm{\dagger }}f_{i\sigma }=1`$, is enforced by a fluctuating U(1) gauge field $`𝐚`$. The mean field phase diagram is known to contain four phases, distinguished by the formation of spinon pairs, $`\mathrm{\Delta }_{ij}=ϵ_{\sigma \sigma ^{\mathrm{}}}f_{i\sigma }^{\mathrm{\dagger }}f_{j\sigma ^{\mathrm{}}}^{\mathrm{\dagger }}`$, and by the Bose-Einstein condensation of the individual holons, $`b=b_i`$, and is illustrated in Figure 1.
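The content of the decomposition (1) and of the single-occupancy constraint can be made explicit on a single site. The sketch below (Python/NumPy) builds the three-state physical subspace and checks that $`c_\sigma ^{\mathrm{\dagger }}=f_\sigma ^{\mathrm{\dagger }}b`$ acts within it; the matrix representations are standard textbook constructions, assumed here for illustration rather than taken from the works cited above.

```python
import numpy as np

# Single-site Fock space: fermionic spinon modes f_up, f_dn and a hard-core holon b.
I2 = np.eye(2)
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilation on a two-level mode
sz = np.diag([1.0, -1.0])                 # Jordan-Wigner string between fermion modes

def kron(*ops):
    out = np.array([[1.0]])
    for o in ops:
        out = np.kron(out, o)
    return out

f_up = kron(sm, I2, I2)
f_dn = kron(sz, sm, I2)                   # string on f_up ensures {f_up, f_dn} = 0
b = kron(I2, I2, sm)                      # boson: no string, commutes with fermions

n_tot = f_up.T @ f_up + f_dn.T @ f_dn + b.T @ b
phys = np.isclose(np.diag(n_tot), 1.0)    # constraint n_b + n_up + n_dn = 1
print("physical states per site:", phys.sum())   # -> 3, as in the t-J model

# electron operator c_sigma^dagger = f_sigma^dagger b stays inside the subspace
for name, f in (("up", f_up), ("dn", f_dn)):
    cdag = f.T @ b
    leak = cdag[~phys][:, phys]           # physical -> unphysical amplitudes
    print(f"c^dag_{name} leaks out of the constraint subspace:",
          bool(np.abs(leak).max() > 1e-12))
```

The three surviving states are the holon (empty site) and the two spinon (singly occupied) states; $`c_\sigma ^{\mathrm{\dagger }}`$ converts the holon into a spinon and annihilates the already-occupied states, exactly the kinematics the gauge field $`𝐚`$ is introduced to enforce.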
The effects of magnetic field on such a spin-charge separated system are most conveniently studied in the framework of an effective Ginzburg-Landau (GL) theory for the condensate fields $`\mathrm{\Delta }`$ and $`b`$. The corresponding effective action can be constructed based on the requirements of local gauge invariance with respect to the physical electromagnetic vector potential $`𝐀`$ and the internal gauge field $`𝐚`$: $`f_{\mathrm{GL}}`$ $`=`$ $`|(\mathrm{\nabla }-2i𝐚)\mathrm{\Delta }|^2+r_\mathrm{\Delta }|\mathrm{\Delta }|^2+{\displaystyle \frac{1}{2}}u_\mathrm{\Delta }|\mathrm{\Delta }|^4`$ (2) $`+`$ $`|(\mathrm{\nabla }-i𝐚-ie𝐀)b|^2+r_b|b|^2+{\displaystyle \frac{1}{2}}u_b|b|^4+v|\mathrm{\Delta }|^2|b|^2`$ (3) $`+`$ $`{\displaystyle \frac{1}{8\pi }}(\mathrm{\nabla }\times 𝐀)^2+f_{\mathrm{gauge}}.`$ (4) The factor of 2 in the spinon gradient term reflects the fact that pairs of spinons were assumed to condense. $`f_{\mathrm{gauge}}`$ describes the dynamics of the internal gauge field $`𝐚`$. We note that unlike the physical electromagnetic field $`𝐀`$, the gauge field $`𝐚`$ has no independent dynamics in the underlying microscopic model, since it serves only to enforce a constraint. Sachdev and Nagaosa and Lee assumed that upon integrating out the microscopic degrees of freedom a term $$f_{\mathrm{gauge}}=\frac{\sigma }{2}(\mathrm{\nabla }\times 𝐚)^2$$ (5) is generated in the free energy. They then analyzed vortex solutions of the free energy (4) and came to the conclusion that two types of vortices are permissible: a “holon vortex” with the singularity in the $`b`$ field and a “spinon vortex” with the singularity in the $`\mathrm{\Delta }`$ field. Because holons carry electric charge $`e`$, the holon vortex is threaded by the electronic flux quantum $`hc/e`$, i.e. twice the conventional superconducting flux quantum $`\mathrm{\Phi }_0=hc/2e`$. Spinons, on the other hand, condense in pairs, and the spinon vortex therefore carries flux $`\mathrm{\Phi }_0`$. Stability analysis then implies that the spinon vortex will be stable over most of the superconducting phase diagram, while the $`hc/e`$ holon vortex can be stabilized only in the close vicinity of the phase boundary on the underdoped side. This is a direct consequence of the fact that singly quantized vortices are always energetically favorable. As far as the electronic excitations are concerned, the spinon vortex is virtually indistinguishable from the vortex in a conventional weak coupling mean field theory: the spin gap $`\mathrm{\Delta }`$, which gives rise to the gap in the electron spectrum, vanishes in the core. Consequently, the vortex state based on the results of the Sachdev-Nagaosa-Lee (SNL) theory does not exhibit the pseudogap in the core and suffers from the same discrepancy with the experimental data as the weak coupling theories based on the conventional Fermi liquid description. Moreover, no evidence exists at present for the stable doubly quantized holon vortices predicted by SNL. What is needed to account for the experimental data is a singly quantized holon vortex, stable over a large portion of the superconducting phase in the phase diagram of Figure 1. In the core of such a holon vortex the spin gap $`\mathrm{\Delta }`$ remains finite and leads naturally to the pseudogap excitation spectrum. In what follows we show that under certain conditions the free energy (4) permits precisely such a solution.
The results of the SNL theory are predicated upon the assumption that the “stiffness” $`\sigma `$ of the gauge field is relatively large and that singular configurations in which $`\mathrm{\nabla }\times 𝐚`$ contains a full flux quantum through an elementary plaquette are prohibited. Consider now a precisely opposite physical situation, allowing unconstrained fluctuations in $`𝐚`$. This amounts to the assumption that the $`f_{\mathrm{gauge}}`$ term (5) can be neglected in (4), i.e. $`\sigma \to 0`$. Physically this corresponds to the “extreme type-I” limit of the GL “superconductor” (4) with respect to fluctuations in $`𝐚`$. Based on Elitzur’s theorem, Nayak recently argued that the exact local U(1) symmetry of the model cannot be broken, implying the absence of the phase stiffness term (5) at all energy scales. Our assumption therefore appears reasonable, and in Section III we shall give a more thorough discussion of the significance of the $`f_{\mathrm{gauge}}`$ term for the vortex solutions of interest here. For the time being we shall assume that $`f_{\mathrm{gauge}}`$ can be neglected and explore the physical consequences of the resulting theory. $`f_{\mathrm{GL}}`$ given by Eq. (4) is quadratic in $`𝐚`$, and with the $`\mathrm{\nabla }\times 𝐚`$ term absent the gauge fluctuations can be trivially integrated out. Within the closely related microscopic model this procedure has been recently implemented by Lee. The resulting effective free energy density reads $`f`$ $`=`$ $`f_{\mathrm{amp}}+{\displaystyle \frac{\rho _\mathrm{\Delta }^2\rho _b^2}{4\rho _\mathrm{\Delta }^2+\rho _b^2}}(\mathrm{\nabla }\varphi -2\mathrm{\nabla }\theta +2e𝐀)^2`$ (6) $`+`$ $`{\displaystyle \frac{1}{8\pi }}(\mathrm{\nabla }\times 𝐀)^2,`$ (7) where we have set $`\mathrm{\Delta }=\rho _\mathrm{\Delta }e^{i\varphi }`$, $`b=\rho _be^{i\theta }`$, and $`f_{\mathrm{amp}}`$ $`=`$ $`(\mathrm{\nabla }\rho _\mathrm{\Delta })^2+r_\mathrm{\Delta }\rho _\mathrm{\Delta }^2+{\displaystyle \frac{1}{2}}u_\mathrm{\Delta }\rho _\mathrm{\Delta }^4`$ (8) $`+`$ $`(\mathrm{\nabla }\rho _b)^2+r_b\rho _b^2+{\displaystyle \frac{1}{2}}u_b\rho _b^4+v\rho _\mathrm{\Delta }^2\rho _b^2`$ (9) is the amplitude piece. The most important feature of the effective free energy (7) is that it no longer depends on the individual phases $`\varphi `$ and $`\theta `$ but only on their particular combination $$\mathrm{\Omega }=\varphi -2\theta .$$ (10) Since the physical superconducting order parameter $`\mathrm{\Psi }=\mathrm{\Delta }^{}b^2=\rho _\mathrm{\Delta }\rho _b^2e^{-i(\varphi -2\theta )}`$, it is reasonable to identify $`\mathrm{\Omega }`$ with the phase of a Cooper pair. Physically, the unconstrained fluctuations of the gauge field in Eq. (4) resulted in a partial restoration of the original electronic degrees of freedom in Eq. (7). In the underlying microscopic model this means that on long length scales spinons and holons are always confined, in agreement with Elitzur’s theorem. On length scales shorter than the confinement length, such as inside the vortex core, spinons and holons can still appear locally decoupled. In the present effective theory this aspect is reflected by the two amplitude degrees of freedom present in (7). More detailed discussion of these issues can be found in the literature. We have thus arrived at an effective theory of a spin-charge separated system containing one phase degree of freedom $`\mathrm{\Omega }`$ and two amplitudes, $`\rho _\mathrm{\Delta }`$ and $`\rho _b`$. Deep in the superconducting phase, where both amplitudes are finite, the physics of (7) will be very similar to that of a conventional GL theory.
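The step leading from (4) to (7), i.e. the Gaussian integration over $`𝐚`$, can be checked symbolically. The following sketch (Python/SymPy) minimizes the phase-gradient part of (4) over one Cartesian component of $`𝐚`$ and recovers the stiffness coefficient of (7); working component by component is legitimate here because $`𝐚`$ enters algebraically once the $`\mathrm{\nabla }\times 𝐚`$ term is dropped.

```python
import sympy as sp

rd, rb, e = sp.symbols('rho_Delta rho_b e', positive=True)
dphi, dth, A, a = sp.symbols('dphi dtheta A a', real=True)

# one Cartesian component of the phase-gradient part of f_GL, Eq. (4):
# rho_D^2 (grad(phi) - 2a)^2 + rho_b^2 (grad(theta) - a - eA)^2
F = rd**2 * (dphi - 2 * a)**2 + rb**2 * (dth - a - e * A)**2

a_star = sp.solve(sp.diff(F, a), a)[0]      # Gaussian integration = minimization
F_min = sp.simplify(F.subs(a, a_star))

# stiffness term of Eq. (7)
target = rd**2 * rb**2 / (4 * rd**2 + rb**2) * (dphi - 2 * dth + 2 * e * A)**2
print(sp.simplify(F_min - target))           # -> 0
```

The vanishing difference confirms both the coefficient $`\rho _\mathrm{\Delta }^2\rho _b^2/(4\rho _\mathrm{\Delta }^2+\rho _b^2)`$ and the fact that only the gauge-invariant combination $`\mathrm{\nabla }\varphi -2\mathrm{\nabla }\theta +2e𝐀`$ survives, which is why the dependence on the individual phases drops out of (7).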
In situations where the superconducting order parameter $`\mathrm{\Psi }`$ is strongly suppressed, such as in the vortex core, near an impurity or a wall, the new theory has an extra degree of richness, associated with the fact that it is sufficient (and generally preferred by the energetics) that only one of the two amplitudes be suppressed. Since the two amplitudes play very different roles in the electronic excitation spectrum, the effective theory (7) will lead to a number of nontrivial effects. To illustrate this, consider what happens in the core of a superconducting vortex. Under the influence of the magnetic field the phase $`\mathrm{\Omega }`$ will develop a singularity such that $`\nabla \mathrm{\Omega }\sim 1/r`$ close to the vortex center. For the free energy to remain finite the amplitude prefactor in the second term of Eq. (7) must vanish for $`r\to 0`$. This is analogous to $`|\mathrm{\Psi }|`$ vanishing in the core of a conventional vortex. In the present case, however, it is sufficient that the product $`\rho _\mathrm{\Delta }\rho _b`$ vanish. Since suppressing either of the two amplitudes costs condensation energy, in general only one amplitude will be driven to zero. Which of the two is suppressed will be determined by the energetics of the amplitude term (9). On general grounds we expect that the state in the vortex core will be the same as the corresponding bulk “normal” state obtained by raising the temperature above $`T_c`$. Thus, very crudely, we expect the holon vortex to be stable in the underdoped region and the spinon vortex in the overdoped region of the phase diagram of Figure 1. An important point by which our approach differs from the SNL theory is that in the present theory both types of vortices carry the same superconducting flux quantum $`\mathrm{\Phi }_0`$ and thus compete on equal footing. This is a direct consequence of our assumption of vanishing phase stiffness $`\sigma `$. In what follows we study in detail the vortex solutions of the free energy (7). Our main objective is to obtain precise estimates of the energies of the two types of vortices as functions of temperature and doping and to deduce the corresponding phase diagram for the state inside the vortex core. We show that for generic parameters in (7) the singly quantized holon vortex with a pseudogap spectrum in the core can be stabilized over a large portion of the superconducting phase, as required by the experimental constraints discussed above. ## II Solution for a single vortex ### A General considerations In order to provide a more quantitative discussion we now adopt some assumptions about the coefficients entering the free energy (7). We assume that $$r_i=\alpha _i(T-T_i),i=b,\mathrm{\Delta },$$ (11) where $`T_i`$ are the corresponding “bare” critical temperatures, which we assume depend on the doping concentration $`x`$ in the following way: $$T_\mathrm{\Delta }=T_0(2x_m-x),T_b=T_0x.$$ (12) Here $`x_m`$ denotes the optimal doping and $`T_0`$ sets the overall temperature scale. We furthermore assume that $`u_i`$ and $`v`$ are all positive and independent of doping and temperature. It is easy to see that such a choice of parameters qualitatively reproduces the bulk phase diagram of cuprates in the $`x`$-$`T`$ plane shown in Figure 1. The effect of the $`v`$-term is to suppress $`T_c`$ from its bare value away from optimal doping. In real systems fluctuations will lead to an additional suppression of $`T_c`$ which we do not consider here.
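For orientation, the doping/temperature parameterization of Eqs. (11)-(12) is simple enough to tabulate directly; a minimal sketch (all default parameter values are illustrative placeholders, not fits, since the model only fixes the functional form):

```python
def gl_coefficients(T, x, T0=1.0, xm=0.2, alpha_D=1.0, alpha_b=1.0):
    """Quadratic GL coefficients r_Delta, r_b of Eqs. (11)-(12)."""
    T_Delta = T0*(2*xm - x)   # bare spinon critical temperature
    T_b = T0*x                # bare holon critical temperature
    return alpha_D*(T - T_Delta), alpha_b*(T - T_b)
```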
In the absence of perturbations the bulk values of the amplitudes are given by $`\overline{\rho }_\mathrm{\Delta }^2`$ $`=`$ $`-(r_\mathrm{\Delta }u_b-r_bv)/D,`$ (13) $`\overline{\rho }_b^2`$ $`=`$ $`-(r_bu_\mathrm{\Delta }-r_\mathrm{\Delta }v)/D,`$ (14) with $`D=u_bu_\mathrm{\Delta }-v^2`$. In analogy with conventional GL theories we may define coherence lengths for the two amplitudes, $`\xi _\mathrm{\Delta }^{-2}`$ $`=`$ $`-(r_\mathrm{\Delta }-r_bv/u_b),`$ (15) $`\xi _b^{-2}`$ $`=`$ $`-(r_b-r_\mathrm{\Delta }v/u_\mathrm{\Delta }),`$ (16) one of which always diverges at $`T_c`$ as $`(T-T_c)^{-1/2}`$. Minimization of the free energy (7) with respect to the vector potential $`𝐀`$ yields the equation $$\nabla \times \nabla \times 𝐀=e\rho _s(\nabla \mathrm{\Omega }-2e𝐀),$$ (17) where $$\rho _s=\frac{4\rho _\mathrm{\Delta }^2\rho _b^2}{4\rho _\mathrm{\Delta }^2+\rho _b^2}$$ (18) is the effective superfluid density. The term in brackets can be identified as twice the conventional superfluid velocity $$𝐯_s=\frac{1}{2}\nabla \mathrm{\Omega }-e𝐀.$$ Making use of Ampère’s law $`4\pi 𝐣=\nabla \times 𝐁`$ we see that Eq. (17) specifies the supercurrent in terms of the superfluid density and velocity: $`𝐣=2e\rho _s𝐯_s`$. Minimization of (7) with respect to $`\mathrm{\Omega }`$ then implies $`\nabla \cdot 𝐣=0`$; the supercurrent is conserved. Minimizing the free energy (7) with respect to the amplitudes results in the pair of coupled GL equations: $$-\nabla ^2\rho _\mathrm{\Delta }+r_\mathrm{\Delta }\rho _\mathrm{\Delta }+u_\mathrm{\Delta }\rho _\mathrm{\Delta }^3+v\rho _b^2\rho _\mathrm{\Delta }+\frac{4\rho _\mathrm{\Delta }\rho _b^4}{(4\rho _\mathrm{\Delta }^2+\rho _b^2)^2}𝐯_s^2=0,$$ (20) $$-\nabla ^2\rho _b+r_b\rho _b+u_b\rho _b^3+v\rho _\mathrm{\Delta }^2\rho _b+\frac{16\rho _\mathrm{\Delta }^4\rho _b}{(4\rho _\mathrm{\Delta }^2+\rho _b^2)^2}𝐯_s^2=0.$$ (21) We are interested in the behavior of the amplitudes in the vicinity of the vortex center. In this region, for a strongly type-II superconductor, we may neglect the vector potential $`𝐀`$ in the superfluid velocity $`𝐯_s`$. In a singly quantized vortex $`\mathrm{\Omega }`$ winds by $`2\pi `$ around the origin, leading to a singularity of the form $`𝐯_s\approx \frac{1}{2}\nabla \mathrm{\Omega }=\widehat{\phi }/2r`$. First, for the holon vortex we assume that $`\rho _b`$ vanishes in the core as some power, $`\rho _b(r)\sim r^\nu `$, while $`\rho _\mathrm{\Delta }(r)\approx \overline{\rho }_\mathrm{\Delta }`$ remains approximately constant. Eq. (21) then becomes $$(\frac{1}{4}-\nu ^2)r^{\nu -2}+(r_b+v\overline{\rho }_\mathrm{\Delta }^2)r^\nu +u_b\overline{\rho }_b^2r^{3\nu }=0,$$ (22) where we have neglected $`\rho _b^2(r)`$ compared to $`4\overline{\rho }_\mathrm{\Delta }^2`$ in the denominator of the last term in Eq. (21). The most singular term in Eq. (22) is the first one and we must demand that the coefficient of $`r^{\nu -2}`$ vanishes. This implies $`\nu =\frac{1}{2}`$. The asymptotic short distance behavior of the holon amplitude can therefore be written as $$\rho _b(r)\approx c_b\overline{\rho }_b\left(\frac{r}{\xi _b}\right)^{1/2},$$ (23) where $`c_b`$ is a constant of order unity which may be determined by the full integration of Eqs. (II A). A similar analysis of Eq. (20) in the vicinity of the spinon vortex yields $$\rho _\mathrm{\Delta }(r)\approx c_\mathrm{\Delta }\overline{\rho }_\mathrm{\Delta }\left(\frac{r}{\xi _\mathrm{\Delta }}\right),$$ (24) with $`\rho _b`$ approximately constant. We notice the different power laws in the holon and spinon results. Operationally this difference arises from the different numerical prefactors of the respective superfluid velocity terms in Eqs. (II A).
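The bulk formulas (13)-(18) are easy to evaluate numerically; a small sketch of our own, with the signs as reconstructed in Eqs. (13)-(14) above and arbitrary illustrative parameter values:

```python
import numpy as np

def bulk_state(r_D, r_b, u_D, u_b, v):
    """Bulk squared amplitudes, Eqs. (13)-(14); physical only where both are > 0."""
    D = u_b*u_D - v**2
    rho_D2 = -(r_D*u_b - r_b*v)/D
    rho_b2 = -(r_b*u_D - r_D*v)/D
    return rho_D2, rho_b2

def superfluid_density(rho_D2, rho_b2):
    """Effective superfluid density of Eq. (18)."""
    return 4.0*rho_D2*rho_b2/(4.0*rho_D2 + rho_b2)

# illustrative numbers deep in the superconducting phase (both r_i < 0)
rho_D2, rho_b2 = bulk_state(r_D=-1.0, r_b=-0.5, u_D=1.0, u_b=1.0, v=0.3)
print(rho_D2, rho_b2, superfluid_density(rho_D2, rho_b2))
```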
Physically, the unusual $`r`$ dependence of the holon amplitude in the core reflects the fact that the field $`b`$ describes a condensate of single holons, each carrying charge $`e`$. A superconducting vortex with the flux quantum $`\mathrm{\Phi }_0`$ represents a magnetic “half-flux” for the holon field, which results in the non-analytic behavior of $`\rho _b(r)`$ at the origin. The singly quantized holon vortex is therefore a peculiar object and we shall discuss it more fully in Section III. Here we note that the physical superconducting order parameter amplitude $`|\mathrm{\Psi }|=\rho _\mathrm{\Delta }\rho _b^2`$ remains analytic in the core of both the spinon and the holon vortex. ### B Holon vs. spinon vortex: the phase diagram We are now in a position to estimate the energies of the two types of vortices and deduce the phase diagram for the “normal” state in the vortex core. To this end we consider a single isolated vortex centered at the origin. The total vortex line energy can be divided into electromagnetic and core contributions. The electromagnetic contribution consists of the energy of the supercurrents and the magnetic field outside the core region. It may be estimated by assuming that the amplitudes $`\rho _\mathrm{\Delta }`$ and $`\rho _b`$ have reached their bulk values $`\overline{\rho }_\mathrm{\Delta }`$ and $`\overline{\rho }_b`$ respectively. Taking the curl of Eq. (17) and noting that $`\nabla \times \nabla \mathrm{\Omega }=2\pi \delta (𝐫)`$ for a singly quantized vortex, we obtain the London equation for the magnetic field $`𝐁=\nabla \times 𝐀`$ of the form $$B-\lambda ^2\nabla ^2B=\mathrm{\Phi }_0\delta (𝐫)$$ (25) where $$\lambda ^{-2}=8\pi e^2\frac{4\overline{\rho }_\mathrm{\Delta }^2\overline{\rho }_b^2}{4\overline{\rho }_\mathrm{\Delta }^2+\overline{\rho }_b^2}.$$ (26) Here $`\lambda `$ has the meaning of the London penetration depth for the effective GL theory (7). Aside from the unusual form of $`\lambda `$, Eq. (25) is identical to the conventional London equation. The corresponding electromagnetic energy is therefore the same for both types of vortices and can be calculated in the usual manner, obtaining $$E_{\mathrm{EM}}\approx \left(\frac{\mathrm{\Phi }_0}{4\pi \lambda }\right)^2\mathrm{ln}\kappa ,$$ (27) with $`\kappa =\lambda /\mathrm{max}(\xi _\mathrm{\Delta },\xi _b)`$ being the generalized GL ratio. To estimate the core contribution to the vortex line energy we assume that one of the amplitudes is suppressed to zero in the core, $$\rho _i(r)=0,r<\xi _i,$$ (28) while the other one stays constant and equal to its bulk value. This is a very crude approximation which we justify below by an exact numerical computation. With these assumptions, the core energy is $$E_{\mathrm{core}}^{(i)}\approx \left(\frac{\mathrm{\Phi }_0}{4\pi \lambda _i}\right)^2,$$ (29) where $`i=\mathrm{\Delta },b`$ for the spinon and holon vortex respectively and $$\lambda _i^{-2}=8\pi e^2\overline{\rho }_i^2.$$ (30) Such a crude approximation overestimates the core energy. A more accurate analysis, which we do not pursue here, allows for a more realistic variation of $`\rho _i(r)`$ in the core and indicates that the value of $`E_{\mathrm{core}}^{(i)}`$ has the same form as Eq. (29) multiplied by a numerical factor $`c_1\approx 0.5`$. Thus, the total energy of the vortex line can be written as $$E^{(i)}=\left(\frac{\mathrm{\Phi }_0}{4\pi \lambda }\right)^2\mathrm{ln}\kappa +c_1\left(\frac{\mathrm{\Phi }_0}{4\pi \lambda _i}\right)^2,$$ (31) where again $`i=\mathrm{\Delta },b`$ for the spinon and holon vortex respectively. Eq.
(31) parallels the Abrikosov expression for the vortex line energy in a conventional GL theory, where $`\lambda `$ and $`\lambda _i`$ are identical and equal to the ordinary London penetration depth. In the vortex state described by the free energy (7) the vortex with the lower energy $`E^{(i)}`$ will be stabilized. Eq. (31) implies that the difference in energy between the two types of vortices comes primarily from the core contribution, as expected on the basis of the physical argument presented above. The condition $`\lambda _\mathrm{\Delta }=\lambda _b`$ marks the transition point between the two solutions. For fixed GL parameters $`T_0`$, $`x_m`$, $`\alpha _i`$, $`u_i`$ and $`v`$ this defines a transition line in the $`x`$-$`T`$ plane. According to (30) the equation for this line is $$\overline{\rho }_\mathrm{\Delta }(x,T)=\overline{\rho }_b(x,T).$$ (32) Using Eqs. (11-14) one can obtain an explicit expression for the transition temperature $`T_g`$ between the two types of vortices as a function of doping, $$T_g(x)=T_0\left[\frac{2x_m-x}{1-\beta }+\frac{x}{1-\beta ^{-1}}\right],$$ (33) with $$\beta =\frac{\alpha _b(u_\mathrm{\Delta }+v)}{\alpha _\mathrm{\Delta }(u_b+v)}.$$ (34) Eq. (33) describes a straight line in the $`x`$-$`T`$ plane, originating at $`[x_m,T_0x_m]`$, i.e. maximal $`T_c`$ at optimum doping, and terminating at $`[2x_m/(1+\beta ),0]`$. Generically, we expect the parameters $`\alpha _i`$ and $`u_i`$ to be comparable in magnitude for the holon and spinon channels. The parameter $`\beta `$ defined in Eq. (34) will therefore be of order unity. The typical situation, for $`\beta =0.77`$, is illustrated in Figure 2. More generally the quartic coefficients $`u_i`$ and $`v`$ could exhibit weak doping and temperature dependences, leading to a curvature of the phase boundary. An appealing feature of the present theory is that the parameter $`\beta `$ may vary from compound to compound. Thus, the experimental fact that in BSCCO the pseudogap in the core persists into the overdoped region is easily accounted for in the present theory. It would be interesting to see if the transition from the holon to the spinon vortex as a function of doping could be observed experimentally. A good candidate for such an observation would be LSCO, where transport measurements in pulsed magnetic fields established a metal-insulator transition around optimal doping, i.e. $`\beta \approx 1`$. The current theory predicts a holon vortex with the pseudogap spectrum in the underdoped (insulating) region and a spinon vortex with a conventional metallic spectrum on the overdoped side. ### C Numerical results In order to put the above analytical estimates on firmer ground we now pursue a numerical computation of the vortex line energy. For simplicity we consider the strongly type-II situation $`(\kappa \gg 1)`$, where the vector potential term in $`𝐯_s`$ can be neglected to an excellent approximation as long as we focus on the behavior close to the core. We are then faced with the task of numerically minimizing the free energy (7) with respect to the two cylindrically symmetric amplitudes $`\rho _\mathrm{\Delta }(r)`$ and $`\rho _b(r)`$. As noted by Sachdev, direct numerical minimization of the free energy (7) provides a more robust solution than the numerical integration of the coupled differential equations (II A). We discretize the free energy functional (7) on a disk of radius $`R\gg \xi _i`$ in the radial coordinate $`r`$ with up to $`N=2000`$ spatial points.
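The energy comparison of Eqs. (26)-(31), the transition line (33), and the radial discretization just described can be condensed into a short numerical sketch. This is our own simplified re-implementation, not the authors' code: units with $`\mathrm{\Phi }_0=1`$, placeholder parameter values, free boundary conditions, and scipy's 'CG' minimizer (a Polak-Ribière-type nonlinear conjugate gradient, anticipating the procedure described next):

```python
import numpy as np
from scipy.optimize import minimize

def vortex_energies(rho_D2, rho_b2, e=1.0, c1=0.5, ln_kappa=3.0):
    """Line energies of spinon and holon vortices, Eq. (31), with Phi_0 = 1."""
    lam2_inv = 8*np.pi*e**2*4*rho_D2*rho_b2/(4*rho_D2 + rho_b2)   # Eq. (26)
    E_em = lam2_inv/(4*np.pi)**2*ln_kappa                         # Eq. (27)
    E = {}
    for name, rho2 in (("spinon", rho_D2), ("holon", rho_b2)):
        lam_i2_inv = 8*np.pi*e**2*rho2                            # Eq. (30)
        E[name] = E_em + c1*lam_i2_inv/(4*np.pi)**2               # Eq. (31)
    return E  # the vortex with the lower energy is the stable one

def T_g(x, T0, xm, beta):
    """Holon/spinon core transition line, Eq. (33)."""
    return T0*((2*xm - x)/(1 - beta) + x/(1 - 1/beta))

def vortex_profiles(r_D, r_b, u_D, u_b, v, R=30.0, N=300):
    """Minimize the radial free energy (7) for a singly quantized vortex,
    with v_s = 1/2r and the vector potential neglected (kappa >> 1)."""
    r = np.linspace(R/N, R, N); dr = r[1] - r[0]
    def F(z):
        pD, pb = z[:N], z[N:]
        f = (np.gradient(pD, dr)**2 + r_D*pD**2 + 0.5*u_D*pD**4
             + np.gradient(pb, dr)**2 + r_b*pb**2 + 0.5*u_b*pb**4
             + v*pD**2*pb**2
             + pD**2*pb**2/(4*pD**2 + pb**2 + 1e-12)/r**2)  # stiffness term
        return 2*np.pi*np.sum(f*r)*dr
    z0 = 0.5*np.concatenate([np.tanh(r), np.tanh(r)])       # trial profiles
    res = minimize(F, z0, method="CG")
    return r, res.x[:N], res.x[N:]
```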
We then employ the Polak-Ribière variant of the conjugate gradient method to minimize this discretized functional with respect to $`\rho _\mathrm{\Delta }(r_j)`$ and $`\rho _b(r_j)`$, initialized to suitable single-vortex trial functions. The procedure converges very rapidly and the results are insensitive to the detailed shape of the trial functions as long as they saturate to the correct bulk values outside the vortex core. Typical results of our numerical computations are displayed in Figure 3 and are in complete agreement with the analytical considerations of the preceding subsections. Note in particular that $`\rho _b(r)`$ in the holon vortex vanishes with infinite slope, consistent with Eq. (23). Plotting $`\rho _b^2(r)`$ confirms that the exponent is indeed $`1/2`$. In the spinon vortex $`\rho _\mathrm{\Delta }(r)`$ is seen to vanish linearly, as expected on the basis of Eq. (24). The nonvanishing order parameter is slightly elevated in the core, reflecting the effective “repulsion” between the two amplitudes contained in the $`v`$-term of the free energy. The results for the spinon vortex are consistent with those of Ref. . We explored a number of other parameter configurations and obtained similar results. We find that Eq. (32) is a good predictor of the transition line between the holon and spinon vortex, although the precise numerical value of the transition temperature $`T_g`$ for a given $`x`$ tends to deviate slightly from the value predicted by Eq. (33). This is illustrated in Figure 2, where we compare the vortex core phase diagrams obtained numerically and from Eq. (33). Interestingly, the deviation always tends to enlarge the holon vortex sector of the phase diagram at the expense of the spinon vortex sector. This is presumably because the sharper $`\sqrt{r}`$ suppression of the holon order parameter in the core costs less condensation energy. ## III Gauge fluctuations and the spectral properties in the core The theory of the vortex core based on the effective action (7) appears to yield results consistent with the STM data on cuprates in that it implies a stable holon vortex solution over a large portion of the superconducting phase diagram. The state inside the core of such a holon vortex is characterized by a vanishing amplitude of the holon condensate field, $`|b|=0`$, and a finite spin gap $`|\mathrm{\Delta }|\approx \mathrm{\Delta }_{\mathrm{bulk}}`$. This is the same state as in the pseudogap region above $`T_c`$. One would thus expect the electronic spectrum in the core to be similar to that found in the normal state of the underdoped cuprates, in agreement with the data. The holon vortex with this property carries the conventional superconducting flux quantum $`\mathrm{\Phi }_0`$, in accord with experiment. This general agreement between theory and experiment would suggest that the effective action (7) provides the sought-for phenomenological description of the vortex core physics in cuprates. In what follows we amplify our argument that it is also tenable in a broader theoretical context, in that it naturally follows from the U(1) slave boson models extensively studied in the classic and more recent high-$`T_c`$ literature. We then provide a more detailed discussion of the vortex core spectra and propose an explanation for the experimentally observed core bound states.
### A Significance of the $`f_{\mathrm{gauge}}`$ term The derivation of the effective action (7) from the more general U(1) action (4) hinges on our assumption that the stiffness $`\sigma `$ of the gauge field $`𝐚`$ is low and that the $`f_{\mathrm{gauge}}`$ term (5) can be neglected. The SNL assumption of large $`\sigma `$ leads to very different vortex solutions, which appear inconsistent with the recent experimental data. We first expand on our discussion of why the $`f_{\mathrm{gauge}}`$ term is important and then argue why it may be permissible to neglect it in realistic models of cuprates. To facilitate the discussion let us rewrite Eq. (4) by resolving the complex matter fields into amplitude and phase components: $`f_{\mathrm{GL}}`$ $`=`$ $`f_{\mathrm{amp}}+\rho _\mathrm{\Delta }^2(\nabla \varphi -2𝐚)^2+\rho _b^2(\nabla \theta -𝐚-e𝐀)^2`$ (35) $`+`$ $`\frac{1}{8\pi }(\nabla \times 𝐀)^2+\frac{\sigma }{2}(\nabla \times 𝐚)^2,`$ (36) with $`f_{\mathrm{amp}}`$ specified by Eq. (9). Now consider a situation in which the sample is subjected to a uniform magnetic field $`𝐁=\nabla \times 𝐀`$. Two scenarios (discussed previously by SNL) appear possible. In the first, the internal gauge field develops no net flux, $`\nabla \times 𝐚=0`$, and the holon phase $`\theta `$ develops singularities in response to $`𝐀`$ such that $$\nabla \times \nabla \theta =2\pi \underset{j}{\sum }\delta (𝐫-𝐫_j),$$ where $`𝐫_j`$ denotes the vortex positions. The holon amplitude $`\rho _b`$ is driven to zero at $`𝐫_j`$, essentially to prevent the free energy from diverging due to the singularity in the phase gradient. Since holons carry charge $`e`$, each vortex is threaded by flux $`hc/e`$, i.e. twice the superconducting flux quantum $`\mathrm{\Phi }_0=hc/2e`$. This solution represents the doubly quantized holon vortex lattice considered by SNL. In the second scenario $`𝐚`$ develops a net flux such that $`𝐚\approx e𝐀`$, which screens out the $`𝐀`$ field in the holon term but produces a net flux $`2e𝐀`$ in the spinon term. In response to this flux, the spinon phase $`\varphi `$ develops singularities such that $$\nabla \times \nabla \varphi =2\pi \underset{j}{\sum }\delta (𝐫-\stackrel{~}{𝐫}_j),$$ corresponding to the spinon vortex lattice. Here $`\stackrel{~}{𝐫}_j`$ denotes vortex positions which will be different from $`𝐫_j`$, since at fixed field $`B`$ there will be twice as many spinon vortices as holon vortices. (Spinon vortices carry the conventional superconducting quantum of flux $`\mathrm{\Phi }_0`$.) In this case $`\rho _\mathrm{\Delta }`$ is driven to zero at the vortex centers. In this scenario one pays a penalty for nucleating the net flux in $`\nabla \times 𝐚`$ due to the last term in Eq. (36). This energy cost can be estimated as $$E_\sigma \approx 8\pi \sigma e^2\left(\frac{\mathrm{\Phi }_0}{4\pi \lambda }\right)^2$$ (37) per vortex. The stiffness $`\sigma `$ must be small enough so that $`E_\sigma `$ is small compared to the vortex energy (31). Taking the dominant $`E_{\mathrm{EM}}`$ term and neglecting $`\mathrm{ln}\kappa `$, this implies that $$\sigma \ll \frac{1}{8\pi e^2},$$ (38) which is the same condition as considered in Ref. . Now consider a third scenario, in which a singly quantized holon vortex emerges. As a starting point consider the spinon vortex solution just described. In the underdoped regime the amplitude piece $`f_{\mathrm{amp}}`$ would favor suppressing the holon amplitude in the core instead of the spinon amplitude, but according to our previous considerations this would ordinarily require the formation of a doubly quantized vortex whose magnetic energy is too large.
However, if the gauge field stiffness $`\sigma `$ is sufficiently small, the system can lower its free energy by setting up singularities in $`𝐚`$ which precisely cancel the singularities in $`\varphi `$ and shift them to the holon term. To arrive at this situation imagine contracting the initially uniform flux $`\nabla \times 𝐚`$ so that it becomes localized in the individual vortex core regions. Taking this procedure to the extreme, i.e. taking the limit $`\sigma \to 0`$, the gauge field will form “flux spikes” of the form $$2(\nabla \times 𝐚)=\nabla \times \nabla \varphi =2\pi \underset{j}{\sum }\delta (𝐫-\stackrel{~}{𝐫}_j),$$ (39) completely localized at the vortex centers. A gauge field of this form indeed completely cancels the singularities in the spinon phase gradient in Eq. (36), and $`\rho _\mathrm{\Delta }`$ is no longer forced to vanish in the core. The singularities now appear in the holon term, but they stem from $`𝐚`$ rather than $`\theta `$, which remains nonsingular. Consequently, $`\rho _b`$ is forced to vanish in the vortex cores. By construction the vortices are located at $`\stackrel{~}{𝐫}_j`$ and are therefore singly quantized. This is the singly quantized holon vortex discussed in the framework of the free energy (7). Based on the above discussion, the singly quantized holon vortex can be thought of as a composite object formed by attaching half a quantum $`(h/2)`$ of the fictitious gauge flux $`\nabla \times 𝐚`$ to the spinon vortex. Within the full compact U(1) theory this is essentially equivalent to the $`Z_2`$ vortex discussed by Wen in the framework of topological orders in spin liquids. In the framework of the free energy (36) one pays a penalty for such a singular solution due to the gauge stiffness term. In the present continuum model this penalty per single vortex is actually infinite, since according to Eq. (39) it involves a spatial integral over $`[\delta (𝐫-\stackrel{~}{𝐫}_j)]^2`$. Thus, in the continuum model singular solutions of this type are prohibited. In reality, however, we have to recall that our effective action (4) descended from a microscopic lattice model for spinons and holons in which the gauge field $`𝐚`$ lives on the nearest-neighbor bonds of the ionic lattice. The ionic lattice constant $`d`$ therefore provides a natural short distance cutoff, and the delta function in Eq. (39) should be interpreted as a flux quantum $`\mathrm{\Phi }_0`$ piercing an elementary plaquette of the lattice. The energy cost per vortex thus becomes finite and is given by $$E_\sigma ^{\prime }\approx \frac{\sigma e^2}{2}\left(\frac{\mathrm{\Phi }_0}{d}\right)^2.$$ (40) Again, for the solution to be stable, $`E_\sigma ^{\prime }`$ must be negligible compared to the vortex energy (31). This implies $$\sigma \ll \frac{1}{8\pi ^2e^2}\left(\frac{d}{\lambda }\right)^2,$$ (41) which is a much more stringent condition than (38), since in cuprates $`d\ll \lambda `$. When condition (41) is satisfied it is permissible to neglect the $`f_{\mathrm{gauge}}`$ term in the effective action (4), and the latter becomes fully equivalent to (7) as far as the vortex solutions are concerned. Eq. (41) gives a precise meaning to the requirement of weak gauge field stiffness stated loosely when deriving the effective action (7). ### B Microscopic considerations As mentioned in the introduction, the gauge field $`𝐚`$ has no dynamics in the original U(1) microscopic model, as it only serves to enforce a constraint on spinons and holons.
The stiffness term (5) in the effective theory was assumed to arise in the process of integrating out the microscopic degrees of freedom. While such a term is certainly permitted by symmetry, assessing its strength $`\sigma `$ is a nontrivial issue, since even deep in the superconducting phase neither holons nor spinons are truly gapped. Thus, in general, integrating out these degrees of freedom may lead to singular and nonlocal interactions between the condensate and the gauge fields. To our knowledge the procedure has not been explicitly performed for the U(1) model, and the precise form or magnitude of the gauge stiffness term is unknown. General considerations suggest that the gauge stiffness term is negligible in the class of models with an exact local U(1) symmetry connecting the phases of holons and spinons. Consider now an intermediate representation of the problem where only the high energy microscopic degrees of freedom have been integrated out. In the presence of a cutoff this is a well-defined procedure even for gapless excitations, as explicitly shown by Kwon and Dorsey for a simple BCS model. The corresponding effective Lagrangian density of the present U(1) model can be written as $`\mathcal{L}_{\mathrm{eff}}`$ $`=`$ $`\frac{\kappa _\mathrm{\Delta }^\mu }{2}(\partial _\mu \varphi -2a_\mu )^2+\frac{\kappa _b^\mu }{2}(\partial _\mu \theta -a_\mu -eA_\mu )^2-f_{\mathrm{amp}}`$ (42) $`+`$ $`(\partial _\mu \varphi -2a_\mu )J_{\mathrm{sp}}^\mu +(\partial _\mu \theta -a_\mu -eA_\mu )J_h^\mu `$ (43) $`+`$ $`\mathcal{L}_{\mathrm{sp}}[\psi _{\mathrm{sp}},\psi _{\mathrm{sp}}^{\dagger };\rho _\mathrm{\Delta }]+\mathcal{L}_h[\psi _h,\psi _h^{\dagger };\rho _b]+\mathcal{L}_{\mathrm{EM}}[A_\mu ].`$ (44) The Greek index $`\mu `$ runs over time and two spatial dimensions, $`\kappa _i^0`$ are the compressibilities of the holon and spinon condensates, while $$\kappa _i^j=2(\rho _i)^2,i=\mathrm{\Delta },b,j=1,2,$$ (45) are the respective phase stiffnesses. $`J_{\mathrm{sp}}^\mu `$ and $`J_h^\mu `$ are the spinon and holon three-currents respectively, and $`\mathcal{L}_{\mathrm{sp}}`$ and $`\mathcal{L}_h`$ are the low-energy effective Lagrangians for the fermionic spinon field $`\psi _{\mathrm{sp}}`$ and the bosonic holon field $`\psi _h`$. $`\mathcal{L}_{\mathrm{EM}}`$ is the Maxwell Lagrangian for the physical electromagnetic field. Thus, $`\mathcal{L}_{\mathrm{eff}}`$ describes an effective low-energy theory of spinons and holons coupled to their respective collective modes and a fluctuating U(1) gauge field. A similar theory has recently been considered by Lee. The precise form of the microscopic Lagrangians $`\mathcal{L}_{\mathrm{sp}}`$ and $`\mathcal{L}_h`$ is not important for our discussion. The salient feature which we exploit here is that only the amplitude of the respective condensate field enters into $`\mathcal{L}_{\mathrm{sp}}`$ and $`\mathcal{L}_h`$. Coupling to the phases and the gauge field is contained entirely in the Doppler shift terms [second line of Eq. (44)]. Such a form of the coupling is largely dictated by the requirements of gauge invariance, and the particular form (44) can be explicitly derived by gauging away the respective phase factors from the $`\psi `$ fields. The gauge field $`a_\mu `$ enters the effective Lagrangian (44) only via two gauge-invariant terms, $`(\partial _\mu \varphi -2a_\mu )`$ and $`(\partial _\mu \theta -a_\mu -eA_\mu )`$, which may be interpreted as the three-velocities of the spinon and holon condensates respectively. Furthermore, the only coupling between holons and spinons arises from $`a_\mu `$.
Therefore, if we now proceed to integrate out the remaining microscopic degrees of freedom from $`\mathcal{L}_{\mathrm{eff}}`$, the two velocity terms will not mix. This consideration suggests that upon integrating out all of the microscopic degrees of freedom, the resulting gauge stiffness term will be of the form $`f_{\mathrm{gauge}}^{\prime }`$ $`=`$ $`\frac{\sigma _\mathrm{\Delta }}{2}[\nabla \times (2𝐚-\nabla \varphi )]^2`$ (46) $`+`$ $`\frac{\sigma _b}{2}[\nabla \times (𝐚+e𝐀-\nabla \theta )]^2.`$ (47) Clearly, such a term is permitted by the gauge symmetry. Furthermore, we note that for smooth (i.e. vortex-free) configurations of the phases the gradient terms will contribute nothing and we recover the gauge term considered in Ref. . In the presence of a vortex in $`\varphi `$ or $`\theta `$ the $`f_{\mathrm{gauge}}^{\prime }`$ term will contribute a formally divergent energy. Regularizing this on the lattice, as discussed above Eq. (40), this energy becomes finite and can be interpreted simply as the energy of the spinon or holon vortex core states, which have been integrated out. In the microscopic theory (44) such an energy would arise upon solving the relevant fermionic or bosonic vortex problem. We stress that, as concluded in the preceding subsection, the main theoretical obstacle to the formation of a singly quantized holon vortex in the original SNL theory was the appearance of a formally divergent contribution in the $`f_{\mathrm{gauge}}`$ term (5). The argument above suggests that $`f_{\mathrm{gauge}}`$ in Eq. (4) should be replaced by Eq. (47), in which such a formally divergent contribution appears for an arbitrary vortex configuration and, upon regularization, has a simple physical interpretation in terms of the energy of the vortex core states. Usage of the physically motivated term (47) in place of (5) therefore removes the bias against the singly quantized holon vortex solution, which appears to be realized in real materials. With (47) any bias between the holon and spinon vortex solutions can result only from the difference between the two stiffness constants $`\sigma _\mathrm{\Delta }`$ and $`\sigma _b`$. It is reasonable on physical grounds to assume that the constants $`\sigma _\mathrm{\Delta }`$ and $`\sigma _b`$ are of similar magnitude. Furthermore, on the basis of Ref. we expect these constants to be negligibly small in the physically relevant models. Consequently we expect that neglecting the $`f_{\mathrm{gauge}}`$ term, as in our derivation of the effective action (7), will result in an accurate determination of the phase diagram for the state in the vortex core. ### C Vortex core states The phenomenological theory based on the effective action (7) does not allow us to address the interesting question of the nature of the fermionic states in the vortex core. To do this we need to consider the microscopic Lagrangian density (44). While a fully self-consistent calculation is likely to be prohibitively difficult, one can obtain qualitative insights by first solving the GL theory (7) as described in Sec. II, and then using the order parameters $`\rho _\mathrm{\Delta }`$ and $`\rho _b`$ as an input to the fermionic and bosonic sectors of the theory specified by Eq. (44). Work on a detailed solution of this type is in progress. Here we wish to point out some interesting features of such a theory and argue that it may indeed exhibit structure in the low-energy spectral density similar to that found experimentally.
It is instructive to integrate out the gauge fluctuations from the Lagrangian (44), as first discussed by Lee. Since $`\mathcal{L}_{\mathrm{eff}}`$ is quadratic in $`a_\mu `$ the integration can be performed explicitly, resulting in a Lagrangian of the form $`\mathcal{L}_{\mathrm{eff}}^{\prime }`$ $`=`$ $`\frac{1}{2}K_\mu (v_s^\mu )^2-f_{\mathrm{amp}}+\mathcal{L}_{\mathrm{EM}}`$ (48) $`-`$ $`\frac{2\kappa _b^\mu }{4\kappa _\mathrm{\Delta }^\mu +\kappa _b^\mu }(v_s^\mu J_{\mathrm{sp}}^\mu )+\frac{4\kappa _\mathrm{\Delta }^\mu }{4\kappa _\mathrm{\Delta }^\mu +\kappa _b^\mu }(v_s^\mu J_h^\mu )`$ (49) $`+`$ $`\mathcal{L}_{\mathrm{sp}}+\mathcal{L}_h-\frac{1}{2}\frac{1}{4\kappa _\mathrm{\Delta }^\mu +\kappa _b^\mu }(2J_{\mathrm{sp}}^\mu +J_h^\mu )^2,`$ (50) where $`K_\mu =4\kappa _\mathrm{\Delta }^\mu \kappa _b^\mu /(4\kappa _\mathrm{\Delta }^\mu +\kappa _b^\mu )`$ and $$v_s^\mu =(\partial _\mu \theta -\frac{1}{2}\partial _\mu \varphi -eA_\mu )$$ (51) is the physical superfluid velocity. The first line reproduces the GL effective action (7) for the condensate fields, the second line describes the Doppler shift coupling of the superfluid velocity to the microscopic currents, and the third line contains the spinon and holon pieces with additional current-current interactions generated by the gauge fluctuations. We now discuss the physical implications of Eq. (50) for the two types of vortices. We focus on static solutions (i.e. we ignore the time dependences of various quantities, e.g. taking $`v_s^0=0`$) of $`\mathcal{L}_{\mathrm{eff}}^{\prime }`$ in the presence of a single isolated vortex. We are interested in the local spectral function of a physical electron. This is given by a convolution in the energy variable of the spinon and holon spectral functions. According to the analysis presented in Ref. , at low temperatures the electron spectral function will be essentially equal to the spinon spectral function. Convolution with the holon spectral function, which is dominated by the sharp coherent peak due to the condensate, merely leads to a small broadening of order $`T`$. In the following we therefore focus on the behavior of spinons in the vicinity of the two types of vortices. By inspecting Eq. (50) it is easy to see that the excitations inside the spinon vortex will be qualitatively very similar to those found in the conventional vortex described by the weak-coupling $`d`$-wave BCS theory. In particular, according to Eq. (24) we have $`\kappa _\mathrm{\Delta }\sim r^2`$ and $`\kappa _b\approx `$ const in the core. Recalling furthermore that $`|𝐯_s|\sim 1/r`$, we observe that the spinon current $`𝐉_{\mathrm{sp}}`$ is coupled to a term that diverges as $`1/r`$ in the core (just as in a conventional vortex), while the holon current $`𝐉_h`$ is coupled to a nonsingular term. Thus, one may conclude that holons remain essentially unperturbed by the phase singularity in the spinon vortex, while the spinons obey the essentially conventional Bogoliubov-de Gennes equations for a $`d`$-wave vortex. In the holon vortex the situation is quite different. According to Eq. (23) we have $`\kappa _b\sim r`$ and $`\kappa _\mathrm{\Delta }\approx `$ const in the core. The spinon current $`𝐉_{\mathrm{sp}}`$ is now coupled to a nonsingular term (the $`1/r`$ divergence in $`v_s`$ is canceled by $`\kappa _b\sim r`$). Therefore, there will be no topological perturbation in the spinon sector and we expect the spinon wavefunctions to be essentially unperturbed by the diverging superfluid velocity.
The spinon spectral density in the core should be qualitatively similar to that far outside the core. This is our basis for expecting a pseudogap-like spectrum in the core of a holon vortex. We now address the possible origin of the experimentally observed vortex core states within the present scenario for a holon vortex. To this end consider the effect of the last term in Eq. (50), which we have ignored so far. Upon expanding the binomial, the temporal component is seen to contain a density-density interaction of the form $`J_{\mathrm{sp}}^0J_h^0`$, where $`J_h^0`$ is the local density of uncondensed holons. Since the holon order parameter vanishes in the core and electric neutrality dictates that the total density of holons must be approximately constant in space, we expect the uncondensed holon density to behave roughly as $`J_h^0(r)=\overline{\rho }_b-\rho _b(r)`$; $`J_h^0(r)`$ will have a spike in the core of a holon vortex. Insofar as $`J_h^0(r)`$ can be viewed as a static potential acting on spinons, the uncondensed holons in the vortex core can be thought of as creating a scattering potential, akin to an impurity embedded in a $`d`$-wave superconductor. In fact, formally the spinon problem is identical to the problem of a fermionic quasiparticle in a $`d`$-wave superconductor in zero field in the presence of a localized impurity potential. It is known that such a problem exhibits a pair of marginally bound impurity states at low energies which result in sharp resonances in the spectral density inside the gap. Such states have been extensively studied theoretically and their existence was recently confirmed experimentally by Pan et al. . We propose here that, within the formalism of Eq. (50), the same mechanism could give rise to the low-energy quasiparticle states in the core of a holon vortex. Such a structure, if indeed confirmed by a microscopic calculation, could explain the spectral features observed experimentally in the vortex cores of cuprate superconductors. ## IV Conclusions Scanning tunneling spectroscopy of the vortex cores affords a unique opportunity for probing the underlying “normal” ground state in cuprate superconductors. The existing experimental data on YBCO and BSCCO strongly suggest that conventional mean-field weak-coupling theories fail to describe the physics of the vortex core. Our main objective was to develop a theoretical framework for understanding these spectra and the nature of the strongly correlated electronic system which emerges once the superconducting order is suppressed. We have shown that the phenomenological model (4), based on a variant of the U(1) gauge field slave boson theory, contains the right physics, provided that the gauge field stiffness is vanishingly small. The latter assumption is consistent with general arguments involving local gauge symmetry. In such a theory the gauge field can be explicitly integrated out, resulting in the effective action (7), which contains one phase degree of freedom representing the phase of a Cooper pair and two amplitude degrees of freedom representing the holon and spinon condensates. Analysis of the effective theory (7) in the presence of a magnetic field establishes the existence of two types of vortices, spinon and holon, with contrasting spectral properties in their core regions. Our holon vortex is singly quantized and therefore differs in a profound way from the doubly quantized holon vortex discussed by SNL.
As indicated in Figure 2, such a singly quantized holon vortex is expected to be stable over a large portion of the phase diagram on the underdoped side. The quasiparticle spectrum in the core of a holon vortex is predicted to exhibit a “pseudogap”, similar to that found in the underdoped normal region above $`T_c`$. This is consistent with the data of Renner et al., who pointed out a remarkable similarity between the vortex core and the normal state spectra in BSCCO. The spinon vortex, on the other hand, should be virtually indistinguishable from the conventional $`d`$-wave BCS vortex and is expected to occur on the overdoped side of the phase diagram. The transition from the insulating holon vortex to the metallic spinon vortex as a function of doping is a concrete, testable prediction of the present theory. The phenomenological theory based on the effective action (7) does not permit an explicit evaluation of the electronic spectral function. To this end we have considered the corresponding microscopic theory (50) and concluded that the holon vortex will indeed exhibit a pseudogap-like spectrum. Such a qualitative analysis furthermore suggests a plausible mechanism for the sharp vortex core states observed in YBCO and BSCCO. We stress that conventional mean-field weak-coupling theories yield neither the pseudogap nor the core states. In the core of a holon vortex such states will arise as a result of spinons scattering off the locally uncondensed holons, in a manner analogous to the quasiparticle resonant states in the vicinity of an impurity in a $`d`$-wave superconductor. The latter conclusion is somewhat speculative and must be confirmed by explicitly solving the fermionic sector of the microscopic theory (50). On a broader theoretical front, the importance of vortex core spectroscopy as a window to the normal state in the $`T\to 0`$ limit lies in its potential to discriminate between various microscopic theories of cuprates. It is reasonable to assume that the observed pseudogap in the vortex core reflects the same physics as the pseudogap observed in the normal state. This means that the mechanism responsible for the pseudogap must be operative on extremely short length scales, of the order of several lattice spacings. The U(1) slave boson theory considered in this work apparently satisfies this requirement. Obtaining the correct vortex core spectral functions could serve as an interesting test for other theoretical approaches describing the physics of the underdoped cuprates. It will be of interest to explore the implications of the effective theories (7) and (50) in other physical situations. Of special interest are situations where the holon condensate amplitude is suppressed, locally or globally, giving rise to “normal” transport properties (vanishing superfluid density) but quasiparticle excitations characteristic of a superconducting state. These include the spectra in the vicinity of an impurity, a twin boundary, or a sample edge. In the latter case one might hope to observe a signature of the zero-bias tunneling peak anomaly (normally seen for certain geometries deep in the superconducting phase in the optimally doped cuprates) even above $`T_c`$ in the underdoped samples. ###### Acknowledgements. The authors are indebted to A. J. Berlinsky, J. C. Davis, Ø. Fischer, C. Kallin, D.-H. Lee, P. A. Lee, S.-H. Pan, C. Renner, J. Ye and S.-C. Zhang for helpful discussions. This research was supported in part by NSF grant DMR-9415549 and by the Aspen Center for Physics where part of the work was done.
Note added in proof. After submission of this manuscript we learned about complementary microscopic treatments of the spin-charge separated state in the vortex core within the U(1) and SU(2) slave boson theories. The former agrees qualitatively with our phenomenological theory. Ref. proposes a new type of vortex which takes advantage of the larger symmetry group SU(2). In a related development, Senthil and Fisher discussed a $`Z_2`$ vortex (which is essentially equivalent to our singly quantized holon vortex) and proposed a “vison detection” experiment based on trapping such a vortex in a hole fabricated in a strongly underdoped superconductor. Here we wish to point out that the experiment will produce the same general outcome in a system described by the U(1) theory, where the role of a vison will be played by a flux quantum of the fictitious gauge field $`𝐚`$.
# The evolution of helium white dwarfs ## 1 Introduction Millisecond pulsars are thought to be components of low-mass binary systems in their final stage of evolution: the neutron star, which has been spun up by accretion of matter from a low-mass evolved companion, is now being slowed down by emission of magnetic dipole radiation (recycled radio pulsar). The companion, after having transferred most of its envelope mass towards the neutron star, remains as a white dwarf of rather low mass whose core consists, in the majority of known cases, of helium. The characteristic age, or so-called spin-down age, of the (recycled) pulsar depends on the physics of how the neutron star’s rotational energy is converted into non-thermal emission of electromagnetic energy. On the other hand, the white-dwarf age is ruled by the white dwarf’s thermo-mechanical structure and the transformation of its gravothermal energy content into thermal emission of photons from the surface. Any age determinations of the pulsar and the dwarf component should give the same answer, provided our physical understanding of the pulsar’s slow-down processes and the white dwarf’s cooling properties is correct. So far, no general consensus on this matter has been achieved. Under the assumption that the cooling properties of low-mass white dwarfs are ruled by rather simple laws, as is known from evolutionary calculations of more massive white dwarfs with carbon-oxygen cores (cf. Iben & Tutukov 1984; Koester & Schönberner 1986; Blöcker 1995), large age differences between the pulsars and their dwarf companions have been found. In general, the white dwarfs appear to be much younger than the pulsars (cf. Hansen & Phinney 1998b for a recent, detailed account). The best-studied example is the PSR J1012+5307 system, for which Lorimer et al. (1995) determined 7 Gyr for the spin-down age of the pulsar, but only about 0.3 Gyr for the white dwarf’s age. Note that the usual spin-down age determinations are based on the assumption that the initial rotational period after completion of the spin-up by accretion is much smaller than the present one, and that the pulsar emits magnetic dipole radiation (braking index $`n=3`$). A summary of the assumptions inherent in the derivation of characteristic or spin-down ages of pulsars is given in Hansen & Phinney (1998b). A discrepant result as found for PSR J1012+5307, if true, would have important consequences for the details of the accretion process and the following spin-down phase (cf. Burderi et al. 1996). A larger sample of millisecond pulsar systems with white-dwarf companions has recently been investigated by Hansen & Phinney (1998b), using a grid of low-mass white-dwarf sequences especially computed for this purpose (Hansen & Phinney 1998a). In most cases spin-down and cooling ages appeared to be discrepant to various degrees, and the authors were able to constrain the initial spin periods and spin-up histories for individual systems, especially also for the PSR J1012+5307 system. However, the white-dwarf models on which this study is based are generated from ad hoc assumed initial configurations. These configurations appear not to be consistent, with respect to their thermo-mechanical structures and unprocessed hydrogen-rich envelopes, with what would be adequate for companions in these pulsar binary systems.
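For concreteness, the spin-down age used throughout (braking index $`n=3`$, negligible initial period) reduces to the characteristic age $`\tau =P/(2\dot{P})`$; a minimal sketch of the unit conversion, as our own illustration:

```python
def characteristic_age_gyr(period_ms, period_derivative):
    """Characteristic (spin-down) age tau = P/(2*Pdot) in Gyr.

    Assumes magnetic dipole braking (n = 3) and an initial spin period
    much shorter than the present one, as discussed above.
    """
    period_s = period_ms*1e-3
    tau_s = period_s/(2.0*period_derivative)   # Pdot is dimensionless (s/s)
    return tau_s/3.156e16                      # 1 Gyr ~ 3.156e16 s
```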
The early investigations of the evolution of helium white dwarfs made by Webbink (1975) indicated that the final cooling is slowed down considerably by ongoing hydrogen burning via the pp cycle. Obviously the cooling behaviour of low-mass white dwarfs depends on the size of the still unprocessed hydrogen-rich envelope, i.e. on whether this envelope is massive enough to sustain burning temperatures at its bottom for a long time span. The Webbink (1975) white-dwarf models are, however, just evolved main-sequence stars without any consideration of mass loss. Since white-dwarf envelope masses cannot be guessed from first principles, they must rather be determined by detailed evolutionary calculations. A step in this direction was made by Alberts et al. (1996) and Sarna et al. (1998), who modelled the PSR J1012+5307 system and in particular the evolution of the mass-giving companion. It turned out that the donor shrinks below its Roche lobe while still having a rather massive hydrogen-rich envelope which is able to keep hydrogen burning dominant even through the white-dwarf cooling phase. The evolution was slowed down to such an extent that the discrepancy with the spin-down age of the pulsar vanished completely. Strictly speaking, the strength of hydrogen burning, and hence the cooling age of an observed white dwarf, depends on the size of the envelope before entering the cooling path. This envelope mass can be reduced because of thermal instabilities of the burning shell when the CNO rate dies out, namely by * enhanced hydrogen consumption during the instability (flash) itself, and by * a possible Roche-lobe overflow driven by the rapid envelope expansion. The latter case was dominant for the evolution of the Iben & Tutukov (1986) 0.3 M helium white-dwarf model: Roche-lobe overflow due to the flash-driven envelope expansions reduced the envelope mass below the critical value necessary for hydrogen burning. The white-dwarf models of Webbink (1975) and Sarna et al. (1998) experienced phases of unstable hydrogen burning for $`M\lesssim 0.2`$ M (but see Driebe et al. 1999 for a discussion). Recently Driebe et al. (1998) published a grid of evolutionary tracks for helium white-dwarf models which were generated by enhanced mass loss applied at different positions along the red-giant branch of a 1 M sequence (see also Iben & Tutukov 1986; Castellani et al. 1994). This method mimics to some extent the mass transfer in binary systems and yields reliable post-red-giant configurations which are very useful for the interpretation of observations. Driebe et al. (1998) covered the whole mass range of interest, and they demonstrated that * the anti-correlation between core mass and envelope size (cf. Blöcker et al. 1997) later determines the nuclear activity along the cooling branch, and that * thermal instabilities of the hydrogen-burning shell appear to be restricted to the mass range of approximately 0.2 to 0.3 M. The absence of thermal flashes below $`M=0.2`$ M agrees well with the results of Alberts et al. (1996) but disagrees with those of Sarna et al. (1998). Nevertheless, the cooling times of our models are in excellent agreement with both studies. From the given parameters of the white-dwarf component in the PSR J1012+5307 system, Driebe et al.
(1998) then determined its age to be $`6\pm 1`$ Gyr, in good agreement with the pulsar’s spin-down age of $`7.0\pm 1.4`$ Gyr (Lorimer et al. 1995). The latest effort towards a better understanding of the combined pulsar-white dwarf systems is that of Burderi et al. (1998). They took the pulsar spin-down ages at face value and concluded that the standard assumption for the white-dwarf cooling (i.e. without nuclear burning) complies with the observations, except for masses below approx. 0.2 M. There are, however, some facts that we would like to point out: Burderi et al. (1998) used data ‘renormalized’ to a standard luminosity of $`10^{-2}`$ L, whereby it remains unclear how ages can be renormalized if the temporal evolution of the systems is not known a priori. Furthermore, they extrapolated existing white-dwarf cooling models into mass regimes where they are no longer valid. Because of its importance we felt the necessity to reconsider the whole issue by utilizing more realistic evolutionary models for low-mass white dwarfs. We will show in the next section that with such models a consistent description of those millisecond pulsar binary systems can be achieved for which sufficiently accurate data are available. ## 2 Pulsar characteristic times and cooling ages of their white-dwarf companions We started with the sample of millisecond pulsar systems used by Burderi et al. (1998, see their Table 1 and our Fig. 1), but made a few changes: the companion mass for PSR J1012+5307 was updated according to Driebe et al. (1998), and the systems PSR J1640+2224 and PSR J0437-4715 were omitted because of too uncertain pulsar ages. For convenience, the relevant data are collected in Table 1, and all the listed systems are shown in Fig. 1, where the characteristic ages of the pulsars are plotted against the possible mass ranges of their (white-dwarf) companions. The last column in Table 1 gives the white-dwarf masses according to the binary evolution calculations of Tauris & Savonije (1999). Within these binary calculations a relation between the system’s orbital period $`P_{\mathrm{orb}}`$ and the white-dwarf mass can be derived (see e.g. Savonije 1987 and Rappaport et al. 1995). For the systems discussed here the $`P_{\mathrm{orb}}`$-$`M_{\mathrm{WD}}`$ relation of Tauris & Savonije (1999) predicts masses which are well within the estimated mass limits (see Table 1). It should be emphasized that the characteristic ages given in Table 1 and plotted in Fig. 1 are based on certain assumptions (see Introduction) which may not be fulfilled in all cases. The corresponding systematic errors are difficult to assess and cannot be accounted for in this study. Also shown in Fig. 1 are (post-red giant) ages of helium white-dwarf models taken from Driebe et al. (1998), supplemented by ages from evolutionary white-dwarf models with carbon-oxygen cores (Blöcker 1995). The ages are given for four effective temperatures so as to simplify the comparison with the observed systems: the range between 4 000 and 20 000 K embraces roughly the estimated effective temperatures of the companions in the systems PSR J0034-0534, PSR J1713+0747, PSR J1012+5307 and PSR B0820+02 (cf. Hansen & Phinney 1998b). Only for the PSR J1012+5307 companion does a rather accurately determined effective temperature exist (8 600 K; van Kerkwijk et al. 1996, Callanan et al. 1998).
Note that the model ages are counted from the beginning of the post red-giant phase, i.e. they also include the contraction towards the white-dwarf regime. For white dwarfs of low mass this contraction from a giant towards a white-dwarf configuration makes up a significant fraction of their ages and must be accounted for in younger systems like PSR B0820+02. The temporal behavior of the models, as shown in Fig. 1, is determined by the following facts: * The compositional differences between the lighter and heavier white dwarfs cause the obvious age break around $`M_{\mathrm{WD}}=0.5\mathrm{M}_{\odot }`$. * Hydrogen burning in the helium white dwarfs is, for a given temperature, responsible for the strong increase in age with decreasing mass. * At high temperatures, the lighter models ($`\lesssim 0.23`$ M) are still in a pre-white-dwarf phase very close to the turn-around point, with rather low (post red-giant) ages. Fig. 1 clearly demonstrates that for an effective temperature of 4 000 K our low-mass white-dwarf models exceed ages of 10 Gyr, roughly consistent with the pulsar characteristic ages. From the positions of individual pulsars (with error bars) we can estimate the effective temperature and gravity ranges, and also the internal composition, to be expected for the white-dwarf companions. The results are collected in Table 2. Since for some of the white dwarfs temperature estimates based on photometry are available (Hansen & Phinney 1998b), consistency checks are possible. ### 2.1 PSR J1012+5307 The white dwarf in PSR J1012+5307 is so far the best-studied companion of all known systems (van Kerkwijk et al. 1996; Callanan et al. 1998). With its surface parameters known, and together with our evolutionary white-dwarf models, a consistent description of the whole system is found (Fig. 2, upper two panels). Since the mass ratio of both components is known, the system’s inclination can be determined as well. The course of the inclination angle with white-dwarf companion mass is shown in the lower panel of Fig. 2 for two limiting pulsar masses. The inclination can be expected to lie between 40 and 50 degrees. ### 2.2 PSR B0820+02 The mass of the companion is rather ill-defined, and it could have a helium or carbon/oxygen core (cf. Table 2). However, the temperature estimate, $`\mathrm{15\hspace{0.17em}250}\pm 250`$ K (Hansen & Phinney 1998b), allows only a C/O white dwarf with at least $`\sim `$ 0.5 M and a rather high gravity ($`\mathrm{log}g\approx 8`$), in agreement with the results of Tauris & Savonije (1999) (cf. Table 1). ### 2.3 PSR B1855+09 The temperature of the companion has recently been photometrically determined by van Kerkwijk et al. (2000) to be $`T_{\mathrm{eff}}=\mathrm{4\hspace{0.17em}800}\pm 800`$ K. Given its accurately known mass of $`0.258_{-0.016}^{+0.028}`$ M from the measured Shapiro delay of the pulsar timing (Kaspi et al. 1994), this low temperature corresponds to a cooling age of 10 Gyr using our models and 3 Gyr using the models of Hansen & Phinney (1998a) with their smaller envelope masses ($`3\times 10^{-4}`$ M). The characteristic age of the pulsar is, however, 5 Gyr (cf. Table 1), a fact which on the one hand might be a hint for a smaller braking index of the pulsar, as discussed by van Kerkwijk et al. (2000). On the other hand this result underlines how strongly the age depends on the thickness of the hydrogen envelope.
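As an aside to the inclination determination quoted in Sect. 2.1: given the timing mass function and the two component masses, the inclination follows directly. A minimal sketch, as our own illustration; the mass-function value must be taken from the published timing solution, which we do not reproduce here:

```python
import math

def inclination_deg(f_mass, m_wd, m_psr):
    """Orbital inclination from the pulsar mass function,
    f = (m_wd*sin i)**3/(m_psr + m_wd)**2, all masses in solar units."""
    sin_i = (f_mass*(m_psr + m_wd)**2)**(1.0/3.0)/m_wd
    return math.degrees(math.asin(sin_i))
```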
Returning to the envelope thickness: according to our 0.259 M model, hydrogen burning ceases at about 10 Gyr, leaving a final unprocessed envelope of $`5\times 10^{-4}`$ M (the mass of the chemically homogeneous envelope is defined as the total mass of the layers above the hydrogen-exhausted core). The envelope mass right after the last shell flash is $`2\times 10^{-3}`$ M, a value which is, however, subject to uncertainties like flash strength and metallicity. In additional calculations we artificially reduced this envelope mass by invoking mass loss on the upper cooling branch and followed the evolution of the models in the usual manner. It turned out that an earlier reduction of the envelope mass to $`5\times 10^{-4}\mathrm{M}_{\odot }`$ after $`0.5`$ Gyr (at $`T_{\mathrm{eff}}\approx 10^4\mathrm{K}`$) would be sufficient to give a cooling age of 5 Gyr at the desired effective temperature of 4800 K. We note that with this reduced envelope mass, hydrogen burning becomes insignificant below $`T_{\mathrm{eff}}\approx 10^4\mathrm{K}`$. ### 2.4 PSR J0034-0534 Hansen & Phinney (1998b) estimated a very low temperature limit of $`T_{\mathrm{eff}}<\mathrm{3\hspace{0.17em}500}`$ K for the white dwarf, yielding ages of more than 10 Gyr if helium models are used, which is to be compared with the pulsar’s characteristic age of $`6.8\pm 2.4`$ Gyr (Fig. 1). Consistency between the pulsar’s and the white dwarf’s age can only be achieved if we assume the white dwarf to have a carbon/oxygen core and a mass of $`\approx 0.5`$ M. Such a model cools considerably faster and reaches 3 500 K well within about 6 Gyr. We note that this mass is noticeably larger than the estimates of other studies. The $`P_{\mathrm{orb}}`$-$`M_{\mathrm{WD}}`$ relation of Tauris & Savonije (1999) gives $`M\approx 0.21\mathrm{M}_{\odot }`$, and the photometric measurements of Lundgren et al. (1996a) $`M\approx 0.23\mathrm{M}_{\odot }`$. The cooling models of Hansen & Phinney (1998b) predict $`M_{\mathrm{WD}}\approx 0.32\mathrm{M}_{\odot }`$ as an upper limit. The discrepancy between these results and our mass estimate (C/O white dwarf) might be related to the same problem as in the case of PSR B1855+09: if the companion is a helium white dwarf with $`M\approx 0.2\ldots 0.3\mathrm{M}_{\odot }`$, it is prone to hydrogen shell flashes with the corresponding uncertainties. ### 2.5 PSR 1713+0747 Hansen & Phinney (1998b) give a white-dwarf effective temperature of $`\mathrm{3\hspace{0.17em}400}\pm 300`$ K, which is just slightly lower than the one predicted by our helium models (see Table 2). The white-dwarf mass is not very sensitive to the temperature value and is consistent with the estimate of $`M_{\mathrm{WD}}\approx 0.33\mathrm{M}_{\odot }`$ from Tauris & Savonije (1999). ### 2.6 Other millisecond pulsar systems In addition to the sample discussed above we investigated some other MSP systems with respect to the possible $`(g,T_{\mathrm{eff}})`$ combinations for the white-dwarf components (see Table 3). The results are illustrated in Fig. 3, where effective temperature and surface gravity are plotted as a function of the white-dwarf mass for the systems listed in Table 3. The white-dwarf masses can be estimated from the $`P_{\mathrm{orb}}`$-$`M_{\mathrm{WD}}`$ relation of Tauris & Savonije (1999). Taking these masses, it is straightforward to determine the effective temperatures and surface gravities of the white-dwarf companions (see Table 3). We find the systems in the first row of Fig. 3 (J1640+2224, J0751+1807, J1045-4509) to be consistent with He-WD companions if $`T_{\mathrm{eff}}\approx \mathrm{4\hspace{0.17em}000}`$ K.
At this temperature of about 4 000 K our helium models reach cooling ages comparable with the age of the galactic disk ($`{\sim}10`$ Gyr). For the white dwarf in J1640+2224 a temperature estimate is available: $`\mathrm{4\hspace{0.17em}500}\pm 1100`$ K (Hansen & Phinney 1998b ). This value implies, for 10 Gyr, a white-dwarf mass of $`0.35\pm 0.05`$ $`\mathrm{M}_{\odot }`$ with a surface gravity of $`\mathrm{log}g\approx 7.6`$ (cf. Fig. 3). This mass is in good agreement with the result of Tauris & Savonije (TS99 (1999)) ($`M_{\mathrm{WD}}^{\mathrm{TS99}}\approx 0.37\mathrm{M}_{\odot }`$). For the other systems the temperature ambiguity (more than one white-dwarf mass for a given temperature, see Fig. 3) does not allow us to exclude that the companions are C/O white dwarfs, but the mass values of Tauris & Savonije (TS99 (1999)) indicate that all these systems should contain helium white dwarfs. ## 3 Conclusions From the present paper, together with previous efforts which concentrated solely on the PSR J1012+5307 system (Alberts et al. ASH96 (1996); Sarna et al. SAAM98 (1998); Driebe et al. DSBH98 (1998)), it becomes obvious that a consistent description of the millisecond binary systems with compact companions can only be achieved by using evolutionary model calculations of white dwarfs which include their complete pre-white-dwarf history. The key to the solution of the apparent age paradox between the pulsar and its white-dwarf companion is the fact that low-mass white dwarfs have massive, still unburnt envelopes that sustain hydrogen burning at their bases for a long time. Hydrogen burning slows down the cooling of a low-mass white dwarf to such an extent that cooling ages become comparable to, or may even exceed, observed pulsar spin-down ages. Employing our evolutionary helium white-dwarf models, supplemented by those with carbon-oxygen cores, we demonstrated that, in addition to the already studied PSR J1012+5307 system, reasonable agreement between the components’ ages is also achieved in other millisecond pulsar binary systems with reliable information on the pulsar age and on companion properties such as mass and temperature. The use of white-dwarf models with ad hoc assumed envelope masses may lead to erroneous interpretations, since these envelope masses are usually much smaller than those which follow from complete evolutionary calculations. It appears to us that, when realistic white-dwarf models are used in interpreting millisecond binary systems, there is no need to modify existing ideas about the spin-down process of pulsars. Clearly a larger sample of well-studied systems like PSR J1012+5307 would be very important for investigating more precisely the cooling theory of white dwarfs and the braking of radio pulsars. ###### Acknowledgements. We would like to thank Marten van Kerkwijk and Gerrit Savonije for helpful discussions and comments.
no-problem/0002/math0002197.html
ar5iv
text
## 1 Introduction and result

In the present paper we apply the Lie method of studying PDE symmetries to a special but geometrically important class of holomorphic completely overdetermined second order PDE systems, i.e. systems of the form

$`(𝒮):u_{x_ix_j}^k=F_{ij}^k(x,u,u_x),k=1,\mathrm{},m,i,j=1,\mathrm{},n`$ (1)

where $`x=(x_1,\mathrm{},x_n)`$ are the independent variables, $`u(x)=(u^1(x),\mathrm{},u^m(x))`$ are unknown holomorphic functions (the dependent variables), $`u_x=(u_{x_i}^j)`$, and the $`F_{ij}^k=F_{ji}^k`$ are holomorphic functions. Denote by $`J_{n,m}^r`$ the $`r`$-jet space of $`r`$-jets of holomorphic maps from $`\mathrm{I}\mathrm{C}^n`$ to $`\mathrm{I}\mathrm{C}^m`$. Set $`u^{(1)}=(u_1^1,\mathrm{},u_n^1,\mathrm{},u_1^m,\mathrm{},u_n^m)`$, …, $`u^{(s)}=(u_\tau ^j)`$ with $`j=1,\mathrm{},m`$, $`\tau =(\tau _1,\mathrm{},\tau _s)`$, $`\tau _1\leq \tau _2\leq \mathrm{}\leq \tau _s`$, and use $`(x,u,u^{(1)},\mathrm{},u^{(r)})`$ as the natural coordinates on the jet space: for every function $`u=f(x):\mathrm{I}\mathrm{C}^n\to \mathrm{I}\mathrm{C}^m`$ holomorphic near a point $`p`$, the natural coordinates of the corresponding $`r`$-jet $`j_p^r(f)\in J_{n,m}^r`$ defined by $`f`$ at $`p`$ are $`x_j=p_j`$, $`u^k=f^k(p)`$, $`u_{\tau _1\mathrm{}\tau _s}^k=\frac{\partial ^sf^k(p)}{\partial x_{\tau _1}\mathrm{}\partial x_{\tau _s}}`$. We say that the system (1) is involutive if the differential forms $`du_i^k-\sum _jF_{ij}^k(x,u,u^{(1)})dx_j`$, $`du^k-\sum _iu_i^kdx_i`$ define a completely integrable distribution on the tangent bundle $`T(J_{n,m}^1)`$, i.e. satisfy the Frobenius involutivity condition. Solutions of such a system are holomorphic vector valued functions $`u=u(x)`$; denote by $`\mathrm{\Gamma }_u`$ the graph of a solution $`u`$. A symmetry group $`Sym(𝒮)`$ of a system is a local complex transformation group $`G`$ acting on a domain in the space $`\mathrm{I}\mathrm{C}_x^n\times \mathrm{I}\mathrm{C}_u^m`$ of independent and dependent variables with the following property: for every solution $`u(x)`$ of $`(𝒮)`$ and every $`g\in G`$ such that the image $`g(\mathrm{\Gamma }_u)`$ is defined, this image is the graph of a solution of $`(𝒮)`$. Sometimes the largest symmetry group of $`(𝒮)`$ is of main interest (and so we write the symmetry group); for us this is not very essential, since our methods give a description of any symmetry group of a given system. A holomorphic vector field

$`X=\sum _j\theta _j(x,u)\frac{\partial }{\partial x_j}+\sum _\mu \eta ^\mu (x,u)\frac{\partial }{\partial u^\mu }`$ (2)

generating a complex one-parameter group of symmetries of a system of PDE $`(𝒮)`$ is called an infinitesimal symmetry of this system. All these fields form a complex Lie algebra with respect to the Lie bracket, which is denoted by $`Lie(𝒮)`$. Using the notation $`w=(x,u)`$, we fix a point $`(x,u)`$ and set

$`\alpha _j(x,u)=(\theta _{j_{w_1}}(x,u),\mathrm{},\theta _{j_{w_{n+m}}}(x,u)),\alpha (x,u)=(\alpha _1,\mathrm{},\alpha _n),`$ $`\beta ^k(x,u)=(\eta _{w_1}^k(x,u),\mathrm{},\eta _{w_{n+m}}^k(x,u)),\beta (x,u)=(\beta ^1,\mathrm{},\beta ^m),`$ $`\gamma (x,u)=(\theta _{1_{x_1w_1}}(x,u),\mathrm{},\theta _{1_{x_1w_{n+m}}}(x,u)),`$ $`\delta (x,u)=(\eta ^1(x,u),\mathrm{},\eta ^m(x,u)),`$ $`\epsilon (x,u)=(\theta _1(x,u),\mathrm{},\theta _n(x,u)).`$

We call the vector $`\omega (x,u)=(\alpha (x,u),\beta (x,u),\gamma (x,u),\delta (x,u),\epsilon (x,u))`$ of $`\mathrm{I}\mathrm{C}^{(n+m+2)(n+m)}`$ the initial data of an infinitesimal symmetry $`X`$ of the form (2) at the point $`(x,u)`$.
Our main result is the following ###### Theorem 1.1 Let $`(𝒮)`$ be a holomorphic completely overdetermined second order involutive system with $`n`$ independent and $`m`$ dependent variables. Then the Taylor expansions of the coefficients of any infinitesimal symmetry $`X\in Lie(𝒮)`$ at a fixed point $`(x,u)`$ are uniquely determined by the initial data $`\omega (x,u)`$; that is, the linear map $`Lie(𝒮)\to \mathrm{I}\mathrm{C}^{(n+m+2)(n+m)}`$ defined by $`X\mapsto \omega (x,u)`$ is injective. In the special case $`n=1`$, $`m=1`$, i.e. in the case of an ordinary second order differential equation, this result was obtained by A. Tresse (a student of S. Lie) in 1896: he proved that the symmetry group of a second order differential equation is a Lie group of dimension $`\leq 8`$ (as was observed by B. Segre, this implies that the automorphism group of a real analytic Levi nondegenerate hypersurface in $`\mathrm{I}\mathrm{C}^2`$ is a finite dimensional real Lie group). A very clear proof of the same fact is contained in the well-known paper of L.E. Dickson (another former student of S. Lie), inspired by the lectures of S. Lie at the end of the XIX century. So it is quite possible that the basic idea goes back to S. Lie himself. Our proof is based on a direct generalization of the Lie–Tresse–Dickson method. We point out that this method is quite elementary and constructive and gives an efficient recursive algorithm for the determination of the Taylor expansions of the coefficients of an infinitesimal symmetry at a given point. The symmetry group of $`(𝒮)`$ can then be parametrized by the exponential map (in a suitable neighborhood of the identity). If a point $`(x,u)`$ is fixed, the components of the initial data $`\omega (x,u)`$ are local parameters for the symmetry group. The number of these parameters is equal to $`(n+m+2)(n+m)`$; however, in general they are not independent. In our proof of the theorem we consider only those equations on the coefficients of an infinitesimal symmetry which are necessary in order to conclude. In general, additional equations can occur. So in the general case the parameters may satisfy some additional relations, and the actual dimension of a symmetry group may be smaller than $`(n+m+2)(n+m)`$. ###### Corollary 1.2 A symmetry group of a holomorphic completely overdetermined involutive second order system with $`n`$ independent and $`m`$ dependent variables is a local complex Lie transformation group of dimension at most $`(n+m+2)(n+m)`$. We will show that this estimate is precise. In the special case $`m=1`$ this last result can also be deduced from Chern’s solution of the equivalence problem for completely overdetermined involutive second order systems with one dependent variable . In the next section we consider applications of theorem 1.1 to CR geometry. ## 2 Segre varieties, holomorphic maps and PDE symmetries Let $`\mathrm{\Gamma }`$ be a real analytic Levi nondegenerate hypersurface in $`\mathrm{I}\mathrm{C}^{n+1}`$ and let $`Aut(\mathrm{\Gamma })`$ denote its biholomorphism group (all our considerations are local). Denote by $`Z=(z,w)\in \mathrm{I}\mathrm{C}^n\times \mathrm{I}\mathrm{C}`$ the standard coordinates in $`\mathrm{I}\mathrm{C}^{n+1}`$. For a fixed point $`\zeta \in \mathrm{I}\mathrm{C}^{n+1}`$ close enough to $`\mathrm{\Gamma }`$ consider the complex hypersurface $`Q(\zeta )=\{Z:r(Z,\zeta )=0\}`$. It is called the Segre variety.
The basic property of the Segre varieties is their biholomorphic invariance: for every automorphism $`f\in Aut(\mathrm{\Gamma })`$ and any $`\zeta `$ one has $`f(Q(\zeta ))=Q(\overline{f}(\overline{\zeta }))`$. B. Segre observed that for $`n=1`$, i.e. in $`\mathrm{I}\mathrm{C}^2`$, the set of Segre varieties of $`\mathrm{\Gamma }`$ (which is called the Segre family of $`\mathrm{\Gamma }`$) is a regular two-parameter family of holomorphic curves and so represents the trajectories of solutions of a holomorphic second order ordinary differential equation. This important observation can be generalized as follows. After a biholomorphic change of coordinates in a neighborhood of the origin, $`\mathrm{\Gamma }`$ is given by an equation $`\{w+\overline{w}+\sum _{j=1}^n\epsilon _jz_j\overline{z}_j+R(Z,\overline{Z})=0\}`$ where $`\epsilon _j=1`$ or $`-1`$ and $`R=o(|Z|^2)`$. For every point $`\zeta `$ the corresponding Segre variety is given by $`w+\zeta _{n+1}+\sum _{j=1}^n\epsilon _jz_j\zeta _j+R(Z,\zeta )=0`$. If we consider the variables $`x_j=z_j`$ as the independent ones and the variable $`w=u(x)`$ as the dependent one, then this equation can be rewritten in the form

$`u+\zeta _{n+1}+\sum _{j=1}^n\epsilon _jx_j\zeta _j+R((x,u),\zeta )=0`$ (3)

Taking the derivatives in $`x_k`$ we obtain the equations

$`u_{x_k}+\epsilon _k\zeta _k+R_{x_k}(x,u,\zeta )+R_u(x,u,\zeta )u_{x_k}=0,k=1,\mathrm{},n`$ (4)

The equations (3), (4) and the implicit function theorem imply that $`\zeta `$ is an analytic function $`\zeta =\zeta (x,u,u_{x_1},\mathrm{},u_{x_n})`$; taking the partial derivatives in $`x_j`$ in (4) once more and using the obtained expression for $`\zeta `$ in order to eliminate it from the equations, we obtain that every $`u`$ given by (3) is a solution of a completely overdetermined second order holomorphic PDE system $`(𝒮_\mathrm{\Gamma })`$ of the form (1). This system is necessarily involutive, since the family of its solutions (3) defines a completely integrable distribution on the tangent space $`T(J_{n,1}^1)`$. The property of biholomorphic invariance of the Segre varieties means that any biholomorphism of $`\mathrm{\Gamma }`$ transforms the graph of a solution of $`(𝒮_\mathrm{\Gamma })`$ to the graph of another solution, i.e. is a Lie symmetry of $`(𝒮_\mathrm{\Gamma })`$. So the study of $`Aut(\mathrm{\Gamma })`$ can be reduced, as a very special case, to the general problem of studying Lie symmetries of holomorphic completely overdetermined second order involutive systems. We emphasize that the systems describing the Segre families form a very special subclass of the class of holomorphic completely overdetermined second order involutive systems with one dependent variable, so theorem 1.1 generalizes known results in several directions. Indeed, since $`Sym(𝒮_\mathrm{\Gamma })`$ is a complex Lie transformation group in view of our theorem and $`Aut(\mathrm{\Gamma })`$ is its closed subgroup, we conclude that the latter is a local real Lie transformation subgroup of $`Sym(𝒮_\mathrm{\Gamma })`$. Hence theorem 1.1 (which we apply in the special case of one dependent variable) gives an upper estimate for its dimension: the real dimension of $`Aut(\mathrm{\Gamma })`$ is bounded by $`2dimSym(𝒮_\mathrm{\Gamma })`$, in particular, by $`2(n^2+4n+3)`$.
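For the simplest model, the sphere $`w+\overline{w}+z\overline{z}=0`$ in $`\mathrm{I}\mathrm{C}^2`$ (i.e. $`n=1`$, $`R=0`$), the elimination of the parameters described above can be carried out mechanically; a small sympy sketch, where $`A,B`$ stand for the conjugated coordinates of the point $`\zeta `$:

```python
import sympy as sp

x = sp.symbols('x')
A, B = sp.symbols('A B')     # conjugated coordinates of the point zeta
u = sp.Function('u')

# Segre family of w + conj(w) + z*conj(z) = 0:  u(x) + B + A*x = 0
F = u(x) + B + A*x

A_val = sp.solve(sp.diff(F, x), A)[0]      # A = -u'(x)
B_val = sp.solve(F.subs(A, A_val), B)[0]   # B =  x*u'(x) - u(x)

# With the parameters eliminated, differentiating twice yields the
# defining equation of the family:
print(sp.diff(F, x, 2).subs({A: A_val, B: B_val}))   # u''(x)
```

Every member of this two-parameter family of lines satisfies $`u_{xx}=0`$, so here $`(𝒮_\mathrm{\Gamma })`$ is simply $`u_{xx}=0`$.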
In order to improve this estimate, we recall the following useful observation due to E. Cartan . Let a holomorphic vector field $`X`$ generate a local real one-dimensional subgroup of $`Aut(\mathrm{\Gamma })`$. This is equivalent to the fact that $`ReX`$ is a vector field tangent to $`\mathrm{\Gamma }`$. Since this subgroup is a real one-parameter subgroup of $`Sym(𝒮_\mathrm{\Gamma })`$, we necessarily have $`X\in Lie(𝒮_\mathrm{\Gamma })`$. $`\mathrm{\Gamma }`$ is Levi nondegenerate, so the field $`Re(iX)`$ cannot be tangent to $`\mathrm{\Gamma }`$ simultaneously with $`ReX`$, i.e. $`Lie(\mathrm{\Gamma })`$ is a totally real subspace of $`Lie(𝒮_\mathrm{\Gamma })`$. Therefore, the real dimension of $`Aut(\mathrm{\Gamma })`$ is bounded by the complex dimension of $`Lie(𝒮_\mathrm{\Gamma })`$. In particular, it does not exceed $`n^2+4n+3`$. The example of the sphere shows that this estimate is precise. We obtain the following ###### Corollary 2.1 $`Aut(\mathrm{\Gamma })`$ is a local real Lie transformation subgroup of $`Sym(𝒮_\mathrm{\Gamma })`$ and the dimension of $`Aut(\mathrm{\Gamma })`$ is bounded by the complex dimension of $`Sym(𝒮_\mathrm{\Gamma })`$. In particular, it is always bounded by $`n^2+4n+3`$. Moreover, every infinitesimal automorphism of $`\mathrm{\Gamma }`$ is uniquely determined by its second order Taylor development at a fixed point. Thus, we recover here, in infinitesimal form, some classical results of N. Tanaka and of S.S. Chern – J. Moser . ## 3 Lie method and proof of the main theorem Let $`G`$ be a local group of biholomorphic transformations acting on a domain in $`\mathrm{I}\mathrm{C}^n\times \mathrm{I}\mathrm{C}^m`$. Every biholomorphism $`g\in G`$, $`g:(x,u)\mapsto (x^{},u^{})`$, close enough to the identity lifts canonically to a fiber preserving biholomorphism $`g^{(r)}:J_{n,m}^r\to J_{n,m}^r`$ as follows: if $`u=f(x)`$ is a holomorphic function near $`p`$, $`q=f(p)`$, and $`u^{}=f^{}(x^{})`$ is its image under $`g`$ (i.e. the graph of $`f^{}`$ is the image of the graph of $`f`$ under $`g`$ near the point $`(p^{},q^{})=g(p,q)`$), then the jet $`j_{p^{}}^r(f^{})`$ is by definition the image of $`j_p^r(f)`$ under $`g^{(r)}`$. In particular, a one-parameter local Lie group of transformations $`G`$ canonically lifts to $`J_{n,m}^r`$ as a one-parameter Lie group of transformations $`G^{(r)}`$, which is called the $`r`$-prolongation of $`G`$. The infinitesimal generator $`X^{(r)}`$ of $`G^{(r)}`$ is called the $`r`$-prolongation of the infinitesimal generator $`X`$ of $`G`$. Let $`(𝒮)`$ be a holomorphic PDE system of order $`r`$ with $`n`$ independent and $`m`$ dependent variables. Then it naturally defines a complex subvariety $`(𝒮)^{(r)}`$ in the jet space $`J_{n,m}^{(r)}`$, obtained by replacing the derivatives of the dependent variables by the corresponding coordinates in the jet space. So $`u`$ is a holomorphic solution of the system $`(𝒮)`$ if and only if the section $`(p,u(p),j_p^r(u))`$ of the holomorphic fibre bundle $`\pi _n:J_{n,m}^r\to \mathrm{I}\mathrm{C}^n`$ (with the natural projection $`\pi _n`$) is contained in the variety $`(𝒮)^{(r)}`$. A key proposition of the Lie theory states that if the $`r`$-prolongation $`X^{(r)}`$ of a vector field $`X`$ is a tangent field to $`(𝒮)^{(r)}`$, then $`X`$ is an infinitesimal symmetry of $`(𝒮)`$ (Theorem 2.31). For the system (1) one has $`(𝒮)^{(2)}:u_{ij}^k=\widehat{F}_{ij}^k(x,u,u^{(1)})`$. Here the $`\widehat{F}_{ij}^k`$ denote the natural liftings of the $`F_{ij}^k`$ to the jet space, obtained by replacing the derivatives $`u_{x_i}^\mu `$ by the jet coordinates $`u_i^\mu `$ in the power expansions of the $`F_{ij}^k`$, i.e. $`\widehat{F}_{ij}^k=F_{ij}^k(x,u,u^{(1)})`$.
In the natural coordinates one has

$`X^{(r)}=X+\sum _{j,\mu }\eta _j^\mu \frac{\partial }{\partial u_j^\mu }+\mathrm{}+\sum _{i_1,\mathrm{},i_r,\mu }\eta _{i_1i_2\mathrm{}i_r}^\mu \frac{\partial }{\partial u_{i_1i_2\mathrm{}i_r}^\mu }`$

where $`\eta _i^\mu =D_i\eta ^\mu -\sum _j(D_i\theta _j)u_j^\mu `$, $`\eta _{i_1\mathrm{}i_{r-1}i_r}^\mu =D_{i_r}\eta _{i_1\mathrm{}i_{r-1}}^\mu -\sum _j(D_{i_r}\theta _j)u_{i_1\mathrm{}i_{r-1}j}^\mu `$ and $`D_i=\frac{\partial }{\partial x_i}+\sum _ku_i^k\frac{\partial }{\partial u^k}+\sum _{\mu ,j}u_{ij}^\mu \frac{\partial }{\partial u_j^\mu }+\mathrm{}`$ is the operator of total derivative. In our case a direct computation gives an explicit expression for the coefficients of $`X^{(2)}`$:

$`\eta _{i_1}^\mu =\frac{\partial \eta ^\mu }{\partial x_{i_1}}+\sum _ku_{i_1}^k\frac{\partial \eta ^\mu }{\partial u^k}-\sum _j\left(\frac{\partial \theta _j}{\partial x_{i_1}}+\sum _ku_{i_1}^k\frac{\partial \theta _j}{\partial u^k}\right)u_j^\mu ,`$

$`\eta _{i_1i_2}^\mu =\frac{\partial ^2\eta ^\mu }{\partial x_{i_2}\partial x_{i_1}}+u_{i_1}^\mu \left[\frac{\partial ^2\eta ^\mu }{\partial x_{i_2}\partial u^\mu }-\frac{\partial ^2\theta _{i_1}}{\partial x_{i_2}\partial x_{i_1}}\right]+u_{i_2}^\mu \left[\frac{\partial ^2\eta ^\mu }{\partial x_{i_1}\partial u^\mu }-\frac{\partial ^2\theta _{i_2}}{\partial x_{i_2}\partial x_{i_1}}\right]+\sum _{k\ne \mu }u_{i_1}^k\frac{\partial ^2\eta ^\mu }{\partial x_{i_2}\partial u^k}+\sum _{k\ne \mu }u_{i_2}^k\frac{\partial ^2\eta ^\mu }{\partial x_{i_1}\partial u^k}-\sum _{k\ne i_1,k\ne i_2}u_k^\mu \frac{\partial ^2\theta _k}{\partial x_{i_2}\partial x_{i_1}}-\sum _{k;j\ne i_2}u_{i_1}^ku_j^\mu \frac{\partial ^2\theta _j}{\partial x_{i_2}\partial u^k}-\sum _{i;s\ne i_1}u_{i_2}^iu_s^\mu \frac{\partial ^2\theta _s}{\partial x_{i_1}\partial u^i}+\sum _{r\ne \mu ,p\ne \mu }u_{i_2}^ru_{i_1}^p\frac{\partial ^2\eta ^\mu }{\partial u^r\partial u^p}+\sum _{t\ne \mu }u_{i_1}^tu_{i_2}^\mu \left[-\frac{\partial ^2\theta _{i_2}}{\partial x_{i_2}\partial u^t}+\frac{\partial ^2\eta ^\mu }{\partial u^\mu \partial u^t}\right]+\sum _{q\ne \mu }u_{i_2}^qu_{i_1}^\mu \left[-\frac{\partial ^2\theta _{i_1}}{\partial u^q\partial x_{i_1}}+\frac{\partial ^2\eta ^\mu }{\partial u^q\partial u^\mu }\right]+\left[\frac{\partial ^2\eta ^\mu }{\partial (u^\mu )^2}-\frac{\partial ^2\theta _{i_2}}{\partial x_{i_2}\partial u^\mu }-\frac{\partial ^2\theta _{i_1}}{\partial x_{i_1}\partial u^\mu }\right]u_{i_1}^\mu u_{i_2}^\mu -\sum _{a,b,s}u_{i_2}^au_{i_1}^bu_s^\mu \frac{\partial ^2\theta _s}{\partial u^a\partial u^b}+\mathrm{\Lambda }_{i_1i_2}^\mu `$

for $`i_1\ne i_2`$, and

$`\eta _{ii}^\mu =\frac{\partial ^2\eta ^\mu }{\partial x_i^2}+u_i^\mu \left[2\frac{\partial ^2\eta ^\mu }{\partial x_i\partial u^\mu }-\frac{\partial ^2\theta _i}{\partial x_i^2}\right]+2\sum _{k\ne \mu }u_i^k\frac{\partial ^2\eta ^\mu }{\partial x_i\partial u^k}-\sum _{k\ne i}u_k^\mu \frac{\partial ^2\theta _k}{\partial x_i^2}-2\sum _{k;j\ne i}u_i^ku_j^\mu \frac{\partial ^2\theta _j}{\partial x_i\partial u^k}+\sum _{r\ne \mu ;p\ne \mu }u_i^ru_i^p\frac{\partial ^2\eta ^\mu }{\partial u^r\partial u^p}+\sum _{t\ne \mu }u_i^tu_i^\mu \left[-\frac{\partial ^2\theta _i}{\partial x_i\partial u^t}+\frac{\partial ^2\eta ^\mu }{\partial u^\mu \partial u^t}\right]+\sum _{q\ne \mu }u_i^qu_i^\mu \left[-\frac{\partial ^2\theta _i}{\partial x_i\partial u^q}+\frac{\partial ^2\eta ^\mu }{\partial u^q\partial u^\mu }\right]+\left[\frac{\partial ^2\eta ^\mu }{\partial (u^\mu )^2}-2\frac{\partial ^2\theta _i}{\partial x_i\partial u^\mu }\right](u_i^\mu )^2-\sum _{a,b,s}u_i^au_i^bu_s^\mu \frac{\partial ^2\theta _s}{\partial u^a\partial u^b}+\mathrm{\Lambda }_{ii}^\mu `$

with

$`\mathrm{\Lambda }_{i_1i_2}^\mu =\sum _su_{i_2i_1}^s\frac{\partial \eta ^\mu }{\partial u^s}-\sum _pu_{i_2p}^\mu \frac{\partial \theta _p}{\partial x_{i_1}}-\sum _ju_{i_1j}^\mu \frac{\partial \theta _j}{\partial x_{i_2}}-\sum _{p,q}u_{i_2i_1}^qu_p^\mu \frac{\partial \theta _p}{\partial u^q}-\sum _{p,q}u_{i_2p}^\mu u_{i_1}^q\frac{\partial \theta _p}{\partial u^q}-\sum _{s,j}u_{i_1j}^\mu u_{i_2}^s\frac{\partial \theta _j}{\partial u^s}.`$

Since the system (1) is involutive, for every point $`P\in (𝒮)^{(2)}`$ with the natural projection $`\pi _{n,m}(P)=(p,q)\in \mathrm{I}\mathrm{C}^n\times \mathrm{I}\mathrm{C}^m`$ there exists a solution $`u(x)`$ of $`(𝒮)`$, holomorphic near $`p`$, such that $`(p,q,j_p^2(u))=P`$. So the Lie criterion (Theorem 2.72) implies that $`X\in Lie(𝒮)`$ if and only if the second prolongation satisfies the following system of equations in $`J_{n,m}^2`$: $`X^{(2)}(u_{ij}^\mu -\widehat{F}_{ij}^\mu )=0,u_{ij}^\mu =\widehat{F}_{ij}^\mu `$. This system implies that $`\eta _{ij}^\mu =X^{(2)}(\widehat{F}_{ij}^\mu )=X^{(1)}(\widehat{F}_{ij}^\mu )`$. Replace now in the expressions for $`\mathrm{\Lambda }_{i_1i_2}^\mu `$ the jet coordinates $`u_{ij}^k`$ by $`\widehat{F}_{ij}^k`$ and denote the obtained functions by $`\widehat{\mathrm{\Lambda }}_{ij}^\mu `$. Transferring them to the right side, we get equations of the form

$`\stackrel{~}{\eta }_{ij}^\mu =X^{(1)}(\widehat{F}_{ij}^\mu )-\widehat{\mathrm{\Lambda }}_{ij}^\mu `$ (5)

Without loss of generality assume that every $`\widehat{F}_{ij}^\mu `$ is represented by a power series with respect to the $`u_s^k`$. Then we can develop the right sides $`X^{(1)}(\widehat{F}_{ij}^\mu )-\widehat{\mathrm{\Lambda }}_{ij}^\mu `$ of our equations in power series with respect to the $`u_l^k`$. Clearly, the coefficients of these expansions are completely determined by the coefficients of the expansions of the $`\widehat{F}_{ij}^\mu `$, and one can effectively compute them in a concrete case.
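For $`n=m=1`$, for instance, the recursion is implemented in a few lines; the sympy sketch below checks it on the field $`X=x^2\partial _x+xu\partial _u`$, one of the generators of $`Lie(𝒮_0)`$ listed at the end of the proof (an illustration only, not the general multi-index computation):

```python
import sympy as sp

x, u, u1, u2 = sp.symbols('x u u1 u2')   # coordinates on the 2-jet space

def Dx(f):
    """Total derivative, truncated after the second-order jet variable
    (sufficient when the result is restricted to u2 = F)."""
    return sp.diff(f, x) + u1*sp.diff(f, u) + u2*sp.diff(f, u1)

def prolong2(theta, eta):
    """First and second prolongation coefficients of
    X = theta*d/dx + eta*d/du, via the recursion above."""
    eta1 = Dx(eta) - Dx(theta)*u1
    eta2 = Dx(eta1) - Dx(theta)*u2
    return eta1, eta2

theta, eta = x**2, x*u
eta1, eta2 = prolong2(theta, eta)
print(eta1)               # u - x*u1
print(sp.expand(eta2))    # -3*x*u2
print(eta2.subs(u2, 0))   # 0: X is a symmetry of u_xx = 0
```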
Comparing now the coefficients of the powers of $`u_l^k`$ of degree $`\leq 3`$ in the equations (5) and using the explicit expressions for the coefficients of $`X^{(2)}`$, we obtain the following PDE system for the coefficients of the infinitesimal symmetry $`X`$:

$`(A):\frac{\partial ^2\eta ^\mu }{\partial x_{i_2}\partial x_{i_1}}=A_{i_1i_2}^\mu ,\frac{\partial ^2\eta ^\mu }{\partial x_i\partial u^k}=B_{ik}^\mu ,k\ne \mu ,\frac{\partial ^2\eta ^\mu }{\partial u^r\partial u^p}=C_{rp}^\mu ,r\ne \mu ,p\ne \mu ,`$

$`(B):\frac{\partial ^2\theta _k}{\partial x_{i_2}\partial x_{i_1}}=D_{i_1i_2}^k,k\ne i_1,k\ne i_2,\frac{\partial ^2\theta _j}{\partial x_i\partial u^k}=E_{ik}^j,j\ne i,(C):\frac{\partial ^2\theta _s}{\partial u^a\partial u^b}=G_{ab}^s`$

$`(D_1):\frac{\partial ^2\eta ^\mu }{\partial x_{i_2}\partial u^\mu }-\frac{\partial ^2\theta _{i_1}}{\partial x_{i_2}\partial x_{i_1}}=H_{i_1i_2}^\mu ,i_1\ne i_2,(D_2):\frac{\partial ^2\eta ^\mu }{\partial u^\mu \partial u^t}-\frac{\partial ^2\theta _{i_2}}{\partial x_{i_2}\partial u^t}=I_{i_2t}^\mu ,t\ne \mu `$

$`(D_3):2\frac{\partial ^2\eta ^\mu }{\partial x_i\partial u^\mu }-\frac{\partial ^2\theta _i}{\partial x_i^2}=J_i^\mu ,(D_4):\frac{\partial ^2\eta ^\mu }{\partial (u^\mu )^2}-2\frac{\partial ^2\theta _i}{\partial x_i\partial u^\mu }=K_i^\mu `$

where the right sides are analytic functions of $`\lambda (x,u)=(x,u,\alpha (x,u),\beta (x,u),\delta (x,u),\epsilon (x,u))`$. We point out that we do not write here all the obtained equations; we consider only those which will be enough for the proof of our results. Denote by $`\mathrm{\Omega }`$ a holomorphic vector valued function whose components coincide with the right sides of our system: $`\mathrm{\Omega }=(A_{i_1i_2}^\mu ,B_{ik}^\mu ,\mathrm{},K_i^\mu )`$. An important property of the obtained PDE system $`(A)`$–$`(D_4)`$ is its linearity with respect to the second order derivatives of the dependent variables. Denote by $`v`$ the vector in $`\mathrm{I}\mathrm{C}^L`$ (for a suitable $`L`$) whose components are the second order partial derivatives of the $`\theta _j`$ and $`\eta ^\mu `$; then our system can be written in the form $`Mv=\mathrm{\Omega }`$, where $`M`$ is an integer matrix. An elementary linear algebra argument shows that this system can be represented in the form $`M^{}v^{}=P\gamma +\mathrm{\Omega }`$, where $`v^{}`$ is the vector formed by those components of $`v`$ which are not components of $`\gamma `$, $`P`$ is an integer matrix and $`M^{}`$ is an invertible square integer matrix. Therefore $`v^{}=(M^{})^{-1}P\gamma +(M^{})^{-1}\mathrm{\Omega }`$, so every second order partial derivative of the $`\theta _j`$, $`\eta ^k`$ is a linear combination of the components of $`\gamma `$ and $`\mathrm{\Omega }`$. In particular, the second order partial derivatives of the $`\theta _j`$, $`\eta ^k`$ at $`(x,u)`$ are determined by $`\omega (x,u)`$. Denote by $`V`$ the vector whose components are the third order partial derivatives of the $`\theta _j`$, $`\eta ^\mu `$ and write the system obtained by taking the partial derivatives in $`(A)`$–$`(D_4)`$ in the form $`NV=\mathrm{\Omega }^{}`$, where $`\mathrm{\Omega }^{}`$ denotes the vector with components $`\frac{\partial \mathrm{\Omega }_j(\lambda (x,u))}{\partial x_i}`$, $`\frac{\partial \mathrm{\Omega }_j(\lambda (x,u))}{\partial u^k}`$ and $`N`$ is an integer matrix.
A direct computation shows that one can choose a subsystem of this system with an invertible square matrix $`N^{}`$, so we obtain that for every multi-index $`\tau `$, $`|\tau |=3`$, there are polynomials $`R_j^\tau `$, $`S_k^\tau `$ with rational coefficients such that the following holds:

$`\frac{\partial ^3\theta _j}{\partial x_1^{\tau _1}\mathrm{}\partial x_n^{\tau _n}\partial (u^1)^{\tau _{n+1}}\mathrm{}\partial (u^m)^{\tau _{n+m}}}(x,u)=R_j^\tau ((\partial \mathrm{\Omega })(\lambda (x,u)),(\partial ^2\theta )(x,u),(\partial ^2\eta )(x,u))`$ (6)

$`\frac{\partial ^3\eta ^k}{\partial x_1^{\tau _1}\mathrm{}\partial x_n^{\tau _n}\partial (u^1)^{\tau _{n+1}}\mathrm{}\partial (u^m)^{\tau _{n+m}}}(x,u)=S_k^\tau ((\partial \mathrm{\Omega })(\lambda (x,u)),(\partial ^2\theta )(x,u),(\partial ^2\eta )(x,u))`$ (7)

where $`(\partial \mathrm{\Omega })(\lambda (x,u))`$ denotes the vector function whose components are the first order partial derivatives of the $`\mathrm{\Omega }_j`$ evaluated at $`\lambda (x,u)`$, and $`(\partial ^2\theta )(x,u)`$ (resp. $`(\partial ^2\eta )(x,u)`$) denotes the vector function whose components are the partial derivatives of all the $`\theta _j`$ (resp. $`\eta ^k`$) of order $`\leq 2`$. This means that the third order partial derivatives of the $`\theta _j`$, $`\eta ^k`$ at $`(x,u)`$ are determined by $`\omega (x,u)`$. If $`\omega (x,u)`$ is now given, (6), (7) and the chain rule show that all the coefficients of the Taylor expansions of the $`\theta _j`$, $`\eta ^k`$ are determined by recursion. This completes the proof of the theorem. It is also clear from the construction that the corresponding homogeneous system $`Mv=0`$ describes infinitesimal symmetries of the system $`(𝒮_0)`$ of the form (1) with $`F_{ij}^k0`$. We obtain that the set $`Lie(𝒮_0)`$ of infinitesimal symmetries of this system is a complex Lie algebra of dimension $`(n+m+2)(n+m)`$ generated by the following holomorphic vector fields: $`U_k=\frac{\partial }{\partial x_k}`$, $`V_\mu =\frac{\partial }{\partial u^\mu }`$, $`W_{jk}=x_j\frac{\partial }{\partial x_k}`$, $`A_{jk}=u^j\frac{\partial }{\partial x_k}`$, $`B_{k\mu }=x_k\frac{\partial }{\partial u^\mu }`$, $`C_{k\mu }=u^k\frac{\partial }{\partial u^\mu }`$, $`X_j=\sum _kx_jx_k\frac{\partial }{\partial x_k}+\sum _\mu x_ju^\mu \frac{\partial }{\partial u^\mu }`$, $`Y_\nu =\sum _kx_ku^\nu \frac{\partial }{\partial x_k}+\sum _\mu u^\nu u^\mu \frac{\partial }{\partial u^\mu }`$, $`k,j=1,\mathrm{},n`$, $`\mu ,\nu =1,\mathrm{},m`$. In particular, $`dimLie(𝒮_0)=(n+m+2)(n+m)`$, so the dimension estimate given by our theorem is precise. In the present paper we restrict the application of the Lie method to the classical case of a Levi nondegenerate hypersurface, but this method can also be applied in other situations which are of interest for CR geometry and form an area of active research by several authors. For instance, the Levi degenerate case leads to the consideration of holomorphic second order completely overdetermined involutive PDE systems which are not solved with respect to the second order derivatives. A study of their symmetries requires a combination of the Lie method with some tools of local complex analytic geometry. On the other hand, the Segre families of Cauchy–Riemann manifolds of higher codimension are described by holomorphic second order completely overdetermined involutive PDE systems with additional first order relations, i.e. with additional holomorphic equations involving first order derivatives of the dependent variables. Clearly, the Lie method allows one to study this class of systems as well; it just requires more involved computations.
no-problem/0002/hep-ph0002068.html
ar5iv
text
# Emission of light mesons directly from the surface of quark-gluon plasma. ## Abstract On the basis of a hydrodynamic model of evolution we consider the emission of the lightest ($`\pi ,K,\eta ,\rho ,\omega ,K^{}`$) mesons directly from the surface of the quark-gluon plasma created in a heavy ion collision, taking into account their absorption by the surrounding hadronic gas. We evaluate upper and lower limits on the yields of these direct mesons in Pb+Pb collisions at SpS, RHIC and LHC energies, and find that even in the case of the lowest yield, direct $`K`$, $`\eta `$ and heavier mesons dominate over freeze-out ones at soft $`p_t`$ ($`p_t\lesssim 0.5GeV/c`$). This leads to an enhancement of the low $`p_t`$ production of these mesons which can hardly be explained within a pure hadronic gas scenario, and which can be considered as a quark-gluon plasma signature. 25.75.-q,12.38.Mh,25.75.Dw,24.10.Nz As a rule, when considering a heavy ion collision with QGP creation, one assumes that hadrons produced on the plasma surface suffer numerous rescatterings in the surrounding hadronic gas, so that the final hadrons do not carry direct information about the plasma. In this letter we show that this widely accepted opinion is not quite right in the case of the finite systems created in heavy ion collisions. Because the free path lengths of hadrons in the hadronic gas are comparable with the size of the region occupied by the hot matter, a significant number of the hadrons created on the surface of the quark-gluon plasma can pass through the surrounding hadronic gas without rescattering. Such final state hadrons we call direct hadrons below. In contrast to the usually considered freeze-out hadrons, originating from evaporation from the hadronic gas or from its freeze-out, direct hadrons carry immediate information about the plasma surface. The goal of this letter is to show that the yield of direct mesons is not negligibly small and, moreover, that there is a kinematic region where direct hadrons dominate over freeze-out ones. The fact, rather obvious in itself, that final hadrons are emitted not from a thin freeze-out hypersurface but from the whole volume occupied by the hadronic gas, was noted several times within different models of AA collisions: within the quark-gluon string model, within a model combining a hydrodynamic description of the QGP evolution with relativistic quantum molecular dynamics for the hadronic gas, and within an approach similar to ours, used earlier for the evaluation of hadronic emission from the depth of the hadronic gas. Nevertheless, until our paper, there were no attempts to separate the direct hadrons coming from the QGP surface from those coming from the hadronic gas. Surely, one cannot point at a given hadron and tell that it came, e.g., from the QGP surface, but, as we will show, it is possible to find a kinematic region where hadrons from the QGP surface dominate. In our earlier paper we considered S+Au collisions at $`200AGeV`$ (SpS) and evaluated the yields of direct and freeze-out pions. We showed that direct pions can dominate in the soft $`p_t`$ region, which results in an enhancement of the yield of pions with low $`p_t`$. However, there are several effects, such as resonance decays or the absence of chemical equilibrium, which lead to a similar enhancement of the pion spectrum and make a strong physical background for direct pions.
In this letter, in addition to pions, we evaluate the emission of heavier direct and freeze-out mesons, $`K,\eta ,\rho ,\omega `$ and $`K^{}`$, in AA collisions at SpS, RHIC and LHC energies, and demonstrate that for the heavier mesons the contribution of direct mesons is even larger than for pions, while the physical background is negligible. To estimate the yields of the direct mesons in a heavy ion collision we use the following model. The hot matter created at the very beginning of the collision evolves hydrodynamically. On the background of this evolution the direct mesons are continuously emitted from the surface of the QGP, as a result of quarks and gluons flying out from the depth of the quark-gluon plasma, their hadronization on the plasma surface, and the passage of the direct mesons through the surrounding expanding hadronic gas, sometimes with rescattering. If a direct meson suffers rescattering in the hadronic gas, we assume that it has lost the direct information about the plasma and consider it further hydrodynamically. For freeze-out hadrons we assume thermodynamic and chemical equilibrium at the freeze-out moment. A more elaborate description of the freeze-out of the hadronic gas can only increase the relative yield of direct hadrons. The probability for a quark or gluon emitted in the depth of the QGP to reach its surface, and for a meson to escape from the hadronic gas without rescattering, is determined by the expression:

$`P=\mathrm{exp}\left\{-\int \lambda ^{-1}(\epsilon ,x)𝑑x\right\},`$

where the integration is performed along the path of the particle in the hot matter, taking its evolution into account, and $`\lambda (\epsilon ,x)`$ is the free path length of the particle, which depends on the energy $`\epsilon `$ of the particle and the local energy density at the point $`x`$. We calculate the free path lengths of quarks and gluons in the QGP and of mesons in the hadronic gas using the equation

$`\lambda _i(\epsilon )=\left[\frac{1}{16\pi ^3}\frac{T}{\epsilon p}\underset{j}{\sum }\underset{(m_1+m_2)^2}{\overset{\mathrm{}}{\int }}\sqrt{s^2-2s(m_1^2+m_2^2)+(m_1^2-m_2^2)^2}\sigma _{ij}(s)\mathrm{ln}\left(\frac{1-\mathrm{exp}(-a_+)}{1-\mathrm{exp}(-a_{-})}\right)𝑑s\right]^{-1},`$

$`a_\pm =\frac{\epsilon (s-m_1^2-m_2^2)\pm \sqrt{(\epsilon ^2-m_1^2)(s^2-2s(m_1^2+m_2^2)+(m_1^2-m_2^2)^2)}}{2m_1^2T},`$

where $`\sigma _{ij}(s)`$ is the total cross-section of the interaction of particles $`i`$ and $`j`$, $`m_1`$ and $`m_2`$ are the masses of the projectile and target particles respectively, $`T`$ is the temperature, and the sum is taken over all possible two-particle reactions. In evaluating the free path lengths of hadrons in the hadronic gas we take into account only rescatterings on pions. For the free path length of pions we use experimental cross-sections of $`\pi \pi `$ scattering, while for $`K,\eta ,\mathrm{}`$ we assume $`\sigma =\sigma _{\pi ^+\pi ^+}\approx 10mb`$ plus contributions from the excitation of resonances. One could expect that the presence of nucleons in the hadronic gas results in a significant reduction of the free path length of the pion with respect to a pure pionic gas (due to the excitation of $`\mathrm{\Delta }`$ resonances). However, below we concentrate on the midrapidity region, where the net baryon density is small, and for reasonable values of the baryonic chemical potential $`\mu \lesssim 300MeV`$ and temperatures below $`200MeV`$ we find that the contribution of nucleons to the free path length of pions is negligible.
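To illustrate the scale of $`\lambda `$, the formula above can be evaluated numerically for a single channel with a constant cross-section; in the sketch below the masses, temperature and $`\sigma =10`$ mb are assumed, illustrative inputs (the actual calculation sums over all reactions with $`s`$-dependent cross-sections):

```python
import numpy as np
from scipy.integrate import quad

HBARC = 0.1973  # GeV*fm

def mean_free_path_fm(eps, m1, m2, T, sigma_mb=10.0):
    """Single-channel evaluation of the mean-free-path formula above:
    projectile (mass m1, energy eps) in a thermal Bose gas of particles
    of mass m2 at temperature T, constant cross-section; GeV units."""
    p = np.sqrt(eps**2 - m1**2)
    sigma = sigma_mb * 0.1 / HBARC**2             # mb -> GeV^-2

    def integrand(s):
        kin2 = s**2 - 2*s*(m1**2 + m2**2) + (m1**2 - m2**2)**2
        kin = np.sqrt(max(kin2, 0.0))             # guard round-off at threshold
        a_p = (eps*(s - m1**2 - m2**2) + p*kin) / (2*m1**2*T)
        a_m = (eps*(s - m1**2 - m2**2) - p*kin) / (2*m1**2*T)
        return kin * sigma * np.log((1 - np.exp(-a_p)) / (1 - np.exp(-a_m)))

    s_min = (m1 + m2)**2
    integral, _ = quad(integrand, s_min, s_min + 60*T*eps, limit=200)
    inv_lam = integral * T / (16*np.pi**3 * eps * p)   # in GeV
    return HBARC / inv_lam                             # in fm

# e.g. a 1 GeV kaon in a pion-like gas at T = 150 MeV:
print(mean_free_path_fm(1.0, 0.494, 0.138, 0.150))
```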
The emission rate (the number of particles emitted from a unit volume per unit time) of quarks and gluons from the QGP we find from the condition that an infinitely thick layer of QGP emits quarks and gluons in accordance with the Stefan-Boltzmann formula. So we find:

$`\epsilon \frac{d^7R_i}{d^3pd^4x}=\frac{d_i}{\lambda _i(\epsilon )}\frac{\epsilon }{(\mathrm{exp}(\epsilon /T)\pm 1)},`$

where $`d_i`$ is the degeneracy, $`\lambda _i`$ the free path length of the particle, and $`T`$ the temperature. In this letter we are interested in $`p_t`$ distributions in the midrapidity region; therefore we restrict ourselves to Bjorken 2+1 hydrodynamics with transverse expansion, which provides a good description of the evolution of the hot matter in this region. Since it is not possible to describe the hadronization of quarks and gluons on the plasma surface consistently, we estimate upper and lower limits on the yields of direct hadrons by using two models of hadronization: the ‘creation’ model, giving the lower limit, and the ‘pull in’ model, used in the paper and giving the upper limit. In both models we assume that a quark or gluon which has flown through the plasma surface pulls a tube (string) of color field. The creation of the final hadron corresponds to the breaking of the tube due to the discoloring of the outgoing quark or gluon. In the first model we assume that this discoloring takes place as a result of the creation of a quark-antiquark pair in the strong field of the tube. This assumption is used in the well known Lund model, and implemented in the event generator JETSET , where all parameters are chosen to fit $`e^+e^{}`$ annihilation at $`\sqrt{s}=30GeV`$. Therefore, in the ‘creation’ model we extract the corresponding probabilities from JETSET 7.4. In the second model we assume that the discoloring takes place as a result of the ‘pulling in’ of a soft quark or gluon with the corresponding color from the pre-surface layer of the QGP into the tube. Because of the large number of soft quarks and gluons in the QGP, each outgoing quark and gluon can transform into some hadron, provided its energy is larger than the mass of this hadron. If several hadrons can be formed, we take their relative yields from $`e^+e^{}`$ annihilation at $`\sqrt{s}=30GeV`$. The main difference between these two models is in the probability of fragmentation of soft quarks and gluons: in the ‘pull in’ model the probability is independent of energy, while in the ‘creation’ model the probability of the creation of a quark pair in a strong field is proportional to $`\mathrm{exp}(-\epsilon ^2/p_0^2)`$, where $`\epsilon `$ is the energy of the quark and $`p_0\approx 0.5GeV`$. Having the probabilities of hadronization of quarks and gluons into a hadron, $`R_h^{q,g}(\epsilon _q)`$, evaluated using these two models, we obtain the following hadronization function:

$`f_h^{q,g}(\epsilon _h,\theta _h,\varphi _h)=2R_h^{q,g}(\epsilon _q)\frac{\delta (\varphi _h-\varphi _q)\delta (\mathrm{cos}\theta _h-\mathrm{cos}\theta _q)\theta (\epsilon _q-m_h)}{\epsilon _q\sqrt{\epsilon _q^2-m_h^2}-m_h^2\mathrm{ln}\left(\epsilon _q/m_h+\sqrt{\epsilon _q^2-m_h^2}/m_h\right)}`$

where $`\epsilon ,\theta ,\varphi `$ are the energy, polar angle and azimuthal angle of the initial quark ($`q`$) or gluon ($`g`$) or of the final hadron ($`h`$), and $`m_h`$ is the mass of the hadron. This fragmentation function is normalized to describe the fragmentation of one quark or gluon into $`R_h^{q,g}(\epsilon _q)`$ hadrons with energy in the range $`m_h<\epsilon _h<\epsilon _q`$.
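This normalization can be verified numerically: reading the denominator as twice the integral of the momentum-space weight $`\sqrt{\epsilon _h^2-m_h^2}`$ (our interpretation of the formula, not an explicit statement in the text), the energy integral of $`f_h^{q,g}`$ gives exactly $`R_h^{q,g}(\epsilon _q)`$:

```python
import numpy as np
from scipy.integrate import quad

# Check the normalization of the hadronization function for R = 1:
# integrating 2*sqrt(e**2 - m**2)/denominator over m_h < e < eps_q
# must give unity.  Example values: a kaon-mass hadron, 2 GeV quark.
m_h, eps_q = 0.494, 2.0
root = np.sqrt(eps_q**2 - m_h**2)
denom = eps_q*root - m_h**2*np.log(eps_q/m_h + root/m_h)
val, _ = quad(lambda e: 2*np.sqrt(e**2 - m_h**2)/denom, m_h, eps_q)
print(val)   # -> 1.0 up to quadrature accuracy
```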
To apply our model to central $`Pb+Pb`$ collisions at $`158AGeV`$ we choose the initial conditions (initial temperature $`T_{in}`$, thermalization time $`\tau _{in}`$ and initial radius $`R_{in}`$), the QGP–hadronic gas transition temperature ($`T_c`$) and the freeze-out temperature ($`T_f`$) to reproduce the experimental $`p_t`$ distribution of $`\pi ^0`$ at midrapidity . So we use: $`T_{in}=350MeV,\tau _{in}=0.25fm/c,R_{in}=6.5fm,T_c=160MeV,T_f=140MeV.`$ The evaluated $`p_t`$ distributions of various mesons at midrapidity for the ‘pull in’ and ‘creation’ models of hadronization of quarks and gluons are shown in Fig. 1. We do not distinguish isotopic projections of mesons, so we show distributions averaged over all pions, kaons etc. We find that the yield of direct mesons is comparable with the yield of freeze-out ones, see Fig. 1. The contribution of direct mesons is higher within the ‘pull in’ model, which is a consequence of the higher probability of fragmentation of low energy quarks and gluons in this model of hadronization. Nevertheless, within both models of hadronization direct $`\eta `$ and heavier mesons dominate over freeze-out ones at soft $`p_t`$. Thus we obtain the unexpected result that direct mesons, emitted from the hottest region of the collision, dominate not only in the hard part of the spectrum, but also in the soft $`p_t`$ region. This takes place because a quark or gluon hadronizing on the plasma surface spreads its energy between the direct hadron and a part of the color tube which is pulled back into the plasma, which leads to a non-thermal spectrum of the direct hadrons. To investigate the dependence of the yields of direct mesons on the energy of the collision, we evaluated these yields in central $`Pb+Pb`$ collisions at $`3100+3100AGeV`$ (LHC). To do this we used the following initial conditions: $`T_{in}=1GeV,\tau _{in}=0.15fm/c,R_{in}=6.5fm,`$ which corresponds to a multiplicity at midrapidity $`dN/dy\approx 10^4`$, as predicted by cascade models, while the transition and freeze-out temperatures remain the same as for SpS. The distributions of direct and freeze-out hadrons evaluated in this case are shown in Fig. 2. In contrast to SpS, direct hadrons do not dominate at hard $`p_t`$ at the LHC energy, because at this energy the collective radial velocity of the hadronic gas compensates for the slightly higher temperature in the pre-surface layer of the QGP from which direct hadrons are emitted. In return, due to the longer QGP phase, direct mesons contribute more significantly to the low $`p_t`$ region. For the sake of brevity we do not present here the plots for the RHIC energy, but the contribution of direct hadrons can be easily estimated using Figs. 1 and 2 and Table 1. We summarize in Table 1 the fractions of direct mesons in the total meson yield, evaluated for the two models of hadronization at SpS, RHIC and LHC energies. As one can see, the fraction of direct hadrons is considerable and increases with increasing collision energy. We find that direct hadrons contribute considerably to the total hadronic yield: the contributions of direct $`\eta `$ and heavier mesons reach $`{\sim}0.5`$ of the total yield. Direct mesons contribute not only to the hard part of the spectrum (at SpS energies) but also at low $`p_t`$. We would like to stress that the domination of direct hadrons in the soft $`p_t`$ region is not an artifact of our model, but the consequence of two rather general physical requirements.
First – a part of the energy of a quark or gluon is lost during hadronization. Second – there is approximate thermal equilibrium, to the extent which is usually observed in heavy ion collisions. As for the value of this domination, it certainly depends on the details of the model: on the probability of hadronization, the transparency of the hadronic gas and the details of the evolution of the hot matter. We estimated the sensitivity of the yield of direct hadrons to these parameters. In this letter we evaluated upper and lower limits for the probability of hadronization, while in our earlier paper we estimated the sensitivity to variations of the free path lengths and of the hydrodynamic parameters. As a result, we find that the yield of direct hadrons is rather stable with respect to variations of the model parameters, and cannot be changed considerably within reasonable models. The contribution of direct hadrons changes the shape of the meson spectrum, which can be used, in principle, as a quark-gluon plasma signature. But to do this one has to answer the question: ‘is it possible to find such an enhancement within a pure hadronic picture of the collision?’. Fortunately for us, an excess of low $`p_t`$ pions in AA collisions was observed experimentally long ago and caused an active discussion of possible sources of such an excess in the literature. A review of the proposed explanations includes:
* Resonance decays, mainly of $`\mathrm{\Delta }`$ and $`N^{}`$. This effect is very important for pions, especially in the target and projectile regions; however, it brings a negligibly small contribution to the spectra of $`\eta `$ and heavier mesons. The number of heavier resonances which can decay into the mesons we consider, multiplied by the branching ratio of this decay, is well below the number of direct mesons (see Table 1).
* Collective motion, which, as was demonstrated earlier, can explain the observed enhancement; but it was later shown that this explanation was a consequence of the rather specific and unnatural freeze-out conditions used there. Within more physical hydrodynamic models radial collective expansion does not bring such a contribution.
* Absence of chemical equilibrium in the pion gas. This effect essentially relies on the suppression of channels changing the number of pions with respect to elastic scattering. For heavier mesons this is not the case, so this effect does not contribute either.
Therefore we find that the excess of low $`p_t`$ $`\eta `$ and heavier mesons cannot be explained within a pure hadronic scenario of the collision and, possibly, can be considered as a quark-gluon plasma signature. To conclude, we have considered the emission of light ($`\pi ,K,\eta ,\rho ,\omega ,K^{}`$) mesons directly from the surface of the quark-gluon plasma in the case of its creation in a nucleus-nucleus collision. These mesons are continuously emitted from the surface of the QGP due to the fly-out of quarks and gluons from the pre-surface region, their hadronization and the subsequent escape from the hot matter without interactions. Direct mesons give a unique opportunity to test the pre-surface layer of the QGP via strongly interacting particles. We evaluated the yields of direct and freeze-out mesons at SpS, RHIC and LHC energies, and find that direct mesons contribute considerably to the total meson emission – their contributions reach $`{\sim}0.5`$ of the total yield.
Moreover, direct hadrons dominate over freeze-out ones in the soft part of the spectrum, which results in an enhancement of the low $`p_t`$ production of $`\eta `$ and heavier mesons which, as we have shown, cannot be explained within a pure hadronic gas scenario and can be considered as a QGP signature. We argue that this effect is the result of two natural assumptions: first – there is approximate thermodynamic equilibrium, that is, there are no drops of matter with extremely low temperature; second – a quark or gluon loses part of its energy during hadronization on the plasma surface. The value of the enhancement depends on the details of the model; however, one cannot change it significantly within reasonable models. This research was supported by grants RFFI 96-15-96548 and INTAS 97-158.
no-problem/0002/astro-ph0002341.html
ar5iv
text
# Variable stars in the field of the globular cluster E3 ## 1 Introduction We present the results of a search for variable stars in the faint sparse globular cluster E3, located at $`\alpha _{2000}=9^h20^m59^s`$, $`\delta _{2000}=-77\mathrm{°}16\mathrm{}57\mathrm{}`$. The cluster was discovered on the ESO B Schmidt Survey of the Southern Sky by Lauberts (1976). The first $`BV`$ photometry of the cluster was presented by van den Bergh, Demers & Kunkel (1980). Numerous candidates for blue stragglers were identified. The photoelectric photometry of Frogel & Twarog (1983) confirmed this finding. A subsequent study by Hesser et al. (1984) using $`UBV`$ photoelectric and photographic observations showed a sparsely populated subgiant branch in the color-magnitude diagram. The first CCD photometry for E3 in the $`BV`$ bands, obtained with the CTIO 4m telescope, was published by McClure et al. (1985). These observations suggested the presence of a second sequence of stars $`{\sim}0.75`$ mag above the cluster main sequence, interpreted as evidence for a significant population of binary stars in E3. The cluster was further studied by Gratton & Ortolani (1987), who provided new $`BV`$ CCD photometry from the 2.2m telescope at ESO. ## 2 Observations and Data Reduction The data for this project were obtained with the Las Campanas Observatory 1.0m Swope telescope during two separate runs: from April 11 to 21, 1996 and from May 16 to 27, 1996. For the first few nights of the April run the telescope was equipped with the TEK1 1024x1024 CCD camera giving a pixel scale of 0.70″/pixel. On the night of April 14 the camera was switched to the TEK3 2048x2048 CCD with a pixel scale of 0.61″/pixel. During the May run the FORD 2048x2048 CCD camera with a pixel scale of 0.41″/pixel was used. The main observing target on both runs was the M4 globular cluster. Several exposures of E3 were taken at the beginning of most nights. A total of 121 long (400–900 sec) exposures were taken in the $`V`$ filter (33, 42 and 46 with TEK1, TEK3 and FORD, respectively), six short (35–120 sec) exposures in $`V`$ (2 with TEK1, 4 with FORD), two 600 sec exposures in $`I`$ (TEK1) and two 480 sec exposures in $`B`$ (TEK1). The preliminary processing of the CCD frames was done with the standard routines in the IRAF-CCDPROC package (IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the NSF). The images from the TEK3 camera were clipped to a size of 1024x1024 pixels² to cover roughly the same field as the TEK1 images. Due to the high degree of PSF variability on the images taken with the FORD camera, only the central 800x800 pixel² sections were used. Photometry was extracted using the Daophot/Allstar package (Stetson 1987). A PSF varying linearly with the position on the frame was used. The PSF was modeled with a Moffat function. Stars were identified using the FIND subroutine and aperture photometry was measured with the PHOT subroutine. Approximately 40 bright isolated stars were initially chosen by Daophot for the construction of the PSF. Of those, the stars with profile errors greater than twice the average were rejected and the PSF was recomputed. This procedure was repeated until no such stars were left on the list (the rejection loop is sketched below). The PSF was then further refined on frames with all but the PSF stars subtracted.
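A short sketch of the iterative rejection described above, with synthetic profile errors invented purely for illustration:

```python
import numpy as np

def select_psf_stars(profile_errors, factor=2.0):
    """Drop stars whose profile error exceeds `factor` times the
    current average, recompute the average, and repeat until no star
    is rejected; returns the indices of the surviving stars."""
    keep = np.arange(len(profile_errors))
    while True:
        errs = profile_errors[keep]
        good = errs <= factor * errs.mean()
        if good.all():
            return keep
        keep = keep[good]

rng = np.random.default_rng(0)
errors = rng.uniform(0.01, 0.03, 40)     # ~40 candidate PSF stars
errors[:3] = [0.20, 0.15, 0.12]          # e.g. blended or defective stars
print(len(select_psf_stars(errors)), "stars kept")
```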
This refinement procedure was applied twice. The PSF obtained in this way was then used by Allstar in the profile photometry. The image where the most stars were identified was chosen as the template. The template star list was then transformed to the $`(X,Y)`$ coordinate system of each of the frames and used as input to Allstar in the fixed-position mode. The output profile photometry was transformed to the common instrumental system of the template image and then combined into a database. The databases were created for the long (400–900 sec) exposures in the $`V`$ filter only. ## 3 Variable stars We have followed the procedure for selecting variables given in Kaluzny et al. (1998), where it is described in detail. From the 1541 stars in the $`V`$ database 11 variable star candidates were selected. After the rejection of stars with noisy and/or chaotic light curves we were left with two variables. Their periods were refined using the analysis of variance method, as described by Schwarzenberg-Czerny (1989). These two variables were confirmed with ISIS, the image subtraction package (Alard & Lupton 1998, Alard 1999). No other variables were detected using this method. In Figure 2 we present the phased $`V`$ light curves of the two variables. Table 1 lists the parameters of these variables: name, period, $`V`$ magnitude ($`V`$ for the pulsating variable, $`V_{max}`$ for the eclipsing binary), and the $`B-V`$ and $`V-I`$ colors. The variables are indicated by open circles on the finding chart in Figure 1. V1 is a pulsating variable, most likely of the SX Phe type, judging from its short period (0.0853 days) and the shape of its light curve. V2 is an eclipsing binary with a period of 0.4490 days. Its light curve shows an absence of the constant light phase, indicating that it is a W UMa type variable. We have used the period-luminosity calibration for SX Phe stars derived by McNamara (1997) to estimate the distance modulus to V1: $`M_V=-3.725\mathrm{log}P-1.933`$ Adopting a value of reddening E(B-V)=0.30 (Harris 1996) we obtain a distance modulus $`(m_V-M_V)_0=14.46`$. This value is in agreement with the distance moduli found in the literature: 14.55 - van den Bergh et al. (1980), 14.4 - Frogel & Twarog (1983), 14.2 - Gratton & Ortolani (1987), indicating that V1 is located at the same distance as the cluster. The following period-color-luminosity calibrations for W UMa type eclipsing binaries derived by Rucinski (2000) were applied to V2: $`M_V^{BV}=-4.44\mathrm{log}P+3.03(B-V)_0+0.12`$ $`M_V^{VI}=-4.43\mathrm{log}P+3.63(V-I)_0-0.31`$ The fact that the $`B-V`$ and $`V-I`$ colors were determined at random phase should not influence the outcome substantially, as in the case of contact binaries the color does not change significantly throughout the cycle. Using the value $`E(V-I)=1.28E(B-V)`$ (Schlegel et al. 1997) we obtained a distance modulus of 15.42 mag from the first calibration and 14.83 mag from the second. The variable appears to be located behind the cluster, in the Milky Way halo. ## 4 Color-magnitude diagrams To construct the color-magnitude diagrams we combined pairs of long exposures in the $`BVI`$ filters. The transformation from instrumental magnitudes to the standard system was derived from the observations of the Landolt fields (Landolt 1992). The following relations were adopted: $`v=V-0.0189\times (B-V)+const`$ $`b-v=0.9359\times (B-V)+const`$ $`v=V-0.0182\times (V-I)+const`$ $`v-i=0.9843\times (V-I)+const`$
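For reference, the distance moduli obtained in Sect. 3 from the McNamara (1997) and Rucinski (2000) calibrations follow from simple arithmetic; in the sketch below $`R_V=3.1`$ is an assumed standard value, and since Table 1 is not reproduced here the apparent magnitude of V1 is only an illustrative consistency input:

```python
import numpy as np

def mod0_sx_phe(P_days, V, EBV, R_V=3.1):
    """Dereddened distance modulus from the McNamara (1997) SX Phe
    period-luminosity relation quoted in Sect. 3."""
    M_V = -3.725*np.log10(P_days) - 1.933
    return V - R_V*EBV - M_V

def MV_wUMa_BV(P_days, BV0):     # Rucinski (2000), B-V calibration
    return -4.44*np.log10(P_days) + 3.03*BV0 + 0.12

def MV_wUMa_VI(P_days, VI0):     # Rucinski (2000), V-I calibration
    return -4.43*np.log10(P_days) + 3.63*VI0 - 0.31

# V1: P = 0.0853 d gives M_V ~ 2.05; with E(B-V) = 0.30 the quoted
# (m-M)_0 = 14.46 then corresponds to an apparent magnitude near
# V ~ 17.4 (a consistency value, not taken from Table 1).
print(round(mod0_sx_phe(0.0853, 17.44, 0.30), 2))   # ~14.46
```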
We have compared our $`BV`$ photometry with that of McClure et al. (1985). The average differences in $`V`$ magnitude and $`B-V`$ color were computed for 6 selected stars in the range $`15.5\leq V\leq 19.25`$ and were found to be $`\mathrm{\Delta }V=0.02\pm 0.014`$ and $`\mathrm{\Delta }(B-V)=0.04\pm 0.080`$. The colors and magnitudes for the variable stars were determined following a different procedure. For variable V1 its average magnitude $`V`$ was used to place it on the CMDs. V2 was plotted with its magnitude outside of the eclipses, $`V_{max}`$. To derive the colors we used the single $`B`$ and $`I`$ exposures and interpolated the $`V`$ magnitudes from the exposures in $`V`$ nearest to those epochs. The final values of $`B-V`$ and $`V-I`$ were taken as the average of two color determinations, with an average scatter of 0.04 mag. The resultant $`V/(B-V)`$ and $`V/(V-I)`$ color-magnitude diagrams are shown in Figure 3, with the variable V1 denoted by an open circle and V2 by an open square. Only stars within 2′ of the cluster center are plotted. Both variables are located among candidate blue stragglers, although V2 appears to be located behind the cluster, based on the distance modulus determination in the previous section. The cluster main sequence is apparent in both diagrams. It exhibits considerable scatter and there is some indication of a second sequence running above it, although not as clear as in Figure 3 of McClure et al. (1985). This would indicate that E3 could possess a significant population of binary stars. This is in agreement with the idea proposed by van den Bergh et al. (1980) that severe tidal stripping had depleted the cluster in single stars, leading to an increased binary frequency. A gap in the main sequence near the turnoff, at $`V\approx 19.5`$, is visible in both CMDs. This has been previously noted by McClure et al. (1985) and shown to be more of a visual effect, as no significant discontinuities are present in the cumulative luminosity function for stars on the main sequence (Figure 6 therein). This result is confirmed by our analysis. The subgiant and lower giant branches are also discernible in the diagrams, although they show substantial scatter. This is regarded as a real feature of the cluster, as commented on in the literature (e.g. Hesser et al. 1984). A number of stars blueward of the turnoff are present, possibly blue stragglers belonging to the cluster, as first noted by van den Bergh et al. (1980). ## 5 Conclusions Our variability search in E3 resulted in the discovery of two variable stars: an SX Phe variable (V1) and a W UMa eclipsing binary (V2). We have applied period-luminosity and period-color-luminosity relations to the variables to obtain their distance moduli. V1 seems to be a blue straggler belonging to E3, based on its distance modulus and location on the CMD. V2 is probably located behind the cluster. We would like to thank Krzysztof Z. Stanek for his error scaling, database manipulation and period finding programs and Grzegorz Pojmański for $`lc`$ - the light curve analysis utility, incorporating the analysis of variance algorithm. BJM and JK were supported by the Polish KBN grant 2P03D003.17 and by NSF grant AST-9819787.
# The Cosmological Constant From The Viewpoint Of String Theory (hep-ph/0002297) Edward Witten Dept. of Physics, Cal Tech, Pasadena, CA 91125 and CIT-USC Center For Theoretical Physics, USC, Los Angeles CA The mystery of the cosmological constant is probably the most pressing obstacle to significantly improving the models of elementary particle physics derived from string theory. The problem arises because in the standard framework of low energy physics, there appears to be no natural explanation for the vanishing or extreme smallness of the vacuum energy, while on the other hand it is very difficult to modify this framework in a sensible way. In seeking to resolve this problem, one naturally wonders if the real world can somehow be interpreted in terms of a vacuum state with unbroken supersymmetry. Lecture at DM2000, Marina del Rey, February 23, 2000. On leave from Institute for Advanced Study, Princeton NJ 08540. March, 2000 The problem of the vacuum energy density or cosmological constant – why it is zero or extremely small by particle physics standards – really only arises in the presence of gravity, since without gravity, we don’t care about the energy of the vacuum. Moreover, it is mainly a question about quantum gravity, since classically it would be more or less natural to just decide – as Einstein did – that we do not like the cosmological constant, and set it to zero. Classically, it may involve some fine-tuning to set the cosmological constant to zero, but once this is done, that is the end of the story. Quantum mechanically, it will not help to just set the cosmological constant to zero in a microscopic Lagrangian. The observed value of the vacuum energy density potentially includes contributions from things such as loops of soft photons. The puzzle is why the vacuum energy is so small after including all of these contributions. As the problem really involves quantum gravity, string theory is the only framework for addressing it, at least with our present state of knowledge. Moreover, in string theory, the question is very sharply posed, as there is no dimensionless parameter. Assuming that the dynamics gives a unique answer for the vacuum, there will be a unique prediction for the cosmological constant. But that is, at best, a futuristic way of putting things. We are not anywhere near, in practice, to understanding how there would be a unique solution for the dynamics. In fact, with what we presently know, it seems almost impossible for this to be true, rather as a few years ago it seemed almost impossible that the different string theories would turn out – as they have – to be limiting cases of one more unified theory. Not understanding why the cosmological constant is zero, or extremely small, is in my opinion the key obstacle to making the models of particle physics that can be derived from string theory more realistic. In fact, as I will explain, that has been true since 1985. Any time that one does not understand something, one can point to details that do not work. It is always important to identify what is wrong qualitatively, as this gives the best clue to possible future progress. Let me give an analogy. In the early 1980’s, it was fairly clear that string theory gave consistent models of quantum gravity and that in that framework, one was forced to unify gravity with other forces.
Many details were not right, but the most striking qualitative problem was that in the models of particle physics that could be derived from string theory at that time, the weak interactions would have to conserve parity. Parity violation in weak interactions is deeply embedded in the standard model, and the inability to reproduce it is a much more basic problem than an inability to compute the correct mass of the muon, for example. When this problem was cleared up, by the Green-Schwarz generalized anomaly cancellation mechanism in 1984, many other things fell into place – including the discovery of the heterotic string shortly afterwards – and the particle physics models became much more realistic. Since that time, the outstanding qualitative problem has been the cosmological constant, or more exactly, why it is zero or extremely small after supersymmetry breaking. Broadly speaking, models of particle physics derived from string theory work fine in the absence of supersymmetry breaking; but we do not have a convincing method of supersymmetry breaking. Hence, we can give a reasonably elegant (though not rigorous) explanation of quark and lepton gauge interactions and family structure. But we cannot say very much about masses or mixing angles. The basic diagnostic test for knowing that a mechanism of supersymmetry breaking is wrong is that it generates a cosmological constant. On this basis, all the known approaches to supersymmetry breaking are wrong. Thus, the cosmological constant is the main clue to improving the particle physics models. Without supersymmetry breaking, there is no problem in getting a stable vacuum with zero cosmological constant. If $`\varphi `$ is a generic scalar field and $`V(\varphi )`$ is the effective potential, then with unbroken supersymmetry one can naturally have a stable minimum of $`V(\varphi )`$ at which $`V=0`$. (In the abstract, unbroken supersymmetry allows negative $`V`$ as well, but when one actually tries to construct string theory models that could reproduce the supersymmetric standard model – for example via Calabi-Yau compactification – one naturally finds $`V=0`$.) Known mechanisms of supersymmetry breaking typically give an unstable runaway, with $`V`$ positive but vanishing as $`\varphi `$ goes to infinity, or else they give a $`V`$ that is not positive definite. (The runaway involves one or more scalars, depending on the model considered. I will use “$`\varphi `$” as a generic label for such scalars.) This is linked to the fact that the mechanisms of supersymmetry breaking that we understand generally turn off when the string coupling constant goes to zero or a compactification radius goes to infinity. And there generally is a scalar field $`\varphi `$ that controls the string coupling constant or compactification radius. So supersymmetry breaking turns off as $`\varphi `$ goes to infinity [1]. Scenarios with an unstable runaway have been considered by many physicists over the years. As far as I know, such scenarios were first considered by Dirac in his approach to the large numbers problem. In recent years, experimental evidence for a nonvanishing cosmic energy density in today’s universe has suggested that (as an alternative to a true cosmological “constant”) the actual universe may be undergoing such a runaway. Such scenarios have been much discussed (along with other possible cosmic components of negative pressure) under the name “quintessence.” A discussion of the observational situation can be found in [2].
These issues will be further addressed by other speakers at the present meeting. Going back to string theory, in view of the remarks in the last paragraph, supersymmetry breaking mechanisms that lead to an unstable runaway – $`V(\varphi )`$ has its minimum at $`\varphi =\infty `$, where it vanishes – should be considered seriously. But there are severe difficulties with them. First of all, as $`\varphi \to \infty `$, we need the bose-fermi mass splittings to vanish fast enough relative to the vanishing of $`V(\varphi )`$. This is simply the cosmological constant problem restated for the case of a runaway solution rather than a stable vacuum. Known supersymmetry breaking mechanisms fail this test. Moreover, even if we can arrange for $`V(\varphi )`$ to vanish fast enough for large $`\varphi `$, compared to the mass splittings, this kind of scenario has to face acute obstacles. Unless we restrict the couplings of $`\varphi `$ in a way that has no apparent rationale in mechanisms of supersymmetry breaking that I can think of, we will find that as $`\varphi `$ varies, the natural “constants” will change in time. We are at risk of finding $`\dot{G}/G`$ and $`\dot{\alpha }/\alpha `$ close to $`\dot{\varphi }/\varphi `$ (here $`G`$ is Newton’s constant, measured in units in which $`\mathrm{}`$, $`c`$, and the proton mass are fixed; $`\alpha `$ is the fine structure constant). By making $`V`$ sufficiently flat so that $`\dot{\varphi }/\varphi `$ is sufficiently small, and making the couplings of $`\varphi `$ sufficiently weak, we can perhaps bring $`\dot{G}/G`$ and $`\dot{\alpha }/\alpha `$ within acceptable limits. Even so, we will still, in general, be in trouble because if $`V`$ is so flat and $`\dot{\varphi }`$ so small as such a scenario will require, then $`\varphi `$ will behave in laboratory and solar system measurements as a massless field. We will generically expect that coherent and perhaps even spin-dependent forces mediated by $`\varphi `$ will show up in tests of the equivalence principle, solar system tests of General Relativity, and perhaps in “fifth force” measurements. I actually personally find it very surprising, if nature is exhibiting a runaway with cosmic evolution of a scalar field $`\varphi `$, that the effects of the $`\varphi `$ field have not been seen directly already in one or more of the experiments that I have mentioned. Especially if experimental indications of a cosmological “constant” hold up, I think that experiments that will improve the bounds on $`\varphi `$ couplings are important and promising. One proposal, for example, is a satellite measurement (STEP) that could possibly improve the tests of the equivalence principle by a factor of $`10^6`$. This would improve the bounds on the couplings of $`\varphi `$ by a factor of $`10^3`$. The bound at present is, roughly speaking, that the couplings of $`\varphi `$ are a couple of orders of magnitude smaller than gravitational coupling. In string theory, a $`\varphi `$ field typically has a coupling not much weaker than gravity, and suppression of the $`\varphi `$ couplings relative to gravity puts a severe restriction on the model. From this perspective, in runaway scenarios it would be surprising if $`\varphi `$ exists and would not be detected in an experiment that would improve the test of the equivalence principle by a factor of $`10^6`$. I have emphasized scenarios with a runaway because I think that they are suggested by experimental findings of possible cosmic acceleration.
One advantage of a runaway rather than a true cosmological “constant” is that, by analogy with a zero cosmological constant scenario first outlined by Dyson many years ago [3, 4], in a runaway scenario, life can possibly adapt and survive and develop forever by working at lower and lower temperatures and with longer and longer time and length scales. A strict cosmological constant would bring all this to a grim end by introducing effective length and time cutoffs. On the other hand, as I have also tried to explain, the difficulties with runaway scenarios are so severe that one would probably have to find something really interesting and new to get a scenario that works. I should also mention an alternative (which still fits within the general rubric of “quintessence”) which is less ambitious and dramatic than the runaway but faces much less severe experimental difficulties. We could assume that the potential $`V(\varphi )`$ has a stable minimum at which it vanishes, but that the actual universe has not yet reached the minimum because the potential is very flat. For example, $`\varphi `$ might be an axion-like field, which in particular is angle-valued. The potential, coming from instantons, might be something like $`V(\varphi )=V_0(1-\mathrm{cos}\varphi )`$, with $`V_0`$ a constant. I have adjusted an additive constant in $`V`$ so that the true minimum of the potential (at $`\varphi =0`$) has $`V=0`$ and zero cosmological constant. Either because $`V_0`$ is very small or because the universe started near the top of the potential bump at $`\varphi =\pi `$ (or both), $`\varphi `$ might still in today’s universe be away from the minimum of the potential and very slowly changing, so that it imitates a cosmological constant. Axion-like fields that get exponentially small potentials of roughly this kind do occur in many string models. If the universe is near the maximum of the potential today, or at least not too close to the minimum, then $`V_0`$ should be of order the observed vacuum energy density. A priori, this involves some fine-tuning, as does any description of a tiny but nonzero vacuum energy density. Such a scenario is not as challenging conceptually as a runaway. However, it has the practical advantage that there are mechanisms for suppressing the axion couplings to ordinary matter that would not apply to moduli that might lead to a runaway. Hence, in this kind of scenario, the experimental limits on light scalars are potentially much less problematic. In such a scenario, theorists would need to focus on why the true cosmological constant is exactly zero and why there is an axion with the right value for $`V_0`$. What about other scenarios? Interestingly, I cannot think of any known assumption or approximation in string theory that leads to the most obvious interpretation of recent experimental findings: a stable vacuum with a positive cosmological constant. There certainly is not an attractive known way to do this, and we certainly don’t have the techniques at the moment to find such a vacuum in which, in addition, the cosmological constant is extremely small and the particle physics is attractive. We do have stable vacua with negative cosmological constant (and sometimes unbroken supersymmetry). Such vacua presumably do not describe the real world, but exploring them has been in the last few years, beginning with Maldacena’s proposed duality [5], an arena for amazing theoretical advances.
From this work has come new approaches to nonperturbative description of quantum gravity, understanding of quark confinement in terms of properties of black holes, and more. What I have said so far has been very general. In the concluding portion of this talk, I will make a few remarks on some specific approaches to the cosmological constant problem. First of all, the problem is hard because of the following set of facts: (1) Solving the problem seems to require a low energy mechanism – to cancel contributions of loops of soft photons, for example. (2) But low energy physics in the standard framework of four-dimensional effective field theory does not seem to offer a solution to the problem. For a review of these issues, see [6]. (3) It is very hard to change the low energy framework in a sensible way, given all of the familiar successes. Faced with this conundrum, one way out (eloquently described by Weinberg in the lecture before mine) is an anthropic principle. This entails abandoning the quest for a conventional scientific explanation and interpreting the smallness of the cosmological constant as a necessary feature of our local environment, without which our human species could not have developed to ponder the question. I very much hope that things will not go that way, for several reasons. First, I think we need the cosmological constant as a clue to understand particle physics better; if the cosmological constant must be treated anthropically – and the approximate vacuum we live in is drawn fortuitously from an ensemble – I am not optimistic about how well we will be able to understand the aspects of particle physics that are not explained by the standard model. Second, I think we all prefer to see a conventional scientific explanation because it gives more understanding. Time and again the quest for a conventional scientific explanation has triumphed over seemingly impossible obstacles. And finally, I want to ultimately understand that, with all the particle physics one day worked out, life is possible in the universe because $`\pi `$ is between 3.14159 and 3.1416. To me, understanding this would be the real anthropic principle. I don’t want to lose it. On the other hand, it is not clear to me which alternative approaches to the cosmological constant are actually worth mentioning in the ten minutes remaining. I will make a slightly eccentric choice, guided by the following considerations. Supersymmetry, if unbroken, would give a natural explanation of the vanishing of the cosmological constant.<sup>1</sup> This is not automatic, since unbroken supersymmetry can coexist with negative cosmological constant, as I have mentioned earlier. But in many situations, with higher dimensions, chiral symmetries, natural microscopic constructions via Calabi-Yau manifolds, etc., unbroken supersymmetry does make vanishing of the cosmological constant natural. It seems a pity to waste it. Moreover, unbroken supersymmetry, if valid, gives a natural explanation of the stability of spacetime. This last point perhaps needs some explanation. The Einstein action $`\int d^4x\sqrt{g}R`$ has no obvious positivity or stability property. Positive energy of classical General Relativity can nonetheless be proved – for instance, using the possibility of extending the classical theory to include fermions in a supersymmetric way. But positive energy and stability of the vacuum fail in many plausible extensions of General Relativity such as nonsupersymmetric Kaluza-Klein theory [7, 8].
I think it is a mystery why any nonsupersymmetric string vacuum would be stable. Hence, I wonder if we can somehow make use of the vacua with unbroken supersymmetry to describe nature. Moreover, a decade ago, one might have hoped that the supersymmetric string vacua were nonperturbatively inconsistent. It has become pretty clear with all the results of the 1990’s on nonperturbative string dualities that this is not so. If they are consistent, it is a shame to waste them. Can we somehow reinterpret the real world in terms of unbroken supersymmetry, suitably construed, even though the boson and fermion masses are different? I know of two tries in this direction. Both involve relating the observed $`D=4`$ universe to a world of a different dimension. One scenario starts with $`D<4`$, and the other starts with $`D>4`$. For the first scenario [9], we consider a supersymmetric string vacuum in $`D=3`$ (or similarly in $`D<3`$) with the dilaton or string coupling constant $`g`$ as the only modulus. Despite the supersymmetry of the vacuum, because of a peculiar three-dimensional infrared divergence, the boson and fermion masses are not equal – they are equal at tree level, but are split by gravitational corrections. We assume that, for example because of a discrete $`R`$-symmetry, the cosmological constant vanishes. Now we take $`g\to \infty `$. We have learned in the 1990’s that sometimes when a coupling constant goes to infinity, a new dimension of spacetime opens up. Let us assume this occurs in the present case. The cosmological constant will remain zero because it is zero for all $`g`$. Conceivably, as $`g\to \infty `$, the boson and fermion masses (which were unequal for all finite nonzero $`g`$) remain unequal. To know, we would need some new insight about dynamics. If so, the $`g\to \infty `$ limit would be a four-dimensional world with vanishing cosmological constant and unequal bose-fermi masses. It is not obvious that it could be interpreted, in a dual description, as a four-dimensional world with conventional spontaneously broken supersymmetry, but perhaps this is also possible. Are there any models that actually exhibit the rather optimistic dynamics that I have just suggested? One reason that it is hard to find out is that models whose nonperturbative dynamics we know something about usually have many moduli and can undergo partial or complete decompactification even for weak coupling. Because of the way the scenario depends on an infrared divergence that is limited to $`D\le 3`$, I think that it is unlikely that the desired dynamics will occur in any model that can undergo partial decompactification to $`D\ge 4`$ at fixed coupling. This will make it hard to learn in the near future with known methods if the scenario does work as hoped. Now, I will discuss an alternative idea, based on a suggestion by Gregory, Rubakov, and Sibiryakov [10]. They proposed a scenario in which the world looks four-dimensional up to a very large (presumably cosmological) length scale $`R`$, but above that length scale is $`D`$-dimensional with $`D\ge 5`$. A discussion of the model aiming to verify that it has the claimed quasi four-dimensional behavior can be found in [11]. Since the GRS scenario does not obey the usual assumptions of four-dimensional low energy effective field theory, which lead to the cosmological constant problem, we should naturally reexamine the problem in this scenario. (In fact, this was done independently in a paper [12] that appeared on the hep-th bulletin board the same day this talk was given.
In this paper, it is also claimed that the GRS scenario leads to scalar-tensor gravity rather than pure tensor and so is excluded experimentally.) For our purposes, we will start with a world of $`D\ge 5`$ with unbroken supersymmetry. For example, to be definite, we will take $`D=5`$. We assume that the five-dimensional cosmological constant vanishes. (This is natural in some supergravity models in five dimensions, and would, according to Nahm’s theorem, automatically occur in all supersymmetric models in $`D\ge 8`$.) We assume that we live on a four-dimensional “brane” which is nonsupersymmetric (that is, non-BPS). This assumption makes the observed macroscopic four-dimensional physics nonsupersymmetric [13]. It is natural for the brane to be flat because this gives a minimal volume hypersurface in five-dimensional Minkowski space. So we naturally get a flat four-dimensional world with unequal masses for bosons and fermions. But do we observe four-dimensional gravity? The idea of GRS was to get approximate four-dimensional gravity by “localizing” a graviton on the brane along the lines of Randall and Sundrum [14]. The GRS idea can be illustrated with a metric of the general form $$ds^2=dr^2+\left(e^{-2r/r_0}+ϵ\right)\sum _{i=1}^{4}dx_idx^i.$$ Here $`r\ge 0`$; there is some sort of “brane” at the $`r=0`$ boundary of spacetime. If $`ϵ=0`$, this metric describes a portion of five-dimensional Anti de Sitter space, with negative cosmological constant. We could get more of Anti de Sitter space by continuing to $`r=-\infty `$. According to the AdS/CFT correspondence, quantum gravity in this five-dimensional spacetime (with $`-\infty <r<\infty `$) is equivalent to an ordinary four-dimensional conformal field theory, without gravity. The four-dimensional world is the virtual “boundary” at $`r=-\infty `$. Randall and Sundrum truncated the Anti de Sitter space to $`r\ge 0`$, and observed that in this case one does get dynamical four-dimensional gravity.<sup>2</sup> To be more precise, one gets four-dimensional gravity coupled to a cutoff version of the conformal field theory; the low energy physics is conventional four-dimensional physics with General Relativity coupled to some massless matter fields that are governed by a nontrivial infrared fixed point. The cusp-like behavior of the metric for $`r=+\infty `$ corresponds in the AdS/CFT correspondence to the nontriviality of the IR fixed point. In particular, if $`ϵ=0`$, a four-dimensional graviton is “localized” near $`r=0`$. GRS modified this scenario by taking $`ϵ`$ nonzero but tiny. As long as $`e^{-2r/r_0}\gg ϵ`$, the correction is unimportant. This bound on $`r`$ means that if the four-dimensional lengths are not too large, the physics will look effectively four-dimensional, just as if $`ϵ=0`$. However, if we take $`r\to \infty `$, the $`ϵ`$ term dominates and the metric becomes five-dimensional Minkowski space. Hence, at very long distances, the world is effectively five-dimensional. (I have parametrized the correction in a way that makes it appear that extreme fine-tuning is required to get $`ϵ`$ sufficiently small. GRS present the discussion in a way that makes this look more plausible, though their analysis has a “negative energy” problem that I will mention later.)
One apparent problem with this scenario (apart from possible issues raised in [12]) is that it cannot be realized with matter fields that obey even the weakest form of a positive energy condition that is usually considered physically acceptable, which is that the stress tensor $`T_{ij}`$ obeys $`n^in^jT_{ij}\ge 0`$ for any lightlike vector $`n`$. Indeed, given this energy condition, one can prove a “holographic $`c`$-theorem” in the AdS/CFT correspondence [15–17]. The $`c`$-theorem says that as one flows to large $`r`$, the effective five-dimensional cosmological constant can only become more negative. Any metric that looks like the ultraviolet end of Anti de Sitter space near $`r=0`$, and like five-dimensional Minkowski space near $`r=+\infty `$, violates this bound. In fact, a general five-dimensional metric with four-dimensional Poincaré symmetry can be put in the form $$ds^2=dr^2+e^{2A(r)}\sum _{i=1}^{4}dx_idx^i,$$ with some function $`A(r)`$. As shown in section 4.2 of [17], the weak energy condition implies that $`A^{\prime \prime }\le 0`$. This inequality makes it impossible to have a metric that behaves qualitatively like (1) in the whole range of $`r\ge 0`$. This point has been made independently. GRS in fact incorporated a brane of negative tension in their construction of the model. This is one way to violate the weak energy condition. It seems likely that the physics with violation of the weak energy condition is unstable. I do not know if, starting in $`D>5`$, it is possible to get a metric of the qualitative form needed for the GRS scenario while also respecting the weak energy condition. I will conclude by briefly mentioning some other interesting recent developments that were omitted in the actual talk at DM2000 for lack of time. On the one hand, there are recent proposals [18, 19] attempting to use a five-dimensional solution with a singularity (as well as a brane on which standard model fields are localized) to get a small cosmological constant. The behavior of these models will depend very much on what assumption is made about the physics of the singularity. Note that one should not expect that a generic singularity will turn out to have any sensible physical interpretation at all. Elliptic equations (such as the ones that describe time-independent solutions of General Relativity) have an abundance of singular solutions, and only special ones can be given a physical interpretation at the quantum level. I will make a digression on this point. A familiar example of a singular solution that should not be given a physical interpretation is the singular Dirac monopole solution of QED. We do not want to predict magnetic monopoles in QED with a mass that (in units of the electron mass) would only depend on the fine structure constant! By contrast to this example, all of the singular charge-bearing “brane” solutions of ten and eleven-dimensional supergravity have turned out to be approximations to objects that actually exist in the pertinent quantum theories. Apart from the fact that fortune sometimes favors the brave, why did this occur? I think it may be that in the presence of gravity, since a charge-bearing singularity can be hidden behind a black hole horizon, objects carrying all of the gauge charges must exist. This contrasts with QED which does not include gravity and does not have magnetic monopoles.
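Returning to the warp-factor argument above: the violation of the bound $`A^{\prime \prime }\le 0`$ by the GRS-type metric (1) is easy to check symbolically. The following sketch (our illustration, not part of the original talk) writes the warp factor as $`e^{2A}=e^{-2r/r_0}+ϵ`$ and verifies that $`A^{\prime \prime }>0`$ everywhere once $`ϵ>0`$.

```python
# Symbolic check (illustrative sketch) that the GRS-type warp factor
# e^{2A} = e^{-2 r / r0} + eps gives A'' > 0 for eps > 0, violating the
# holographic c-theorem bound A'' <= 0 implied by the weak energy condition.
import sympy as sp

r, r0, eps = sp.symbols('r r0 epsilon', positive=True)
A = sp.Rational(1, 2) * sp.log(sp.exp(-2 * r / r0) + eps)
App = sp.simplify(sp.diff(A, r, 2))

# A'' reduces to 2*eps*exp(-2r/r0) / (r0**2 * (exp(-2r/r0) + eps)**2),
# which is manifestly positive; spot-check it numerically as well.
sample = App.subs({eps: sp.Rational(1, 10**6), r0: 1, r: 10})
print(App, sp.N(sample) > 0)   # prints the expression and True
```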
In any event, the generic singularity of a time-independent solution of General Relativity is much “worse” – much less likely to have a quantum interpretation – than the minimal charge-bearing singularities. So the singularities encountered in [18, 19] may well not have any sensible physical interpretation at all, especially in cases in which (as described in the second paper in [19]) the singular solution has a modulus that controls the four-dimensional cosmological constant. If the solution with the singularity were physically sensible, one would expect the modulus to correspond to a physical field, which would then relax to minimize the vacuum energy – not necessarily reaching the value zero. If a singularity does have a physical interpretation in terms of four-dimensional effective field theory, perhaps with some fields supported near the singularity, this still seems likely at best to lead to a restatement of the problem of the cosmological constant, in a new context. The best hope might be that the framework of low energy effective four-dimensional field theory would somehow break down because of the behavior near the singularity, perhaps leading to something roughly along the lines of GRS. Finally, another interesting development is the study of nonplanar loop diagrams in noncommutative geometry [20]. A surprising connection between infrared and ultraviolet dynamics was found, wherein what would usually be regarded as the ultraviolet region of a Feynman diagram produces infrared singularities. This evades the usual renormalization group thinking by means of which one usually deduces an effective low energy field theory description – which leads to trouble with the cosmological constant. (However, the specific phenomenon found in [20] can be described in a low energy effective field theory, albeit one that contains fields that one might not expect.) As has been noted by the authors of [20], such an IR/UV connection might, if other elements fall into place, be an ingredient in an eventual solution of the cosmological constant problem. This work was supported in part by NSF Grant PHY-9513835 and the Caltech Discovery Fund.
References
[1] M. Dine and N. Seiberg, “Is The Superstring Semiclassical?” in Unified String Theories, ed. M. B. Green and D. J. Gross (World Scientific, 1986); “Is The Superstring Weakly Coupled?” Phys. Lett. B156 (1985) 55.
[2] L. Wang, R. R. Caldwell, J. P. Ostriker, and P. J. Steinhardt, “Cosmic Concordance and Quintessence,” astro-ph/9901388.
[3] F. W. Dyson, Rev. Mod. Phys. 51 (1979) 447.
[4] S. Frautschi, “Entropy In An Expanding Universe,” Science 217 (1982) 593.
[5] J. Maldacena, “The Large $`N`$ Limit Of Superconformal Field Theories And Supergravity,” Adv. Theor. Math. Phys. 2 (1998) 231.
[6] S. Weinberg, “The Cosmological Constant Problem,” Rev. Mod. Phys. 61 (1989) 1.
[7] E. Witten, “Instability Of The Kaluza-Klein Vacuum,” Nucl. Phys. B195 (1982) 481.
[8] M. Fabinger and P. Horava, “Casimir Effect Between World Branes In Heterotic $`M`$ Theory,” hep-th/0002073.
[9] E. Witten, “Is Supersymmetry Really Broken?” Int. J. Mod. Phys. A10 (1995) 1247, hep-th/9409111; “Some Comments On String Dynamics,” in Strings ’95: Future Perspectives In String Theory, ed. I. Bars et al., hep-th/9507121.
[10] R. Gregory, V. A. Rubakov, and S. M. Sibiryakov, “Opening Up Extra Dimensions At Ultra Large Scales,” hep-th/0002072.
[11] C. Csaki, J. Erlich, and T. J. Hollowood, “Quasi-Localization Of Gravity By Resonant Modes,” hep-th/0002161.
[12] G. Dvali, G. Gabadadze, and M. Porrati, “Metastable Gravitons And Infinite Volume Extra Dimensions,” hep-th/0002190.
[13] J. Hughes, J. Liu, and J. Polchinski, “Supermembranes,” Phys. Lett. B180 (1986) 370.
[14] L. Randall and R. Sundrum, “An Alternative To Compactification,” Phys. Rev. Lett. 83 (1999) 4690, hep-th/9906064.
[15] E. Alvarez and C. Gomez, “Geometric Holography, The Renormalization Group, and the $`c`$ Theorem,” Nucl. Phys. B541 (1999) 441, hep-th/9807226.
[16] L. Girardello, M. Petrini, M. Porrati, and A. Zaffaroni, “Novel Local CFT and Exact Results On Perturbations Of $`N=4`$ Super Yang-Mills From AdS Dynamics,” JHEP 12 (1998) 22, hep-th/9810126.
[17] D. Z. Freedman, S. S. Gubser, K. Pilch, and N. P. Warner, “Renormalization Group Flows From Holography – Supersymmetry And A $`c`$-Theorem,” hep-th/9904017.
[18] N. Arkani-Hamed, S. Dimopoulos, N. Kaloper, and R. Sundrum, “A Small Cosmological Constant From A Large Extra Dimension,” hep-th/0001197.
[19] S. Kachru, M. Schulz, and E. Silverstein, “Self-Tuning Flat Domain Walls In $`5D`$ Gravity And String Theory,” hep-th/0001206; “Bounds On Curved Domain Walls in 5-D Gravity,” hep-th/0002121.
[20] S. Minwalla, M. Van Raamsdonk, and N. Seiberg, “Noncommutative Perturbative Dynamics,” hep-th/9912072.
# The International Gravitational Event Collaboration ## 1 The International Gravitational Event Collaboration Table I lists the five cryogenic resonant antennas now in operation. These detectors are aluminum (niobium in the case of the Australian one) cylindrical bars, all equipped with resonant transducers, which give them two narrow detection bands of a few hertz around the two nearby coupled modes of the bar-transducer system, at frequencies of about 900 Hz (about 700 Hz for Niobe). These are called the detection bands of the antenna; in some detectors they can constitute a single wider band. The orientation of the bars was chosen in order to achieve the maximum parallelism among them. In order to coordinate and correlate the data produced by these detectors, in July 1997 all the groups that operate them signed a data exchange protocol named ”International Gravitational Event Center” (”IGEC”). It is based on some technical and policy rules. The main points are: - The data consist of ”candidate events” and service information. - The minimum information about each event consists of the time of the event maximum, the amplitude in units of standard burst strain, the duration and the mean detector noise at the time of the event. - Additional fields, giving further information on the candidate events (e.g. the shape and/or parameters relating to the shape), may be provided. - Time accuracy will be at least 0.1 s. - Each group will set up a site (ftp, www, …) on which the data will be continuously available in agreed formats, with an updating rate not exceeding one day. The full text of the agreement, together with other data exchange documents, is posted at http://grwav1.roma1.infn.it/gwda/de/de.htm . In this paper I will show how the candidate events are produced by the Rome group (which operates the two cryogenic resonant antennas Explorer and Nautilus) and how the IGEC data can be analyzed. | Antenna | Institution | City | | --- | --- | --- | | Allegro | LSU | Baton Rouge | | Auriga | LNL (INFN-AURIGA) | Legnaro | | Explorer | CERN (INFN-ROG) | Geneva | | Nautilus | LNF (INFN-ROG) | Frascati | | Niobe | UWA | Perth | ## 2 How the events are obtained The simplified modeling of the detector as a linear system is normally very good, and the gravitational signal is simply added to the noise; nevertheless, the detection of short gravitational pulses (with unknown shape) in the data of gravitational wave antennas is not an easy task. The main reasons are: - the low signal-to-noise ratio (SNR) and the rarity of the expected pulses - the ignorance of their shape - the non-stationarity of the noise of the detectors and the presence of many spurious events. As regards the shape of the pulses, resonant antenna groups normally consider as a standard pulse a delta function (or the part of that signal in the detector band), which, in the data at the output of the antenna, becomes a known waveform. The noise is considered Gaussian and stationary (although some consider it slowly non-stationary), completely described by the noise power spectrum. With these assumptions, the problem becomes the classical problem of detecting a known waveform in Gaussian noise, which is optimally solved by the ”matched filter”. This filter is a non-causal linear system with the property that at its output the signal-to-noise ratio (i.e. the ratio between the square of the maximum of the signal and the variance of the noise) is maximized. There are many ways to express the equation of the matched filter. In the frequency domain it is $$M(j\omega )=\frac{F^{\ast }(j\omega )}{S(\omega )}$$ (1) where $`F^{\ast }(j\omega )`$ is the complex conjugate of the Fourier transform of the response of the antenna to the standard pulse and $`S(\omega )`$ is the noise power spectrum.
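As an illustration of Eq. 1, a direct frequency-domain transcription fits in a few lines of Python. This is our own minimal sketch, not any group's pipeline; the sampling rate, the damped-sinusoid template and the flat noise spectrum are toy assumptions.

```python
# Minimal frequency-domain matched filter in the spirit of Eq. 1 (a sketch;
# sampling rate, template and flat PSD are toy assumptions, not IGEC code).
import numpy as np

def matched_filter(data, template, psd):
    """Return data filtered with M(f) = conj(F(f)) / S(f)."""
    D = np.fft.rfft(data)
    F = np.fft.rfft(template, n=len(data))
    return np.fft.irfft(D * np.conj(F) / psd, n=len(data))

# Toy example: a damped 816 Hz sinusoid buried in white noise.
rng = np.random.default_rng(0)
n, fs = 4096, 5000.0
t = np.arange(n) / fs
template = np.exp(-t / 0.05) * np.sin(2 * np.pi * 816.0 * t)
data = rng.normal(size=n) + 0.5 * template   # pulse injected at t = 0
psd = np.full(n // 2 + 1, 1.0)               # flat spectrum for white noise
out = matched_filter(data, template, psd)
print(np.argmax(np.abs(out)))                # peaks near the injection time
```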
This means that, if we normalize the filter so as to have the same amplitude of the maximum of the input and output signal, the output noise (which is also Gaussian because of the linearity of the filter) has the minimum variance, so the probability that the noise samples exceed the threshold is minimized. As a matter of fact, each group implements the filter in practice in one or more different ways. This is because: - there are different ways of taking data (high frequency sampling, aliased sampling, two lock-ins acquisition, single central lock-in, …) - the use of frequency domain procedures or time domain procedures - the use of adaptive or non-adaptive procedures - the threshold mechanism - particular procedures of event feature estimation and/or spurious event detection. In our case, the most advanced procedure that we use is based on a high frequency sampling (5000 Hz), from which we extract a band of 40 Hz centered at 816 Hz, containing the detection bands. Other bands are also analyzed in order to monitor the behavior of the antenna and the presence of local disturbances, but this will not be discussed here. On these data we implement three adaptive matched filters (and also filters of other types), followed by a two-state adaptive threshold mechanism. ## 3 The adaptive matched filters The noise power spectrum of our antennas is not stationary: both the electric noise, which normally gives a wide-band and a narrow-band contribution, and the seismic noise, which normally gives a narrow-band contribution, change with time. Moreover, extra noise peaks often appear in the spectrum, due to local disturbances. These bands change slowly in amplitude and in frequency. Almost all these non-stationarities have time constants of at least a few hours. The presence of the non-stationarities means that the detector has a time-varying sensitivity. Also the resonance frequencies of the antennas can slowly change, mainly because of changes in the transducer polarization voltage. In order to have a good filter in the presence of these varying features of the detectors, we use an adaptive approach: we apply the filter in the frequency domain, using the expression of Eq. 1, and use, for the power spectrum, a first-order auto-regressive sum of the periodograms, with a time constant of one hour and an updating rate of about one minute. The filter is thus recomputed about every minute; this is our ”basic” adaptive filter. In the presence of a big disturbance, the spectrum estimated by the basic filter is ”dirtied” and can remain dirty for a long time (even many time constants) after the end of the disturbance, producing filters that are not optimal for the ”clean” data. To solve this problem, we have implemented a matched filter that uses only ”clean” periodograms for the estimation of the power spectrum. We also studied another policy, which gives good results in the case that the disturbed period can be seen as a low-sensitivity period. It is based on the reduction of the auto-regressive time constant in the case of highly disturbed spectra.
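The auto-regressive spectrum update just described amounts to one line per frequency bin. A sketch of the rule (our notation; the "clean" flag stands for the disturbance test, which is left to the caller):

```python
# First-order auto-regressive (exponentially weighted) spectrum update,
# one step per new periodogram (a sketch of the rule described above).
import numpy as np

def update_spectrum(S_prev, periodogram, dt_min=1.0, tau_min=60.0, clean=True):
    """Blend the new periodogram into the running spectrum estimate."""
    if not clean:
        return S_prev                      # skip a "dirty" periodogram
    w = np.exp(-dt_min / tau_min)          # memory factor for tau = 1 hour
    return (1.0 - w) * periodogram + w * S_prev
```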
In order to evaluate the performance of the different filters, we add fictitious ”theoretical” pulses to the acquired data and analyze the resulting SNR; this is done every half hour, disturbing the real data for less than 0.1% of the time. We chose as ”official filter” the filter that gives the best results at that time. This procedure also provides a sort of Monte Carlo check for the whole system and can also be used to analyze the behavior in the case of non-delta-pulse events. ## 4 The threshold mechanism ### 4.1 The adaptive threshold Because of the non-stationarity of the antenna noise, the noise at the output of one matched filter (or of any linear filter) is also non-stationary, i.e. the variance of the noise changes (slowly) with time; this means that the sensitivity of the detector changes with time. If we put a fixed threshold on these data, we have more (spurious) candidate events when the noise is higher (and the sensitivity lower), and this can greatly worsen the statistics. So we must change the threshold with time. This can be done in various ways, for example by choosing a fixed number N and taking the N highest events of each hour. We use a different procedure, based on an ”adaptive threshold” defined in the following way. Let $`x_i`$ be the filtered data samples. We estimate the background statistics by computing the auto-regressive mean of the absolute value and of the square of $`x_i`$ $$m_i=(1-w)|x_i|+wm_{i-1}$$ (2) $$q_i=(1-w)x_i^2+wq_{i-1}$$ (3) with $$w=e^{-\frac{\mathrm{\Delta }t}{\tau }}$$ (4) where $`\mathrm{\Delta }t`$ is the sampling time and $`\tau `$ is the ”memory” of the auto-regressive mean (we normally choose $`\tau =600s`$). Then we define the standard deviation $$\sigma _i=\sqrt{q_i-m_i^2}$$ (5) and the threshold $`\theta `$ is set not on $`|x_i|`$, but on the critical ratio of $`|x_i|`$ given by $$z_i=\frac{|x_i|-m_i}{\sigma _i}$$ (6) This procedure was developed for the general case of $`x_i`$; if $`x_i`$ is simply zero mean, as in the case of the matched filter, the algorithm for estimating the adaptive threshold can be simplified, using just the estimate $`q_i`$ and computing the critical ratio as $`z_i=|x_i|/\sqrt{q_i}`$. In the case of very large values of $`\sigma _i`$, we can reduce the memory $`\tau `$ in order to reduce the ”blinding” effect at the end of large disturbances. ### 4.2 The event definition: the two-state event machine In order to define candidate events, let us suppose that we have chosen an adaptive threshold $`\theta `$ and a dead time (that is, the minimum time between two different events; it depends on the apparatus, the noise and the expected signal: we normally use 3 s). Then we use a simple two-state (0 and 1) mechanism that we call the ”event machine”. The algorithm performed is the following. - The machine is normally set in state $`0`$. - When the signal goes over the threshold, it changes to state $`1`$ and an event begins. - The state changes back to $`0`$ after the signal has remained below the threshold for a time longer than the dead time. - The ”duration” of the event is the duration of state $`1`$ minus the dead time.
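Before turning to the Markov-chain bookkeeping below, here is a compact sketch of Eqs. 2–6 and of the event machine (our illustration; the start-up values and the guard against a vanishing $`\sigma _i`$ are our own choices):

```python
# Sketch of the adaptive critical ratio (Eqs. 2-6) and the two-state event
# machine; start-up transients and the sigma guard are our own choices.
import numpy as np

def critical_ratio(x, dt=1.0, tau=600.0):
    w = np.exp(-dt / tau)
    m = q = 0.0
    z = np.empty(len(x))
    for i, xi in enumerate(x):
        m = (1.0 - w) * abs(xi) + w * m            # Eq. 2
        q = (1.0 - w) * xi * xi + w * q            # Eq. 3
        sigma = np.sqrt(max(q - m * m, 1e-30))     # Eq. 5, guarded
        z[i] = (abs(xi) - m) / sigma               # Eq. 6
    return z

def event_machine(z, theta, dead_samples):
    """Return (start, end) sample pairs for events above threshold theta."""
    events, state, below, start = [], 0, 0, 0
    for i, zi in enumerate(z):
        if state == 0 and zi > theta:
            state, start, below = 1, i, 0
        elif state == 1:
            below = 0 if zi > theta else below + 1
            if below > dead_samples:               # dead time expired
                events.append((start, i - below))  # duration excludes it
                state = 0
    return events
```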
A simplified model of this algorithm is a two-state Markov chain (in discrete time). Its transition matrix is $$\left(\begin{array}{cc}1-p_{01}& p_{01}\\ p_{10}& 1-p_{10}\end{array}\right)$$ (7) where $`p_{01}`$ and $`p_{10}`$ are the transition probabilities for the two states. We can easily compute the probabilities of the two states as $$p_0=\frac{p_{10}}{p_{01}+p_{10}}$$ (8) $$p_1=1-p_0$$ (9) and the mean length (in units of the sampling time) $$\overline{L}=\frac{1}{p_{10}}$$ (10) ### 4.3 The event density We can define the event density $`\lambda `$ as the number of events per unit time; the production of candidate events can then be modeled by a Poisson process with parameter $`\lambda `$. Obviously the value of $`\lambda `$ depends on the value of the adaptive threshold $`\theta `$. The functional dependence of $`\lambda `$ on $`\theta `$ depends on the distribution of the filtered data (or, more precisely, on its tail), which is Gaussian only in theory; in practice it depends strongly on the local disturbances. This is what ultimately limits the sensitivity of a single antenna. However, because the disturbances give heavy tails in the distributions, reducing the threshold (and so enhancing the sensitivity) in this region gives only a small increase in $`\lambda `$. This is not so for the Gaussian distribution (which has light tails). ## 5 The coincidences If we consider the case of $`N`$ antennas, each characterized by a $`\lambda _i`$, and if we choose a coincidence window of duration $`t_w`$, then under the hypothesis of uncorrelated data, which holds for the background noise events, we have an expected density of ”casual” coincidences given by $$\lambda ^{(N)}=t_w^{N-1}\prod _{i=1}^{N}\lambda _i$$ (11) $`t_w`$ is chosen depending on the apparatuses, the time precision and the light travel time delay between the antennas; good values range between $`0.1`$ and $`1s`$. Table 2 gives some values of the coincidence densities (expressed in number of coincidences per day) for the case in which all the $`\lambda _i`$’s are equal to a value $`\lambda `$ and $`t_w=0.1s`$. | | $`\lambda =100`$ events/day | $`\lambda =1000`$ events/day | | --- | --- | --- | | $`N=2`$ | $`1.15\times 10^{-2}`$ | $`1.15`$ | | $`N=3`$ | $`1.34\times 10^{-6}`$ | $`1.34\times 10^{-3}`$ | | $`N=4`$ | $`1.55\times 10^{-10}`$ | $`1.55\times 10^{-6}`$ | | $`N=5`$ | $`1.79\times 10^{-14}`$ | $`1.79\times 10^{-9}`$ | In order to evaluate the statistical significance of the observed coincidences, one should consider the appropriate Poisson statistics. However, the Poisson model is normally not a good approximation, because of all the non-stationarities in the detection process. In practice the $`\lambda `$’s are functions of time. In particular, there are - periodicities (e.g. the solar day) - aggregation (the disturbances are often clustered in time) - holes in the data. Also the signals (the ”true” events) are not expected to be uniformly distributed in time, because of the radiation pattern of the antennas and the non-uniformity of the spatial distribution of the biggest sources, which causes a sidereal-day modulation of the detection probability. To have an efficient evaluation method in the presence of these problems, in the case of two antennas, Weber introduced a non-parametric procedure that we call ”pulse correlation”.
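The numbers in Table 2 follow from Eq. 11 with a single unit conversion; a few lines of Python (our sketch) reproduce them before we move on to the pulse correlation:

```python
# Eq. 11: accidental coincidence rate for N antennas (rates in events/day,
# window t_w in seconds). Reproduces the entries of Table 2.
SECONDS_PER_DAY = 86400.0

def accidental_rate(rates_per_day, t_w_seconds):
    t_w_days = t_w_seconds / SECONDS_PER_DAY
    prod = 1.0
    for lam in rates_per_day:
        prod *= lam
    return t_w_days ** (len(rates_per_day) - 1) * prod

print(accidental_rate([100.0] * 2, 0.1))   # ~1.15e-2 coincidences/day
print(accidental_rate([1000.0] * 5, 0.1))  # ~1.79e-9 coincidences/day
```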
### 5.1 The pulse correlation The pulse correlation method is based on the evaluation of the casual (”background”) coincidence rate for two antennas, obtained by adding a bias time $`\tau `$ to the events of one antenna. $`\tau `$ is normally set equal to $`kt_w`$, with $`k`$ integer and $`-n\le k\le n`$ (e.g., $`n`$ can be set equal to 1000). Given a period of time $`T`$ (e.g. one day, one month, …), the number of coincidences $`C(\tau )`$ is computed. If we take the mean value $`\mu _C`$ of $`C(\tau )`$ excluding $`C(0)`$, the estimated number of ”true” coincidences is given by $$\stackrel{~}{C}=C(0)-\mu _C$$ (12) if $`\stackrel{~}{C}\ge 0`$. We can define $`\lambda _C=\mu _C/T`$ as the estimated chance coincident event density (analogous to $`\lambda ^{(2)}`$ of Eq. 11). Because of the non-stationarities, the expected shape of $`C(\tau )`$ for $`\tau \ne 0`$ is not uniform, and the evaluation of $`\mu _C`$ requires some care. I would note the analogy between the pulse correlation function and the classical cross-correlation function. To evaluate the chance probability of obtaining $`C(0)`$, one can use one of the two following methods: a) use the Poisson statistics with parameter $`\mu _C`$ b) compute the histogram of $`C(\tau )`$ and evaluate the ”frequentist” probability of obtaining $`C(0)`$. Both methods must be used with care, the first because of the uncertainty in the evaluation of $`\mu _C`$ and the second because it assumes that the event production is stationary on times of the order of $`nt_w`$. A particular use of the pulse correlation is in correlating the events of an antenna with themselves (”pulse auto-correlation”). This method of analysis can be used to identify the non-stationarities of the events: in particular, aggregation and periodicities. The pulse correlation can be applied to data selected by some ”single event” rules (e.g. the amplitude or the time of occurrence, which can be a particular solar or sidereal hour) or ”coincidence” rules (e.g. consider a coincidence only if the amplitude and/or the duration of the two events is about the same). In the evaluation of the probabilities, particular care must be taken with any choice made ”a posteriori”. ### 5.2 The pulse correlation in the case of more than two antennas How can we generalize the pulse correlation to the case of $`N>2`$ antennas? Consider the following two methods: a) the couple pulse correlation, obtained by summing the pulse correlations of all the $`N(N-1)/2`$ couples of antennas. We obtain a function $`C_N(\tau )`$. With this method there is a ”natural” weighting of multiple coincidences. In fact, a coincidence between only two antennas is counted just once, a coincidence among 3 antennas is counted 3 times, a 4-antenna coincidence 6 times and a 5-antenna coincidence 10 times. The background is the sum of all the backgrounds. b) the multiple pulse correlation $`C(\tau _1,\tau _2,\mathrm{\dots },\tau _{N-1})`$, obtained by adding a different bias time to the events of each of the first $`N-1`$ antennas. The delay variables $`\tau _i`$ can be chosen equal to $`kt_w`$, with $`-n\le k\le n`$ (e.g., $`n`$ can be set equal to 10; in this case, with five antennas, we have 194481 delays). In searching for coincidences among more than two antennas, it must be taken into account that, in the case of gravitational events that are not very large, the probability that an event is overlooked by a detector is not negligible. This is because of the presence of the additive noise, the differences in frequency, the imperfect parallelism and the differences in sensitivity, which, even if basically similar, change in time depending on the local noise. So methods like the multiple pulse correlation are good only in the case of huge events and, for small events, would produce a false dismissal probability near to 1.
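A minimal two-antenna version of the time-shift estimate of Sec. 5.1 can be sketched as follows (our illustration; the event lists are numpy arrays of arrival times in seconds):

```python
# Sketch of the two-antenna pulse correlation: C(tau) on a grid of time
# shifts, with the tau != 0 average as the background estimate (Eq. 12).
import numpy as np

def n_coincidences(t1, t2, t_w):
    """Events of list t1 with at least one t2 event within +/- t_w."""
    t2 = np.sort(t2)
    idx = np.searchsorted(t2, t1)
    lo = np.clip(idx - 1, 0, len(t2) - 1)
    hi = np.clip(idx, 0, len(t2) - 1)
    near = (np.abs(t1 - t2[lo]) <= t_w) | (np.abs(t1 - t2[hi]) <= t_w)
    return int(np.sum(near))

def pulse_correlation(t1, t2, t_w, n=1000):
    C = np.array([n_coincidences(t1, t2 + k * t_w, t_w)
                  for k in range(-n, n + 1)])
    mu_C = (C.sum() - C[n]) / (len(C) - 1)   # mean over the shifted slots
    return C[n] - mu_C, mu_C                 # (C~ of Eq. 12, background)
```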
More complex procedures have been presented in the literature. With these procedures, with five detectors it is also possible to estimate the position of the source in the sky and the polarization of the gravitational pulse, together with a better estimate of its energy. Also the rejection of spurious events is enhanced. Unfortunately these methods give poor results with parallel antennas. ## 6 Another perspective Because of the rarity of the gravitational events, it is important not to overlook the presence of real events in the data. So any method used to reduce the false detection probability should not increase the false dismissal probability too much. This means that we should reject a candidate event only after a careful analysis. On the other hand, even very low probability coincidence events must be carefully checked for consistency, e.g. by comparing the amplitudes and the durations in the different antennas, or by checking in the other antennas, which did not detect events at that time, whether there is ”something” (i.e. an event under the threshold). As shown in Table 2, the multiple coincidence operation (2 or 3 antennas out of 5) strongly reduces the number of candidate events; this can be done automatically. For the surviving candidates we should carefully (i.e. not automatically) analyze the outputs of all the antennas in operation, the signal shapes and the auxiliary channels. It is important to have a database of all the events, with some characteristics (e.g. the ”color”, i.e. some spectral parameters, the length, the shape, …). Very important information to take into account is given by other impulsive astrophysical events, e.g. the gamma-ray bursts and the neutrino bursts detected in underground experiments. These must constitute a complementary database to be analyzed routinely together with the candidate events. Also supernova surveys can be useful, in order to interpret correctly the other detectors’ results, even if they do not give precise event times. After all, the ”true decision” on the ”promotion” of a candidate event will be made not by an algorithm, but by a long discussion.
# Partition Function Zeros and Finite Size Scaling of Helix-Coil Transitions in a Polypeptide ## Abstract We report on multicanonical simulations of the helix-coil transition of a polypeptide. The nature of this transition was studied by calculating partition function zeros and the finite-size scaling of various quantities. New estimates for critical exponents are presented. A common, ordered structure in proteins is the $`\alpha `$-helix, and it is conjectured that formation of $`\alpha `$-helices is a key factor in the early stages of protein folding. It has long been known that $`\alpha `$-helices undergo a sharp transition towards a random coil state when the temperature is increased. The characteristics of this so-called helix-coil transition have been studied extensively. They are usually described in the framework of Zimm-Bragg-type theories in which the homopolymers are approximated by a one-dimensional Ising model, with the residues as “spins” taking values “helix” or “coil”, and solely local interactions. Hence, in such theories thermodynamic phase transitions are not possible. However, in preliminary work it was shown that our all-atom model of poly-alanine exhibits a phase transition between the ordered helical state and the disordered random-coil state. It was conjectured that this transition is due to long range interactions in our model and the fact that it is not one-dimensional: it is known that the 1D Ising model with long-range interactions also exhibits a phase transition at finite $`T`$ if the interactions decay like $`1/r^\sigma `$ with $`1\le \sigma <2`$. Our aim now is to investigate this transition in the framework of a critical theory by means of the finite size scaling (FSS) analysis of partition function zeros. Analysis of partition function zeros is a well-known tool in the study of phase transitions, but to our knowledge it was never used before to study biopolymers. For our project, the use of the multicanonical algorithm was crucial. The various competing interactions within the polymer lead to an energy landscape characterized by a multitude of local minima. Hence, in the low-temperature region, canonical simulations will tend to get trapped in one of these minima and the simulation will not thermalize within the available CPU time. One standard way to overcome this problem is the application of the multicanonical algorithm and other generalized-ensemble techniques to the protein folding problem. For poly-alanine, both the failure of standard Monte Carlo techniques and the superior performance of the multicanonical algorithm are extensively documented in earlier work. In the multicanonical algorithm conformations with energy $`E`$ are assigned a weight $`w_{mu}(E)\propto 1/n(E)`$. Here, $`n(E)`$ is the density of states. A simulation with this weight will lead to a uniform distribution of energy: $$P_{mu}(E)\propto n(E)w_{mu}(E)=\mathrm{const}.$$ (1) This is because the simulation generates a 1D random walk in the energy, allowing itself to escape from any local minimum. Since a large range of energies is sampled, one can use reweighting techniques to calculate thermodynamic quantities over a wide range of temperatures by $$𝒜_T=\frac{{\displaystyle \int 𝑑x𝒜(x)w^{-1}(E(x))e^{-\beta E(x)}}}{{\displaystyle \int 𝑑xw^{-1}(E(x))e^{-\beta E(x)}}},$$ (2) where $`x`$ stands for configurations.
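As a sketch of how Eq. 2 is used in practice (our illustration, assuming the run stored the visited energies and the log-weights $`\mathrm{ln}w_{mu}`$; this is not the KONF90 code), the reweighting step fits in a few lines:

```python
# Multicanonical reweighting (Eq. 2) from a stored time series of energies E
# and the log-weights ln w_mu(E) used in the run (a sketch, not KONF90 code).
import numpy as np

def canonical_average(E, A, log_w_mu, beta):
    """<A>_T: reweight multicanonical samples by w_mu^-1 * exp(-beta E)."""
    x = -beta * E - log_w_mu
    x -= x.max()                    # stabilize the exponentials
    p = np.exp(x)
    return np.sum(A * p) / np.sum(p)

def canonical_moments(E, log_w_mu, beta):
    """Mean energy and the fluctuation entering the specific heat (Eq. 14)."""
    e1 = canonical_average(E, E, log_w_mu, beta)
    e2 = canonical_average(E, E**2, log_w_mu, beta)
    return e1, e2 - e1**2
```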
It follows from Eq. 1 that the multicanonical algorithm allows us to calculate estimates for the spectral density: $$n(E)=P_{mu}(E)w_{mu}^{-1}(E).$$ (3) We can therefore construct the partition function from these estimates by $$Z(\beta )=\underset{E}{\sum }n(E)u^E,$$ (4) where $`u=e^{-\beta }`$, with $`\beta `$ the inverse temperature, $`\beta =1/k_BT`$. The complex zeros of the partition function determine the critical behavior of the model. They are the so-called Fisher zeros, and correspond to the complex extension of the temperature variable. Our investigation of the helix-coil transition for poly-alanine is based on a detailed, all-atom representation of that homopolymer, and goes beyond the approximations of the Zimm-Bragg model. The interaction between the atoms was described by a standard force field, ECEPP/2 (as implemented in the KONF90 program), and is given by: $`E_{tot}`$ $`=`$ $`E_C+E_{LJ}+E_{HB}+E_{tor},`$ (5) $`E_C`$ $`=`$ $`{\displaystyle \underset{(i,j)}{\sum }}{\displaystyle \frac{332q_iq_j}{ϵr_{ij}}},`$ (6) $`E_{LJ}`$ $`=`$ $`{\displaystyle \underset{(i,j)}{\sum }}\left({\displaystyle \frac{A_{ij}}{r_{ij}^{12}}}-{\displaystyle \frac{B_{ij}}{r_{ij}^6}}\right),`$ (7) $`E_{HB}`$ $`=`$ $`{\displaystyle \underset{(i,j)}{\sum }}\left({\displaystyle \frac{C_{ij}}{r_{ij}^{12}}}-{\displaystyle \frac{D_{ij}}{r_{ij}^{10}}}\right),`$ (8) $`E_{tor}`$ $`=`$ $`{\displaystyle \underset{l}{\sum }}U_l\left(1\pm \mathrm{cos}(n_l\chi _l)\right).`$ (9) Here, $`r_{ij}`$ (in Å) is the distance between the atoms $`i`$ and $`j`$, and $`\chi _l`$ is the $`l`$-th torsion angle. Note that with the electrostatic energy term $`E_C`$ our model contains a long range interaction neglected in the Zimm-Bragg theory. Since for alanine (a nonpolar amino acid) one can avoid the complications of electrostatic and hydrogen-bond interactions of side chains with the solvent, explicit solvent molecules were neglected. Chains of up to $`N=30`$ monomers were considered. We needed between 40,000 sweeps ($`N=10`$) and 500,000 sweeps ($`N=30`$) for the weight factor calculations by the usual iterative procedure. All thermodynamic quantities were estimated from one production run of $`N_{sw}`$ Monte Carlo sweeps starting from a random initial conformation, i.e. without introducing any bias. We chose $`N_{sw}`$=400,000, 500,000, 1,000,000, and 3,000,000 sweeps for $`N=10`$, 15, 20, and 30, respectively. For our analysis of the partition function zeros we first divide the energy range into intervals of length $`0.5`$ kcal/mol. Equation 4 now becomes a polynomial in the variable $`u`$ and can be easily solved with MATHEMATICA to obtain all complex zeros $`u_j^0`$ ($`j=1,2,\mathrm{\dots }`$). For the case of $`N=10`$ we also repeated the calculation of the zeros for energy bin sizes $`1.0`$ kcal/mol and $`0.25`$ kcal/mol. The changes in the zeros were smaller than the statistical errors. The effect of the energy bin size on the zeros has been discussed elsewhere. Figure 1 shows the distribution of the zeros for $`N=30`$ and already provides strong evidence for a singularity on the real axis: in the case of the (analytic) Zimm-Bragg theory the zeros would be located solely on the negative real $`u`$-axis. We summarize in Table I the leading zeros for each of the four chain lengths, where we have used the mapping $`u=e^{-\beta /2}`$ due to our binning procedure.
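The zero-finding step itself reduces to locating the roots of a polynomial. A sketch in Python (our illustration; for long chains a multiprecision solver such as the MATHEMATICA one used above is more robust than double precision):

```python
# Build Z(u) = sum_k n_k u^k from ln n(E) on 0.5 kcal/mol bins and find its
# complex (Fisher) zeros; a sketch of the procedure described above.
import numpy as np

def fisher_zeros(log_n):
    """log_n[k] = estimated ln n(E) in bin k; the overall scale drops out."""
    coeff = np.exp(log_n - np.max(log_n))   # rescale to avoid overflow
    return np.roots(coeff[::-1])            # numpy wants leading term first

def leading_zero(zeros):
    """Zero closest to the positive real u axis in the upper half plane."""
    cand = zeros[(zeros.imag > 0) & (zeros.real > 0)]
    return cand[np.argmin(cand.imag)]
```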
The FSS relation by Itzykson et al. for the leading zero $`u_1^0(N)`$, $$u_1^0(N)=u_c+AN^{-1/d\nu }[1+O(N^{y/d})],y<0$$ (10) shows that the distance from the closest zero $`u_1^0`$ to the infinite-chain critical point $`u_c=e^{-\beta _c/2}`$ on the Re($`u`$) axis scales with a relevant linear length $`L`$, which we translated as $`N^{1/d}`$ in the above equation. Here, $`\beta _c`$ is the inverse critical temperature of the infinitely long polymer chain and $`y`$ is the correction-to-scaling exponent. We remark that, unlike in the Zimm-Bragg model, we have no theoretical indication to assume $`d`$ to be a particular integer geometrical dimension, and we therefore report estimates for the quantity $`d\nu `$. For sufficiently large $`N`$, the exponent $`d\nu `$ can be obtained from the linear regression $$\mathrm{ln}|u_1^0(N)-u_c|=-\frac{1}{d\nu }\mathrm{ln}(N)+a.$$ (11) This relation requires an accurate estimate for $`u_c`$. Therefore, we prefer to calculate our estimates for $`d\nu `$ from the corresponding relation with $`|u_1^0-u_c|`$ replaced by its imaginary part $`\mathrm{Im}u_1^0`$. Including chains of all lengths, $`N=10`$–$`30`$, this approach leads to $`d\nu =0.93(5)`$, with a goodness of fit $`Q=0.48`$. Figure 2 displays the corresponding fit. Omitting the smallest chain, i.e. restricting the fit to the range $`N=15`$–$`30`$, does not change the above result. We now obtain $`d\nu =0.93(7)`$, with $`Q=0.22`$. This indicates that the $`d\nu `$ determination is stable over the studied chain lengths and that, therefore, the correction exponent $`y`$ can be disregarded in the face of the present statistical error. Considering the real part of the leading zeros given in Table I, $`\mathrm{Re}(\beta _1^0(N))=-\mathrm{ln}\{[\mathrm{Re}u_1^0(N)]^2+[\mathrm{Im}u_1^0(N)]^2\}`$, we can derive the critical temperature through the following FSS fit , $$\mathrm{Re}(\beta _1^0(N))=\beta _c+bN^{-1/d\nu }.$$ (12) We obtain $`\beta _c=0.906(12)`$ ($`Q=0.005`$) for the range $`N=10`$–$`30`$, and $`\beta _c=0.929(14)`$ ($`Q=0.63`$) for $`N=15`$–$`30`$. The latter, more acceptable estimate corresponds to $`T_c(\mathrm{\infty })=541(8)`$ K. A stronger version of relation (10) states that the next zeros $`u_j^0(N)`$ should also satisfy a scaling relation , $$|u_j^0(N)-u_c|\propto \left(\frac{j}{N}\right)^{1/d\nu },$$ (13) where $`j`$ labels the zeros in order of increasing distance from $`u_c`$. This relation is expected to be satisfied for large $`j`$ and allows an independent check of our estimate for the exponent $`d\nu `$. The scaling plot in Fig. 3 for the roots closest to the critical point $`u_c`$ demonstrates that the assumed scaling relation is indeed observed for our data as $`N`$ increases, and is consistent with our estimate of the exponent $`d\nu `$. Our results for the critical temperature and critical exponent can be compared with independent estimates obtained from FSS of the specific heat: $$C_N(T)=\beta ^2(<E^2(T)>-<E(T)>^2)/N.$$ (14) Defining the critical temperature $`T_c(N)`$ as the position where the specific heat $`C_N(T)`$ has its maximum, we can again calculate the critical temperature by means of Eq. (12). With the values in Table II we obtain $`T_c(\mathrm{\infty })=544(12)`$ K, which is consistent with the value obtained from the partition function zeros analysis, $`T_c(\mathrm{\infty })=541(8)`$ K.
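The fits of Eqs. (11) and (12) are ordinary linear regressions; a minimal sketch follows, with hypothetical leading-zero values standing in for Table I (the numbers are not the paper's data):

```python
import numpy as np

# Hypothetical leading zeros u_1^0(N) standing in for Table I.
N     = np.array([10, 15, 20, 30])
im_u1 = np.array([0.18, 0.12, 0.095, 0.062])   # Im u_1^0(N), illustration only
re_b1 = np.array([0.74, 0.80, 0.84, 0.87])     # Re beta_1^0(N), illustration only

# Eq. (11) with |u_1^0 - u_c| replaced by Im u_1^0:
#   ln Im u_1^0(N) = -(1/d.nu) ln N + a
slope, a = np.polyfit(np.log(N), np.log(im_u1), 1)
dnu = -1.0 / slope
print("d*nu =", dnu)

# Eq. (12): Re beta_1^0(N) = beta_c + b * N^(-1/d.nu)
x = N ** (-1.0 / dnu)
b, beta_c = np.polyfit(x, re_b1, 1)
print("beta_c =", beta_c, "-> 1/beta_c (in the model's units):", 1.0 / beta_c)
```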
Choosing $`T_1(N)`$ and $`T_2(N)`$ such that $`C(T_1)=\frac{1}{2}C(T_c)=C(T_2)`$, we have the following scaling relation for the width $`\mathrm{\Gamma }_C(N)`$ of the specific heat , $$\mathrm{\Gamma }_C(N)=T_2(N)-T_1(N)\propto N^{-1/d\nu }.$$ (15) Using the above equation and the values given in Table II, we obtain $`d\nu =0.98(11)`$ $`(Q=0.9)`$ for chains of length $`N=15`$ to $`N=30`$, i.e. omitting the shortest chain. This value is in agreement with our estimate $`d\nu =0.93(5)`$ obtained from the partition function zero analysis. Including $`N=10`$ leads to $`d\nu =1.19(10)`$, but with a less acceptable fit ($`Q=0.1`$). The analysis of partition function zeros also seems to be more stable than one relying on Eq. (15). No significant change in $`d\nu `$ was observed when the data from Ref. (which relied on a much smaller number of Monte Carlo sweeps) were used in the partition function zeros analysis, while Eq. (15) leads for this reduced statistics to an estimate of $`d\nu =1.9`$. Through the scaling relation for the peak of the specific heat, we can evaluate yet another critical exponent, the specific heat exponent $`\alpha `$, by: $$C_N^{max}\propto N^{\alpha /d\nu }.$$ (16) In particular, with the values for $`C_N^{max}`$ as given in Table II, we obtain $`\alpha =0.86(10)`$. The scaling plot for the specific heat is shown in Fig. 4: the curves for all lengths of the poly-alanine chains nicely collapse onto each other, indicating the scaling of the specific heat and the reliability of our exponents. It is worth noting that our estimates for $`d\nu `$ and $`\alpha `$, as obtained from the finite-size scaling of the specific heat, obey within the error bars the hyperscaling relation $`d\nu =2-\alpha `$. It is well known that the renormalization-group fixed-point picture leads to the critical exponents $`d\nu =1`$, $`\alpha =1`$ and $`\gamma =1`$ for a first-order phase transition . Our estimate $`d\nu =0.93(5)`$ for the correlation exponent deviates from unity and rather indicates that the ‘helix-coil transition’ is a strong second-order transition. However, the error bars are such that a first-order phase transition cannot be excluded. Our values for the specific heat exponent $`\alpha =0.86(10)`$ and the susceptibility exponent $`\gamma =1.06(14)`$ (data not shown) are consistent with a first-order phase transition, but also not conclusive. A common way to evaluate the order of a phase transition is by means of the Binder energy cumulant , $$b(T,N)=1-\frac{<E^4(T,N)>}{3<E^2(T,N)>^2}.$$ (17) For a second-order phase transition one would expect that the minimum of this quantity, $`b(T_{min},\mathrm{\infty })`$, approaches $`2/3`$. Here $`T_{min}`$ defines the temperature where the cumulant reaches its minimum value and $`b(T_{min},\mathrm{\infty })=\underset{N\to \mathrm{\infty }}{\mathrm{lim}}b(T_{min},N)`$. With the present values of Table II we find the infinite-volume extrapolation $`b(T_{min},\mathrm{\infty })=0.23(13)`$ $`(Q=0.12)`$ for the range $`N=15`$–$`30`$, which is consistent with a first-order phase transition. However, we cannot exclude the possibility of a second-order phase transition, because the deviation of the energy cumulant from its asymptotic value scales like the maximum of the specific heat, $`b(T_{min},N)-b(T_{min},\mathrm{\infty })\propto N^{\alpha /d\nu -1}`$, and the true asymptotic limit is reached only for rather large chains due to the value of $`\alpha /d\nu `$. In fact, the straight-line fit for the range $`N=10`$–$`30`$ is less consistent with our data ($`Q\approx 0.001`$). Hence, we conclude that our results seem to favor a (weak) first-order phase transition, but are not precise enough to exclude the possibility of a second-order phase transition.
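All of the quantities in Eqs. (14)-(17) reduce to reweighted moments of the energy; a self-contained sketch of their computation (the multicanonical time series and weights below are a toy stand-in, not the paper's data) is:

```python
import numpy as np

def energy_moments(E_t, logw_t, beta):
    """Reweighted <E>, <E^2>, <E^4> at inverse temperature beta."""
    lw = logw_t - beta * E_t      # logw_t stands in for -ln w_mu(E_t)
    lw -= lw.max()
    w = np.exp(lw)
    w /= w.sum()
    return tuple(np.sum(w * E_t ** k) for k in (1, 2, 4))

def specific_heat(E_t, logw_t, beta, N):
    e1, e2, _ = energy_moments(E_t, logw_t, beta)
    return beta ** 2 * (e2 - e1 ** 2) / N          # Eq. (14)

def binder_cumulant(E_t, logw_t, beta):
    _, e2, e4 = energy_moments(E_t, logw_t, beta)
    return 1.0 - e4 / (3.0 * e2 ** 2)              # Eq. (17)

# Hypothetical flat-histogram time series (illustration only):
rng = np.random.default_rng(0)
E_t = rng.uniform(-60.0, 0.0, size=50_000)
logw_t = 0.4 * E_t
betas = np.linspace(0.5, 1.5, 201)
C = [specific_heat(E_t, logw_t, b, N=10) for b in betas]
print("C_max =", max(C), "at beta =", betas[int(np.argmax(C))])
```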
To summarize, we have used a common technique for the investigation of phase transitions, the analysis of the finite-size scaling of partition function zeros, to evaluate the helix-coil transition in an all-atom model of poly-alanine. Although our results are, due to the complexity of the simulated model, not precise enough to determine the order of the phase transition, we have demonstrated that the transition can be described by a set of critical exponents. Hence, we have shown for this example that structural transitions in biological molecules can be described within the framework of a critical theory. Acknowledgements: Financial support from FAPESP and a Research Excellence Fund of the State of Michigan is gratefully acknowledged. Figure Captions: 1. Partition function zeros in the complex $`u`$ plane for $`N=30`$. For the Zimm-Bragg model the zeros would be located solely on the negative real $`u`$ axis. 2. Linear regression for $`-\mathrm{ln}(\mathrm{Im}u_1^0(N))`$ in the range $`N=10`$–$`30`$. 3. Scaling behavior of the first $`j`$ complex zeros closest to $`u_c=0.6284`$, for chain lengths $`N=10,15,20`$ and 30. 4. Scaling plot for the specific heat $`C_N(T)`$ as a function of temperature $`T`$, for poly-alanine molecules of chain lengths $`N=10,15,20,`$ and $`30`$.
# Nucleation theory and the phase diagram of the magnetization-reversal transition ## Abstract The phase diagram of the dynamic magnetization-reversal transition in pure Ising systems under a pulsed field competing with the existing order can be explained satisfactorily using the classical nucleation theory. Indications of single-domain and multi-domain nucleation and of the corresponding changes in the nucleation rates are clearly observed. The nature of the second time scale of relaxation, apart from the field-driven nucleation time, and the origin of its unusually large values at the phase boundary are explained from the disappearing tendency of kinks on the domain wall surfaces after the withdrawal of the pulse. The possibility of scaling behaviour in the multi-domain regime is identified and compared with the earlier observations. Arkajyoti Misra<sup>1</sup><sup>1</sup>1 e-mail : arko@cmp.saha.ernet.in and Bikas K. Chakrabarti<sup>2</sup><sup>2</sup>2 e-mail : bikas@cmp.saha.ernet.in *Saha Institute of Nuclear Physics, 1/AF Bidhannagar, Calcutta 700 064, India.* PACS. 64.60 Ht - Dynamic Critical Phenomena. PACS. 64.60 Kw - Multicritical Points. PACS. 64.60.Qb - Nucleation. The study of dynamical phase transitions in pure Ising systems, with ferromagnetic short-range interactions, under the influence of a time-dependent external magnetic field has recently become one of the significant areas of interest in statistical physics . The effect of external magnetic fields which are periodic in time was first treated within mean-field theory . Subsequently, through extensive Monte Carlo studies, the existence of a dynamic phase transition was established and properly characterized . A relevant investigation in this context was the study of the effect of an external pulsed field which is uniform in space but applied for a finite duration. All these pulsed-field studies are concerned with the system below its static critical temperature $`T_c^0`$, where the system has a long-range order characterized by a non-zero magnetization $`m_0`$. Under the influence of a ‘positive’ field, or a field applied along the direction of prevalent order, the system does not show any new phase transition . However, a ‘negative’ pulsed field competes with the existing order and the system may show a transition from an initial equilibrium state with magnetization $`m_0`$ to a final equilibrium state with magnetization $`-m_0`$ . Such a transition can be brought about by tuning the duration ($`\mathrm{\Delta }t`$) and strength ($`h_p`$) of the pulse, and the ‘phase diagram’ in the $`h_p`$–$`\mathrm{\Delta }t`$ plane gives the minimal combination of these two parameters needed to bring about the transition at any temperature $`T`$ below $`T_c^0`$. The transition is dynamic in nature and both length and time scales are seen to diverge across this transition . According to the classical theory of nucleation, there could be two mechanisms through which a droplet of a particular spin grows in the sea of opposite spins. When the external magnetic field is relatively weak, a single droplet grows to cover the whole system, and this regime is called the single-droplet (SD) or nucleation regime . On the other hand, under stronger magnetic fields, many small droplets grow simultaneously and eventually coalesce to span the entire system. This is called the multi-droplet (MD) or coalescence regime.
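A minimal Python sketch of the simulation protocol described here (2d Ising model, single-spin-flip Glauber dynamics, a negative pulse of strength h_p and duration Δt, followed by free relaxation) is shown below; the lattice size, temperature, sweep counts and pulse parameters are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)
L, T = 24, 1.0                      # illustrative size; T < T_c^0 ~ 2.27
spins = np.ones((L, L), dtype=int)  # start in the +m_0 ordered state

def sweep(spins, T, h):
    """One Glauber sweep: heat-bath flips of randomly chosen spins."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(n, size=2)
        nn = (spins[(i+1) % n, j] + spins[(i-1) % n, j]
              + spins[i, (j+1) % n] + spins[i, (j-1) % n])
        dE = 2.0 * spins[i, j] * (nn + h)          # energy cost of flipping
        if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):
            spins[i, j] *= -1

def pulse_run(spins, T, h_p, dt, t0=100, t_relax=500):
    for _ in range(t0):                 # equilibrate in zero field
        sweep(spins, T, 0.0)
    for _ in range(dt):                 # pulse competing with the order
        sweep(spins, T, -h_p)
    m_w = spins.mean()                  # magnetization at withdrawal
    for _ in range(t_relax):            # relax back in zero field
        sweep(spins, T, 0.0)
    return m_w, spins.mean()            # (m_w, final magnetization)

print(pulse_run(spins.copy(), T, h_p=1.2, dt=10))
```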
The crossover from the SD to the MD regime takes place at the dynamic spinodal field $`H_{DSP}`$, which is a function of the system size $`L`$ and of the temperature $`T`$. There are two time scales in the problem: (i) the nucleation time $`\tau _N`$, which is the time taken by the system to leave the metastable state under the influence of the external field, and (ii) the relaxation time $`\tau _R`$, which is defined as the time taken by the system to reach the final equilibrium state after the external field is withdrawn. In this letter, we show that the phase diagram of the magnetization-reversal transition can be explained satisfactorily by employing the classical nucleation theory . The nature of the transition changes from a discontinuous to a continuous one, giving rise to a ‘tricritical’ point as the system goes from the SD regime to the MD regime, depending on the temperature and the strength of the applied pulse. The dimension-dependent factor by which the nucleation time differs in the two regimes is confirmed by Monte Carlo simulation of the $`2d`$ kinetic Ising model evolving under Glauber dynamics below $`T_c^0`$. Our observations here also support the validity of the finite-size scaling of the order parameter used earlier to obtain the dynamic critical exponents of the transition . Monte Carlo simulations using single spin-flip Glauber dynamics have been used to study the pure Ising system in 2$`d`$ below $`T_c^0`$ ($`\simeq 2.27`$, in units of the strength of the cooperative interaction). The system is subjected to a magnetic field $`h(t)`$ of finite strength $`h_p`$ for a finite duration $`\mathrm{\Delta }t`$ : $$\begin{array}{ccccc}h(t)& =& \hfill -h_p& \text{for }t_0\le t\le t_0+\mathrm{\Delta }t\hfill & \\ & =& \hfill 0& \text{otherwise}.\hfill & \end{array}$$ (1) In order to prepare the system in an ordered state corresponding to a temperature $`T<T_c^0`$, $`t_0`$ is taken to be much larger than the equilibrium relaxation time of the system at that temperature. The negative sign signifies that the applied field competes with the prevalent order characterized by the equilibrium magnetization $`m_0`$, which is a function of $`T`$ only. Depending on the values of $`h_p`$ and $`\mathrm{\Delta }t`$, the system can either go back to the original state (with magnetization $`m_0`$) or to the other, equivalent, ordered state, characterized by equilibrium magnetization $`-m_0`$, eventually after the withdrawal of the field. This gives rise to the magnetization-reversal transition, which brings the system from one equilibrium ordered state to the other, below $`T_c^0`$. It appears that the average magnetization at the time of withdrawal of the pulse, $`m_w\equiv m(t_0+\mathrm{\Delta }t)`$, is a very relevant quantity: depending on the sign of $`m_w`$, the system chooses the final equilibrium state. On average, the transition to the $`-m_0`$ state occurs for negative values of $`m_w`$, whereas for positive $`m_w`$ the system goes back to the original $`+m_0`$ state. Fluctuations do of course occur, in which the transition to the opposite order takes place even for small positive $`m_w`$ values or vice versa. The phase boundary is therefore identified as the $`m_w=0`$ line. Fig. 1 shows the Monte Carlo phase boundaries at a few different temperatures below $`T_c^0`$ for a 100$`\times `$100 lattice with periodic boundary conditions. At very low temperatures (typically for $`T<0.5`$), the transition is discontinuous in nature all along the phase boundary.
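Locating a point on the phase boundary then amounts to finding, at fixed T and Δt, the field at which the ensemble-averaged m_w changes sign. A sketch using bisection follows; the `simulate` argument is meant to be a routine like the `pulse_run` of the previous snippet, and the built-in stand-in is a smooth toy function so the snippet runs on its own:

```python
import numpy as np

def mean_mw(h_p, dt, n_samples=20, simulate=None):
    """Ensemble average of m_w at pulse strength h_p and duration dt."""
    if simulate is None:
        # toy stand-in for a Monte Carlo pulse history, illustration only
        simulate = lambda h, t: np.tanh(3.0 * (1.1 - h * np.log1p(t)))
    return np.mean([simulate(h_p, dt) for _ in range(n_samples)])

def critical_field(dt, h_lo=0.0, h_hi=4.0, tol=1e-3):
    """Bisect on h_p for the m_w = 0 line (the phase boundary) at fixed dt."""
    while h_hi - h_lo > tol:
        h_mid = 0.5 * (h_lo + h_hi)
        if mean_mw(h_mid, dt) > 0.0:    # still returns to +m_0: pulse too weak
            h_lo = h_mid
        else:
            h_hi = h_mid
    return 0.5 * (h_lo + h_hi)

for dt in (2, 10, 100):
    print(dt, critical_field(dt))
```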
Crossover to a continuous transition region appears along the phase boundary at higher temperatures, thereby giving rise to a tricritical point. Along a typical phase boundary where both kinds of transition are observed, the continuous transition region appears for smaller values of $`\mathrm{\Delta }t`$ (higher values of $`h_p`$), whereas the discontinuous region appears for higher values of $`\mathrm{\Delta }t`$ (smaller values of $`h_p`$). It is instructive to look at the droplet picture of the classical nucleation theory to describe the nature of the phase diagram for the magnetization-reversal transition. The typical configuration of a ferromagnet below $`T_c^0`$ consists of clusters or droplets of down spins in a background of up spins, or vice versa. According to the classical nucleation theory , the equilibrium number of droplets of size $`l`$ is then given by $`n_l=N\mathrm{exp}(-\beta ϵ_l)`$, where $`\beta =1/k_BT`$ and $`ϵ_l`$ is the free energy of formation of a droplet of size $`l`$. Assuming a spherical shape of the droplets, one can write $`ϵ_l=-2hl+\sigma (T)l^{(d-1)/d}`$, where $`\sigma `$ is the temperature-dependent surface tension and $`h`$ is the external magnetic field. Droplets of size greater than a critical value $`l_c`$ are then favoured to grow. One obtains $`l_c`$ by maximizing $`ϵ_l`$ : $`l_c=\left[\sigma (d-1)/2d\left|h\right|\right]^d`$. The number of supercritical droplets is then given by $`n_l^{*}=N\mathrm{exp}\left(-\beta K_d\sigma ^d/\left|h\right|^{d-1}\right)`$, where $`K_d`$ is a constant depending on dimension only. In the nucleation or SD regime, there is only one supercritical droplet that grows and engulfs the entire system. The nucleation time $`\tau _N^{SD}`$ is inversely proportional to the nucleation rate $`I`$. According to the Becker-Döring theory , $`I`$ is in turn proportional to $`n_l^{*}`$, and thus $$\tau _N^{SD}\propto I^{-1}\propto \mathrm{exp}\left(\frac{\beta K_d\sigma ^d}{\left|h\right|^{d-1}}\right).$$ In the coalescence or MD regime, where due to the stronger magnetic field many droplets grow simultaneously and eventually coalesce to form a system-spanning droplet, the nucleation time is given by $$\tau _N^{MD}\propto I^{-1/(d+1)}\propto \mathrm{exp}\left(\frac{\beta K_d\sigma ^d}{(d+1)\left|h\right|^{d-1}}\right).$$ During the time when the field is ‘on’, the only relevant time scale of the problem is the nucleation time. The phase boundary of the magnetization-reversal transition corresponds to the threshold value of the field pulse ($`h_p^c`$), which can bring the system from an equilibrium state with magnetization $`m_0`$ to a state with magnetization $`m_w=0_{-}`$ in time $`\mathrm{\Delta }t`$, so that the system eventually evolves to a state with magnetization $`-m_0`$. Equating therefore $`\mathrm{\Delta }t`$ with the nucleation time $`\tau _N`$, one gets for the magnetization-reversal phase diagram $$\begin{array}{cccccc}\mathrm{ln}(\mathrm{\Delta }t)& =& c_1& +& \frac{C}{h_p^{d-1}}& \text{in SD regime}\hfill \\ & =& c_2& +& \frac{C}{(d+1)h_p^{d-1}}& \text{in MD regime},\hfill \end{array}$$ (2) where $`C=\beta K_d\sigma ^d`$ and $`c_1`$, $`c_2`$ are constants. Therefore a plot of $`\mathrm{ln}(\mathrm{\Delta }t)`$ against $`\left(h_p^c\right)^{-(d-1)}`$ would show different slopes in the two regimes. This is indeed the case, as shown in Fig. 2, where two distinct slopes are seen at higher temperatures (typically for $`T>0.5`$).
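Extracting the slope ratio from Eq. (2) is a pair of linear fits in the variable 1/h_p (for d = 2). In the sketch below the boundary points are synthetic, generated from Eq. (2) itself rather than measured, purely to illustrate the fitting procedure:

```python
import numpy as np

# Synthetic boundary points from Eq. (2) with d = 2 and C = 3.0,
# standing in for the measured (h_p^c, dt) pairs of Fig. 2.
C, c1, c2 = 3.0, -0.8, 0.2
h_sd = np.array([1.00, 0.85, 0.75, 0.65])       # weak fields: SD regime
h_md = np.array([2.20, 1.90, 1.60, 1.30])       # strong fields: MD regime
y_sd = c1 + C / h_sd                            # ln(dt) on the SD branch
y_md = c2 + C / (3.0 * h_md)                    # ln(dt) on the MD branch

slope_sd, _ = np.polyfit(1.0 / h_sd, y_sd, 1)
slope_md, _ = np.polyfit(1.0 / h_md, y_md, 1)
print("R =", slope_sd / slope_md)               # recovers d + 1 = 3
```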
The ratio of the slopes $`R`$ corresponding to the two regimes has a value close to $`3`$, as predicted for $`2d`$ by the classical nucleation theory. The point of intersection of the two straight lines gives the value of $`H_{DSP}`$ at that temperature and system size. At lower temperatures, however, the MD region is absent, which is marked by a single slope (Fig. 2(d)). According to the classical nucleation theory $`l_c\propto \left|h_p\right|^{-d}`$, and at any fixed temperature, therefore, one expects stronger (weaker) fields to bring the system to the MD (SD) regime. Fig. 3 shows snapshots of the spin configuration at a particular temperature, where the dots correspond to down spins. It is clear from the figure that many droplets of down spins are formed for a smaller value of $`\mathrm{\Delta }t`$, or equivalently for large $`h_p`$, whereas only a single down-spin droplet is formed for a larger $`\mathrm{\Delta }t`$ or weaker $`h_p`$. The snapshots of the system are taken at the time of withdrawal of the pulse, beyond which the system is left to itself to relax back to either of the equilibrium states. The time taken by the system to reach the final equilibrium after the withdrawal of the pulse is defined as the relaxation time ($`\tau _R`$). It is observed that $`\tau _R\sim \kappa (T,L)\mathrm{exp}\left[-\lambda (T)\left|m_w\right|\right]`$, where $`\kappa (T,L)`$ is a constant depending on temperature and system size ($`\kappa \to \mathrm{\infty }`$ as $`L\to \mathrm{\infty }`$) and $`\lambda (T)`$ is a constant depending on temperature. This is in distinct contrast with the normal relaxation of a ferromagnet to the equilibrium state with magnetization $`m_0`$, starting from a random initial state where the average magnetization is close to zero. The effect of the field is to initiate the nucleation process, and by the time the pulse is withdrawn the droplet(s) has (have) very few kinks along the surface(s). Once a droplet or a domain forms a flat boundary, it becomes a rather stable configuration and the domain wall movement practically stops, thereby restricting further nucleation. When the system is trapped in such a metastable state, it is left for large fluctuations to initiate further movement of the domain walls and resume the process of nucleation. The closer $`m_w`$ is to zero, the greater is the chance that the system will be trapped in a metastable state, owing to a larger number of flat domain walls. Thus the effect of the pulse is to create a domain structure which does not favour fast nucleation, and even for $`T\ll T_c^0`$, $`\tau _R`$ therefore diverges at the phase boundary. To determine the order of the phase transition one can look at the probability distribution $`P(m_w)`$ of $`m_w`$ as one approaches a phase boundary. Fig. 4 shows the variation of $`P(m_w)`$ across the phase boundary corresponding to a particular $`T`$ at two different values of $`\mathrm{\Delta }t`$. The existence of a single peak of $`P(m_w)`$ in (a), which shifts its position continuously from $`+1`$ to $`-1`$, indicates the continuous nature of the transition; whereas in (b) one obtains two peaks simultaneously, close to $`\pm m_0(T)`$, indicating that the system can simultaneously reside in two different phases with finite probabilities. This is a clear indication of a discontinuous phase transition. The crossover from the discontinuous transition to a continuous one along the phase boundary is not very sharp; instead one gets a ‘tricritical region’ where the determination of the nature of the transition is not very conclusive.
As is evident from Fig. 2, the data points in this particular region do not fit either of the two straight lines corresponding to the two regimes. Thus for $`h_p^c>H_{DSP}`$ the system belongs to the MD regime and the nature of the transition is continuous, whereas for $`h_p^c<H_{DSP}`$ the system is brought to the SD regime, where the transition is discontinuous in nature. In our earlier work , we clearly noticed that finite-size scaling of the order parameter was possible only at higher temperatures for moderately low $`\mathrm{\Delta }t`$. The reason behind that becomes clear now: at higher temperatures the tricritical point shifts towards higher values of $`\mathrm{\Delta }t`$, and hence in the lower-$`\mathrm{\Delta }t`$ region the system belongs to the MD regime and the transition is continuous in nature. This supports the scaling. Since at lower temperatures the continuous region gradually shrinks before disappearing altogether, all attempts at a finite-size scaling fit failed. This observation also compares with that of Sides et al. for the scaling behaviour in the case of the dynamic transition under periodic fields. In this letter we have shown that the phase diagram of the magnetization-reversal transition in the Ising model can be explained satisfactorily from the classical nucleation theory. Across $`H_{DSP}`$, the system goes from the MD or coalescence regime, where the transition is continuous in nature, to the SD or nucleation regime, where the transition is discontinuous. A tricritical point separates these two regimes; it moves towards smaller $`\mathrm{\Delta }t`$ (larger $`h_p`$) values as one decreases $`T`$, until it disappears altogether at low temperatures. The dimensional factor in the nucleation time of the two regimes is found to be quite accurately reproduced in the Monte Carlo simulations. There are two time scales in the problem, viz. $`\tau _N`$ and $`\tau _R`$. $`\tau _N`$, which is determined by the pulse duration $`\mathrm{\Delta }t`$, brings the system from one regime to the other depending on the temperature, while $`\tau _R`$ determines the relaxation of the field-released system and diverges at the phase boundary even for $`T\ll T_c^0`$. Finite-size scaling of the order parameter for the transition is also justified in the MD regime, where the transition is continuous in nature. \*** A. M. would like to thank Muktish Acharyya and Burkhard Duenweg for useful discussions. Figure Captions Fig. 1. Monte Carlo phase diagram for the magnetization-reversal transition in a $`100\times 100`$ lattice: (a) $`T=0.2`$, (b) $`T=0.6`$, (c) $`T=1.0`$, (d) $`T=1.4`$, (e) $`T=2.0`$. Fig. 2. Plot of $`\mathrm{ln}(\mathrm{\Delta }t)`$ against $`\left[h_p^c(\mathrm{\Delta }t,T)\right]^{-1}`$, obtained from the phase diagram in Fig. 1, at different temperatures. (a) $`T=1.4`$, (b) $`T=1.0`$, (c) $`T=0.7`$, (d) $`T=0.2`$. The observed values of the slope ratio $`R`$ are close to $`3.38`$, $`3.09`$ and $`3.24`$ in (a), (b) and (c) respectively. Fig. 3. Snapshots of the spin configuration of the system at $`t=t_0+\mathrm{\Delta }t`$ for $`T=1.0`$; (a) $`\mathrm{\Delta }t=2`$, $`h_p=1.91`$ (MD regime) (b) $`\mathrm{\Delta }t=100`$, $`h_p=0.65`$ (SD regime). Fig. 4. Probability distribution $`P(m_w)`$ of $`m_w`$ at $`T=0.6`$. In (a), where $`\mathrm{\Delta }t=1`$ (MD regime), the position of the peak changes continuously from $`+1`$ to $`-1`$, indicating a continuous transition: (i) $`h_p=2.05`$, (ii) $`h_p=2.15`$, (iii) $`h_p=2.25`$.
In (b), where $`\mathrm{\Delta }t=500`$ (SD regime), the position of the peak changes sharply from $`+1`$ to $`-1`$, indicating a discontinuous transition: (i) $`h_p=0.92`$, (ii) $`h_p=1.02`$, (iii) $`h_p=1.12`$.
# Averages of static electric and magnetic fields over a spherical region: a derivation based on the mean-value theorem preprint: Submitted to Notes and Discussions in the American Journal of Phyiscs The electromagnetic theory of dielectric and magnetic media deals with macroscopic electric and magnetic fields, because the microscopic details of these fields are usually experimentally irrelevant (with certain notable exceptions such as scanning tunneling microscopy). The macroscopic fields are the average of the microscopic fields over a microscopically large but macroscopically small region. This averaging region is often chosen to be spherical, denoted here as $$\overline{𝐄}\frac{3}{4\pi R^3}_𝒱𝑑𝐫^{}𝐄(𝐫^{}),$$ (1) where $`𝒱`$ is a sphere of radius $`R`$ centered at $`𝐫`$ (with a similar definition for $`\overline{𝐁}`$). $`R`$ is a distance which is macroscopically small but nonetheless large enough to enclose many atoms. The macroscopic $`\overline{𝐄}`$ and $`\overline{𝐁}`$ fields obtained by averaging over a sphere exhibit properties which prove useful in certain arguments and derivations. These properties are as follows: 1. if all the sources of the $`𝐄`$-field are outside the sphere, then $`\overline{𝐄}`$ is equal to the electric field at the center of the sphere, 2. if all the sources of the E-field are inside the sphere, $`\overline{𝐄}=𝐩/(4\pi ϵ_0R^3)`$, where $`𝐩`$ is the dipole moment of the sources with respect to the center of the sphere, 3. if all the sources of the $`𝐁`$-field are outside the sphere, then $`\overline{𝐁}`$ is equal to the magnetic field at the center of the sphere, 4. if all the sources of the B-field are inside the sphere, $`\overline{𝐁}=\mu _0𝐦/(2\pi R^3)`$, where $`𝐦`$ is the magnetic dipole moment of the sources. These results can be derived in a variety of ways. For example, Griffiths derives properties 1 and 2 using a combination of results from Coulomb’s law and Gauss’ law, and properties 3 and 4 by writing down the $`𝐁`$-field in terms of the vector potential in the Coulomb gauge, and explicitly evaluating angular integrals. The purpose of this note is to describe a relatively simple derivation of all four results, based on the well-known mean-value theorem theorem (described in most textbooks on electromagnetic theory): if a scalar potential $`\mathrm{\Phi }(𝐫)`$ satisfies Laplace’s equation in a sphere, then the average of $`\mathrm{\Phi }`$ over the surface of the sphere is equal to $`\mathrm{\Phi }`$ at the center of the sphere; that is, if $`^2\mathrm{\Phi }=0`$ in a spherical region of radius $`r^{}`$ centered at $`𝐫`$, then $$\mathrm{\Phi }(𝐫)=\frac{1}{4\pi }𝑑\mathrm{\Omega }\mathrm{\Phi }\left(𝐫+r^{}\widehat{𝐧}_\mathrm{\Omega }\right),$$ (2) where $`\mathrm{\Omega }`$ is the solid angle relative to $`𝐫`$ and $`\widehat{𝐧}_\mathrm{\Omega }`$ is the unit vector pointing in the direction of $`\mathrm{\Omega }`$. Taking the gradient $``$ of both sides of Eq. (2) with respect to r, we immediately obtain $$𝐄(𝐫)=\mathrm{\Phi }(𝐫)=\frac{1}{4\pi }𝑑\mathrm{\Omega }\left[\mathrm{\Phi }\left(𝐫+r^{}\widehat{𝐧}_\mathrm{\Omega }\right)\right]=\frac{1}{4\pi }𝑑\mathrm{\Omega }𝐄\left(𝐫+r^{}\widehat{𝐧}_\mathrm{\Omega }\right);$$ (3) that is, if $`𝐄=0`$ inside a sphere, the average of $`𝐄`$ over the surface of the sphere is equal to the $`𝐄`$ at the center of sphere. Eq. (3) is the basis of the derivation of the four properties listed above. 
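The surface-average statement of Eq. (3) is easy to verify numerically: sample the field of an exterior point charge over a spherical surface and compare with the field at the center. A minimal sketch, in units where the prefactor q/4πε₀ is set to 1 and with an arbitrary illustrative source position:

```python
import numpy as np

rng = np.random.default_rng(0)

def E_point(r, r_src):
    """Field of a unit point charge at r_src, prefactor q/4 pi eps0 set to 1."""
    d = np.atleast_2d(r) - r_src
    return d / np.linalg.norm(d, axis=1, keepdims=True) ** 3

R = 1.0                                  # sphere of radius R centered at the origin
r_src = np.array([0.0, 0.0, 3.0])        # source charge outside the sphere

# Uniformly distributed directions give surface points R * n_hat
v = rng.normal(size=(200_000, 3))
n_hat = v / np.linalg.norm(v, axis=1, keepdims=True)

print("surface average:", E_point(R * n_hat, r_src).mean(axis=0))
print("field at center:", E_point(np.zeros(3), r_src)[0])   # the two agree
```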
To simplify the notation, henceforth in this note it is assumed that $`𝒱`$ is a sphere of radius $`R`$ and is centered at the origin $`\mathrm{𝟎}`$. 1. Electric field with sources outside sphere When all the static charge sources are outside $`𝒱`$, $`𝐄=-\nabla \mathrm{\Phi }`$ and $`\nabla \cdot 𝐄=0`$ inside $`𝒱`$, and hence Eq. (3) is valid. We shall now see that property 1 follows quite trivially as a special case of Eq. (3) [and hence of Eq. (2)]. The average of $`𝐄`$ over $`𝒱`$ can be written as a weighted integral of the average of $`𝐄`$ over surfaces of spheres with radius $`r^{\prime }<R`$ centered at $`\mathrm{𝟎}`$, $`\overline{𝐄}\equiv \frac{3}{4\pi R^3}\int _𝒱𝑑𝐫^{\prime }𝐄(𝐫^{\prime })`$ $`=`$ $`\frac{3}{R^3}\int _0^R{r^{\prime }}^2𝑑r^{\prime }\left[\frac{1}{4\pi }\int 𝑑\mathrm{\Omega }𝐄(r^{\prime }\widehat{𝐧}_\mathrm{\Omega })\right].`$ (4) Using Eq. (3) in Eq. (4) immediately yields property 1, $$\overline{𝐄}=\frac{3}{R^3}\int _0^R{r^{\prime }}^2𝑑r^{\prime }𝐄(\mathrm{𝟎})=𝐄(\mathrm{𝟎}).$$ (5) 2. Electric field with sources inside sphere For clarity we first prove property 2 for a single point charge inside the sphere. The general result can be inferred from the single point charge result using the superposition principle, but for completeness we generalize the proof for a continuous charge distribution. Utilizing the vector identity $$\int _𝒱\nabla \mathrm{\Phi }d𝐫=\oint _𝒮\mathrm{\Phi }𝑑𝐚,$$ (6) where $`𝒮`$ is the surface of $`𝒱`$, the average of the electric field over $`𝒱`$ centered at $`\mathrm{𝟎}`$ can be written as $$\overline{𝐄}=-\frac{3}{4\pi R^3}\int _𝒱\nabla \mathrm{\Phi }(𝐫)𝑑𝐫=-\frac{3}{4\pi R^3}\oint _S\mathrm{\Phi }𝑑𝐚.$$ (7) Eq. (7) implies that the average $`\overline{𝐄}`$ is determined completely by the potential on the surface $`𝒮`$. We now use a well-known result from the method-of-images solution for the potential of a point charge next to a conducting sphere: the potential on the surface $`S`$ for a charge $`q`$ at $`𝐫`$ inside the sphere is reproduced exactly by an image charge $`q^{\prime }=Rq/r`$ at $`𝐫^{\prime }=(R^2/r)\widehat{𝐫}`$ outside the sphere ($`\widehat{𝐫}`$ is a unit vector). Eq. (7) therefore implies that the $`\overline{𝐄}`$ for a point charge $`q`$ at $`𝐫`$ is exactly equal to that of an image point charge $`q^{\prime }`$ at $`(R^2/r)\widehat{𝐫}`$. But since the image charge is outside the sphere, we can use property 1 to determine $`\overline{𝐄}`$, $$\overline{𝐄}=𝐄_{\mathrm{image}}(\mathrm{𝟎})=-\frac{q^{\prime }\widehat{𝐫}}{4\pi ϵ_0|𝐫^{\prime }|^2}=-\frac{q𝐫}{4\pi ϵ_0R^3}=-\frac{𝐩}{4\pi ϵ_0R^3},$$ (8) where $`𝐩=q𝐫`$ is the dipole moment for a single point charge. Generalization to continuous charge distributions – Assume the charge distribution inside the spherical volume $`𝒱`$ is $`\rho (𝐫)`$. A volume element $`dV`$ at $`𝐫`$ inside $`𝒱`$ contains charge $`dq=\rho (𝐫)dV`$. The image charge element outside the sphere which gives the same average electric field as $`dq`$ is $`dq^{\prime }=\rho ^{\prime }(𝐫^{\prime })dV^{\prime }=dqR/r`$ at $`𝐫^{\prime }=(R^2/r)\widehat{𝐫}`$. As in the discrete case, the contribution of $`dq`$ to the average $`𝐄`$-field in $`𝒱`$ is equal to the electric field at the origin due to $`dq^{\prime }`$, $$d\overline{𝐄}=d𝐄_{\mathrm{image}}(\mathrm{𝟎})=-\frac{\rho ^{\prime }(𝐫^{\prime })dV^{\prime }}{4\pi ϵ_0{r^{\prime }}^2}\widehat{𝐫}=-\frac{𝐫\rho (𝐫)dV}{4\pi ϵ_0R^3}.$$ (9) The averaged electric field due to all charges in the sphere $`𝒱`$ is therefore $$\overline{𝐄}=\int _𝒱𝑑\overline{𝐄}=-\frac{1}{4\pi ϵ_0R^3}\int _𝒱𝑑V𝐫\rho (𝐫)=-\frac{𝐩}{4\pi ϵ_0R^3},$$ (10) where $`𝐩=\int _𝒱𝑑V𝐫\rho (𝐫)`$ is the dipole moment with respect to the origin of all the charges in $`𝒱`$.
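Property 2 can be checked the same way through Eq. (7), which needs only the potential on the surface and therefore avoids the singularity of the interior field. A Monte Carlo sketch, again with q/4πε₀ = 1 and an arbitrary illustrative charge position:

```python
import numpy as np

rng = np.random.default_rng(1)
R = 1.0
r_src = np.array([0.3, 0.1, -0.2])       # unit charge inside the sphere

# Uniform unit vectors on the sphere
v = rng.normal(size=(500_000, 3))
n_hat = v / np.linalg.norm(v, axis=1, keepdims=True)

# Eq. (7): E_avg = -(3 / 4 pi R^3) * (surface integral of Phi dA)
#                = -(3 / R) * <Phi(R n_hat) n_hat>  over directions
phi = 1.0 / np.linalg.norm(R * n_hat - r_src, axis=1)
E_avg = -(3.0 / R) * np.mean(phi[:, None] * n_hat, axis=0)

print("volume average:", E_avg)
print("-p / R^3      :", -r_src / R ** 3)    # property 2 in these units
```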
3. Magnetic field with sources outside the sphere When magnetic field sources are absent in $`𝒱`$, both $`\nabla \cdot 𝐁=0`$ and $`\nabla \times 𝐁=0`$, and hence $`𝐁=-\nabla \varphi _M`$ where $`\nabla ^2\varphi _M=0`$ inside the sphere. Therefore, the same derivation as for property 1 holds here; that is, the average of the $`𝐁`$-field over a sphere is equal to its value at the center of the sphere. 4. Magnetic field for sources inside the sphere Using the vector potential description of the magnetic field, $`𝐁=\nabla \times 𝐀`$, the average over the sphere $`𝒱`$ can be written as $$\overline{𝐁}=\frac{3}{4\pi R^3}\int _𝒱\nabla \times 𝐀=-\frac{3}{4\pi R^3}\oint _𝒮𝐀\times 𝑑𝐚.$$ (11) The second equality in the above equation is a vector identity. This equation shows that, as in the case of the $`𝐄`$-field and the scalar potential, the average $`𝐁`$-field over any volume $`𝒱`$ can be computed from $`𝐀`$ on the surface of $`𝒱`$. We now consider the contribution to $`\overline{𝐁}`$ of a current element $`𝐉(𝐫)dV`$ inside $`𝒱`$. We can do this by determining the image current element outside the sphere that exactly reproduces the vector potential due to $`𝐉(𝐫)dV`$ on the surface of the sphere. We choose the Coulomb gauge $$A_i(𝐫)=\frac{\mu _0}{4\pi }\int \frac{J_i(𝐱)}{|𝐫-𝐱|}𝑑𝐱,$$ (12) where $`i`$ denotes the spatial component. Since $`A_i`$ is related to $`J_i`$ in the same way as the electric potential $`\mathrm{\Phi }`$ is to the charge density $`\rho `$, the method of electrostatic images can also be used here to determine the image current element. The proof of property 4 proceeds analogously to that of property 2. For $`𝐉(𝐫)dV`$ inside the sphere $`𝒱`$, $`𝐀`$ on the surface $`𝒮`$ is reproduced by an image current element $`𝐉^{\prime }(𝐫^{\prime })dV^{\prime }=(R/r)𝐉(𝐫)dV`$, where $`𝐫^{\prime }=(R^2/r)\widehat{𝐫}`$. Since the image current is outside the sphere, we can use property 3. Thus, the contribution of $`𝐉(𝐫)dV`$ to the average $`𝐁`$-field in $`𝒱`$ is equal to the $`𝐁`$-field at the center due to $`𝐉^{\prime }(𝐫^{\prime })dV^{\prime }`$, $$d\overline{𝐁}=d𝐁_{\mathrm{image}}(\mathrm{𝟎})=-\frac{\mu _0}{4\pi }\frac{𝐉^{\prime }(𝐫^{\prime })\times \widehat{𝐫}}{{r^{\prime }}^2}dV^{\prime }=\frac{\mu _0}{4\pi R^3}𝐫\times 𝐉(𝐫)dV.$$ (13) The contribution for the entire current distribution in $`𝒱`$ is therefore $$\overline{𝐁}=\frac{\mu _0}{4\pi R^3}\int _𝒱𝑑V𝐫\times 𝐉(𝐫)=\frac{\mu _0𝐦}{2\pi R^3},$$ (14) where $`𝐦=\frac{1}{2}\int _𝒱𝑑V𝐫\times 𝐉(𝐫)`$ is the magnetic moment of the current distribution in $`𝒱`$. Finally, note that similar arguments hold for charge distributions which are constant along the $`z`$-direction, since the potentials for these satisfy Laplace’s equation in two dimensions, and the method of images is also applicable for line charges. Acknowledgement Useful correspondence with Prof. David Griffiths is gratefully acknowledged.
# Conformal Invariance and Particle Aspects in General Relativity ## 1 Introduction Conformal invariance has been playing a particularly important role in the investigation of gravitational models ever since the emergence of such theories. In a system which includes matter, it is well known that conformal invariance requires the vanishing of the trace of the stress tensor in the absence of dimensional parameters. In the presence of dimensional parameters, the conformal invariance can be established for a large class of theories if the dimensional parameters are conformally transformed according to their dimensions. One general feature of conformally invariant theories is, therefore, the presence of varying dimensional coupling constants. In particular, one can say that the introduction of a constant dimensional parameter into a conformally invariant theory breaks the conformal invariance in the sense that a preferred conformal frame is singled out, namely that in which the dimensional parameter has the assumed constant configuration. Thus the breakdown of conformal invariance may be established by introducing a constant dimensional parameter into the theory. The determination of the corresponding preferred conformal frame depends on the nature of the problem at hand. In a conformally invariant gravitational model one usually considers the symmetry breaking as a cosmological effect. This would mean that one breaks the conformal symmetry by defining a preferred conformal frame in terms of the large-scale properties of cosmic matter distributed in a finite universe. In this way, the breakdown of conformal symmetry was found to be a framework in which one can look for the origin of the gravitational coupling of matter, both classical and quantum , at large cosmological scales. The purpose of this paper is to show that the cosmological breakdown of conformal invariance in a conformally invariant gravitational model, together with a local change of the corresponding preferred conformal frame, may be used to model a particle concept in general relativity. The organisation of the paper is as follows: We first study the breakdown of conformal symmetry in a conformally invariant gravitational model and define the resulting preferred conformal frame in terms of some cosmological characteristics, in close correspondence to the work in . We then show that, by a local change of the preferred conformal frame, it is possible to derive a Hamilton-Jacobi type equation with adjustable mass. In our presentation there is a substantial dynamical interplay between a particle in the ensemble and the applied conformal factor, in a form which is similar to the dynamical effect of the quantum potential on a particle in the context of the causal interpretation of relativistic quantum mechanics . This result seems to be interesting because it suggests that the emergence of quantal behaviour of matter may be a general feature of a conformally invariant gravitational model. We shall work with a metric having the signature (- + + +). ## 2 Breakdown of conformal invariance In this section we briefly review the work in . Consider the action functional $$S[\varphi ]=\frac{1}{2}\int d^4x\sqrt{-g}(g^{\mu \nu }\nabla _\mu \varphi \nabla _\nu \varphi +\frac{1}{6}R\varphi ^2)$$ (1) which describes a system consisting of a real scalar field $`\varphi `$ non-minimally coupled to gravity; $`R`$ is the scalar curvature.
Variations with respect to $`\varphi `$ and $`g_{\mu \nu }`$ lead to the equations $$(\Box -\frac{1}{6}R)\varphi =0$$ (2) $$G_{\mu \nu }=6\varphi ^{-2}\tau _{\mu \nu }(\varphi )$$ (3) where $`G_{\mu \nu }=R_{\mu \nu }-\frac{1}{2}g_{\mu \nu }R`$ is the Einstein tensor and $$\tau _{\mu \nu }(\varphi )=[\nabla _\mu \varphi \nabla _\nu \varphi -\frac{1}{2}g_{\mu \nu }\nabla _\alpha \varphi \nabla ^\alpha \varphi ]-\frac{1}{6}(g_{\mu \nu }\Box -\nabla _\mu \nabla _\nu )\varphi ^2$$ (4) with $`\nabla _\mu `$ denoting the covariant derivative. Taking the trace of (3) gives $$\varphi (\Box -\frac{1}{6}R)\varphi =0$$ (5) which is consistent with equation (2). This is a consequence of the conformal symmetry of the action (1) under the conformal transformations $$\varphi \overline{\varphi }=\mathrm{\Omega }^{-1}(x)\varphi ,g_{\mu \nu }\overline{g}_{\mu \nu }=\mathrm{\Omega }^2(x)g_{\mu \nu }$$ (6) where the conformal factor $`\mathrm{\Omega }(x)`$ is an arbitrary, positive and smooth function of space-time. Adding a matter source $`S_m`$ independent of $`\varphi `$ to the action (1) in the form $$S=S[\varphi ]+S_m$$ (7) yields the field equations $$(\Box -\frac{1}{6}R)\varphi =0$$ (8) $$G_{\mu \nu }=6\varphi ^{-2}[\tau _{\mu \nu }(\varphi )+T_{\mu \nu }]$$ (9) where $`T_{\mu \nu }`$ is the matter energy-momentum tensor. The following algebraic requirement $$T_\mu ^\mu =0$$ (10) then emerges as a consequence of comparing the trace of (9) with (8). This implies that only traceless matter can couple consistently to such gravity models. We may break the conformal symmetry by adding a dimensional mass term $`\frac{1}{2}\int d^4x\sqrt{-g}\mu ^2\varphi ^2`$, with $`\mu `$ being a constant mass parameter, to the action (7). This leads to $$\mu ^2\varphi ^2=T_\mu ^\mu .$$ (11) Consequently, the field equations become $$(\Box -\frac{1}{6}R-\mu ^2)\varphi =0$$ (12) $$G_{\mu \nu }-3\mu ^2g_{\mu \nu }=6\varphi ^{-2}[\tau _{\mu \nu }(\varphi )+T_{\mu \nu }].$$ (13) Now the basic input is to consider the invariance breaking as a cosmological effect. This would mean that one may take $`\mu ^{-1}`$ as the length scale characterizing the typical size of the universe $`R_0`$ and $`T_\mu ^\mu `$ as the average density of the large-scale distribution of matter $`\sim MR_0^{-3}`$, where $`M`$ is the mass of the universe. This leads, as a consequence of (11), to the estimation of the constant background value of $`\varphi `$ $$\varphi ^{-2}\sim R_0^{-2}(M/R_0^3)^{-1}\sim R_0/M\sim G$$ (14) where the well-known empirical cosmological relation $`GM/R_0\sim 1`$ has been used. Inserting this background value of $`\varphi `$ into the field equations (12), (13) leads to the following set of Einstein equations $$G_{\mu \nu }-3\mu ^2g_{\mu \nu }=6\varphi ^{-2}T_{\mu \nu }\sim GT_{\mu \nu }$$ (15) with a correct coupling constant $`\sim 8\pi G`$, and a term $`3\mu ^2`$ which appears as an effective cosmological constant $`\mathrm{\Lambda }`$ of the order of $`R_0^{-2}`$. The field equation (12) for $`\varphi `$ contains no new information. We should emphasize that the invariance breaking explained above dealt with a broken conformal invariance in the presence of a dimensional matter source which effectively appeared as a cosmological constant $`\mathrm{\Lambda }`$. By implication, we get a preferred conformal frame $`(\varphi ,g_{\mu \nu },\mu )`$ for the gravitational variables, namely that in which $`\varphi ^2\sim G^{-1}`$, $`\mu ^2\sim \mathrm{\Lambda }`$ and $`g_{\mu \nu }`$ is determined by the field equations (15). This preferred conformal frame has the remarkable property that it incorporates a correct coupling of the cosmic matter to gravity. We shall call it the cosmological frame.
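The orders of magnitude in Eqs. (14)-(15) are easy to check numerically. The sketch below uses rough, illustrative values for M and R₀ (not numbers taken from the paper), restoring the factors of c that the natural units of the text suppress:

```python
# Order-of-magnitude check of GM/R_0 ~ 1 and phi^{-2} ~ G, in SI units.
G  = 6.674e-11           # m^3 kg^-1 s^-2
c  = 2.998e8             # m/s
M  = 1e53                # kg, rough mass of the observable universe
R0 = 1e26                # m,  rough Hubble radius

print("G M / (R0 c^2) =", G * M / (R0 * c ** 2))          # ~ O(1)
print("R0 / M =", R0 / M, " vs  G / c^2 =", G / c ** 2)   # same order
```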
## 3 Particle interpretation A key feature of any fundamental theory consistent with a given symmetry is that its breakdown would lead to effects which can have various manifestations of physical importance. Therefore, in the case of conformal symmetry, one would expect that the corresponding cosmological invariance breaking, considered in the last section, would have an important effect on the local structure of the underlying theory. Here we would like to study one particular effect which illustrates the local particle aspects. There are two characteristic scales of squared mass in the cosmological frame $`(\varphi ,g_{\mu \nu },\mu )`$ which are subject to the scale hierarchy<sup>1</sup><sup>1</sup>1We use units in which $`\mathrm{}=c=1`$. In these units $`\varphi `$ has the dimension of mass. $`\mu ^2\ll \varphi ^2\sim G^{-1}.`$ (16) In terms of this scale hierarchy, it is possible to single out a congruence of hypersurface-orthogonal timelike curves, if we subject the corresponding generating vector field $`\nabla _\mu S`$ to the condition $`\nabla _\mu S\nabla ^\mu S=-(\varphi ^2-\mu ^2).`$ (17) We can observe that no known elementary particle can move along the congruence defined by (17), because of the large mass scale defined by the right-hand side of (17). Actually, we know from (16) that $$\varphi ^2-\mu ^2\approx \varphi ^2\sim G^{-1}=m_p^2$$ where $`m_p`$ is the Planck mass . The situation, however, changes if we consider a local change of the cosmological frame. To illustrate this point, let us write equation (17) in a new frame $`(\overline{\varphi },\overline{g}_{\mu \nu },\overline{\mu })`$ which is locally connected to the frame $`(\varphi ,g_{\mu \nu },\mu )`$ by a conformal transformation (6). Taking into account that $`S`$, as a dimensionless quantity, does not transform under conformal transformations, we find $`\overline{\nabla }_\mu S\overline{\nabla }^\mu S=-(\overline{\varphi }^2-\overline{\mu }^2)`$ (18) with $`\overline{\varphi }=\mathrm{\Omega }^{-1}\varphi `$, $`\overline{\mu }=\mathrm{\Omega }^{-1}\mu `$ and the barred quantities referring to the new frame. Now, from the field equation (12) we find $$\overline{\mu }^2=\frac{\overline{\Box }\overline{\varphi }}{\overline{\varphi }}-\frac{1}{6}\overline{R}$$ which, if combined with equation (18), leads to $`\overline{\nabla }_\mu S\overline{\nabla }^\mu S=-\overline{\varphi }^2+{\displaystyle \frac{\overline{\Box }\overline{\varphi }}{\overline{\varphi }}}-{\displaystyle \frac{1}{6}}\overline{R}.`$ (19) Now, if the new conformal frame $`(\overline{\varphi },\overline{g}_{\mu \nu },\overline{\mu })`$ is taken to be subjected to $`\overline{\nabla }_\mu S\overline{\nabla }^\mu \overline{\varphi }=0`$ (20) then equation (19) can be interpreted as a Hamilton-Jacobi equation for a particle with adjustable mass scale $`\overline{\varphi }^2`$. The relation (20) ensures that this mass scale will not change along the particle trajectory. In addition, we get a dynamical effect on the particle trajectories, reflected in the term $`\frac{\overline{\Box }\overline{\varphi }}{\overline{\varphi }}-\frac{1}{6}\overline{R}`$, which illustrates a modification of the particle mass due to the scalar curvature and the applied conformal transformation. In summary, we observe that starting from the given preferred cosmological frame $`(\varphi ,g_{\mu \nu },\mu )`$, we may derive a particle concept in terms of a corresponding local change of the conformal frame. In this approach no separate particle action needs to be introduced; that of a conformally invariant gravitational model suffices.
The particle aspects emerge from an internal condition connecting the local properties of a timelike congruence of curves, associated with a characteristic scale hierarchy in the cosmological frame, with particle properties in a new conformal frame. This observation emphasizes that general relativity, if suitably formulated as a conformally invariant field theory, does not ascribe any special significance to a separate particle action. We should, however, note that there is a certain limitation on the applicability of the particle concept. Actually, in a universe in which $`\varphi ^2`$ is smaller than $`\mu ^2`$, the above particle concept does not apply, for the mass scale of (17) becomes tachyonic. This limitation, however, does not seem to be a weakness of our particle concept, because the condition $`\varphi ^2<\mu ^2`$ describes a universe of trans-Planckian size. ## 4 Concluding remarks In this paper we have shown that the cosmological breakdown of conformal symmetry in a conformally invariant gravitational model, together with a local change of the corresponding preferred conformal frame, leads to a picture consistent with a particle concept. This picture may be considered as a manifestation of Mach’s principle, in that the particle concept emerges as a local effect arising from large-scale cosmological considerations. We emphasize the similarity of the term $`\frac{\overline{\Box }\overline{\varphi }}{\overline{\varphi }}`$ on the right-hand side of (19) with the quantum potential term in the context of the causal interpretation of relativistic quantum mechanics . This similarity merits attention because it provides an indication for a possible geometrization of the quantal behaviour of relativistic particles in the framework of a conformally invariant gravitational model.
# Formalizing the gene centered view of evolution ## Abstract A historical dispute in the conceptual underpinnings of evolution is the validity of the gene centered view of evolution. We transcend this debate by formalizing the gene centered view as a dynamic version of the mean field approximation. This establishes the conditions under which it is applicable and when it is not. In particular, it breaks down for trait divergence, which corresponds to symmetry breaking in evolving populations. The gene centered view addresses a basic problem in the interplay of selection and heredity in sexually reproducing organisms. Sexual reproduction disrupts the simplest view of evolution because the offspring of an organism are often as different from the parent as organisms that it is competing against. In the gene centered view the genes serve as indivisible units that are preserved from generation to generation. In effect, different versions of the gene, i.e. alleles, compete rather than organisms. It is helpful to explain this using the “rowers analogy” introduced by Dawkins. In this analogy boats of mixed left- and right-handed rowers are filled from a common rower pool. Boats compete in heats and it is assumed that a speed advantage exists for boats with more same-handed rowers. The successful rowers are then reassigned to the rower pool for the next round. Over time, a predominantly and then totally single-handed rower pool will result. Thus, the selection of boats serves, in effect, to select rowers, who therefore may be considered to be competing against each other. Other perspectives on evolution distinguish between vehicles of selection (the organisms) and replicators (the genes). However, a direct analysis of the gene centered view that reveals its domain of applicability has not yet been given. The analysis provided here, including all the equations, is applicable quite generally, but for simplicity it will be explained in terms of the rowers analogy.<sup>1</sup><sup>1</sup>1The rowers analogy may be considered a model of a single gene in an $`n`$-ploid organism with $`n`$ the number of rowers, or a model of $`n`$ genes with two alleles per gene and each pair labeled correspondingly. The formal discussion applies to complete genomes, i.e. to homolog and non-homolog genes. The formal question is: under what conditions (assumptions) can allelic (rower) competition serve as a surrogate for organism (boat) competition in the simple view of evolution? Formalizing this question requires identifying the conditions attributed to two steps in models of evolution, selection and reproduction. In the selection step, organisms (boats) are selected, while in the sexual reproduction step, new organisms are formed from the organisms that were selected. This is not fully discussed in the rowers model, but is implicit in the statement that victorious rowers are returned to the rower pool to be placed into new teams. The two steps of reproduction and selection can be written quite generally as: $`\{N(s;t)\}`$ $`=`$ $`R[\{N^{\prime }(s;t-1)\}]`$ (1) $`\{N^{\prime }(s;t)\}`$ $`=`$ $`D[\{N(s;t)\}]`$ (2) The first equation describes reproduction. The number of offspring $`N(s;t)`$ having a particular genome $`s`$ is written as a function of the reproducing organisms $`N^{\prime }(s;t-1)`$ from the previous generation. The second equation describes selection. The reproducing population $`N^{\prime }(s;t)`$ is written as a function of the same generation at birth $`N(s;t)`$. The brackets on the left indicate that each equation represents a set of equations for each value of the genome. The brackets within the functions indicate, for example, that each of the offspring populations depends on the entire parent population.
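As a concrete instance of Eqs. (1)-(2), the sketch below implements one reproduction step R (panmictic assembly of boats from the pool) and one selection step D (heats won with probability increasing in same-handedness) for the rowers analogy. The boat size, fitness rule and all parameter values are our own toy choices, not specifications from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
K = 8                                   # rowers per boat (hypothetical choice)

def reproduce(p, n_boats):
    """R: assemble boats by sampling rowers independently from the pool.

    p is the frequency of right-handed rowers; returns, for each boat,
    the number of right-handed rowers aboard.
    """
    return rng.binomial(K, p, size=n_boats)

def select(crews):
    """D: boats survive heats with probability set by their majority fraction."""
    fitness = np.maximum(crews, K - crews) / K
    survive = rng.random(crews.size) < fitness
    return crews[survive]

p = 0.55                                # initial right-handed frequency
for generation in range(40):
    crews = reproduce(p, n_boats=5000)
    crews = select(crews)
    p = crews.sum() / (K * crews.size)  # survivors returned to the pool
    print(generation, round(p, 3))
```

Starting from a slight right-handed excess, the pool drifts toward a single-handed composition, mirroring the outcome described in the analogy.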
To formalize the gene centered view, we introduce a dynamic form of what is known in physics as the mean field approximation. In the mean field approximation the probability of appearance of a particular state of the system (i.e. a particular genome, $`s`$) is the product of probabilities of the components (i.e. genes, $`s_i`$) $$P(s_1,\mathrm{\dots },s_N)=\prod _iP(s_i).$$ (3) In the usual application of this approximation, it can be shown to be equivalent to allowing each of the components to be placed in an environment which is an average over the possible environments formed by the other components of the system, hence the term “mean field approximation.” The key to applying this in the context of evolution is to consider carefully the effect of the reproduction step, not just the selection step. In many models of evolution that are discussed in the literature, the offspring are constructed by random selection of surviving alleles (a panmictic population). In the rowers analogy the return of successful rowers to a common pool is the same approximation. This approximation eliminates correlations in the genome that result from the selection step and thus imposes Eq. (3), the mean field approximation, on the reproduction step for the alleles of offspring. Even though it is not imposed on the selection step, inserting this approximation into the two-step process allows us to write both of the equations, Eqs. (1) and (2), together as an effective one-step update $$P^{\prime }(s_i;t)=\stackrel{~}{D}[\{P^{\prime }(s_i;t-1)\}]$$ (4) which describes the change in the allele populations of offspring at birth from one generation to the next. Since this equation describes the behavior of a single allele, it corresponds to the gene centered view. There is still a difficulty, pointed out by Lewontin. The effective fitness of each allele depends on the distribution of alleles in the population. Thus, the fitness of an allele is coupled to the evolution of other alleles. This is apparent in Eq. (4), which, as indicated by the brackets, is a function of all the allele populations. It corresponds, as in other mean field approximations, to placing an allele in an average environment formed from the other alleles. For example, the likelihood of victory (fitness) of a right-handed rower in a predominantly left-handed population differs from that of a right-handed rower in a predominantly right-handed population. Since the population changes over time, fitnesses are time dependent and therefore not uniquely defined. This problem with fitness assignment would not be present if each allele separately coded for an organism trait. While this is a partial violation of the simplest conceptual view of evolution, the applicability of a gene centered view can still be justified, as long as the contextual assignment of fitness is included. When the fitness of an organism phenotype is dependent on the relative frequency of phenotypes in a population of organisms it is known as frequency dependent selection, a concept that is here being applied to genes.
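The frequency dependence of the effective fitness entering Eq. (4) can be made explicit in the same toy model: a rower's survival chance is an average over the crew compositions it can find itself in, which depends on the current pool. The sketch below (our own parameterization, continuing the previous snippet's conventions) computes this mean-field fitness exactly:

```python
import numpy as np
from scipy.stats import binom

K = 8   # rowers per boat, as in the previous sketch

def effective_fitness(p):
    """Mean-field survival factor of a right-handed rower at pool frequency p.

    The rower's K-1 crewmates are drawn i.i.d. from the pool, so its
    fitness is an average of the boat fitness over crew compositions.
    """
    k = np.arange(K)                        # right-handed crewmates
    crew_prob = binom.pmf(k, K - 1, p)
    boat_fit = np.maximum(k + 1, K - (k + 1)) / K
    return np.sum(crew_prob * boat_fit)

for p in (0.1, 0.5, 0.9):
    print(p, effective_fitness(p))          # same allele, different fitness
```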
Such trait divergence arises when correlations in reproduction exist, so that reproduction does not force complete mixing of alleles. The correlations in reproduction do not have to be trait related. For example, they can be due to spatial separation of organisms causing correlations in reproduction among nearby organisms. Models of spatially distributed organisms are sometimes called models of spatially structured environments. However, this terminology suggests that the environment itself is spatially varying, and it is important to emphasize that symmetry breaking / trait divergence can occur in environments that are uniform (hence the terminology “symmetry breaking”). In the rowers model this has a direct meaning in terms of the appearance of clusters of mostly left- and mostly right-handed rowers if they are not completely mixed when reintroduced into and taken from the rower pool. Trait-related correlations in sexual reproduction (assortative mating) caused by, e.g., sexual selection would also have similar consequences. In either case, the gene centered view would not apply. Historically, the gene-centered view of evolution has been part of the discussion of attitudes toward altruism and group selection and related socio-political as well as biological concerns. Our focus here is on the mathematical applicability of the gene-centered view in different circumstances. While the formal discussion we present may contribute to the socio-political issues, we have chosen to focus on the mathematical concerns. The problem of understanding the mean-field approximation in application to biology is, however, also relevant to the problem of group selection. In typical models of group selection, asexually (clonally) reproducing organisms have fecundities determined both by individual traits and by group composition. The groups are assumed to be well defined, but periodically mixed. Similar to the gene-centered model, an assumption of random mixing is equivalent to a mean field theory. Sober and Wilson (1998) have used the term “the averaging fallacy” to refer to the direct assignment of fitnesses to individuals. This captures the essential concept of the mean-field approximation. However, both the limitations of this approximation in some circumstances and its usefulness in others do not appear to be generally recognized. For example, it is not necessary for well defined groups to exist for a breakdown in the mean-field approximation to occur. Correlations in organism influences are sufficient. Moreover, standard group-selection models rely upon averaging across groups with the same composition. For this case, where well defined groups exist and correlations in mixing satisfy averaging (mean-field) assumptions by group composition, equations developed by Price<sup>2</sup><sup>2</sup>2See discussion on pp. 73–74 of . separate and identify both the mean field contribution to fitness and corrections due to correlations. These equations do not apply in more general circumstances, when correlations exist in a network of interactions, and/or groups are not well defined, and/or averaging across groups does not apply. It is also helpful to make a distinction between the kind of objection raised by Lewontin to the use of averaging, and the failure that occurs due to correlations when the mean-field approximation does not apply.
In the former case, the assignment of fitnesses can be performed through the effect of the environment influencing the gene; in the latter case, an attempt to assign fitnesses to a gene would correspond to inventing non-causal interactions between genes. The mean field approximation is widely used in statistical physics as a “zeroth” order approximation to understanding the properties of systems. There are many cases where it provides important insight into some aspects of a system (e.g. the Ising model of magnets) and others where it is essentially valid (conventional BCS superconductivity). The application of the mean field approximation to a problem involves assuming that an element (or small part of the system) can be treated in the average environment that it finds throughout the system. This is equivalent to assuming that the probability distribution of the states of the elements factorizes. Systematic strategies for improving the study of systems beyond the mean field approximation, both analytically and through simulations, allow the inclusion of correlations between element behavior. An introduction to the mean-field approximation and a variety of applications can be found in Bar-Yam (1997). In conclusion, the gene centered view can be applied directly in populations where sexual reproduction causes complete allelic mixing, and only so long as effective fitnesses are understood to be relative to the prevailing gene pool. However, structured populations (i.e. species with demes—local mating neighborhoods) are unlikely to conform to the mean field approximation / gene centered view. Moreover, the gene centered view does not apply when considering the consequences of trait divergence, which can occur when such correlations in reproduction exist. These issues are important in understanding problems that lie at scales traditionally between the problems of population biology and those of evolutionary theory: e.g. the understanding of ecological diversity and sympatric speciation.
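As a concrete illustration of the correlated-reproduction effect behind trait divergence, the following sketch (our own construction, with a neutral copying rule rather than any model from the text) compares panmictic and local reproduction for a ring of rowers:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 200
s = rng.integers(0, 2, L)        # handedness (0/1) of rowers on a ring

def step(s, local):
    i = rng.integers(L)          # parent chosen at random (neutral model)
    if local:
        j = (i + rng.integers(-3, 4)) % L   # offspring placed near parent
    else:
        j = rng.integers(L)                 # panmictic: fully mixed pool
    s[j] = s[i]

for local in (False, True):
    t = s.copy()
    for _ in range(200 * L):     # roughly 200 generations
        step(t, local)
    p = t.mean()
    nn = np.mean(t == np.roll(t, 1))   # like-handed nearest neighbors
    rand = p**2 + (1 - p)**2           # like-handed random pairs
    print("local" if local else "mixed", round(float(nn - rand), 2))
```

The excess of like-handed nearest neighbors over like-handed random pairs stays near zero under complete mixing but becomes substantial under local reproduction, even though the environment is perfectly uniform.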
# Heating of trapped ions from the quantum ground state ## I Introduction Cold trapped ions have been proposed as a physical implementation for quantum computation (QC), and experiments on one and two ions have demonstrated proof of principle. Work is currently underway to extend these results. In ion trap QC, ion-laser interactions prepare, manipulate and entangle atomic states in ways dependent on the quantum motional state of the ions. A limiting factor in the fidelity of an operation is uncontrolled heating of the motion during manipulations. Heating leads to decoherence of the quantum superposition states involved in the computation, and can ultimately limit the number of elementary gate operations which can be strung together. Speculations have been made about the mechanisms that lead to heating, but measurements are scarce since the necessary sensitivity can be achieved only through laser cooling to near the ground state of motion. Additionally, systematic studies of the dependence of the heating rate on various trap properties are difficult, since often this requires the construction and operation of an entirely new trap apparatus, which may have different values of properties not under study. Indeed, the data presented here pose several interpretational difficulties for this reason. Heating of a single trapped ion (or of the center-of-mass motion of a collection of trapped ions) occurs when noisy electric fields at the position of the ion couple to its charge, giving rise to fluctuating forces. If the spectrum of fluctuations overlaps the trap secular motion frequency or its micro-motion sidebands, the fluctuating forces can impart significant energy to the secular motion of the ion. Here, we express the heating rate as the average number of quanta of energy gained by the secular motion in a given time. There are several candidates worth considering for sources of the noisy fields which give rise to heating. Some of these are: Johnson noise from the resistance in the trap electrodes or external circuitry (the manifestation of thermal electronic noise or black body radiation consistent with the boundary conditions imposed by the trap electrode structure), fluctuating patch-potentials (due, for example, to randomly oriented domains at the surface of the electrodes or adsorbed materials on the electrodes), ambient electric fields from injected electronic noise, fields generated by fluctuating currents such as electron currents from field-emitter points on the trap electrodes, and collisions with background atoms. Only the first two mechanisms will be considered here, since the remaining mechanisms (and others) are unlikely contributors or can be eliminated by comparing the measured heating rates of the center-of-mass and differential modes of two ions. As will be shown below, the Johnson-noise and patch-potential mechanisms give rise to heating rates which scale differently with the distance between the ion and the trap electrodes. ## II Two models for sources of heating ### A Preliminaries The heating rate caused by a fluctuating uniform field can be derived as in Savard et al. and agrees with a classical calculation. The Hamiltonian for a particle of charge $`q`$ and mass $`m`$ trapped in a harmonic well subject to a fluctuating, uniform (non-gradient) electric field drive $`ϵ(t)`$ is $$H(t)=H_0-q\epsilon(t)x,$$ (1) where $`H_0=p^2/2m+m\omega_m^2x^2/2`$ is the usual, stationary harmonic oscillator Hamiltonian with trap frequency $`\omega_m`$.
From first order perturbation theory, the rate of transition from the ground state of the well ($`|n=0\rangle`$) to the first excited state ($`|n=1\rangle`$) is $$\Gamma_{0\rightarrow 1}=\frac{1}{\hbar^2}\int_{-\infty}^{\infty}d\tau\,e^{i\omega_m\tau}\langle\epsilon(t)\epsilon(t+\tau)\rangle\,|\langle 0|qx|1\rangle|^2.$$ (2) Evaluating the motional matrix element gives $$\Gamma_{0\rightarrow 1}=\frac{q^2}{4m\hbar\omega_m}S_E(\omega_m)$$ (3) where $`S_E(\omega)\equiv 2\int_{-\infty}^{\infty}d\tau\,e^{i\omega\tau}\langle\epsilon(t)\epsilon(t+\tau)\rangle`$ is the spectral density of electric-field fluctuations. For an ion trapped by a combination of (assumed noiseless) static and inhomogeneous rf fields (Paul trap) the heating rate can be generalized to: $$\dot{\overline{n}}=\frac{q^2}{4m\hbar\omega_m}\left(S_E(\omega_m)+\frac{\omega_m^2}{2\Omega_T^2}S_E(\Omega_T\pm\omega_m)\right),$$ (4) where $`\dot{\overline{n}}`$ is the rate of change of the average thermal occupation number, $`\omega_m`$ is now the secular frequency of the mode of motion under consideration and $`\Omega_T`$ is the trap rf-drive frequency. The second term on the rhs of Eq. 4 is due to a cross-coupling between the rf and noise fields; it will not be present for the axial motion of a linear trap, which is confined only by static fields. Even for motion confined by rf ponderomotive forces, this second term will be negligible in the absence of spurious resonances in $`S_E(\omega)`$ or of an $`S_E(\omega)`$ that increases with frequency (since $`\omega_m^2/\Omega_T^2\sim 10^{-4}`$), and it is neglected in what follows. We differentiate two sources of the noise that gives rise to heating. The first is thermal electronic noise in the imperfectly conducting trap electrodes and elsewhere in the trap circuitry. Though this source of noise is ultimately microscopic in origin, for our purposes here it can be treated adequately by use of lumped circuit models. Thermal noise has been considered in the context of ion-trap heating in several places. The second source of noise considered here is due to “microscopic” regions of material (small compared to the size of the trap electrodes) with fluctuating, discontinuous potentials established, for example, at the interface of different materials or crystalline domains. We call this patch-potential noise; its microscopic origin leads to manifestly different heating behaviour from that for the thermal electronic noise case. Static patch-potentials are a well-known phenomenon, but little is known about the high-frequency (MHz) fluctuating patches which are required to account for our observed heating rates. ### B Thermal electronic noise Heating rates in the case of thermal electronic noise (Johnson noise) can be obtained simply through the use of lumped-circuit models, which are justified by the fact that the wavelength of the relevant fields (at typical trap secular or drive frequencies) is significantly larger than the size of the trap electrodes. Such an analysis has been carried out elsewhere, and only the major results will be quoted here. Resistances in the trap electrodes and connecting circuits give rise to an electric field noise spectral density $`S_E(\omega)=4k_BTR(\omega)/d^2`$, where $`d`$ is the characteristic distance from the trap electrodes to the ion, $`T`$ is the temperature (near room temperature for all of our experiments), $`k_B`$ is Boltzmann’s constant, and $`R(\omega)`$ is the effective (lumped-circuit) resistance between trap electrodes.
The heating rate is given by $$\dot{\overline{n}}_\mathrm{R}=\frac{q^2k_BTR(\omega_m)}{m\hbar\omega_md^2}.$$ (5) A numerical estimate of the heating rate for typical trap parameters gives $`0.1`$/s $`<\dot{\overline{n}}_\mathrm{R}<1`$/s, which is significantly slower than our observed rates. As a final note, the lumped circuit approach is convenient, but not necessary. In the Appendix, we present a microscopic model that is valid for arbitrary ion-electrode distances and reproduces Eq. 5 for all the traps considered here (and for all realistic traps, where $`d\gg\delta`$, with $`\delta`$ the skin depth of the electrode material at the trap secular frequency). ### C Fluctuating patch-potential noise To derive the heating rate for the case of microscopic patch-potentials we use the following approximate model. We assume that the trap electrodes form a spherical conducting shell of radius $`a`$ around the ion. Each of the patches is a disc on the inner surface of the sphere with radius $`r_p\ll a`$ and electric potential noise $`V_p(\omega)`$; equivalently, each patch is assumed to have power noise spectral density $`S_V(\omega)`$. The electric field noise at the ion due to a single patch is $`E_p(\omega)=3V_p(\omega)r_p^2/4a^3`$ in the direction of the patch. There are $`N\approx 4Ca^2/r_p^2`$ such patches distributed over the sphere with coverage $`C\le 1`$. Averaging over a random distribution of patches on the sphere, we find that the power spectral density of the electric field at the ion (along a single direction) is $$S_E(\omega)=\frac{N}{3}\left(\frac{E_p(\omega)}{V_p(\omega)}\right)^2S_V(\omega)=\frac{3CS_V(\omega)r_p^2}{4a^4}.$$ (6) This gives a heating rate $$\dot{\overline{n}}_\mathrm{P}=\frac{3q^2Cr_p^2S_V(\omega_m)}{16m\hbar\omega_md^4},$$ (7) in which the association $`d\approx a`$ is made. Note the difference in scaling with electrode size between Eqs. 5 and 7. The thermal electronic noise model gives a scaling $`\dot{\overline{n}}_\mathrm{R}\propto d^{-2}`$, while the patch-potential model gives $`\dot{\overline{n}}_\mathrm{P}\propto d^{-4}`$. In fact, a $`d^{-4}`$ dependence also arises from a random distribution of fluctuating charges or dipoles. ## III Measurements ### A Measuring the heating rate To determine the heating rate, we first cool the ion to near the ground state. In sufficiently strong traps, this is achieved simply by laser cooling with light red-detuned from a fast cycling transition ($`\gamma\lesssim\omega_m`$, where $`\gamma`$ is the radiative linewidth of the upper state) propagating in a direction such that its $`k`$-vector has a component along the direction of the mode of interest. In weaker traps, additional sideband Raman cooling is utilized to cool to the ground state. Typical starting values of $`\overline{n}`$, the average number of thermal phonons in the mode of interest, are between $`0`$ and $`2`$. After cooling and optically pumping the ion to its internal ground state (denoted $`|\downarrow\rangle`$), we drive Raman transitions between atomic and motional levels. Tuning the Raman difference frequency $`\Delta\omega`$ to the $`k^{th}`$ motional blue sideband (bsb) at $`\Delta\omega=\omega_0+k\omega_m`$ drives the transition $`|\downarrow\rangle|n\rangle\rightarrow|\uparrow\rangle|n+k\rangle`$, where $`|\downarrow\rangle,|\uparrow\rangle`$ refer to the internal (spin) states of the atom that are separated by $`\omega_0`$. The $`k^{th}`$ red sideband (rsb) at $`\Delta\omega=\omega_0-k\omega_m`$ drives $`|\downarrow\rangle|n\rangle\rightarrow|\uparrow\rangle|n-k\rangle`$. The measurement utilizes the asymmetry in the strengths of the red and blue motional sidebands to extract $`\overline{n}`$.
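Plugging representative numbers into Eqs. (5) and (7) above gives a feel for the two scalings; all parameter values in this sketch are illustrative guesses (the effective resistance and patch noise density are tuned only to land near the rates discussed in the text):

```python
import numpy as np

q = 1.602e-19            # C
m = 9.012 * 1.661e-27    # kg, 9Be+ mass
hbar = 1.055e-34         # J s
kB = 1.381e-23           # J/K
T = 300.0                # K
wm = 2 * np.pi * 10e6    # rad/s, secular frequency
d = 100e-6               # m, ion-electrode distance

# Eq. (5), Johnson noise: scales as d^-2 (R is a hypothetical 10 mOhm)
R = 0.01
ndot_R = q**2 * kB * T * R / (m * hbar * wm * d**2)

# Eq. (7), patch potentials: scales as d^-4 (C, r_p, S_V are guesses)
C, rp, SV = 1.0, 1e-6, 2e-15
ndot_P = 3 * q**2 * C * rp**2 * SV / (16 * m * hbar * wm * d**4)

print(f"Johnson-noise rate: {ndot_R:8.2f} quanta/s")   # ~ 1/s
print(f"patch-noise rate:   {ndot_P:8.0f} quanta/s")   # ~ 1000/s = 1/ms
# halving d raises the first rate 4x but the second 16x
```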
The strengths of the sidebands are defined as the probability of making a transition $`|\downarrow\rangle\rightarrow|\uparrow\rangle`$, which depends on the occupation of the motional levels. The strengths are probed by a Raman pulse of duration $`t`$ tuned to either $`k^{th}`$ sideband. The probability $`P_{\downarrow}`$ of remaining in $`|\downarrow\rangle`$ after probing is measured, and the strengths $`I_k^{\mathrm{rsb}}=1-P_{\downarrow,\mathrm{rsb}}`$ and $`I_k^{\mathrm{bsb}}=1-P_{\downarrow,\mathrm{bsb}}`$ are extracted. For thermal motional states, the strengths of the red and blue sidebands are related by $`I_k^{\mathrm{rsb}}`$ $`=`$ $`{\displaystyle\sum_{m=k}^{\infty}}P_m\sin^2\Omega_{m,m-k}t`$ (8) $`=`$ $`\left(\frac{\overline{n}}{1+\overline{n}}\right)^k{\displaystyle\sum_{m=0}^{\infty}}P_m\sin^2\Omega_{m+k,m}t`$ (9) $`=`$ $`\left(\frac{\overline{n}}{1+\overline{n}}\right)^kI_k^{\mathrm{bsb}}`$ (10) where $`\Omega_{m+k,m}=\Omega_{m,m+k}`$ is the Rabi frequency of the $`k^{th}`$ sideband between levels $`m`$ and $`m+k`$, and $`P_m=\overline{n}^m/(1+\overline{n})^{m+1}`$ is the probability of the $`m^{th}`$ level being occupied for a thermal distribution of mean number $`\overline{n}`$. The ratio of the sidebands $`R_k\equiv I_k^{\mathrm{rsb}}/I_k^{\mathrm{bsb}}`$ is independent of the drive time $`t`$ and immediately gives the mean occupation number $`\overline{n}`$, $$\overline{n}=\frac{(R_k)^{1/k}}{1-(R_k)^{1/k}},$$ (11) which is valid even if the Lamb-Dicke criterion is not satisfied. In principle, $`k`$ should be chosen to be the positive integer nearest to $`\overline{n}`$ in order to maximize sensitivity. In practice we use $`k=`$1, 2, or 3 in most cases. Note that Eq. 11 is valid only for thermal states; this is adequate since Doppler cooling leaves the motion in a thermal state, as does any cooling to near the ground state. In order to determine the heating rate $`\dot{\overline{n}}`$, delays with no laser interaction are added between the cooling cycle and the probing cycle. An example of a data set at a fixed trap secular frequency is shown in Fig. 1. The error bars are determined as follows: the raw data of Raman scans over the sidebands (such as those shown in the insets of Fig. 1) are fit to Gaussians, from which the depths of the sidebands are extracted, with the error on the parameter estimates calculated assuming a normal distribution of the data. The errors on the rsb and bsb strengths are propagated through Eq. 11 for an error on $`\overline{n}`$. The error bars shown are one-sigma, and include only statistical factors. These errors are incorporated in the linear regression to extract $`\dot{\overline{n}}`$ with an appropriate error. Many such data sets are taken for various types of traps and at different secular frequencies. ### B The traps The measurements of heating rate in this paper extend over a five year period and utilize six different traps. The traps are summarized in Table I and described in the references listed; here only a brief discussion is included. The “ring” traps are approximate quadrupole configurations consisting of a flat electrode ($`125\mu`$m thick) with a hole drilled through it (the ring) and an independent “fork” electrode ($`100\mu`$m thick) that forms endcaps on either side of the electrode, centered with the hole, similar to the trap shown in Fig. 2.
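Stepping back to the analysis chain around Eq. (11), a short sketch of the extraction of $`\dot{\overline{n}}`$; the delay and sideband-ratio values below are made up for illustration and are not data from this paper:

```python
import numpy as np

def nbar_from_ratio(R, k=1):
    """Eq. (11): thermal occupation from the red/blue sideband ratio
    R_k = I_k^rsb / I_k^bsb, valid for a thermal motional state."""
    r = R ** (1.0 / k)
    return r / (1.0 - r)

# hypothetical measured first-sideband ratios vs. delay (not real data)
delay_ms = np.array([0.0, 1.0, 2.0, 4.0])
ratio    = np.array([0.05, 0.50, 0.66, 0.80])

nbar = nbar_from_ratio(ratio)            # ~ 0.05, 1.0, 1.9, 4.0 quanta
slope, intercept = np.polyfit(delay_ms, nbar, 1)
print(f"heating rate ~ {slope:.1f} quanta/ms, nbar(0) ~ {intercept:.2f}")
```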
In trap 1, the ring and endcap electrodes are at the same average potential; in traps 2 and 3 a static bias field could be added between the fork and ring to change the distribution of binding strengths along the three principal axes of the trap. The size of these traps is stated as the hole radius, with the endcap spacing approximately 70% of the hole diameter. For the elliptical ring trap (trap 2) the stated size is the radius along the minor axis and the aspect ratio is 3:2; the fork tines are parallel to the major axis of the ellipse. Traps 3a and 3b were drilled into a single flat electrode with a single graded fork electrode (see Fig. 2). The rings are circular and the size stated is the radius. This was the trap used for the size-scaling measurements. The heating in all of the ring traps was measured in a direction in the plane of the ring electrode, parallel to the tines of the fork electrode. Traps 4, 5 and 6 are similar linear traps with the geometry indicated in Fig. 3. Trap 6 was made slightly larger than traps 4 and 5 by increasing the space between the two electrode wafers. Heating was measured along the axial direction, which has only a static confining potential. The size quoted in Table I for the linear traps is the distance between the ion and the nearest electrode. All traps are mounted at the end of a coaxial $`\lambda`$/4 resonator for rf voltage buildup. Typical resonator quality factors are around 500 and the rf voltage at the open end is approximately 500 V with a few watts of input power. In all traps except traps 3a and 3b the resonator is inside the vacuum chamber with the trap. In traps 3a and 3b, the resonator is outside the chamber, with the high-voltage rf applied to the trap through a standard vacuum feedthrough. Since we believe that surface effects are an important factor in heating, we cleaned the electrode surfaces before using a trap. When trap electrodes were recycled, they were first cleaned with HCl in order to remove the Be coating deposited by the atomic source. For the molybdenum traps an electro-polish in phosphoric acid was then used. For the beryllium electrodes electro-polishing in a variety of acids was ineffective, so abrasive polishing was used. Finally, the traps were rinsed in distilled water followed by methanol. The gold electrodes of the linear traps were cleaned with solvents after being evaporatively deposited on their alumina substrates. The time of exposure of clean trap electrodes to the atmosphere before the vacuum chamber was evacuated was typically less than one day. The traps were then vacuum baked at $`350^{\circ}`$C for approximately three days. ### C Data Our longest-term heating measurements were made on trap 1. In Figure 4 we plot the heating rate as a function of the date of data acquisition for a fixed trap frequency ($`\approx 11`$ MHz). The heating rate is on the order of 1 quantum per millisecond, with a basic trend upwards of $`\sim 1`$ quantum per millisecond per year. Over this time the electrodes were coated with Be from the source ovens, but beyond this, nothing was changed in the vacuum envelope, which was closed for this entire period of time. The cause of the increase in heating rate is unknown, but may be related to increased Be deposition on the electrodes; Be plating on the trap electrodes could be a source of patch-potential noise. Figure 5 shows heating rates in the linear traps (traps 4, 5 and 6) and the elliptical ring trap (trap 2) as a function of trap secular frequency.
The frequency dependence of the heating rate is expected to scale as $`S_E(\omega_m)/\omega_m`$ (Eq. 4). For example, a trap electrode with a flat noise spectrum ($`S_E(\omega)=`$ constant) will have a heating rate that scales as $`\omega_m^{-1}`$. The actual spectrum of fluctuations is impossible to know a priori, but in principle the data can be used to extract a spectrum over a limited frequency range given the model leading to Eq. 4. For the three linear traps, the heating rate data are most consistent with an $`\omega_m^{-2}`$ scaling, implying $`S_E\propto\omega^{-1}`$. This does not greatly assist in identifying a physical mechanism for the heating. For example, pure Johnson noise will have a flat spectrum, low-pass-filtered Johnson noise will have a spectrum that decreases with increasing frequency, and the spectrum of fluctuations in the patch-potential case is entirely unknown. In addition to the theoretical ambiguity, there is evidence in other data sets of different frequency scalings (though they are always power-law scalings). This measurement certainly cannot be used to pinpoint a heating mechanism; it is presented here only for completeness. The data of Figure 5b provide a first indication of the scaling of the heating rate with trap size. Trap 6 is about 1.3 times larger than trap 5, while its heating rate (at 10 MHz) was a factor of 3 lower. This indicates that the dependence of heating rate on trap size is stronger than $`d^{-2}`$, but is consistent with $`d^{-4}`$. Of course, this comparison is to be taken with some caution, since these are two separate traps measured several weeks apart, and therefore likely had different microscopic electrode environments. However, a comparison is warranted since the traps were identical apart from their sizes. In particular, all the associated electronics was the same and the rf drive voltage was very nearly the same. In fact, the rf voltage was slightly larger for the measurements on trap 6, which showed the lower heating rate. This is important to note because we observe a slight dependence of the heating rate on the applied rf trapping voltage. Though we have only a qualitative sense of this dependence at present, it seems that heating rates increase with rf voltage, up to a point at which the effect levels off. This rf-voltage dependence is observed along directions where the ion is confined both by static fields and by ponderomotive fields. It may not be unreasonable that the increased rf voltage increases the intensity of the noise source (possibly due to an increase of the temperature of the electrodes), even when it does not affect the trap secular frequency, as in the axial direction of the linear traps. Trap 3 was designed to give a controlled measure of the heating rate as a function of trap size, while all other parameters were held fixed. The trap electrodes were made from the same substrates, the electrodes were subjected to the same pre-use cleaning, the traps were in the same vacuum envelope and driven by the same rf electronics (simultaneously), and data for both traps were acquired with minimal delay. For direct comparison at the same secular frequency in both traps, it was necessary to change the applied rf voltage since $`\omega_m\propto 1/d^2`$. (A static bias between ring and endcap can be added, as discussed above, but this was not sufficient to measure heating at identical secular frequencies for the same rf drive.) There are two data sets to be discussed for this trap, shown in Figure 6.
In the first set, shown in Figure 6a, we have data points at two different secular frequencies for the “small trap” (trap 3a) and one point for the “big trap” (trap 3b). The heating rates of the small trap are comparable to the heating rates for other traps and show an $`\omega_m^{-1}`$ scaling of the heating rate. The single point on the big trap is at a lower secular frequency, yet has a much slower heating rate. In fact, if we extrapolate the data from the small trap to the same secular frequency (using $`\omega_m^{-1}`$), the heating rate is over an order of magnitude lower in the big trap. The ratio of the heating rate in the small trap to that of the big trap is $`20\pm 6`$. This is a much stronger scaling than that predicted by a Johnson noise heating mechanism (the $`d^{-2}`$ scaling of Eq. 5 predicts a factor of 4.8), but is consistent with the scaling in the patch-potential case (the $`d^{-4}`$ scaling of Eq. 7 predicts a factor of 23). When these data are used to infer an exponent for the size-scaling, the result is $`d^{-3.8\pm 0.6}`$. For the second data set, shown in Figure 6b, the trap was removed from the vacuum enclosure, given the usual cleaning (as discussed above), and replaced for the measurements. In this data set, the trap behaved quite differently from all other traps, with heating rates significantly below those of other traps. Also, $`S_E`$ must have been a strong function of $`\omega`$ for this trap, since the scaling with trap frequency was rather pronounced. The scaling with size was also strong: the heating rate was 16,000 times smaller in the big trap. When these data are used to infer an exponent for the size-scaling, the result is $`d^{-12\pm 2}`$. Needless to say, it is difficult to draw general conclusions from the data for this particular trap, but the difference in heating rates between the two traps seems to strongly indicate, again, that Johnson noise is not the source of the heating. We cannot be sure why this trap had such anomalous heating behaviour, but we speculate that it is due to a less-than-usual deposition of Be on the trap electrodes prior to the measurements, because the trap loaded ions with minimal exposure to the Be source atomic beam. At this point it is useful to compare the present results to heating rates in other experiments. There are two other measurements. The first was done with <sup>198</sup>Hg<sup>+</sup>. For that experiment $`\omega_m/2\pi\approx 3`$ MHz and $`d\approx 450\mu`$m, and the heating rate was 0.006/ms. Accounting for scalings with trap frequency ($`\omega_m^{-1}`$) and mass ($`m^{-1}`$), these results are consistent with the present results for a size-scaling of $`d^{-4}`$. Another measurement has been made with <sup>40</sup>Ca<sup>+</sup>. For that experiment $`\omega_m/2\pi\approx 4`$ MHz and $`d\approx 700\mu`$m, and the heating rate was 0.005/ms. Compared to the present experiments and the Hg experiment, this is also consistent with a $`d^{-4}`$ scaling, although it is certainly unlikely that all systems had the same patch field environment. ## IV Conclusions and outlook We have measured heating from the ground state of trapped ions in different traps. The magnitude of the heating rates and the results of the size-scaling measurements are inconsistent with thermal electronic noise as the source of the heating. The results do not indicate any strong dependence on trap-electrode material or on the type of trap potential (ponderomotive or static). The rf voltage applied to the electrodes may play a role in heating, inasmuch as it may have an influence on patch-potentials.
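Returning to the size-scaling numbers above, the quoted exponent for the first data set can be reproduced from the rate ratio alone; propagating only the rate-ratio error gives a slightly smaller uncertainty than the quoted $`\pm 0.6`$, which presumably folds in additional systematics:

```python
import numpy as np

# ratio of small-trap to big-trap heating rates (first data set, Fig. 6a)
ratio, dratio = 20.0, 6.0
# the d^-2 law of Eq. 5 predicts a factor 4.8, so the electrode-size
# ratio of the two traps is sqrt(4.8)
size_ratio = np.sqrt(4.8)

n = np.log(ratio) / np.log(size_ratio)
dn = dratio / (ratio * np.log(size_ratio))
print(f"rate ~ d^-({n:.1f} +/- {dn:.1f})")   # -> d^-(3.8 +/- 0.4)
```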
Since we have not identified the mechanism for the observed heating, it is difficult to say what path should be taken to correct it. If fluctuating patch-potentials on the surface of the electrodes are the cause, then further cleaning may be appropriate. Additionally, better masking of the trap electrodes from the Be source ovens may help. The results, coupled with those of other experiments, strongly indicate that bigger traps have smaller heating rates. This is not a surprise, but the strength of the scaling may be. With little sacrifice in the trap secular frequency (which ultimately determines the fastest rate of coherent manipulation) a dramatic decrease in the heating rate vs. logic gate speed appears possible using larger traps. We acknowledge support from the U. S. National Security Agency, Office of Naval Research and Army Research Office. We thank Chris Langer, Pin Chen, and Mike Lombardi for critical readings of the manuscript. ## V Appendix: Thermal electric fields We are interested in the thermal electric field power spectral density $`S_{E_i}(𝒓,\omega)`$ generated from a specified volume of conductor. The conductor can be decomposed into a web of resistors, each carrying current spectral density $`S_{I_i}=4k_BT/R_i`$ (where we assume $`k_BT\gg\hbar\omega`$). The resistance along the $`i`$th direction of an infinitesimal volume element is $`R_i=dl/(\sigma dA)`$, where $`\sigma`$ is the conductivity, $`dl`$ is the length along $`i`$ and $`dA`$ is the cross-sectional area. A Fourier component of current $`I_i(\omega)`$ through the volume $`dV=dl\,dA`$ gives rise to an electric dipole $`P_i(\omega)=I_i(\omega)dl/\omega`$, thus the equivalent spectral density of electric dipole of the infinitesimal resistor is isotropic: $`S_{P_i}(\omega)=4k_BT\sigma dV/\omega^2`$. The electric field from an electric dipole $`𝑷(𝒓^{\prime},\omega)`$ oscillating at frequency $`\omega`$ and position $`𝒓^{\prime}`$ is $$E_i(𝒓,\omega)=\underset{j=x,y,z}{\sum}P_j(𝒓^{\prime},\omega)G_{ij}(𝒓,𝒓^{\prime},\omega).$$ (12) In this expression, $`G_{ij}(𝒓,𝒓^{\prime},\omega)`$ is a Green function matrix, representing the $`i`$th component of the electric field at position $`𝒓`$ due to the $`j`$th component of a point dipole at $`𝒓^{\prime}`$, which satisfies the appropriate boundary conditions of the geometry. The electric field spectral density at position $`𝒓`$ is an integral over the dipoles in the conductor volume: $$S_{E_i}(𝒓,\omega)=\frac{4k_BT}{\omega^2}\int\sigma(𝒓^{\prime})\underset{j=x,y,z}{\sum}|G_{ij}(𝒓,𝒓^{\prime},\omega)|^2\,dV^{\prime}.$$ (13) The Green function satisfies $`G_{ij}(𝒓,𝒓^{\prime},\omega)=G_{ji}(𝒓^{\prime},𝒓,\omega)`$, so the above integral can be interpreted as the Ohmic power absorbed by the conductor from the electric fields generated by a point dipole at position $`𝒓`$. By energy conservation, this must be equivalent to the time-averaged power dissipated by a point dipole at $`𝒓`$, which is related to the imaginary part of the Green function matrix $`G_{ij}(𝒓,𝒓,\omega)`$. This simplifies Eq. (13), leaving the fluctuation-dissipation theorem $$S_{E_i}(𝒓,\omega)=\frac{2k_BT}{\omega}\underset{j=x,y,z}{\sum}\mathrm{Im}\,G_{ij}(𝒓,𝒓,\omega).$$ (14) Agarwal solved Maxwell’s equations for $`G_{ij}(𝒓^{\prime},𝒓,\omega)`$ for the simple geometry of an infinite sheet of conductor filling the space $`z\le 0`$, with the conductor-vacuum interface in the $`z=0`$ plane. Although this idealized geometry is far from any real ion trap electrode structure, the rough scalings of the thermal fields can be relevant to real ion trap geometries.
From Agarwal’s solution, the Green function matrix for this problem is diagonal, with axial ($`z`$) and radial ($`\rho`$) components $$G_{zz}(z,z,\omega)=G^{free}(\omega)+i\int_0^{\infty}\frac{q^3}{w_0}\left(\frac{w_0\epsilon-w}{w_0\epsilon+w}\right)e^{2iw_0z}dq$$ (15) $$G_{\rho\rho}(z,z,\omega)=G^{free}(\omega)-\frac{i}{2}\int_0^{\infty}q\left[w_0\left(\frac{w_0\epsilon-w}{w_0\epsilon+w}\right)+\frac{k^2}{w_0}\left(\frac{w-w_0}{w+w_0}\right)\right]e^{2iw_0z}dq,$$ (16) In the above expressions, $`\epsilon(\omega)=\epsilon_0+i\sigma/\epsilon_0\omega`$ is the dielectric function of the conductor (in the low frequency limit), $`k=\omega/c`$, and the wavevectors $`w_0`$ and $`w`$ (generally complex) are defined by $`w_0^2=k^2-q^2`$ and $`w^2=k^2\epsilon-q^2`$, with $`\mathrm{Im}\,w_0\ge 0`$ and $`\mathrm{Im}\,w\ge 0`$. The free space Green’s function $`G^{free}(\omega)`$ has imaginary part $`\mathrm{Im}\,G^{free}(\omega)=k^3/6\pi\epsilon_0`$ and gives rise to the isotropic free space blackbody electric field fluctuations when substituted into Eq. (14). The above integrals are significantly simplified in the “quasi-static” limit, where $`kz\ll 1`$ and the conductivity is sufficiently high so that $`k\delta\ll 1`$, where $`\delta=\sqrt{2c^2\epsilon_0/\omega\sigma}`$ is the skin-depth of the conductor. Despite these conditions, no restriction is placed on the value of $`z/\delta`$. We break the above integrals into two pieces. The first piece $`\int_0^k`$ has $`q\le k`$ with $`w_0`$ real. In the quasi-static limit, this piece can be shown to cancel the free space contribution to the transverse Green function $`\mathrm{Im}\,G_{\rho\rho}(z,z,\omega)`$ while doubling the free space contribution to the axial Green function $`\mathrm{Im}\,G_{zz}(z,z,\omega)`$. Physically, the presence of the conductor negates the transverse free space blackbody field while it doubles the axial blackbody field due to a near-perfect reflection. The second piece of the integrals $`\int_k^{\infty}`$ has $`q\ge k`$ with $`w_0`$ imaginary. These pieces of the integral can be solved to lowest order in $`kz`$ and $`k\delta`$. Combining terms and substituting the results into Eq. (14), the thermal electric field spectral density is $$S_{E_z}(z,\omega)=\frac{2k_BT\omega^2}{3\pi\epsilon_0c^3}+\frac{k_BT}{4\pi\sigma z^3}\sqrt{\frac{1}{2}+\sqrt{\frac{1}{4}+\frac{z^4}{\delta^4}}}$$ (17) $$S_{E_\rho}(z,\omega)=\frac{k_BT}{8\pi\sigma z^3}\sqrt{\frac{1}{2}+\sqrt{\frac{1}{4}+\frac{z^4}{\delta^4}}}.$$ (18) These expressions show that the thermal electric field noise scales as $`1/z^3`$ for $`z\ll\delta`$, but scales as $`1/z^2`$ for $`z\gg\delta`$. At large distances $`z>\sqrt{\delta/k}`$ (with $`kz\ll 1`$), the axial field noise settles toward twice the free space blackbody value while the radial field vanishes; this result has also been reported elsewhere. The behavior is shown in Fig. 7, where Eqs. (17) and (18) have been substituted into Eq. 4, giving the expected thermal heating rate for a <sup>9</sup>Be<sup>+</sup> ion trapped with molybdenum electrodes at room temperature. Note that the predicted heating rate at trap sizes typical in our experiments is significantly slower than the 0.1–1 quanta/s rate estimated above. This difference comes from the choice of the value of the resistance in Eq. 5, which was chosen as an absolute upper limit. When interpreting these results, only the rough scaling should be considered. Realistic ion trap electrode geometries are more complicated than a single infinite conducting plane, involving a more closed electrode structure.
This generally requires a full numerical solution to the relevant boundary value problem. Moreover, we are usually interested in the electric field fluctuations at the center of the trap, where these fluctuations will be substantially different from those above an infinite plate.
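As a numerical illustration of the crossover contained in Eq. (18), the following sketch evaluates the radial noise above a molybdenum half-space (the conductivity is an approximate handbook value; the frequency and distances are illustrative):

```python
import numpy as np

def S_E_rho(z, omega, sigma, T=300.0):
    """Eq. (18): radial thermal field noise above a conducting half-space."""
    kB = 1.381e-23
    eps0 = 8.854e-12
    c = 2.998e8
    delta = np.sqrt(2 * c**2 * eps0 / (omega * sigma))   # skin depth
    root = np.sqrt(0.5 + np.sqrt(0.25 + (z / delta)**4))
    return kB * T / (8 * np.pi * sigma * z**3) * root

sigma_mo = 1.9e7           # S/m, molybdenum (approximate)
omega = 2 * np.pi * 10e6   # 10 MHz secular frequency (skin depth ~ 37 um)
for z in (1e-6, 10e-6, 100e-6, 1e-3):
    print(f"z = {z:7.0e} m   S_E = {S_E_rho(z, omega, sigma_mo):.2e} (V/m)^2/Hz")
# for z << delta the noise falls as 1/z^3; for z >> delta, as 1/z^2
```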
# Nonuniversal mound formation in nonequilibrium surface growth ## Abstract We demonstrate, using well-established nonequilibrium limited-mobility solid-on-solid growth models, that mound formation in the dynamical surface growth morphology does not necessarily imply the existence of a surface edge diffusion bias (“the Schwoebel barrier”). We find mounded morphologies in several nonequilibrium growth models which incorporate no Schwoebel barrier. Our numerical results indicate that mounded morphologies in nonequilibrium surface growth may arise from a number of distinct physical mechanisms, with the Schwoebel instability being one of them. Keywords: Computer simulations; Models of surface kinetics; Molecular beam epitaxy; Scanning tunneling microscopy; Growth; Surface diffusion; Surface roughening In vacuum deposition growth of thin films or epitaxial layers (e.g. MBE) it is common to find mound formation in the evolving dynamical surface growth morphology. Although the details of the mounded morphology could differ considerably depending on the systems and growth conditions, the basic mounding phenomenon in surface growth has been reported in a large number of recent experimental publications. The typical experiment monitors vacuum deposition growth on substrates using STM and/or AFM spectroscopies. Growth mounds are observed under typical MBE-type growth conditions, and the resultant mounded morphology is statistically analyzed by studying the dynamical surface height $`h(𝐫,t)`$ as a function of the position $`𝐫`$ on the surface and the growth time $`t`$. Much attention has focused on this ubiquitous phenomenon of mounding and the associated pattern formation during nonequilibrium surface growth, for reasons of possible technological interest (e.g. the possibility of producing controlled nanoscale thin film or interface patterns) and of fundamental interest (e.g. understanding nonequilibrium growth and pattern formation). The theoretical interpretation of the mounding phenomenon has often been based on the step-edge diffusion bias or the so-called Schwoebel barrier effect (also known as the Ehrlich-Schwoebel, or ES, barrier). The basic idea of ES barrier-induced mounding (often referred to as an instability) is simple: the ES effect produces an additional energy barrier for diffusing adatoms on terraces from coming “down” toward the substrate, thus probabilistically inhibiting the attachment of atoms to lower or down-steps and enhancing their attachment to upper or up-steps; mound formation results because deposited atoms cannot come down from upper to lower terraces, and so three-dimensional mounds or pyramids build up as atoms are deposited on top of already existing terraces. The physical picture underlying mounded growth under an ES barrier is manifestly obvious, and clearly the existence of an ES barrier is a sufficient condition for mound formation in nonequilibrium surface growth. Our interest in this paper is to discuss the necessary condition for mound formation in the nonequilibrium surface growth morphology — more precisely, we want to ask the inverse question, namely, whether the observation of mound formation requires the existence of an ES barrier.
Through concrete examples we demonstrate that mound formation in the nonequilibrium surface growth morphology does not necessarily imply the existence of an ES barrier, and we contend that the recent experimental observations of mound formation in nonequilibrium surface growth morphology should not be taken as definitive evidence in favor of an ES barrier-induced universal mechanism for pattern formation in surface growth. Mound formation in nonequilibrium surface growth is a non-universal phenomenon, and could have very different underlying causes in different systems and situations, with the Schwoebel instability being one particular mechanism (among many) for the mounded morphology. Before presenting our results we point out that the possible nonuniversality in surface growth mound formation (i.e. that mounds do not necessarily imply an ES barrier) has recently been mentioned in at least two experimental publications, where it was emphasized that the mounded patterns seen on Si, GaAs, and InP surfaces during MBE growth were not consistent with the phenomenology of a Schwoebel instability. In two other recent experimental publications mound formation during semiconductor surface growth (Ge, GaAs) was carefully analyzed using the prevailing Schwoebel instability phenomenology, with a conclusion not very dissimilar from that of the studies just mentioned. In particular, the ES barrier-based analyses of the experimental data in both papers produced rather weak Schwoebel effects, leading to the conclusion that the Schwoebel instability in all likelihood plays a small to negligible role in the observed mound formations in those experiments. Very recently, experimental observations of striking mound formation in Au and MgO vapor deposition growth have been interpreted without invoking any ES barrier effect. Thus, some observed mound formations in nonequilibrium surface growth are interpreted essentially without invoking any key role for an ES barrier, whereas others have mostly been interpreted as arising essentially from a Schwoebel instability. The inevitable conclusion from recent experimental observations is therefore that mound formation in surface growth does not necessarily arise from the universal mechanism of a Schwoebel instability, but may be caused by different non-universal mechanisms in different experimental situations. The purpose of the current article is to explore this nonuniversality in mound formation in some detail using simple solid-on-solid (SOS) growth models where the kinetic mechanisms leading to the mounded morphologies are explicitly obvious, and therefore compelling conclusions can be drawn about the precise physical mechanism producing the mounds. A direct comparison between experimental results and our rather simplistic limited mobility nonequilibrium SOS models, however, is unwarranted due to the extreme simplicity of the growth and diffusion rules in our models — our models do serve the purpose of explicitly demonstrating the fact that mounded morphologies can arise without any ES barriers whatsoever.
There have been two proposed mechanisms in the literature which lead to mounding without any explicit ES barrier. One of them invokes a preferential attachment to up-steps compared with down-steps (the so-called “step-adatom” attraction), which, in effect, is equivalent to having an ES barrier because the attachment probability to down-steps is lower than that to up-steps, exactly as it is in the regular ES barrier case — we therefore do not distinguish it from the ES barrier mechanism, and in fact, within the simple growth models we study, these two energetic mechanisms are physically and mathematically indistinguishable. The second mounding alternative, which is a purely topologic-kinetic effect, is the so-called edge diffusion induced mounding, where diffusion of adatoms around cluster edges is shown to lead to mound formation during nonequilibrium surface growth even in the absence of any finite ES barrier. One of the concrete examples we discuss below, the spectacular pyramidal pattern formation (Fig. 3(c)) in the 2+1 dimensional (d) noise reduced Wolf-Villain (WV) model, arises from such a nonequilibrium edge diffusion effect (perhaps in a somewhat unexpected context). We also demonstrate, using the WV model and the Das Sarma-Tamborenea (DT) model, that mound formation during nonequilibrium surface growth is, in fact, almost a generic feature of limited mobility solid-on-solid discrete growth models, which typically have comparatively large values of the roughness exponent ($`\alpha`$) characterizing the growth morphology. We find that a large roughness exponent coupled with atomistic solid-on-solid growth almost invariably leads to a visually mounded growth morphology. Below we demonstrate that mound formation in the surface morphology arising from this generic “large $`\alpha`$” effect (without any explicit ES barrier) is often qualitatively virtually indistinguishable from that in growth under an ES barrier. Mound formation in the presence of strong edge diffusion (as in the d=2+1 WV model in Fig. 3) is, on the other hand, morphologically quite distinct from the ES barrier- or the large $`\alpha`$- induced mound formation. Our results are based on the extensively studied limited mobility SOS nonequilibrium WV and DT growth models. Both models have been widely studied in the context of kinetic surface roughening in nonequilibrium solid-on-solid epitaxial growth — the interest in and the importance of these models lie in the fact that these were the first concrete physically motivated growth models falling outside the well-known Edwards-Wilkinson-Kardar-Parisi-Zhang generic universality class in kinetic surface roughening. Both models involve random deposition of atoms on a square lattice singular substrate (with a growth rate of 1 layer/sec, where the growth rate defines the unit of time) under the SOS constraint with no evaporation or desorption. An incident atom can diffuse instantaneously before incorporation if it satisfies certain diffusion rules, which differ slightly in the two models. In the WV model the incident atom can diffuse within a diffusion length $`l`$ (which is taken to be one with the lattice constant being chosen as the length unit, i.e.
only nearest-neighbor diffusion, in all the results shown in this paper — larger values of $`l`$ do not change our conclusions) in order to maximize its local coordination number, or equivalently the number of nearest neighbor bonds it forms with other atoms (if there are several possible final sites satisfying the maximum coordination condition equally well, then the incident atom chooses one of those sites with equal random probability, and if no other site increases the local coordination compared with the incident site then the atom stays at the incident site). The DT model is similar to the WV model except for two crucial differences: (1) only incident atoms with no lateral bonds (i.e. with the local coordination number of one — a nearest-neighbor bond to the atom below is necessary to satisfy the SOS constraint) are allowed to diffuse (all other deposited atoms, with one or more lateral bonds, are incorporated into the growing film at their incident sites); (2) the incident atoms move only to increase their local coordination number (and not to maximize it as in the WV model) — all possible incorporation sites with finite lateral local coordination numbers are accepted with random equal probability. A minimal rendering of the DT rule is sketched below. Although these two differences between the DT and the WV model have turned out to be crucial in distinguishing their asymptotic universality classes, the two models exhibit very similar growth behavior over a long transient pre-asymptotic regime. It is easy to incorporate an ES barrier in the DT (or WV) model by introducing differential probabilities $`P_u`$ and $`P_l`$ for adatom attachment to an upper and a lower step respectively — the original DT model has $`P_u=P_l`$, and an ES barrier can be explicitly incorporated in the model by having $`P_l<P_u\le 1`$. We call this situation the DT-ES model (we use $`P_u=1`$ throughout with no loss of generality). We also note, as mentioned above, that within the DT-ES model the ES barrier ($`P_l<P_u`$) and the step-adatom attraction ($`P_u>P_l`$) are manifestly equivalent, and we therefore do not consider them as separate mechanisms. We note also that in some of our simulations below we have used the noise reduction technique, which has earlier been successful in limited mobility growth models in reducing the strong stochastic noise effect through an effective coarse-graining procedure. All three models described above are studied on both one-dimensional substrates (d=1+1) and two-dimensional substrates (d=2+1), with periodic boundary conditions being used in all simulations. Detailed descriptions of the DT and WV models are available in the literature. In Figs. 1 and 2 we present our d=1+1 growth simulations, which demonstrate the point we want to make in this paper. We show in Fig. 1 the simulated growth morphologies at three different times for four different situations, two of which (Fig. 1(a),(b)) have finite ES barriers and the other two (Fig. 1(c),(d)) do not. The important point we wish to emphasize is that, while the four morphologies and their dynamical evolutions shown in Fig. 1 are quite distinct in their details, they all share one crucial common feature: they all indicate mound formation, although the details of the mounded morphologies and the controlling length scales are obviously quite different in the different cases. Just the mere observation of mounded morphology, which is present in Figs. 1(c),(d), thus does not necessarily imply the existence of an ES barrier.
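Here is that sketch: a minimal rendering, in our own code, of the d=1+1 DT deposition rule described above (no ES barrier, no noise reduction); it is an illustration of the rules as stated, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 1000
h = np.zeros(L, dtype=int)

def lateral_bonds(h, j):
    """Lateral bonds an adatom would have sitting on top of column j."""
    top = h[j] + 1
    return (h[(j - 1) % L] >= top) + (h[(j + 1) % L] >= top)

def dt_deposit(h):
    i = rng.integers(L)                 # random incidence site
    if lateral_bonds(h, i) == 0:        # only unbonded atoms may move
        hops = [j for j in ((i - 1) % L, (i + 1) % L)
                if lateral_bonds(h, j) > 0]
        if hops:                        # move to gain a lateral bond
            i = hops[rng.integers(len(hops))]
    h[i] += 1

for _ in range(100 * L):                # grow 100 monolayers
    dt_deposit(h)
print("interface width:", round(float(h.std()), 2))
```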
To further quantify the mounding apparent in the simulated morphologies of Fig. 1, we show in Fig. 2 the calculated height-height correlation function, $`H(r)\equiv\langle h(𝐱)h(𝐫+𝐱)\rangle_𝐱`$, along the surface for two different times. All the calculated $`H(r)`$ show noisy oscillations as a function of $`r`$, which implies mound formation (corresponding to the noisy mounded morphologies of Fig. 1). It is indeed true that the considerable stochastic noise associated with the deposition process in the DT, WV models makes the $`H(r)`$-oscillations quite noisy, but the important feature to note in Fig. 2 is that the qualitative oscillatory nature of $`H(r)`$ in situations with or without an ES barrier is essentially the same. Thus, the mound formation, although noisy, is qualitatively similar with or without an ES barrier in Figs. 1 and 2. We have explicitly verified that such growth mounds (or equivalently $`H(r)`$ oscillations) are completely absent in growth models which correspond to the generic Edwards-Wilkinson-Kardar-Parisi-Zhang universality class, and arise only in the DT, WV limited mobility growth models, which are known to have a large value of the roughness exponent $`\alpha`$ arising from (linear or nonlinear) surface diffusion processes. In fact, the effective $`\alpha`$ in the DT, WV models is essentially unity (in d=1+1), which is the same as what one expects in a naive theoretical description of growth under the ES barrier (although the underlying growth mechanisms are completely different in the two situations). We believe that any surface growth involving a “large” roughness exponent ($`0.5<\alpha\le 1`$) will invariably show a “mounded” morphology, independent of whether there is an ES barrier in the system or not. We contend that this effectively large $`\alpha`$ is the physical origin of the mounded morphology in semiconductor MBE growth, where one expects the surface diffusion driven linear or nonlinear conserved fourth order (in contrast to the generic second order) dynamical growth universality class to apply, which has the asymptotic exponent: $`\alpha`$ (d=1+1) $`=`$ 1; $`\alpha`$ (d=2+1) $`\approx`$ 0.67 (nonlinear), 1 (linear). One recent experimental paper, which reports the observation of mounded GaAs and InP growth with $`\alpha\approx 0.5`$–$`0.6`$, has explicitly made this case, and other recent reported mound formations in semiconductor MBE growth are also consistent with our contention that mounds may arise from a large effective roughness exponent rather than a Schwoebel instability. Two very recent experimental publications have reached the same conclusion in non-semiconductor MBE growth studies — in these recent publications spectacular mounded surface growth morphologies have been interpreted on the basis of the fourth order conserved growth equations. The crucial message of our simulated d=1+1 growth morphologies in Figs. 1 and 2 is the fact that the mound formation with \[Figs. 1 (a),(b) and 2 (a),(b)\] and without an ES barrier \[Figs. 1 (c),(d) and 2 (c),(d)\] is qualitatively similar, and therefore the mere observation of a mounded morphology does not necessarily imply a Schwoebel instability. Finally, in Figs. 3 and 4 we present our results for the physically more relevant d=2+1 nonequilibrium surface growth. In Fig. 3(a)-(c) we show the growth morphologies for the DT-ES, DT, and the noise-reduced WV model, respectively, whereas in Fig. 4 we show the scaled height-height correlation function for the mounded morphologies depicted in Fig. 3.
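Continuing the sketch above, the height-height correlation function $`H(r)`$ can be estimated from a height profile as follows (subtracting the mean height before correlating is our assumption about the normalization used in Figs. 2 and 4):

```python
import numpy as np

def height_corr(h, rmax=50):
    """H(r) = <h(x) h(x+r)>_x for a periodic 1d height profile,
    computed here for the fluctuations about the mean height."""
    dh = h.astype(float) - h.mean()
    return np.array([np.mean(dh * np.roll(dh, r)) for r in range(rmax)])

# with `h` from the DT sketch above: mounded growth shows up as
# oscillations of H(r) about zero at a wavelength set by the mound size
H = height_corr(h)
```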
It is apparent that all three models (one with an ES barrier and the other two without) have qualitatively similar oscillations in $`H(r)`$, indicating mounded growth, and the differences in the mounding between the growth models are purely quantitative. Again, the important point is that mounded morphologies with and without ES barriers manifest similar oscillations in $`H(r)`$, indicating that such an oscillatory height-height correlation function by itself does not establish a Schwoebel instability. Thus we come to the same conclusion: mound formation, by itself, does not imply the existence of an ES barrier; the details of the morphology obviously will depend on the existence (or not) of an ES barrier. We note that the effective values of the roughness exponent are very similar in Fig. 3(a) and (b), i.e. with and without an ES barrier, both being approximately $`\alpha\approx 0.5`$ (far below the asymptotic value $`\alpha\approx 1`$ expected in ES barrier growth — we have verified that this asymptotic $`\alpha\approx 1`$ is achieved in our simulations only at an astronomically long time of $`10^9`$ layers). The most astonishing result we show in Fig. 3 is the spectacular pyramidal mound formation in the d=2+1 noise reduced WV model (without any ES barrier), which has not earlier been reported in the literature. The strikingly regular pyramidal pattern formation (Fig. 3(c)) in our noise reduced WV model in fact has a magic slope and strong coarsening behavior. The pattern is very reminiscent of theoretical growth models studied earlier in the context of nonequilibrium growth under an ES barrier, where very similar patterns with slope selection were proposed as a generic scenario for growth under a Schwoebel instability. In our case of the noise reduced d=2+1 WV model of Fig. 3(c), there is no ES barrier, but there is strong cluster-edge diffusion. This strong edge diffusion (which obviously cannot happen in 1+1 dimensional growth) arises in the WV model (but not in the DT model) from the hopping of adatoms which have finite lateral nearest neighbor bonds (and are therefore the edge atoms of a cluster). This edge diffusion (discussed earlier in entirely different contexts) leads to an “uphill” surface current in the 111 direction, which leads to the formation of the slope-selected pyramidal patterned growth morphology. While noise reduction enhances the edge current, strengthening the pattern formation (the uphill current is extremely weak in the ordinary WV model due to the strong suppression by the deposition shot noise), our results of Fig. 3 establish compellingly that the WV model in d=2+1 is, in fact, unstable (uphill current), in contrast to the situation in d=1+1. Thus, the WV model belongs to totally different universality classes in d=1+1 and 2+1 dimensions! We mention that in (unphysical) higher (e.g. d=3+1, 4+1, etc.) dimensions, the WV model would be even more unstable, forming even stronger mounds, since the edge diffusion effects will increase substantially in higher dimensions due to the possibility of many more configurations of nearest-neighbor bonding. Such unstable growth in the high dimensional (d $`>`$ 2+1) WV model has earlier been reported in the literature without any physical explanation. We have therefore provided the explanation for the long-standing puzzle of an instability in high-dimensional (d $`>`$ 2+1) WV model simulations which were reported in the literature some years ago. More details on this phenomenon will be published elsewhere.
In conclusion, we have shown through concrete examples that, while a Schwoebel instability is certainly sufficient to cause a mounded surface growth morphology, the reverse is not true: an ES barrier is by no means necessary to produce mounds, and mound formation in the nonequilibrium surface growth morphology does not necessarily imply the existence of a Schwoebel instability. In particular, we show that a large roughness exponent (without any ES barrier), as in the fourth order conserved growth universality class, produces mounded growth morphologies which are indistinguishable from those of the ES barrier effect. Any experimentally observed mounded morphology therefore requires careful and detailed quantitative analysis to determine the physical mechanism (e.g. ES barrier, edge diffusion, large roughness exponent without any ES barrier) underlying its cause — in particular, the existence of a mounded growth morphology by itself may not imply the existence of any significant Schwoebel barrier. This work is supported by the NSF-DMR-MRSEC and the US-ONR.
An automorphic form related to cubic surfaces. First draft. 1997. R. E. Borcherds. (Remark added 2000: See the paper “Cubic Surfaces and Borcherds Products” by Allcock and Freitag, math.AG/0002066) In the paper \[A-C-T\] the authors showed that the moduli space of cubic surfaces was $`(CH^4\backslash H)/G`$, where $`H`$ is the union of the reflection hyperplanes of the reflection group $`G`$. The purpose of this note is to construct an automorphic form, called the discriminant, on complex hyperbolic space $`CH^4`$ whose zeros are exactly the reflection hyperplanes of $`G`$, each with multiplicity 1. The complex hyperbolic space of $`E^{1,4}`$ embeds naturally in the Grassmannian of 2 dimensional positive definite subspaces of the underlying even integral lattice $`M`$ of $`E^{1,4}`$. We will construct the automorphic form on complex hyperbolic space by constructing an automorphic form $`\Psi`$ on the Grassmannian and restricting it to complex hyperbolic space. The construction of $`\Psi`$ is similar to that of example 13.7 of \[B\]. The lattice $`M`$ is isomorphic to $`A_2\oplus A_2^4(-1)`$, so it has dimension 10 and determinant $`3^5`$. We let the elements $`e_\gamma`$ for $`\gamma\in M^{\prime}/M`$ stand for the obvious basis of the group ring $`C[M^{\prime}/M]`$. We will construct a modular form of weight $`-3`$ and type $`\rho_M`$ for $`SL_2(Z)`$ which is holomorphic on the upper half plane and meromorphic at cusps, by which we mean a holomorphic vector valued function $`f=\sum_{\gamma\in M^{\prime}/M}e_\gamma f_\gamma(\tau)`$ such that $$f_\gamma(\tau+1)=e^{\pi i\gamma^2}f_\gamma(\tau),\qquad f_\gamma(-1/\tau)=-i3^{-5/2}\tau^{-3}\underset{\delta\in M^{\prime}/M}{\sum}e^{2\pi i(\delta,\gamma)}f_\delta(\tau)$$ Recall that if $`\gamma,\delta\in M^{\prime}/M`$ then $`\gamma^2`$ is well defined mod 2 and $`(\gamma,\delta)`$ is well defined mod 1. There are 4 orbits of vectors $`\gamma\in M^{\prime}/M`$ under $`\mathrm{Aut}(M)`$, which we name as follows: a nonzero vector $`\gamma`$ with $`\gamma^2/2\equiv n/3\pmod 1`$ will be called a vector of type $`n`$ ($`n=0,1,2`$), and the zero vector will be called a vector of type $`00`$. We will construct a modular form $`f`$ such that the components $`f_\gamma`$ of $`f`$ are given by functions $`f_{00}`$, $`f_0`$, $`f_1`$, or $`f_2`$ depending only on the type of $`\gamma`$. We now work out the conditions that these functions have to satisfy for $`f`$ to transform correctly. In order to check that $`f`$ is a modular form under $`\tau\mapsto -1/\tau`$ we need to know, given some fixed vector $`u`$, how many vectors $`v`$ there are with given type and given inner product with $`u`$. These numbers are given in the following table.
$$\begin{array}{ccccccccccccccccc}\text{type of }u& 00& 00& 00& 00& 0& 0& 0& 0& 1& 1& 1& 1& 2& 2& 2& 2\\ \text{type of }v& 00& 0& 1& 2& 00& 0& 1& 2& 00& 0& 1& 2& 00& 0& 1& 2\\ (u,v)\equiv 0& 1& 80& 90& 72& 1& 26& 36& 18& 1& 32& 24& 24& 1& 20& 30& 30\\ (u,v)\equiv 1/3& 0& 0& 0& 0& 0& 27& 27& 27& 0& 24& 33& 24& 0& 30& 30& 21\\ (u,v)\equiv 2/3& 0& 0& 0& 0& 0& 27& 27& 27& 0& 24& 33& 24& 0& 30& 30& 21\end{array}$$ Using this table we see that $`f_{00}`$, $`f_0`$, $`f_1`$, and $`f_2`$ have to satisfy the equations $$\begin{array}{cc}& f_{00}(\tau +1)=f_{00}(\tau ),\qquad f_{00}(-1/\tau )=-i3^{-5/2}\tau ^{-3}(f_{00}(\tau )+80f_0(\tau )+90f_1(\tau )+72f_2(\tau ))\hfill \\ & f_0(\tau +1)=f_0(\tau ),\qquad f_0(-1/\tau )=-i3^{-5/2}\tau ^{-3}(f_{00}(\tau )-f_0(\tau )+9f_1(\tau )-9f_2(\tau ))\hfill \\ & f_1(\tau +1)=e^{2\pi i/3}f_1(\tau ),\qquad f_1(-1/\tau )=-i3^{-5/2}\tau ^{-3}(f_{00}(\tau )+8f_0(\tau )-9f_1(\tau ))\hfill \\ & f_2(\tau +1)=e^{4\pi i/3}f_2(\tau ),\qquad f_2(-1/\tau )=-i3^{-5/2}\tau ^{-3}(f_{00}(\tau )-10f_0(\tau )+9f_2(\tau ))\hfill \end{array}$$ One solution of these equations is given as follows. $$\begin{array}{cc}\hfill f_{00}(\tau )& =24\eta (3\tau )^3\eta (\tau )^{-9}=24(1+9q+54q^2+O(q^3))\hfill \\ \hfill f_0(\tau )& =-3\eta (3\tau )^3\eta (\tau )^{-9}=-3+O(q)\hfill \\ \hfill f_1(\tau )& =0\hfill \\ \hfill f_2(\tau )& =\eta (\tau /3)^3\eta (\tau )^{-9}+3\eta (3\tau )^3\eta (\tau )^{-9}=q^{-1/3}+14q^{2/3}+92q^{5/3}+O(q^{8/3})\hfill \end{array}$$ Most of the transformations follow formally from the functional equations $`\eta (\tau +1)=e^{2\pi i/24}\eta (\tau )`$ and $`\eta (-1/\tau )=\sqrt{\tau /i}\eta (\tau )`$ of $`\eta `$. The only one which takes slightly more work is the transformation of $`f_2`$ under $`\tau \to \tau +1`$, and this follows from the identity $`\eta (\tau )^3=\sum _{n\in Z}(4n+1)q^{(4n+1)^2/8}`$ and its consequence $$\eta (\tau /3)^3\eta (\tau )^{-9}+\eta ((\tau +1)/3)^3\eta (\tau +1)^{-9}+\eta ((\tau +2)/3)^3\eta (\tau +2)^{-9}=-9\eta (3\tau )^3\eta (\tau )^{-9}.$$ By theorem 13.3 of \[B\] there is an automorphic form $`\mathrm{\Psi }`$ on the symmetric space of $`M`$ with the following properties. It has weight 12 = (coefficient of $`q^0`$ in $`f_{00}`$)/2. The zeros of $`\mathrm{\Psi }`$ correspond to the negative powers of $`q`$ in $`f`$, so they are zeros of order 1 orthogonal to all the norm $`-2/3`$ vectors of $`M^{\prime }`$. $`\mathrm{\Psi }`$ is holomorphic on the symmetric space, and therefore holomorphic at cusps as well by the Koecher boundedness principle. $`\mathrm{\Psi }`$ is an automorphic form for some one dimensional representation of $`\mathrm{Aut}(M)`$. By restricting $`\mathrm{\Psi }`$ to complex hyperbolic space we get an automorphic form which has zeros of order 3 along the reflection hyperplanes (because $`\mathrm{\Psi }`$ has zeros of order 1, but every reflection hyperplane is the restriction of 3 hyperplanes of the symmetric space of $`M`$). So by taking the cube root of the restriction of $`\mathrm{\Psi }`$ we get an automorphic form for a one dimensional character of $`G`$ whose zeros are exactly the reflection hyperplanes with multiplicity 1.

References.

\[A-C-T\] D. J. Allcock, J. Carlson, D. Toledo, A Complex Hyperbolic Structure for Moduli of Cubic Surfaces, alg-geom/9709016.

\[B\] R. E. Borcherds, Automorphic forms with singularities on Grassmannians, alg-geom/9609022, to appear in Invent. Math.
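As a quick check of the eta-product solution above, both stated $`q`$-expansions can be reproduced by truncated series multiplication. The following is a minimal sketch in plain Python (our own; the truncation order and variable names are not from the text), using only $`\eta (\tau )=q^{1/24}\prod (1-q^n)`$ and working in the variable $`t=q^{1/3}`$:

```python
N = 40  # keep coefficients of t^0 .. t^(N-1), where t = q**(1/3)

def mul(a, b):
    c = [0] * N
    for i, x in enumerate(a):
        if x:
            for j, y in enumerate(b):
                if i + j < N:
                    c[i + j] += x * y
    return c

def prod_pow(step, power):
    """prod_{n>=1} (1 - t**(step*n))**power for integer power >= 0."""
    s = [1] + [0] * (N - 1)
    for n in range(1, (N - 1) // step + 1):
        f = [1] + [0] * (N - 1)
        f[step * n] = -1
        for _ in range(power):
            s = mul(s, f)
    return s

def inv(a):
    """Multiplicative inverse of a power series with constant term 1."""
    b = [1] + [0] * (N - 1)
    for k in range(1, N):
        b[k] = -sum(a[j] * b[k - j] for j in range(1, k + 1))
    return b

eta_m9 = inv(prod_pow(3, 9))      # prod (1 - q^n)^(-9), since q = t^3
X = mul(prod_pow(9, 3), eta_m9)   # eta(3*tau)^3 * eta(tau)^(-9)
Y = mul(prod_pow(1, 3), eta_m9)   # q^(1/3) * eta(tau/3)^3 * eta(tau)^(-9)

f00 = [24 * c for c in X]
# f_2 = q^(-1/3)*Y + 3*X; entry k below is the coefficient of q^((k-1)/3)
f2 = [Y[k] + (3 * X[k - 1] if k >= 1 else 0) for k in range(N)]

print([f00[3 * m] for m in range(4)])  # 24, 216, 1296, ... = 24*(1 + 9q + 54q^2 + ...)
print([f2[3 * m] for m in range(4)])   # 1, 14, 92, ... -> q^(-1/3) + 14q^(2/3) + 92q^(5/3)
```

Both expansions agree with the coefficients quoted in the display above.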
# Cosmic Neutrinos and New Physics beyond the Electroweak Scale

## I Introduction

It has been suggested that the neutrino-nucleon cross section, $`\sigma _{\nu N}`$, can be enhanced by new physics beyond the electroweak scale in the center of mass (CM) frame, or above about a PeV in the nucleon rest frame. At ultra-high energies (UHE), neutrinos can in principle acquire $`\nu N`$ cross sections approaching hadronic levels. It may therefore be possible for UHE neutrinos to initiate air showers, which would offer a very direct means of probing these new UHE interactions. The results of this kind of experiment have important implications for both neutrino astronomy and high energy physics. For the lowest partial wave contribution to the cross section of a point-like particle, this new physics would violate unitarity . However, two major possibilities have been discussed in the literature for which unitarity bounds need not be violated. In the first, a broken SU(3) gauge symmetry dual to the unbroken SU(3) color gauge group of the strong interaction is introduced as the “generation symmetry”, such that the three generations of leptons and quarks carry the quantum numbers of this generation symmetry. In this scheme, neutrinos can have close to strong interaction cross sections with quarks. In addition, neutrinos can interact coherently with all partons in the nucleon, resulting in an effective cross section comparable to the geometrical nucleon cross section. This model lends itself to experimental verification through shower development altitude statistics, as described by its authors . The present paper will not affect the plausibility of this possibility, which largely awaits more UHE cosmic ray (UHECR) events to compare against. The second possibility consists of a large increase in the number of degrees of freedom above the electroweak scale . A specific implementation of this idea is given in theories with $`n`$ additional large compact dimensions and a quantum gravity scale $`M_{4+n}\sim \mathrm{TeV}`$ that has recently received much attention in the literature because it provides an alternative solution (i.e., without supersymmetry) to the hierarchy problem in grand unifications of gauge interactions. For parameters consistent with measured cross sections at electroweak energies and below, at $`10^{20}`$eV the $`\nu N`$ cross section can approach $`10^{-4}`$–$`10^{-2}`$ times a typical strong cross section . Therefore, in these models the mean free path for $`\nu N`$ interactions at these energies is on the order of kilometers to hundreds of kilometers in air. Experimentally, this would be reflected by a specific energy dependence of the typical column depth at which air showers initiate. As we will show, this leads to a useful signature, deriving from zenith angle variations with shower energy. Comparison with observations would either support these scenarios or constrain them in a way complementary to studies in terrestrial accelerators or other laboratory experiments that have recently appeared in the literature. Both scenarios for increasing $`\nu N`$ cross sections raise the question of how far UHE neutrinos can travel in space. In the first case, due to the coherent nuclear interactions, the cross section approaches the nucleon-nucleon cross section, and so intergalactic matter becomes the greatest threat to the neutrino traveling long distances unimpeded.
However, an order of magnitude calculation shows that the mean free path for such a neutrino is $`\sim 10^2`$ Gpc, if we simply assume the matter density of the Universe to be its critical density. Within a galaxy with an interstellar density similar to that of the Milky Way, the mean free path is $`\sim `$Mpc; the UHE neutrino is not attenuated before reaching Earth’s atmosphere. For the extra dimension model, the cross sections are lower, so interaction with matter is not a problem. With Cosmic Microwave Background (CMB) photons, the CM energy is too low to give rise to a significant interaction probability. If neutrinos are massive, then the relic neutrino background is more of a threat. But assuming the neutrino mass is below 92 eV, as conservatively required to prevent over-closing the Universe for a stable species , the mean free path is $`\sim 10^4`$ Gpc at a CM energy of $`\sim 100`$GeV, corresponding to a $`10^{20}`$eV neutrino. The cosmological horizon is only $`\sim 15`$ Gpc, so for both scenarios the neutrinos can reach Earth unhindered from arbitrarily great distances.
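The first of these order-of-magnitude statements is easy to check explicitly; a short sketch (ours; the hadron-scale cross section of $`10^{-25}\mathrm{cm}^2`$ and the critical density value are assumed inputs):

```python
rho_crit = 9.5e-30   # g/cm^3, critical density of the Universe (assumed input)
m_p = 1.67e-24       # g, nucleon mass
sigma = 1e-25        # cm^2, representative hadron-scale cross section (assumed)
CM_PER_GPC = 3.086e27

mfp_gpc = m_p / (rho_crit * sigma) / CM_PER_GPC
print(f"mean free path ~ {mfp_gpc:.0f} Gpc")  # a few hundred Gpc, i.e. ~10^2 Gpc
```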
A major motivation for attempts to augment the $`\nu N`$ cross section at ultra-high energies derives from the paradox of UHECRs. Above $`10^{20}`$ eV, protons and neutrons see CMB photons Doppler shifted to high energy gamma rays, and encounters generate pions, thereby reducing the resulting nucleon’s energy by at least the pion rest mass. One therefore expects the Greisen-Zatsepin-Kuzmin (GZK) cutoff , wherein cosmic rays above $`E_{\mathrm{GZK}}\simeq 7\times 10^{19}`$ eV would lose their energy within $`\sim 50`$Mpc. However, since we observe cosmic rays at super-GZK energies, it has been proposed that these UHECRs might really be neutrinos rather than nucleons (for example, see ). While this is a strong motivation to explore new UHE neutrino interactions, we will not need to assume that UHECRs are neutrinos. The paper is outlined as follows: In Section II we derive a constraint on the neutrino-nucleon cross section resulting from the non-observation of horizontal air showers induced by neutrinos produced by interactions of the highest energy cosmic rays with the cosmic microwave background. In Section III we discuss potential improvements of this constraint including other hypothesized neutrino sources and next generation experiments. In Section IV we narrow the discussion to the case of a $`\nu N`$ cross section scaling linearly with CM energy, as an example. A zenith angle profile is presented as a signature of this scaling, and a means of constraining it. We conclude in Section V.

## II A Bound from the “Cosmogenic” Neutrino Flux

The easiest way to distinguish extensive air showers (EAS) caused by neutrinos from those caused by hadrons is to look for nearly horizontal electromagnetic showers. As pointed out in Ref. , the Earth’s atmosphere is about 36 times thicker taken horizontally than vertically. A horizontally incident UHE hadron or nucleus has a vanishingly small chance of descending to sufficiently low altitudes to induce air showers detectable by ground instruments. As a consequence, a horizontal shower is a very strong indication of a neutrino primary. The two major techniques for observing extensive air showers produced by UHECRs or neutrinos are the detection of shower particles by an array of ground detectors, and the detection of the nitrogen fluorescence light produced by the shower. The largest operating experiment utilizing the first technique is AGASA, which covers about $`100\mathrm{km}^2`$ in area . The second technique was pioneered by the Fly’s Eye instrument and produced a total exposure similar to that of the AGASA instrument. For cosmic neutrino detection, the fluorescence technique may provide somewhat more detailed information, because it can readily measure the position of the shower maximum and possibly the first interaction, which are important in distinguishing neutrino primaries. Also, fluorescence detectors are equipped to see horizontal showers that do not intersect the ground near the detector. The main backgrounds consist of muon bremsstrahlung and tau lepton decays, as these could occur very deep in the atmosphere. The flux of these particles is several orders of magnitude below the CRs and predicted neutrinos of the same energy, but if UHE neutrinos interact with matter only through Standard Model channels, then the rate of neutrino-induced triggers will still be dwarfed by these background events. The backgrounds for high zenith angle events differ somewhat between fluorescence detectors and ground arrays , but both detector classes have remedies for the high background. For example, one interesting benefit with air fluorescence is the possibility of seeing a separate primary CR shower associated with the event, thus identifying it as a muon- or tau-induced secondary . Ground arrays would not see this sign, but they should be able to discriminate between neutrinos and secondaries by the shape of the shower front . The strategy here is to use observations (or non-observation) of horizontal showers to place limits on the UHE neutrino flux incident on the Earth. Comparing that against a reliably predicted flux permits us to bound the cross section. Fig. 1 summarizes the high energy neutrino fluxes expected from various sources, where the shown flavor-summed fluxes consist mostly of muon and electron neutrinos in a ratio of about 2:1. Shown are the atmospheric neutrino flux , a typical flux range expected for “cosmogenic” neutrinos created as secondaries from the decay of charged pions produced by collisions between UHE nucleons and CMB photons , a typical prediction for the diffuse flux from photon optically thick proton blazars, normalized to recent estimates of the blazar contribution to the diffuse $`\gamma `$-ray background , and a typical prediction by a scenario where UHECRs are produced by decay of particles close to the Grand Unification Scale (for a review see also Ref. ). Apart from the atmospheric neutrino flux, only the cosmogenic neutrinos are nearly guaranteed to exist due to the known existence of UHECRs, requiring only that these contain nucleons and be primarily extragalactic in origin. This is an assumption, since Galactic sources cannot be wholly ruled out . However, if the charge of the primary UHECR satisfies $`Z\sim 𝒪(1)`$ (which will be tested by forthcoming experiments), an extragalactic origin is favored on the purely empirical grounds of the observed nearly isotropic angular distribution which rules out Galactic disk sources. The reaction generating these cosmogenic neutrinos is well known, depending only on the existence of UHE nucleon sources more distant than $`\lambda _{\mathrm{GZK}}\simeq 8`$ Mpc and on the relic microwave photons known to permeate the Universe. The two cosmogenic lines in Fig. 1 were computed for a scenario where radiogalaxies explain UHECRs between $`3\times 10^{18}`$eV (i.e. above the “ankle”) and $`10^{20}`$eV , with source spectra cutoffs at $`3\times 10^{20}`$ eV (lower curve) and $`3\times 10^{21}`$ eV (upper curve) .
Since events were observed above $`3\times 10^{20}`$eV, this may be used as a conservative flux estimate (see also Sect. III below). The non-observation of horizontal showers by the neutrino fluxes indicated in Fig. 1 can now be translated into an upper limit on the total neutrino-nucleon cross section. The total charged current Standard Model $`\nu N`$ cross section can be approximately represented by $$\sigma _{\nu N}^{SM}(E)\simeq 2.36\times 10^{-32}(E/10^{19}\mathrm{eV})^{0.363}\mathrm{cm}^2\qquad (10^{16}\mathrm{eV}\lesssim E\lesssim 10^{21}\mathrm{eV}).$$ (1) In any narrow energy range and as long as the neutrino mean free path is larger than the linear detector size, the neutrino detection rate scales as $$\mathrm{Event}\mathrm{Rate}\propto A\varphi _\nu \sigma _{\nu N},$$ (2) where $`A`$ is the detector acceptance (in units of volume times solid angle) in that energy band, and $`\varphi _\nu `$ is the neutrino flux incident on the Earth. In our case, the non-observation of neutrino-induced (horizontal) air showers limits the event rate, and thereby sets the experimental upper bounds given in Fig. 1. On the other hand, the gap between the best current experimental flux bound and the predicted cosmogenic flux specifies a bounding cross section in the presence of new interactions. Representing the experimental bounds in Fig. 1 with an approximate expression, and letting $`\overline{y}`$ be the average fraction of the neutrino’s energy deposited into the shower, the cross section limit becomes $$\sigma _{\nu N}(E)\lesssim 10^{-28}\left(\frac{10^{-18}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1}}{\varphi _c(E)}\right)\left(\frac{10^{19}\mathrm{eV}}{\overline{y}E}\right)^{1/2}\mathrm{cm}^2,$$ (3) where $`\varphi _c(E)`$ is the cosmogenic neutrino flux, and the exponent 1/2 reflects the approximate energy dependence of the Fly’s Eye upper limit on the rate of horizontal air showers. For electron neutrinos and atmospheric detectors, charged current (CC) interactions correspond to $`\overline{y}=1`$ since all of the neutrino energy goes into “visible” energy, either in the form of an electron or of hadronic debris. In this case, using the conservative lower estimate of the cosmogenic flux in Fig. 1 yields $`\sigma _{\nu N}^{CC}(10^{17}\mathrm{eV})\lesssim 1\times 10^{-29}\mathrm{cm}^2`$ (4), $`\sigma _{\nu N}^{CC}(10^{18}\mathrm{eV})\lesssim 8\times 10^{-30}\mathrm{cm}^2`$ (5), $`\sigma _{\nu N}^{CC}(10^{19}\mathrm{eV})\lesssim 5\times 10^{-29}\mathrm{cm}^2`$ (6), and a severely degraded limit outside this energy range. The bounds Eqs. (4)–(6) probe CM energies $$\sqrt{s}\simeq 1.4\times 10^5\left(\frac{E}{10^{19}\mathrm{eV}}\right)^{1/2}\mathrm{GeV}$$ (7) that are about 3 orders of magnitude beyond the electroweak scale. In the case of neutral currents, one needs a model for computing $`\overline{y}`$. As an example, if $`\overline{y}=0.1`$ at these energies, then the best limit becomes $`\sigma _{\nu N}(10^{18}\mathrm{eV})\lesssim 2\times 10^{-29}\mathrm{cm}^2`$. So as long as $`\overline{y}`$ doesn’t depart too far from $`0.1`$, as indicated in , neutral currents won’t change the bounds dramatically. In Section IV, we discuss the neutral current case with respect to extra dimension models.
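For orientation, the ingredients of this bound are straightforward to evaluate. The sketch below (ours) evaluates Eqs. (1), (3) and (7); note that the constant reference flux used here is just the normalization appearing in Eq. (3), whereas the quoted bounds (4)–(6) use the actual energy-dependent cosmogenic flux $`\varphi _c(E)`$ read off Fig. 1:

```python
import math

def sigma_SM(E_eV):
    """Approximate SM charged current nu-N cross section, Eq. (1), in cm^2."""
    return 2.36e-32 * (E_eV / 1e19) ** 0.363

def sigma_bound(E_eV, phi_c=1e-18, ybar=1.0):
    """Upper bound on sigma_nuN from Eq. (3), in cm^2, for a given flux phi_c."""
    return 1e-28 * (1e-18 / phi_c) * math.sqrt(1e19 / (ybar * E_eV))

def sqrt_s_GeV(E_eV):
    """CM energy, Eq. (7), for a neutrino of energy E hitting a nucleon at rest."""
    return math.sqrt(2 * 0.938 * E_eV * 1e-9)

for E in (1e17, 1e18, 1e19):
    print(f"E={E:.0e} eV: sigma_SM={sigma_SM(E):.1e} cm^2, "
          f"bound={sigma_bound(E):.1e} cm^2, sqrt_s={sqrt_s_GeV(E):.1e} GeV")
```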
Note that this bound does not challenge the model of Ref. . That option relies on neutrinos acquiring larger, hadron-scale cross sections. As such, shower development will occur high in the atmosphere as for ordinary vertical events, not in deeply penetrating horizontal ones, and our argument does not apply. We therefore specify a sub-hadron-size cross section above which our bound is inapplicable because horizontal air showers could not develop in column depths $`\gtrsim 3000\mathrm{g}/\mathrm{cm}^2`$ as used by the Fly’s Eye bound : $$\sigma _{\nu N}(E\gtrsim 10^{19}\mathrm{eV})\gtrsim 10^{-27}\mathrm{cm}^2.$$ (8) The bound derived here is independent of the type of interaction enhancing physics, stemming directly from attempts to observe deeply penetrating air showers. To summarize, Eq. (3) applies to the cosmogenic neutrino flux as long as $`\sigma _{\nu N}`$ does not reach hadronic levels. We note that our allowed range, Eqs. (6) and (8), is complementary to and consistent with the interpretation of UHECRs as neutrino primaries, as considered in Ref. . In this case, $`\sigma _{\nu N}(E)\gtrsim 2\times 10^{-26}\mathrm{cm}^2`$, which is in the larger cross section regime, Eq. (8). This is because for $`\sigma _{\nu N}(E)\lesssim 10^{-28}\mathrm{cm}^2`$, the first interaction point would have a flat distribution in column depth up to large zenith angles $`\theta \lesssim 80^{\circ }`$, whereas observed events appear to peak at column depths $`\sim 450\mathrm{g}/\mathrm{cm}^2`$ . For an individual event, a $`\sim 0.2`$% probability exists for interacting by $`450\mathrm{g}/\mathrm{cm}^2`$ with cross sections as small as $`10^{-29}\mathrm{cm}^2`$. We also note in this context that Goldberg and Weiler have related the $`\nu N`$ cross section at UHEs to the lower energy $`\nu N`$ elastic amplitude in a model independent way, only assuming $`3+1`$ dimensional field theory. Consistency with accelerator data then requires $$\sigma (E)\lesssim 3\times 10^{-24}\left(\frac{E}{10^{19}\mathrm{eV}}\right)\mathrm{cm}^2,$$ (9) otherwise deviations of neutrino cross sections from the Standard Model should become visible at the electroweak scale .
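The probability quoted above for interacting within a given slant depth is simple Poisson statistics; a minimal check (ours):

```python
import math

N_A = 6.022e23  # nucleons per gram of water-equivalent target

def p_interact(sigma_cm2, depth_g_cm2):
    """Probability of at least one interaction within a given column depth."""
    return 1.0 - math.exp(-sigma_cm2 * N_A * depth_g_cm2)

print(p_interact(1e-29, 450.0))  # ~2.7e-3, the few-tenths-of-a-percent level quoted above
```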
## III Future Prospects for Improvement

EAS detection will be pursued by several experiments under construction or in the proposal stage. As an upscaled version of the original Fly’s Eye Cosmic Ray experiment, the High Resolution Fly’s Eye detector has recently begun operations and has preliminary results of several super-GZK events , but no official limit on horizontal showers thus far. The effective aperture of this instrument is $`\sim 350(1000)\mathrm{km}^2\mathrm{sr}`$ at $`10(100)`$EeV, on average about 6 times the Fly’s Eye aperture, with a threshold around $`10^{17}`$eV. This takes into account a duty cycle of about 10% which is typical for the fluorescence technique. Another project utilizing this technique is the proposed Japanese Telescope Array . If approved, its effective aperture will be about 10 times that of Fly’s Eye above $`10^{17}`$eV. The largest project presently under construction is the Pierre Auger Giant Array Observatory planned for two sites, one in Argentina and another in the USA for maximal sky coverage. Each site will have a $`3000\mathrm{km}^2`$ ground array. The southern site will have about 1600 particle detectors (separated by 1.5 km each) overlooked by four fluorescence detectors. The ground arrays will have a duty cycle of nearly 100%, leading to an effective aperture about 30 times as large as the AGASA array. The corresponding cosmic ray event rate above $`10^{20}`$eV will be 50–100 events per year. About 10% of the events will be detected by both the ground array and the fluorescence component and can be used for cross calibration and detailed EAS studies. The energy threshold will be around $`10^{18}`$eV, with full sensitivity above $`10^{19}`$eV. NASA recently initiated a concept study for detecting EAS from space by observing their fluorescence light from an Orbiting Wide-angle Light-collector (OWL). This would provide an increase by another factor of $`\sim 50`$ in aperture compared to the Pierre Auger Project, corresponding to a cosmic ray event rate of up to a few thousand events per year above $`10^{20}`$eV. Similar concepts being proposed are the AirWatch and Maximum-energy Air-Shower Satellite (MASS) missions. The energy threshold of such instruments would be between $`10^{19}`$ and $`10^{20}`$eV. This technique would be especially suitable for detection of almost horizontal air showers that could be caused by UHE neutrinos. As can be seen from Fig. 1, with an experiment such as OWL, the upper limit on the cross section Eq. (6) could improve by about 4 orders of magnitude. In addition, for such showers the fluorescence technique could be supplemented by techniques such as radar echo detection . The study of horizontal air showers nicely complements the more traditional technique for neutrino detection in underground detectors. In these experiments, muons created in charged current reactions of neutrinos with nucleons either in water or in ice are detected via the optical Cherenkov light emitted. Due to the increased column depth (see also Fig. 3), this technique would be mostly sensitive to cross sections smaller by about a factor of 100, in a range $`10^{-31}`$–$`10^{-29}\mathrm{cm}^2`$. For an event rate comparable to the one discussed above for ground based instruments, the effective area of underground detectors needs to be comparable and thus significantly larger than $`1\mathrm{km}^2`$, with at least some sensitivity to downgoing events. The largest pilot experiments in ice (AMANDA ) and in water (Lake Baikal and the now defunct DUMAND ) are roughly $`0.1`$km in size. Next generation deep sea projects such as the ANTARES and NESTOR projects in the Mediterranean, and ICECUBE in Antarctica will significantly improve the volume studied but may not reach the specific requirements for the tests discussed here. Also under consideration are techniques for detecting neutrino induced showers in ice or water acoustically or by their radio emission . Radio pulse detection from the electromagnetic showers created by neutrino interactions in ice could possibly be scaled up to an effective area of $`10^4\mathrm{km}^2`$ and may be the most promising option for probing the $`\nu N`$ cross section with water or ice detectors. A prototype is represented by the Radio Ice Cherenkov Experiment (RICE) at the South Pole . The bounds we derived here can be even more constraining if the UHE neutrino fluxes are higher than the conservative estimates we used. For instance, the cosmogenic flux was computed by assuming a cosmic ray spectral dependence of $`N(E)\propto E^{-\gamma }`$ with $`\gamma =3`$, consistent with the overall cosmic ray spectrum above $`10^{15}`$ eV. However, at $`\sim 10^{19.5}`$ eV, the spectrum appears to flatten down to $`1\lesssim \gamma \lesssim 2`$, probably suggestive of a new cosmic ray component at these energies. Once more UHECR events are observed, the trend hinted by the AGASA data may be confirmed, giving the source of the UHECRs that generate the cosmogenic neutrinos a much harder spectral index than we assumed. The cosmogenic neutrino flux computed for this flattened spectrum would be higher, and a stronger bound would result.
For example, assuming a rather strong source evolution and an $`E^{-1.5}`$ injection spectrum would increase the cosmogenic flux up to 100-fold, and consequently yield a 100-fold improvement of the constraint Eq. (6). See also Ref. for an estimate more optimistic by a factor 10–20 than our conservative one. Concerning sources of primary UHE neutrinos, current estimates of the flux from the proton blazar model , wherein AGN accelerate protons which interact with the local thick photon field, producing pions and therefore neutrinos, could potentially improve the bound by a factor of $`50`$ at $`10^{17}`$ eV. The topological defect model shown in Fig. 1 could improve the bound by a factor of $`75`$ at $`10^{19}`$ eV. These improvements would be realized if either model were to become as strongly motivated as the cosmogenic flux. Furthermore, any mechanism predicting fluxes approaching the experimental flux limits would exclude a new contribution to the $`\nu N`$ cross section beyond the Standard Model at corresponding energies. Particularly high UHE neutrino fluxes are predicted in scenarios where the highest energy cosmic rays are produced as secondaries from Z bosons resonantly produced in interactions of these neutrino primaries with the relic neutrino background . In general, any confirmed increase in flux over cosmogenic as currently estimated would improve the bound presented here. We also consider Markarian 421 as an example of how a point source can impact the bound. Its calculated neutrino flux is $`50\mathrm{eV}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ at its peak energy of $`10^{18}`$ eV . This is about an order of magnitude shy of the cosmogenic flux at the same energy, and therefore it will not serve to improve the bound. However, should some new evidence arise against the existence of a cosmogenic flux, point source estimates like this may become the fallback means to constrain $`\sigma _{\nu N}`$.

## IV Energy Dependence Signatures and Extra Dimension Models

In extra dimension scenarios, the virtual exchange of bulk gravitons (Kaluza-Klein modes) leads to extra contributions to any two-particle cross section. For CM energies such that $`s\lesssim M_{4+n}^2`$, where possible stringy effects are under control , these cross sections can be well approximated perturbatively. In contrast, for $`s\gtrsim M_{4+n}^2`$ model dependent string excitations can become important and in general one has to rely on extrapolations which can be guided only by general principles such as unitarity . Naively, the exchange of spin 2 bulk gravitons would predict an $`s^2`$ dependence of the cross section . However, more conservative arguments consistent with unitarity suggest a linear growth in $`s`$, and in what follows we will assume the following cross section parameterization in terms of $`M_{4+n}`$: $$\sigma _g\simeq \frac{4\pi s}{M_{4+n}^4}\simeq 10^{-28}\left(\frac{\mathrm{TeV}}{M_{4+n}}\right)^4\left(\frac{E}{10^{19}\mathrm{eV}}\right)\mathrm{cm}^2,$$ (10) where the last expression applies to a neutrino of energy $`E`$ hitting a nucleon at rest.
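The numerical prefactor in Eq. (10) can be checked directly, and solving for the energy at which $`\sigma _g`$ overtakes the Standard Model parameterization of Eq. (1) anticipates the threshold energy given in Eq. (13) below. A sketch (ours; the conversion constant $`(\mathrm{}c)^2\simeq 3.894\times 10^{-28}\mathrm{GeV}^2\mathrm{cm}^2`$ is the only external input):

```python
import math

GEV2_TO_CM2 = 3.894e-28  # (hbar*c)^2 in GeV^2 cm^2

def sigma_g(E_eV, M_TeV):
    """Graviton exchange cross section of Eq. (10): 4*pi*s / M^4, in cm^2."""
    s = 2 * 0.938 * E_eV * 1e-9  # GeV^2, for a nucleon at rest
    M = M_TeV * 1e3              # GeV
    return 4 * math.pi * s / M**4 * GEV2_TO_CM2

def E_threshold_eV(M_TeV):
    """Energy where sigma_g overtakes the SM fit of Eq. (1); cf. Eq. (13) below."""
    return 1e19 * (2.36e-32 / sigma_g(1e19, M_TeV)) ** (1 / 0.637)

print(sigma_g(1e19, 1.0))   # ~9e-29 cm^2, i.e. the ~1e-28 normalization of Eq. (10)
print(E_threshold_eV(1.0))  # ~2e13 eV for M = 1 TeV, cf. Eq. (13)
```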
It should be stressed that this cross section is only one example among a few proposed in the literature. We further note that within a string theory context, Eq. (10) can only be a good approximation for $`s\lesssim M_s^2`$. Above the string scale, there is currently no agreement in the literature about the behavior of $`\sigma _g`$ with energy. Amplitudes for single states may be exponentially suppressed $`\mathrm{exp}[-s/M_s^2]`$ above the string scale $`M_s\lesssim M_{4+n}`$ , which can be interpreted as a result of the finite spatial extension of the string states, in which case the number of states may grow exponentially . This is currently unclear, and which effect dominates the total cross section may be model dependent. For example, domination by the amplitude suppression would result in $`\sigma _g\sim 4\pi /M_s^2\sim 10^{-32}\mathrm{cm}^2`$, conservatively assuming $`M_s\gtrsim 100`$GeV. Thus, an experimental detection of the signatures discussed in this section could lead to constraints on some string-inspired models of extra dimensions. Fig. 2 shows the neutrino-nucleon cross section based on the Standard Model , and three curves for enhanced cross sections, given by the sum $$\sigma _{tot}=\sigma _{SM}+\sigma _g.$$ (11) These three curves are given for different values of the scale $`M_{4+n}`$. An increase in this mass scale brings the total cross section closer to that of the Standard Model, Eq. (1). For electron neutrinos, the Standard Model contribution is well approximated (within about 10%) by the charged current cross section Eq. (1), whereas the other flavors are harder to observe . Of course, experimentally speaking, what counts for observing an interaction is the total cross section weighted by the average energy fraction transferred to the shower. Following a format used previously in the literature , we can express these cross sections in a more useful manner. The interaction lengths of neutrinos through various kinds of matter can be directly compared if they are specified in terms of the corresponding interaction lengths in a particular medium. In Fig. 3, the interaction length for neutrinos is plotted in centimeters of water equivalent (cmwe), indicating the appropriate energy range necessary for detection underground and in our atmosphere. The interaction length in cmwe is computed from $$L_{int}=\frac{1}{\sigma _{\nu N}N_A}\mathrm{cmwe},$$ (12) where $`N_A`$ is Avogadro’s number, since the density of water is 1 g/cm<sup>3</sup>. In the particular case of extra dimension models with the cross section scaling Eq. (10), Fig. 3 shows that a neutrino becomes unlikely to create a shower below $`5\times 10^{18}`$ eV, since $`M_{4+n}`$ can’t be pushed much below about 1 TeV without its effects contradicting current experiments . The total neutrino-nucleon cross section is dominated by a contribution of the form Eq. (10) at neutrino energies $`E\gtrsim E_{\mathrm{th}}`$, where, for $`M_{4+n}\gtrsim 1`$ TeV, the threshold energy can be approximated by $$E_{\mathrm{th}}\simeq 2\times 10^{13}\left(\frac{M_{4+n}}{\mathrm{TeV}}\right)^{6.28}\mathrm{eV}.$$ (13) Provided that $`M_{4+n}`$ is low enough, the effects of these new interactions should be observable in UHE neutrino interactions. The upper bound Eq. (3) we derived on $`\sigma _{\nu N}`$ by merit of the current non-observation of horizontal air showers immediately implies a corresponding bound on the mass scale within the context of Eq. (10): $$M_{4+n}\gtrsim \overline{y}^{1/8}\left(\frac{\varphi _c(E)}{10^{-18}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1}}\right)^{1/4}\left(\frac{E}{10^{19}\mathrm{eV}}\right)^{3/8}\mathrm{TeV},$$ (14) where $`\overline{y}`$ enters because the bound derives from observational limits at various shower energies. Since $`M_{4+n}`$ is a new constant of nature (that is, the value of $`M_{4+n}`$ doesn’t actually scale with $`E`$), Eq. (14) should be evaluated at the energy which generates the best limit.
The strongest bound that can emerge from this formula with the lower cosmogenic curve in Fig. 1 occurs at $`E=10^{18}`$ eV and $`\overline{y}=1`$, giving $`M_{4+n}\gtrsim 1.2\mathrm{TeV}`$. But for the exchange of Kaluza-Klein gravitons considered here, we can give a very rough estimate $`\overline{y}\sim 0.1`$ by noting similarity to massive boson exchange within the Standard Model . In this case the best bound is $`M_{4+n}(\overline{y}=0.1)\gtrsim 0.9\mathrm{TeV}`$. The bound Eq. (14) is consistent with the best current laboratory bound, $`M_{4+n}\gtrsim 1.26`$ TeV, which is from Bhabha scattering at LEP2 . The fact that laboratory bounds are in the TeV range can be understood from Fig. 2 which shows that for smaller mass scales $`\sigma _{tot}`$ would already be dominated by $`\sigma _g`$ at electroweak scales. Eq. (14) can be improved beyond this level through more data from the continued search for horizontal showers and independently by any increase in the theoretically expected UHE neutrino flux beyond the conservative cosmogenic flux estimate shown in Fig. 1. We stress again that Eq. (14) derives from the more general bound Eq. (3) valid for CM energies about 3 orders of magnitude above the electroweak scale if $`\sigma _{\nu N}\propto s`$ in this energy range, and is thus complementary to laboratory bounds. It is interesting to note that Eq. (14) does not depend on the number $`n`$ of extra dimensions, assuming the cross section in Eq. (10). This is in contrast to some other astrophysical bounds which depend on the emission of gravitons causing energy loss in stellar environments . The best lower limit of this sort comes from limiting the emission of bulk gravitons into extra dimensions in the hot core of supernova 1987A . In order to retain the agreement between the energy released by the supernova, as measured in neutrinos, and theoretical predictions, the energy flow of gravitons into extra dimensions must be bounded. The strongest contribution to graviton emission comes from nucleon-nucleon bremsstrahlung . The resulting constraints read $`M_6\gtrsim 50`$TeV, $`M_7\gtrsim 4`$TeV, and $`M_8\gtrsim 1`$TeV, for $`n=2,3,4`$, respectively, and, therefore, $`n\ge 4`$ is required if neutrino primaries are to serve as a candidate for the UHECR events observed above $`10^{20}`$eV (but see ). We calculated this bound for the case of $`n=7`$ extra dimensions, as motivated by superstring theory. We find the lower bound drops to $`M_{11}\gtrsim 0.05`$TeV, so for higher numbers of extra dimensions, the cosmic ray bound derived here could be stronger than type II supernova bounds, if the scaling assumed in Eq. (10) were widely accepted. This assumes that all extra dimensions have the same size given by $$r_n\simeq M_{4+n}^{-1}\left(\frac{M_{\mathrm{Pl}}}{M_{4+n}}\right)^{2/n}\simeq 2\times 10^{-17}\left(\frac{\mathrm{TeV}}{M_{4+n}}\right)\left(\frac{M_{\mathrm{Pl}}}{M_{4+n}}\right)^{2/n}\mathrm{cm},$$ (15) where $`M_{\mathrm{Pl}}`$ denotes the Planck mass. The SN1987A bounds on $`M_{4+n}`$ mentioned above translate into the corresponding upper bounds $`r_6\lesssim 3\times 10^{-4}`$mm, $`r_7\lesssim 4\times 10^{-7}`$mm, and $`r_8\lesssim 2\times 10^{-8}`$mm. With the cosmogenic bound and 7 extra dimensions, we have $`r_{11}\lesssim 6\times 10^{-12}`$mm.
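Eq. (15) is simple enough to tabulate. The sketch below (ours) assumes the reduced Planck mass for $`M_{\mathrm{Pl}}`$, which the quoted radii appear closest to; with the full Planck mass the numbers shift by order-one factors:

```python
M_PL = 2.4e18  # GeV; reduced Planck mass (an assumption of this sketch)

def r_n_mm(n, M_TeV):
    """Common radius of n extra dimensions from Eq. (15), converted to mm."""
    M = M_TeV * 1e3  # GeV
    r_cm = 2e-17 * (1e3 / M) * (M_PL / M) ** (2.0 / n)
    return 10.0 * r_cm

print(r_n_mm(2, 50.0))   # ~2e-4 mm, cf. the r_6 bound quoted above
print(r_n_mm(3, 4.0))    # ~4e-7 mm, cf. r_7
print(r_n_mm(4, 1.0))    # ~1e-8 mm, cf. r_8
print(r_n_mm(7, 0.05))   # ~2e-12 mm, same order as the r_11 value above
```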
A specific signature of the linear scaling with energy of the cross section Eq. (10) consists of the existence, for a given zenith angle $`\theta `$, of two critical energies. First, there is an energy $`E_1(\theta )`$ below which the first interaction point has a flat distribution in column depth, and above which this distribution will peak above the ground. This energy is independent of any experimental attributes of the detector, apart from its altitude. Second, there is an energy $`E_2(\theta )>E_1(\theta )`$ below which a large enough part of the resulting shower lies within the sensitive volume to be detectable, and above which the event rate cuts off because primary neutrinos interact too far away from the detector to induce observable showers. This energy is specific to the detector involved. We show the zenith angle dependence of these two critical energies in Fig. 4, where we have assumed approximate experimental parameters for the Fly’s Eye experiment given in , at an atmospheric depth of 860 g/cm<sup>2</sup>; both Auger sites are within a few hundred meters of this altitude as well. For simplicity, we have assumed that the Fly’s Eye detects showers above $`10^{19}`$ eV, provided that they initiate and touch ground within 20 km of the detector. Following , we assume a standard exponential atmosphere of scale height 7.6 km, and a spherical (rather than planar) Earth. These zenith angle plots can be expressed in terms of cross sections instead of energies, and be generalized to other energy dependences. They have two purposes. First, upon adding UHECR data to such a plot, a resemblance to the arcing curves shown would give strong support to the respective energy scaling. Second, in the specific case of Eq. (10), owing to the 4th power dependence on the quantum gravity scale $`M_{4+n}`$, Fig. 4 serves as a good means for determining this mass scale by comparing the data curves to those predicted. Of course, this will only work in directions in which the neutrino-induced EAS rate is not dominated by the ordinary UHECR induced rate. This restriction may necessitate additional discriminatory information such as the shower depth at maximum. Furthermore, in a detailed analysis the observed shower energy has to be corrected by the distribution of the fractional energy transfer $`y`$ in order to obtain the primary neutrino energy. This approach requires more data, which will likely come from the HiRes Fly’s Eye and the Pierre Auger observatories. Because these facilities can see nitrogen fluorescence shower trails in the sky, they are better equipped to see horizontal showers than ground arrays alone. In addition, they utilize more than one “eye”, so their angular resolution is $`\mathrm{\Delta }\theta \lesssim 2^{\circ }`$ . On the other hand, these detectors are biased toward horizontal events because of their long track lengths, and this bias will need to be treated properly in order to interpret the zenith angle data.
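The geometry behind these zenith angle curves is the slant depth through a curved exponential atmosphere. A numerical sketch (ours; the sea-level vertical depth of 1030 g/cm<sup>2</sup> is an assumed standard value) reproduces both the vertical column depth and the roughly 36-fold horizontal enhancement mentioned in Sect. II:

```python
import math

R_E = 6371e5    # cm, Earth radius
H = 7.6e5       # cm, scale height assumed in the text
X_V = 1030.0    # g/cm^2, sea-level vertical depth (assumed standard value)
RHO0 = X_V / H  # g/cm^3, sea-level density of the exponential model

def slant_depth(theta_deg, steps=40000, ds=5e4):
    """Column depth (g/cm^2) from sea level upward at zenith angle theta."""
    c = math.cos(math.radians(theta_deg))
    total = 0.0
    for k in range(steps):
        s = (k + 0.5) * ds  # path length in cm (0.5 km steps)
        r = math.sqrt(R_E**2 + s**2 + 2.0 * R_E * s * c)
        total += RHO0 * math.exp(-(r - R_E) / H) * ds
    return total

print(slant_depth(0.0))   # ~1030 g/cm^2 (vertical)
print(slant_depth(90.0))  # ~3.7e4 g/cm^2, about 36 times the vertical depth
```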
As mentioned above, “traditional” neutrino telescopes based on water and ice as detector medium could be used similarly to probe a range of smaller $`\nu N`$ cross sections, $`10^{-31}`$–$`10^{-29}\mathrm{cm}^2`$, if their size significantly exceeds 1 km and if they are sensitive to at least a limited range of zenith angles $`\theta <90^{\circ }`$, i.e. above the horizon. For example, the ratio of upgoing to downgoing events in the $`10`$–$`100`$PeV range could be a measure of the absolute $`\nu N`$ cross section at these energies . However, the detection method and flavor sensitivity are different in this case. It is worth considering whether enhanced cross sections in extra dimensions significantly raise detector backgrounds. Here, the most important background for UHE neutrino detection is from muons created in decays of collision products of UHECR interactions in the atmosphere (and to a lesser degree, from tau lepton decays). The muon range in centimeters of water equivalent, with only Standard Model interactions, is approximately $$L_\mu ^{SM}\simeq 2.5\times 10^5\mathrm{ln}\left(\frac{2E}{\mathrm{TeV}}+1\right)\mathrm{cmwe}.$$ (16) Therefore, the UHE muon range is larger than the depth of the atmosphere (see Fig. 3). But in extra dimension models with $`M_{4+n}\sim \mathrm{TeV}`$, the muon acquires a cross section given by Eq. (10), and therefore has the same interaction rate in the atmosphere as the neutrinos we wish to detect. However, even if we conservatively estimate the atmospheric muon flux , it is more than 350 times smaller than the cosmogenic neutrino flux at $`E=10^{18}`$ eV, and more than $`10^3`$ times smaller for $`E\gtrsim 10^{19}`$ eV. So with extra dimensions, atmospheric muons may lead to interesting signatures in atmospheric and underground detectors, but they do not hinder cosmic UHE neutrino shower identification. UHECRs and neutrinos together with other astrophysical and cosmological constraints thus provide an interesting testing ground for new interactions beyond the Standard Model, as suggested for example in scenarios involving additional large compact dimensions. In the context of these scenarios we mention that Newton’s law of gravity is expected to be modified at distances smaller than the length scale given by Eq. (15). Indeed, there are laboratory experiments measuring the gravitational interaction at distances smaller than those previously probed (for a recent review of such experiments see Ref. ), which also probe these theories. Thus, future UHECR experiments and gravitational experiments in the laboratory together have the potential to provide rather strong tests of these theories. These tests complement constraints from collider experiments .

## V Conclusions

A direct measurement approach has been presented here for bounding the extent to which UHE neutrinos acquire greater interactions with matter than the Standard Model prescribes. By looking for nearly horizontal air showers, we can limit the UHE neutrino flux striking Earth. When this is combined with a natural source of cosmogenic UHE neutrinos, a model-independent upper limit on the cross section results. Air shower experiments can also test for the energy dependence of cross sections in the range $`10^{-29}`$–$`10^{-27}\mathrm{cm}^2`$; we have discussed the specific case of a linear energy dependence, as motivated by some models with large compact extra dimensions that have enjoyed much recent attention in the community. In such scenarios, fundamental quantum gravity scales in the $`1`$–$`10`$TeV range can be probed. This could also be relevant for testing string theory scenarios with string scales in the TeV range; however, reliable theoretical predictions for the relevant cross sections in such scenarios are not yet available. If this method proves beneficial, knowledge will be gained about new interactions at ultra-high energies. Should the data support extra dimensions, our understanding of UHE interactions between all particles (not just $`\nu N`$) will be changed, because changes in gravity affect every particle. In general, cosmic neutrinos and UHECRs will probe interactions at energies a few orders of magnitude beyond what can be achieved in terrestrial accelerators in the foreseeable future.
## Acknowledgments

We would like to thank Pierre Binetruy, Sean Carroll, Cedric Deffayet, Emilian Dudas, Gia Dvali, and Michael Kachelriess for valuable conversations, and Mary Hall Reno and Ren-Jie Zhang for their timely correspondence. We also thank Murat Boratav, Shigeru Yoshida, and Tom Weiler for comments on the manuscript.
Figure 1: The flavor violation parameters defined in Eq. (25); $`|\mathrm{\Delta }_{23}^{(R)}|`$ in the large angle MSW case (solid), $`|\mathrm{\Delta }_{12}^{(R)}|`$ in the large angle MSW case (dotted), and $`|\mathrm{\Delta }_{12}^{(R)}|`$ in the small angle MSW case (dashed). We take $`a_0=0`$, $`\mathrm{tan}\beta =3`$, and $`m_{G5}`$ and $`m_0`$ are chosen so that $`m_{\stackrel{~}{q}}=1\mathrm{TeV}`$ and $`m_{\stackrel{~}{W}}=150\mathrm{GeV}`$.

In particle physics, one of the greatest excitements of the last decade is the discovery of evidence for neutrino oscillations . In particular, the anomalies in the atmospheric and solar neutrino fluxes suggest neutrino masses much smaller than those of the quarks and charged leptons. Such small neutrino masses are beautifully explained by the see-saw mechanism with right-handed neutrinos . The see-saw mechanism gives (left-handed) neutrino masses of the form $`m_{\nu _L}\sim m_{\nu _\mathrm{D}}^2/M_{\nu _R}`$, where $`m_{\nu _\mathrm{D}}`$ and $`M_{\nu _R}`$ are Dirac and Majorana masses for neutrinos, respectively. The Dirac mass $`m_{\nu _\mathrm{D}}`$ is from the Yukawa interaction with the electroweak Higgs boson, and is of order the electroweak scale $`M_{\mathrm{weak}}`$ (or smaller). The Majorana mass $`M_{\nu _R}`$ is, however, not related to the electroweak symmetry breaking and can be much larger than $`M_{\mathrm{weak}}`$. Adopting $`M_{\nu _R}\gg M_{\mathrm{weak}}`$, $`m_{\nu _L}\ll M_{\mathrm{weak}}`$ is realized. For example, assuming $`m_{\nu _\mathrm{D}}\sim M_{\mathrm{weak}}`$, the atmospheric neutrino anomaly predicts $`M_{\nu _R}\sim 10^{14}`$–$`10^{15}\mathrm{GeV}`$ with relevant mixing parameters. This fact suggests the validity of the field theoretical description up to a scale much higher than the electroweak scale. If so, the quadratic divergence in the Higgs boson mass parameter badly destabilizes the electroweak scale unless some mechanism protects its stability. A natural solution to this problem is to introduce supersymmetry (SUSY); in SUSY models, quadratic divergences are cancelled between bosonic and fermionic loops, and hence the electroweak scale becomes stable against radiative corrections. The SUSY standard model also suggests interesting new physics at a high scale, i.e., the grand unified theory (GUT). With the renormalization group (RG) equation based on the minimal SUSY standard model (MSSM), all the gauge coupling constants meet at $`M_{\mathrm{GUT}}\simeq 2\times 10^{16}\mathrm{GeV}`$, while gauge coupling unification is not realized in the (non-supersymmetric) standard model . Therefore, SUSY GUT with right-handed neutrinos is a well-motivated extension of the standard model. In this paper, we consider $`\mathrm{\Delta }S=2`$ and $`\mathrm{\Delta }B=2`$ processes in the SU(5) GUT with right-handed neutrinos. Before the Super-Kamiokande experiment , similar effects were considered without taking into account the large mixing in the neutrino sector . Now, we have better insights into the neutrino mass and mixing parameters. In particular, the atmospheric neutrino flux measured by the Super-Kamiokande experiment strongly favors a large 2-3 mixing in the neutrino sector . Furthermore, with the on-going $`B`$ factories, the mixings in the quark sector, in particular the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements, will be well constrained. In such a circumstance, it is appropriate to study the implications of the right-handed neutrinos for the $`\mathrm{\Delta }S=2`$ and $`\mathrm{\Delta }B=2`$ processes within the current experimental situation.
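The see-saw scale quoted above follows from one line of arithmetic; as a sketch (ours; taking the Dirac mass at the top-quark-like value of 174 GeV is the assumption):

```python
m_D = 174.0     # GeV; electroweak-scale Dirac mass (assumption)
m_nu = 0.05e-9  # GeV, i.e. ~0.05 eV from the atmospheric anomaly
print(f"M_R ~ {m_D**2 / m_nu:.1e} GeV")  # ~6e14 GeV, in the quoted range
```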
In grand unified models, the right-handed down-type squarks $`\stackrel{~}{d}_R`$ couple to the right-handed neutrinos above the GUT scale, and sizable flavor violations are possible in the $`\stackrel{~}{d}_R`$ sector. In particular, the RG effect may generate off-diagonal elements in the soft SUSY breaking mass matrix of $`\stackrel{~}{d}_R`$. (Notice that the flavor violation in the $`\stackrel{~}{d}_R`$ sector is highly suppressed in the minimal supergravity model without the right-handed neutrinos.) Furthermore, in grand unified models, there are new physical CP violating phases in the colored Higgs vertices, which do not affect the low energy Yukawa interactions. These may affect CP and flavor violating processes in the $`K`$ and $`B`$ systems. Indeed, as we will see, the SUSY contribution to the $`ϵ_K`$ parameter can be as large as the experimental value even with a relatively heavy squark mass $`m_{\stackrel{~}{q}}\sim 1\mathrm{TeV}`$, and it might cause an anomaly in the on-going test of the CKM triangle. We start by introducing the Lagrangian for the SUSY SU(5) model with the right-handed neutrinos. Denoting the $`\mathrm{𝟏𝟎}`$, $`\overline{\mathrm{𝟓}}`$, and singlet fields of SU(5) as $`\mathrm{\Psi }_i`$, $`\mathrm{\Phi }_i`$, and $`N_i`$, respectively, the superpotential is given by (we assume that other Yukawa couplings, in particular those of the SU(5) symmetry breaking fields, are small, and we neglect their effects in this paper) $`W_{\mathrm{GUT}}={\displaystyle \frac{1}{8}}\mathrm{\Psi }_i\left[Y_U\right]_{ij}\mathrm{\Psi }_jH+\mathrm{\Psi }_i\left[Y_D\right]_{ij}\mathrm{\Phi }_j\overline{H}+N_i\left[Y_N\right]_{ij}\mathrm{\Phi }_jH+{\displaystyle \frac{1}{2}}N_i\left[M_N\right]_{ij}N_j,`$ (1) where $`\overline{H}`$ and $`H`$ are Higgs fields which are in the $`\overline{\mathrm{𝟓}}`$ and $`\mathrm{𝟓}`$ representations, respectively, and $`i`$ and $`j`$ are generation indices which run from 1 to 3. (We omit the SU(5) indices for simplicity of notation.) Here, $`M_N`$ is the Majorana mass matrix of the right-handed neutrinos. In order to make our points clearer, we adopt the simplest Majorana mass matrix, that is, the universal structure of $`M_N`$: $`\left[M_N\right]_{ij}=M_{\nu _R}\delta _{ij}.`$ (2) The following discussions are qualitatively unaffected by this assumption, although the numerical results depend on the structure of $`M_N`$. In Eq. (1), the $`Y`$’s are $`3\times 3`$ Yukawa matrices; $`Y_U`$ is a general complex symmetric matrix, while $`Y_D`$ and $`Y_N`$ are general complex matrices. By choosing a particular basis, we can eliminate unphysical phases and angles.
In this paper, we choose the basis of $`\mathrm{\Psi }_i`$, $`\mathrm{\Phi }_i`$, and $`N_i`$ such that the Yukawa matrices at the GUT scale become $`Y_U(M_{\mathrm{GUT}})=V_Q^T\widehat{\mathrm{\Theta }}_Q\widehat{Y}_UV_Q,Y_D(M_{\mathrm{GUT}})=\widehat{Y}_D,Y_N(M_{\mathrm{GUT}})=\widehat{Y}_NV_L\widehat{\mathrm{\Theta }}_L,`$ (3) where the $`\widehat{Y}`$’s are real diagonal matrices: $`\widehat{Y}_U=\mathrm{diag}(y_{u_1},y_{u_2},y_{u_3}),\widehat{Y}_D=\mathrm{diag}(y_{d_1},y_{d_2},y_{d_3}),\widehat{Y}_N=\mathrm{diag}(y_{\nu _1},y_{\nu _2},y_{\nu _3}),`$ (4) while the $`\widehat{\mathrm{\Theta }}`$’s are diagonal phase matrices: $`\widehat{\mathrm{\Theta }}_Q=\mathrm{diag}(e^{i\varphi _1^{(Q)}},e^{i\varphi _2^{(Q)}},e^{i\varphi _3^{(Q)}}),\widehat{\mathrm{\Theta }}_L=\mathrm{diag}(e^{i\varphi _1^{(L)}},e^{i\varphi _2^{(L)}},e^{i\varphi _3^{(L)}}),`$ (5) where the phases obey the constraints $`\varphi _1^{(Q)}+\varphi _2^{(Q)}+\varphi _3^{(Q)}=0`$ and $`\varphi _1^{(L)}+\varphi _2^{(L)}+\varphi _3^{(L)}=0`$. Furthermore, $`V_Q`$ and $`V_L`$ are unitary mixing matrices and are parameterized as $`V_Q=\left(\begin{array}{ccc}c_{12}^{(Q)}c_{13}^{(Q)}& s_{12}^{(Q)}c_{13}^{(Q)}& s_{13}^{(Q)}e^{-i\delta _{13}^{(Q)}}\\ -s_{12}^{(Q)}c_{23}^{(Q)}-c_{12}^{(Q)}s_{23}^{(Q)}s_{13}^{(Q)}e^{i\delta _{13}^{(Q)}}& c_{12}^{(Q)}c_{23}^{(Q)}-s_{12}^{(Q)}s_{23}^{(Q)}s_{13}^{(Q)}e^{i\delta _{13}^{(Q)}}& s_{23}^{(Q)}c_{13}^{(Q)}\\ s_{12}^{(Q)}s_{23}^{(Q)}-c_{12}^{(Q)}c_{23}^{(Q)}s_{13}^{(Q)}e^{i\delta _{13}^{(Q)}}& -c_{12}^{(Q)}s_{23}^{(Q)}-s_{12}^{(Q)}c_{23}^{(Q)}s_{13}^{(Q)}e^{i\delta _{13}^{(Q)}}& c_{23}^{(Q)}c_{13}^{(Q)}\end{array}\right),`$ (9) where $`c_{ij}^{(Q)}=\mathrm{cos}\theta _{ij}^{(Q)}`$ and $`s_{ij}^{(Q)}=\mathrm{sin}\theta _{ij}^{(Q)}`$, and $`V_L=V_Q|_{Q\to L}`$. $`W_{\mathrm{GUT}}`$ can also be expressed using the standard model fields. By properly relating the GUT fields to the standard model fields, the phases $`\varphi ^{(Q)}`$ and $`\varphi ^{(L)}`$ are eliminated from the interactions among the light fields. Indeed, let us embed the standard model fields into the GUT fields as $`\mathrm{\Psi }_i\supset \{Q,V_Q^{\dagger }\widehat{\mathrm{\Theta }}_Q^{*}\overline{U},\widehat{\mathrm{\Theta }}_L\overline{E}\}_i,\mathrm{\Phi }_i\supset \{\overline{D},\widehat{\mathrm{\Theta }}_L^{*}L\}_i,`$ (10) where $`Q_i\sim (\mathrm{𝟑},\mathrm{𝟐})_{1/6}`$, $`\overline{U}_i\sim (\overline{\mathrm{𝟑}},\mathrm{𝟏})_{-2/3}`$, $`\overline{D}_i\sim (\overline{\mathrm{𝟑}},\mathrm{𝟏})_{1/3}`$, $`L_i\sim (\mathrm{𝟏},\mathrm{𝟐})_{-1/2}`$, and $`\overline{E}_i\sim (\mathrm{𝟏},\mathrm{𝟏})_1`$ are the quarks and leptons of the $`i`$-th generation with the SU(3)<sub>C</sub> $`\times `$ SU(2)<sub>L</sub> $`\times `$ U(1)<sub>Y</sub> gauge quantum numbers as indicated.
With this embedding, $`W_{\mathrm{GUT}}`$ at the GUT scale becomes $`W_{\mathrm{GUT}}`$ $`=`$ $`W_{\mathrm{SSM}}`$ (11) $`+{\displaystyle \frac{1}{2}}Q_i\left[V_Q^T\widehat{\mathrm{\Theta }}_Q\widehat{Y}_UV_Q\right]_{ij}Q_jH_C+\overline{E}_i\left[\widehat{\mathrm{\Theta }}_L\widehat{Y}_U\right]_{ij}\overline{U}_jH_C`$ $`+\overline{U}_i\left[\widehat{\mathrm{\Theta }}_Q^{*}V_Q^{*}\widehat{Y}_D\right]_{ij}\overline{D}_j\overline{H}_C+Q_i\left[\widehat{Y}_D\widehat{\mathrm{\Theta }}_L^{*}\right]_{ij}L_j\overline{H}_C`$ $`+N_i\left[\widehat{Y}_NV_L\widehat{\mathrm{\Theta }}_L\right]_{ij}\overline{D}_jH_C,`$ where $`H_C`$ and $`\overline{H}_C`$ are the colored Higgses, and the low energy superpotential is given by $`W_{\mathrm{SSM}}`$ $`=`$ $`Q_i\left[V_Q^T\widehat{Y}_U\right]_{ij}\overline{U}_jH_u+Q_i\left[\widehat{Y}_D\right]_{ij}\overline{D}_jH_d+\overline{E}_i\left[\widehat{Y}_E\right]_{ij}L_jH_d+N_i\left[\widehat{Y}_NV_L\right]_{ij}L_jH_u+{\displaystyle \frac{1}{2}}M_{\nu _R}N_iN_i,`$ (12) with $`H_u`$ and $`H_d`$ being the up- and down-type Higgses, respectively. Simple SU(5) GUT predicts $`\widehat{Y}_D=\widehat{Y}_E`$, but this relation breaks down for the first and second generations. Some non-trivial flavor physics is necessary to obtain realistic down-type quark and charged lepton Yukawa matrices. Such new physics may provide a new source of flavor and CP violations , but we do not consider such an effect in this paper. In the basis given in Eq. (12), the Yukawa matrices for the down-type quarks and charged leptons are diagonal. Strictly speaking, the RG effect mixes the flavors, and their diagonal form is not preserved at lower scales. Such flavor mixing is, however, a rather minor effect for the Yukawa matrices, and the down-type quarks and charged leptons in Eq. (12) correspond to the mass eigenstates quite accurately. Therefore, this basis is useful for understanding the qualitative features of $`K`$ and $`B`$ physics. So, we first discuss the flavor violation of the model using Eq. (12), neglecting the RG effect on the Yukawa matrices below the GUT scale. (Notice that, in our numerical discussion later, all the flavor mixing effects are included. In particular, the mass eigenstates of the quarks and charged leptons will be defined using the Yukawa matrices given at the electroweak scale.) With the superpotential (12), the left-handed neutrino mass matrix is given by $`[m_{\nu _L}]_{ij}={\displaystyle \frac{\langle H_u\rangle ^2}{M_{\nu _R}}}\left[V_L^T\widehat{Y}_N^2V_L\right]_{ij}={\displaystyle \frac{v^2\mathrm{sin}^2\beta }{2M_{\nu _R}}}\left[V_L^T\widehat{Y}_N^2V_L\right]_{ij},`$ (13) where $`v\simeq 246\mathrm{GeV}`$, and $`\beta `$ parameterizes the relative size of the two Higgs vacuum expectation values: $`\mathrm{tan}\beta \equiv \langle H_u\rangle /\langle H_d\rangle `$. Then, $`V_L`$ provides the mixing matrix of the lepton sector. On the contrary, as seen in Eq. (12), $`V_Q`$ is the CKM matrix: $`V_Q=V_{\mathrm{CKM}}`$. At the tree level, effects of the heavy particles, like the colored Higgses and right-handed neutrinos, are suppressed by inverse powers of their masses. As a result, we can almost completely neglect the interactions with such heavy particles, although there are a few important processes like neutrino oscillation and proton decay. The phases $`\varphi ^{(Q)}`$ and $`\varphi ^{(L)}`$ do not affect the Yukawa interactions among the standard model fields and only appear in the colored Higgs vertices.
Therefore, their effects on low energy physics are rather indirect, and those phases are not determined from the Yukawa interactions of the quarks and leptons with the electroweak Higgses. At the loop level, however, heavy particles may affect low energy quantities through the RG effect . In particular, flavor changing operators may be significantly affected. For example, the neutrino Yukawa interactions may induce off-diagonal elements in the soft SUSY breaking mass matrix of the left-handed sleptons. In addition, even if there are no neutrino Yukawa interactions, lepton flavor numbers are violated in grand unified models by the up-type Yukawa interaction, and hence we may expect non-vanishing off-diagonal elements in the mass matrix of the right-handed sleptons. Since the lepton flavor numbers are preserved in the standard model, these effects result in very drastic signals like $`\mu \to e\gamma `$ and $`\tau \to \mu \gamma `$ in SU(5) and SO(10) models as well as in models with the right-handed neutrinos . Furthermore, hadronic flavor and CP violations in grand unified models were considered in Ref. without taking into account the large mixing in the neutrino sector. The flavor mixing below the GUT scale through the CKM matrix also affects the $`\mathrm{\Delta }S=2`$ and $`\mathrm{\Delta }B=2`$ processes . Here, we consider CP violation and flavor mixing in the down sector induced by the neutrino Yukawa matrix. As seen in Eq. (11), $`\overline{D}`$ interacts with the right-handed neutrinos above the GUT scale, and its soft SUSY breaking scalar mass matrix is affected by the neutrino Yukawa interactions. Because of possible flavor and CP violating parameters in $`V_L`$ and $`\widehat{\mathrm{\Theta }}_L`$, this can be important. A similar effect was also considered in Ref. for the $`b\to s\gamma `$ process, for which the effect was insignificant. Here, we will show that a large effect is possible, in particular in the CP violation in $`K`$ decay. Furthermore, Ref. did not take into account the physical phases $`\varphi ^{(L)}`$ which play a very important role in calculating the SUSY contribution to the $`ϵ_K`$ parameter. We are now in a position to discuss the RG effect and to estimate the off-diagonal elements in the down-type squark mass matrix. The RG effect is not calculable unless the boundary condition for the SUSY breaking parameters is given. In order to estimate the importance of this effect, we use the minimal supergravity boundary condition. Then, the SUSY breaking terms are parameterized by the following three parameters: the universal scalar mass $`m_0`$, the universal $`A`$-parameter $`a_0`$, which is the ratio of the SUSY breaking trilinear scalar interactions to the corresponding Yukawa couplings, and the SU(5) gaugino mass $`m_{G5}`$. (There are extra important parameters. In particular, in order to have a viable electroweak symmetry breaking, the so-called $`\mu `$\- and $`B_\mu `$-parameters are necessary. However, they do not affect the running of the other parameters, so they do not have to be specified in our RG analysis. We assume they have the correct values required by the radiative electroweak symmetry breaking.) We assume that the boundary condition is given at the reduced Planck scale $`M_{*}\simeq 2.4\times 10^{18}\mathrm{GeV}`$. Once the boundary condition is given, the MSSM Lagrangian at the electroweak scale is obtained by running all the parameters down from the reduced Planck scale to the electroweak scale with the RG equations.
For the RG analysis, the relevant effective field theory describing physics at the scale $`\mu `$ is the MSSM (without the right-handed neutrinos) for $`M_{\mathrm{weak}}\leq \mu \leq M_{\nu _R}`$, the SUSY standard model with the right-handed neutrinos for $`M_{\nu _R}\leq \mu \leq M_{\mathrm{GUT}}`$, and the SUSY SU(5) GUT for $`M_{\mathrm{GUT}}\leq \mu \leq M_{*}`$. We use the minimal SUSY SU(5) GUT where the SU(5) symmetry is broken by the 24 representation of SU(5).

Before showing the results of the numerical calculation, let us discuss the behavior of the solution to the RG equation using a simple approximation. With the one-iteration approximation, the off-diagonal elements in the soft SUSY breaking mass of $`\stackrel{~}{d}_R`$ are given by

$$[m_{\stackrel{~}{d}_R}^2]_{ij}\simeq -\frac{1}{8\pi ^2}\left[Y_N^{\dagger }Y_N\right]_{ij}(3m_0^2+a_0^2)\mathrm{log}\frac{M_{*}}{M_{\mathrm{GUT}}}\simeq -\frac{1}{8\pi ^2}e^{i(\varphi _i^{(L)}-\varphi _j^{(L)})}y_{\nu _k}^2\left[V_L^{*}\right]_{ki}\left[V_L\right]_{kj}(3m_0^2+a_0^2)\mathrm{log}\frac{M_{*}}{M_{\mathrm{GUT}}}.$$ (14)

This expression suggests two important consequences of the SU(5) model with the right-handed neutrinos. First, if the neutrinos have a large Yukawa coupling and mixing, off-diagonal elements in $`[m_{\stackrel{~}{d}_R}^2]_{ij}`$ are generated. Since the atmospheric and solar neutrino anomalies require non-vanishing mixing parameters in $`V_L`$, this effect may generate sizable off-diagonal elements in $`[m_{\stackrel{~}{d}_R}^2]`$. Notice that, below the GUT scale, $`\stackrel{~}{d}_R`$ has only very weak flavor violating Yukawa interactions. As a result, if the minimal supergravity boundary condition is given at the GUT scale, we find much smaller flavor violation in $`[m_{\stackrel{~}{d}_R}^2]`$. The second point concerns the phase of the RG induced off-diagonal elements. Because the phases $`\varphi ^{(L)}`$ are free parameters, the RG induced off-diagonal elements (14) may have arbitrary phases, and large CP violating phases in the mass matrix of $`\stackrel{~}{d}_R`$ are possible.

As can be understood from Eq. (14), $`[m_{\stackrel{~}{d}_R}^2]_{ij}`$ depends on the structure of the neutrino mixing matrix $`V_L`$. In our analysis, we use the mixing angles and neutrino masses suggested by the atmospheric neutrino data, and also by the large angle or small angle MSW solution to the solar neutrino problem .<sup>#4</sup> (Footnote #4: We can also consider the case of the vacuum oscillation solution to the solar neutrino problem. The vacuum oscillation solution requires a very light second generation neutrino ($`m_{\nu _2}\sim O(10^{-5}\mathrm{eV})`$). With our simple setup, $`y_{\nu _2}`$ cannot be large once the perturbativity of $`y_{\nu _3}`$ is imposed up to $`M_{*}`$, since we assume the universal structure of $`M_N`$. Consequently, the RG effect is suppressed except for the 2-3 mixing. If we consider a general neutrino mass matrix, however, this may not be true. For example, a non-universal $`M_N`$ or a non-vanishing $`[V_L]_{31}`$ would easily enhance the RG effect.)
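To make the scales concrete, the following short Python sketch (our illustration; the threshold values other than $`M_{*}`$ are assumptions) lists the three effective theories and the logarithms that control the size of the RG-induced effects, cf. the $`\mathrm{log}(M_{*}/M_{\mathrm{GUT}})`$ factor appearing in Eq. (14).

```python
import numpy as np

# Running schedule sketch.  M_* = 2.4e18 GeV is quoted in the text; the
# GUT scale, the right-handed neutrino mass (a free parameter), and the
# weak scale below are illustrative assumptions.
M_STAR = 2.4e18   # reduced Planck scale (GeV)
M_GUT  = 2.0e16   # assumed GUT scale (GeV)
M_NR   = 1.0e14   # assumed right-handed neutrino mass (GeV)
M_WEAK = 1.0e2    # electroweak scale (GeV)

segments = [
    ("SUSY SU(5) GUT",              M_STAR, M_GUT),
    ("SUSY SM + right-handed nu's", M_GUT,  M_NR),
    ("MSSM",                        M_NR,   M_WEAK),
]
for name, hi, lo in segments:
    # log(hi/lo) sets the size of the leading-log RG effects
    print(f"{name:30s} log({hi:.1e}/{lo:.1e}) = {np.log(hi / lo):5.2f}")
```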
For the large angle MSW case, we use

$$m_\nu \simeq \left(0,\;0.004\mathrm{eV},\;0.03\mathrm{eV}\right),\qquad V_L\simeq \left(\begin{array}{ccc}0.91& 0.30& 0.30\\ -0.42& 0.64& 0.64\\ 0& -0.71& 0.70\end{array}\right),$$ (19)

and for the small angle case, we use

$$m_\nu \simeq \left(0,\;0.003\mathrm{eV},\;0.03\mathrm{eV}\right),\qquad V_L\simeq \left(\begin{array}{ccc}1.0& 0.028& 0.028\\ -0.040& 0.70& 0.71\\ 0& -0.71& 0.70\end{array}\right).$$ (24)

Here, we assume that the neutrino masses are hierarchical, and that the lightest neutrino mass is negligibly small. The 3-1 element of $`V_L`$ is known to be relatively small: $`|[V_L]_{31}|^2\lesssim 0.05`$ from the CHOOZ experiment . We simply assume $`[V_L]_{31}`$ to be small enough to be neglected, and we take $`[V_L]_{31}=0`$. We will discuss the implication of $`[V_L]_{31}\neq 0`$ later. In calculating the neutrino Yukawa matrix, we take $`M_{\nu _R}`$ as a free parameter, and the neutrino Yukawa matrix is determined as a function of $`M_{\nu _R}`$ using Eq. (13).

In order to discuss the flavor violating effects, it is convenient to define $`\mathrm{\Delta }_{ij}^{(R)}`$, the off-diagonal element of $`[m_{\stackrel{~}{d}_R}^2]`$ normalized by the squark mass:

$$\mathrm{\Delta }_{ij}^{(R)}\equiv \frac{[m_{\stackrel{~}{d}_R}^2]_{ij}}{m_{\stackrel{~}{q}}^2}\simeq \frac{[m_{\stackrel{~}{d}_R}^2]_{ij}}{[m_{\stackrel{~}{d}_R}^2]_{11}},$$ (25)

where all the quantities are evaluated at the electroweak scale. Notice that, in our case, all the squark masses are almost degenerate except those of the stops. We use the first generation right-handed squark mass as a representative value.

In Fig. 1, we plot several $`|\mathrm{\Delta }_{ij}^{(R)}|`$ as a function of $`M_{\nu _R}`$. Here, we take $`a_0=0`$, $`\mathrm{tan}\beta =3`$, and $`m_{G5}`$ and $`m_0`$ are chosen so that $`m_{\stackrel{~}{q}}=1\mathrm{TeV}`$ and $`m_{\stackrel{~}{W}}=150\mathrm{GeV}`$ (which gives a gluino mass of $`m_{\stackrel{~}{G}}\simeq 520\mathrm{GeV}`$). As one can see, for larger $`M_{\nu _R}`$ the off-diagonal elements are more enhanced. This is because the neutrino Yukawa coupling is proportional to $`M_{\nu _R}^{1/2}`$ for a fixed neutrino mass. $`\mathrm{\Delta }_{23}^{(R)}`$ is dominated by the 2-3 mixing in the neutrino sector, and hence both the large and small mixing cases give almost the same $`\mathrm{\Delta }_{23}^{(R)}`$. (Therefore, we plot only $`\mathrm{\Delta }_{23}^{(R)}`$ in the large mixing case.) On the contrary, with the choice of $`V_L`$ given in Eqs. (19) and (24), $`[V_L]_{31}=0`$, and hence both $`\mathrm{\Delta }_{12}^{(R)}`$ and $`\mathrm{\Delta }_{13}^{(R)}`$ come from the term proportional to $`y_{\nu _2}^2`$ in Eq. (14). As a result, these quantities are more enhanced in the large angle MSW case because of the larger $`|[V_L]_{21}|`$. Furthermore, since $`|[V_L]_{22}|\simeq |[V_L]_{23}|`$ in the mixing matrices we use, $`|\mathrm{\Delta }_{12}^{(R)}|\simeq |\mathrm{\Delta }_{13}^{(R)}|`$ is realized in both the large and small mixing cases. If $`[V_L]_{31}\sim O(10^{-2})`$, however, the $`y_{\nu _3}^2`$-term may give comparable contributions. We also found that $`|\mathrm{\Delta }_{ij}^{(R)}|`$ is almost independent of the phases $`\varphi ^{(Q)}`$ and $`\varphi ^{(L)}`$. As suggested by Eq. (14), however, the phase of $`\mathrm{\Delta }_{ij}^{(R)}`$ is sensitive to $`\varphi ^{(L)}`$; the phase of $`\mathrm{\Delta }_{ij}^{(R)}`$ agrees with $`\varphi _i^{(L)}-\varphi _j^{(L)}`$ very accurately.
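The one-iteration estimate can be reproduced in a few lines. The sketch below (ours, not the paper's full RG integration) inverts Eq. (13) for the eigenvalues of the neutrino Yukawa matrix and then evaluates Eqs. (14) and (25) for the large angle MSW inputs of Eq. (19); the SUSY parameters follow the choice quoted for Fig. 1, and the values of $`M_{\nu _R}`$, $`M_{\mathrm{GUT}}`$ and the phases $`\varphi ^{(L)}`$ are assumptions.

```python
import numpy as np

# Leading-log estimate of Delta^(R)_ij from Eqs. (13), (14) and (25).
v, tan_beta = 246.0, 3.0                       # GeV; tan(beta) as in Fig. 1
sin_beta = tan_beta / np.sqrt(1.0 + tan_beta**2)
M_NR, M_STAR, M_GUT = 1.0e14, 2.4e18, 2.0e16   # GeV; M_NR is a free parameter
m0, a0, m_sq = 1000.0, 0.0, 1000.0             # GeV; squark mass ~ 1 TeV

m_nu = np.array([0.0, 0.004e-9, 0.03e-9])      # GeV, i.e. (0, 0.004, 0.03) eV
V_L = np.array([[0.91,  0.30, 0.30],
                [-0.42, 0.64, 0.64],
                [0.0,  -0.71, 0.70]])          # Eq. (19)
phi_L = np.array([0.0, 1.0, 0.5])              # free GUT phases (assumed)

y_nu = np.sqrt(2.0 * M_NR * m_nu) / (v * sin_beta)   # inverted Eq. (13)

pref = (3.0 * m0**2 + a0**2) * np.log(M_STAR / M_GUT) / (8.0 * np.pi**2)
delta_R = np.zeros((3, 3), dtype=complex)
for i in range(3):
    for j in range(3):
        s = sum(y_nu[k]**2 * V_L[k, i] * V_L[k, j] for k in range(3))
        delta_R[i, j] = -np.exp(1j * (phi_L[i] - phi_L[j])) * s * pref / m_sq**2

# magnitudes ~ 1e-4 ... 1e-2, with phases set by phi_i^(L) - phi_j^(L)
print(abs(delta_R[0, 1]), abs(delta_R[0, 2]), abs(delta_R[1, 2]))
```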
Therefore, $`\mathrm{\Delta }_{ij}^{(R)}`$ may have an arbitrary phase and can be a new source of CP violation. The most stringent constraints on the off-diagonal elements in the squark mass matrices are from the $`\mathrm{\Delta }S=2`$ and $`\mathrm{\Delta }B=2`$ processes , so we discuss them in this paper. In particular, $`|\mathrm{\Delta }_{12}^{(R)}|\sim O(10^{-4})`$, realized with $`M_{\nu _R}\sim 10^{14}-10^{15}\mathrm{GeV}`$, may generate an $`ϵ_K`$ parameter as large as the experimental value .

In order to discuss the SUSY contribution to the flavor and CP violations (in particular, to the $`ϵ_K`$ parameter), we calculate the effective Hamiltonian contributing to the $`\mathrm{\Delta }S=2`$ and $`\mathrm{\Delta }B=2`$ processes:<sup>#5</sup> (Footnote #5: In Eq. (26), possible operators arising from the left-right squark mixing are omitted, although they are included in our numerical calculation. We checked that their contributions are negligible relative to the dominant contributions from $`\mathrm{\Delta }_{ij}^{(R)}`$ (and $`\mathrm{\Delta }_{ij}^{(L)}`$).)

$$\mathcal{H}_{\mathrm{eff}}=\left[C_{LL}\right]_{ij}\left(\overline{d}_i^a\gamma _\mu P_Ld_j^a\right)\left(\overline{d}_i^b\gamma _\mu P_Ld_j^b\right)+\left[C_{RR}\right]_{ij}\left(\overline{d}_i^a\gamma _\mu P_Rd_j^a\right)\left(\overline{d}_i^b\gamma _\mu P_Rd_j^b\right)+\left[C_{LR}^{(1)}\right]_{ij}\left(\overline{d}_i^aP_Ld_j^a\right)\left(\overline{d}_i^bP_Rd_j^b\right)+\left[C_{LR}^{(2)}\right]_{ij}\left(\overline{d}_i^aP_Ld_j^b\right)\left(\overline{d}_i^bP_Rd_j^a\right),$$ (26)

where $`P_{R,L}=\frac{1}{2}(1\pm \gamma _5)`$, and $`d_i^a`$ is the down-type quark in the $`i`$-th generation, with $`a`$ being the $`\mathrm{SU}(3)_C`$ index. Based on the operator structure, we call the first and second operators the $`LL`$ and $`RR`$ operators, respectively, and the third and fourth ones the $`LR`$ operators. In our analysis, we only include the dominant contributions from the gluino-squark loops, and use the mass-insertion approximation. The SUSY contributions to the coefficients are found in Ref. , and are given by

$$\left[C_{LL}\right]_{ij}=\frac{\alpha _s^2}{36m_{\stackrel{~}{q}}^2}\left(\mathrm{\Delta }_{ij}^{(L)}\right)^2\left[4xf_6(x)+11\stackrel{~}{f}_6(x)\right],$$ (27)

$$\left[C_{RR}\right]_{ij}=\frac{\alpha _s^2}{36m_{\stackrel{~}{q}}^2}\left(\mathrm{\Delta }_{ij}^{(R)}\right)^2\left[4xf_6(x)+11\stackrel{~}{f}_6(x)\right],$$ (28)

$$\left[C_{LR}^{(1)}\right]_{ij}=\frac{\alpha _s^2}{3m_{\stackrel{~}{q}}^2}\mathrm{\Delta }_{ij}^{(L)}\mathrm{\Delta }_{ij}^{(R)}\left[7xf_6(x)-\stackrel{~}{f}_6(x)\right],$$ (29)

$$\left[C_{LR}^{(2)}\right]_{ij}=\frac{\alpha _s^2}{9m_{\stackrel{~}{q}}^2}\mathrm{\Delta }_{ij}^{(L)}\mathrm{\Delta }_{ij}^{(R)}\left[xf_6(x)+5\stackrel{~}{f}_6(x)\right],$$ (30)

where $`x=m_{\stackrel{~}{G}}^2/m_{\stackrel{~}{q}}^2`$, and

$$f_6(x)=\frac{6(1+3x)\mathrm{log}x+x^3-9x^2-9x+17}{6(x-1)^5},$$ (31)

$$\stackrel{~}{f}_6(x)=\frac{6x(1+x)\mathrm{log}x-x^3-9x^2+9x+1}{6(x-1)^5}.$$ (32)

Furthermore, $`\mathrm{\Delta }_{ij}^{(L)}`$ parameterizes the flavor violation in the left-handed down-type squarks $`\stackrel{~}{d}_L`$:

$$\mathrm{\Delta }_{ij}^{(L)}\equiv \frac{[m_{\stackrel{~}{d}_L}^2]_{ji}}{m_{\stackrel{~}{q}}^2}.$$ (33)

With the minimal supergravity boundary condition, $`[m_{\stackrel{~}{d}_L}^2]_{ij}`$ is generated by the large top Yukawa coupling $`y_t`$ and the CKM matrix.
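The loop functions (31)–(32) and the box coefficients (27)–(30) are easy to evaluate numerically; the sketch below (ours, with the gluino and squark masses of Fig. 1 and illustrative mass insertions) shows the expected size of the coefficients. The left-handed mixing $`\mathrm{\Delta }^{(L)}`$ entering here is fixed by the CKM-induced running discussed next.

```python
import numpy as np

# Loop functions of Eqs. (31)-(32) and coefficients of Eqs. (27)-(30).
def f6(x):
    return (6 * (1 + 3 * x) * np.log(x)
            + x**3 - 9 * x**2 - 9 * x + 17) / (6 * (x - 1)**5)

def f6t(x):
    return (6 * x * (1 + x) * np.log(x)
            - x**3 - 9 * x**2 + 9 * x + 1) / (6 * (x - 1)**5)

alpha_s, m_sq, m_gl = 0.12, 1000.0, 520.0   # assumed alpha_s; masses of Fig. 1
x = (m_gl / m_sq)**2
dL = 3.0e-4                                 # assumed Delta_12^(L)
dR = 7.0e-4 * np.exp(1.0j)                  # assumed Delta_12^(R) with a phase

C_LL  = alpha_s**2 / (36 * m_sq**2) * dL**2 * (4 * x * f6(x) + 11 * f6t(x))
C_RR  = alpha_s**2 / (36 * m_sq**2) * dR**2 * (4 * x * f6(x) + 11 * f6t(x))
C_LR1 = alpha_s**2 / (3 * m_sq**2) * dL * dR * (7 * x * f6(x) - f6t(x))
C_LR2 = alpha_s**2 / (9 * m_sq**2) * dL * dR * (x * f6(x) + 5 * f6t(x))
print(C_LL, C_RR, C_LR1, C_LR2)             # ~1e-17 ... 1e-15 GeV^-2
```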
To leading-log accuracy,

$$[m_{\stackrel{~}{d}_L}^2]_{ij}\simeq -\frac{1}{8\pi ^2}y_t^2\left[V_{\mathrm{CKM}}^{*}\right]_{3i}\left[V_{\mathrm{CKM}}\right]_{3j}(3m_0^2+a_0^2)\left(3\mathrm{log}\frac{M_{*}}{M_{\mathrm{GUT}}}+\mathrm{log}\frac{M_{\mathrm{GUT}}}{M_{\mathrm{weak}}}\right),$$ (34)

and numerically, for example, $`|\mathrm{\Delta }_{12}^{(L)}|`$ is $`O(10^{-4})`$.

With the effective Hamiltonian (26), we first calculate the SUSY contribution to $`ϵ_K`$:

$$ϵ_K=\frac{e^{i\pi /4}\mathrm{Im}\langle K|\mathcal{H}_{\mathrm{eff}}|\overline{K}\rangle }{2\sqrt{2}m_K\mathrm{\Delta }m_K},$$ (35)

where $`m_K`$ and $`\mathrm{\Delta }m_K`$ are the $`K`$-meson mass and the $`K_L`$-$`K_S`$ mass difference, respectively. QCD corrections below the electroweak scale are neglected. The matrix element of the effective Hamiltonian is calculated using the vacuum insertion approximation:

$$\langle K|\mathcal{H}_{\mathrm{eff}}|\overline{K}\rangle =\frac{2}{3}\left(\left[C_{LL}\right]_{12}+\left[C_{RR}\right]_{12}\right)m_K^2f_K^2+\left[C_{LR}^{(1)}\right]_{12}\left(\frac{1}{2}\frac{m_K^2}{(m_s+m_d)^2}+\frac{1}{12}\right)m_K^2f_K^2+\left[C_{LR}^{(2)}\right]_{12}\left(\frac{1}{6}\frac{m_K^2}{(m_s+m_d)^2}+\frac{1}{4}\right)m_K^2f_K^2,$$ (36)

where $`f_K\simeq 160\mathrm{MeV}`$ is the $`K`$-meson decay constant, and $`m_s`$ and $`m_d`$ are the strange and down quark masses, respectively. The new flavor and CP violating effects in the $`\stackrel{~}{d}_R`$ sector have an important implication for the calculation of $`ϵ_K`$, since the matrix elements of the $`LR`$-type operators are enhanced by the factor $`(m_K/m_s)^2\sim 10`$, as seen in Eq. (36). As a result, if all the coefficients of the $`LL`$, $`RR`$, and $`LR`$ operators are comparable, the $`LR`$ ones dominate the contribution to the $`ϵ_K`$ parameter. In the SUSY SU(5) model with the right-handed neutrinos, this is the case. However, if there are no right-handed neutrinos, or if the running above the GUT scale is not included, $`\mathrm{\Delta }_{12}^{(R)}`$ is much more suppressed and the $`LL`$ operator gives the dominant effect. Then, the SUSY contribution to $`ϵ_K`$ becomes much smaller .

We solve the RG equations numerically and obtain the electroweak scale values of the MSSM parameters. Then, we calculate $`ϵ_K^{(\mathrm{SUSY})}`$, the SUSY contribution to the $`ϵ_K`$ parameter. In Figs. 2 and 3, we show $`|ϵ_K^{(\mathrm{SUSY})}|`$ on the $`\mathrm{tan}\beta `$ vs. $`m_{\stackrel{~}{q}}`$ plane. Here, we take $`m_{\stackrel{~}{W}}=150\mathrm{GeV}`$, and consider both the large and small angle MSW cases. The result depends on the phase $`\varphi _2^{(L)}-\varphi _1^{(L)}`$; $`ϵ_K^{(\mathrm{SUSY})}`$ is approximately proportional to $`\mathrm{sin}(\varphi _2^{(L)}-\varphi _1^{(L)}+\mathrm{arg}(\mathrm{\Delta }_{12}^{(L)}))`$. We choose the $`\varphi _2^{(L)}-\varphi _1^{(L)}`$ which maximizes $`|ϵ_K^{(\mathrm{SUSY})}|`$.

The RG effect also generates off-diagonal elements in the slepton mass matrix, which induce various lepton flavor violations. In particular, in our case, the rate for the $`\mu \rightarrow e\gamma `$ process may become significantly large. In Figs. 2 and 3, we shaded the region with $`Br(\mu \rightarrow e\gamma )\geq 4.9\times 10^{-11}`$, which is already excluded by the negative search for $`\mu \rightarrow e\gamma `$ . The $`\mu \rightarrow e\gamma `$ rate is enhanced for large $`\mathrm{tan}\beta `$ .
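The size of $`ϵ_K^{(\mathrm{SUSY})}`$ implied by Eqs. (35) and (36) can be estimated directly; in the sketch below (ours) the Wilson coefficients are placeholders of the magnitude obtained in the previous step, and the quark masses are assumptions. Such an estimate applies, of course, only in the region of parameter space left open by the $`\mu \rightarrow e\gamma `$ constraint.

```python
import numpy as np

# Vacuum-insertion estimate of epsilon_K from Eqs. (35)-(36).
m_K, f_K, dm_K = 0.4977, 0.160, 3.49e-15   # GeV: K mass, f_K, K_L-K_S splitting
m_s, m_d = 0.15, 0.007                     # GeV: assumed quark masses
R = m_K**2 / (m_s + m_d)**2                # the (m_K/m_s)^2 ~ 10 enhancement

C_LL = C_RR = 1.0e-17 * np.exp(0.5j)       # GeV^-2, placeholder values
C_LR1 = C_LR2 = 5.0e-16 * np.exp(0.5j)     # GeV^-2, placeholder values

ME = (2.0 / 3.0 * (C_LL + C_RR)
      + C_LR1 * (0.5 * R + 1.0 / 12.0)
      + C_LR2 * (R / 6.0 + 0.25)) * m_K**2 * f_K**2          # Eq. (36)
eps_K = np.exp(1j * np.pi / 4) * ME.imag / (2 * np.sqrt(2) * m_K * dm_K)
print(abs(eps_K))                          # ~2e-3: comparable to experiment
```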
However, a large amount of parameter space is still allowed, in particular in the low $`\mathrm{tan}\beta `$ region. As one can see, the SUSY contribution to $`ϵ_K`$ can be $`O(10\%)`$ of the experimentally measured value ($`ϵ_K^{(\mathrm{exp})}\simeq 2.3\times 10^{-3}`$ ) even with relatively heavy squarks ($`m_{\stackrel{~}{q}}\simeq 1\mathrm{TeV}`$), and can even be comparable to $`ϵ_K^{(\mathrm{exp})}`$. Of course, if $`M_{\nu _R}`$ is smaller, or if there is an accidental cancellation among the phases, then the SUSY contribution to $`ϵ_K`$ is more suppressed.

The information from $`ϵ_K`$ plays a significant role in constraining the allowed region in the so-called $`\rho `$ vs. $`\eta `$ plane . If the currently measured $`ϵ_K`$ has a significant contamination from the SUSY contribution, however, the $`\rho `$ and $`\eta `$ suggested by $`ϵ_K`$ may become inconsistent with those from other constraints. Most importantly, the $`B`$ factories will measure the CP violation in the $`B_d\rightarrow \psi K_S`$ mode, and in the standard model this process gives the phase $`\mathrm{arg}([V_{\mathrm{CKM}}]_{cd}[V_{\mathrm{CKM}}^{*}]_{cb}/[V_{\mathrm{CKM}}]_{td}[V_{\mathrm{CKM}}^{*}]_{tb})`$. As will be discussed below, the SUSY correction to this process is small, at least in the parameter space we are discussing. By comparing the $`\rho `$ and $`\eta `$ suggested by $`ϵ_K`$ with those from $`B_d\rightarrow \psi K_S`$, we may see an anomaly arising from the SUSY loop effect if we fit the data just assuming the standard model.

Another checkpoint is the $`K_L`$-$`K_S`$ mass difference,

$$\mathrm{\Delta }m_K=\frac{|\langle K|\mathcal{H}_{\mathrm{eff}}|\overline{K}\rangle |}{m_K},$$ (37)

since in some cases the SUSY contribution to $`\mathrm{\Delta }m_K`$ is significantly large. In our case, however, $`\mathrm{\Delta }m_K`$ is less important than the $`ϵ_K`$ parameter; with $`\mathrm{\Delta }_{12}^{(R)}\sim \mathrm{\Delta }_{12}^{(L)}\sim O(10^{-4}-10^{-3})`$, the SUSY contribution to the $`\mathrm{\Delta }m_K`$ parameter is at the $`O(1\%)`$ level of its experimental value.

Finally, we discuss the contributions to the $`\mathrm{\Delta }B=2`$ processes. Since the $`\mathrm{\Delta }_{13}^{(R,L)}`$ and $`\mathrm{\Delta }_{23}^{(R,L)}`$ parameters are non-vanishing, they may change the standard model predictions for the $`B`$ decays. In particular, due to the possible GUT phases $`\varphi ^{(Q)}`$ and $`\varphi ^{(L)}`$, the CP violation in the $`B`$ systems as well as the mass differences $`\mathrm{\Delta }m_B`$ may be affected. $`\mathrm{\Delta }_{13}^{(R,L)}`$ affects the $`B_d`$ meson. With $`\mathrm{\Delta }_{13}^{(R)}\lesssim O(10^{-3})`$ and $`\mathrm{\Delta }_{13}^{(L)}\sim O(10^{-4})`$, however, the SUSY contribution to $`\mathrm{\Delta }m_{B_d}`$ is also small, and is at most $`1\%`$ . This also means that the phase in the $`B_d`$-$`\overline{B}_d`$ mixing matrix is dominated by the standard model contribution. Because of the large 2-3 mixing in the $`V_L`$ matrix, the SUSY contribution to the $`\mathrm{\Delta }m_{B_s}`$ parameter is more enhanced. In the standard model, the ratio $`\mathrm{\Delta }m_{B_s}/\mathrm{\Delta }m_{B_d}`$ is approximately $`|[V_{\mathrm{CKM}}]_{ts}/[V_{\mathrm{CKM}}]_{td}|^2`$. With this relation, we found that the SUSY contribution to $`\mathrm{\Delta }m_{B_s}`$ can be as large as $`10\%`$ of the standard model contribution when $`y_{\nu _3}`$ is maximally large.
This fact also suggests that a sizable correction may be possible to the phase in the $`B_s`$-$`\overline{B}_s`$ mixing matrix, because $`\mathrm{\Delta }_{23}^{(R)}`$ has a phase equal to $`e^{i(\varphi _2^{(L)}-\varphi _3^{(L)})}`$. Very small CP asymmetry is expected in decay modes like $`B_s\rightarrow \varphi \eta `$, $`D_s\overline{D}_s`$, and $`\varphi \eta ^{\prime }`$ in the standard model, and the new source of CP violation may affect the standard model predictions. Notice that, in the model without the right-handed neutrinos, $`\mathrm{\Delta }_{23}^{(L)}`$ is approximately proportional to $`[V_{\mathrm{CKM}}^{*}]_{ts}[V_{\mathrm{CKM}}]_{tb}`$, while $`\mathrm{\Delta }_{23}^{(R)}`$ is negligibly small. Then, the SUSY contribution to the matrix element $`\langle B_s|\mathcal{H}_{\mathrm{eff}}|\overline{B}_s\rangle `$ has almost the same phase as the standard model contribution, and hence the phase in the $`B_s`$-$`\overline{B}_s`$ mixing matrix is not significantly affected.

In summary, in the SUSY SU(5) model with the right-handed neutrinos, sizable CP and flavor violations are possible in the $`\stackrel{~}{d}_R`$ sector, which is not the case in models without the right-handed neutrinos. Importantly, with such effects, the SUSY contribution to the $`ϵ_K`$ parameter can be as large as the experimental value ($`ϵ_K^{(\mathrm{exp})}\simeq 2.3\times 10^{-3}`$) if the right-handed neutrino mass $`M_{\nu _R}`$ is high enough. In this paper, we only considered the $`\mathrm{\Delta }S=2`$ and $`\mathrm{\Delta }B=2`$ processes. In the future, however, other CP violation information will become available, in particular in $`\mathrm{\Delta }S=1`$ and $`\mathrm{\Delta }B=1`$ processes. Given the fact that there can be extra sources of CP violation, it would be desirable to measure as many CP violating observables as possible and to test the standard model predictions.

Note added: After the completion of the main part of this work, the author learned of a paper by S. Baek, T. Goto, Y. Okada and K. Okumura which discusses implications of the right-handed neutrinos for $`B`$ physics in the SUSY GUT, in particular for the $`\mathrm{\Delta }m_{B_s}`$ parameter. In their paper, however, effects of the GUT phases $`\varphi ^{(Q)}`$ and $`\varphi ^{(L)}`$ are not discussed.

Acknowledgments: The author would like to thank J. Hisano for useful discussions. This work was supported by the National Science Foundation under grant PHY-9513835, and also by the Marvin L. Goldberger Membership.
# The masses of the mesons and baryons Part II. The standing wave model

## 1 Introduction

The masses of the so-called stable elementary particles are the best-known and most characteristic property of the particles, but have not yet been explained. In a preceding article we have shown that it follows from the well-known decays of the elementary particles that the spectrum of the particles splits into a $`\gamma `$-branch and a neutrino branch. From the masses of the particles it follows that the masses of the $`\gamma `$-branch particles are, within 3%, integer multiples of the mass of the $`\pi ^0`$ meson, and that the masses of the $`\nu `$-branch are integer multiples of the mass of the $`\pi ^\pm `$ mesons times a factor $`0.86\pm 0.02`$. For the masses and decays see Tables I and II of . The integer multiple rule suggests that the particles are the solution of a wave equation.

In a previous article , we have tried to explain the masses of the elementary particles by monochromatic eigenfrequencies of standing waves in an elastic cube. These eigenfrequencies depend on the value of Young’s modulus of the material of the cube. In a subsequent paper , we have explained the value of Young’s modulus of the material of the elementary particles with the help of Born’s classical model of cubic crystals, assuming that the crystal is held together by the weak nuclear force acting between the lattice points. The explanation of the particles by monochromatic eigenfrequencies does not seem to be tenable, because a monochromatic frequency does not accommodate the multitude of frequencies created in a high energy collision of $`10^{-23}`$ sec duration. We will now study whether the so-called stable elementary particles of the $`\gamma `$-branch can instead be described by the frequency spectrum of standing waves in a lattice, which can automatically accommodate the Fourier frequency spectrum of an extremely short collision.

The investigation of the consequences of lattices for particle theory was initiated by Wilson , who studied a fermion lattice. This study has developed over time into lattice QCD. The results of such endeavors have culminated in the paper of Weingarten and his colleagues , which required elaborate, year-long numerical calculations. They determined the masses of seven particles ($`K^{*}`$, $`p`$, $`\varphi `$, $`\mathrm{\Delta }`$, $`\mathrm{\Sigma }`$, $`\mathrm{\Xi }`$, $`\mathrm{\Omega }`$), with an uncertainty of up to $`\pm 8`$%, agreeing with the observed particles within a few percent, up to 6%. Our theory covers all particles of the $`\gamma `$-branch, namely the $`\pi ^0`$, $`\eta `$, $`\eta ^{\prime }`$, $`\mathrm{\Lambda }`$, $`\mathrm{\Sigma }^0`$, $`\mathrm{\Xi }^0`$, $`\mathrm{\Omega }^{-}`$, $`\mathrm{\Lambda }_c^+`$, $`\mathrm{\Sigma }_c^0`$, $`\mathrm{\Xi }_c^0`$, and $`\mathrm{\Omega }_c^0`$ particles, and agrees with the measured masses within at most 3.3%. The masses of the $`\nu `$-branch can be explained separately by standing waves in a neutrino lattice.

## 2 The frequency spectrum of the oscillations of a cubic lattice

It will be necessary to outline the most elementary aspects of the theory of lattice oscillations, since we will, in the following, investigate whether the masses of the elementary particles can be explained with the help of the frequency spectra of standing waves in a cubic lattice. The classic paper describing lattice oscillations is from Born and v. Karman , henceforth referred to as B&K.
They first looked at the oscillations of a one-dimensional chain of points with mass $`m`$, separated by a constant distance $`a`$. B&K assume that the forces exerted on each point of the chain originate only from the two neighboring points. These forces are opposed to and proportional to the displacements, as with elastic springs. The equation of motion is in this case

$$m\ddot{u}_n=\alpha \left(u_{n+1}-u_n\right)-\alpha \left(u_n-u_{n-1}\right).$$ (1)

The displacements of the mass points from their equilibrium position are given by $`u_n`$. The dots signify, as usual, differentiation with respect to time, $`\alpha `$ is a constant characterizing the force between the lattice points, and $`n`$ is an integer number. In order to solve (1) B&K set

$$u_n=A\mathrm{e}^{i(\omega t+n\varphi )},$$ (2)

which is obviously a temporally and spatially periodic solution. $`n`$ is an integer, with $`n\leq N`$, where $`N`$ is the number of points in the chain. At $`2n\varphi =\pi `$ there are nodes, where for all times $`t`$ the displacements are zero, as with standing waves. If a displacement is repeated after $`n`$ points we have $`na=\lambda `$, where $`\lambda `$ is the wavelength, and it must be that $`n\varphi =2\pi `$, according to (2). It follows that $`\lambda =2\pi a/\varphi `$. Inserting (2) into (1) one obtains a continuous frequency spectrum given by

$$\nu =2\sqrt{\frac{\alpha }{m}}\mathrm{sin}(\varphi /2).$$ (3)

B&K point out that there is not only a continuum of frequencies, but also a maximal frequency, which is reached when $`\varphi =\pi `$, i.e. at the minimum of the possible wavelengths, $`\lambda =2a`$.

B&K then discuss the three-dimensional lattice, with lattice constant $`a`$ and masses $`m`$, the monatomic case, i.e. when all lattice points are of the same mass. They reduce the complexities of the problem by considering only the 18 points nearest to any point. These are 6 points at distance $`a`$, and 12 points at distance $`a\sqrt{2}`$. B&K assume that the forces between the points are linear functions of the small displacements, that the symmetry of the lattice is maintained, and that the equations of motion transform into the equations of motion of continuum mechanics for $`a\rightarrow 0`$. We cannot reproduce the lengthy equations of motion of the three-dimensional lattice here. In the three-dimensional case we deal with the forces caused by the 6 points at the distance $`a`$, which are characterized by the constant $`\alpha `$ in the case of central forces. There are also the forces which originate from the 12 points at distance $`a\sqrt{2}`$, characterized by the constant $`\gamma `$, which is important later on. We investigate the propagation of plane waves in a three-dimensional monatomic lattice with the ansatz

$$u_{\ell ,m}=u_0\mathrm{e}^{i(\omega t+\ell \varphi _1+m\varphi _2)}$$ (4)

and a similar ansatz for $`v_{\ell ,m}`$, with $`\ell ,m`$ being integer numbers $`\leq N`$, where $`N`$ is the number of lattice points along a side of the cube. We also consider higher order solutions, with $`\ell `$ and $`m`$ replaced by $`i_1\ell `$ and $`i_2m`$, where $`i_1,i_2`$ are integer numbers. The boundary conditions are periodic. The number of normal modes must be equal to the number of particles in the lattice.
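The dispersion relation (3) is easily checked numerically: diagonalizing the equations of motion (1) for a periodic chain must reproduce it. A minimal sketch (our illustration, in Python):

```python
import numpy as np

# Eigenfrequencies of the periodic chain, Eq. (1), vs. Eq. (3).
N, alpha, m = 12, 1.0, 1.0
D = np.zeros((N, N))
for n in range(N):
    D[n, n] = 2.0 * alpha / m
    D[n, (n + 1) % N] = D[n, (n - 1) % N] = -alpha / m   # periodic chain

omega = np.sqrt(np.clip(np.linalg.eigvalsh(D), 0.0, None))
phi = 2.0 * np.pi * np.arange(N) / N                     # allowed phases
pred = 2.0 * np.sqrt(alpha / m) * np.abs(np.sin(phi / 2.0))   # Eq. (3)
print(np.allclose(np.sort(pred), np.sort(omega), atol=1e-6))  # True
```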
B&K arrive, in the case of two-dimensional waves, at a secular equation for the frequencies,

$$\left|\begin{array}{cc}A(\varphi _1,\varphi _2)-m\nu ^2& B(\varphi _2,\varphi _1)\\ B(\varphi _1,\varphi _2)& A(\varphi _2,\varphi _1)-m\nu ^2\end{array}\right|=0.$$ (5)

The formulas for $`A(\varphi _1,\varphi _2)`$ and $`B(\varphi _1,\varphi _2)`$ are given in equation (17) of B&K. The theory of lattice oscillations has been pursued in particular by Blackman ; a summary of his and other studies is given in . Comprehensive reviews of the results of linear studies of lattice dynamics have been written by Born and Huang , and by Maradudin et al. .

## 3 The masses of the particles of the $`\gamma `$-branch

We will now assume, as seems to be quite natural, that the particles of the $`\gamma `$-branch consist of the same particles into which they decay, i.e., of photons. We base this assumption on the fact that photons are the principal mode of decay of the $`\gamma `$-branch particles; the characteristic example is $`\pi ^0\rightarrow \gamma \gamma `$ (98.8%). The composition of the particles of the $`\gamma `$-branch suggested here offers a direct route from the formation of a $`\gamma `$-branch particle, through its lifetime, to its decay products. Particles that are made of photons are necessarily neutral, as the majority of the particles of the $`\gamma `$-branch are.

We also base our assumption that the particles of the $`\gamma `$-branch are made of photons on the circumstances of the formation of the $`\gamma `$-branch particles. The simplest and most straightforward creation of a $`\gamma `$-branch particle is the reaction $`\gamma +p\rightarrow \pi ^0+p+\gamma ^{\prime }`$. A photon impinges on a proton and creates a $`\pi ^0`$ meson. In a timespan of order $`10^{-23}`$ sec the pulse of the incoming electromagnetic wave is, according to Fourier analysis, converted into a continuum of electromagnetic waves with frequencies ranging from $`10^{23}`$ $`\mathrm{sec}^{-1}`$ to $`\nu \rightarrow \mathrm{\infty }`$. The wave packet so created decays, according to experience, after $`8.4\times 10^{-17}`$ sec, into two electromagnetic waves or $`\gamma `$-rays. It seems to be very unlikely that Fourier analysis does not hold for the case of an electromagnetic wave impinging on a proton. The question then arises: what happens to the electromagnetic waves in the timespan of $`10^{-16}`$ seconds between the creation of the wave packet and its decay into two $`\gamma `$-rays? We will investigate whether the electromagnetic waves cannot simply continue to exist for the $`10^{-16}`$ seconds until the wave packet decays.

There must be a mechanism which holds the wave packet of the newly created particle together, or else it will disperse. We assume that the very many photons in the new particle are held together in a cubic lattice. If a particle consists of photons, alternate photons must have opposite spin, otherwise the spin of the particle could not be zero. Ordinary cubic lattices, such as the NaCl lattice, are held together by attractive forces between particles of opposite polarity. We assume that the photon lattice is held together by weak attractive forces between photons of opposite polarity. Electrodynamics does not predict the existence of such a force between two photons. However, electroweak theory says that $`e^2\sim g^2`$, and we will now assume that there is a corresponding force in the photon lattice. It is not unprecedented that photons have been considered to be building blocks of the elementary particles.
Schwinger once studied an exact one-dimensional quantum electrodynamical model in which the photon acquired a mass $`\sim e^2`$. We will now investigate the standing waves of a cubic photon lattice. We assume that the lattice is held together by a weak force acting from one lattice point to the next. We assume that the range of this force is $`10^{-16}`$ cm, because the range of the weak nuclear force is of order of $`10^{-16}`$ cm, according to . Likewise, we assume that the sidelength of the lattice is about $`10^{-13}`$ cm, which follows from the size of the nucleon as given by . There are then $`10^9`$ lattice points. Because it is the simplest case, we assume that a central force acts between lattice points of different polarity. We cannot consider spin, isospin, strangeness or charm. The frequency equation for the two-dimensional oscillations of an isotropic monatomic cubic lattice with central forces is

$$\nu ^2=\frac{2\alpha }{4\pi ^2M}\left\{2-\mathrm{cos}\varphi _1\mathrm{cos}\varphi _2+\mathrm{sin}\varphi _1\mathrm{sin}\varphi _2-\mathrm{cos}\varphi _1\right\}.$$ (6)

In the isotropic case, i.e. when $`\gamma /\alpha =0.5`$, Eq. (6) follows directly from the equation of motion for the displacements in a monatomic lattice, which are given e.g. by Blackman . The minus sign in front of $`\mathrm{cos}\varphi _1`$ means that the waves are longitudinal. All frequencies that solve (6) come with either a plus or a minus sign, which is, as we will see, important. The frequency distribution following from (6) is shown in Fig. 1. There are some easily verifiable frequencies. For example, at $`\varphi _1,\varphi _2=0,0`$ it is $`\nu =0`$; at $`\varphi _1,\varphi _2=\pi /2,\pi /2`$ it is $`\nu =\nu _0\sqrt{6}`$; at $`\varphi _1,\varphi _2=\pi /2,-\pi /2`$ we have $`\nu =\nu _0\sqrt{2}`$. Furthermore, at $`\varphi _1,\varphi _2=\pi ,0`$ we have $`\nu =\nu _0\sqrt{8}`$, and for all values of $`\varphi _1`$ it is $`\nu =2\nu _0`$ at $`\varphi _2=\pi `$ and $`\varphi _2=-\pi `$, with $`\nu _0=\sqrt{\alpha /4\pi ^2M}`$, or, as we will see, $`\nu _0=c/2\pi a`$.

The limitation of the group velocity in the photon lattice has now to be considered. The formula for the group velocity is

$$c_g=\frac{d\omega }{dk}=a\sqrt{\frac{\alpha }{M}}\frac{df(\varphi _1,\varphi _2)}{d\varphi }.$$ (7)

The group velocity in the photon lattice has to be equal to the velocity of light $`c`$ throughout the entire frequency spectrum, because photons move with the velocity of light. In order to learn how this requirement affects the frequency distribution we have to know the value of $`\sqrt{\alpha /M}`$ in a photon lattice. But we do not have information about what either $`\alpha `$ or $`M`$ might be in this case. We assume in the following that $`a\sqrt{\alpha /M}=c`$, which means, since $`a=10^{-16}`$ cm, that $`\sqrt{\alpha /M}=3\times 10^{26}`$ $`\mathrm{sec}^{-1}`$, or that the corresponding period is $`\tau =\frac{1}{3}\times 10^{-26}`$ sec, which is the time it takes a wave to travel with the velocity of light over one lattice distance. With $`a\sqrt{\alpha /M}=c`$, the equation for the group velocity is

$$c_g=c\frac{df}{d\varphi }.$$ (8)

For a photon lattice that means, since $`c_g`$ must then always be equal to $`c`$, that $`df/d\varphi =1`$. This requirement determines the form of the frequency distribution, regardless of the order of the mode of oscillation.
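The special values quoted after Eq. (6) provide a direct consistency check on the signs in the dispersion relation; the following minimal script (our illustration, with $`\nu _0`$ set to 1) evaluates $`\nu ^2/\nu _0^2`$ at those points.

```python
import numpy as np

# nu^2/nu_0^2 from Eq. (6), with nu_0^2 = alpha/(4 pi^2 M).
def nu2(p1, p2):
    return 2.0 * (2.0 - np.cos(p1) * np.cos(p2)
                  + np.sin(p1) * np.sin(p2) - np.cos(p1))

checks = {(0.0, 0.0): 0.0,                 # nu = 0
          (np.pi / 2,  np.pi / 2): 6.0,    # nu = nu_0 sqrt(6)
          (np.pi / 2, -np.pi / 2): 2.0,    # nu = nu_0 sqrt(2)
          (np.pi, 0.0): 8.0,               # nu = nu_0 sqrt(8)
          (1.234, np.pi): 4.0}             # nu = 2 nu_0 for any phi_1
for (p1, p2), expected in checks.items():
    assert abs(nu2(p1, p2) - expected) < 1e-12
print("all special values of Eq. (6) reproduced")
```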
The frequencies of the corrected spectrum must increase from the origin $`\varphi _1,\varphi _2=0,0`$ with slope 1 until a maximum is reached, from where the frequency must decrease with slope 1 to $`\nu =0`$. The frequency distribution corrected for (8) is shown in Fig. 2. The corrected frequency distributions of higher modes are of the same type, except for the area they cover. The second mode ($`i_1,i_2=2`$) covers 4 times the area of the basic mode, because $`2\varphi `$ ranges from 0 to $`2\pi `$ (for $`\varphi >0`$), whereas the basic mode ranges from 0 to $`\pi `$. Consequently the energy $`E`$ (Eq. 9) contained in all oscillations of the second mode is four times larger than the energy of the basic mode, because the energy contained in the lattice oscillations must be proportional to the sum of all frequencies. Adding, by superposition, different numbers of basic modes or second modes to the second mode will give exact integer multiples of the energy of the basic mode.

Now we understand the integer multiple rule of the particles of the $`\gamma `$-branch. There is, in the framework of this theory, on account of Eq. (8), no alternative but integer multiples of the basic mode for the energy contained in the oscillations of the different modes or for superpositions of different modes. In other words, the masses of the different particles are integer multiples of the mass of the $`\pi ^0`$ meson, assuming that there is no spin, isospin, strangeness or charm. We remember that the measured masses in Table I of , which incorporate different spins, isospins, strangeness, and charm, spell out the integer multiple rule within 3% accuracy. It is worth noting that there is no free parameter if one takes the ratio of the energies contained in the frequency distributions of the different modes, because the factor $`\sqrt{\alpha /M}`$ in Eq. (6) cancels. This means, in particular, that the ratios of the frequency distributions, or the mass ratios, are independent of the mass of the photons at the lattice points, as well as of the magnitude of the force between the lattice points.

Let us summarize our findings concerning the particles of the $`\gamma `$-branch. The $`\pi ^0`$ meson is the basic mode of the photon lattice oscillations. The $`\eta `$ meson corresponds to the first higher mode ($`i_1,i_2=2`$), as is suggested by $`m(\eta )\approx 4m(\pi ^0)`$. $`\eta ^{\prime }`$ is a superposition of three basic modes on the first higher mode. The $`\mathrm{\Lambda }`$ particle corresponds to the superposition of two higher modes ($`i_1,i_2=2`$), as is suggested by $`m(\mathrm{\Lambda })\approx 2m(\eta )`$. This superposition apparently results in the creation of spin $`\frac{1}{2}`$. The two modes would then have to be coupled. The $`\mathrm{\Sigma }^0`$ and $`\mathrm{\Xi }^0`$ baryons are superpositions of one or two basic modes on the $`\mathrm{\Lambda }`$ particle. The $`\mathrm{\Omega }^{-}`$ particle corresponds to the superposition of three coupled higher modes ($`i_1,i_2=2`$), as is suggested by $`m(\mathrm{\Omega }^{-})\approx 3m(\eta )`$. This procedure apparently causes spin $`\frac{3}{2}`$. The charmed $`\mathrm{\Lambda }_c^+`$ particle seems to be the first particle incorporating a (3.3) mode. $`\mathrm{\Sigma }_c^0`$ is apparently the superposition of a basic mode on $`\mathrm{\Lambda }_c^+`$, as is suggested by the decay of $`\mathrm{\Sigma }_c^0`$. The easiest explanation of $`\mathrm{\Xi }_c^0`$ is that it is the superposition of two coupled (3.3) modes.
The superposition of two modes of the same type is, as in the case of $`\mathrm{\Lambda }`$, accompanied by spin $`\frac{1}{2}`$. The $`\mathrm{\Omega }_c^0`$ baryon is apparently the superposition of two basic modes on the $`\mathrm{\Xi }_c^0`$ particle. All neutral particles of the $`\gamma `$-branch are thus accounted for; the agreement between the measured masses and the theoretical values is within 3%, see Table I of . We find it interesting that all $`\gamma `$-branch particles with coupled $`2(2.2)`$ modes, or the $`\mathrm{\Omega }^{-}`$ particle with the coupled $`3(2.2)`$ mode, have strangeness. But this rule does not hold in the presence of a (3.3) mode. All $`\gamma `$-branch particles with a (3.3) mode have charm. We have also found the $`\gamma `$-branch antiparticles, which follow from the negative frequencies that solve Eq. (6). Antiparticles have always been associated with negative energies. Following Dirac’s argument for electrons and positrons, we associate the masses following from the negative frequency distributions with antiparticles. We emphasize that the existence of antiparticles is an automatic consequence of our theory.

## 4 The mass of the $`\pi ^0`$ meson

So far we have studied the ratios of the masses of the particles. We will now determine the mass of the $`\pi ^0`$ meson in order to validate that the mass ratios connect with the actual masses of the particles. The energy of the $`\pi ^0`$ meson is $`E(m(\pi ^0))=134.976`$ MeV $`=2.1626\times 10^{-4}`$ erg. For the energy $`E`$ of all oscillations we use the equation

$$E=\frac{Nh\nu _0}{(2\pi )^2}\int \int _0^{2\pi }f(\varphi _1,\varphi _2)d\varphi _1d\varphi _2.$$ (9)

This equation originates from Born and v. Karman. $`N`$ is the number of lattice points. For $`f(\varphi _1,\varphi _2)`$ we use our Eq. (6). Using the frequency distribution shown in Fig. 2 it turns out that the numerical value of the double integral in (9) is 66.896 ($`\mathrm{radians}^2`$) for the corrected (1.1) state. With $`N=10^9`$ and $`\nu _0=3\times 10^{26}/2\pi `$ it follows from Eq. (9) that $`E(\mathrm{corr})(1.1)`$ is $`5.36\times 10^8`$ erg, that means $`2.48\times 10^{12}`$ times larger than $`E(m(\pi ^0))`$. In order to eliminate this discrepancy we use, instead of the simple form $`E=h\nu `$, the complete quantum mechanical energy of a linear oscillator as given by Planck,

$$E=\frac{h\nu }{\mathrm{e}^{h\nu /kT}-1}.$$ (10)

This equation was already used by B&K for the determination of the specific heat of cubic crystals or solids. Equation (10) calls into question the value of the temperature $`T`$ in the interior of a particle. We determine $`T`$ empirically with the formula for the internal energy of solids,

$$u=\frac{R\mathrm{\Theta }}{\mathrm{e}^{\mathrm{\Theta }/T}-1},$$ (11)

which is from Sommerfeld . In this equation it is now $`R=10^9k`$, where $`k`$ is Boltzmann’s constant, and $`\mathrm{\Theta }`$ is the characteristic temperature introduced by Debye for the explanation of the specific heat of solids. It is $`\mathrm{\Theta }=h\nu _m/k`$, where $`\nu _m`$ is a maximal frequency. In the case of the oscillations making up the $`\pi ^0`$ meson the maximal frequency is $`\nu _m=\pi \nu _0`$, see Figs. 1, 2; therefore $`\nu _m=1.5\times 10^{26}`$ $`\mathrm{sec}^{-1}`$, and we find that $`\mathrm{\Theta }=7.199\times 10^{15}`$ K. In order to determine $`T`$ we set the internal energy $`u`$ equal to $`m(\pi ^0)c^2`$. It then follows from (11) that $`\mathrm{\Theta }/T=29.16`$, or $`T=2.47\times 10^{14}`$ K.
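These numbers follow from a one-line inversion of Eq. (11); a short sketch (ours), using the constants quoted above:

```python
import numpy as np

# Solve u = R*Theta/(exp(Theta/T) - 1) = E(m(pi0)) for Theta/T, Eq. (11).
k_B = 1.3807e-16                  # erg/K, Boltzmann's constant
R = 1.0e9 * k_B                   # R = 10^9 k, as in the text
Theta = 7.199e15                  # K, from Theta = h*nu_m/k
u = 2.1626e-4                     # erg, rest energy of the pi0

x = np.log(R * Theta / u + 1.0)   # x = Theta/T
print(x, Theta / x)               # ~29.16 and T ~ 2.47e14 K
```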
That means that Planck’s formula (10) introduces a factor $`1/(\mathrm{e}^{\mathrm{\Theta }/T}-1)\approx 1/e^{29.16}=1/(4.613\times 10^{12})`$ into Eq. (9). In other words, if we determine the temperature $`T`$ of the particle empirically through equation (11), then we arrive from (9) at a mass of the $`\pi ^0`$ meson of $`1.16\times 10^{-4}`$ erg. The difference between the exact $`E(m(\pi ^0))=2.16\times 10^{-4}`$ erg and our calculated $`E`$(corr)(1.1), which describes the $`\pi ^0`$ meson, is well within the uncertainty of the number of the lattice points and the lattice distance we have used. If we take verbatim the value of the radius of the nucleon given in , $`r=0.8\times 10^{-13}`$ cm, and calculate the number of lattice points in a sphere of that radius, then there should be $`2.14\times 10^9`$ lattice points in the particle. Since $`E`$ is directly proportional to the number of lattice points, it follows that then $`E`$(corr)(1.1) = 1.15$`E(m(\pi ^0))`$. The energy in the mass of the $`\pi ^0`$ meson and the energy in the corresponding lattice oscillations agree very well, considering the uncertainties of the parameters involved. It can be shown that the factor exp($`\mathrm{\Theta }/T`$) remains constant for the higher modes.

To summarize: we find that the energy in the $`\pi ^0`$ meson and in the other particles of the $`\gamma `$-branch is correctly given by the energy of the standing waves, if the energy of the oscillations is determined by Planck’s formula for the energy of a linear oscillator. The $`\pi ^0`$ meson is like an adiabatic, cubic black body filled with standing electromagnetic waves. We know from Bose’s work that Planck’s formula applies to a photon gas as well.

## 5 Conclusions

Let us summarize what we have assumed and what we have learned from this study. In short, for each neutral meson and each neutral baryon of the $`\gamma `$-branch we have found a simple mode of standing waves in a cubic lattice, the ratio of whose energies agrees within 3% with the ratio of the energies of the masses of the particles.
In order to arrive at this result, we have first assumed that the neutral mesons and baryons consist of the same particles into which they decay, which seems to be a quite natural assumption. For the explanation of the mesons and baryons of the $`\gamma `$-branch we use only photons, nothing else. We assume the existence of a weak force which holds the photon lattice of the $`\gamma `$-branch together. Then we apply the results of the classical study of Born and v. Karman, and subsequent studies, about lattice oscillations to the particle lattice. We determine the frequency distributions and the energy contained in plane standing waves in a cubic lattice. The $`\gamma `$-rays in the photon lattice must move with the velocity of light, and we impose the condition that the group velocity is equal to the velocity of light. From the frequency distributions of the standing waves in the lattice follow the ratios of the masses of the particles. The masses of the $`\gamma `$-branch, the $`\pi ^0`$, $`\eta `$, $`\eta ^{\prime }`$, $`\mathrm{\Lambda }`$, $`\mathrm{\Sigma }^0`$, $`\mathrm{\Xi }^0`$, $`\mathrm{\Omega }^{-}`$, $`\mathrm{\Lambda }_c^+`$, $`\mathrm{\Sigma }_c^0`$, $`\mathrm{\Xi }_c^0`$, and $`\mathrm{\Omega }_c^0`$ particles, are found to be integer multiples of the mass of the $`\pi ^0`$ meson, in agreement with what the data on the particle masses strongly suggest. The integer multiple rule is a consequence of the standing wave structure of the particles.

It is important to note that in our theory the ratios of the masses of the $`\gamma `$-branch particles to the mass of the $`\pi ^0`$ meson depend neither on the sidelength of the lattice and the distance between the lattice points, nor on the strength of the force between the lattice points, nor on the mass of the lattice points. The mass ratios are determined only by the spectra of the frequencies of the standing waves in the lattice. Since the equation determining the frequency of the standing waves is quadratic, it follows automatically that for each positive frequency there is also a negative frequency of the same absolute value; that means that for each particle there exists also an antiparticle.

We have then determined the mass of the $`\pi ^0`$ meson which follows from our theory. This requires consideration of the number $`N^3`$ of the lattice points and of the value of $`\nu _0=c/2\pi a`$. Assuming that the energy of the oscillations is determined by Planck’s formula for the energy of a linear oscillator, we arrive at a mass of the $`\pi ^0`$ meson which differs from the experimentally determined $`m(\pi ^0)`$ by 15%, which is well within the uncertainties of $`N^3`$ and $`a`$. The $`\pi ^0`$ meson is like a cubic black body filled with plane, standing electromagnetic waves, whose wavelengths are integer multiples of the lattice constant $`a`$. A rather conservative explanation of the $`\pi ^0`$ meson, and of the $`\gamma `$-branch particles. It is worth noting that in the $`\gamma `$-branch of our model there is a continuous line leading from the creation of a particle out of photons or electromagnetic waves, through the lifetime of the particle as standing electromagnetic waves, to the decay products, which are electromagnetic waves as well.

The concept of a nuclear cubic lattice provides more than just the masses of the particles of the $`\gamma `$-branch. The masses of the $`\nu `$-branch follow from the frequencies in a diatomic neutrino lattice made of electron and muon neutrinos, with $`m(\nu _e)\approx 5`$ meV/$`c^2`$ and $`m(\nu _\mu )\approx 50`$ meV/$`c^2`$. Furthermore, as discussed in , the theory of cubic lattices permits the determination of the potential of the force that holds the lattice together. Born and Landé have shown that the potential must have an attractive part over longer distances and a repulsive part over shorter distances, otherwise the lattice would not be stable. Following the steps of Born and Landé we have found in equation (11) of that the attractive and repulsive terms of the potential in a nuclear lattice differ at the lattice distance by only $`10^{-12}`$, if we replace the electric interaction constant $`e^2`$ in a crystal by $`g^2`$ from the weak nuclear force. Following a paper of Born and Stern we have discussed in also the force which acts (in vacuum) between two cubic lattices. The attractive forces between two cubic lattices are the sum of all unsaturated weak forces at the sides of the lattices. Since there are about $`10^6`$ lattice points on a side of the nuclear cubic lattice considered here, the attractive force of a side of the lattice for one side of another lattice is $`10^6`$ times as large as the weak force acting between two lattice points. The empirical relation between the strength of the strong and weak forces is given by the ratio of the coupling constants, which is $`\alpha _s/\alpha _w\approx 10^6`$.
Our standing wave model not only accounts for the masses of the mesons and baryons and the antiparticles of the $`\gamma `$-branch, but also provides access to an explanation of the weak force which holds the nuclear lattice together, and of the strong force which emanates from the surface of the particles. The strong and the weak forces are unified in this model.

Acknowledgements. We are, in particular, grateful to Professor I. Prigogine for his support. We thank M. Fink and L. Frommhold for discussions. ELK is grateful to Y. Tassoulas for the calculation of numerous eigenvalues of the elastic oscillations of cubes.

REFERENCES

1. L. Koschmieder, preceding article.
2. L. Koschmieder, Nuovo Cimento A99, 555 (1988).
3. L. Koschmieder, Nuovo Cimento A101, 1017 (1989).
4. K. Wilson, Phys. Rev. D10, 2445 (1974).
5. D. Weingarten, Scient. American 274, 116 (1996).
6. M. Born and Th. v. Karman, Phys. Zeitschr. 13, 297 (1912).
7. M. Blackman, Proc. Roy. Soc. A148, 365; 384 (1935).
8. M. Blackman, in Handbuch der Physik VII/1 (1955), Sec. 12.
9. M. Born and K. Huang, Dynamical Theory of Crystal Lattices (Oxford, 1954).
10. A. Maradudin, E. Montroll, G. Weiss and I. Ipatova, Theory of Lattice Dynamics in the Harmonic Approximation, 2nd edition (Academic Press, 1971).
11. J. Schwinger, Phys. Rev. 128, 2425 (1962).
12. D. Perkins, Introduction to High-Energy Physics (Addison-Wesley, 1982), p. 128.
13. H. Frauenfelder and E. Henley, Subatomic Physics (Prentice-Hall, 1974), p. 128.
14. A. Sommerfeld, Vorlesungen über Theoretische Physik (1952), Bd. V, 56.
15. P. Debye, Ann. d. Phys. 39, 789 (1912).
16. S. Bose, Zeitschr. f. Phys. 26, 178 (1924).
17. M. Born and A. Landé, Verh. Dtsch. Phys. Ges. 20, 210 (1918).
18. M. Born and O. Stern, Sitzungsber. Preuss. Akad. Wiss. 33, 901 (1919).
# Infrared Behavior of the Gluon Propagator on a Large Volume Lattice

## I Introduction

There has been considerable interest in the infrared behavior of the gluon propagator, as a probe into the mechanism of confinement and as input for other calculations. See, for example, Ref. for a recent survey. The infrared part of any lattice calculation may be affected by the finite volume of the lattice. Larger volumes mean either more lattice points (with increased computational cost) or coarser lattices (with corresponding discretization errors). The desire for larger physical volumes thus provides strong motivation for using improved actions. Improved actions have been shown to reduce discretization effects , although some concerns have been expressed that coarse lattices may miss important instanton physics . In this study, no change is seen in the infrared gluon propagator, even with a lattice spacing as coarse as 0.35 fm. We find the gluon propagator to be less singular than $`\frac{1}{q^2}`$ in the infrared. Our results suggest that the gluon propagator is infrared finite, although more data is needed in the far infrared to be conclusive. This behavior is similar to that observed in three-dimensional SU(2) studies .

## II $`𝒪(a^2)`$ Improvement

The $`𝒪(a^2)`$ tadpole-improved action is defined as

$$S_G=\frac{5\beta }{3}\sum _{\text{pl}}\mathrm{Re}\mathrm{Tr}\left(1-U_{\text{pl}}(x)\right)-\frac{\beta }{12u_0^2}\sum _{\text{rect}}\mathrm{Re}\mathrm{Tr}\left(1-U_{\text{rect}}(x)\right),$$ (1)

where the operators $`U_{\text{pl}}(x)`$ and $`U_{\text{rect}}(x)`$ are defined as

$$U_{\text{pl}}(x)=U_\mu (x)U_\nu (x+\widehat{\mu })U_\mu ^{\dagger }(x+\widehat{\nu })U_\nu ^{\dagger }(x),$$ (2)

and

$$U_{\text{rect}}(x)=U_\mu (x)U_\nu (x+\widehat{\mu })U_\nu (x+\widehat{\nu }+\widehat{\mu })U_\mu ^{\dagger }(x+2\widehat{\nu })U_\nu ^{\dagger }(x+\widehat{\nu })U_\nu ^{\dagger }(x)+U_\mu (x)U_\mu (x+\widehat{\mu })U_\nu (x+2\widehat{\mu })U_\mu ^{\dagger }(x+\widehat{\mu }+\widehat{\nu })U_\mu ^{\dagger }(x+\widehat{\nu })U_\nu ^{\dagger }(x).$$ (3)–(4)

The link product $`U_{\text{rect}}(x)`$ denotes the rectangular $`1\times 2`$ and $`2\times 1`$ plaquettes. For the tadpole (mean-field) improvement parameter we use the plaquette measure

$$u_0=\left(\frac{1}{3}\mathrm{Re}\mathrm{Tr}\langle U_{\text{pl}}\rangle \right)^{\frac{1}{4}}.$$ (5)

Eq. (1) reproduces the continuum action as $`a\rightarrow 0`$, provided that $`\beta `$ takes the standard value of $`6/g^2`$. Note that our $`\beta =6/g^2`$ differs from that used in ; multiplication of our $`\beta `$ in Eq. (1) by a factor of $`5/3`$ reproduces their definition. $`𝒪(g^2a^2)`$ corrections to this action are estimated to be of the order of two to three percent .

Gauge configurations are generated using the Cabibbo-Marinari pseudo-heat-bath algorithm with three diagonal $`SU(2)`$ subgroups. The mean link, $`u_0`$, is averaged every 10 sweeps and updated during thermalization. Representative gauge fields are selected after 5000 thermalization sweeps. Gauge fixing on the lattice is achieved by maximizing a functional, the extremum of which implies the gauge fixing condition. The usual Landau gauge fixing functional is

$$\mathcal{F}_1^G[\{U\}]=\sum _{\mu ,x}\frac{1}{2u_0}\text{Tr}\left\{U_\mu ^G(x)+U_\mu ^G(x)^{\dagger }\right\},$$ (6)

where

$$U_\mu ^G(x)=G(x)U_\mu (x)G(x+\widehat{\mu })^{\dagger },$$ (7)

and

$$G(x)=\mathrm{exp}\left\{i\sum _a\omega ^a(x)T^a\right\}.$$ (8)

A maximum of Eq. (6) implies that $`\sum _\mu \partial _\mu A_\mu =0`$ up to $`𝒪(a^2)`$.
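As a toy illustration of Eqs. (2), (5) and (6) — our sketch, not the production code used for the results below — one can build a small ensemble of SU(3) links near the identity, measure the mean link $`u_0`$, and watch the Landau functional fall under a random gauge transformation:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 2                                         # toy 2^4 lattice

def su3_near_identity(eps):
    # exact SU(3) element exp(i*eps*H) with H Hermitian and traceless
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    H = (A + A.conj().T) / 2.0
    H -= (np.trace(H).real / 3.0) * np.eye(3)
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * eps * w)) @ V.conj().T

sites = list(np.ndindex(L, L, L, L))
U = {(x, mu): su3_near_identity(0.2) for x in sites for mu in range(4)}

def shift(x, mu):
    y = list(x); y[mu] = (y[mu] + 1) % L
    return tuple(y)

def plaq(x, mu, nu):                          # Eq. (2)
    return U[x, mu] @ U[shift(x, mu), nu] \
         @ U[shift(x, nu), mu].conj().T @ U[x, nu].conj().T

u0 = np.mean([np.trace(plaq(x, mu, nu)).real / 3.0
              for x in sites for mu in range(4)
              for nu in range(mu + 1, 4)]) ** 0.25   # Eq. (5)

def functional(G):                            # Eq. (6): Tr{U^G + U^G+}/(2 u0)
    return sum(np.trace(G[x] @ U[x, mu] @ G[shift(x, mu)].conj().T).real
               for x in sites for mu in range(4)) / u0

G_triv = {x: np.eye(3) for x in sites}
G_rand = {x: su3_near_identity(2.0) for x in sites}
print("u0 =", round(u0, 4))
print(functional(G_triv), ">", functional(G_rand))   # random G typically lower
```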
To ensure that gauge dependent quantities are also $`𝒪(a^2)`$ improved, we implement the analogous $`𝒪(a^2)`$ improved gauge fixing functional

$$\mathcal{F}_{\text{Imp}}^G=\sum _{\mu ,x}\frac{1}{2u_0}\text{Tr}\left\{\frac{4}{3}U_\mu ^G(x)-\frac{1}{12u_0}U_\mu ^G(x)U_\mu ^G(x+\widehat{\mu })+\text{h.c.}\right\}$$ (9)

as described in . We employ a Conjugate Gradient, Fourier Accelerated gauge fixing algorithm optimally designed for parallel machines .

## III The Landau Gauge Gluon Propagator

The gauge links $`U_\mu (x)`$ are expressed in terms of the continuum gluon fields as

$$U_\mu (x)=𝒫e^{ig_0\int _x^{x+\widehat{\mu }}A_\mu (z)dz},$$ (10)

where $`𝒫`$ denotes path ordering. From this, the dimensionless lattice gluon field $`A_\mu (x)`$ may be obtained via

$$A_\mu (x+\widehat{\mu }/2)=\frac{1}{2ig_0}\left(U_\mu (x)-U_\mu ^{\dagger }(x)\right)-\frac{1}{6ig_0}\mathrm{Tr}\left(U_\mu (x)-U_\mu ^{\dagger }(x)\right),$$ (11)

accurate to $`𝒪(a^2)`$. $`𝒪(a^2)`$ improved gluon field operators have also been investigated. While the infrared behavior is unaffected by the improvement, the ultraviolet behavior suffers due to the extended nature of an improved operator. These results will be discussed in detail elsewhere .

We calculate the gluon propagator in coordinate space,

$$D_{\mu \nu }^{ab}(x,y)\equiv \langle A_\mu ^a(x)A_\nu ^b(y)\rangle ,$$ (12)

using (11). To improve statistics, we use translational invariance and calculate

$$D_{\mu \nu }^{ab}(y)=\frac{1}{V}\sum _x\langle A_\mu ^a(x)A_\nu ^b(x+y)\rangle .$$ (13)

In this report we focus on the scalar part of the propagator,

$$D(y)=\frac{1}{N_d-1}\sum _\mu \frac{1}{N_c^2-1}\sum _aD_{\mu \mu }^{aa}(y),$$ (14)

where $`N_d=4`$ and $`N_c=3`$ are the number of dimensions and colors. This is then Fourier transformed into momentum space,

$$D(q)=\sum _ye^{i\widehat{q}y}D(y),$$ (15)

where the available momentum values, $`\widehat{q}`$, are given by

$$\widehat{q}_\mu =\frac{2\pi n_\mu }{aL_\mu },\qquad n_\mu \in \left(-\frac{L_\mu }{2},\frac{L_\mu }{2}\right],$$ (16)

and $`L_\mu `$ is the length of the box in the $`\mu `$ direction. In the continuum, the scalar function $`D(q^2)`$ is related to the Landau gauge gluon propagator via

$$D_{\mu \nu }^{ab}(q)=\left(\delta _{\mu \nu }-\frac{q_\mu q_\nu }{q^2}\right)\delta ^{ab}D(q^2).$$ (17)

The bare, dimensionless lattice gluon propagator, $`D(qa)`$, is related to the renormalized continuum propagator, $`D_R(q;\mu )`$, via

$$a^2D(qa)=Z_3(\mu ,a)D_R(q;\mu ),$$ (18)

where the renormalization constant $`Z_3(\mu ,a)`$ is determined by imposing a renormalization condition at some chosen renormalization scale $`\mu `$, e.g.,

$$D_R(q)|_{q^2=\mu ^2}=\frac{1}{\mu ^2}.$$ (19)

This means that there is an undetermined multiplicative renormalization factor, $`Z_3(a)`$, on each of our propagators. Since our purpose is to compare our two improved, coarse lattices with the unimproved, finer one, it is sufficient to consider only their relative renormalizations. We have slightly rescaled the improved propagators so as to provide a reasonable match with the old one. The relevant quantity is

$$Z_3(0.10)/Z_3(0.35)=1.02.$$ (20)

## IV Analysis of Lattice Artifacts

To emphasize the nonperturbative behavior of the gluon propagator, we divide the propagator by the tree level result of lattice perturbation theory. Hence Figs. 1, 2 and 3 are plotted with $`q^2D(q^2)`$ on the $`y`$-axis, which is expected to approach a constant, up to logarithmic corrections, as $`q^2\rightarrow \mathrm{\infty }`$.
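Eqs. (13)–(15) amount to a cyclic autocorrelation, which is conveniently done with fast Fourier transforms. The sketch below (ours) uses Gaussian noise in place of the gauge-fixed fields of Eq. (11), so its output is flat; real gauge-fixed fields would show the infrared behavior of Figs. 1–3.

```python
import numpy as np

# Scalar propagator via Eqs. (13)-(15); shapes follow a 16^3 x 32 lattice.
rng = np.random.default_rng(1)
dims, N_d, N_adj = (16, 16, 16, 32), 4, 8      # N_adj = N_c^2 - 1
A = rng.normal(size=(N_d, N_adj) + dims)       # stand-in for A_mu^a(x)

V = np.prod(dims)
D_y = np.zeros(dims)
for mu in range(N_d):
    for a in range(N_adj):
        F = np.fft.fftn(A[mu, a])
        # sum_x A(x) A(x+y) is the cyclic autocorrelation, cf. Eq. (13)
        D_y += np.fft.ifftn(np.conj(F) * F).real / V
D_y /= (N_d - 1) * N_adj                       # Eq. (14)
D_q = np.fft.fftn(D_y).real                    # Eq. (15)
# momenta are assigned to the FFT grid points via Eq. (16)
print(D_q[0, 0, 0, 0], D_q[1, 0, 0, 0])
```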
Here the momentum variable is

$$q_\mu \equiv \frac{2}{a}\mathrm{sin}\frac{\widehat{q}_\mu a}{2}.$$ (21)

To reduce ultraviolet noise resulting from the lattice discretization, the available momenta are cut halfway into the Brillouin zone, that is,

$$q_{\text{max}}=\frac{\pi }{2a}.$$ (22)

All figures have a cylinder cut imposed upon them, i.e. all momenta must lie within a cylinder of radius two spatial momentum units centered about the lattice diagonal. The propagators are plotted in physical units, which we obtain from the string tension with $`\sqrt{\sigma }=440`$ MeV.

Fig. 1 is reproduced from Ref. , where the standard Wilson action is used. The propagator is calculated on a $`32^3\times 64`$ lattice at $`\beta =6.0`$, which corresponds to a lattice spacing of 0.1 fm. This propagator produces the correct asymptotic behavior, and a detailed analysis shows that the anisotropic finite volume errors are small. However, it was impossible to rule out isotropic finite volume artifacts. We use the improved action described above to calculate the gluon propagator on a small ($`10^3\times 20`$), coarse ($`a=0.35`$ fm) lattice, which is shown in Fig. 2. Despite the coarse lattice spacing, we see that it reproduces the infrared behavior of Fig. 1. Finally, we calculate the propagator on a $`16^3\times 32`$ lattice at the same $`\beta `$, again giving $`a=0.35`$ fm. This corresponds to a very large physical volume of $`5.6^3\times 11.2\text{ fm}^4`$. Fig. 3 illustrates the results. These results largely agree with the previous two calculations of the propagator. The behavior of the gluon propagator is not changed by changing the volume.

In Fig. 4 we have superimposed the gluon propagators for all three lattices. Here we plot $`D(q^2)`$ on the $`y`$-axis to allow an alternative examination of the most infrared momenta. The points at $`\sim `$350 MeV are very robust with respect to volume. Only the very lowest momentum points show signs of finite volume effects. With more volumes it should be possible to extrapolate to the infinite volume limit. We see that the propagator, at the very lowest momentum points, decreases as the volume increases. This strongly suggests an infrared finite propagator.

## V Conclusions

The gluon propagator has been calculated on a coarse lattice with an $`𝒪(a^2)`$ improved action, in $`𝒪(a^2)`$ improved Landau gauge. The infrared behavior of this propagator is consistent with that of a previous study on a finer lattice with an unimproved action, but comparable volume. The propagator was then calculated on another improved lattice with the same spacing, but larger volume. The increase in volume left the propagator largely unchanged. In particular, it has been shown that the turnover observed in is not a finite volume effect. With more lattices it may be possible to extrapolate to infinite volume, but from this study we can only make tentative conclusions. We have ruled out the $`q^{-4}`$ behavior popular in Dyson-Schwinger studies, and any infrared singularity appears to be unlikely. An infrared finite propagator is most plausible. The gluon propagator would need to drop rapidly for momenta below $`\sim `$350 MeV in order to vanish as suggested by Zwanziger and others. Even larger volume lattices will be needed to study this possibility. The possible effects of lattice Gribov copies remain a very interesting question and we plan to study this in the near future.
## Acknowledgements This work was supported by the Australian Research Council and by grants of supercomputer time on the CM-5 made available through the South Australian Centre for Parallel Computing. The authors also wish to thank J-I. Skullerud for his useful comments.
# Phase diagram of a hard-sphere system in a quenched random potential: a numerical study ## I Introduction The equilibrium phase diagram of a classical system of interacting particles in a quenched, random, pinning potential is an important subject on which much effort is currently being spent . There are several experimentally studied systems, such as vortices in the mixed phase of high-$`T_c`$ superconductors , fluids confined in porous media , magnetic bubble arrays , and Wigner crystals , which provide physical realizations of a collection of interacting classical objects in the presence of an external, time-independent, random potential. In the absence of such a potential, systems of this kind are expected to crystallize at sufficiently low temperatures. Several years ago, Larkin showed that the presence of an arbitrarily small amount of random pinning disorder destroys long-range translational order in all physical dimensions $`d<4`$. However, recent theoretical studies suggest that weak disorder distorts the crystalline state only slightly, leading to a phase with perfect topological order and logarithmic fluctuations of the relevant displacement field. This phase, with quasi-long-range translational order and power-law Bragg peaks in the structure factor, is called a “Bragg glass” . The transition point between a Bragg glass and the high-temperature liquid phase is likely to be shifted with increasing disorder, but the transition is believed to remain first order as long as the disorder is weak. A question of obvious interest is how this transition temperature and the nature of the transition depend on the strength of the random potential. As the relative strength of the disorder is increased, so that the weak-disorder situation described above no longer applies, the Bragg glass phase is expected to undergo a transition to a topologically disordered amorphous phase with only short-range translational correlations. It is not yet clear whether this phase is thermodynamically distinct from the high-temperature liquid. An interesting possibility is that it is analogous to the glassy phase obtained by supercooling a liquid below the structural glass transition temperature in the absence of external quenched disorder . If this is so, then the phase diagram of such systems would contain three phases: a Bragg glass phase obtained at low temperature and weak disorder, an amorphous (without quasi-long-range translational order) glassy phase at low temperatures and strong disorder, and a weakly inhomogeneous (because of the presence of the external random potential) liquid phase at high temperatures. The glassy phase would be a thermodynamically stable one in these systems. This is different from the situation in the absence of external disorder where the crystalline state is the true equilibrium state near the structural glass transition and both the supercooled liquid and the glass are metastable. In other words, the presence of external disorder may lead to the possibility of occurrence of a true, thermodynamically stable, glassy phase. The phase diagram in the temperature ($`T`$) – magnetic field ($`H`$) plane of layered, highly anisotropic, type-II superconductors such as $`\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_8`$ in a magnetic field perpendicular to the layers is a credible candidate to exhibit these three phases.
For a wide range of values of $`H`$, the flux lines in these materials may be regarded as columns of interacting “pancake” vortices residing on the layers, and the properties of the mixed phase may be described in terms of the classical statistical mechanics of these point-like objects. In these compounds, at low enough fields, a flux-lattice melting transition separates a nearly crystalline state of the flux lines from a disordered “vortex liquid” state. The first-order character of this transition has been carefully documented . When the magnetic field is increased, the transition becomes continuous , and the nearly crystalline state appears to be replaced by an amorphous state called “vortex glass” which is endowed with glassy properties such as non-ohmic current-voltage characteristics . It is generally assumed that the vortex glass phase owes its existence to the presence of point-like pinning disorder. Observation of Bragg peaks in neutron scattering experiments confirms that the phase at low $`H`$ and $`T`$ is a Bragg glass. As the effective strength of the disorder is increased, either indirectly by increasing $`H`$ (which is believed to increase the effective strength of the disorder), or directly by increasing the amount of defects in the sample , the Bragg glass phase changes over to the vortex glass. The latter is separated from the liquid by a continuous transition . This phase diagram, thus, suggests that the first-order liquid-to-crystal transition in a three-dimensional (3d) system of point-like objects may be driven by the pinning disorder into a continuous liquid-to-glass transition. The formation of a glassy phase at strong disorder was investigated recently in an analytic study of the phase diagram of a system of hard spheres in a random pinning potential of arbitrary strength. This work used a combination of two “mean-field”-type approaches based on the “replicated liquid formalism” : the replica method was used for treating the effects of quenched disorder, and the hypernetted chain approximation for calculating the equilibrium correlation functions in the liquid in the presence of the pinning potential. These correlation functions were then the input to a replicated density functional of the Ramakrishnan-Yussouff (RY) form from which the location of the freezing transition of the liquid into a nearly crystalline (Bragg glass) phase was obtained. The possibility of a liquid-to-glass transition was investigated using the phenomenological approach of Mézard and Parisi . The resulting phase diagram in the density – disorder plane (the density, rather than the temperature, is the appropriate control parameter for a hard-sphere system) shows three thermodynamic phases: a nearly crystalline Bragg glass, an amorphous glassy phase, and a low-density liquid. It is consistent with the expectation (from earlier work and the Lindemann criterion ) that the density at which the Bragg glass to liquid transition occurs should move to higher values as the strength of the disorder is increased. The first-order crystallization transition is replaced by a continuous glass transition as the disorder strength is increased above a threshold value. This phase diagram is thus qualitatively similar to that proposed for some layered type-II superconductors if, as noted above, the density is replaced by the temperature $`T`$ and the disorder strength by the magnetic field $`H`$. In the present paper, we report the results of a numerical investigation of the phase diagram of the same system, i.e.
a hard-sphere fluid in the presence of a random pinning potential with short-range spatial correlations. Our work involves the use of direct numerical minimization to study the effects of the presence of an external random potential on the minima of a discretized version of the RY free energy functional for the hard-sphere system. It is known that in the absence of external disorder, this model free energy functional exhibits, at sufficiently high densities, a large number of “glassy” local minima characterized by inhomogeneous but aperiodic density distributions. In addition, a global minimum corresponding to the crystalline solid is also found at high densities if the sample size and the discretization scale are commensurate with the crystal structure. If they are incommensurate, only the glassy minima are present. We have carried out extensive numerical investigations of the resulting free-energy landscape in the absence of disorder. In the present study, and with the physical situations described above in mind, we develop similar numerical methods to find the location and structure of the local minima of the same model free energy with the addition of a time-independent, random, one-body potential. Using these numerical methods we investigate how the uniform liquid, crystalline solid and glassy minima of the free energy in the absence of the random potential evolve as the strength of the random potential is increased from zero. We also examine the dependence of the free energy of these minima, and of the density structure (as given by the two-point density correlation function) of the system at these minima, on the strength of the disorder. In this picture, a transition from one phase to another is signalled by the crossing of the free energies of the corresponding minima of the free energy. By monitoring where these crossings occur as the density and the strength of the disorder are varied, we are able to map out the phase diagram in the density – disorder plane. This phase diagram is qualitatively very similar to the one obtained in the analytic work. For weak disorder we find, in the commensurate case as described above where a crystalline minimum exists, a first-order liquid-to-crystal transition that moves to higher density as the strength of the disorder is increased. In the metastable “supercompressed” regime (i.e. at a density higher than the value at which equilibrium crystallization takes place for the commensurate case), we find in all cases a liquid-to-glass transition. The density at which this transition occurs decreases (very slowly for the largest systems studied, which are incommensurate, and more rapidly for the smaller, commensurate systems) as the strength of the disorder is increased. The nature of this glass transition depends on the strength of the disorder: it is first order when the disorder is weak, but it changes to second order beyond a certain critical value of the disorder strength. For the commensurate case, the crystallization line crosses the glass transition line at or very near the same critical value of the disorder strength, so that the system at stronger disorder then undergoes a liquid-to-glass transition (instead of the liquid-to-crystal transition found for weak disorder) as the density is increased from a low value.
The continuous nature of the glass transition in the large disorder regime is in contrast with the first order transition from the liquid to a crystalline or glassy state (depending on the commensurability) at small values of the disorder strength. Thus, this work provides support for the prediction that the first-order liquid-to-crystal (Bragg glass) transition should change over to a continuous liquid-to-glass transition as the strength of the pinning disorder is increased above a critical value. The rest of the paper is organized as follows. In section II, we define the model studied here and outline the numerical procedure used in this study. The numerical results obtained for the different transition lines in the density–disorder plane are described in detail in section III. Section IV contains a summary of our main results and a few concluding remarks. ## II Methods ### A The Free Energy functional As discussed in the Introduction, our starting point is the free energy as a functional of the time-averaged local density $`\rho (𝐫)`$ at each point $`𝐫`$. We write this free energy functional in the form: $$F[\rho ]=F_{RY}[\rho ]+F_s[\rho ]$$ (1) where the first term on the right-hand side is the RY free energy functional for hard spheres in the absence of disorder, and the second is the contribution arising from the presence of a quenched random potential. Thus we have: $`\beta F_{RY}[\rho ]`$ $`=`$ $`{\displaystyle \int 𝑑𝐫\{\rho (𝐫)\mathrm{ln}(\rho (𝐫)/\rho _0)-\delta \rho (𝐫)\}}`$ (2) $`-`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \int 𝑑𝐫𝑑𝐫^{}C(|𝐫𝐫^{}|)\delta \rho (𝐫)\delta \rho (𝐫^{})}.`$ (3) Here, we have defined $`\delta \rho (𝐫)\equiv \rho (𝐫)-\rho _0`$ as the deviation of $`\rho (𝐫)`$ from $`\rho _0`$, the density of the uniform liquid, and taken the zero of the free energy at its uniform liquid value. In Eq.(3), $`\beta =1/(k_BT)`$, $`T`$ is the temperature and the function $`C(r)`$ is the direct pair correlation function of the uniform liquid at density $`\rho _0`$, which can be analytically expressed in terms of the usual dimensionless density for hard spheres of diameter $`\sigma `$, $`n^{}\equiv \rho _0\sigma ^3`$, by making use of the Percus-Yevick approximation . This approximation is sufficiently accurate in the density ranges ($`n^{}\lesssim 1.0`$) considered in this paper. We write also: $$\beta F_s[\rho ]=\int 𝑑𝐫\delta \rho (𝐫)V_s(𝐫)$$ (4) where $`V_s(𝐫)`$ is an external potential (in dimensionless form) representing the random, quenched disorder. We will assume that $`V_s`$ has zero mean and short-range Gaussian correlations as detailed below. In order to carry out numerical work, we discretize our system. We introduce for this purpose a simple cubic computational mesh of size $`L^3`$ with periodic boundary conditions. On the sites of this mesh, we define density variables $`\rho _i\equiv \rho (𝐫_i)h^3`$, where $`\rho (𝐫_i)`$ is the density at site $`i`$ and $`h`$ the spacing of the computational mesh. It is known from previous work that in the absence of any random potential, this discretized system crystallizes at sufficiently high densities if the quantities $`h`$ and $`L`$ are commensurate with a fcc structure with appropriate lattice spacing, whereas no crystalline state exists when the computational mesh is incommensurate with a fcc structure. Both commensurate and incommensurate systems exhibit many glassy (inhomogeneous but aperiodic) minima of the free energy at densities higher than the value at which crystallization occurs in commensurate samples.
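As a concrete aside, the Percus-Yevick input mentioned above has a closed analytic form (the standard Wertheim–Thiele solution for hard spheres), which the following short sketch evaluates; the function name and vectorized interface are our own choices, and the discretized matrix $`C_{ij}`$ used below is then simply $`C(|𝐫_i-𝐫_j|)`$ sampled on the mesh.

```python
import numpy as np

def c_py(r, n_star):
    # Percus-Yevick direct correlation function of the hard-sphere liquid
    # (Wertheim-Thiele analytic solution).  r is in units of the hard-sphere
    # diameter sigma; n_star = rho_0 sigma^3 is the dimensionless density.
    eta = np.pi * n_star / 6.0                        # packing fraction
    lam1 = (1.0 + 2.0 * eta) ** 2 / (1.0 - eta) ** 4
    lam2 = -(1.0 + 0.5 * eta) ** 2 / (1.0 - eta) ** 4
    r = np.asarray(r, dtype=float)
    c = -lam1 - 6.0 * eta * lam2 * r - 0.5 * eta * lam1 * r ** 3
    return np.where(r < 1.0, c, 0.0)                  # c(r) = 0 for r > sigma

print(c_py([0.0, 0.5, 0.99, 1.5], 0.8))  # example: n* = 0.8
```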
To model the random potential $`V_s(𝐫)`$, we introduce random variables $`\{V_i\}`$ defined at the sites of the computational mesh. These variables are uncorrelated with one another, and distributed according to a Gaussian probability distribution with zero mean and standard deviation $`s`$. Thus, $`s`$ represents the dimensionless strength of the disorder. In terms of these quantities, the dimensionless free energy of our discretized system has the form $`\beta F`$ $`=`$ $`{\displaystyle \underset{i}{\sum }}\{\rho _i\mathrm{ln}(\rho _i/\rho _{\mathrm{}})-(\rho _i-\rho _{\mathrm{}})\}`$ (5) $`-`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{i}{\sum }}{\displaystyle \underset{j}{\sum }}C_{ij}(\rho _i-\rho _{\mathrm{}})(\rho _j-\rho _{\mathrm{}})+{\displaystyle \underset{i}{\sum }}V_i(\rho _i-\rho _{\mathrm{}}),`$ (6) where the sums are over all the sites of the computational mesh, $`\rho _{\mathrm{}}\equiv \rho _0h^3`$, and $`C_{ij}`$ is the discretized form of the direct pair correlation function $`C(r)`$ of the uniform liquid. Our objective is to study the phase diagram of this system in the $`(n^{},s)`$ plane, in which in principle crystalline, liquid, and glassy phases may be found. The thermodynamics of hard spheres in the clean limit is determined by the dimensionless density $`n^{}`$ only. Our rescaling of the potential $`V_s`$ by $`\beta `$ (see Eq.(4)) ensures that $`s`$ is now the only additional relevant variable. We wish to locate various transition lines in this $`(n^{},s)`$ plane, that is, we wish to determine which phase (crystal, glass or liquid) is the thermodynamically stable one at different points in this plane. We also wish to know when and how a certain phase becomes metastable or unstable as we move around in the $`(n^{},s)`$ plane. In our mean-field description, different phases are represented by different minima of the free energy. If several local minima of the free energy are simultaneously present, then the minimum with the lowest free energy represents the thermodynamically stable phase and the other local minima correspond to metastable phases. A crossing of the free energies of two different minima signals a first-order phase transition. The point where a minimum becomes locally unstable (i.e. changes from a true minimum to a saddle point or disappears altogether) corresponds to a mean-field spinodal point representing the limit of metastability of the corresponding phase. A merging of the transition point with the spinodal points of the two phases signals a continuous phase transition in this description. Thus, a study of how the minima of the free energy of Eq.(6) evolve as $`n^{}`$ and $`s`$ are changed is sufficient for mapping out the mean-field phase diagram in the $`(n^{},s)`$ plane. We locate the minima of the free energy by using a numerical procedure generalized from that originally developed for the clean case . This procedure works by changing the local density variables $`\{\rho _i\}`$ in a way that ensures that these changes always decrease the free energy. Given an initial configuration of the variables $`\{\rho _i\}`$, this procedure finds, by constantly moving downhill on the free-energy surface in the multidimensional configuration space spanned by the $`L^3`$ variables $`\{\rho _i\}`$, the local minimum whose basin of attraction contains the initial state. Thus, different local minima of the free energy can be located by using this minimization procedure for different, appropriately chosen, initial configurations.
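To make the construction concrete, here is a minimal sketch of Eq. (6) and its gradient, with the double sum over $`C_{ij}`$ evaluated as a periodic convolution by FFT. The actual computations used a purpose-built downhill routine; the generic bound-constrained quasi-Newton minimizer from SciPy below is only a stand-in, `c_py` is the Percus-Yevick sketch given earlier, and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def make_kernel(L, h_over_sigma, n_star):
    # FFT of C_ij = C(|r_i - r_j|) on the periodic L^3 mesh, built from the
    # Percus-Yevick c_py of the previous sketch.
    ax = np.minimum(np.arange(L), L - np.arange(L)) * h_over_sigma
    r = np.sqrt(ax[:, None, None]**2 + ax[None, :, None]**2 + ax[None, None, :]**2)
    return np.fft.fftn(c_py(r, n_star))

def beta_F_and_grad(rho_flat, shape, rho_ell, Ck, V):
    # Eq. (6) and its gradient; the double sum over C_ij is a periodic
    # convolution, evaluated with FFTs.
    rho = rho_flat.reshape(shape)
    d = rho - rho_ell
    conv = np.fft.ifftn(np.fft.fftn(d) * Ck).real      # (C d)_i
    F = np.sum(rho * np.log(rho / rho_ell) - d) - 0.5 * np.sum(d * conv) + np.sum(V * d)
    grad = np.log(rho / rho_ell) - conv + V
    return F, grad.ravel()

# Illustrative run on a small commensurate-style mesh.
L, h_over_sigma, n_star, s = 12, 0.25, 0.70, 0.5
rho_ell = n_star * h_over_sigma ** 3
rng = np.random.default_rng(0)
V = s * rng.standard_normal((L, L, L))                 # V_i = s r_i
Ck = make_kernel(L, h_over_sigma, n_star)
rho0 = rho_ell * np.exp(0.01 * rng.standard_normal(L ** 3))
res = minimize(beta_F_and_grad, rho0, args=((L, L, L), rho_ell, Ck, V),
               jac=True, method="L-BFGS-B", bounds=[(1e-12, None)] * L ** 3)
print(res.fun)  # beta F at the local minimum reached from rho0
```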
As noted earlier, there are in our system three different kinds of free-energy minima: liquid, crystalline and glassy. In the clean limit ($`s=0`$), it is easy to distinguish among them: the liquid minimum has uniform density ($`\rho _i=\rho _{\mathrm{}}`$ for all $`i`$), the crystalline minimum has a periodic distribution of the density variables ($`\rho _i`$ is close to unity at mesh points corresponding to the sites of a fcc lattice, and close to zero at all other mesh points), and a glassy minimum exhibits a strongly inhomogeneous nonperiodic density distribution (some of the $`\rho _i`$’s are close to unity and the others are close to zero). This symmetry-based distinction among minima of different kinds becomes less clear when the external random potential is turned on: for $`s\ne 0`$, the density distribution in the liquid phase is not completely homogeneous, and the crystalline state is not strictly periodic. We use here, therefore, a procedure of “adiabatic continuation” to distinguish among the liquid, crystalline and glassy minima in the presence of the disorder. This procedure works as follows: We start with a minimum of a particular kind obtained at $`s=0`$ for a given value of $`n^{}`$. There is no difficulty in generating the liquid (and if appropriate the crystalline) configuration for the pure system. Glassy states at $`s=0`$ are easily obtainable also, in the right density ranges, by the procedures described in Ref. . Indeed, we have used in many cases the same density configurations obtained there, which were available as computer files. After thus choosing the initial state, we generate a set of uncorrelated random numbers $`r_i,i=1,\mathrm{\dots },L^3`$, distributed according to a Gaussian with unit variance. A “realization” of the random potential $`\{V_i\}`$ is obtained by multiplying these random numbers by the strength parameter $`s`$. The initial $`s=0`$ minimum is then “followed” to finite $`s`$ by increasing $`s`$ in small steps $`\delta s`$ . After each step increase, the minimization routine is run, to find the nearest local minimum. Thus, in the first step of this process, the initial configuration is that of the minimum obtained at $`s=0`$ and the values of $`V_i`$ are set at $`(\delta s)r_i`$. The resulting density configuration obtained from the minimization routine is then used as the initial configuration for the next step, with the values of $`V_i`$ incremented to $`2(\delta s)r_i`$. During this process, the random variables $`\{r_i\}`$ are held fixed – only the strength parameter $`s`$ is increased in steps of $`\delta s`$. By iterating this procedure, minima of different kinds obtained at $`s=0`$ for a certain $`n^{}`$ are “followed” at constant density to the desired value of $`s`$. We use the terms “liquid”, “crystalline” and “glassy” to denote the continued $`s\ne 0`$ minima obtained from a $`s=0`$ minimum of the corresponding kind by using this continuation procedure without crossing transition lines. We will see below that even at large $`s`$, there are distinguishable differences in the structure of the different kinds of minima. Once a minimum of the desired kind is obtained at a particular point in the $`(n^{},s)`$ plane, the free energy at the minimum reached, as well as the entire density configuration of the system at the minimum, are obtained and can be analyzed.
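Schematically, the continuation loop just described looks as follows; `minimize_fn` is a stand-in for any local descent routine (e.g. a wrapper around the previous sketch), and the step size is illustrative.

```python
import numpy as np

def continue_in_s(rho, r_field, minimize_fn, ds=0.02, s_max=1.0):
    # Adiabatic continuation of a minimum from s = 0 to s_max: the random
    # numbers {r_i} stay fixed, only the overall strength s is stepped, and
    # each step starts from the minimum found at the previous one.
    # minimize_fn(rho0, V) is a stand-in for the descent routine.
    for s in np.arange(ds, s_max + 1e-12, ds):
        V = s * r_field              # V_i = s r_i
        rho = minimize_fn(rho, V)    # warm start from the previous minimum
    return rho
```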
The translational correlations at a minimum can be quantified by the two-point correlation function $`g(r)`$ of the density variables $`\{\rho _i\}`$, defined as $$g(r)=\underset{i>j}{\sum }\rho _i\rho _jf_{ij}(r)/[\overline{\rho }^2\underset{i>j}{\sum }f_{ij}(r)],$$ (7) where the distance $`r`$ is measured in units of $`\sigma `$, $`\overline{\rho }\equiv \underset{i}{\sum }\rho _i/L^3`$ is the average value of the $`\rho _i`$ variables at the minimum under consideration, and $`f_{ij}(r)=1`$ if the separation between mesh points $`i`$ and $`j`$ lies between $`r`$ and $`r+\mathrm{\Delta }r`$ ($`\mathrm{\Delta }r`$ is a suitably chosen bin size), and $`f_{ij}(r)=0`$ otherwise. This function represents the spatial correlation of the time-averaged local density, and is distinct from the equal-time, two-point density correlation function which is often called $`g(r)`$ in the literature. We also calculate $`\rho _{max}`$, the maximum value of the $`\rho _i`$ variables at the minimum, which gives additional information about the inhomogeneity when contrasted with $`\overline{\rho }`$ or its rescaled equivalent $`\rho _{av}\equiv \overline{\rho }(\sigma /h)^3`$ at the minimum. In addition to examining the transitions by looking at discontinuities in $`F`$, $`g(r)`$ and the density configurations, we also directly check on the stability of the corresponding minima. The stability of a local minimum requires that all the eigenvalues of the Hessian matrix $`𝐌`$ whose elements are given by $$M_{ij}\equiv \frac{\partial ^2(\beta F)}{\partial \rho _i\partial \rho _j}=\frac{1}{\rho _i}\delta _{ij}-C_{ij}$$ (8) evaluated at the minimum must be positive. This matrix is difficult to handle numerically if the minimum under consideration is strongly inhomogeneous, with some of the $`\rho _i`$’s very close to zero. In such cases, the $`1/\rho _i`$ in the first term on the right-hand side of Eq.(8) causes numerical difficulties. To avoid this problem, we consider instead the closely related matrix $`𝐌^{\prime }`$ whose elements are given by $$M_{ij}^{\prime }\equiv \sqrt{\rho _i}M_{ij}\sqrt{\rho _j}=\delta _{ij}-C_{ij}\sqrt{\rho _i\rho _j},$$ (9) evaluated at the minimum under consideration. It is easy to show that an instability of the minimum corresponds to the vanishing of the smallest eigenvalue $`\lambda `$ of this matrix. In our numerical work, we calculate the value of $`\lambda `$ in order to check whether the minimum under study becomes unstable as $`n^{}`$ or $`s`$ is varied. In our computations we have included the density range from $`n^{}=0.65`$ to $`n^{}=0.95`$, and values of $`s`$ from zero to about two. These are sufficient to encompass the phenomena that we wish to study. We have used three lattice sizes, $`L=12,15`$, and 25. For the last two we have used an incommensurate ratio $`h/\sigma =1/4.6`$, whereas for the smallest lattice we have taken the commensurate value $`h/\sigma =0.25`$. ## III Results ### A General considerations: Phase diagram At a general point in the $`(n^{},s)`$ plane, the system may have a number of local minima, one of which is the absolute minimum while the others are metastable. Possible minima are the one corresponding to the liquid (the one with uniform density at $`s=0`$ and its continuation to finite $`s`$), the crystalline minimum, by which we similarly mean the one with a periodic structure at $`s=0`$, and its continuation as described in the preceding section, and a large number of glassy minima. As the values of $`n^{}`$ or $`s`$ change, free energy minima may in general appear and disappear, and the free energy values of those that remain change. There will therefore be a number of instabilities and transitions, which are the main subject of our study.
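Before turning to the results, the two diagnostics just defined — $`g(r)`$ of Eq. (7) and the smallest eigenvalue $`\lambda `$ of Eq. (9) — can be sketched as follows. Both exploit the periodicity of the mesh: the pair sum in Eq. (7) becomes an FFT autocorrelation binned by distance, and $`𝐌^{\prime }`$ is applied matrix-free so that only its smallest eigenvalue is computed; `Ck` denotes the FFT of the kernel $`C_{ij}`$, as in the free-energy sketch above.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def g_of_r(rho, h_over_sigma, dr=0.05):
    # Eq. (7) via the FFT autocorrelation of the density variables over the
    # periodic mesh, binned by pair separation (in units of sigma).
    L = rho.shape[0]
    corr = np.fft.ifftn(np.abs(np.fft.fftn(rho)) ** 2).real  # sum_x rho_x rho_{x+y}
    ax = np.minimum(np.arange(L), L - np.arange(L)) * h_over_sigma
    r = np.sqrt(ax[:, None, None]**2 + ax[None, :, None]**2 + ax[None, None, :]**2)
    nbins = int(r.max() / dr) + 1
    idx = (r.ravel() / dr).astype(int)
    num = np.bincount(idx, weights=corr.ravel(), minlength=nbins)
    cnt = np.bincount(idx, minlength=nbins)
    g = num / np.maximum(cnt, 1) / (rho.mean() ** 2 * L ** 3)
    return dr * (np.arange(nbins) + 0.5), g       # bin centers, g(r)

def smallest_eigenvalue(rho, Ck):
    # lambda = smallest eigenvalue of M'_ij = delta_ij - C_ij sqrt(rho_i rho_j)
    # (Eq. (9)), with M' applied matrix-free through the FFT kernel Ck.
    shape, n = rho.shape, rho.size
    sq = np.sqrt(rho)
    def mv(v):
        w = sq * v.reshape(shape)
        conv = np.fft.ifftn(np.fft.fftn(w) * Ck).real
        return (v.reshape(shape) - sq * conv).ravel()
    op = LinearOperator((n, n), matvec=mv, dtype=float)
    return eigsh(op, k=1, which="SA")[0][0]
```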
Consider first the previously studied case of the disorder-free system (the $`s=0`$ line). There, only the uniform liquid minimum is present at low densities. As $`n^{}`$ increases, a crystalline minimum appears if the computational mesh is commensurate. When $`n^{}`$ is further increased, a density is reached at which the crystal becomes thermodynamically stable, that is, its free energy becomes lower than that of the liquid state. We will denote this density as $`n_D^{}`$. Regardless of commensurability, many glassy minima appear as the density is further increased. We denote by $`n_C^{}`$ the density at which the first glassy minimum makes its appearance. Alternatively, one may consider the evolution of the glassy minima as $`n^{}`$ is decreased from a large initial value, and define $`n_C^{}`$ as the density at which the last remaining glassy minimum becomes locally unstable and disappears: the free energy of this last remaining glassy minimum crosses that of the liquid at a density $`n_B^{}`$ which is somewhat higher than $`n_C^{}`$. This density corresponds to a liquid to glass transition. In the commensurate case, the density $`n_C^{}`$ is above the crystallization density $`n_D^{}`$, and the free energy of the crystalline minimum is lower than that of the glassy minima. Thus, the glass transition in the pure system occurs in a “supercompressed” regime where the crystalline state is the thermodynamically stable one. When we include the effects of the disorder ($`s>0`$), we find yet another density, $`n_A^{}`$, at which the liquid minimum becomes locally unstable (i.e. ceases to exist as a local minimum of the free energy). For weak disorder, the value of $`n_A^{}`$ is large (substantially higher than $`n_B^{}`$) so that the four densities $`n_A^{}`$, $`n_B^{}`$, $`n_C^{}`$, and $`n_D^{}`$ are in decreasing order. Thus, we have four (three in the incommensurate case where the crystalline state is absent) functions $`n_X^{}(s)`$ with $`X=A,B,C,D`$, representing precisely the four transitions or instabilities defined above. We denote the corresponding lines in the $`(n^{},s)`$ plane as the $`A,B,C,D`$ lines. The determination of the location of these lines is one of the main results of our work. These results will be discussed below, but to fix ideas and to make this discussion easier to follow, we show in Fig. 1 these four lines for the $`L=12`$ commensurate case.

FIG. 1.: The overall phase diagram of the hard sphere system in the density ($`n^{}`$) – disorder ($`s`$) plane, obtained for the $`L=12`$ commensurate sample. The meaning of the line labels is explained in the text. The results shown are averages over 5 realizations of the disorder.

There, the general structure of the phase diagram, including the general shape of the four lines $`n_X^{}(s)`$, can be seen. Similarly, we show in Fig. 2 the three lines $`n_X^{}(s),X=A,B,C`$ (from top to bottom) found in the incommensurate, $`L=25`$ system. The similarities and differences between the commensurate and incommensurate cases are discussed below. The lines in the phase diagram for the incommensurate $`L=15`$ case are within error bars the same as those shown in Fig. 2, so that the differences between Figs. 1 and 2 must be attributed to different commensurability rather than to different sample size.

FIG. 2.: The overall phase diagram for the incommensurate case at size $`L=25`$ as explained in the text. The diamonds represent the $`A`$ line, the crosses the $`B`$ line and the dashed line is the $`C`$ line. Sample error bars have been indicated. They reflect sample-to-sample variations for six to twelve (the number increases with $`s`$) realizations of the disorder.

There are certain trends that can be easily discerned
when one follows a free energy minimum as $`s`$ is increased at constant $`n^{}`$. If one starts from the uniform liquid minimum at $`s=0`$ and a relatively small value of $`n^{}`$, the free energy value at the minimum (initially zero according to our convention) decreases steadily with increasing $`s`$. For the case in Fig. 2 at $`n^{}=0.66`$, for example, $`\beta F`$ is close to $`-180`$ at $`s=1.8`$. The density distribution becomes progressively less uniform, with $`\rho _{max}`$, which at $`s=0`$ equals the average value $`\overline{\rho }=\rho _{\mathrm{}}`$, rising by more than one order of magnitude as $`s`$ increases from zero to one. For a deep glassy state at a relatively large value of $`n^{}`$, the free energy is strongly negative even at $`s=0`$, and its value decreases further as $`s`$ increases. For example, at $`n^{}=0.78`$ the glassy minimum for $`L=25`$ with $`\beta F=-63`$ at $`s=0`$ can be continued to a minimum with $`\beta F`$ equal to $`-231`$ at $`s=1.8`$. The density distribution at a glassy minimum is considerably more inhomogeneous than that of the liquid minimum continued to the same value of $`s`$, and it is less sensitive to the value of $`s`$: the quenched disorder has less effect on a state that is inhomogeneous and disordered to begin with. These trends in the behavior of liquid and glassy minima as $`s`$ is increased from zero are clearly illustrated by examining the pair correlation function $`g(r)`$, defined in Eq.(7), at each minimum.

FIG. 3.: Liquid phase correlations. The pair correlation function $`g(r)`$ as defined in Eq. (7) plotted as a function of $`r`$, the distance in units of $`\sigma `$, for the liquid-like minimum at density $`n^{}=0.66`$. The curves shown, in order of increasing peak height at $`r=1`$, correspond to $`s=0.2,0.6,1.0,1.4,1.8`$. The system size is $`L=25`$.

FIG. 4.: The pair correlation function $`g(r)`$ for a glassy minimum. The curves shown correspond to the same values of $`L`$ and $`s`$ as in Fig. 3, but for a glassy minimum at $`n^{}=0.78`$ as discussed in the text.

In Fig. 3, we show $`g(r)`$ computed for the liquid minimum at size $`L=25`$ and density $`n^{}=0.66`$. The curves shown, in order of increasing value of the peak near $`r=1`$, correspond to increasing values of $`s=0.2,0.6,1.0,1.4,1.8`$. There is a clearly visible increase in structure, which becomes more evident as the value of $`s`$ increases beyond unity. However, this level of structure is still quantitatively different from that found for glassy minima at relatively high densities. This can be seen by comparing Fig. 3 with Fig. 4, where we plot $`g(r)`$ for a $`L=25`$ glassy minimum continued from $`s=0`$ to $`s=1.8`$ at $`n^{}=0.78`$, as mentioned in the preceding paragraph. We see that the $`s`$-dependence of the structure is now much weaker, and the heights of the peaks at $`r=0`$ and near $`r=1`$ are much larger than those in Fig. 3. These results can be compared to those found in the replica calculation . To make contact with those results, our $`g(r)`$ for the liquid-like minimum should be compared with the function $`g_0(r)`$ of the replica symmetric solution, and our $`g(r)`$ for a glassy minimum with the function $`g_1(r)`$ of the replica-symmetry-broken solution.
Although, due to differences in the modeling of the random potential and effects of discretization in the present study (some of these effects are discussed in section IV), a detailed, quantitative comparison of our results with those of Ref. is not possible, it is clear that the main features we have discussed are qualitatively similar. The crystalline minimum obtained for $`s=0`$ in commensurate systems at sufficiently high densities shows very little change in structure as it is followed to non-zero values of $`s`$. Any effects of weak pinning disorder on the crystalline order may be too subtle to show up at the system sizes and discretization scales used here in the commensurate case. ### B Instability of the liquid minimum We consider first the $`A`$ line, that is, the density at which the liquid minimum becomes locally unstable as $`n^{}`$ is increased from a low initial value, keeping $`s`$ fixed. This transition is detected at any desired value of $`s`$ in the following way. At a density previously determined to be well below the value of $`n_A^{}(s)`$ (this determination is easily performed by trial and error), one “follows” the $`s=0`$ liquid minimum, as previously explained, to the value of the disorder strength being studied. The density configuration at this minimum is the initial condition. Then, one proceeds to increase $`n^{}`$ by small intervals, thus moving up along a vertical line in the phase diagram. At every value of $`n^{}`$ that is reached, we run our minimization routine (using the configuration at the minimum obtained at the previous step as the starting point) to locate the nearest minimum. The density configuration at the minimum is analyzed and then used as the initial condition to study the next higher density. In the initial stages of this process, the system remains in the liquid-like minimum, with little change in its properties. However, as $`n^{}`$ reaches the value $`n_A^{}(s)`$, discontinuities are found. These are more prominent for the larger system sizes (Fig. 2) and particularly dramatic for values of $`s`$ not too large. The free energy drops abruptly, as the liquid minimum becomes unstable and the system has to find some other nearby minimum (our numerical minimization procedure is designed to converge only to stable local minima of the free energy). Computationally, this is heralded by a very sharp and obvious increase in the number of iterations required by our numerical procedure to find the free energy minimum nearest to the starting configuration. This new minimum is invariably glassy, as one might expect, since a considerable number of glassy minima are close in configuration space to the liquid-like minimum . The value of the free energy at the minimum that the system has reached drops sharply as the $`n_A^{}(s)`$ value is crossed, because the free energies of glassy minima are considerably lower in the region of the $`(n^{},s)`$ plane being considered. Also, every measure of structure in the system increases abruptly, since, as discussed above in connection with Figs. 3 and 4, glassy states are much more inhomogeneous than the liquid-like ones in this region of the $`(n^{},s)`$ plane. An example of the behavior found is displayed in Figs. 5 and 6. In the main part of Fig. 5, we show the evolution of the free energy as $`n^{}`$ is increased
in steps of 0.001, keeping $`s`$ fixed at 0.6 for a $`L=12`$ sample.

FIG. 5.: Discontinuities at the $`A`$ line. In the main plot, the free energy in dimensionless form for a $`L=12`$ sample ($`s=0.6`$) is plotted as a function of density. A sharp drop in the free energy is seen as the liquid minimum becomes unstable and the system switches to a glassy minimum. As shown in the inset, this switch is also reflected in the discontinuity in $`\lambda `$, the smallest eigenvalue of the matrix $`𝐌^{\prime }`$ defined in Eq.(9).

One can clearly see that $`\beta F`$ varies little while the system remains in the liquid minimum and jumps abruptly as this minimum becomes unstable near $`n_A^{}\approx 0.78`$. The behavior for the larger incommensurate samples is quite similar, the main difference being that the drop in $`\beta F`$ is much larger, and that the transition occurs, at this value of $`s`$, at $`n_A^{}\approx 0.84`$ for both $`L=15`$ and $`L=25`$. The value of $`n_A^{}`$ can readily be found to very high precision and it varies little as one averages over different realizations of the quenched disorder, for the same $`s`$. The error bars shown in Fig. 2 correspond to an average over six to twelve realizations (the larger number at larger $`s`$). The results in Fig. 1 are averages over five realizations. In the inset, we show that the smallest eigenvalue, $`\lambda `$, of the matrix $`𝐌^{\prime }`$ defined in Eq.(9) approaches zero as $`n^{}`$ approaches $`n_A^{}`$ from below. This is as would be expected – as noted in section II, the instability of a local minimum is signalled by the vanishing of $`\lambda `$.

FIG. 6.: Example of how the system becomes more structured as the $`A`$ line is crossed for an $`L=12`$ sample at $`s=0.6`$. The height $`g_{max}`$ of the first finite-$`r`$ peak in $`g(r)`$ increases discontinuously, the density nonuniformity represented by $`\rho _{max}`$ exhibits a large increase, and the average density $`\rho _{av}`$ shows a small discontinuous increase. In order to be able to use a single vertical scale, we have displayed $`\rho _{av}`$ rather than $`\overline{\rho }`$ (see text).

In Fig. 6, we show three quantities which characterize the nature of the density distribution at a minimum. These are: $`g_{max}`$, the value of the pair correlation function $`g(r)`$ at its first finite-$`r`$ maximum (near $`r=1`$); $`\rho _{max}`$, the maximum value of the $`\rho _i`$; and the dimensionless average density $`\rho _{av}`$ defined in section II. All these quantities exhibit discontinuous changes as the system switches minima at $`n^{}=n_A^{}\approx 0.78`$ (or at $`n_A^{}\approx 0.84`$ in the incommensurate case). $`g_{max}`$ remains close to unity as long as the system stays in the liquid state, and then jumps to a substantially larger value consistent with the increased short-range order present in a glassy minimum. This can also be seen from Figs. 3 and 4. The value of $`\rho _{max}`$ also increases by a considerable amount, indicating the increased inhomogeneity of a glassy minimum relative to the liquid-like one. The small increase in the value of $`\rho _{av}`$ reflects that the average density at a glassy minimum is slightly higher than that at the liquid-like minimum. The behavior discussed above changes as $`s`$ is increased. The change occurs near $`s=1`$ for $`L=12`$, and at somewhat larger $`s`$ for the other system sizes, as the $`A,B,C`$ lines become very close. The results obtained in the larger-$`s`$ regime are described in the next subsection.
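Schematically, the upward density sweep used in this subsection can be written as follows; `minimize_fn` and `beta_F_fn` are the stand-in helpers of the earlier sketches, and the jump threshold is an arbitrary illustrative choice (in practice the sharp rise in iteration count and the vanishing of $`\lambda `$ provide the same signal).

```python
import numpy as np

def find_n_A(rho, n_grid, V, minimize_fn, beta_F_fn, jump=10.0):
    # Upward density sweep used to locate the A line.  minimize_fn(rho0,
    # n_star, V) and beta_F_fn(rho, n_star, V) stand in for the routines
    # sketched earlier; `jump` is an arbitrary threshold for the abrupt drop
    # of beta F that signals the loss of the liquid minimum.
    F_prev = None
    for n_star in n_grid:
        rho = minimize_fn(rho, n_star, V)   # warm start at each density
        F = beta_F_fn(rho, n_star, V)
        if F_prev is not None and F_prev - F > jump:
            return n_star, rho              # estimate of n*_A(s)
        F_prev = F
    return None, rho
```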
### C Instability of glassy minima and the liquid to glass transition To find the $`B`$ and $`C`$ lines, we must start with a carefully chosen glassy configuration at a relatively high $`n^{}`$ and fixed $`s`$, and then follow this configuration to lower densities by decreasing $`n^{}`$ in small steps ($`\delta n^{}\approx 0.001`$), keeping the value of $`s`$ unchanged. This is continued until the minimum becomes unstable and the minimization routine converges to a new minimum which, if the starting minimum is chosen as described below, turns out to be the liquid-like one. The density at which this occurs defines the value of $`n_C^{}`$. Then, comparing the free energy of the glassy minimum with that of the liquid minimum obtained for the same realization of the disorder, it is easy to determine the value of $`n_B^{}`$ – this is the value of $`n^{}`$ at which the two free energies are equal. The determination of the appropriate starting glassy minimum is nontrivial. Glassy minima for $`s\ne 0`$ are obtained by continuation from those of the pure system ($`s=0`$). One may think that the best choice would be to take the glassy minimum with the lowest free energy at the starting ($`n^{},s`$) point. In practice, this is difficult to implement because an exhaustive enumeration of all the glassy minima is computationally very hard. The glassy minimum with the lowest free energy at a particular point in the $`(n^{},s)`$ plane does not in general continue to have the lowest free energy as the values of $`n^{}`$ and $`s`$ are changed. Also, in the pure system, all the configurations obtained by applying one of the symmetry operations of the computational mesh to the density configuration at a particular glassy minimum also correspond to local minima with exactly the same free energy. For $`s\ne 0`$, all these symmetry-related minima have to be considered separately because the presence of the random potential destroys the symmetries present in the pure limit. We have not found a rigorous solution to this problem. Instead, we first carried out an exploratory study of how the locations of the $`B`$ and $`C`$ lines in the phase diagram depend on the choice of the initial glassy minimum. The following choices, among others, were considered in our initial exploration. (a) One of the low-lying $`s=0`$ glassy minima, continued to finite $`s`$. (b) Beginning with the same starting configuration as in (a) and a specific realization of the random variables $`\{V_i\}`$, minimize the random potential energy (the last term in Eq.(6)) with respect to all symmetry operations of the computational mesh. This attempts to find the configuration that minimizes, among all the symmetry related ones, the contribution of the random potential to the free energy, but it is not quite rigorous because the minimization is performed using the values of $`\{\rho _i\}`$ at the $`s=0`$ minimum. (c) The glassy minima to which the system moves when the density is increased above the $`A`$ line, as discussed in the preceding subsection. The outcome of this study is that the locations of the $`B`$ and $`C`$ lines in the $`(n^{},s)`$ plane are not sensitive to the choice of the glassy minimum as long as it is one of the low-lying minima. (Even when we have deliberately or accidentally chosen a “wrong”, non-low-lying, minimum, we have found that the system often spontaneously makes a glass-to-glass transition to a low-lying minimum as one decreases $`n^{}`$ above the $`B`$ line.)
The variation of the values of $`n_B^{}`$ and $`n_C^{}`$ for different choices of the glassy minimum is comparable to the uncertainty of these values arising from sample-to-sample variations caused by differences in the realization of the disorder. The results described below were obtained (unless otherwise indicated) from runs in which a low-lying glassy minimum obtained from continuation of one at $`s=0`$ was taken to be the initial state for the density-lowering run. As explained at the beginning of this subsection, the initial configuration is followed to lower densities at fixed $`s`$ and the $`n_C^{}(s)`$ and $`n_B^{}(s)`$ points are found for that value of $`s`$. For relatively small values of $`s`$, the signatures of the $`C`$ instability are very easy to detect: they are similar to the discontinuities shown in Figs. 5 and 6. At larger values of $`s`$ more care is required. For small values of $`s`$, the $`A`$, $`B`$ and $`C`$ lines are well separated from one another. However, as the value of $`s`$ is increased, these three lines begin to approach each other. As shown in Figs. 1 and 2, the separation between lines $`A`$ and $`B`$ decreases rather rapidly with increasing $`s`$, while the separation between lines $`B`$ and $`C`$ decreases more slowly. Finally, near $`s=1`$, these three lines appear to merge with one another for the $`L=12`$ system. For $`L=25`$ (and also for $`L=15`$), the separation between them does not exceed the combined error bars, but separate $`B`$ and $`C`$ transitions can be detected in most (not all) “runs” (i.e. realizations of the disorder) as explained in detail below. At larger $`s`$, it becomes increasingly difficult to resolve these three lines as they come close to each other. Since lines $`A`$ and $`C`$ represent, respectively, the limits of stability of the liquid and glassy minima and line $`B`$ corresponds to the first-order liquid-glass transition, a merging of these three lines suggests that this transition becomes continuous as $`s`$ is increased beyond a “tricritical” value which would be close to unity for the $`L=12`$ commensurate sample and somewhat larger for the incommensurate samples. Another possibility is that the first-order liquid-glass transition disappears beyond a critical point near $`s=1`$. To examine the behavior in this region more closely, we have carried out several numerical experiments in which the value of $`n^{}`$ is “cycled” through the liquid-glass transition, keeping $`s`$ fixed at values close to unity. In this way, the three lines are detected in the same “run”. These numerical experiments are similar to simulations of hysteresis in magnetic phase transitions. We start with the liquid minimum at a low value of $`n^{}`$ (below line $`C`$), and increase $`n^{}`$ in small steps, keeping $`s`$ fixed. The liquid minimum is thus followed to higher densities until it undergoes a rapid change signalling a possible instability. The process of increasing $`n^{}`$ in small increments is continued for a few more steps, and then the local minimum so obtained is followed to lower densities by decreasing $`n^{}`$ in small steps. This is continued until the starting value of $`n^{}`$ is reached. If the liquid-glass transition at the chosen value of $`s`$ is first-order with the three densities $`n_A^{}`$, $`n_B^{}`$, and $`n_C^{}`$ separated from one another, then the cycling experiment described above should exhibit clear evidence of hysteresis. This is indeed what we find, for all system sizes and at every run, if the value of $`s`$ is lower than a certain critical value.
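A sketch of this cycling experiment, with the same stand-in helpers as before: the compression and decompression branches of $`\beta F`$ are recorded, and a gap between them brackets the interval $`[n_C^{},n_A^{}]`$ while their crossing estimates $`n_B^{}`$.

```python
import numpy as np

def cycle_density(rho, n_lo, n_hi, V, minimize_fn, beta_F_fn, dn=0.001):
    # Compression/decompression cycle across the transition region.  A gap
    # between the two branches of beta F brackets [n*_C, n*_A]; the branch
    # crossing estimates n*_B.  Helpers as in the earlier sketches.
    grid = np.arange(n_lo, n_hi + dn / 2, dn)
    up, down = [], []
    for n_star in grid:                     # follow the liquid minimum up
        rho = minimize_fn(rho, n_star, V)
        up.append(beta_F_fn(rho, n_star, V))
    for n_star in grid[::-1]:               # follow the new minimum back down
        rho = minimize_fn(rho, n_star, V)
        down.append(beta_F_fn(rho, n_star, V))
    return grid, np.array(up), np.array(down)[::-1]
```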
FIG. 7.: Hysteresis and discontinuities across the liquid-glass transition at small values of $`s`$. In the main plot, the dimensionless free energy of the stable minimum is plotted vs. density as one cycles across the $`A`$, $`B`$ and $`C`$ lines as explained in the text. Hysteresis is clearly observed. In the inset, the quantity $`g_{max}`$ is shown. The results shown are at $`s=0.8`$ for a commensurate $`L=12`$ system, but the same behavior is found in this range of $`s`$ for incommensurate systems.

A typical example is shown in Fig. 7, which shows the results for a $`L=12`$ sample at $`s=0.8`$. The hysteresis in the free energy and $`g_{max}`$ (shown in the inset) is evident: the liquid minimum becomes unstable at $`n_A^{}\approx 0.735`$ as $`n^{}`$ is increased from a low initial value, while the glassy minimum found for $`n^{}>n_A^{}`$ can be continued all the way down to $`n_C^{}\approx 0.720`$ before it becomes unstable. The liquid-glass transition occurs at $`n_B^{}\approx 0.725`$ where the two branches of the free energy cross. The same situation occurs for the incommensurate $`L=25`$ system except that the values of the transition points are $`n_A^{}\approx 0.79`$, $`n_B^{}\approx 0.73`$ and $`n_C^{}\approx 0.71`$ for $`s=0.8`$. The results at $`L=15`$ are, within error bars, the same as those for $`L=25`$ at this value of $`s`$. The behavior in Fig. 7 is to be contrasted with that shown in Fig. 8, which displays the results of the cycling experiment on a $`L=12`$ sample at $`s=1`$. The distribution of the random variables $`\{r_i\}`$ in this sample is the same as that of Fig. 7 – only the strength of the disorder is changed. In this figure, there is no evidence of hysteresis in the free energy.

FIG. 8.: Cycling across the liquid-glass transitions for $`s=1`$ in a $`L=12`$ commensurate system. The same quantities are plotted as in Fig. 7, and now no hysteresis is seen. Incommensurate systems exhibit the same behavior at somewhat larger values of $`s`$, but not in all runs.

The plot of $`g_{max}`$ shown in the inset exhibits a sharp change near $`n^{}=0.706`$ for both increasing-$`n^{}`$ and decreasing-$`n^{}`$ runs, and the results for the two runs are nearly identical. Given the rounding-off errors associated with the numerical procedures we use, the small differences between the increasing-$`n^{}`$ and decreasing-$`n^{}`$ values of $`g_{max}`$ are likely to be insignificant. We, therefore, conclude that at least within the resolution of our numerical procedures, there is no hysteresis at $`s=1.0`$ for this $`L=12`$ sample. This implies that the first order transition found in this sample for $`s=0.8`$ either becomes a continuous one or disappears as the value of $`s`$ is increased to 1.0. The sharp change in the value of $`g_{max}`$ near $`n^{}=0.706`$ suggests that the transition persists as a continuous one. To investigate this further, we have calculated the derivatives of $`g_{max}`$, $`\rho _{max}`$, and $`\rho _{tot}\equiv \underset{i}{\sum }\rho _i`$ with respect to $`n^{}`$ in the region where these quantities change rapidly. We have also examined the behavior of $`\lambda `$ as a function of $`n^{}`$ in this region.

FIG. 9.: Derivatives with respect to $`n^{}`$ of the quantities $`\rho _{tot}`$, $`\rho _{max}`$ and $`g_{max}`$, as defined in the text, plotted as functions of $`n^{}`$ across a putatively continuous liquid-glass transition in a $`L=12`$ sample with $`s=1.0`$. The three quantities have sharp peaks at $`n^{}=0.706`$.
The eigenvalue $`\lambda `$, also defined in the text, shows a pronounced dip at the same point.

Results for these quantities are shown in Fig. 9 for the same sample as that of Fig. 8. All the derivatives exhibit sharp peaks at $`n^{}=0.706`$, and the value of $`\lambda `$ goes through a minimum that is very close to zero at the same point. These results strongly suggest the occurrence of a continuous phase transition at $`n^{}=0.706`$. However, due to the limited resolution of our numerical calculations and the smallness of the sample size, we cannot rule out the possibility that the observed behavior reflects a sharp crossover rather than a true phase transition. Similar results are found for larger values of $`s`$. The continuation of the “transition line” beyond the point where the lines $`A`$, $`B`$ and $`C`$ come together is determined by locating the value of $`n^{}`$ at which the eigenvalue $`\lambda `$ reaches a minimum. The value of $`s`$ at which the $`A`$, $`B`$ and $`C`$ lines merge and the hysteresis in the cycling experiment disappears is found to be weakly dependent on the realization of the disorder – it varies between 1.0 and 1.2 for the five different $`L=12`$ samples studied. For the incommensurate samples, the situation is somewhat more ambiguous. For $`L=25`$, the same cycling procedure shows that the transition is clearly hysteretic for all runs with $`s\le 1.1`$. For larger values of $`s`$, an increasingly larger percentage of the runs is non-hysteretic (i.e. the results for $`\beta F`$ look like those in Fig. 8), while the other runs display a behavior similar to that in Fig. 7 but with much smaller discontinuities. As $`s`$ is increased beyond $`s=1.8`$, it becomes, in most of the “runs”, impossible to distinguish the discontinuities, if any, from computer noise. Thus, it is possible in this case to plot separate $`A`$, $`B`$, and $`C`$ lines all the way up to $`s=1.8`$. The results for $`L=15`$ are quite consistent with those for $`L=25`$, but the smaller system size makes all interpretations more difficult. Thus, it is more difficult to identify the precise position of any well-defined tricritical point (or a critical point) from the results for the incommensurate samples. One might alternatively say that these incommensurate results are indicative of a crossover. It is not possible to completely rule out that the behavior is different for the commensurate and incommensurate samples, or that the poorer resolution of the smaller samples masks discontinuous behavior in some of the larger $`s`$ runs. ### D Crystallization To study how the crystallization density $`n_D^{}`$ changes as $`s`$ is increased from zero (i.e. the location of the line $`D`$), we start with the crystalline minimum obtained for a commensurate sample at a large value of $`n^{}`$.

FIG. 10.: Free energy crossing at the crystallization transition. The solid and dotted lines represent, respectively, the free energies of the crystalline and liquid-like minima of a $`L=12`$ sample with $`s=0.6`$. Their crossing point is the density $`n_D^{}(s)`$.

We then find the symmetry-related configuration that minimizes the random potential energy for a particular realization of the disorder and continue this configuration to the desired value of $`s`$. This configuration is then continued to smaller values of $`n^{}`$ by decreasing $`n^{}`$ in small steps.
The crystalline minimum turns out to be quite robust under changes of the density and the strength of the disorder – the minimization routine converges rather quickly to the new minimum as the value of $`n^{}`$ or $`s`$ is changed by a small amount. While decreasing the value of $`n^{}`$, we keep track of the free energy of the crystalline minimum and find the value of $`n^{}`$ at which this free energy crosses that of the liquid minimum for the same realization of the disorder. For relatively small values of $`s`$, the crossing point determines the value of $`n_D^{}`$ for the chosen value of $`s`$. Typical results for the crossing of these two free energies are shown in Fig. 10. Our results for line $`D`$, averaged over five realizations of the disorder, are shown in Fig. 1. The crystallization transition is strongly first order for all values of $`s`$. In the small $`s`$ regime, the crystalline minimum has the lowest free energy for all densities above line $`D`$. Therefore, the lines $`A`$, $`B`$ and $`C`$ do not have any equilibrium thermodynamic significance in this regime for a commensurate system: the liquid-glass transition at line $`B`$ can be observed only if the crystallization transition at line $`D`$ is avoided, e.g. by rapid compression. As shown in Fig. 1, the crystallization line crosses the liquid-glass transition line at a point which is very close to that where the lines $`A`$, $`B`$ and $`C`$ seem to come together. Beyond this point, line $`D`$ is determined by the crossing of the free energies of glassy and crystalline minima. The procedure is quite analogous to that shown in Fig. 10. This line, therefore, represents a first order transition between crystalline and glassy states in this regime. The phase diagram of Fig. 1 implies that the system undergoes a first order liquid-to-crystal transition for small values of $`s`$ as the density is increased from a low initial value. However, as the value of $`s`$ is increased above a critical value (which is close to unity for the $`L=12`$ system), the transition as $`n^{}`$ is increased becomes a continuous liquid-to-glass transition (or perhaps a sharp crossover). The glassy state then undergoes a first order transition to the crystalline state as the density is increased further. The observed curvature of line $`D`$ for large $`s`$ also implies that the system would undergo a first order crystal-to-glass transition as the strength of the disorder is increased at constant density. ## IV Summary and Discussion We have mapped out the mean-field phase diagram of a hard sphere system in the presence of a quenched random potential by numerically studying the evolution of the minima of a model free energy as a function of the density $`n^{}`$ and the strength $`s`$ of the disorder. The phase diagram in the $`(n^{},s)`$ plane exhibits liquid, glassy and crystalline (for commensurate samples) phases. We find that the standard first order crystallization transition which occurs at $`s=0`$ upon increasing $`n^{}`$ retains its character at small $`s`$ as a first order transition from a weakly inhomogeneous liquid phase to a nearly crystalline state. The density at which this transition occurs increases somewhat with the strength of the disorder. We also find for all samples a liquid to glass transition in the metastable, “supercompressed” regime. This transition is first-order for small $`s`$, but within the resolution of our results it appears to become continuous as $`s`$ is increased above a critical value. 
This critical value is larger for incommensurate samples. The crystallization line in the $`(n^{},s)`$ plane crosses the glass transition line near the point where the glass transition becomes continuous. Thus, the first order crystallization transition found for small $`s`$ as the density is increased from a small initial value is replaced, for sufficiently large $`s`$, by a continuous liquid to glass transition. The phase diagram also shows, at larger $`s`$, a first order crystal to glass transition as the strength of the disorder is increased at constant density. All the qualitative features of our phase diagram (i.e. its topology, the shapes of the transition and instability lines, and the nature of the transitions) are identical to those found in the replica-based analytic study of the same system . Two of our most important results, namely the change in the nature of the liquid to glass transition as the strength of the disorder is increased above a critical value, and the crossing of the crystallization and glass transition lines above this critical value of the disorder strength, were also found in the analytic calculation. This similarity between the results of two studies using extremely different methodologies strongly suggests that the qualitative features of our phase diagram are correct, at least at the mean-field level. The considerable quantitative differences that exist between the numerical and replica results, i.e. that all the transition and instability lines in our phase diagrams lie at substantially lower densities than those obtained in the analytic study, have the same origin as the discrepancy between our $`s=0`$ results and those of molecular dynamics simulations of the pure hard sphere system. As noted in our earlier work , these differences result from the discretization of the free energy functional. The use of a simple cubic mesh of spacing $`h\approx 0.2\sigma `$ in the discretization procedure increases the relative stability of inhomogeneous local minima of the free energy and thus leads to substantially lower values for the densities at which crystallization and the glass transition occur. The quantitative differences between our results in Fig. 1 and those in Fig. 2, on the other hand, appear to arise chiefly from the incommensurability of the latter sample, rather than from the slight difference in the values of $`h`$, or even from that in the values of $`L`$: we have found negligible sample-size effects in comparing the $`L=15`$ results to those at $`L=25`$ at the same value of $`h`$. The effects of discretization would presumably disappear for $`h`$ much smaller than the width of the approximately Gaussian density distributions near the points where the particles are localized at an inhomogeneous minimum of the continuum free energy functional. Unfortunately, a numerical calculation with such small values ($`\sim 0.01\sigma `$) of $`h`$ would require dealing with a very large number (of the order of $`10^6`$) of variables $`\{\rho _i\}`$. This appears to be computationally intractable. Our phase diagram is a mean-field one – possible effects of fluctuations are not included in our calculation. The first-order crystallization transition is not expected to be strongly affected by fluctuations. The situation is more complex for the glass transition because there are a large number of glassy local minima.
When the effects of fluctuations are included, the system might visit a large number of different glassy minima during its evolution over a long period of time, and thus behave like a liquid in the sense that the particles would no longer be localized in space and the time-averaged local density would be only weakly inhomogeneous. A true thermodynamic glass transition would occur only if the characteristic time scale for transitions between different glassy minima diverges in the thermodynamic limit. Whether this happens in the pure system is still a highly controversial issue. Further investigations of this question for systems of particles in the presence of quenched disorder would be very worthwhile. Also, the presence of multiple low-lying glassy minima of the free energy is expected to lead to slow relaxation even if no thermodynamic glass transition is present. Therefore, signatures of the mean-field glass transition found in our study should show up in the dynamics of the system even if no thermodynamic glass transition occurs when fluctuations are taken into consideration. Our density–disorder phase diagram exhibits qualitative similarities with the field–temperature phase diagram of some high-$`T_c`$ superconductors in the presence of random point pinning. For a system of vortices in the mixed phase of type-II superconductors, the temperature $`T`$ plays the role of the density $`n^{}`$ of the hard sphere system – increasing $`T`$ is analogous to decreasing $`n^{}`$. As pointed out in the Introduction, increasing the magnetic field $`H`$ is believed to increase the effective strength of the pinning disorder. Using these analogies, one can translate, in a very crude and qualitative sense, our phase diagram in the $`(n^{},s)`$ plane to a phase diagram for superconductors in the $`(T,H)`$ plane. Then, our result that the crystallization transition at weak disorder is replaced by a continuous glass transition as the strength of the disorder is increased translates into the statement that for superconductors, the first-order liquid to Bragg glass transition at low fields should change over to a continuous glass transition as the field is increased. As noted in the Introduction, this is precisely the behavior found in experiments on a family of high-$`T_c`$ superconductors. The experimentally obtained phase diagram of these superconductors also exhibits a Bragg glass to amorphous solid transition as the field $`H`$ is increased at low temperatures. This is analogous to the crystal to glass transition found in our phase diagram as the strength of the disorder is increased at constant density. Further evidence in support of this analogy is provided by a recent numerical study that suggests that the high-field, low-temperature phase of high-$`T_c`$ superconductors (the so-called vortex glass phase) is very similar to a structural glass. In view of these similarities, an extension of our calculation to a system of pancake vortices in layered superconductors with random point pinning, using the appropriate form of the free energy, would be of obvious interest. We are not aware of any experimentally studied system that provides a direct and precise physical realization of the model studied here. Colloidal suspensions in the presence of a time-independent, spatially random external potential (produced, for example, by suitably configured laser fields ) would probably provide a close approximation to our model. 
Since simple liquids with short range pair potentials which are strongly repulsive at short distances behave in many ways like a hard sphere liquid, our calculation is expected to apply, at least qualitatively, to such systems also. ###### Acknowledgements. We are grateful to F. Thalmann, G. I. Menon, A. K. Sood, S. Bhattacharya, G.F. Mazenko and T. Witten for helpful discussions or comments.
# Compact Stellar Systems in the Fornax Cluster: Super-massive Star Clusters or Extremely Compact Dwarf Galaxies? ## 1 Introduction In cold dark matter (CDM) galaxy formation, small dense halos of dark matter collapse at high redshift and eventually merge to form the large virialised galaxy clusters seen today. The CDM model is very good at reproducing large-scale structure, but only very recently have the best numerical simulations (Moore et al. 1998) had the resolution to trace the formation of small halos within galaxy clusters, with masses $`10^9\mathrm{M}_{\odot}`$. We do not yet know what the lower mass limit is for the formation of halos in the cluster environment: determining the lower limit of galaxy mass in clusters will provide an important constraint on these models. Most of the smallest cluster galaxies are low surface brightness dwarfs for which mass estimates are very difficult, though comparison with field low surface brightness dwarfs would suggest that they may be dark matter dominated (e.g. Carignan & Freeman 1988). In this paper we describe a population of small objects we have found in the Fornax Cluster (see also Minniti et al. 1998 and Hilker et al. 1999) which have high surface brightness. The origin and nature of these objects are not yet clear, but if they are a product of the galaxy formation process in clusters, their high surface brightness will make it possible to probe the low-mass limit discussed above. They may represent extreme examples of compact low luminosity (“M32-type”) dwarf ellipticals. Alternatively, these objects may be super-massive star clusters: there is a very large population of globular clusters associated with the central galaxy of the Fornax Cluster, NGC 1399 (Grillmair et al. 1994). These objects are generally similar to Galactic globular clusters with similar colours and luminosities (Forbes et al. 1998). There is evidence that they are not all bound to the NGC 1399 system. Kissler-Patig et al. (1999) show that the kinematics of 74 of the globular clusters indicate that they are associated with the cluster gravitational potential rather than that of NGC 1399. They infer that the most likely origin of these globular clusters is that they have been tidally stripped from neighbouring galaxies. This has also been suggested by West et al. (1995), although the effect would be diluted by the large number of halo stars that would presumably be stripped at the same time. Bassino, Muzzio & Rabolli (1994) suggest that the NGC 1399 globular clusters are remnants of the nuclei of nucleated dwarf galaxies that have survived the disruption of being captured by the central cluster galaxy. A related suggestion is a second model proposed by West et al. (1995) that intra-cluster globular clusters could have formed in situ in the cluster environment. Bassino et al. (1994) conclude their discussion by noting that remnant nuclei an order of magnitude larger (and more luminous) than standard globular clusters would also be formed in significant numbers, but that existing globular cluster searches would not have included them. In Section 2 of this paper we describe how the observations of our Fornax Spectroscopic Survey have sampled this part of the cluster population by measuring optical spectra of all objects brighter than $`B_J=19.7`$ in the centre of the Fornax Cluster. In Section 3 we describe the properties of a new population of compact objects found in the cluster that appear to be intermediate in size between globular clusters and the smallest compact dwarf galaxies.
We discuss the nature of these objects in Section 4 and show that higher resolution observations will enable us to determine if they are more like globular clusters or dwarf galaxies. ## 2 Discovery Observations: The Fornax Spectroscopic Survey Our Fornax Spectroscopic Survey, carried out with the 2dF multi-object spectrograph on the Anglo-Australian Telescope (see Drinkwater et al. 2000), is now 87% complete in its first field to a limit of $`B_J=19.7`$. The 2dF field is a circle of diameter 2 degrees (i.e. $`\pi `$ square degrees of sky). We have measured optical spectra of some 4000 objects (some going fainter than our nominal limit) in a 2dF field centred on the central galaxy of the Fornax Cluster (NGC 1399). This survey is unique in that the targets (selected from digitised UK Schmidt Telescope photographic sky survey plates) include all objects, both unresolved (“stars”) and resolved (“galaxies”), in this large area of sky. The resolved objects measured are mostly background galaxies as expected, with a minor contribution from Fornax Cluster members. The unresolved objects are mostly Galactic stars and distant AGN, also as expected, but some are compact starburst (and post-starburst) galaxies beyond the Fornax Cluster (Drinkwater et al. 1999a). Finally, in addition to the dwarf galaxies already listed in the Fornax Cluster Catalog (FCC: Ferguson 1989) which we have confirmed as cluster members, we have found a sample of five very compact objects at the cluster redshift which are unresolved on photographic sky survey plates and not included in the FCC. These new members of the Fornax Cluster are listed in Table 1 along with their photometry measured from the UKST plates. Our 2dF measurements of unresolved objects are 80% complete in the magnitude range of these objects ($`17.5<b_j<20.0`$). There is therefore about one more similar compact object still to be found in our central 2dF field. The number density of these objects is therefore $`6\pm 3`$ per 2dF field ($`\pi `$ square degrees). Two of the objects (the two brightest) were also identified as cluster members by Hilker et al. (1999). Hilker et al. measured spectra of about 50 galaxies brighter than $`V=20`$ in a square region of width 0.25 degrees at the centre of the Fornax Cluster. In the “galaxies” they include objects which were very compact, but still resolved. By contrast our own survey covers a much larger area and also includes all unresolved objects. ## 3 Properties of the compact objects ### 3.1 Sizes The images of these objects are unresolved and classified “stellar” in our UKST plate data, although imaging with the CTIO Curtis Schmidt shows that the brightest two objects have marginal signs of extended structure. In Fig. 1 we present R-band (Tech Pan emulsion + OG 590 filter) photographic images of these compact objects from the UKST. These were taken in seeing of about 2.2 arcseconds FWHM, and the third object (Thales 3; the objects are named after Thales of Miletus, the first known Greek philosopher and scientist and possibly the earliest astronomer) is resolved with a 3.2 arcsecond FWHM. Applying a very simple deconvolution of the seeing, this corresponds to a scale size (HWHM) of about 80 pc. This is much larger than any known globular cluster, so this object, at least, is not a globular cluster. The other objects are all unresolved, so must have scale sizes smaller than this.
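The scale-size estimate follows from subtracting the seeing in quadrature (assuming roughly Gaussian profiles) and applying the small-angle scale at the Fornax distance; a minimal sketch, taking the distance modulus of 30.9 adopted in the next section:

```python
import math

# Simple quadrature deconvolution of the seeing, with numbers from the text.
fwhm_seeing = 2.2          # arcsec, UKST seeing
fwhm_obs    = 3.2          # arcsec, measured FWHM of Thales 3
dist_mod    = 30.9         # Fornax distance modulus (Bureau et al. 1996)

fwhm_int = math.sqrt(fwhm_obs**2 - fwhm_seeing**2)    # intrinsic FWHM, arcsec
d_pc = 10 ** (dist_mod / 5.0 + 1.0)                   # distance in pc
pc_per_arcsec = d_pc * math.radians(1.0 / 3600.0)     # small-angle scale

hwhm_pc = 0.5 * fwhm_int * pc_per_arcsec
print(f"intrinsic FWHM = {fwhm_int:.2f} arcsec, HWHM = {hwhm_pc:.0f} pc")
# gives ~85 pc, consistent with the ~80 pc scale size quoted above
```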
### 3.2 Luminosity and Colours These new objects have absolute magnitudes $`-13<M_B<-11`$, based on a distance modulus of 30.9 mag to the Fornax Cluster (Bureau et al. 1996). These values are at the lower limit of dwarf galaxy luminosities (Mateo 1998), but are much more luminous than any known Galactic globular clusters (Harris 1996) and the most luminous of the NGC 1399 globulars (Forbes et al. 1998), which have $`M_B\sim -10`$. The luminosities of the compact objects are compared to several other populations of dwarf galaxy and star cluster in Fig. 2. We note that the magnitude limit of the 2dF data corresponds to an absolute magnitude of $`M_B\approx -11`$ here. In order of decreasing luminosity the first comparison is with the dwarf ellipticals listed in the FCC as members of the Fornax Cluster. The possible M32-type galaxies in the FCC are not included as none of them have yet been shown to be cluster members (Drinkwater, Gregg & Holman 1997). The Figure shows that the Fornax dEs have considerable overlap in luminosity with the compact objects, but morphologically they are very different, being fully resolved low surface brightness galaxies. Recently, several new compact dwarf galaxies have been discovered in the Fornax Cluster (Drinkwater & Gregg 1998) but these are all brighter than $`M_B=-14`$ and do not match any of the objects we discuss here. Binggeli & Cameron (1991) measured the luminosity function of the nuclei of nucleated dwarf elliptical galaxies in the Virgo Cluster. The Figure shows that this also overlaps the distribution of the new compact objects. In this case the morphology is the same, so the compact objects could originate from the dwarf nuclei. The Figure also shows the luminosity functions of both the NGC 1399 globular clusters (Bridges, Hanes & Harris 1991) and Galactic globular clusters (Harris 1996). These are quite similar and have no overlap with the compact objects. For completeness we note that the luminosities of the compact objects have considerable overlap with the luminosities of dwarf galaxies in the Local Group (Mateo 1998), but even the most compact of the Local Group dwarfs, Leo I ($`M_B=-11.1`$), would be resolved ($`r_e\sim 3^{\prime \prime }`$) in our images at the distance of Fornax. The only population they match in both luminosity and morphology is the bright end of the nuclei of nucleated dwarf ellipticals.
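The quoted absolute-magnitude range follows directly from the survey apparent magnitudes; a minimal sketch of the conversion (the $`b_j`$ values used are simply the survey magnitude range, and the distance modulus is that of Bureau et al. 1996):

```python
# Convert survey apparent magnitudes to absolute magnitudes with m - M = 30.9.
dist_mod = 30.9
for b_j in (17.5, 19.7, 20.0):          # survey magnitude range from the text
    print(f"b_j = {b_j:.1f}  ->  M_B = {b_j - dist_mod:+.1f}")
# b_j = 17.5 gives -13.4 and b_j = 20.0 gives -10.9, spanning -13 < M_B < -11
```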
### 3.3 Spectral Properties The 2dF discovery spectra of these compact objects are shown in Figure 3. They have spectra similar to those of early type dwarf galaxies in the sample (two are shown for comparison in the Figure) with no detectable emission lines. As part of the spectral identification process in the Fornax Spectroscopic Survey, we cross-correlate all spectra with a sample of stellar templates from the Jacoby et al. (1984) library. The spectra of the new compact objects were best fit by K-type stellar templates, consistent with an old (metal-rich) stellar population. The dE galaxies observed with the same system by contrast are best fit by younger F and early G-type templates. This gives some indication in favour of the compact objects being related to globular clusters, although we note that two of them were analysed by Hilker et al. (1999) in more detail without any conclusive results. We do not have the spectrum of a dE nucleus available for direct comparison, but since our 2dF spectra are taken through a 2 arcsec diameter fibre aperture, the spectrum of FCC 211, a nucleated dE, is dominated by the nucleus. This spectrum was fitted by a younger F-type stellar template, again suggestive of a younger population than the compact objects. We cannot draw any strong conclusions from these low-resolution, low signal-to-noise spectra. ### 3.4 Radial Distribution The main advantage of our survey over the previous studies of the NGC 1399 globular cluster system (e.g. Grillmair et al. 1994, Hilker et al. 1999) is that we have complete spectroscopic data over a much larger field, extending to a radius of 1 degree (projected distance of 270 kpc) from the cluster centre. This means that we can determine the spatial distribution of the new compact objects. In Figure 4 we plot the normalised, cumulative radial distribution of the new compact objects compared to that of foreground stars and cluster galaxies. This plot is the one used to calculate Kolmogorov-Smirnov statistics and allows us to compare the distributions of objects independent of their mean surface densities. It is clear from the Figure that the new compact objects are very concentrated towards the centre of the cluster, at radii between 5 and 30 arcminutes (20–130 kpc). Their distribution is more centrally concentrated than the King profile fitted to cluster members by Ferguson (1989) with a core radius of 0.7 degrees (190 kpc). The Kolmogorov-Smirnov (KS) test gives a probability of 0.01 that the compact objects have the same distribution as the FCC galaxies: they are clearly not formed (or accreted) the same way as average cluster galaxies. To test the hypothesis that the compact objects are formed from nucleated dwarfs, we also plot the distribution of all the FCC nucleated dwarfs, as these are more clustered than other dwarfs (Ferguson & Binggeli 1994). However, in the central region of interest here the nucleated dwarf profile lies very close to the King profile of all the FCC galaxies, so this does not provide any evidence for a direct link with the new compact objects. West et al. (1995) suggest that a smaller core radius should be used for intra-cluster globular clusters (GCs). This profile, also shown in the Figure, is more consistent with the distribution of the new objects: the KS probability of the compact objects being drawn from this distribution is 0.39. We also note that the radial distribution of the compact objects is much more extended than the NGC 1399 globular cluster system as discussed by Grillmair et al. and extends to three times the projected radius of that sample. It is unlikely that all the compact objects are associated with NGC 1399. This is emphasised by a finding chart for the central 55 arcminutes of the cluster in Fig. 5 which indicates the location of the compact objects. They are widely distributed over this field and Thales 3 in particular is much closer to NGC 1404, although we note that its velocity is not close to that of NGC 1404 (see below). ### 3.5 Velocity Distribution We have some limited information from the radial velocities of the compact objects. The mean velocity of all 5 ($`1530\pm 110\mathrm{km}\mathrm{s}^{-1}`$) is consistent with that of the whole cluster ($`1540\pm 50\mathrm{km}\mathrm{s}^{-1}`$) (Jones & Jones 1980). However, given the small sample, it is also consistent with the velocity of NGC 1399 ($`1425\pm 4\mathrm{km}\mathrm{s}^{-1}`$) as might be expected for a system of globular clusters. Interestingly, the analysis of the dynamics of 74 globular clusters associated with NGC 1399 by Kissler-Patig et al.
(1999) notes that their radial velocity distribution has two peaks, at about 1300 and 1800$`\mathrm{km}\mathrm{s}^{-1}`$. Our sample is far too small to draw any conclusions about the dynamics of these objects at present. ## 4 Discussion We cannot say much more about the nature of these objects on the basis of our existing data. In ground-based imaging, they are intermediate between large GCs and small compact dwarf galaxies, so it becomes almost a matter of semantics to describe them as one or the other. The most promising way to distinguish between these possibilities is to measure their mass-to-light (M/L) ratios. If they are large, but otherwise normal, GCs, they will be composed entirely of stars, giving very low M/L. If they are the stripped nuclei of dwarf galaxies we might expect them to be associated with some kind of dark halo, but we would not detect the dark halos at the small radii of these nuclei, so we would also measure small M/L values. Alternatively, these objects may represent a new, extreme class of compact dwarf elliptical (“M32-type”) galaxy. These would presumably have formed by gravitational collapse within dark-matter halos, so would have high mass-to-light ratios, like dwarf galaxies in the Local Group (Mateo 1998). One argument against this interpretation is the apparent lack of M32-like galaxies at brighter luminosities (Drinkwater & Gregg 1998). If the compact objects are dwarf galaxies, they will represent the faintest M32-like galaxies ever found. They may also fill in the gap between globular clusters and the fainter compact galaxies in the surface brightness vs. magnitude distribution given by Ferguson & Binggeli (1994). A further possibility is that these are small scale length ($`\sim 100`$ pc) dwarf spheroidal galaxies of only moderately low surface brightness. While Local Group dSphs of equivalent luminosities generally have substantially larger scale sizes (and consequently lower surface brightnesses) (Mateo 1998), Leo I for example has $`M_B=-11.0`$ and a scale length of only 110 pc (Caldwell et al. 1992), but as we discuss above this would be resolved in our existing imaging. Our existing data will only allow us to estimate a conservative upper limit to the mass of these objects. If we say that the core radii of the objects are less than 75 pc and the velocity dispersions are less than 400$`\mathrm{km}\mathrm{s}^{-1}`$ (the resolution of our 2dF spectra) we find that the virial mass must be less than $`10^{10}\mathrm{M}_{\odot}`$. For a typical luminosity of $`M_B=-12`$ this implies that $`M/L<2\times 10^3`$. This is not a very interesting limit, so we plan to reobserve these objects at higher spectral resolution from the ground and higher spatial resolution with the Hubble Space Telescope (HST) in order to be sensitive to $`M/L\sim 100`$. This will allow us to distinguish globular clusters from dwarf galaxies.
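The order of magnitude of this virial limit is easy to verify. The sketch below assumes a simple estimator $`M\eta \sigma ^2r/G`$ with an order-unity structure constant $`\eta `$ ($`\eta =5`$ is assumed here for illustration; the exact estimator used may differ):

```python
import math

# Order-of-magnitude virial mass limit from the upper limits quoted above.
G = 4.301e-3            # pc (km/s)^2 / M_sun
sigma = 400.0           # km/s, limit set by the 2dF spectral resolution
r = 75.0                # pc, upper limit on the core radius
eta = 5.0               # assumed order-unity structure constant

m_vir = eta * sigma**2 * r / G
print(f"M_vir < {m_vir:.1e} M_sun")   # ~1e10 M_sun, as quoted
# Dividing by the luminosity corresponding to M_B ~ -12 gives M/L of order
# 10^3, consistent with the M/L < 2e3 limit quoted in the text.
```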
In order to demonstrate what we could measure with high-resolution images, we present two extreme possibilities in Figure 6: a very compact Galactic globular cluster and a dwarf galaxy with an $`r^{1/4}`$ profile ($`r_e=0.2`$ arcsec), both normalised to magnitudes of $`B=19`$ ($`V=18.4`$) and the Fornax cluster distance. We also plot the PSF of the Space Telescope Imaging Spectrograph (STIS) in the Figure for reference. The globular cluster profile is that of NGC 2808 (Illingworth & Illingworth 1976) with the radius scaled to the distance of the Fornax Cluster and the surface brightness then scaled to give the desired apparent magnitude. The globular cluster profile is very compact and will only just be resolved with HST, but it will clearly be differentiated from the dwarf galaxy profile. In addition to measuring the size of these objects for the mass measurement, the radial surface brightness profiles may also give direct evidence for their origin and relationship to other kinds of stellar systems. For example, if they are the stripped nuclei of galaxies, the remnants of the outer envelope might show up in the HST images as an inflection in the surface brightness profile at large radius. ## 5 Summary We have reviewed the observed properties of these new compact objects discovered in the Fornax Cluster. Their luminosities are intermediate between those of known globular clusters and compact dwarf galaxies, but they are consistent with the bright end of the luminosity function of the nuclei of nucleated dwarf ellipticals. The 2dF spectra are suggestive of old (metal-rich) stellar populations, more like globular clusters than dwarf galaxies. Finally, the radial distribution of the compact objects is more centrally concentrated than cluster galaxies in general, but extends further than the known globular cluster system of NGC 1399. These objects are most likely either massive star clusters (extreme globular clusters or tidally-stripped dwarf galaxy nuclei) or very compact, low-luminosity dwarf galaxies. In the latter case these new compact objects would be very low-luminosity counterparts to the peculiar compact galaxy M32. This would be particularly interesting given the lack of M32-like galaxies at brighter luminosities (Drinkwater & Gregg 1998). With higher resolution images and spectra we will be able to measure the mass-to-light ratios of these objects and determine which of these alternatives is correct. ## Acknowledgements We thank the referee for helpful suggestions which have improved the presentation of this work. We wish to thank Dr. Harry Ferguson for helpful discussions and for providing the STIS profile. We also thank Dr. Trevor Hales for assistance in the naming of the objects. MJD acknowledges support from an Australian Research Council Large Grant. ## References Caldwell, N., Armandroff, T.E., Seitzer, P., Da Costa, G.S., 1992, AJ, 103, 840 Bassino, L.P., Muzzio, J.C., Rabolli, M., 1994, ApJ, 431, 634 Binggeli, B., Cameron, L.M., 1991, A&A, 252, 27 Bridges, T.J., Hanes, D.A., Harris, W.E., 1991, AJ, 101, 469 Bureau, M., Mould, J.R., Staveley-Smith, L., 1996, ApJ, 463, 60 Carignan, C., Freeman, K.C., 1988, ApJ, 332, L33 Drinkwater, M.J., Gregg, M.D., 1998, MNRAS, 296, L15 Drinkwater, M.J., Gregg, M.D., Holman, B.A., 1997, in Arnaboldi M., Da Costa G.S., Saha P., eds, ASP Conf. Ser. Vol. 116, The Second Stromlo Symposium: The Nature of Elliptical Galaxies. Astron. Soc. Pac., San Francisco, p. 287 Drinkwater, M.J., Phillipps, S., Gregg, M.D., Parker, Q.A., Smith, R.M., Davies, J.I., Jones, J.B., Sadler, E.M., 1999a, ApJ, 511, L97 Drinkwater, M.J., Phillipps, S., Jones, J.B., Gregg, M.D., Deady, J.H., Davies, J.I., Parker, Q.A., Sadler, E.M., Smith, R.M., 2000, A&A, submitted Ferguson, H.C., 1989, AJ, 98, 367 Ferguson, H.C., Binggeli, B., 1994, A&ARv, 6, 67 Forbes, D.A., Grillmair, C.J., Williger, G.M., Elson, R.A.W., Brodie, J.P., 1998, MNRAS, 293, 325 Grillmair, C.J., Freeman, K.C., Bicknell, G.V., Carter, D., Couch, W.J., Sommer-Larsen, J., Taylor, K.,
1994, ApJ, 422, L9 Harris, W.E., 1996, AJ, 112, 1487 Hilker, M., Infante, L., Vieira, G., Kissler-Patig, M., Richtler, T., 1999, A&AS, 134, 75 Illingworth, G., Illingworth, W., 1976, ApJSup, 30, 227 Jacoby, G.H., Hunter, D.A., Christian, C.A., 1984, ApJSup, 56, 257 Jones, J.E., Jones, B.J.T., 1980, MNRAS, 191, 685 Kissler-Patig, M., Grillmair, C.J., Meylan, G., Brodie, J.P., Minniti, D., Goudfrooij, P., 1999, AJ, 117, 1206 Mateo, M., 1998, Ann. Rev. Astron. Astrophys., 36, 435 Minniti, D., Kissler-Patig, M., Goudfrooij, P., Meylan, G., 1998, AJ, 115, 121 Miller, L.A., Cormack, W., Paterson, M., Beard, S., Lawrence, L., 1992, in ‘Digitised Optical Sky Surveys’, eds. H.T. MacGillivray, E.B. Thomson, Kluwer Academic Publishers, p. 133 Moore, B., Governato, F., Quinn, T., Stadel, J., Lake, G., 1998, ApJ, 499, L5 West, M.J., Cote, P., Jones, C., Forman, W., Marzke, R.O., 1995, ApJ, 453, L77
# Tully–Fisher Relation and its Implications for Halo Density Profile and Self-interacting Dark Matter ## 1 INTRODUCTION It has been known since the work of Tully and Fisher (1977) that there are systematic correlations between the luminosity and the rotation velocity of disk galaxies. Since the rotation curves of relatively bright disk galaxies are quite flat at large radii, a characteristic rotation velocity $`V_\mathrm{c}`$ (e.g. the maximum rotation velocity $`V_{\mathrm{max}}`$) can be defined for each disk. When luminosity $`L`$ is plotted against $`V_{\mathrm{max}}`$ for a sample of spiral galaxies, it is found that most galaxies lie close to a power-law relation (the Tully-Fisher relation): $$L=AV_{\mathrm{max}}^\alpha ,$$ (1) where $`\alpha `$ is the slope, and $`A`$ is the ‘zero–point’. The observed value of $`\alpha `$ ranges from about 2.5 to about 4; both $`\alpha `$ and $`A`$ may depend on the waveband (Strauss & Willick 1995 and references therein). In the $`I`$-band, a recent determination by Giovanelli et al. (1997) gives $$M_I-5\mathrm{log}h=-21.00-7.68\left(\mathrm{log}W-2.5\right),$$ (2) where $`W\simeq 2V_{\mathrm{max}}`$ is the $`21\mathrm{cm}`$ hydrogen line width, and the corresponding Tully-Fisher slope is $`\alpha =7.68/2.5\approx 3.1`$. The observed Tully-Fisher relation is quite tight: for a fixed $`V_{\mathrm{max}}`$ the spread in the absolute magnitude is smaller than 1 magnitude. It is possible that the small scatter is caused by observational errors and that the intrinsic scatter could be even smaller. Because of its tightness, the Tully-Fisher relation can be used as a distance indicator to spiral galaxies. But the tightness also gives stringent constraints on the formation and evolution of disk galaxies. Since the typical rotation velocity (such as $`V_{\mathrm{max}}`$) is related to the total gravitational mass, while $`L`$ is related to the total number of stars, the position of a particular galaxy in the $`L`$–$`V_\mathrm{c}`$ plane depends both on the efficiency for gas to be converted into stars and on the mass distribution in the galaxy. Real galaxy disks are observed to cover two orders of magnitude in luminosity and in surface luminosity density, and with a variety of rotation curves. The question arises whether the observed Tully-Fisher relation requires the conspiracy of some very special initial conditions for disk formation or whether it has a more generic origin. Whatever the origin is, a successful model should explain both the tight Tully-Fisher relation and the diversity of the disk population. The current standard scenario of disk formation assumes that galaxy disks form as gas cools in dark matter haloes (e.g. Fall & Efstathiou 1980; Dalcanton, Spergel & Summers 1997; Mo, Mao & White 1998, hereafter MMW). In this scenario, the mass of a disk is determined by the mass of its host halo together with the efficiency with which the halo gas settles into a disk; the size of a disk is determined by the angular momentum of the disk material initially acquired from the tidal field of the cosmic density field, together with the processes that can change the disk angular momentum; and the rotation curve of a disk is determined by the density profile of its host halo and the interaction between the halo and disk components. Detailed modelling shows that this scenario of disk formation, combined with current theory of cosmogony, is quite powerful in interpreting observational data on galaxy disks (e.g. Dalcanton et al.
1997, MMW, van den Bosch 1998, 2000; Avila-Reese, Firmani & Hernandez 1998; Mao, Mo & White 1998; Mao & Mo 1998; Heavens & Jimenez 1999; Mo, Mao & White 1999; Syer, Mao & Mo 1999; Firmani & Avila-Reese 2000). In particular, MMW showed that disk formation in current CDM models can explain many observations on galaxy disks, provided that the following conditions are fulfilled: (1) dark haloes are not very concentrated; (2) only part of the gas in a dark halo settles into a disk; (3) disk material does not lose much of its angular momentum when it settles into a disk; (4) present disks form quite late, at redshifts $`z\lesssim 1`$. Numerical simulations have also been used to study disk formation in CDM models (e.g. Weil, Eke & Efstathiou 1998; Sommer-Larsen, Gelato & Vedel 1999; Koda, Sofue & Wada 1999; Navarro & Steinmetz 2000b). One might hope that such simulations can be used to check the assumptions made in the standard model. However, there is an outstanding problem at present. In many of these simulations, it is found that protogalactic gas is mostly in dense clumps. As such clumps sink towards the halo centre to make a disk, they lose much of their angular momentum to the dark halo due to dynamical friction, and the resulting disk is much too small to match any realistic galaxy disk. One possibility to solve this problem is to assume that some feedback effects can keep the protogalactic gas in a diffused form so that the effect of dynamical friction is reduced. Such a process has not been very successfully implemented in current models of disk formation, but is required in order to make progress in the standard framework. Recent high-resolution N-body simulations of dark haloes (Moore et al. 1999a,b; Jing & Suto 2000; Springel et al. 2000) also expose disk formation in the CDM models to another problem: CDM haloes may be too concentrated to match the observed rotation curves of disk galaxies (e.g., Navarro 1998). This mismatch suggests that some modifications to the CDM cosmogony might be needed. In this paper we take the attitude that the main idea of the standard scenario for disk formation is correct but some modifications of earlier models are required. As discussed above, disk formation in this scenario involves the following important aspects: (1) density profiles of dark haloes, (2) the fraction of halo gas that can settle into a disk, (3) the specific angular momentum of disk material relative to halo material, and (4) the assembly times of present disks. We will take an empirical approach towards all these different aspects. Our goal is to find a simple model within the standard scenario, so that the most important properties of the disk population, such as the Tully-Fisher relation, can be explained without invoking subtle assumptions. The formalism to be used is very similar to that in MMW, but with substantially relaxed model assumptions. One of our main conclusions is that the Tully-Fisher relation is nothing mysterious but a generic result of the gravitational interaction between the disk and the halo in systems with properties close to that predicted by the current model. Although this conclusion has already been reached in MMW and Dalcanton et al. (1997), here we examine the process in more detail. Another main result of this paper is that the halo profile required for explaining the Tully-Fisher relation is less concentrated than that predicted by conventional CDM models and that the required concentration of galactic haloes does not depend strongly on halo mass.
This gives strong constraints on the properties of dark matter. In fact, our results cannot be explained by some of the recent proposals for resolving the conflict between conventional CDM models and the halo profile of faint galaxies. Assuming that the shallow profile is due to dark matter self-interaction (either scattering or annihilation), we discuss the implication of our results for the mass and cross section of the dark matter particles. ## 2 Simple Consideration ### 2.1 Self–Gravitating Disks To start with, let us consider a simple model where galaxy disks are self-gravitating and have exponential surface-density profiles: $$\mathrm{\Sigma }(R)=\mathrm{\Sigma }_0\mathrm{exp}\left(-R/R_\mathrm{d}\right),$$ (3) where the central surface density $`\mathrm{\Sigma }_0`$ and the disk scale-length $`R_\mathrm{d}`$ are related to the disk mass by $`M_\mathrm{d}=2\pi \mathrm{\Sigma }_0R_\mathrm{d}^2`$. The rotation curve of such a disk is given by $$V_\mathrm{d}^2(R)=4\pi G\mathrm{\Sigma }_0R_\mathrm{d}y^2\left[I_0(y)K_0(y)-I_1(y)K_1(y)\right],\quad y\equiv \frac{R}{2R_\mathrm{d}},$$ (4) where $`I_n`$ and $`K_n`$ are modified Bessel functions of the first and second kinds (Freeman 1970). This rotation curve peaks at $`R\approx 2.16R_\mathrm{d}`$, with a maximum rotation velocity given by $`V_{\mathrm{max}}^2\approx 2.5G\mathrm{\Sigma }_0R_\mathrm{d}`$. Assuming a disk mass-to-light ratio $`\mathrm{\Upsilon}_\mathrm{d}\equiv M_\mathrm{d}/L_\mathrm{d}`$, this relation can be written as $$L_\mathrm{d}=B\left(\frac{V_{\mathrm{max}}}{200\mathrm{km}\mathrm{s}^{-1}}\right)^4\left(\frac{I_0}{100\mathrm{L}_{\odot}\mathrm{pc}^{-2}}\right)^{-1}\left(\frac{\mathrm{\Upsilon}_\mathrm{d}}{\mathrm{\Upsilon}_{\odot}h}\right)^{-2},$$ (5) where $`\mathrm{\Upsilon}_{\odot}\equiv \mathrm{M}_{\odot}/\mathrm{L}_{\odot}`$, $`I_0\equiv \mathrm{\Sigma }_0/\mathrm{\Upsilon}_\mathrm{d}`$ is the disk central luminosity density, and $$B\approx 8.5\times 10^{11}h^{-2}\mathrm{L}_{\odot}.$$ (6) Equation (5) looks quite similar to the observed relation (1). But the amplitude is too high compared with the observed Tully-Fisher relation. To see this, we use equation (2) to obtain $$L_I(V_{\mathrm{max}}=200\mathrm{km}\mathrm{s}^{-1})\approx 2.2\times 10^{10}h^{-2}\mathrm{L}_{\odot ,I}.$$ (7) The typical scale-length of normal galaxies with $`V_{\mathrm{max}}=200\mathrm{km}\mathrm{s}^{-1}`$ is about $`3.5h^{-1}\mathrm{kpc}`$ (Courteau 1996, 1997; de Jong 1996). The implied typical disk surface brightness is therefore $`I_0\approx 300\mathrm{L}_{\odot ,I}\mathrm{pc}^{-2}`$. This surface luminosity density is slightly higher than the median value of the Freeman disk, $`\mu _B=21.7`$ (Freeman 1970), with a colour correction $`(B-I)=1.8`$ (de Jong 1996). Since the observed colours of disk galaxies are quite uniform, the stellar mass-to-light ratio should not vary significantly among normal disks with similar luminosity. Based on various considerations – stellar population synthesis (e.g. de Jong 1996), stellar counts and kinematics in the solar neighborhood (Kuijken & Gilmore 1989; Gould, Bahcall & Flynn 1996), and kinematics of external galaxies (Bottema 1997) – the stellar mass-to-light ratio of galaxy disks in the $`I`$-band is about $`2h`$. Since the masses of normal disks are dominated by stars, we may write $`\mathrm{\Upsilon}_\mathrm{d}\approx 2h`$ in the $`I`$-band. Inserting these values of $`I_0`$ and $`\mathrm{\Upsilon}_\mathrm{d}`$ into equation (5), we see that the predicted luminosity at $`V_{\mathrm{max}}=200\mathrm{km}\mathrm{s}^{-1}`$ is about 3 times as high as that observed.
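As a numerical cross-check of this factor of 3, equations (5)–(7) can be evaluated directly; the sketch below assumes $`G=4.301\times 10^{-3}`$ pc (km s<sup>-1</sup>)<sup>2</sup> M<sub>⊙</sub><sup>-1</sup> and the fiducial $`I_0`$ and $`\mathrm{\Upsilon}_\mathrm{d}`$ quoted above:

```python
import math

# Check the factor-of-3 mismatch between the self-gravitating-disk prediction
# and the observed I-band Tully-Fisher zero-point, using numbers from the text.
G = 4.301e-3            # pc (km/s)^2 / M_sun
V_max = 200.0           # km/s
I0 = 300.0              # L_sun / pc^2 (I band)
upsilon = 2.0           # Upsilon_d in units of Upsilon_sun * h

# Self-gravitating exponential disk: V_max^2 ~ 2.5 G Sigma_0 R_d and
# L_d = 2 pi Sigma_0 R_d^2 / Upsilon_d, with Sigma_0 = I0 * Upsilon_d.
L_pred = 2.0 * math.pi * V_max**4 / (2.5**2 * G**2 * I0 * upsilon**2)  # h^-2 L_sun
L_obs = 2.2e10          # h^-2 L_sun, equation (7)
print(f"predicted/observed = {L_pred / L_obs:.1f}")   # ~3
```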
There are two possibilities to explain this factor of 3. First, it might be that the value of $`\mathrm{\Upsilon}_\mathrm{d}`$ is closer to $`3h`$ than to $`2h`$. Given the uncertainty in $`\mathrm{\Upsilon}_\mathrm{d}`$, such a moderate increase may be allowed. In this case, the observed Tully-Fisher zero-point would be consistent with the assumption that galaxy disks are self-gravitating at the radii of maximum rotation. The second possibility is that the value of $`\mathrm{\Upsilon}_\mathrm{d}`$ is close to $`2h`$, but about half of the gravitational force responsible for the maximum rotation actually comes from an extra mass component, i.e., a dark halo. The second possibility is much more likely, not only because dark haloes are required to explain the flat rotation curves of spiral galaxies (e.g. Rubin, Ford & Thonnard 1980; Persic & Salucci 1988) and to stabilize a thin disk (e.g. Efstathiou, Lake & Negroponte 1982), but also because a model with disks dominating $`V_{\mathrm{max}}`$ cannot be made consistent with the small Tully-Fisher scatter. This can be seen from equation (5): if the mass-to-light ratio $`\mathrm{\Upsilon}_\mathrm{d}`$ is assumed not to change significantly among galaxies with similar $`V_{\mathrm{max}}`$ (probably a good assumption given that their colours are quite uniform), the scatter in $`L_\mathrm{d}`$ is the same as that in $`I_0`$ for given $`V_{\mathrm{max}}`$. The observed range in surface brightness is about 1.5 magnitudes for normal galaxies and is even larger if low surface-brightness galaxies are included (see Bothun, Impey & McGaugh 1997). The implied scatter is therefore much larger than that observed (see also Courteau & Rix 1999). Thus, the observed Tully-Fisher relation is not consistent with the assumption that disk rotation is dominated by disk gravity at the radius of maximum rotation. ### 2.2 The Effects of Dark Haloes If a dark halo contributes significantly to $`V_{\mathrm{max}}`$ in a typical disk galaxy, we want to know whether the presence of dark haloes can also help to explain the zero-point and small scatter in the Tully-Fisher relation, given that disks have a large spread in their surface brightness. Taking into account the gravity of a dark halo (assumed to be spherically symmetric), we should replace equation (5) by $$L_\mathrm{d}=B\left(\frac{V_{\mathrm{max}}}{200\mathrm{km}\mathrm{s}^{-1}}\right)^4\left(\frac{\mathrm{\Upsilon}_\mathrm{d}}{\mathrm{\Upsilon}_{\odot}h}\right)^{-2}\left(\frac{I_0}{100\mathrm{L}_{\odot}\mathrm{pc}^{-2}}\right)^{-1}\left(\frac{V_\mathrm{d}^2}{V_{\mathrm{max}}^2}\right)^2,$$ (8) where the total rotation velocity at the peak of the rotation curve is a sum in quadrature of contributions from the disk and the halo: $$V_{\mathrm{max}}^2=V_\mathrm{d}^2+V_\mathrm{h}^2.$$ (9) For the halo density profiles to be considered in the following, the rotation curves typically reach a maximum near $`3R_\mathrm{d}`$ where the disk contribution is also near its peak. In what follows, we will identify the rotation velocity at $`3R_\mathrm{d}`$ to be the one in the Tully-Fisher relation. For simplicity, we will still denote this velocity by $`V_{\mathrm{max}}`$.
Inserting the values of $`I_0`$ and $`\mathrm{\Upsilon}_\mathrm{d}`$ inferred in the last subsection into equation (8), we find that the ratio between the predicted and observed Tully-Fisher zero-points is $$\mathcal{R}(V_{\mathrm{max}}=200\mathrm{km}\mathrm{s}^{-1})\equiv \frac{L_{\mathrm{d},\mathrm{predicted}}(V_{\mathrm{max}}=200\mathrm{km}\mathrm{s}^{-1})}{L_{\mathrm{d},\mathrm{observed}}(V_{\mathrm{max}}=200\mathrm{km}\mathrm{s}^{-1})}\approx 1.0\times \left(\frac{\mathrm{\Upsilon}_\mathrm{d}}{2h\mathrm{\Upsilon}_{\odot}}\right)^{-2}\left(\frac{I_0}{300\mathrm{L}_{\odot ,I}\mathrm{pc}^{-2}}\right)^{-1}\left[\frac{(V_\mathrm{d}/V_{\mathrm{max}})^2}{0.5}\right]^2.$$ (10) We see that $`V_\mathrm{d}^2/V_{\mathrm{max}}^2\approx 0.5`$ is required for the model prediction to match the observation. In other words, dark haloes must contribute about half of the gravitational force at the radius of maximum rotation for a typical disk galaxy with $`V_{\mathrm{max}}\sim 200\mathrm{km}\mathrm{s}^{-1}`$. To see if the presence of a halo also helps to reduce the scatter, let us consider a ‘typical’ disk with mass $`M_\mathrm{d}`$ and surface density $`I_0`$. As before we assume $`\mathrm{\Upsilon}_\mathrm{d}`$ to be invariant from galaxy to galaxy, so that $`M_\mathrm{d}`$ is equivalent to $`L_\mathrm{d}`$ and $`I_0`$ is equivalent to $`\mathrm{\Sigma }_0`$. Now suppose we increase the disk mass from $`M_\mathrm{d}`$ to $`(1+\epsilon )M_\mathrm{d}`$ in such a way that disk scale-length is fixed, i.e. we increase the disk mass without changing the disk concentration in the halo. In this case, both $`I_0`$ and $`V_\mathrm{d}^2`$ are increased by a factor of $`(1+\epsilon )`$. Suppose such an increase in $`M_\mathrm{d}`$ induces a change $`\delta V_\mathrm{h}^2`$ in the halo contribution; the $`\mathcal{R}`$ factor defined in equation (10) then becomes $$\mathcal{R}^{}=\frac{(1+\epsilon )\mathcal{R}}{(1+\epsilon ^{})^2}\quad \text{with}\quad \epsilon ^{}\equiv \frac{\epsilon V_\mathrm{d}^2+\delta V_\mathrm{h}^2}{V_{\mathrm{max}}^2}.$$ (11) If the maximum rotation is dominated by the disk, then $`\epsilon ^{}\approx \epsilon `$ and $`\mathcal{R}^{}\approx \mathcal{R}/(1+\epsilon )`$, i.e. the Tully-Fisher zero-point is reduced by a factor of $`(1+\epsilon )`$. In this case, we recover the result for self-gravitating disks. If, on the other hand, disk gravity is negligible, then $`\epsilon ^{}\approx 0`$ and $`\mathcal{R}^{}\approx (1+\epsilon )\mathcal{R}`$, i.e. the Tully-Fisher zero-point is enhanced by a factor of $`(1+\epsilon )`$. In this halo-dominated case, the disk rotation curve is independent of the disk mass, and so the Tully-Fisher zero-point is directly proportional to the disk mass. Thus, somewhere between the disk dominating and the halo dominating, the Tully-Fisher zero-point will not be altered by the change of $`M_\mathrm{d}`$. In fact, if both $`\epsilon `$ and $`\epsilon ^{}`$ are small, and if $`\delta V_\mathrm{h}^2`$ is neglected, then $`\mathcal{R}^{}=\mathcal{R}`$ for $`(V_\mathrm{d}/V_{\mathrm{max}})^2\approx 0.5`$, i.e. the Tully-Fisher zero-point does not depend on $`M_\mathrm{d}`$ if the disk contributes about half of the gravitational force at the maximum rotation. As another example, if we increase $`I_0`$ (or $`\mathrm{\Sigma }_0`$) by a factor of $`(1+\epsilon )`$ but keep $`M_\mathrm{d}`$ unchanged, i.e. if we increase the disk concentration in the halo without changing the mass, then we have for small $`\epsilon `$, $$\mathcal{R}^{}=\frac{\mathcal{R}}{(1+\epsilon ^{})^2}\quad \text{where}\quad \epsilon ^{}\equiv \frac{(\epsilon /2)V_\mathrm{d}^2+\delta V_\mathrm{h}^2}{V_{\mathrm{max}}^2}.$$ (12) So, $`\mathcal{R}^{}\approx \mathcal{R}`$ for halo-dominating and $`\mathcal{R}^{}\approx \mathcal{R}/(1+\epsilon )`$ for disk-dominating. The above simple arguments suggest that the presence of dark haloes can help to reduce the Tully-Fisher scatter.
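A short numerical sweep of equation (11) illustrates this compensation; the sketch below sets $`\delta V_\mathrm{h}^2=0`$, i.e. it deliberately ignores the halo response:

```python
# Change in the Tully-Fisher zero-point when the disk mass is raised by a
# factor (1 + eps) at fixed scale-length, from eq. (11) with delta V_h^2 = 0.
eps = 0.1
for fd in (0.0, 0.25, 0.5, 0.75, 1.0):      # fd = (V_d / V_max)^2
    eps_p = eps * fd                        # eps' of eq. (11)
    ratio = (1.0 + eps) / (1.0 + eps_p)**2  # R'/R
    print(f"(V_d/V_max)^2 = {fd:.2f}:  R'/R = {ratio:.3f}")
# The zero-point shift vanishes near (V_d/V_max)^2 = 0.5, as argued above.
```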
However, the details depend on $`\delta V_\mathrm{h}^2`$, the halo response to the change in the disk, which we will study in the following section. ## 3 Detailed Modelling In order to examine in more detail the constraints provided by the Tully-Fisher relation on disk formation, we consider exponential disks in realistic dark haloes. Our modelling here follows the disk formation model of MMW. The reader is referred to that paper for details; here we only repeat the essentials of the model. Briefly, after the initial protogalactic collapse the gas and dark matter are assumed to be uniformly mixed in a virialized object. As a result of dissipative and radiative processes, the gas component gradually settles into a disk. We assume that the mass of this disk is a fraction $`m_\mathrm{d}`$ of the halo mass, and that its specific angular momentum is $`j_\mathrm{d}`$ times that of the dark halo. If the mass profile of the disk is taken to be exponential, and if the dark halo responds to the growth of the disk adiabatically (see Barnes & White 1984 and Navarro & Steinmetz 2000b for tests of the validity of this assumption), then $`\mathrm{\Sigma }_0`$, $`R_\mathrm{d}`$ and the galaxy’s rotation curve are determined uniquely. Specifically, $$R_\mathrm{d}=\frac{1}{\sqrt{2}}\left(\frac{j_\mathrm{d}}{m_\mathrm{d}}\right)\lambda r_\mathrm{h}F_R,\quad \mathrm{\Sigma }_0=\frac{m_\mathrm{d}M_\mathrm{h}}{2\pi R_\mathrm{d}^2},\quad V_{\mathrm{max}}=V_\mathrm{h}F_V,$$ (13) where $`\lambda `$ is the spin parameter of the halo, $`M_\mathrm{h}`$, $`V_\mathrm{h}`$ and $`r_\mathrm{h}`$ are the mass, circular velocity and virial radius of the halo before responding to disk gravity, and $`F_R`$ and $`F_V`$ are factors which depend on the halo profile and disk action. For a given cosmology, $`M_\mathrm{h}`$, $`V_\mathrm{h}`$ and $`r_\mathrm{h}`$ are related by $$r_\mathrm{h}=\frac{V_\mathrm{h}}{10H(z)},\quad M_\mathrm{h}=\frac{V_\mathrm{h}^3}{10GH(z)},$$ (14) where $`H(z)`$ is the Hubble constant at redshift $`z`$ (see MMW for details). Let us consider a case where the halo density profiles have the form $$\rho (r)=\rho _0\frac{r_\mathrm{c}^3}{(r+r_\mathrm{c})^3}=\frac{V_\mathrm{h}^2}{4\pi Gr^2}\frac{c}{\left[\mathrm{ln}(1+c)-c(1+3c/2)/(1+c)^2\right]}\frac{r^2/r_\mathrm{c}^2}{(r/r_\mathrm{c}+1)^3},$$ (15) where $`r_\mathrm{c}=r_\mathrm{h}/c`$ is a core radius and the quantity $`c`$ is known as the halo concentration factor. The above profile can be compared with the NFW profile: $$\rho (r)=\rho _0\frac{r_\mathrm{s}^3}{r(r+r_\mathrm{s})^2}=\frac{V_\mathrm{h}^2}{4\pi Gr^2}\frac{c_{\mathrm{NFW}}}{\left[\mathrm{ln}(1+c_{\mathrm{NFW}})-c_{\mathrm{NFW}}/(1+c_{\mathrm{NFW}})\right]}\frac{r/r_\mathrm{s}}{(r/r_\mathrm{s}+1)^2},$$ (16) where $`r_\mathrm{s}=r_\mathrm{h}/c_{\mathrm{NFW}}`$ is a scale radius (Navarro, Frenk & White 1997, hereafter NFW), and $`c_{\mathrm{NFW}}\equiv r_\mathrm{h}/r_\mathrm{s}`$ describes the halo concentration in this profile. Although profiles (15) and (16) behave differently at small radii, the two can be made to have similar global properties by properly choosing the values of $`c`$ and $`c_{\mathrm{NFW}}`$. For example, galactic-sized haloes with $`V_\mathrm{h}\sim 200\mathrm{km}\mathrm{s}^{-1}`$ in CDM models have $`c_{\mathrm{NFW}}\approx 20`$ (Moore et al. 1999b; see also the value quoted in Navarro & Steinmetz 2000a).
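The global similarity of the two parametrizations, anticipated in the next sentence, can be checked from the enclosed-mass factors appearing in equations (15) and (16); the following sketch compares the implied circular-velocity curves for the concentrations discussed in the text (the radii sampled are arbitrary):

```python
import numpy as np

# Compare circular-velocity curves of the cored profile (15) and the NFW
# profile (16), using the enclosed-mass functions implied by each.
def mu_core(y):   # cored profile, eq. (15): M(r) up to constant factors
    return np.log1p(y) - y * (1.0 + 1.5 * y) / (1.0 + y)**2

def mu_nfw(y):    # NFW profile, eq. (16)
    return np.log1p(y) - y / (1.0 + y)

c_core, c_nfw = 40.0, 20.0
x = np.array([0.05, 0.1, 0.2, 0.5, 1.0])          # r / r_h
vc_core = np.sqrt(mu_core(c_core * x) / (x * mu_core(c_core)))   # V_c / V_h
vc_nfw  = np.sqrt(mu_nfw(c_nfw * x) / (x * mu_nfw(c_nfw)))
for xi, a, b in zip(x, vc_core, vc_nfw):
    print(f"r/r_h = {xi:4.2f}:  V_c/V_h = {a:.2f} (core, c=40)  {b:.2f} (NFW, c=20)")
# The two curves agree to within a few per cent over this range of radii.
```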
As this comparison suggests, the NFW profile with $`c_{\mathrm{NFW}}\approx 20`$ looks similar to profile (15) with $`c=40`$ over a large range of radii. Since our results depend mainly on the halo concentration and only weakly on the details of the halo profile, we will present our results based on profile (15) with $`c`$ as a free parameter, but we will also refer to the results obtained for the NFW profile. Disk formation in this kind of dark halo is described in considerable detail in MMW, and the procedure outlined there can be used to calculate $`F_V`$ and $`F_R`$ as functions of $`c`$, $`j_\mathrm{d}\lambda `$ and $`m_\mathrm{d}`$ for a given profile. From these we can calculate the effects of changing model parameters on the Tully-Fisher relation and the disk size (or surface density). Figures 1 and 2 show the results of such calculations. Several important conclusions can be reached from these results. As the value of $`m_\mathrm{d}`$ changes from 0.01 to 0.1, the central surface density $`I_0`$ changes by a factor of about 30, but the Tully-Fisher zero-point [described by the factor $`\mathcal{R}`$ defined in equation (10)] changes only by a factor of about 3 for any given $`\lambda `$. The value of $`m_\mathrm{d}`$ must be smaller than the overall baryon fraction in the universe, $`\mathrm{\Omega }_{\mathrm{B},0}/\mathrm{\Omega }_0`$ (which is, according to cosmic nucleosynthesis, about $`0.05`$ for an Einstein-de Sitter universe with $`h=0.5`$, and about $`0.1`$ for a low-density universe with $`\mathrm{\Omega }_0=0.3`$ and $`h=0.7`$). Also $`m_\mathrm{d}`$ should not be much smaller than 0.01, because such disks, if they exist, may not be able to form enough stars [due to the failure to meet the Toomre (1964) instability criterion] to be included in current Tully-Fisher samples. Therefore, any reasonable change in $`m_\mathrm{d}`$ will not introduce a large Tully-Fisher scatter. This is somewhat contrary to intuition. If disk gravity were negligible, an increase of $`m_\mathrm{d}`$ by a factor of $`10`$ would increase the Tully-Fisher zero-point by a factor of about 10; if, on the other hand, halo gravity were negligible, an increase of $`m_\mathrm{d}`$ by a factor of 10 would decrease the Tully-Fisher zero-point by a factor similar to that in $`I_0`$, i.e. of about 30. The reason for the insensitivity of the Tully-Fisher zero-point to the change of $`m_\mathrm{d}`$ is that the interaction between the disk and the halo acts to reduce the scatter, as is demonstrated in Section 2.2 by simple analytic arguments. Indeed, the Tully-Fisher zero-point is almost independent of $`m_\mathrm{d}`$ in the range where $`V_\mathrm{d}^2\approx V_{\mathrm{max}}^2/2`$; it increases with $`m_\mathrm{d}`$ at $`V_\mathrm{d}^2<V_{\mathrm{max}}^2/2`$ and decreases with $`m_\mathrm{d}`$ at $`V_\mathrm{d}^2>V_{\mathrm{max}}^2/2`$. This behaviour is exactly what is expected from the simple model discussed in Section 2.2. A similar effect is found in the simulations of Navarro & Steinmetz (2000b). In their simulations, Navarro & Steinmetz defined a Tully-Fisher relation using disk rotation velocities measured at large radii which are proportional to the halo circular velocities. Because of the difference in definition, it is not straightforward to relate our results to their simulation results. The distribution in the spin parameter $`\lambda `$ also introduces scatter into the Tully-Fisher zero-point. Since changing $`\lambda `$ is equivalent to changing the disk concentration in a halo, the effect is smaller for smaller $`m_\mathrm{d}`$, as implied in equation (12).
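The tail fractions of the spin-parameter distribution quoted in the next paragraph are easily checked by direct sampling; a minimal sketch, taking the median and dispersion from the N-body results cited below:

```python
import numpy as np

# Sample the log-normal spin-parameter distribution (median 0.05,
# sigma_ln(lambda) = 0.5) and check the tail fractions.
rng = np.random.default_rng(42)
lam = np.exp(rng.normal(np.log(0.05), 0.5, size=100_000))
print(f"P(lambda >= 0.1)   = {np.mean(lam >= 0.1):.2f}")    # ~0.08
print(f"P(lambda <= 0.025) = {np.mean(lam <= 0.025):.2f}")  # ~0.08
# Both tails hold close to the ~10 per cent quoted in the text.
```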
However, the scatter induced by the $`\lambda `$-distribution is not expected to be large. From N-body simulations we know that the spin parameters of dark haloes have a log-normal distribution with a median value $`\overline{\lambda }\approx 0.05`$ and dispersion $`\sigma _{\mathrm{ln}\lambda }\approx 0.5`$ (see eq. in MMW). If disks have specific angular momenta similar to those of their dark haloes, then only 10 percent of the systems have $`\lambda \geq 0.1`$ and 10 percent of them have $`\lambda \leq 0.025`$. Such a spread in $`\lambda `$ does not induce too large a scatter in the Tully-Fisher relation, as shown in MMW and as can also be seen from Figures 1 and 2. However, if disks could lose a large fraction of their angular momenta so that their effective spins were much lower than 0.05, they would be too compact and the Tully-Fisher zero-point would be too low. Notice that disks with large $`m_\mathrm{d}`$ and small $`\lambda `$, which are predicted to have systematically low Tully-Fisher zero-point, may be unstable, and so the scatter induced by the $`\lambda `$-distribution may be reduced if such systems do not form real disks. But significant loss of angular momentum would then lead to too few stable disks. Another important result shown in Figures 1 and 2 is that low surface-brightness disks (formed in systems with high $`\lambda `$ and low $`m_\mathrm{d}`$) have a Tully-Fisher zero-point similar to that of ‘normal’ disks with a surface-brightness close to that of a Freeman disk. Thus, the observational fact that low surface-brightness galaxies obey a Tully-Fisher relation similar to that of normal spiral galaxies (Zwaan et al. 1995, but also see O’Neil, Bothun, & Schombert 1999) can be explained here without invoking any subtle assumptions. In reality, however, some offset in the Tully-Fisher zero-point is expected for the low surface-brightness population. Some low surface-brightness disks may contain more cold gas (because of low star formation efficiency) than normal galaxies (e.g. McGaugh & de Blok 1997; O’Neil, Bothun, & Schombert 1999), and so their Tully-Fisher zero-point is expected to be lower because of their higher disk mass-to-light ratios. Our results suggest that low surface-brightness galaxies should obey the same Tully-Fisher relation as high surface-brightness galaxies, if the variation of the gas fraction is properly taken into account. This is in agreement with the observational results that the relation between rotation velocity and disk mass is tighter than the relation between rotation velocity and luminosity (e.g. McGaugh et al. 2000). The change of halo concentration $`c`$ has quite a large effect on the predicted zero-point. To match the observations, the value of $`c`$ is required to be low, $`c\approx 7`$. A similar match can be obtained for the NFW profile with $`c_{\mathrm{NFW}}\approx 3`$. The implied halo concentration is therefore much lower than the value ($`c_{\mathrm{NFW}}\approx 20`$) obtained from N-body simulations of CDM models for galactic-sized haloes. Since a lower $`c`$ value means lower concentration of haloes, the result suggests that real galaxy haloes must be much less concentrated than CDM haloes. This low value of $`c`$ is in fact required by the rotation-curve shapes of galaxies with low luminosity and low surface brightness (e.g. Navarro 1998). The result here suggests that haloes with the same low concentrations are also needed for normal galaxies. Another source that can cause scatter in the Tully-Fisher relation is the redshift distribution of the disk assembly.
As one can see from equation (14), for a given halo circular velocity $`V_\mathrm{h}`$, haloes at redshift $`z`$ are lighter by a factor of $`H(z)/H(0)`$ than at $`z=0`$. If the other parameters (i.e. $`m_\mathrm{d}`$, $`\lambda `$, $`c`$, and $`\mathrm{\Upsilon}_\mathrm{d}`$) are kept the same, disks in haloes with the same $`V_\mathrm{h}`$ have the same $`V_{\mathrm{max}}`$ independent of redshift, and so disks assembled at higher redshifts would have lower luminosity. The effect could be quite large. At $`z=1`$, $`H(z)/H(0)`$ is about 2.8 for an Einstein-de Sitter universe, and about 1.8 for a flat universe with $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Lambda }=0.7`$. This factor becomes 5.2 and 3.0 at $`z=2`$ for these two cosmologies. Thus, unless present disks have quite uniform formation times, the induced scatter would be too large. This problem was noticed by MMW, and they solve it by assuming most of the present disks to be assembled at $`z\lesssim 1`$. In this case, disk formation in a flat universe with $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Lambda }=0.7`$ can be made compatible with the observed Tully-Fisher zero-point and scatter (note that the halo concentration used in MMW is lower than that given by recent high-resolution N-body simulations, and that the disk instability criteria used there also act to increase the Tully-Fisher zero-point). Here we suggest another possibility. If haloes at higher redshift are less concentrated, one can see from Figure 2 that the redshift effect on the Tully-Fisher zero-point is reduced. In fact, if the halo concentration decreases with redshift $`z`$ as $`[H(z)]^{-\beta }`$, with $`\beta \approx 0.5`$–1, then the redshift effect can be removed almost completely for $`z\lesssim 2`$. This is shown in Figure 3. The dependence of $`c`$ on $`z`$ in the NFW model is quite weak, but recent simulation results by Bullock et al. (1999) show that a strong $`z`$-dependence of $`c`$ is possible. In the model where the observed Tully-Fisher zero-point is reproduced, the distribution in $`I_0`$ is broader than that of Freeman disks. In particular, the model predicts the existence of low surface-brightness galaxies in systems with high $`\lambda `$ and small $`m_\mathrm{d}`$. This is consistent with the fact that galaxy disks with a surface brightness lower than that of a Freeman disk are observed in deep photometric observations (Bothun et al. 1997). As one can also see from Figure 2, the contribution of the disk component to the maximum rotation varies significantly. Generally, the maximum rotation velocity is dominated by the disk component in systems with high disk surface brightness, and becomes halo dominated in low surface-brightness systems. There is still intense debate whether the observed disk galaxies are maximal or not (e.g., Bottema 1997; Courteau & Rix 1999; Debattista & Sellwood 1998; Englmaier & Gerhard 1999; Bosma 2000). For our own Galaxy, the observed disk scale-length is about $`3.5\mathrm{kpc}`$. The dark matter mass within a radius $`\sim 10\mathrm{kpc}`$ (which is about $`3R_\mathrm{d}`$) is about $`5\times 10^{10}\mathrm{M}_{\odot}`$ (e.g. Dehnen & Binney 1998). Using a rotation velocity of $`220\mathrm{km}\mathrm{s}^{-1}`$ at this radius, we have $`(V_\mathrm{d}/V_{\mathrm{max}})^2\approx 0.5`$, which is in good agreement with our prediction (see the bottom left panel in Figure 5). If the dark halo of the Milky Way were as concentrated as that predicted by CDM models, then the predicted value of $`(V_\mathrm{d}/V_{\mathrm{max}})^2`$ would be much smaller.
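For reference, the Milky Way estimate just quoted can be reproduced in a few lines ($`G`$ is taken in units of kpc (km s<sup>-1</sup>)<sup>2</sup> M<sub>⊙</sub><sup>-1</sup>; the input numbers are those given in the text):

```python
# Disk contribution to the rotation at ~3 R_d in the Milky Way, using
# M_dm(<10 kpc) = 5e10 M_sun and V(10 kpc) = 220 km/s from the text.
G = 4.301e-6            # kpc (km/s)^2 / M_sun
m_dm, r, v_tot = 5e10, 10.0, 220.0

v_halo_sq = G * m_dm / r                 # halo contribution, (km/s)^2
v_disk_sq = v_tot**2 - v_halo_sq         # remainder attributed to the disk
print(f"(V_d/V_max)^2 = {v_disk_sq / v_tot**2:.2f}")   # ~0.5, as quoted
```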
Navarro & Steinmetz (2000a) have used this to argue against CDM models. Figure 4 shows the ratio between $`V_{\mathrm{max}}`$ and the halo circular velocity $`V_\mathrm{h}`$. As one can see, the boost in the velocity is substantial in systems with high $`c`$, high $`m_\mathrm{d}`$ and low $`\lambda `$. For $`c=7`$, a significant boost occurs for $`m_\mathrm{d}>0.05`$ and $`\lambda <0.03`$, while for $`c=40`$ the boost is significant for all values of $`m_\mathrm{d}`$ and $`\lambda `$ because of the concentrated halo profile. Our discussion so far has been based on systems with a halo circular velocity $`V_\mathrm{h}=200\mathrm{km}\mathrm{s}^{-1}`$. Since the model described above also reproduces the Tully-Fisher slope (see MMW), the discussion is also valid for other values of $`V_\mathrm{h}`$.

As a summary, we show in Figure 5 disk properties versus $`V_{\mathrm{max}}`$ for a Monte-Carlo sample, in which the halo circular velocity varies uniformly between 50 and 250 $`\mathrm{km}\mathrm{s}^{-1}`$, $`m_\mathrm{d}`$ has a uniform distribution from 0.01 to 0.1, $`\lambda `$ has the log-normal distribution discussed above but with a lower cutoff at 0.03 (the small number of systems with $`\lambda <0.03`$ may produce disks that are too compact to be globally stable; see MMW for a discussion), the disk assembly redshift has a uniform distribution between $`z=0`$ and $`z=2`$, and the halo concentration $`c`$ changes with redshift as $`c=7[H_0/H(z)]^{1/2}`$. From Figure 5 it is clear that, although $`m_\mathrm{d}`$ and the formation redshift are allowed to vary over substantial ranges, the scatter around the Tully-Fisher relation is still compatible with observations. The predicted central luminosity density (top right) and disk sizes (bottom right) are also consistent with observations (Courteau 1996, 1997; de Jong 1996). The bottom left panel in Figure 5 shows the disk contribution to $`V_{\mathrm{max}}`$; it is quite clear that some systems are disk dominated while others are not. Disk domination of $`V_{\mathrm{max}}`$ preferentially occurs in systems with high $`m_\mathrm{d}`$ and low $`\lambda `$.

As mentioned above, although the results presented here are based on the special functional form (15) for the halo density profile, our conclusions are not altered if other reasonable forms are used. This is not surprising, because the quantities we are interested in here are global properties of the halo/disk systems. Thus, the most important requirement is that dark haloes have low concentrations, while the exact form of the halo profile is not stringently constrained by the global properties considered here. Detailed modelling of disk rotation curves is needed in order to see which model fares better.

## 4 Discussion

In this paper, we have applied the same formalism as in MMW, but with substantially relaxed assumptions, to study the properties of disk galaxies. We find that even if we allow the disk mass and formation redshift to vary substantially, the observed properties (including the Tully-Fisher relation, disk sizes and central luminosities) can still be reproduced. The Tully-Fisher relation is a generic result of the gravitational interaction between the disk and halo components in a disk/halo system with properties close to those predicted by current models. The model prediction is a relation between the disk mass and the maximum rotation velocity (the Tully-Fisher relation of mass). For a given halo concentration, this relation can be written $`M_\mathrm{d}\propto V_{\mathrm{max}}^\alpha `$ with $`\alpha \approx 3`$.
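The value $`\alpha \approx 3`$ can be made explicit with a one-line derivation. We assume here the standard virial convention used by MMW, in which a halo identified at redshift $`z`$ has $`M_\mathrm{h}=V_\mathrm{h}^3/[10GH(z)]`$, and we write $`V_{\mathrm{max}}=f(c,m_\mathrm{d},\lambda )V_\mathrm{h}`$ for the disk/halo boost factor:

```latex
% Sketch of the mass Tully-Fisher slope, assuming the MMW virial scaling.
\begin{equation*}
  M_\mathrm{d} \;=\; m_\mathrm{d} M_\mathrm{h}
  \;=\; \frac{m_\mathrm{d}\,V_\mathrm{h}^{3}}{10\,G\,H(z)}
  \;=\; \frac{m_\mathrm{d}}{10\,G\,H(z)\,f^{3}(c,m_\mathrm{d},\lambda)}\,
        V_{\mathrm{max}}^{3}\,,
\end{equation*}
% so M_d \propto V_max^3 (alpha = 3) as long as f does not vary
% systematically with V_h; a systematic c(V_h) changes the slope.
```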
If the halo concentration changes systematically with halo circular velocity, the value of $`\alpha `$ will differ from $`3`$. For example, if haloes with smaller circular velocities are systematically more concentrated, then the zero-point of the mass Tully-Fisher relation will decrease with decreasing halo circular velocity, leading to a higher value of $`\alpha `$. To predict the Tully-Fisher relation of light, we need to know the mass-to-light ratios of disks. Clearly, disks with higher mass-to-light ratios are predicted to have lower Tully-Fisher zero-points. Thus, if the disk mass-to-light ratio varies systematically with halo circular velocity, the slope of the Tully-Fisher relation for the light is expected to differ from that for the mass. These predictions can be tested by analysing the Tully-Fisher zero-points for galaxies with different disk mass-to-light ratios.

In order to reproduce the observed Tully-Fisher relation in the $`I`$ band, we have found that galactic haloes must have quite low concentrations. The requirement of low halo concentration is consistent with direct modelling of rotation curves, particularly for low surface-brightness galaxies; the low central density of dark matter can also explain why bars in galaxies seem to rotate rapidly (Debattista & Sellwood 1998). The required concentration is, however, lower than that found in numerical simulations (NFW; Jing & Suto 1999; Moore et al. 1999b; Springel et al. 2000).

The low halo concentration required by disk galaxies has many important observational consequences. For example, the lensing properties of disk galaxies may be different from those in models where concentrated profiles are assumed (e.g. Bartelmann & Loeb 1998). There are already a number of lenses that appear to be caused by disk galaxies, such as 2237+0305 (Huchra et al. 1984), 1600+434 (Jackson et al. 1995) and 0218+357 (Patnaik et al. 1993). None of these systems shows a central image, which implies that the core radii for these lensing disks may be fairly small. It would be very interesting to use these systems to put quantitative constraints on the core radius. Similarly, the lack of central images in elliptical lenses also suggests that the total density profile (baryons plus dark matter) is nearly singular in the central region, i.e. the core radius must be small (e.g. Kochanek 1996). An important question is whether the halo profiles of elliptical galaxies have core radii similar to those of disk galaxies. It is possible that the dark halo profiles in elliptical galaxies are significantly modified by dynamical processes during formation (e.g., by merging).

Recent high-resolution N-body simulations of the formation of cold dark matter haloes show that such haloes generally contain too many subclumps to match the number of dwarf galaxies observed in the Local Group. This happens because the CDM particles which form a dark halo were generally in progenitors with high central densities. As such progenitors merge to form a larger halo, their central parts can survive as subclumps (Moore et al. 1999a; Klypin et al. 1999; Springel et al. 2000). However, if the progenitors have lower concentrations, they are more likely to be destroyed by the merging process, and the number of subclumps in dark haloes may be reduced. Also, if dark haloes have flat central cores and if the core radius of a halo does not change much with redshift, we would expect a limiting redshift beyond which dark haloes may not be able to form in large numbers.
This may help to alleviate the overcooling problem in CDM models, where too much gas can cool in small haloes at high redshift, but it is constrained by the fact that large numbers of star-forming galaxies are observed at redshift $`z\sim 3`$ (e.g. Steidel et al. 1998). Clearly, many of these issues need thorough investigation before any definite conclusions can be drawn.

An equally important issue is the origin of the required profiles. Since highly-concentrated profiles are quite generic in conventional CDM models, the formation of haloes with low concentrations may require some modification of the properties of the dark matter particle (e.g. Spergel & Steinhardt 1999; Hogan & Dalcanton 2000) or some change in the power spectrum of initial perturbations (Kamionkowski & Liddle 1999). In the proposal of Spergel and Steinhardt, dark matter is assumed to be self-interacting. The initial hope was that collisions between dark matter particles could heat up the low-entropy material and thereby produce a shallower density profile. However, recent numerical simulations of the formation of collisional haloes, with dark matter treated as a fluid, show that the inner density profiles are even steeper than their collisionless counterparts (Moore et al. 2000; Yoshida et al. 2000). Collisional dark haloes might also be too spherical to match the elliptical shapes of clusters like MS2137-23 as inferred from gravitational lensing (Miralda-Escude 2000). But the situation is not yet clear: the simulations of Burkert (2000), taking into account the effect of a finite collisional cross-section, show that core-like structure can be produced. The question is then whether such models can produce the low concentrations required to explain the Tully-Fisher zero-point. As we have pointed out before, in order to match the observed Tully-Fisher relation it does not matter much whether we can get rid of the central cusps; it is the global halo concentration that matters.

In the proposal of Hogan and Dalcanton, galactic haloes are assumed to be dominated by warm dark matter with an initial velocity dispersion. In this case, the initial adiabat may produce a core radius which decreases with the halo circular velocity as $`r_\mathrm{c}\propto V_\mathrm{h}^{-1/2}`$. This is not favored by our results. The predicted relation between $`r_\mathrm{c}`$ and $`V_\mathrm{h}`$ implies that the halo concentration factor scales as $`r_\mathrm{h}/r_\mathrm{c}\propto V_\mathrm{h}^{3/2}`$. Thus, at a given redshift the concentration for a halo with $`V_\mathrm{h}=250\mathrm{km}\mathrm{s}^{-1}`$ is about 6 times larger than that for a halo with $`V_\mathrm{h}=75\mathrm{km}\mathrm{s}^{-1}`$. From the results shown in Fig. 1a and Fig. 2a we see that the resulting Tully-Fisher relation is much too shallow.

If dark matter self-interaction (either scattering or annihilation) is indeed responsible for the shallow profile of galactic haloes, then the Tully-Fisher relation can be used to constrain the mass and cross section of dark matter particles. Denote the particle mass by $`m_X`$ and the cross section times relative velocity by $`\sigma _X|v|`$. The collision rate per particle is $`\mathrm{\Gamma }=n_X\sigma _X|v|`$. Collisions are effective only in systems where $`\mathrm{\Gamma }^{-1}`$ is smaller than the Hubble time. This defines a critical density $$n_{\mathrm{crit}}=\frac{H(z)}{\sigma _X|v|},$$ (17) above which the effect of self-interaction is important.
The above critical density defines a characteristic radius ($`r_\mathrm{c}`$) in a dark halo: $`\rho (r_\mathrm{c})=m_Xn_{\mathrm{crit}}`$, where $`\rho (r)`$ is the halo density profile. Suppose that the halo profile before modification by self-interaction is $`\rho (r)=V_\mathrm{h}^2/(4\pi Gr^2)`$ near $`r_\mathrm{c}`$; then the characteristic radius can be written as $$r_\mathrm{c}=\frac{V_\mathrm{h}}{(4\pi Gm_Xn_{\mathrm{crit}})^{1/2}}=\frac{V_\mathrm{h}}{[4\pi GH(z)]^{1/2}}\left(\frac{\sigma _X|v|}{m_X}\right)^{1/2}.$$ (18) If the self-interaction of dark matter particles is to reduce the local density of dark matter particles, the characteristic radius $`r_\mathrm{c}`$ may be identified as a ‘core’ radius. The halo concentration is then $$c\equiv \frac{r_\mathrm{h}}{r_\mathrm{c}}=\left[\frac{\pi G}{25H(z)}\right]^{1/2}\left(\frac{\sigma _X|v|}{m_X}\right)^{-1/2}.$$ (19) Thus, if $`\sigma _X|v|`$ is velocity independent in the velocity range relevant for galactic haloes, the halo concentration $`c`$ is proportional to $`1/\sqrt{H(z)}`$, independent of $`V_\mathrm{h}`$. This is just the profile we want to explain the Tully-Fisher relation. In order to have a concentration $`c=7`$ \[assuming profile (15)\] at the present time, the mass and cross section should satisfy $$\frac{\sigma _X|v|}{m_X}\approx 10^{-16}h^{-1}\left(c/7\right)^{-2}\mathrm{cm}^3\mathrm{s}^{-1}\mathrm{GeV}^{-1}.$$ (20) The value of $`\sigma _X`$ implied is consistent with that obtained by Spergel & Steinhardt (1999) and Firmani et al. (2000) based on different arguments. Much work remains to be done to see if a consistent model can be found to fulfill this requirement.

## Acknowledgments

We thank Gerhard Börner, Andi Burkert, Ian Browne, Karsten Jedamzik, John McKean and Peter Wilkinson for helpful discussions and comments on the paper. We also thank our referee for a constructive report.
# Radio Frequency Interference

## 1 The RFI Challenge and Spectrum Management

If future telescopes like the SKA are developed with sensitivities up to 100 times greater than present sensitivities, it is quite likely that current regulations will not provide the necessary protection against interference. There is a range of experiments (e.g. redshifted hydrogen or molecular lines) which require use of arbitrary parts of the spectrum, but only at a few locations and at particular times, suggesting that a very flexible approach may be beneficial. Other experiments require very large bandwidths in order to achieve enough sensitivity. As shown in Figure 1, presently only 1-2% of the spectrum in the metre and centimetre bands is reserved for passive uses, such as radio astronomy (Morimoto 1993). In the millimetre band, much larger pieces of the spectrum are available for passive use, but the existing allocations are not necessarily at the most useful frequencies. Current regulations alone will be inadequate; we need technology as well as regulation. We cannot (and do not want to) impede the telecommunications revolution, but we can try to minimise its impact on passive users of the radio spectrum and maximise the benefits of technological advances. Further information on many of the topics discussed below is available on http://www.atnf.csiro.au/SKA/intmit/.

## 2 Classes of Interference

It is important to be clear about what we mean when we talk about interference. Radio astronomers make passive use of many parts of the spectrum legally allocated to communication and other services. As a result, many of the unwanted signals are entirely legal and legitimate. We will adopt the working definition that interference is any unwanted signal entering the receiving system. Interfering signals vary a great deal in their source and nature, which naturally leads to different mitigation approaches. Local sources of interference include things internal to telescope instruments, networking for IT systems, and general and special purpose digital processors in the observatory. Interference compliance testing, shielding, separate power circuits, and minimising nearby equipment are key steps that need to be taken to minimise this kind of interference. External interference may arise from fixed or moving sources. Not all methods of mitigation apply to both: in fact, methods that work well for fixed sources may not work at all for moving sources, due to problems like side-lobe rumble. Interference may be naturally occurring or human generated. Examples of naturally occurring interference include the ground, the sun, other bright radio sources, and lightning. Human generated interference may come from broadcast services (e.g. TV, radio), voice and data communications (e.g. mobile telephones, two-way radio, wireless IT networks), navigation systems (e.g. GPS, GLONASS), radar, remote sensing, military systems, electric fences, car ignitions, and domestic appliances (e.g. microwave ovens) (Goris 1998). The vast majority of these operate legally within their allocated bands, regulated by national authorities and the ITU (International Telecommunications Union). However, there are sources of interference, such as the Iridium mobile communications system, whose signals leak into bands protected for passive use. In this case, these interfering signals are $`10^{11}`$ times stronger than the signal from the early universe.
In the case of Australia, there is a single communications authority for the whole country and therefore for the whole continent. As a result, there is a single database containing information on the frequency, strength, location, etc. of every licensed transmitter (Sarkissian 2000). A key point, therefore, is that the modulation schemes and other characteristics of the vast majority of these signals are known. Their effect on radio telescopes is not only predictable, but can be modelled and used to excise the unwanted signals. Radio astronomy could deal with most terrestrial interfering signals by moving to a remote location, where the density and strength of unwanted signals is greatly reduced. As shown in Figure 2, this is getting more and more difficult, but there are still some possibilities. However, with the increasing number of space-borne telecom and other communications systems in low (and mid) Earth orbits, a new class of interference mitigation challenges is arising - radio astronomy can run, but it cannot hide! Several new aspects are introduced into the interference mitigation problem by this, including: rapid motion of the transmitter on satellites, more strong transmitters in dish side lobes and possibly in the primary beam, and different spectrum management challenges, because no place on Earth is free from interference from the sky.

## 3 RFI fundamentals

Undesired interfering signals and astronomy signals can differ (be orthogonal) in a range of parameters, including: frequency, time, position, polarisation, distance, coding, positivity, and multipath. It is extremely rare that interfering and astronomy signals do not possess some level of orthogonality in this 8-dimensional parameter space. We therefore need to develop sufficiently flexible signal processing systems to take advantage of the orthogonality and separate the signals. This is of course very similar to the kinds of problems faced by mobile communication services, which are being addressed with smart antennas and software radio technologies. Examples in radio astronomy to date include the use of the time or frequency phase space, or even better both, as in pulsar studies where the time/frequency dispersion relation can be used, and the requirement that signals are positive in very low frequency studies. Antenna arrays could take advantage of the position and distance (curvature of wavefront) phase space. Human generated interference is normally polarised, so unpolarised astronomical signals can be observed by measuring the unpolarised component $`(I-(U^2+Q^2+V^2)^{1/2})`$ (R. Fisher, private communication).

### 3.1 Mitigation Strategies and Issues

There is no silver bullet for detecting weak astronomical signals in the presence of strong undesired naturally occurring or human generated signals. Spectral bands allocated for passive use provide a vital window, which cannot be achieved in any other way. It is important to characterise the RFI so that the number, strength, bandwidth, duty cycle, spatial and frequency distributions, and modulation and coding schemes can all be used to advantage in modelling and mitigating RFI. Doing this at low frequencies gives greater sensitivity due to the effects of harmonic content and ease of propagation. In order to do this, the telescope and instruments must be calibrated to provide the best possible characterisation of interfering and astronomy signals.
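The polarisation split mentioned above is a one-line operation on measured Stokes parameters; a minimal sketch with synthetic numbers (not real telescope data):

```python
import numpy as np

# Estimate the unpolarised part of a signal from its Stokes parameters.
# Man-made interference is typically polarised; many astronomical
# signals are not, so I - (Q^2 + U^2 + V^2)^(1/2) suppresses the RFI.
def unpolarised_component(I, Q, U, V):
    return I - np.sqrt(Q**2 + U**2 + V**2)

# Toy example: 10 Jy total intensity, 5 Jy of it fully polarised RFI.
print(unpolarised_component(I=10.0, Q=3.0, U=4.0, V=0.0))  # -> 5.0
```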
A/D converters must be fast enough to give sufficient bandwidth, with a sufficient number of bits so that both strong and weak signals are well sampled. There is a range of techniques that can make passive use of other bands possible, and in general these need to be used in a progressive or hierarchical way:

- Removal at the source is obviously best, but that is often not possible,
- Regulation providing radio quiet frequencies or regions,
- Negotiation with owners/users can lead to win-win solutions; for example, replacing nearby radio links with underground fibres removes interference and improves voice and data connectivity for users,
- Avoid interference by choosing appropriate locations with terrain screening or radio quiet zones,
- Move to another frequency,
- Screening to prevent signals entering the primary elements of receivers,
- Far side lobes of primary and secondary elements must be both minimised and well characterised,
- Minimise coherent signals throughout the array and thereby allow the natural rejection of the array to deal with the incoherent signals,
- Front-end filtering, using for example high-temperature superconducting filters with high Q to reject strong signals in narrow bands before they cause saturation effects,
- High dynamic range linear receivers to allow appropriate detection of both astronomy signals (signals below the noise) and very strong interfering signals,
- Notch filters (analog, digital or photonic) to excise bad spectral regions,
- Clip samples from data streams to mitigate burst-type interference (a minimal sketch is given at the end of this section),
- Decoding to remove signals with complex modulation and multiplexing schemes. Blanking of periodic or time-dependent signals is a very successful but simple case of this more general approach,
- Cancellation of undesired signals before correlation, using fixed and adaptive signal processing (harris 2000),
- Post-correlation cancellation of undesired signals, taking advantage of phase closure techniques (Sault 2000),
- Parametric techniques allow the possibility of taking advantage of known interference characteristics to excise it (Ellingson 2000),
- Adaptive beam forming to steer one or more nulls onto interfering sources. This is equivalent to cancellation, but it provides a way of taking advantage of the spatial orthogonality of astronomy and interfering signals,
- Use of robust statistics in data processing to minimise the effects of outliers.

### 3.2 Which signal processing regime: traditional analog, digital, photonic?

In most applications of signal processing, there is a strong trend towards the use of digital techniques, as well as photonic techniques. The fundamental reason for this is that digital and photonic devices have cost curves which are evolving much more rapidly than traditional analog systems. In addition, they open up new techniques and offer a substantial reduction in computational effort in many cases. The inherent immunity of photonic approaches to radio interference also creates functional advantages (Minasian 2000). Astronomers are joining these trends for exactly the same reasons. The jury is still out on what the appropriate balance or mixture of these techniques will be.
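As referenced in the mitigation list above, here is a minimal sketch of sample clipping for burst-type interference using robust statistics; the threshold and data are illustrative, not a tuned recipe for any real receiver:

```python
import numpy as np

# Flag and replace samples that deviate from the median by many robust
# standard deviations - a simple form of burst-interference clipping.
def clip_bursts(samples, threshold=5.0):
    med = np.median(samples)
    mad = np.median(np.abs(samples - med))   # median absolute deviation
    sigma = 1.4826 * mad                     # robust sigma estimate
    clean = samples.copy()
    clean[np.abs(samples - med) > threshold * sigma] = med  # replace bursts
    return clean

rng = np.random.default_rng(0)
data = rng.normal(size=4096)               # noise-like astronomy signal
data[1000:1010] += 50.0                    # a strong interference burst
print(np.max(np.abs(clip_bursts(data))))   # burst removed, noise untouched
```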
## 4 Adaptive EMI rejection

Adaptive rejection algorithms can be either constrained or unconstrained. Constrained algorithms generally incorporate either a model or a copy of the desired or interfering signal, which is used to control the adaption. For example, an algorithm may be constrained so that only those signals with a certain coding or chip sequence are removed (Ellingson 2000). Most astronomy signals are expected to be pure noise, so one could envisage a constraint that rejects all non-noise-like signals. In the case of unconstrained adaption, some algorithms (predictive adaptive algorithms) simply assume that the interference is much stronger than the signal and just use previous data samples to predict the following data samples for cancellation (harris 2000). If that assumption cannot be made, another approach is to block the desired signal and let the unconstrained algorithm work on the remaining signals. This is often done using a blocking matrix, which can be thought of as an operator that applies a set of complex weights which block certain signals while passing everything else. Advantages of this approach are that it can deal with multiple interfering signals which are changing in time and space, without affecting the signal-to-noise ratio of the desired signal. A key ingredient of constrained adaptive algorithms is a reference channel that maximises the interference-to-noise ratio for the ensemble of interferers. One way of achieving this is to use additional omni-directional antennas or arrays which at least match the gain of the side-lobe response of the main array.

### 4.1 Adaptive Interference Cancelling

Of all the approaches listed above, the nulling or cancellation systems (which may be adaptive or predictive) are the most likely to permit the observation of weak astronomy signals that are coincident in frequency or space with undesired signals. There is an important space-time duality with cancelling algorithms: any algorithm that works in the time domain can also be applied in the space domain. These techniques have been used extensively in communications, sonar, radar, medicine and other fields (Widrow & Stearns 1985, Haykin 1995). Radio astronomers have not kept pace with these developments and in this case need to infuse rather than diffuse technology in this area. A prototype time-based cancellation system developed at NRAO (shown in Figure 3) has demonstrated 70 dB of rejection on the lab bench and 30 dB of rejection on real signals when attached to the 140 foot at Green Bank (Barnbaum & Bradley 1998). Adaptive nulling systems are being prototyped by NFRA in the Netherlands (van Ardenne, these proceedings). Combined space-time approaches have been used to cancel interference in GPS receivers (Trinkle 2000). However, in all cases, the application in the presence of real radio astronomy signals is yet to be demonstrated, and the effects on the weak astronomy signals need to be quantified. A good prospect for doing this in the near future is recording baseband data from existing telescopes, containing both interfering and astronomy signals, and simulating the receiver system in software (Bell et al. 1999). A number of algorithms can then be implemented in software and assessed relative to each other. Beam forming and adaptive nulling are not necessarily done sequentially, but rather in parallel. While there are some sequential schemes (genetic algorithms for example), most approaches simultaneously solve for the coefficients that give the desired beams and nulls. One can think of this as an optimisation problem. For example, in the minimum variance beam former, the “goal” is to minimise output power, and the “constraint” is to maintain constant gain in a certain direction.
The goal forces the nulls onto all interfering signals not coming from the direction of the astronomical source, and the constraint protects the beam gain (Ellingson 2000). Of course, the gain is only protected for one direction, so there is still shape distortion. In general, for an N-element array, you can form up to N-1 nulls. However, if more control over the main beam shape is required, one may use other beam formers, which form a smaller number of nulls and use more degrees of freedom to control the main beam shape. A physical interpretation of why you can form nulls without wrecking the beam might go as follows: phased arrays with many elements (not equally spaced) have lots of nulls, and they are all over the sky once you get a reasonable distance from the main beam. Imagine changing the coefficients a little to get the closest one onto an interferer. Very little variation in the coefficients is required. Since the difference is so small, the main beam hardly notices. The other nulls will shift around, of course, because they are sensitive to small changes in the coefficients. Close to the main beam, the nulls are further apart, so you need a bigger variation in the coefficients to nudge the closest null into place - hence the increased distortion in the main beam in this case. It may be necessary to record the weights applied to generate the nulls, so that the beam shape changes can be calibrated out later (Cram 2000).

## 5 Real time vs post correlation

Real time systems permit full recovery of the temporal information in the signal, where required. Real time systems have been well studied, and numerous examples can be found in other fields such as radar, sonar, communications, defence anti-jam, speech processing, and medicine. For example, there are existing systems which are capable of nulling up to 7 simultaneous moving jammers. In radio astronomy we operate in a totally different regime, in which the astronomical signals are weak and noise-like. We only wish to measure the time averaged statistical properties of the signals - for example, in aperture synthesis, the time averaged coherence between two antennas. Since we don't have to recover the signal modulation, radio astronomy does not have to use the real time algorithms developed for communications and radar. In such post correlation systems (Sault 2000) the information is only contained in the statistical properties of the signal, which may vary slowly in time, frequency, space or direction. Both the signal of interest and the interference obey phase and amplitude closure relations. This results in an over-determined set of equations which form a closed set that can be used to self-calibrate the array for both the source and the interferer. We conjecture that: “post correlation processing of time averaged signals can achieve the same RFI rejection as in real-time algorithms and that self calibration (phase closure) techniques provide powerful additional constraints”.
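To make the null-steering picture of Section 4 concrete, the following is a minimal numerical sketch of a minimum-variance beamformer for an assumed 8-element uniform linear array; the geometry, signal levels and directions are all illustrative:

```python
import numpy as np

# Minimum-variance beamformer: minimise output power subject to unit
# gain on the source direction, w = R^-1 a / (a^H R^-1 a).
n_ant, d = 8, 0.5                       # 8 elements, half-wavelength spacing
steer = lambda theta: np.exp(2j * np.pi * d * np.arange(n_ant)
                             * np.sin(np.radians(theta)))

rng = np.random.default_rng(1)
n_samp = 10000
a_src, a_rfi = steer(0.0), steer(30.0)  # astronomy source and interferer
x = (0.01 * rng.normal(size=n_samp) * a_src[:, None]      # weak source
     + 10.0 * rng.normal(size=n_samp) * a_rfi[:, None]    # strong RFI
     + (rng.normal(size=(n_ant, n_samp))
        + 1j * rng.normal(size=(n_ant, n_samp))) / np.sqrt(2))  # noise

R = x @ x.conj().T / n_samp             # sample covariance matrix
w = np.linalg.solve(R, a_src)           # un-normalised weights R^-1 a
w /= a_src.conj() @ w                   # enforce unit gain constraint

print(abs(w.conj() @ a_src))            # = 1: gain preserved on the source
print(abs(w.conj() @ a_rfi))            # << 1: a null on the interferer
```

The single constraint preserves unit gain towards the source, while minimising the output power drives a deep null onto the strong interferer - exactly the trade-off between beam gain and null placement described above.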
# Non-uniqueness of the $`\lambda \mathrm{\Phi }^4`$-vacuum

## I Introduction

Various quantum field theories supposed to describe fundamental interactions in physics are scale invariant, for instance gauge field theories describing the electromagnetic and the strong interaction. In many cases, however, one may observe a scale dependence of observables, i.e. vacuum expectation values of suitable operators. In order to elucidate the origin of this scale it might be important to examine the corresponding vacuum states. The properties of the vacuum state of self-interacting theories could provide a deeper understanding of the origin of inherent scales. To start investigations in this direction it seems legitimate to consider at first a simple but generic model scenario. As an instructive example we focus on the massless, neutral $`\lambda \mathrm{\Phi }^4`$-theory.

The $`\lambda \mathrm{\Phi }^4`$-theory has been studied extensively in the framework of perturbation theory. This approach is based on the vacuum state of the free theory, which is scale invariant in the massless case and independent of the interaction. A modification of the vacuum state due to symmetry breaking induced by the self-interaction is not attainable in perturbation theory. The ground state of the non-interacting Hamiltonian is unique and coincides with the free vacuum state. However, the interacting Hamiltonian does not necessarily possess a unique ground state, and thus an analogous identification with the exact vacuum state of the interacting theory does not hold in general. In order to specify this vacuum state it is essential to employ an appropriate non-perturbative treatment of the interaction. During the last decades non-perturbative techniques have become increasingly important owing to their relevance to QCD. Special attention was devoted to the non-trivial structure of the vacuum. In particular, there are indications for a vacuum degeneracy in the non-Abelian SU(2) gauge theory. As another example one may investigate the non-linear Liouville model, which does not possess a translationally invariant vacuum. In this paper we would like to advocate ideas along this line of reasoning. It is our main intention to prove a clear assertion concerning the vacuum state in the special case of the $`\lambda \mathrm{\Phi }^4`$-theory. For this purpose we employ the axiomatic approach of Wightman.

This article is organized as follows: The Wightman axioms summarized in the appendix are utilized to deduce a proof of the non-uniqueness of the vacuum state of the scale invariant $`\lambda \mathrm{\Phi }^4`$-theory in Section II. In Section III we introduce the non-perturbative vacuum via breaking scale invariance and evaluate the corresponding expectation values. Finally we address some implications of our results.

## II Proof of non-uniqueness

In this Section we provide a general proof for the non-existence of a unique vacuum in the case of the massless and neutral $`\lambda \mathrm{\Phi }^4`$-theory. For this purpose we construct the rather general form of the corresponding two-point Wightman function. Utilizing the non-linear equation of motion we derive expectation values of higher powers of the fields. If we assume that a unique vacuum exists, these expectation values have to vanish. In view of the postulated cyclicity of the vacuum (see the appendix) this zero results in a contradiction for the non-vanishing self-interaction. Consequently, a unique vacuum cannot exist.
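Before entering the proof, we record a small symbolic check of a fact used repeatedly below (cf. Eq. (9) in Sec. II D): the d'Alembert operator annihilates the scale-invariant two-point function $`1/(\underset{¯}{x}-\underset{¯}{x}^{\prime })^2`$ away from the light cone. This is only a sketch in sympy, with $`\underset{¯}{x}^{\prime }`$ placed at the origin:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
# Minkowski interval (x - x')^2 with x' at the origin, signature (+,-,-,-).
s2 = t**2 - x**2 - y**2 - z**2
W = 1 / s2                       # candidate two-point function, cf. Eq. (9)

# d'Alembertian: second time derivative minus the spatial Laplacian.
box_W = (sp.diff(W, t, 2) - sp.diff(W, x, 2)
         - sp.diff(W, y, 2) - sp.diff(W, z, 2))
print(sp.simplify(box_W))        # -> 0, valid wherever s2 != 0
```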
### A Classical $`\lambda \mathrm{\Phi }^4`$-theory

The action of a massless neutral scalar field possessing a $`\lambda \mathrm{\Phi }^4`$ self-coupling is given by $`𝒜=\int d^4x\left(\frac{1}{2}(\partial _\mu \mathrm{\Phi })(\partial ^\mu \mathrm{\Phi })-\frac{\lambda }{4!}\mathrm{\Phi }^4\right).`$ (1) This theory exhibits two important symmetries. As every realistic field theory it obeys Poincaré invariance $`x^\mu \to L_\nu ^\mu x^\nu +a^\mu .`$ (2) On the other hand, the action remains unchanged under transformations of the following form: $`x^\mu \to \mathrm{\Omega }^{-1}x^\mu ,`$ (3) $`\partial _\mu \to \mathrm{\Omega }\partial _\mu ,`$ (4) $`\mathrm{\Phi }\to \mathrm{\Omega }\mathrm{\Phi }.`$ (5) This scale invariance of the action is a result of the dimensionless coupling constant $`\lambda `$. The latter property is also essential for the renormalizability of the corresponding perturbation theory. By means of a Legendre transformation we derive the Hamiltonian $`H=\int d^3r\left(\frac{1}{2}\left(\mathrm{\Pi }^2+(\nabla \mathrm{\Phi })^2\right)+\frac{\lambda }{4!}\mathrm{\Phi }^4\right),`$ (6) which is non-negative for $`\lambda \geq 0`$. In situations where the Hamiltonian is unbounded from below – e.g. for $`\lambda <0`$ – no ground state exists at all. As another example we mention the $`\lambda \mathrm{\Phi }^3`$-theory, where the occurrence of arbitrarily negative energies is already present on the classical level. As has been proven, this instability persists for the quantized theory. In the case of the $`\lambda \mathrm{\Phi }^4`$-theory the classical ground state is (for $`\lambda >0`$) simply given by $`\mathrm{\Phi }\equiv 0`$. Turning to the quantum prescription the situation becomes less clear.

### B The exact quantum vacuum

The main intention of this article is to show that the quantization of the $`\lambda \mathrm{\Phi }^4`$-theory described above is not unique, i.e. it does not maintain all the symmetries of the classical theory. In particular, the vacuum state and thereby the Hilbert space constructed out of it (see the appendix) cannot be scale invariant. In order to prove this assertion, we assume that a unique and hence scale invariant vacuum exists and show that this assumption leads to a contradiction. This (fictitious) state is denoted by $`|\mathrm{\Psi }_\lambda \rangle `$ to indicate the dependence on the coupling strength $`\lambda `$. If the vacuum were unique, i.e. scale and Poincaré invariant, it would remain unchanged under the unitary scale transformation $`\widehat{S}(\mathrm{\Omega })`$, which is defined by (see Eq. (3)) $`\widehat{S}(\mathrm{\Omega })^{-1}\widehat{\mathrm{\Phi }}(\underset{¯}{x})\widehat{S}(\mathrm{\Omega })=\mathrm{\Omega }\widehat{\mathrm{\Phi }}(\underset{¯}{x}/\mathrm{\Omega }),`$ (7) i.e. $`\widehat{S}(\mathrm{\Omega })|\mathrm{\Psi }_\lambda \rangle =|\mathrm{\Psi }_\lambda \rangle `$. Otherwise there exists a different vacuum, which can be derived via $`\widehat{S}(\mathrm{\Omega })|\mathrm{\Psi }_\lambda \rangle `$. Since the scale transformation $`\widehat{S}(\mathrm{\Omega })`$ represents a symmetry of the classical action, the two distinct vacuum states $`|\mathrm{\Psi }_\lambda \rangle `$ and $`\widehat{S}(\mathrm{\Omega })|\mathrm{\Psi }_\lambda \rangle `$ correspond to two equivalent quantum representations of the classical theory in this situation. It should be noted here that an anomalous scale dimension
of the fields $`\widehat{\mathrm{\Phi }}`$ – inducing a symmetry transformation with other powers of $`\mathrm{\Omega }`$ than the one in Eq. (7) – already prevents the theory from being scale invariant (see also Section II E). But since the proof of this assertion is exactly the aim of this Section, we do not assume an anomalous scale dimension a priori.

### C Dyson argument

As is well known, the necessity of introducing a scale already occurs within perturbation theory (renormalization scale). This observation can be interpreted as a hint for the non-uniqueness of the quantization of the $`\lambda \mathrm{\Phi }^4`$-theory. Perturbation theory is a very powerful method that allows the very precise calculation of many observables within quantum field theory for small couplings $`\lambda `$, e.g. cross sections, etc. However, properties of the exact vacuum state for finite values of the coupling, e.g. $`\lambda =1`$, cannot be obtained rigorously within the framework of perturbation theory. The main argument can be traced back to Dyson, who applied it to QED; we shall present in the following a modified version of the proof regarding the $`\lambda \mathrm{\Phi }^4`$-theory. Within perturbation theory one performs a Taylor expansion with respect to the coupling, in our case $`\lambda `$. In particular, the expansion of the exact vacuum $`|\mathrm{\Psi }(\lambda )\rangle `$ would read $`|\mathrm{\Psi }(\lambda )\rangle =\sum _{n=0}^{\mathrm{\infty }}\lambda ^n|\mathrm{\Psi }_n\rangle .`$ (8) The equal sign above is correct if and only if the infinite summation converges (to the exact quantity). But if this sum converges for some non-vanishing coupling $`\lambda _0`$, then it converges for all (possibly complex) values of $`\lambda `$ which satisfy $`|\lambda |<\lambda _0`$ as well. Accordingly, the expansion above describes an analytic function within the circle of convergence. But in this situation the exact vacuum could be analytically continued to negative values of the coupling $`\lambda `$, where the Hamiltonian is (even classically) unbounded from below. However, in such a highly unstable scenario a (translationally invariant, in particular stationary) vacuum cannot exist. This contradiction leads to the conclusion that the Taylor expansion, i.e. the perturbative approach, does not represent an analytic but an asymptotic expansion. Therefore, perturbation theory is applicable in a sufficiently small vicinity of the origin $`\lambda =0`$ – but not for finite $`\lambda `$ such as $`\lambda =1`$. Consequently, the fact that the scale invariance is broken in perturbation theory does not necessarily imply that it is broken for the exact vacuum state corresponding to finite $`\lambda `$. Instead one might imagine that the vacuum could be scale invariant at all fixed points, let's say at $`\lambda =1`$ and $`\lambda =0`$. Within perturbation theory one cannot exclude this possibility – instead one is led to search for non-perturbative methods.

### D Wightman function

Poincaré and scale invariance of the vacuum state impose strong restrictions on the corresponding Wightman functions. Due to the translational symmetry they may depend on the difference of the coordinates $`(\underset{¯}{x}-\underset{¯}{x}^{\prime })`$ only. If we restrict ourselves to a region away from the light-cone, $`(\underset{¯}{x}-\underset{¯}{x}^{\prime })^2\neq 0`$, Lorentz invariance implies that merely the scalar $`(\underset{¯}{x}-\underset{¯}{x}^{\prime })^2`$ enters the Wightman function.
Taking into account the scale invariance $`W(\mathrm{\Omega }^{-1}\underset{¯}{x},\mathrm{\Omega }^{-1}\underset{¯}{x}^{\prime })=\mathrm{\Omega }^2W(\underset{¯}{x},\underset{¯}{x}^{\prime })`$, the two-point function has to adopt the following form $`W(\underset{¯}{x},\underset{¯}{x}^{\prime })={\displaystyle \frac{\mathrm{const}}{(\underset{¯}{x}-\underset{¯}{x}^{\prime })^2}}`$ (9) for $`(\underset{¯}{x}-\underset{¯}{x}^{\prime })^2\neq 0`$. By inspection we observe that the action of the d'Alembert operator $`\mathrm{}=\partial _\mu \partial ^\mu `$ on this function yields zero. At first this holds away from the light cone. To examine additional contributions on the light cone such as $`\delta [(\underset{¯}{x}-\underset{¯}{x}^{\prime })^2]`$ we investigate the Fourier transform $`\stackrel{~}{W}`$. Every positive $`L_+^{\uparrow }`$-invariant distribution $`\stackrel{~}{\zeta }`$ with support in the closed forward cone, $`\mathrm{supp}(\stackrel{~}{\zeta })\subseteq \overline{V_+}`$, has to take the form (Theorem IX.33) $$\stackrel{~}{\zeta }(\underset{¯}{k})=a\delta ^4(\underset{¯}{k})+\mathrm{\Theta }(k_0)\mu (\underset{¯}{k}^2)$$ (10) with $`a\geq 0`$ and a positive measure $`\mu \geq 0`$ with $`\mathrm{supp}(\mu )\subseteq \overline{\mathbb{R}_+}`$. In view of the Wightman axioms the Fourier transform of the Wightman function $`\stackrel{~}{W}(\underset{¯}{k})`$ has to be represented by a special choice of $`\stackrel{~}{\zeta }(\underset{¯}{k})`$. The above theorem allows for the Källén-Lehmann spectral representation of the Wightman function $`W(\underset{¯}{x},\underset{¯}{x}^{\prime })=a+{\displaystyle \int 𝑑\mu (m^2)W^{\mathrm{free}}(\underset{¯}{x},\underset{¯}{x}^{\prime },m^2)}`$ (11) where $`W^{\mathrm{free}}(\underset{¯}{x},\underset{¯}{x}^{\prime },m^2)`$ denotes the Wightman function of a free scalar field with mass $`m`$ (cf. Theorem IX.34). The imposed scale invariance of the Wightman function, $`\stackrel{~}{W}(\mathrm{\Omega }^2\underset{¯}{k}^2)=\stackrel{~}{W}(\underset{¯}{k}^2)/\mathrm{\Omega }^2`$, implies $`a=0`$ and $`\mu (\mathrm{\Omega }^2\chi )=\mu (\chi )/\mathrm{\Omega }^2`$. As a consequence, if $`\mu `$ contributes for positive $`\chi `$ then it has to behave (for $`\chi >0`$) like $`\mu (\chi )=b/\chi `$ with $`b\geq 0`$. However, the resulting quantity $`\stackrel{~}{\zeta }(\underset{¯}{k})=\mathrm{\Theta }(k_0)\mathrm{\Theta }(\underset{¯}{k}^2)b/\underset{¯}{k}^2`$ does not represent a well-defined distribution owing to the singularity at $`\underset{¯}{k}^2=0`$ together with the Heaviside step-function $`\mathrm{\Theta }`$. Equivalently, it does not possess a Fourier transform. This can easily be verified by considering $$\mathrm{}\zeta (\underset{¯}{x},\underset{¯}{x}^{\prime })=b\,\stackrel{~}{\left[\mathrm{\Theta }(k_0)\mathrm{\Theta }(\underset{¯}{k}^2)\right]}=\frac{8\pi b}{(\underset{¯}{x}-\underset{¯}{x}^{\prime })^4}$$ (12) which holds for $`(\underset{¯}{x}-\underset{¯}{x}^{\prime })^2>0`$. But no scale invariant distribution exists which generates the r.h.s. of the above equation when the d'Alembert operator is applied to it. On the contrary – as we have observed in Eq. (9) – the action of the d'Alembert operator on the Wightman function yields zero – at least for $`(\underset{¯}{x}-\underset{¯}{x}^{\prime })^2\neq 0`$. As a result, the support of the measure $`\mu `$ can merely contain the point $`\chi =0`$. There exists only one positive distribution with support at the origin – the Dirac $`\delta `$-function.
Ergo, the remaining possibility for the Fourier transform of the Wightman function is given by $`\stackrel{~}{W}(\underset{¯}{k})=\mathrm{\Theta }(k_0)\,\delta (\underset{¯}{k}^2)\cdot \mathrm{const}.`$ (13) This quantity indeed obeys scale invariance. In conclusion, assuming a unique vacuum, the d'Alembert operator acting on the Wightman function vanishes everywhere, $`\underset{¯}{k}^2\stackrel{~}{W}(\underset{¯}{k})=0\mathrm{\Leftrightarrow }\mathrm{}W(\underset{¯}{x},\underset{¯}{x}^{\prime })=0,`$ (14) and in particular on the light cone.

### E Equation of motion

The variation of the action $`\delta 𝒜=0`$ leads to the non-linear equation $`\mathrm{}\widehat{\mathrm{\Phi }}=-{\displaystyle \frac{\lambda }{3!}}\widehat{\mathrm{\Phi }}^3.`$ (15) The field $`\widehat{\mathrm{\Phi }}(\underset{¯}{x})`$ is represented by an operator-valued distribution. However, the product of two or more distributions with the same argument, for example $`[\delta (x)]^3`$, is not well-defined in general. Consequently, the source term on the r.h.s. of the equation above, $`[\widehat{\mathrm{\Phi }}(\underset{¯}{x})]^3`$, has at first glance no definite meaning. Strictly speaking, we have to define the non-linear source term $`\widehat{j}=\mathrm{}\widehat{\mathrm{\Phi }}=-\lambda \widehat{\mathrm{\Phi }}^3/3!`$ as a local operator-valued tempered distribution. By virtue of the equation of motion it has to obey the following relation under rescaling, see Eq. (7): $`\widehat{S}(\mathrm{\Omega })^{-1}\widehat{j}(\underset{¯}{x})\widehat{S}(\mathrm{\Omega })=\mathrm{\Omega }^3\widehat{j}(\underset{¯}{x}/\mathrm{\Omega }).`$ (16) As already discussed in Section II B, the occurrence of an anomalous scale dimension of the fields $`\widehat{\mathrm{\Phi }}(\underset{¯}{x})`$ or – more generally – the introduction of a renormalization scale $`\mathrm{\Lambda }_\mathrm{R}`$ in order to define $`\widehat{j}`$, i.e. $`\widehat{j}=\widehat{j}(\mathrm{\Lambda }_\mathrm{R})`$, would violate this condition. But in this case the proof of the non-uniqueness of the exact vacuum state is already complete at this stage: in this situation the vacuum has to depend on this renormalization scale as well. Otherwise the difference of two source terms corresponding to different scales acting on the (supposedly invariant) vacuum yields zero, $`\left(\widehat{j}(\mathrm{\Lambda }_\mathrm{R})-\widehat{j}(\mathrm{\Lambda }_\mathrm{R}^{\prime })\right)|\mathrm{\Psi }_\lambda \rangle =0,`$ (17) according to the equation of motion (15). With the same arguments as used at the end of the next Section this implies $`\widehat{j}(\mathrm{\Lambda }_\mathrm{R})-\widehat{j}(\mathrm{\Lambda }_\mathrm{R}^{\prime })=0`$, which contradicts the assumption of a scale dependent source. In summary, the eventual necessity of introducing a renormalization scale in order to define $`\widehat{j}`$ would result in a dependence of the vacuum on this scale.

### F Federbush-Johnson theorem

As shown in Sec. II D, assuming the existence of a unique vacuum, the two-point Wightman function equals (up to an irrelevant pre-factor) the two-point function of the free field. On the other hand, we may now exploit the following trivialization theorem, which is sometimes called the Federbush-Johnson theorem: if the two-point function coincides with its free-field analogue, then the theory is free.
In the following we sketch a proof of this theorem: If the action of the d'Alembert operator on the two-point Wightman function yields zero, we may utilize the equation of motion via $`\mathrm{}\mathrm{}^{\prime }W(\underset{¯}{x},\underset{¯}{x}^{\prime })`$ $`=`$ $`\langle \mathrm{\Psi }_\lambda \left|\mathrm{}\widehat{\mathrm{\Phi }}(\underset{¯}{x})\mathrm{}^{\prime }\widehat{\mathrm{\Phi }}(\underset{¯}{x}^{\prime })\right|\mathrm{\Psi }_\lambda \rangle `$ (18) $`=`$ $`\left({\displaystyle \frac{\lambda }{3!}}\right)^2\langle \mathrm{\Psi }_\lambda \left|\widehat{\mathrm{\Phi }}^3(\underset{¯}{x})\widehat{\mathrm{\Phi }}^3(\underset{¯}{x}^{\prime })\right|\mathrm{\Psi }_\lambda \rangle `$ (19) $`=`$ $`0.`$ (20) This equality holds for all $`\underset{¯}{x}`$ and $`\underset{¯}{x}^{\prime }`$, and especially for $`\underset{¯}{x}=\underset{¯}{x}^{\prime }`$. Accordingly, we obtain $`\langle \mathrm{\Psi }_\lambda |[\widehat{\mathrm{\Phi }}^3(\underset{¯}{x})]^2|\mathrm{\Psi }_\lambda \rangle =0`$, which implies $`\widehat{\mathrm{\Phi }}^3(\underset{¯}{x})|\mathrm{\Psi }_\lambda \rangle =0`$. The last conclusion was possible because the Hilbert space $`\mathcal{H}`$ possesses a positive definite scalar product; for a Fock space with an indefinite metric additional considerations are necessary. Now we may exploit the postulated cyclicity $`\overline{𝔄|\mathrm{\Psi }_\lambda \rangle }=\mathcal{H}`$ of the vacuum (see the appendix). This property implies that all states of the Hilbert space can be approximated by polynomials of fields (smeared with test functions) acting on the vacuum. Utilizing analyticity arguments (theorem of identity for holomorphic functions) it can be shown that it is sufficient to employ test functions with support in an arbitrarily small open domain $`𝒪`$. This fact is known as the Reeh-Schlieder theorem: $`\overline{𝔄(𝒪)|\mathrm{\Psi }_\lambda \rangle }=\mathcal{H}`$. One consequence of this theorem is the fact that if a local operator annihilates the vacuum, it is the zero operator. As a result, the annihilation of the vacuum, $`\widehat{\mathrm{\Phi }}^3(\underset{¯}{x})|\mathrm{\Psi }_\lambda \rangle =0`$, again implies $`\widehat{\mathrm{\Phi }}^3=0`$, i.e. a free theory. In a similar way one can also show that the field does not only satisfy the equation of motion but also the commutation relations of a free field. This can be demonstrated by considering the quantity $`\widehat{𝔊}(\underset{¯}{x},\underset{¯}{x}^{\prime })=[\widehat{\mathrm{\Phi }}(\underset{¯}{x}),\widehat{\mathrm{\Phi }}(\underset{¯}{x}^{\prime })]-\langle \mathrm{\Psi }_\lambda \left|[\widehat{\mathrm{\Phi }}(\underset{¯}{x}),\widehat{\mathrm{\Phi }}(\underset{¯}{x}^{\prime })]\right|\mathrm{\Psi }_\lambda \rangle `$ (21) and an argumentation similar to the one above. The proof by Federbush and Johnson employs canonical commutation relations and analyticity arguments, but it does not refer to the Reeh-Schlieder property, which was established later. A completely different argument indicating the unphysical consequences of the annihilation of the vacuum by the source term is based on the natural assumption that the free theory should be recovered in the limit $`\lambda \to 0`$. Hence, the independence of the identity $`\widehat{\mathrm{\Phi }}^3(\underset{¯}{x})|\mathrm{\Psi }_\lambda \rangle =0`$ of the coupling $`\lambda `$ is in conflict with the fact that $`\widehat{\mathrm{\Phi }}^3|0\rangle =0`$ is not valid in the non-interacting situation. In summary, these contradictions lead to two alternatives: either the self-interaction vanishes or the vacuum is not unique. In the former case the vacuum is Poincaré and scale invariant, but the theory is trivial.
Assuming a non-trivial self-interacting $`\lambda \mathrm{\Phi }^4`$-theory (the latter case), no unique vacuum exists.

## III Symmetry breaking

As we have shown in the previous Section, a regular state that obeys all symmetries of the considered theory does not exist. Accordingly, the only possibility to define a vacuum is to break at least one of the symmetries. Certainly one agrees that the Poincaré invariance in fundamental field theories on a Minkowski space-time should not be broken. Without this symmetry it is by no means obvious how to distinguish the vacuum from all other states. As a consequence, we have to break the only symmetry left, i.e. the scale invariance. Even though the action exhibits no specific scale, the introduced vacuum now displays a scale dependence. We denote the scale of the symmetry breaking by $`\mathrm{\Lambda }_\mathrm{\Phi }`$ and the vacuum accordingly by $`|\mathrm{\Psi }_\lambda ^\mathrm{\Lambda }\rangle `$. In the following we are going to analyze the consequences of this Ansatz. To investigate the relation of the vacuum state $`|\mathrm{\Psi }_\lambda ^\mathrm{\Lambda }\rangle `$ to the ground state of the theory we have to evaluate the renormalized expectation value of the energy density, i.e. the $`00`$-component of the energy-momentum tensor. In conjunction with the Wightman formalism it is most convenient to employ the powerful point-splitting renormalization technique, which is well-established in quantum field theory on curved space-times. With this tool we are able to calculate the expectation value of the energy-momentum tensor and the phionic and scalar condensates.

### A Point-splitting

Several interesting observables, e.g. the energy-momentum tensor, contain two or more fields at equal space-time points, $`\widehat{A}(\underset{¯}{x})\widehat{B}(\underset{¯}{x})`$. Due to the singular character of the product of two distributions with the same argument, such quantities usually diverge, $`\langle \widehat{A}(\underset{¯}{x})\widehat{B}(\underset{¯}{x})\rangle =\mathrm{\infty }`$. This necessitates an appropriate regularization and renormalization scheme. Having at hand merely the Wightman functions as input information, the only well-known procedure which can be applied directly is the point-splitting method. Accordingly, one at first considers the fields at distinct space-time points, $`\langle \widehat{A}(\underset{¯}{x}^{\prime })\widehat{B}(\underset{¯}{x})\rangle <\mathrm{\infty }`$, and takes the coincidence limit afterwards – a method called point-splitting regularization. In order to generate physically reasonable, i.e. finite (renormalized) results, those terms which become singular in the limit $`\underset{¯}{x}^{\prime }\to \underset{¯}{x}`$ have to be discarded.

The physical meaning of the renormalization scheme described above can be understood by considering a physically reasonable measurement process. Realistic detectors always produce finite results. Due to the fact that those detectors are not point-like but exhibit a finite extension, the corresponding expectation values are finite as well. A linear detector (in the free field example a one-particle detector) can be described by the product of two fields smeared with the test functions $`F`$ and $`G`$, see Eq. (50). The response of that detector is given by the (finite) expectation value $`\langle \widehat{\mathrm{\Phi }}(F)\widehat{\mathrm{\Phi }}(G)\rangle `$.
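As a schematic illustration of the point-splitting subtraction described above (with hypothetical coefficients and $`\mathrm{\Lambda }_\mathrm{F}`$ set to 1; compare Eqs. (23)-(25) below):

```python
import sympy as sp

# Toy two-point function in the spirit of Eq. (23), with Lambda_F = 1:
# one singular 1/Dx^2 term plus finite and logarithmic contributions.
Dx2, a0, a1, b1 = sp.symbols('Dx2 a0 a1 b1', positive=True)
W = 1/Dx2 + a0 + a1*Dx2 + b1*Dx2*sp.log(Dx2)

# Point-splitting renormalization: discard the term that diverges as the
# points merge, then take the coincidence limit Dx2 -> 0.
W_ren = W - 1/Dx2
print(sp.limit(W_ren, Dx2, 0))   # -> a0, the analogue of Eq. (25)
```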
In order to consider a divergent expectation value – for instance $`\langle \widehat{\mathrm{\Phi }}^2\rangle `$ – as a limiting case of responses of suitable detectors one may proceed as follows: At first the space-time supports of the test functions $`F`$ and $`G`$ shrink to non-coinciding points, $`\langle \widehat{\mathrm{\Phi }}(\underset{¯}{x})\widehat{\mathrm{\Phi }}(\underset{¯}{x}^{\prime })\rangle `$. The associated expectation value is still finite. Then one considers the coincidence limit $`\underset{¯}{x}\to \underset{¯}{x}^{\prime }`$, where the expectation value diverges. Accordingly, this idealization of a physical detector exactly corresponds to the point-splitting procedure.

The mechanism described above can be elucidated by a simple example. Let us consider the following Ansatz for the exact two-point Wightman function: $`W(\underset{¯}{x},\underset{¯}{x}+\underset{¯}{\mathrm{\Delta }x})=\sum _na_n(\mathrm{\Lambda }_\mathrm{F})\underset{¯}{\mathrm{\Delta }x}^{2n}\mathrm{\Lambda }_\mathrm{F}^{2n+2}`$ (23) $`+\sum _nb_n(\mathrm{\Lambda }_\mathrm{F})\underset{¯}{\mathrm{\Delta }x}^{2n}\mathrm{\Lambda }_\mathrm{F}^{2n+2}\mathrm{ln}\left|\underset{¯}{\mathrm{\Delta }x}^2\mathrm{\Lambda }_\mathrm{F}^2\right|.`$ (24) (For reasons of simplicity we restrict ourselves to space-like separations.) This expansion is correct for a sufficiently well-behaved spectral measure $`\mu (m^2)`$ in the Källén-Lehmann representation in Eq. (11). For more complicated measures further terms such as $`\mathrm{ln}^2\left|\underset{¯}{\mathrm{\Delta }x}^2\mathrm{\Lambda }_\mathrm{F}^2\right|`$ may appear and create additional singularities without altering the following considerations. Owing to the occurrence of the logarithmic terms we had to introduce a scale $`\mathrm{\Lambda }_\mathrm{F}`$. As a variation of this scale induces a transfer of contributions from the $`b_n`$ to the $`a_n`$ terms, these coefficients may depend explicitly on $`\mathrm{\Lambda }_\mathrm{F}`$. This scale characterizes the way of distinguishing the different contributions to the response of the detector. For the moment it is completely determined by the observer and should not be confused with the scale of symmetry breaking $`\mathrm{\Lambda }_\mathrm{\Phi }`$, which is a property of the vacuum.

Having at hand the explicit expression for the Wightman function, we are now able to derive the renormalized expectation value of $`\widehat{\mathrm{\Phi }}^2`$. To this end one keeps only those terms of the above equation which are finite in the coincidence limit, i.e. one obtains $`\langle \widehat{\mathrm{\Phi }}^2\rangle _{\mathrm{ren}}^{\mathrm{\Lambda }_\mathrm{F}}=a_0(\mathrm{\Lambda }_\mathrm{F}).`$ (25) Note that the renormalized expectation values may depend on the scale $`\mathrm{\Lambda }_\mathrm{F}`$ as well.

### B Operator product expansion

In order to elucidate the physical meaning of the introduced scales $`\mathrm{\Lambda }_\mathrm{F}`$ and $`\mathrm{\Lambda }_\mathrm{\Phi }`$, we examine their relation to an important and powerful tool in quantum field theory – the operator product expansion.
Considering the expectation value of a product of two operators at distinct space-time points as a non-local quantity, it is possible to perform an expansion into a sum of local operators with non-local coefficients: $`\widehat{A}(\underset{¯}{x}+\underset{¯}{\mathrm{\Delta }x}/2)\widehat{B}(\underset{¯}{x}-\underset{¯}{\mathrm{\Delta }x}/2)=`$ (26) $`\sum _nC_n(\underset{¯}{\mathrm{\Delta }x},\mathrm{\Lambda }_\mathrm{F})\widehat{O}_n(\underset{¯}{x},\mathrm{\Lambda }_\mathrm{F}).`$ (27) Within the framework of the operator product expansion, $`\mathrm{\Lambda }_\mathrm{F}`$ is denoted as the factorization scale. Considering the most simple example, $`\widehat{A}=\widehat{B}=\widehat{\mathrm{\Phi }}`$, leads us back to the Wightman function in Eq. (23). The operator corresponding to $`(\underset{¯}{\mathrm{\Delta }x}^2)^{n=0}=\mathrm{const}`$ exactly represents the second-order scalar condensate, $`\langle \widehat{O}_0\rangle =\langle \widehat{\mathrm{\Phi }}^2\rangle _{\mathrm{ren}}`$. Calculating the expectation values of the local operators $`\widehat{O}_n`$ in the vacuum $`|\mathrm{\Psi }_\lambda ^\mathrm{\Lambda }\rangle `$, which possesses the symmetry breaking scale $`\mathrm{\Lambda }_\mathrm{\Phi }`$, this scale obviously enters the local quantities $`\widehat{O}_n`$ as well. By inspection we observe that for small distances, $`\underset{¯}{\mathrm{\Delta }x}^2\mathrm{\Lambda }_\mathrm{\Phi }^2\ll 1`$, the lowest-order term is most relevant. This contribution describes the short range behavior of the theory. On the other hand, for large distances, $`\underset{¯}{\mathrm{\Delta }x}^2\mathrm{\Lambda }_\mathrm{\Phi }^2=𝒪(1)`$, higher-order contributions become more and more relevant. Accordingly, the long range features – mediated via the operators $`\widehat{O}_n`$ – usually dominate in this situation. Evidently, the symmetry breaking scale can be envisaged as the natural scale of factorization, which distinguishes between the long and short range behavior: $`\mathrm{\Lambda }_\mathrm{\Phi }=\mathrm{\Lambda }_\mathrm{F}`$.

### C Observables

In analogy to the second-order scalar condensate $`\langle \widehat{\mathrm{\Phi }}^2\rangle _{\mathrm{ren}}=a_0`$, we may derive further renormalized expectation values by means of an appropriately chosen differential operator acting on the Wightman function in Eq. (23). The contributions which are finite in the coincidence limit read $`\langle \widehat{\mathrm{\Phi }}\mathrm{}\widehat{\mathrm{\Phi }}\rangle _{\mathrm{ren}}=4\mathrm{\Lambda }_\mathrm{\Phi }^4(2a_1+3b_1).`$ (28)
The expectation value of the Lagrangian density corresponding to the phionic condensate yields $`\langle \widehat{ℒ}\rangle _{\mathrm{ren}}=\mathrm{\Lambda }_\mathrm{\Phi }^4(2a_1+3b_1).`$ (31) These ingredients enable us to achieve one of our main intentions – the derivation of the energy-momentum tensor $`T_{\mu \nu }=\partial _\mu \mathrm{\Phi }\partial _\nu \mathrm{\Phi }-g_{\mu \nu }ℒ`$. We observe the exact cancellation of the above contributions, $`\langle \widehat{T}_{\mu \nu }\rangle _{\mathrm{ren}}=0.`$ (32) Note that for deriving this equation we merely need Poincaré invariance (together with the point-splitting technique). As a counter-example one may recall the Casimir effect, where $`\langle \widehat{T}_{00}\rangle _{\mathrm{ren}}<0`$ holds even for $`\lambda =0`$. The above result indeed confirms the identification of the vacuum state $`|\mathrm{\Psi }_\lambda ^\mathrm{\Lambda }\rangle `$ with a ground state. Because of $`\widehat{H}=\int d^3r\widehat{T}_{00}`$, the equation above implies $`\langle \widehat{H}\rangle _{\mathrm{ren}}=0`$. (The spectral condition explained in the appendix together with the scale invariance implies $`\langle \widehat{H}\rangle _{\mathrm{ren}}\ge 0`$.) Hence any of the introduced vacua $`|\mathrm{\Psi }_\lambda ^\mathrm{\Lambda }\rangle `$ characterized by a positive value of the scale $`\mathrm{\Lambda }_\mathrm{\Phi }`$ may be identified as a ground state of the theory (which is therefore not unique). It should be emphasized that the zero expectation value in Eq. (32) is rather non-trivial. Considering a massless scalar field with a $`\lambda \mathrm{\Phi }^n`$-coupling in a $`D`$-dimensional space-time one arrives at $$\langle T_{\mu \nu }\rangle _{\mathrm{ren}}\propto g_{\mu \nu }\mathrm{\Lambda }_\mathrm{\Phi }^n\left(1-\frac{D}{2}+\frac{D}{n}\right).$$ (33) Cancellations similar to the situation discussed above occur exactly in those cases where the theory is scale invariant, i.e. for $`2(n+D)=nD`$. In addition to the absence of any mass terms, the scale invariance implies a dimensionless coupling constant. ## IV Conclusions ### A Summary Utilizing the Wightman axioms we have shown for the scalar, massless, neutral, and self-interacting $`\lambda \mathrm{\Phi }^4`$-theory that no state exists which preserves Poincaré as well as scale invariance, i.e., all the symmetries of the Lagrangian. Accordingly, we are led to introduce the non-perturbative vacuum state by breaking the scale invariance. Consistent with the Wightman approach we employed the point-splitting technique, which allows for an explicit evaluation of renormalized expectation values. Within this formalism we calculated the scalar as well as the phionic condensate. The renormalized vacuum expectation value of the energy-momentum tensor vanishes, which implies that all vacua $`|\mathrm{\Psi }_\lambda ^\mathrm{\Lambda }\rangle `$ are ground states. ### B Discussion Raising the question of the existence of a unique vacuum state in a field theory including a non-trivial interaction term, we focused on the real and massless $`\lambda \mathrm{\Phi }^4`$-theory. Perturbation theory is based on the vacuum of the free theory, which can be uniquely determined and coincides with the ground state of the corresponding free Hamiltonian. It is evident from the beginning that a unique vacuum state should respect all the symmetries of the underlying theory, i.e. those of the Lagrangian. Since the vacuum state is defined via the free theory in perturbation theory, this property of the vacuum is established by brute force and is independent of the form of the interaction term in the Lagrangian.
In the framework of non-perturbative methods there is a need for the definition of a corresponding exact vacuum state of the interacting theory. Naively this state should again respect all the symmetries of the underlying Lagrangian, now incorporating the interaction term. The latter will in general have some impact on the vacuum state. As indicated below, the structure of the exact vacuum state becomes rather complex in comparison with the free (perturbative) vacuum. As a generic example we checked whether such a non-perturbative vacuum state can be found in the $`\lambda \mathrm{\Phi }^4`$-theory. To this end we started with the definition of a vacuum state as a state which preserves all the symmetries of the Lagrangian: Poincaré and scale invariance. However, it turned out that the conjectured vacuum state does not allow for the generation of all other states in the self-interacting theory by means of field operators, and thus it is inconsistent with the property of cyclicity of vacuum states. Therefore the vacuum cannot be unique. We conclude that the only reasonable way to define a vacuum of the $`\lambda \mathrm{\Phi }^4`$-theory is to break scale invariance. As a consequence, the vacuum state of the theory now depends on a new scale $`\mathrm{\Lambda }_\mathrm{\Phi }`$. We were able to interpret this scale in the framework of the operator product expansion (OPE), where the expectation value of a field product is decomposed into a sum of products consisting of two parts describing the long range and short range behavior, respectively. Here the scale $`\mathrm{\Lambda }_\mathrm{\Phi }`$ is to be identified with the factorization scale of the OPE, i.e. with the scale separating the components of long and short distances. This is of considerable importance in theories with asymptotic freedom, for which the short-distance part may be calculated perturbatively. However, this is not the case for the $`\lambda \mathrm{\Phi }^4`$-theory because of its QED-like asymptotic behavior (Landau pole). Finally we investigated possible consequences of the new scale $`\mathrm{\Lambda }_\mathrm{\Phi }`$ and its appearance in observable quantities of the $`\lambda \mathrm{\Phi }^4`$-theory. Starting from a very general structure for the two-point Wightman function, we found that the expectation value of the energy-momentum tensor vanishes non-trivially. Non-trivially means an exact cancellation of the scale dependence between the two terms contributing to the energy-momentum tensor, which occurs for scale invariant Lagrangians only. This zero result confirms the notion of the scale dependent vacuum state as a ground state of the theory. This result may also be compared to corresponding results obtained in the framework of perturbation theory, and one may ask about the relation of our zero result to the known trace anomalies. In Ref. the following expression for the trace anomaly has been derived: $`\langle T_\mu ^\mu \rangle _{\mathrm{ren}}={\displaystyle \frac{\beta }{4!}}\langle \mathrm{\Phi }^4\rangle _{\mathrm{ren}}.`$ (34) One should be aware that within renormalization theory perturbative results keep the same form independently of the momentum flow through the corresponding Feynman diagrams. Actually, in this paper we calculated the vacuum expectation value of the energy-momentum tensor. Owing to the translation invariance of the vacuum, the Fourier transform of every local expectation value contributes only at vanishing momentum. In order to compare our result in Eq. (32) with Eq.
(34) we have to evaluate the quantities there – especially the $`\beta `$-function – at zero momentum. Since the $`\lambda \mathrm{\Phi }^4`$-theory possesses an infra-red fixed point ($`\beta =0`$), our zero result $`\langle T_\mu ^\mu \rangle _{\mathrm{ren}}=0`$ for the energy-momentum tensor is in accordance with the calculations within perturbation theory – even though $`\langle \mathrm{\Phi }^4\rangle _{\mathrm{ren}}\ne 0`$. Nevertheless one should be careful in comparing perturbative and non-perturbative results, as one cannot expect in general that a non-perturbative result is related to any finite-order perturbative calculation. In addition, the comparison of results obtained within different renormalization procedures (i.e. dimensional regularization and point-splitting) is a delicate task. To elucidate the properties and the complex nature of the non-perturbative vacuum, we may analyze this state by considering e.g. its content of free particles $`\widehat{N}_{\vec{k}}^{\mathrm{free}}`$. Applying this number operator to the free vacuum yields zero, and it diagonalizes the free Hamiltonian $`\widehat{H}(\lambda =0)=\widehat{H}^{\mathrm{free}}=\int d^3r\widehat{T}_{00}^{\mathrm{free}}`$. The simultaneous ground state of all these non-negative operators $`\widehat{N}_{\vec{k}}^{\mathrm{free}}`$ is unique and coincides with the free vacuum. To calculate their expectation values it is sufficient to know the two-point function. Owing to the deviation of the exact Wightman function of the interacting theory from the free (scale invariant) two-point function (as proved in Section II), at least one expectation value differs from zero, $`\langle \mathrm{\Psi }_\lambda ^\mathrm{\Lambda }\left|\widehat{N}_{\vec{k}}^{\mathrm{free}}\right|\mathrm{\Psi }_\lambda ^\mathrm{\Lambda }\rangle >0`$ (35) indicating that the non-perturbative vacuum contains a non-vanishing amount of “free” scalar particles. This provides another hint of the non-triviality of the zero result in Eq. (32). The non-perturbative vacuum contains exactly such an amount of free particles that the contributions to the energy-momentum tensor of the interacting theory cancel. Traditional scattering theory is based on asymptotically free particles in the in- and out-states. For energy ranges where $`\langle \mathrm{\Psi }_\lambda ^\mathrm{\Lambda }\left|\widehat{N}_{\vec{k}}^{\mathrm{free}}\right|\mathrm{\Psi }_\lambda ^\mathrm{\Lambda }\rangle `$ yields significant contributions the naive application of the above formalism is not obviously justified. Instead the propagation of the particles is similar to that in a medium. The necessity of breaking the scale symmetry in a non-perturbative approach has consequences for the application of the path-integral formalism. The generating functional $`W[J]={\displaystyle \int 𝒟\mathrm{\Phi }\mathrm{exp}\left(i\int d^4x\left[ℒ+J\mathrm{\Phi }\right]\right)},`$ (36) if it exists beyond perturbation theory with the usual regular measure $`𝒟\mathrm{\Phi }`$, is scale invariant per definition. So are all expectation values deduced from it. Usually these expectation values may be identified with the vacuum expectation values, which are then scale invariant as well. But this is inconsistent with the scale dependence of the exact vacuum state. It follows that the usual scale invariant path-integral formalism is not naively applicable to non-perturbative analytical calculations in the case of the $`\lambda \mathrm{\Phi }^4`$-theory.
Of course, the argument presented above does not apply to lattice calculations, where the lattice spacing induces a scale which may be connected with the intrinsic scale of the vacuum. In summary, the results obtained so far motivate further examinations concerning the relation of the presented non-perturbative approach to other formalisms. Furthermore, it might be interesting to extend the method for the explicit non-perturbative evaluation of expectation values – as presented in this article – to other observables. ### C Outlook We expect that the non-uniqueness of the vacuum state is a more general feature, which holds true in other scale invariant field theories as well. This may especially be the case for the gauge sector of QCD – a statement which is currently under consideration. If so, our assertion may have consequences for the current efforts to find a treatable approach in QCD in the medium energy range of some $`\mathrm{GeV}`$. The Lagrangian governing the dynamics of the gluons, $`ℒ=-G_{\mu \nu }^aG_a^{\mu \nu }/4`$, is scale invariant, as is the Lagrangian of the $`\lambda \mathrm{\Phi }^4`$-theory. In contrast to the latter case further difficulties arise. The character of this field theory as a gauge field theory implies primary and secondary constraints. The equations of motion are more involved and contain terms linear and quadratic in the coupling $`g`$. On the other hand there is an additional $`SU(3)`$-color symmetry. In QCD the expectation value of the Lagrangian density of the gluonic sector, $`\langle \widehat{G}_{\mu \nu }^a\widehat{G}_a^{\mu \nu }\rangle _{\mathrm{ren}}`$, represents the gluonic condensate. The calculation of this quantity in analogy to Sec. III C might provide some interesting insights owing to its considerable relevance in QCD sum rules (see e.g. ) and more generally for the OPE. The energy-momentum tensor of the pure gluonic sector is traceless at the classical level. Then Poincaré invariance would imply the vanishing of its renormalized vacuum expectation value, $`\langle \widehat{T}_{\mu \nu }\rangle _{\mathrm{ren}}=0.`$ (37) Nevertheless, in analogy to the $`\lambda \mathrm{\Phi }^4`$-theory, the phenomenon of a trace anomaly occurs in QCD as well. Since the Yang-Mills theory possesses a low-momentum behavior different from that of the $`\lambda \mathrm{\Phi }^4`$-theory, the arguments presented in Sec. IV B do not necessarily apply in this case. This may result in a non-vanishing expectation value. ## V Appendix: The Wightman axioms For the free field there exist two different options to define the vacuum: firstly as the ground state of the Hamiltonian, and secondly as the state which is Poincaré invariant. For the interacting field the former possibility does not apply in general. In the following we recapitulate an axiomatic approach to quantum field theory based on the Wightman formalism that utilizes Poincaré invariance. The quantum field $`\widehat{\mathrm{\Phi }}`$ is represented by an operator-valued tempered distribution acting on a separable Hilbert space $`ℋ`$. The convolution of operator-valued tempered distributions with smooth test functions of compact support yields regular operators which generate an algebra $`𝔄`$.
Poincaré transformations are mediated via unitary operators $`\widehat{U}(\underset{¯}{L},\underset{¯}{a})`$, $`\widehat{U}(\underset{¯}{L},\underset{¯}{a})^{-1}\widehat{\mathrm{\Phi }}(\underset{¯}{x})\widehat{U}(\underset{¯}{L},\underset{¯}{a})=\widehat{\mathrm{\Phi }}(\underset{¯}{Lx}+\underset{¯}{a}).`$ (38) The Hilbert space $`ℋ`$ possesses a cyclic and Poincaré invariant state $`|\mathrm{\Psi }_0\rangle `$, i.e. $`\widehat{U}(\underset{¯}{L},\underset{¯}{a})|\mathrm{\Psi }_0\rangle =|\mathrm{\Psi }_0\rangle `$, which is called the vacuum. By the definition of cyclicity, all other states $`|\mathrm{\Psi }\rangle `$ of the Hilbert space $`ℋ`$ can be created by acting with an appropriate functional $`F_\mathrm{\Psi }[\widehat{\mathrm{\Phi }}]`$ on the vacuum: $`|\mathrm{\Psi }\rangle `$ $`=`$ $`F_\mathrm{\Psi }[\widehat{\mathrm{\Phi }}]|\mathrm{\Psi }_0\rangle ,`$ (39) $`ℋ`$ $`=`$ $`𝔄|\mathrm{\Psi }_0\rangle .`$ (40) As a consequence, the expectation values of all observables in all states can be expressed in terms of vacuum expectation values of field operators – the Wightman functions (reconstruction theorem). In order to represent a realistic field theory the Wightman functions have to fulfill certain properties. These axioms are presented in the following for the example of the two-point function of a neutral scalar field $`\widehat{\mathrm{\Phi }}`$, see e.g. . ### A Definition To ensure the character of the quantum field as an operator-valued tempered distribution the two-point Wightman function $`W(\underset{¯}{x},\underset{¯}{x}^{\prime })=\langle \mathrm{\Psi }_0\left|\widehat{\mathrm{\Phi }}(\underset{¯}{x})\widehat{\mathrm{\Phi }}(\underset{¯}{x}^{\prime })\right|\mathrm{\Psi }_0\rangle `$ (41) has to be a tempered bi-distribution. The property of neutral fields to be described by hermitian operators implies $`W^{\ast }(\underset{¯}{x},\underset{¯}{x}^{\prime })=W(\underset{¯}{x}^{\prime },\underset{¯}{x}).`$ (42) ### B Covariance In order to generate a Poincaré invariant vacuum the Wightman functions must exhibit the same invariance, $`W(\underset{¯}{x},\underset{¯}{x}^{\prime })=W(\underset{¯}{L}\underset{¯}{x}+\underset{¯}{a},\underset{¯}{L}\underset{¯}{x}^{\prime }+\underset{¯}{a}),`$ (43) for all translations $`\underset{¯}{a}`$ and all rotations $`\underset{¯}{L}`$ of the restricted Lorentz group $`L_+^{\uparrow }=\{\underset{¯}{L}:\mathrm{det}\underset{¯}{L}=1,L_0^0>0\}`$, which contains all transformations connected continuously to the identity, i.e. no time and/or space inversion. Translation invariance implies that the Wightman function depends only on the difference of the coordinates, $`\underset{¯}{x}-\underset{¯}{x}^{\prime }`$. Inside each light cone only $`(\underset{¯}{x}-\underset{¯}{x}^{\prime })^2`$ enters the Wightman functions, due to rotational symmetry. However, they may differ in their values inside the future and the past light cone, respectively. ### C Spectral condition The properties listed above allow for the Fourier transformation of the Wightman function according to $`W(\underset{¯}{x},\underset{¯}{x}^{\prime })`$ $`=`$ $`ℱ\left(\stackrel{~}{W}\right)`$ (44) $`=`$ $`{\displaystyle \int d^4k\stackrel{~}{W}(\underset{¯}{k})\mathrm{exp}\left(-i\underset{¯}{k}(\underset{¯}{x}-\underset{¯}{x}^{\prime })\right)}.`$ (45) It should be stated that all considerations employ the Minkowski metric $`g_{\mu \nu }=\mathrm{diag}(+1,-1,-1,-1)`$.
To ensure the stability of the theory the support of this Fourier transform $`\stackrel{~}{W}(\underset{¯}{k})`$ has to be contained in the closed forward cone $`\overline{V_+}=\{\underset{¯}{k}:\underset{¯}{k}^2\ge 0,k_0\ge 0\}`$: $`k_0<0`$ $`\Rightarrow `$ $`\stackrel{~}{W}(\underset{¯}{k})=0`$ (47) $`\underset{¯}{k}^2<0`$ $`\Rightarrow `$ $`\stackrel{~}{W}(\underset{¯}{k})=0.`$ (48) $`k_0`$ is related to the energy and thus the first condition, $`k_0\ge 0`$, prevents the system from collapsing. Poincaré symmetry implies the vanishing of the Fourier transform in the whole space-like region. ### D Locality By Einstein causality, space-like separated events cannot interfere. As a result we require the fields to commute at space-like distances and therefore the Wightman functions to be symmetric in this case: $`(\underset{¯}{x}-\underset{¯}{x}^{\prime })^2<0\Rightarrow W(\underset{¯}{x},\underset{¯}{x}^{\prime })=W(\underset{¯}{x}^{\prime },\underset{¯}{x}).`$ (49) For neutral fields the Wightman functions are therefore completely real at space-like distances. ### E Positivity Smearing the (hermitian) operator-valued distributions $`\widehat{\mathrm{\Phi }}(\underset{¯}{x})`$ with smooth test functions of compact support $`G(\underset{¯}{x})`$ one acquires regular operators $`\widehat{\mathrm{\Phi }}(G)={\displaystyle \int d^4x\widehat{\mathrm{\Phi }}(\underset{¯}{x})G(\underset{¯}{x})}.`$ (50) The absolute value squared of an operator, $`|\widehat{\mathrm{\Phi }}(G)|^2=[\widehat{\mathrm{\Phi }}(G)]^{\dagger }\widehat{\mathrm{\Phi }}(G)=\widehat{\mathrm{\Phi }}(G^{\ast })\widehat{\mathrm{\Phi }}(G)`$, and thereby also its expectation value, are non-negative. Therefore the Wightman functions have to obey the following positivity condition for all test functions $`G`$: $`{\displaystyle \int d^4x\int d^4x^{\prime }G^{\ast }(\underset{¯}{x})W(\underset{¯}{x},\underset{¯}{x}^{\prime })G(\underset{¯}{x}^{\prime })}\ge 0.`$ (51) Applying the Fourier transformation to this inequality, the positivity requirement takes a very simple form in terms of the Fourier transform of the Wightman function: $`\stackrel{~}{W}(\underset{¯}{k})\ge 0.`$ (52) ### F Cluster property The existence of a unique translationally invariant state (i.e. the vacuum) $`|\mathrm{\Psi }_0\rangle `$ is used (cf. ) to deduce the cluster property of quantum field theories, $`\underset{\underset{¯}{s}^2\to -\infty }{lim}\langle \mathrm{\Psi }_0\left|\widehat{A}(\underset{¯}{x}+\underset{¯}{s})\widehat{B}(\underset{¯}{x}^{\prime })\right|\mathrm{\Psi }_0\rangle `$ (53) $`=\langle \mathrm{\Psi }_0\left|\widehat{A}(\underset{¯}{x})\right|\mathrm{\Psi }_0\rangle \langle \mathrm{\Psi }_0\left|\widehat{B}(\underset{¯}{x}^{\prime })\right|\mathrm{\Psi }_0\rangle ,`$ (54) where $`\widehat{A},\widehat{B}`$ are operators composed out of fields. This property is crucial for defining the S-matrix. The existence of more than one translationally invariant state in the Hilbert space $`ℋ`$ would imply that the cluster property does not hold in general. However, for operators associated with physically meaningful events the cluster property should remain valid, because events at large space-like distances are asymptotically uncorrelated. Recalling the scale dependence and thereby non-uniqueness of the vacuum of the considered $`\lambda \mathrm{\Phi }^4`$-theory, one is led to the question of whether the cluster property is satisfied in this case. Since the Hilbert space is constructed starting from the cyclic vacuum, $`ℋ(\lambda ,\mathrm{\Lambda }_\mathrm{\Phi })=𝔄|\mathrm{\Psi }_\lambda ^\mathrm{\Lambda }\rangle `$, it may also depend on the scale.
The remaining question is whether different vacuum states corresponding to different scales belong to the same Hilbert space or not, i.e. whether $`|\mathrm{\Psi }_\lambda ^\mathrm{\Lambda }\rangle \in ℋ(\lambda ,\mathrm{\Lambda }_\mathrm{\Phi }^{\prime }).`$ (55) Indeed, it is possible that different values of the scale $`\mathrm{\Lambda }_\mathrm{\Phi }`$ correspond to distinct Hilbert spaces $`ℋ(\lambda ,\mathrm{\Lambda }_\mathrm{\Phi }^{\prime })`$, which are not connected by local excitations. Such a situation, where different global features generate distinct Hilbert space representations (super-selection sectors), occurs for example in field theories at different values of the temperature. If the physically realized vacuum state coincides with such a vacuum state – which generates a Hilbert space containing only one translationally invariant state – then the cluster property still holds. Acknowledgment The authors are grateful to H. Araki, D. Diakonov, R. Jackiw, F. Krauss and D. J. Schwarz for valuable conversations and kind comments. R. S., R. K., and G. P. acknowledge fruitful discussions with K. Sailer and Z. Schram during their stay at the Department of Theoretical Physics of the University of Debrecen, Hungary. This visit was supported by MÖB and DAAD. Financial support from BMBF, DFG and GSI is gratefully acknowledged.
# Quantum Protectorates in the Cuprate Superconductors ## 1 Introduction In seeking a theoretical understanding of the cuprate superconductors, a key question from the outset has been the relevance of attempts to deduce their behavior from first principles. As Laughlin and I have recently argued, ab-initio computations have failed completely to explain their phenomenology; indeed it would appear not only that deduction from microscopics has not been able to explain the wealth of crossover behavior found in the underdoped cuprates, but that as a matter of principle it probably cannot explain it, much less calculate the high transition temperatures found at optimal doping. We concluded that a more appropriate starting point would be to focus on the results of experiments on the low energy properties of the novel states of matter found in the cuprates, in the hope of identifying the corresponding quantum protectorates – stable states of matter whose generic low energy properties, insensitive to microscopics, are determined by a higher organizing principle and nothing else. To the extent one has correctly identified the quantum protectorate, one would then hope that microscopic protectorate-based toy model calculations might be relevant for understanding experiment. From this perspective, in conventional superconductors one observes a transition from a Landau Fermi liquid protectorate to the BCS s-wave protectorate; each protectorate can be characterized by a small number of parameters, which can be determined experimentally, but which are, in general, impossible to calculate from first principles. In the cuprate superconductors both the normal state and the superconducting state protectorates differ dramatically from their conventional superconductor counterparts. There is now a consensus that the superconducting protectorate is a BCS d-wave protectorate, but no consensus has been reached on the nature of the novel normal state phases. Candidate protectorates that have been proposed include Luttinger liquids, nearly antiferromagnetic Fermi liquids, nearly charge-ordered Fermi liquids, mesoscopically ordered phases (stripes), and quantum critical behavior. In this talk I will present the case for one of these, the nearly antiferromagnetic Fermi liquid (NAFL) protectorate. ## 2 Experimental evidence for the NAFL protectorate and dynamical scaling For over a decade it has been known from NMR measurements that a single spin component is responsible for the planar <sup>63</sup>Cu and <sup>17</sup>O spin-lattice relaxation rates and Knight shifts, and that a quantitative account of these experiments, as well as the more recent experiments on the <sup>63</sup>Cu spin-echo decay time, may be obtained with the generic low energy dynamic magnetic susceptibility appropriate for a commensurate almost antiferromagnetic protectorate, $$\chi (𝐪,\omega )=\frac{\alpha \xi ^2}{1+\xi ^2(𝐪-𝐐)^2-i\omega /\omega _{sf}}$$ (1) where $`𝐐=(\pi ,\pi )`$ is the commensurate wavevector, $`\xi `$ is the antiferromagnetic correlation length, $`\omega _{sf}`$ is the frequency of the relaxational mode, and $`\alpha `$ is the scale factor that relates the static commensurate susceptibility, $`\chi _Q`$, to the square of the correlation length, $`\chi _Q=\alpha \xi ^2`$.
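A minimal numerical illustration of Eq. (1), a sketch using the representative parameter values quoted in the next paragraph rather than a fit to data:

```python
import numpy as np

# Representative parameters from the text: alpha ~ 15 states/eV, xi ~ 2
# lattice constants, omega_sf ~ 15 meV (optimally doped 1-2-3 near T_c).
alpha, xi, omega_sf = 15.0, 2.0, 0.015   # states/eV, units of a, eV
Q = np.pi                                 # one component of Q = (pi, pi), in 1/a

def chi(q, omega):
    """NAFL dynamic susceptibility of Eq. (1) along one momentum direction."""
    return alpha * xi**2 / (1.0 + xi**2 * (q - Q)**2 - 1j * omega / omega_sf)

print(chi(Q, 0.0).real)        # static chi_Q = alpha*xi^2 = 60 states/eV
print(chi(Q, omega_sf).imag)   # Im chi(Q, omega) peaks at omega = omega_sf
```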
For the optimally-doped 1-2-3 system, near $`T_c`$, $`\xi `$ is twice the lattice constant, $`a`$, and $`\alpha `$ is about 15 states/eV, so $`\chi _Q`$ is some 60 states/eV, large indeed compared to the expected Landau Fermi liquid value of about 1 state/eV; the corresponding value of $`\omega _{sf}`$ is some 15 meV, small compared to the Landau Fermi liquid value of about 1 eV. Still larger values of $`\chi _Q`$ and $`\xi `$, and smaller values of $`\omega _{sf}`$, are encountered as one goes to the underdoped materials. As sample quality and the range of experiments improved, it became possible for Barzykin and Pines to construct the generic magnetic phase diagram for 1-2-3, Bi-based, and Hg-based materials shown in Fig. 1 and subsequently confirmed in the NMR experiments of Curro et al. on YBa<sub>2</sub>Cu<sub>4</sub>O<sub>8</sub> and Aeppli *et al.* on nearly optimally doped 2-1-4. For magnetically underdoped materials, those that exhibit a maximum in the uniform temperature-dependent susceptibility at some temperature $`T_{cr}`$, experiment shows that at $`T_{cr}`$, $`\xi `$ is about $`2a`$, and the system crosses over from mean field $`z=2`$ behavior to a $`z=1`$ dynamical scaling regime. The corresponding weak pseudogap behavior persists until a second crossover in the normal state takes place at $`T^{\ast }`$, the temperature at which <sup>63</sup>Cu $`T_1`$T is minimum; in this strong pseudogap regime, one no longer has $`z=1`$ scaling, while ARPES experiments show that an energy gap develops for the quasiparticles near $`(\pi ,0)`$. In the 2-1-4 system, there is little or no evidence from spin-lattice relaxation rates for a crossover to strong pseudogap behavior. Magnetically overdoped materials do not exhibit any crossover behavior in the normal state; the AF correlations never become strong enough to bring about the weak pseudogap behavior associated with nascent spin density wave formation and $`z=1`$ dynamical scaling, and one finds a direct transition from normal state mean field behavior to superconductivity. ## 3 Theoretical support for the NAFL protectorate Since transport and specific heat measurements suggested that planar quasiparticles of some kind were determining the properties of the normal state, our theoretical group in Urbana was led to ask what might be the properties of a Fermi liquid which exhibits nearly antiferromagnetic behavior. We carried out microscopic calculations for a relevant toy model – quasiparticles, whose spectra were characterized by nearest neighbor and next-nearest neighbor hopping terms, interacting through spin-fluctuation exchange, with an effective interaction that is proportional to the dynamic magnetic susceptibility and hence reflects the near approach to antiferromagnetism required by the NMR experiments, $$V_{eff}=g^2\chi (𝐪,\omega )$$ (2) where $`g`$ is an arbitrary coupling constant. Weak coupling numerical calculations of the consequences of this experiment-based, highly anisotropic quasiparticle interaction (note that only quasiparticles in the vicinity of the “hot spots”, regions of the Fermi surface separated by Q, feel the full effects of the interaction) showed that the resulting resistivity would be roughly linear in temperature, and that for modest values of the coupling constant one could easily get a transition at high temperatures to a superconducting state with $`d_{x^2-y^2}`$ pairing.
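The anisotropy behind these results can be made concrete by reusing chi and the parameters from the sketch in Sec. 2; the cold-side momentum transfer below is an arbitrary illustrative choice.

```python
# V_eff = g^2 * chi(q, omega), Eq. (2): strongly peaked at Q, so "hot"
# quasiparticles connected by Q feel a far larger static interaction than
# "cold" ones far from it (the coupling g drops out of the ratio).
q_hot, q_cold = np.pi, 0.4 * np.pi    # cold value chosen for illustration
ratio = chi(q_hot, 0.0).real / chi(q_cold, 0.0).real
print(round(ratio, 1))                # ~ 15 for xi = 2a: strong hot/cold contrast
```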
Subsequent strong coupling (Eliashberg) calculations showed that for a coupling constant which yielded quantitative agreement with experiment for the magnitude and temperature dependence of the resistivity of optimally-doped 1-2-3, a transition to the $`d_{x^2-y^2}`$ state took place at about 100 K. Armed with this “proof of concept” for the candidate NAFL protectorate, Monthoux and I predicted that over time experimentalists would find our calculated pairing state, which indeed proved to be the case within the next few years. (Had they not done so, we were prepared to abandon our NAFL approach.) A second prediction was highly anisotropic quasiparticle behavior as one moves around the Fermi surface; the calculated frequency and temperature dependent self-energy of the hot quasiparticles located near $`(\pi ,\pi )`$ was highly anomalous, with an imaginary part which was proportional to the maximum of $`\omega `$ or $`T`$, while that of the cold quasiparticles, those near the diagonals from $`(0,0)`$ to $`(\pi ,\pi )`$ that feel little of the NAFL interaction, was Landau-Fermi-liquid like, a prediction that was borne out by subsequent ARPES experiments on overdoped materials. This highly anisotropic behavior of hot and cold quasiparticles was shown by Stojkovic to provide a natural explanation for the measured anomalous Hall transport and optical behavior of the overdoped materials. Using a Boltzmann equation approach, Stojkovic found that the cotangent of the Hall angle was determined almost entirely by the cold quasiparticles, and so would necessarily exhibit $`T^2`$ behavior for a wide range of dopings, while both hot and cold quasiparticles contribute to the conductivity in such a way as to yield the familiar linear-in-$`T`$ behavior of the resistivity. Stojkovic obtained quantitative agreement with experiments on the conductivity, Hall conductivity, optical conductivity, magnetotransport, and thermoelectric behavior of the overdoped and underdoped systems. Subsequent detailed calculations by Monthoux have shown that as long as one is in the mean-field regime, vertex corrections to the Eliashberg calculations are small, and do not bring about an appreciable change in the mean field calculations of transport properties. The success of these calculations makes a very strong case for the proposition that in magnetically overdoped systems the normal state is an NAFL protectorate with a dynamic magnetic susceptibility which exhibits $`z=2`$ dynamical scaling behavior. Matters are otherwise for magnetically underdoped systems, where (see Fig. 1) weak pseudogap behavior is found at temperatures below $`T_{cr}`$, where the dynamic magnetic susceptibility exhibits $`z=1`$ dynamical scaling behavior, while at still lower temperatures one finds a second crossover at $`T^{\ast }`$ to strong pseudogap behavior. Still, it proved possible to extend the NAFL toy model calculations to cover the weak pseudogap regime ($`T^{\ast }<T<T_{cr}`$), where Schmalian *et al.* have shown that in the classical limit of temperatures large compared to the spin fluctuation energy, $`\omega _{sf}`$, one can sum all the relevant diagrams in a perturbation-theoretic treatment of the interaction, Eq.
(2); they find, in agreement with experiment, a substantial transfer of spectral weight from low to high frequencies for the hot quasiparticles, a cross-over from $`z=2`$ to $`z=1`$ scaling behavior at $`\xi \sim 2a`$, and a uniform susceptibility that decreases as the temperature decreases, and show that all these phenomena are associated with the nascent spin density wave formation anticipated for longer AF correlation lengths. They found that the renormalized spin fluctuation-quasiparticle coupling is enhanced at low frequencies by vertex corrections, contrary to an argument presented by Schrieffer, but that at high frequencies it is reduced, in agreement with Schrieffer. Finally, Schmalian has developed an RG approach to the NAFL which takes into account the damping of spin waves by quasiparticle-quasihole pairs. He finds the crossovers from quantum disordered to $`z=1`$ quantum critical to mean field behavior that are seen experimentally, while his calculated temperature dependence of the AF correlation length agrees with that obtained by Aeppli *et al.* in their INS experiments on near-optimally-doped 2-1-4. ## 4 Open questions Despite the evident success of theoretical calculations based on the NAFL protectorate, there remain a number of open issues and questions. Foremost, what is the protectorate in the strong pseudogap regime? Experiment tells us that the leading edge gap measured for the hot quasiparticles in this regime develops rapidly as the temperature is lowered below $`T^{\ast }`$, while its doping dependence is anomalous, increasing with decreasing doping. This gap thus has a markedly different doping dependence than that found for the cold quasiparticles; the latter tracks $`T_c`$, which decreases as the doping is reduced. Until one understands this protectorate, no model calculation of the superconducting transition temperature for the underdoped cuprates is possible. A second question, for the 2-1-4 system, is the origin of the incipient, and typically dynamic, mesoscopic ordering of spin and charge inferred from the appearance of incommensurate peaks in the INS measurements of spin fluctuations. Is this ordering related to the failure to find any magnetic evidence for strong pseudogap behavior in these materials? Clearly for this system one needs to sort out the consequences, including quantum critical behavior, of the competition between antiferromagnetism, superconductivity, and stripe formation. Why, for similar doping levels, do the 2-1-4 materials have substantially lower superconducting transition temperatures than the 1-2-3, Bi-based, or Hg-based systems? Does their greater tendency toward mesoscopic ordering inhibit superconductivity? But why do they possess this greater tendency? And why do INS and NMR experiments exhibit no sign of strong pseudogap behavior (where is the $`T^{\ast }`$)? In summary, it can reasonably be argued that we have been able to identify the protectorates associated with two of the three normal state phases found in magnetically underdoped superconductors – the $`z=2`$ NAFL mean field protectorate found above $`T_{cr}`$ and the $`z=1`$ NAFL protectorate found between $`T_{cr}`$ and $`T^{\ast }`$ – and to demonstrate that for these protectorates there is no spin-charge separation. But the issue of the strong pseudogap protectorate remains open, and until it is solved, and the above questions answered, we will be far from possessing a complete understanding of these remarkable systems.
# ISO observations of the reflection nebula Ced 201: evolution of carbonaceous dust Based on observations at the Caltech Submillimeter Observatory (CSO) and with ISO, an ESA project with instruments funded by ESA member states (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with the participation of ISAS and NASA. ## 1 Introduction The mid–infrared emission bands at 3.3, 6.2, 7.7, 8.6, 11.3 and 12.7 $`\mu `$m are ubiquitous in the interstellar medium (ISM). The coincidence of these bands with characteristic wavelengths of aromatic molecules and attached functional groups (see e.g. Allamandola et al. Allamandola89 (1989)) is sufficiently convincing to call them Aromatic Infrared Bands (AIBs). Other, more neutral designations are Unidentified Infrared Bands (UIBs) or Infrared Emission Features (IEFs). The designation should stabilize in the future. Their carriers are very small grains (or big molecules) heated transiently by the absorption of a single UV (Sellgren et al. Sellgren83 (1983)) or visible (Uchida et al. Uchida98 (1998); Pagani et al. Pagani99 (1999)) photon. They are often identified with Polycyclic Aromatic Hydrocarbons (PAHs) (Léger & Puget Leger (1984), Allamandola et al. Allamandola85 (1985)). Duley & Williams (Duley88 (1988)) and Duley et al. (Duley93 (1993)) have proposed an alternative interpretation in which the carriers of the AIBs are not free particles but “graphitic islands” (i.e. akin to PAHs) loosely attached to each other so that the heat conductivity is small. Then their thermal behaviour is comparable to that of free particles. Similar models have also been proposed by others. The carriers could also be 3–D very small particles. In this Letter, we will not enter into this debate and will simply call them the AIB carriers. The origin of the AIB carriers is still unclear. The idea that they are formed in the envelopes of carbon stars is not supported by observation and meets with theoretical difficulties (e.g. Cherchneff et al. Cherchneff (1992)). They might be formed outside carbon star envelopes from carbonaceous particles condensed previously in these envelopes. Schnaiter et al. (Schnaiter99 (1999)) and others have proposed such an evolutionary scheme from carbon stars to planetary nebulae. However, the AIB carriers formed in this way might not survive the strong UV field of planetary nebulae. For example, Cox et al. (Cox (1998)) find no AIB in the Helix nebula, an old carbon–rich planetary nebula. Another possibility is that they are formed in the ISM from carbonaceous grains or carbonaceous mantles covering silicate nuclei. Boulanger et al. (Boulanger (1990)) and Gry et al. (Gry (1998)) have suggested that the strong variations in the IRAS 12 $`\mu `$m/100 $`\mu `$m interstellar flux ratio are due to the release of transiently heated particles (emitting at 12 $`\mu `$m) by bigger particles, respectively due to UV irradiation or to shattering by shocks. Thanks to the Infrared Space Observatory (ISO), we now know that in the general ISM the emission in the IRAS 12 $`\mu `$m band is dominated by AIBs. Variations in the ratio of AIB carriers to big grains are thus confirmed. Laboratory experiments shed some light on this process. Scott et al. (Scott97 (1997)) have observed the release of aromatic carbon clusters containing in excess of 30 carbon atoms by solid hydrogenated amorphous carbon (HAC) irradiated by a strong UV laser pulse.
The energy deposited by the laser pulse per unit target area is similar to the energy of a central collision between two grains of 10 nm radius at a velocity of 10 km s<sup>-1</sup>. This is an indication of a possible release of AIB carriers in interstellar shocks. ISO has also shown the existence of variations of AIB spectra that may be related to a transformation of carbonaceous grains. Recently, Uchida et al. (Uchida00 (2000)) have observed a broadening of the 6.2, 7.7 and 8.6 $`\mu `$m AIBs when going from strong to weak radiation field in the reflection nebula vdB 17 = NGC 1333. They also notice an increase of the flux ratio $`I`$(5.50–9.75 $`\mu `$m)/$`I`$(10.25–14.0 $`\mu `$m) with increasing radiation field in several reflection nebulae. We present mid–IR spectrophotometric and CO(2–1) line observations of another reflection nebula, Ced 201, that shed light on this transformation. Section 2 briefly describes the observations and reductions. Sect. 3 describes the results, and Sect. 4 contains a discussion and the conclusions. ## 2 ISO observations and data reduction The observations have been made with the 32$`\times `$32 element mid-infrared camera (ISOCAM) on board ISO, with the Circular Variable Filters (CVFs) (see Cesarsky et al. 1996a for a complete description). The observations employed a 6″/pixel magnification, yielding a field of view of about 3′$`\times `$3′. Full scans of the two CVFs in the long-wave channel of the camera have been performed, covering a wavelength range from 5.15 to 17 $`\mu `$m. Ten exposures of 2.1 s each were added for each step of the CVF, and 20 extra exposures were added at the beginning of each scan in order to limit the effect of the transient response of the detectors. The total observing time was about 1 hour. The raw data were processed as described in Cesarsky et al. (1996b) using the CIA software<sup>1</sup> (Starck et al. Starck (1999)). <sup>1</sup>CIA is a joint development by the ESA Astrophysics Division and the ISOCAM Consortium, led by the ISOCAM PI, C. Cesarsky. However, the new transient correction described by Coulais & Abergel (Coulais (1999)) has been applied, yielding considerable improvements with respect to previous methods. ## 3 Results The reflection nebula Ced 201 is a rather compact object at a distance of 420 pc (Casey Casey (1991)), on the edge of a molecular cloud. It is excited by the B9.5 V star BD +69°1231. Witt et al. (Witt (1987)) notice that the radial velocity of this star differs from that of the molecular cloud by 11.7 $`\pm `$ 3.0 km s<sup>-1</sup>, so that Ced 201 is probably the result of an accidental encounter of the star with the molecular cloud, while for most other reflection nebulae the exciting star was born in situ. An arc-like structure located between the star and the denser parts of the cloud to the north and the north-east, at about 18″ from the star, might represent a shock due to the supersonic motion of the star. It is not very clearly visible in Fig. 1 but is well visible in the red POSS1 image delivered by ALADIN (http://aladin.u-strasbg.fr). Our CVF observation shows a small source surrounded by a faint extended emission (Fig. 1). The absolute positioning of the ISOCAM images is uncertain due to the lens wheel not always returning to its nominal position (the maximum error is twice the pixel field of view, i.e. 12 arcsec here), and the images have to be recentered on known objects when possible.
Since the spectra of the pixels corresponding to the strong source show a continuum between 5.15 and 5.5 $`\mu `$m that we attribute to the photospheric emission of the exciting star (see Fig. 3), we have recentered the CVF image at these wavelengths on the nominal SIMBAD position of BD +69°1231 ($`\alpha `$(J2000) = 22h 13m 27s, $`\delta `$(J2000) = +70° 15′ 18″). This resulted in Fig. 1. Unfortunately, the astrometry of BD +69°1231 is itself somewhat problematic due to the surrounding bright nebulosity, and the location of the CVF image on the optical image is correspondingly uncertain. Also, the proper motion of the star from the original BD position is unknown. Fig. 2 presents a set of CO(2–1) spectra of the molecular cloud obtained by us at the Caltech Submillimeter Observatory with 30″ HPBW and 30″ sampling. These observations are better sampled than the CO(2–1) map of Kemper et al. (Kemper (1999); 21″ HPBW, 60″ sampling) but are in good agreement with it. The line intensity peaks in the direction of the star, probably due to local excitation of CO. The higher–resolution <sup>13</sup>CO(2–1) map of Kemper et al. (Kemper (1999); 21″ HPBW, 20″ sampling) shows that this is not the position of maximum column density, which peaks well to the NE of the star, confirming this conclusion. The <sup>12</sup>CO profiles are double peaked in this area, in particular towards the arc, and this complicates the search for line wings, which would be a signature of a shock. There is, however, some suggestion of a negative–velocity wing near the star, in the same sense as the radial velocity of this star. Fig. 3 displays a set of 7$`\times `$7 CVF spectra on a grid with 6″$`\times `$6″ spacing centered on the exciting star. One sees typical AIB spectra near the emission peak evolving towards fainter, different spectra 12–18 arcseconds away. The spectra near the peak show not only the classical AIBs, but also the S(3) rotation line of H<sub>2</sub> at 9.6 $`\mu `$m and the S(5) line at 6.91 $`\mu `$m or the line of \[Ar ii\] at 6.98 $`\mu `$m, which cannot be separated at our resolution. The 12.7 $`\mu `$m AIB might be contaminated by the \[Ne ii\] 12.8 $`\mu `$m line. These spectra are typical of a low–excitation photodissociation region (PDR). Away from the peak, the AIBs are superimposed on a continuum rising towards long wavelengths. This could be the continuum emission of 3–D very small grains (VSGs). The AIBs are broader and much fainter relative to the continuum. The 11.3 $`\mu `$m AIB is now the strongest one at this location. The 7.7 and 8.6 $`\mu `$m AIBs are merged into a single broad band. The most striking feature is the strong emission plateau extending from 11 to 14 $`\mu `$m. These spectra resemble Class B spectra (Tokunaga Tokunaga97 (1997)) (Class A spectra are the usual AIB spectra). ## 4 Discussion and conclusions Fig. 3 shows a behaviour of the spectrum with radiation field similar to that observed by Uchida et al. (Uchida00 (2000)) in some other reflection nebulae. We find a trend for the 7.7 $`\mu `$m AIBs to become broader at fainter radiation fields away from the exciting star (Fig. 4). The trend is not clear for the other bands, but we should not forget that there is contamination by the AIBs from foreground and background material.
We suggest that some carbonaceous material that emits the continuum and broad bands far from the star is processed through the effect of the star moving through the molecular cloud, producing AIB carriers: the continuum is only a factor of 2 fainter 12″ from the star than close to it, demonstrating the partial disappearance of its carriers near the star, while the AIBs become very strong. The very appearance, at relatively large distances from the star (at least 18″), of a continuum rising towards long wavelengths is surprising. This continuum must be emitted by VSGs heated by single (visible) photons. These grains must be much smaller than the “classical” VSGs, which require a strong radiation field to emit in the wavelength range of ISOCAM (Contursi et al. Contursi (1998)). The spectra seen far from the star are strongly reminiscent of Class B spectra seen in carbon–rich proto–planetary nebulae (Guillois et al. Guillois96 (1996)). In the laboratory, such spectra are produced (in absorption) by natural coals rich in aromatic cycles (Guillois et al. Guillois96 (1996)) or by a-C:H materials produced by laser pyrolysis of hydrocarbons (Herlin et al. Herlin98 (1998); Schnaiter et al. Schnaiter99 (1999)). In Ced 201, these carbonaceous grains must be very small (radius of the order of 1 nm) since they are heated transiently by visible photons to the temperatures of about 250 K necessary to emit the observed mid–IR features. Since this is also the approximate size of the classical AIB carriers, it would not be appropriate to say that these particles release these carriers. One should instead invoke chemi–physical transformations like aromatization (Ryter Ryter (1991)). In any case, these small grains were already present before the star penetrated the molecular cloud. They are expected to be present elsewhere in the ISM. We find that, assuming spherical symmetry, the total emissivity per unit volume in the CVF spectral range (5.15 to 17 $`\mu `$m) decreases approximately as the inverse square root of the distance to the star, independently of the shape of the spectrum. We base the following calculation on the light diffusion model of Witt et al. (Witt (1987)), who postulate a uniform density, which we find equal to n(H) = 2n(H<sub>2</sub>) = 1800 cm<sup>-3</sup>. This density corresponds to a visual extinction of only 0.02 mag per 6″ angular distance. Since the integrated mid–IR emission is proportional to the stellar radiation density, we can derive an energy conversion efficiency, $`\eta `$ = 7.5 % (emitted in 4$`\pi `$ steradians). Knowing the density and the radiation field of the B9.5 V star, and adopting an absorption cross–section per carbon atom of $`3\times 10^{-18}`$ cm<sup>2</sup> (Allamandola et al. Allamandola89 (1989)) and an interstellar carbon abundance C/H = $`2.3\times 10^{-4}`$ (Snow & Witt Snow95 (1989); see also Andrievsky et al. Andrievsky (1999) for carbon abundance in B stars), we find that the fraction of carbon locked in the very small particles emitting in the mid–IR is 15%. This is in agreement with the independent analysis of Kemper et al. (Kemper (1999)), who find that 10% of the gas–phase carbon is locked in particles smaller than 1.5 nm. The absorption by these particles contributes about 1/4 of the extinction in the visible, a considerable fraction. We have examined the ratio of the strengths of the 6.2 $`\mu `$m (a vibration of the aromatic skeleton) and of the 11.3 $`\mu `$m AIB features (a C–H bending mode) as a function of distance to the exciting star.
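As an aside, the 15% carbon fraction and the statement that these particles supply about 1/4 of the visual extinction can be cross-checked against each other with a few lines of arithmetic; the N(H)/A<sub>V</sub> conversion factor used below is an assumed standard interstellar value, not a number from this paper.

```python
# Fraction of the visual extinction supplied by small grains holding a
# fraction f of the gas-phase carbon, using the quoted cross-section per
# C atom (3e-18 cm^2) and abundance (C/H = 2.3e-4). The conversion
# N_H / A_V ~ 1.9e21 cm^-2 mag^-1 is an assumed standard value.
f, sigma_C, C_over_H = 0.15, 3e-18, 2.3e-4
tau_V_per_H = 1.0 / (1.086 * 1.9e21)   # visual optical depth per H atom, cm^2
ratio = f * C_over_H * sigma_C / tau_V_per_H
print(round(ratio, 2))                 # ~ 0.2, consistent with "about 1/4"
```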
No variation has been seen in this 6.2 $`\mu `$m/11.3 $`\mu `$m ratio within our rather large (30%) errors. It is interesting to note that Witt et al. (Witt (1987)) find, from UV–visible scattering studies of Ced 201, that the grains responsible for the visible scattering have a narrow size distribution skewed towards big, wavelength–sized grains. This is mostly seen within about 20″ from the star. On the other hand, the very strong 2175 Å extinction band suggests the presence of many smaller carbon grains. All this is evidence for grain processing by the radiation of the star, and/or by shattering of grains by a shock wave associated with its motion. The very small carbonaceous grains we see in Ced 201 through their mid–IR emission might be responsible for the 2175 Å band. In summary, we have shown the existence of transformations, near a star, of very small 3–D hydrogenated carbonaceous grains which are probably present everywhere in the ISM. This transformation of carbonaceous grains into AIB carriers might be due either to the radiation of a B9.5 V star, or to a shock induced by the supersonic motion of the star through the molecular cloud. Although the existence of a shock is indicated by an optical arc visible 18″ from the star (and also suspected from our CO observations), there is no obvious sign of it in the spectra shown in Fig. 3, and it may not be the cause of the transformation. The very small grains, as well as the AIB carriers formed from them, are excited by the light of the star. The conversion efficiency between received excitation energy and mid–IR emitted energy is of the order of 7.5% for both kinds of particles, and the fraction of carbon locked in these particles is approximately 15%. ###### Acknowledgements. We thank Cecile Reynaud and Olivier Guillois for interesting discussions, Kris Sellgren for suggesting Fig. 4 and the referee Adolf N. Witt for his useful remarks. The CSO is funded by NSF contract AST 96-15025.
During the past few years, remarkable progress has been made in cosmology, both observational and theoretical. One of the outcomes of these rapid developments is the increased confidence that most of the energy density of the observable universe is of an unusual form, i.e., not made up of the ordinary matter (baryons and electrons) that we see around us in our everyday world. There are convincing arguments for the existence of a large amount of non-luminous, i.e., dark, matter. The matter content of the universe is at least a factor of 5 higher than the maximum amount of baryonic matter implied by big bang nucleosynthesis. This dark matter is thus highly likely to be “exotic”, i.e., non-baryonic. There are also indications, although still not entirely conclusive, of the existence of vacuum energy, corresponding to the famous “cosmological constant” that Einstein introduced but later rejected (although without very good reasons) in his theory of general relativity. This possibility has recently been given increased attention due to results from Type Ia supernova surveys. (For a recent review of the observational status of dark matter and dark energy, see .) It may be interesting to investigate possible consequences of the cosmological constant besides its influence on the geometry of the universe and the redshift dependence of the luminosity-distance relation for standard candles. This is the subject of the present paper. We find disagreement with a recent paper where the cosmological constant was claimed to influence the rotation curves of galaxies strongly. However, some small effects on the dynamics of galaxy clusters do not seem excluded. Let us first set our conventions. Einstein's equations read $$R^{\mu \nu }-\frac{1}{2}g^{\mu \nu }R-\mathrm{\Lambda }g^{\mu \nu }=8\pi G_NT^{\mu \nu }.$$ (1) The energy density in the form of a cosmological constant $`\mathrm{\Lambda }`$ can be conveniently written in units of the density scaled to the critical density, $$\mathrm{\Omega }_\mathrm{\Lambda }\equiv \frac{\rho _\mathrm{\Lambda }}{\rho _{\mathrm{crit}}},$$ (2) where $$\rho _\mathrm{\Lambda }=\frac{\mathrm{\Lambda }}{8\pi G_N}$$ (3) with $`G_N=1/m_{\mathrm{Pl}}^2`$ (the numerical value of the Planck mass is $`m_{\mathrm{Pl}}=1.2\times 10^{19}`$ GeV) and the present value of the critical density $$\rho _{\mathrm{crit}}=\frac{3H_0^2}{8\pi G_N}.$$ (4) Thus, $$\mathrm{\Omega }_\mathrm{\Lambda }=\frac{\mathrm{\Lambda }}{3H_0^2}.$$ (5) Writing $`H_0=100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup> (with $`h\simeq 0.6\pm 0.1`$ from observations), the present numerical value of the critical density is $$\rho _{\mathrm{crit}}\simeq 8\times 10^{-47}h^2\mathrm{GeV}^4.$$ (6) This was derived using particle physics units ($`c=\mathrm{\hbar }=1`$). Expressed in $`cgs`$ units, the presently observationally favoured value $`\mathrm{\Omega }_\mathrm{\Lambda }\simeq 0.7\pm 0.2`$ then translates to $$\mathrm{\Lambda }_{\mathrm{obs}}\simeq 10^{-56}\mathrm{cm}^{-2}.$$ (7) In a recent preprint, an attempt was made to explain the flat rotation curves of galaxies as being due to the effect of a cosmological constant instead of the “traditional” explanation in terms of dark matter. However, values some three to four orders of magnitude larger than that in (7) were needed, something which is clearly in extreme disagreement with observations. In fact, there seems to be a further mistake, a sign error, in .
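As a quick numerical check of Eq. (7), a sketch with h = 0.7 and Ω<sub>Λ</sub> = 0.7 taken as illustrative mid-range values:

```python
# Lambda = 3 * Omega_Lambda * H0^2 / c^2 in cgs units, cf. Eqs. (5) and (7).
h, Omega_L = 0.7, 0.7                  # illustrative mid-range values
H0 = 100.0 * h * 1.0e5 / 3.086e24      # km/s/Mpc -> s^-1
c = 2.998e10                           # cm/s
Lam = 3.0 * Omega_L * H0**2 / c**2
print(f"{Lam:.1e} cm^-2")              # ~ 1.2e-56, matching Eq. (7)
```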
A positive cosmological constant, as favored by the observations, will tend to accelerate the expansion of the universe and, if anything, make matter in the outer regions of galaxies less rather than more bound. While the effects of a cosmological constant thus are negligible on the length scale of galaxies, one might expect observable consequences for galaxy clusters. As a first attempt to see such an effect, we will consider the fate of circular orbits in a flat, expanding universe with a cosmological constant. To do this we start with the equation of motion for a particle in an expanding universe with an additional gravitational potential, $$\ddot{\overline{\chi }}+\frac{2\dot{a}}{a}\dot{\overline{\chi }}=\frac{\overline{g}}{a},$$ where $`\overline{\chi }`$ is the comoving coordinate and $`a`$ is the scale factor. In terms of physical distances, $`\overline{R}=a\overline{\chi }`$, one finds $$\ddot{\overline{R}}-\overline{R}\frac{\ddot{a}}{a}=\overline{g}.$$ As an example we consider the de Sitter case with $`\mathrm{\Omega }_\mathrm{\Lambda }=1`$, i.e. a universe totally dominated by the cosmological constant. We then use $$a(t)=e^{Ht},$$ where $`H=\sqrt{\frac{\mathrm{\Lambda }}{3}}`$ is constant, to obtain $$\frac{v^2}{R}=\frac{G_Nm}{R^2}-\frac{\mathrm{\Lambda }}{3}R$$ (8) for an object in orbit around a central mass $`m`$. One might note that the same result may be obtained by starting with the static form of the de Sitter metric, i.e. $$ds^2=\left(1-\frac{2G_Nm}{r}-\frac{\mathrm{\Lambda }}{3}r^2\right)dt^2-\frac{dr^2}{\left(1-\frac{2G_Nm}{r}-\frac{\mathrm{\Lambda }}{3}r^2\right)}-r^2d\mathrm{\Omega }^2,$$ and using a Newtonian analysis. This form of the metric is related to the cosmological form through a coordinate transformation. Equation (8) shows that for large enough $`R`$, i.e. $$R>\left(\frac{3G_Nm}{\mathrm{\Lambda }}\right)^{1/3},$$ there are no longer bound orbits. Does this have observable consequences? Unfortunately it is easy to see that the effect becomes important only for orbits with periods of the order of the age of the universe. Furthermore, the effect of the cosmological constant decreases rapidly for smaller orbits. Hence the concept of a rotation curve loses its meaning, and we had better look elsewhere for a better approach to the possible effect of a cosmological constant. One possibility is the way clusters and superclusters of galaxies form. As a first step, one may consider the infall of matter onto a galaxy cluster in the regime where linear perturbation theory is valid. This has in fact been treated by Peebles , in the case $`\mathrm{\Omega }_M+\mathrm{\Omega }_\mathrm{\Lambda }=1`$ (i.e. zero curvature), which is the natural case in view of inflation, and which, incidentally, is now also indicated by the recent balloon measurements of the cosmic microwave background . Peebles showed that, unfortunately, the dependence of the infall peculiar velocity $`v`$ for a cluster of proper radius $`R`$ and overdensity $`\delta `$ on $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is quite weak, being well parametrized by $$v=0.3H_0R\delta \mathrm{\Omega }_M^{0.6}$$ (9) for $`0.03<\mathrm{\Omega }_M<0.3`$ and $`1<\delta <3`$, essentially independent of $`\mathrm{\Omega }_\mathrm{\Lambda }`$. This formula was generalized to arbitrary $`\mathrm{\Omega }_M+\mathrm{\Omega }_\mathrm{\Lambda }`$ in .
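Before turning to the non-linear regime, it is instructive to put numbers into the bound-orbit criterion below Eq. (8). In this rough sketch the central mass, 10<sup>12</sup> solar masses, is an illustrative choice rather than a value from the text:

```python
# Largest radius of a bound circular orbit, R = (3 G m / Lambda)^(1/3),
# restoring c in cgs units so that Lambda (cm^-2) enters as Lambda*c^2.
G = 6.674e-8              # cm^3 g^-1 s^-2
c = 2.998e10              # cm/s
Lam = 1.0e-56             # cm^-2, cf. Eq. (7)
m = 1.0e12 * 1.989e33     # g; a galaxy-sized mass, chosen for illustration

R_crit = (3.0 * G * m / (Lam * c**2)) ** (1.0 / 3.0)
print(R_crit / 3.086e24, "Mpc")   # ~ 1 Mpc: only cosmologically large orbits unbind
```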
In the non-linear, collapsing phase, we may obtain an estimate of the influence of $`\mathrm{\Lambda }`$ by adapting the simple constant density, spherical collapse model to the situation when the cosmological constant is present. We thus look at the situation when an overdense region expands to a maximal radius $`R_{\mathrm{max}}`$, and then contracts to a virial radius $`R_{\mathrm{vir}}`$. To analyse this situation, we use the equation for the energy per unit mass (cf. our equation (8)) of a mass shell of proper radius $`R(t)`$ containing a fixed mass $`m`$: $$E=\frac{\dot{R}^2}{2}-\frac{G_Nm}{R}-\frac{\mathrm{\Lambda }R^2}{6},$$ (10) where the three terms correspond to the kinetic energy, Newtonian gravitational energy, and vacuum energy, respectively. As in the standard analysis, we may employ the virial theorem relating the average value of the kinetic energy $`T`$ to the potential energy $`V`$, $$2T=\vec{r}\cdot \frac{\partial V}{\partial \vec{r}}.$$ (11) Taking the average over a sphere of constant density of the energy equation (at the turn-around radius $`R_{\mathrm{max}}`$ the kinetic energy is zero) $$E=T_{\mathrm{vir}}+V_{\mathrm{vir}}^G+V_{\mathrm{vir}}^\mathrm{\Lambda }=V_{\mathrm{max}}^G+V_{\mathrm{max}}^\mathrm{\Lambda },$$ (12) utilizing $$\left\langle \mathrm{\Lambda }r^2\right\rangle =\frac{3\mathrm{\Lambda }}{R^3}\int _0^Rr^4\,dr=\frac{3\mathrm{\Lambda }R^2}{5}$$ (13) and $$\left\langle G_NM(r)r^{-1}\right\rangle =\frac{3G_NM}{R^6}\int _0^Rr^4\,dr=\frac{3G_NM}{5R},$$ (14) we recover in the case $`\mathrm{\Lambda }=0`$ the well-known result $$\frac{3G_NM}{5R_{\mathrm{max}}}=\frac{3G_NM}{10R_{\mathrm{vir}}},$$ (15) i.e., $`R_{\mathrm{vir}}=R_{\mathrm{max}}/2`$. For a non-vanishing $`\mathrm{\Lambda }`$, the corresponding equation is $$\frac{3G_NM}{5R_{\mathrm{max}}}+\frac{\mathrm{\Lambda }}{10}R_{\mathrm{max}}^2=\frac{3G_NM}{10R_{\mathrm{vir}}}+\frac{\mathrm{\Lambda }}{5}R_{\mathrm{vir}}^2.$$ (16) Suppose that the mass overdensity contrast compared to the cosmological average mass density at the maximal (turn-around) radius is $`\omega `$ ($`=5.6`$ in the standard case), and assume $`\mathrm{\Omega }_M+\mathrm{\Omega }_\mathrm{\Lambda }=1`$. Then we can write $`M=4\pi R_{\mathrm{max}}^3\omega \rho _M/3`$, and this inserted into (16) gives, by use of (3), $$1+\kappa =\frac{1}{2\mu }+2\kappa \mu ^2,$$ (17) where we have introduced $`\mu =R_{\mathrm{vir}}/R_{\mathrm{max}}`$ ($`=0.5`$ in the standard case) and $$\kappa =\frac{1}{\omega }\frac{\mathrm{\Omega }_\mathrm{\Lambda }}{(1-\mathrm{\Omega }_\mathrm{\Lambda })(1+z)^3}.$$ This result is written in a somewhat different form than, but agrees with, a previously published result. If we assume as a first approximation $`\omega =5.6`$ as in the $`\mathrm{\Lambda }=0`$ case, and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, we find by solving (17) numerically for $`z=0`$ that $`\mu `$ has decreased from 0.5 to around 0.39 (a short numerical check is sketched below). This means that the virialized radius is smaller, which is not unreasonable, since for a given mass, only more compact clusters can “survive” the repulsive force from a positive cosmological constant. We can improve this analysis somewhat by taking into account the fact that $`\omega `$ also depends on $`\mathrm{\Lambda }`$. It can be seen that this has the effect of increasing $`\omega `$ at intermediate redshifts. This increase causes an increase in $`\mu `$, meaning that we have overestimated the effect of $`\mathrm{\Lambda }`$ above. The effect is, however, small. One can also estimate what the final density contrast of the cluster will be.
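The numerical solution of Eq. (17) mentioned above can be reproduced with a few lines of Python; this is a sketch using scipy's root finder, with the bracket chosen so as to select the physical root below the turn-around value:

```python
from scipy.optimize import brentq

def mu_vir(omega_lambda, z=0.0, w=5.6):
    """Solve Eq. (17), 1 + kappa = 1/(2 mu) + 2 kappa mu^2,
    for mu = R_vir/R_max, with kappa as defined below Eq. (17)."""
    kappa = omega_lambda / (w * (1.0 - omega_lambda) * (1.0 + z) ** 3)
    f = lambda mu: 1.0 / (2.0 * mu) + 2.0 * kappa * mu ** 2 - (1.0 + kappa)
    # The physical root lies below mu = 0.5 (the Lambda = 0 value);
    # Eq. (17) also has an unphysical root above the turn-around radius.
    return brentq(f, 1e-3, 0.6)

print(mu_vir(0.0))   # 0.5, the standard result
print(mu_vir(0.7))   # ~0.39, as quoted in the text
```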
To do this we have obtained an expression comparing the density at maximum expansion with the density of the universe today. This is given by $$\stackrel{~}{\omega }=\frac{9\pi ^2}{16}\frac{1}{f(a)^2}\frac{1}{(1-\mathrm{\Omega }_\mathrm{\Lambda })}$$ (18) with $$f(a)=\frac{3}{2}\int _0^a\frac{da^{\prime }}{\sqrt{(1-\mathrm{\Omega }_\mathrm{\Lambda })/a^{\prime }+\mathrm{\Omega }_\mathrm{\Lambda }a^{\prime 2}}}.$$ (19) The result is a further net relative compression of the clusters due to the cosmological constant above the one given by $`\mu `$. Since we have been dealing with an imagined situation where one can compare between a “standard” scenario without a cosmological constant, and one where $`\mathrm{\Lambda }\ne 0`$, and found only small effects on cluster scales, it seems difficult, but maybe not excluded, to draw conclusions about the value of $`\mathrm{\Omega }_\mathrm{\Lambda }`$ in the real universe where observational uncertainties have to be taken into account. The effect to search for is a tendency for virialized clusters to get smaller and more overdense for a positive $`\mathrm{\Lambda }`$. It seems clear, however, that the effects on galactic scales are extremely tiny and negligible for rotation curves. Acknowledgements We wish to thank H. Rubinstein for discussions, and P. Lilje and M. Rees for bringing a relevant reference to our attention. This work was supported in part by the Swedish Natural Science Research Council (NFR).
# Extremely Red Objects in the Field of QSO 1213–0017: A Galaxy Concentration at 𝑧 = 1.31Based in part on observations obtained with the NASA/ESA Hubble Space Telescope, which is operated by the STScI for the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Based in part on observations obtained at the W. M. Keck Observatory, which is operated jointly by the California Institute of Technology and the University of California. ## 1 Introduction The observational study of galaxy formation and evolution can broadly be divided into two approaches: (1) searches for primordial galaxies, typically oriented toward high redshifts, in order to scrutinize formation and evolution processes as they occur, and (2) studies of existing old galaxies to decipher their origin and life history from their current properties. With the advent of infrared imaging detectors, a population of infrared-bright, extremely red objects (EROs) has been uncovered which is relevant to both of these approaches. Elston et al. (1988) discovered a few objects with exceptional optical/IR colors ($`R-K\gtrsim 5`$) in the first deep near-IR sky survey. While the brightest objects were shown to be $`z\sim 0.8`$ ellipticals (Lilly et al., 1988; Elston et al., 1989), their tantalizing conjecture that the fainter ($`K\gtrsim 18`$) red objects are $`z>1`$ ellipticals has remained unresolved until recently. Subsequent deep infrared imaging of primarily high-redshift radio galaxy and quasar fields serendipitously uncovered handfuls of objects with extreme colors (McCarthy et al., 1992; Eisenhardt & Dickinson, 1992; Graham et al., 1994; Hu & Ridgway, 1994; Soifer et al., 1994; Dey et al., 1995; Djorgovski et al., 1995). Deep multicolor optical/IR sky surveys have begun to assemble larger samples of these objects using well-defined selection criteria, an essential starting point for statistical studies (Cowie et al., 1994; Hall & Green, 1998; Cohen et al., 1999; Barger et al., 1999; McCracken et al., 2000). In particular, the recent availability of $`1024\times 1024`$ pixel IR detectors has made possible surveys for significant numbers of EROs (Thompson et al., 1999). The use of the designation “extremely red” has varied in the literature. This was especially true in earlier work where the description was applied to any object appearing in IR images but undetected optically. In this paper, we follow Graham & Dey (1996) and use the term “extremely red object” (ERO) for a source with $`R-K>6`$. This criterion was an operational one, encompassing several IR-detected, optically-invisible sources known at the time, and it has since become widely used. In particular, Thompson et al. (1999), who have conducted the widest deep $`R-K`$ survey to date, also adopt this definition. They confirm that EROs defined by this criterion are unusual objects, being the reddest 2% of the $`K\le 20`$ field galaxy population. However, we will also pay attention in this paper to galaxies with $`R-K>5`$, the criterion used by Elston et al. (1988) and Cohen et al. (1999), as this is about the expected color of passively-evolving elliptical galaxies at $`z\sim 1`$. We will generically refer to this larger sample as “red galaxies.” EROs are very optically faint ($`R\gtrsim 24`$), which has hampered studies of these objects. Successful spectroscopy to determine their redshifts and physical nature has only become possible with the development of the Keck 10-m telescope.
Recent work has found some members of this “missing population” are luminous ($`L^{\ast }`$) galaxies at $`z>1`$; hence determining their origin has direct relevance to the formation of massive galaxies and AGN. Being solely defined by optical/IR color, the few EROs with measured redshifts form a heterogeneous population as expected, comprising at least two broad classes: (1) ultraluminous dusty star-forming systems, perhaps akin to local objects like Arp 220, and (2) massive galaxies with old passively-evolving stellar populations. The best studied ERO to date, ERO J164502+4626.4 (object 10 of Hu & Ridgway, 1994, hereinafter “HR10”), is the prototype for star-forming EROs. This object is a $`z=1.44`$ dusty, ultraluminous galaxy with enormous ongoing star formation ($`\sim 1000`$ $`M_{\odot }`$ yr<sup>-1</sup>) suggested by its sub-mm continuum emission (Graham & Dey, 1996; Cimatti et al., 1998; Dey et al., 1999). However, since HR 10 has among the reddest colors ($`I-K=6`$) of known EROs, it is unrepresentative of the bulk of the population or at least is an extreme example. These EROs may provide excellent case studies of optically obscured star formation in the early Universe, especially given the association of faint EROs with some of the sub-mm emitting sources found by SCUBA (Smail et al., 1999). In fact, it may be that intense, very brief bursts of star formation are a common mode of star formation at high redshift. There are also examples of EROs as galaxies with old stellar populations. Passively evolving ellipticals at $`z>1`$ are expected to have large $`R-K`$ colors, which stem from their rising spectral energy distributions (SEDs) longward of $`\sim 4000`$ Å being redshifted into the near-IR. Therefore, selection using very red colors is an excellent method to search for early-type galaxies at $`z>1`$. This is especially true in searching for clusters since the surface density of galaxies becomes quite high at faint magnitudes and it would otherwise be hard to distinguish a cluster from the foreground and background populations. However, ellipticals at these distances are expected to be optically very faint, so measuring spectroscopic redshifts is challenging. Dickinson (1995) has identified a cluster of elliptical galaxies associated with the powerful radio galaxy 3C 324 at $`z=1.206`$ along with a foreground structure at $`z=1.15`$. To date, the highest-redshift collection of old, red galaxies spectroscopically confirmed has been found by Stanford et al. (1997). They have discovered a $`z=1.27`$ cluster by its large $`J-K`$ colors; the cluster galaxies have $`R-K\sim 5`$ and rest-frame UV spectra resembling local elliptical galaxies. There is also a neighboring cluster at $`z=1.26`$ found from its X-ray emission by Rosati et al. (1999) which contains spectroscopically old red galaxies. Isolated examples of old EROs have been discovered at still higher redshifts. The very weak radio sources LBDS 53W091 at $`z=1.552`$ (Dunlop et al., 1996; Spinrad et al., 1997) and LBDS 53W069 at $`z=1.432`$ (Dunlop, 1998; Dey et al., 2000) both have $`R-K\sim 6`$ and rest-frame UV spectra which imply ages of a few Gyr. Recently, Soifer et al. (1999) have identified an $`R-K\sim 7`$ object as an old galaxy at $`z=1.58`$ based on associating a large continuum break observed at $`\sim `$1 µm with the redshifted 4000 Å break. Discovery of $`z>1`$ galaxies with old stellar populations offers several powerful lines of inquiry into understanding galaxy formation and evolution and its cosmological context.
Detailed comparison of the absorption lines and continuum breaks of these galaxies with galaxies at $`z=0`$ may prove fruitful in tracing the evolutionary course and enrichment history of the oldest stellar populations. Absolute age dating of these galaxies would provide a constraint on the time scale of galaxy formation and the age of the Universe. Moreover, clusters of old galaxies at high redshift can provide testing grounds for competing scenarios of galaxy formation; the predicted appearance of early-type galaxies in these clusters is dramatically different in hierarchical galaxy formation scenarios as compared to monolithic collapse ones (e.g., Kauffmann & Charlot, 1998). Finally, the existence of these old galaxies at high redshift potentially can constrain cosmological parameters and theories of structure formation (e.g., Peacock et al., 1998). We are conducting an on-going study of the nature of these EROs, using deep optical and near-IR imaging from the ground to assemble a large sample of EROs for statistical study. We have been acquiring high-resolution morphological information from HST optical and Keck near-IR imaging, and we are obtaining spectroscopy from Keck in the optical and near-IR to determine ERO redshifts and physical properties. Deciphering the identity of EROs based on comparing broad-band colors alone to theoretical stellar population synthesis models is dubious given that the model parameter space is vast, at least comprising age, metallicity, and reddening variations. Spectroscopy is essential. In addition, given the apparent heterogeneity of the population, a reasonably large sample of objects needs to be studied to understand the nature and relative abundances of the subsets, instead of the spectroscopy of individual EROs which has been done to date. In this paper, we present a study of the EROs in the field of QSO 1213–0017 (RA = 12<sup>h</sup>15<sup>m</sup>49.8<sup>s</sup>, Dec = –00°34′34″; J2000.0). This $`z=2.69`$ quasar, also known as UM 485, has exceptionally strong and complex Mg II absorption systems at $`z=1.3196`$ and $`z=1.5534`$ (Steidel & Sargent, 1992). In § 2, we describe optical and near-IR imaging covering an 11 arcmin<sup>2</sup> region around this field and Keck optical spectroscopic follow-up of galaxies selected by their very red colors. We consider in § 3 the surface density of the red galaxies, their morphologies, and their spectroscopic redshifts. We examine in § 4 the collection of red galaxies as a whole, both their spatial distribution to consider the possibility that they are members of a cluster at $`z=1.31`$ and their spectrophotometric properties to understand their stellar populations. We summarize our findings in § 5 and discuss their implications. Throughout this paper, we assume a cosmology with $`\mathrm{\Omega }=1`$, $`\mathrm{\Lambda }=0`$, and $`H_0=50h_{50}`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. At $`z=1.31`$ with these parameters, 1″ = 8.58 $`h_{50}^{-1}`$ kpc; the luminosity distance $`d_L=9.40`$ $`h_{50}^{-1}`$ Gpc; and the angular diameter distance $`d_\theta =1.78`$ $`h_{50}^{-1}`$ Gpc. ## 2 Observations ### 2.1 Optical Imaging Optical imaging of the field of QSO 1213–0017 was obtained in April 1993 using the KPNO 4-m Mayall telescope as part of a program to image $`z>1`$ Mg II absorbing galaxies near QSO sightlines (see Steidel, 1995). We used the PFCCD equipped with a $`2048\times 2048`$ Tektronix CCD and a pixel scale of 0″.47/pixel.
A total integration of 3500 s was obtained through the $`\mathcal{R}`$-band filter ($`\lambda _c=6930`$ Å, $`\mathrm{\Delta }\lambda =1500`$ Å) under photometric conditions and 1″.25 FWHM seeing. The data were reduced using standard techniques and calibrated onto the AB magnitude system<sup>1</sup><sup>1</sup>1Note that our preliminary results in Liu et al. (1999) designated the $`\mathcal{R}`$ filter as “$`R_S`$” and presented Vega-based magnitudes, instead of AB ones. For the $`\mathcal{R}`$ filter, the offset from AB mags to Vega-based mags is –0.28 mag, i.e., $`\mathcal{R}^{\mathrm{Vega}}=\mathcal{R}^{\mathrm{AB}}-0.28`$. using spectrophotometric standard stars from the lists of Massey et al. (1988) and Oke & Gunn (1983). The $`\mathcal{R}`$ filter is a compromise between the standard Cousins $`R_C`$ and $`I_C`$ filters. A filter trace for $`\mathcal{R}`$ and a photometric transformation to the Kron-Cousins system are given in Steidel & Hamilton (1993). For the very red objects considered in this paper, $`\mathcal{R}\approx R_C`$ after accounting for the color terms, so the $`\mathcal{R}-K`$ colors given for the EROs in this paper are roughly equivalent to $`R_C-K`$ colors given for other EROs in the literature (see § 3.1). We obtained four images of the 1213–0017 field with the WFPC2 camera aboard Hubble Space Telescope (HST) on U.T. 1998 January 11. Data were taken using the $`F814W`$ filter with a total integration of 4500 s. The integrations were taken in slightly offset positions, allowing reasonable identification of cosmic ray hits in the individual images, which were then excluded from the averaging to create the final image. ### 2.2 Near-IR Imaging #### 2.2.1 KPNO Infrared data were obtained in February 1994 on the KPNO 4m Mayall telescope using the IRIM imager, equipped with a $`256\times 256`$ NICMOS-3 HgCdTe detector and sampling the sky at 0″.60/pixel. A total integration time of 3660 s was obtained in a non-repetitive grid, in which the telescope was moved after each 60 s of integration time. Conditions were photometric with 1″.0 FWHM seeing. The data were reduced using the DIMSUM package (Eisenhardt, Stanford, Dickinson, & Ford, priv. communication) in IRAF.<sup>2</sup><sup>2</sup>2IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. The data were calibrated using the UKIRT faint standards (Casali & Hawarden, 1992), resulting in Vega-based magnitudes. The data were taken in the $`K_S`$ filter (McLeod et al., 1995) which is slightly bluer and narrower than the standard $`K`$ filter. As discussed by Hall et al. (1998), the shift between $`K_S`$ and $`K_{UKIRT}`$ is expected to be quite small, less than 0.04 mag for objects as red as $`H-K=1`$. In the absence of IR colors for all the objects to compute color terms, we assume $`K_S=K`$ hereinafter, which will have a negligible effect on any of our results. #### 2.2.2 Keck We obtained deeper, higher spatial resolution IR imaging of two sub-fields on U.T. 1998 May 14–15 using the facility near-IR camera NIRC (Matthews & Soifer, 1994) of the 10-meter W. M. Keck I Telescope located on Mauna Kea, Hawaii. The camera employs a Santa Barbara Research Corporation 256 $`\times `$ 256 InSb array and has a pixel scale of 0″.150/pixel resulting in a 38″ field.
We observed 2 fields close to the quasar, one to the southwest (containing R4, R5, and R6) and one to the northeast (containing R7, R9, and R10). The SW field, centered about (–27″ E, –7″ N) from QSO 1213–0017, was observed with the standard $`JHK`$ filters with integration times of 900, 2040, and 2280 s, respectively, totaled over the two nights. The NE field, centered about (27″ E, 10″ N) from QSO 1213–0017, was observed in $`J`$ for 780 s and in $`K`$ for 1200 s on 15 May 1998 UT. We used integration times of 6 or 20 s per coadd, depending on the filter. After one minute of integration was coadded, the sum was saved as an image, and then the telescope was offset by a few arcseconds. The telescope was stepped through a non-redundant dither pattern, and an off-axis CCD camera was used to guide the telescope during the observations. Conditions were non-photometric while observing the NE field and also for part of the SW field observations. For the SW field, we culled frames which were non-photometric from the reduction process. For the NE field, the data are used only to examine morphologies. The data were reduced in a manner typical for near-IR images. An average dark frame was subtracted from each image to remove the bias level. We constructed a flat field by median averaging scaled images of the twilight sky. A preliminary sky subtraction of the images was performed to identify astronomical objects. Then for each image, we subtracted a local sky frame constructed from the average of prior and subsequent images, excluding any of the identified astronomical objects from the averaging. We used the brightest source in each field to register the reduced frames, which were then shifted by integer pixel offsets and averaged to assemble a mosaic of the field. Bad pixels were identified by intercomparing the registered individual images and masked during the construction of the mosaic. All these reductions were done using custom software written for Research Systems Incorporated’s IDL software package (version 5.1). We observed the faint HST IR standard SJ 9143 (Persson et al., 1998) as a flux calibrator, so the resulting magnitudes are Vega-based. Aperture photometry of the registration object in each individual frame verified that the SW field data which were retained were nearly photometric, with any systematic errors of order 5% or less. There are no obvious point sources in the images, but the most compact objects have a FWHM of 0″.5. ### 2.3 Optical/Near-IR Photometry To identify very red objects, we compiled a photometric catalog of all objects in the KPNO $`K`$-band mosaic using the SExtractor software of Bertin & Arnouts (1996), version 2.0.21. Each pixel in the $`K`$-band image was multiplied by the square root of its exposure time to create a mosaic with uniform noise over the entire field. Objects in the noise-normalized mosaic were then identified as any set of contiguous pixels 1.5$`\sigma `$ above the background level with an area equal to a FWHM-diameter circular aperture (10 pixels), i.e., a highly significant detection, and then aperture photometry was done on the original $`K`$-band image. We used the resulting MAG\_BEST magnitudes, which for uncrowded objects use apertures determined from the moments of each object’s light distribution. This method is similar to that of Kron (1980) and is designed to recover most of an object’s flux while keeping errors low.
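For readers who wish to reproduce the catalog step, the sketch below uses the Python package sep, which reimplements the core SExtractor algorithms (the paper itself used SExtractor v2.0.21; the file name and the ellipse scale factor here are illustrative assumptions, and the Kron-style photometry shown corresponds to SExtractor's "AUTO" apertures):

```python
import numpy as np
import sep
from astropy.io import fits

# Placeholder file name; the detection parameters follow the text:
# 1.5 sigma threshold, minimum area of 10 contiguous pixels.
data = fits.getdata("K_mosaic.fits").astype(np.float32)

bkg = sep.Background(data)          # smooth background model
sub = data - bkg.back()

objects = sep.extract(sub, 1.5, err=bkg.globalrms, minarea=10)

# Kron-like elliptical-aperture photometry, analogous to MAG_BEST.
kron_r, _ = sep.kron_radius(sub, objects["x"], objects["y"],
                            objects["a"], objects["b"],
                            objects["theta"], 6.0)
flux, fluxerr, flag = sep.sum_ellipse(sub, objects["x"], objects["y"],
                                      objects["a"], objects["b"],
                                      objects["theta"], 2.5 * kron_r,
                                      err=bkg.globalrms, subpix=1)
```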
The $`\mathcal{R}`$ image was transformed to be registered with the $`K`$ image, and photometry was done using the same apertures for each object as for the $`K`$-band image. The Galactic reddening towards this field is small, $`E(B-V)=0.02`$ (Schlegel et al., 1998), so we apply no extinction corrections to the photometry. We ran Monte Carlo simulations to verify the accuracy of the measured magnitudes and errors. We inserted 10 artificial stars of known magnitude and color into the images and then processed the images with SExtractor in the same fashion as the original data. For each magnitude and color bin, we ran the simulation 30 times for a total of 300 stars and then compared the recovered magnitudes and colors to the input ones in order to determine the random and systematic photometric errors. The artificial stars were chosen to span a range in magnitude and color encompassing all the real objects, with 0.5 mag steps in $`K`$ magnitude and 1 mag steps in $`\mathcal{R}-K`$ color. A low-order polynomial surface was then fitted to the photometric errors from the simulations as a function of $`K`$ magnitude and $`\mathcal{R}-K`$ color, and the fit coefficients were used to assign photometric errors for the real objects. In all cases the systematic offset from the input magnitude to mean recovered magnitude for the artificial stars was always much smaller than the 1$`\sigma `$ random errors, so we applied no systematic corrections to the photometry of the real objects. The errors produced by SExtractor were found to agree well with the errors deduced from the simulations, except for objects near the detection limit. In principle, there will be a surface brightness dependence since errors measured from stars will not be the same as those measured from less compact sources such as galaxies (e.g., Bershady et al., 1998). However, this effect is more important for determining the completeness corrections, which is not relevant to our work here, and, furthermore, at the faint limit the majority of our sources are nearly unresolved so any error in measuring the error will be negligible. Since the Keck/NIRC imaging reaches fainter magnitudes than the KPNO data, they were processed separately, but using SExtractor in an analogous fashion. Objects were identified and photometric apertures were determined from the $`K`$-band image. This information was used to process the other bands, with Monte Carlo simulations used to compute photometric errors. ### 2.4 Optical Spectroscopy We observed the 1213–0017 field with the 10-m Keck II Telescope using the facility optical spectrograph LRIS (Oke et al., 1995) on U.T. 1999 June 15. We prepared a focal plane mask of 1″.5-wide slitlets to observe seven objects with $`\mathcal{R}-K>5`$. The length of each slitlet varied, with a minimum length of 14″. We used the 150 lines/mm grating blazed at 7500 Å to cover a nominal wavelength region from 4000–10000 Å, depending on the position of the slitlets on the mask. The dispersion of $`\sim `$4.8 Å/pixel resulted in a spectral resolution of $`\sim `$25 Å, as measured by the FWHM of a few strong emission lines of serendipitous objects and the lines of the calibration arc lamp spectra. Sky conditions were photometric with seeing of 0″.45 at the start of the integrations. We obtained 3 exposures of 1800 s each with the mask oriented at a position angle of 46° east of north. We dithered the telescope a few arcseconds along the slit direction between each exposure.
The slitmask data for the objects were separated into seven individual spectra and then reduced using standard longslit tasks in IRAF. The data were bias-corrected using the unilluminated pre- and post-scan regions of the detector and flat-fielded using internal quartz lamps obtained immediately after the observations. Sky lines were subtracted by fitting a low-order analytic function to each column of the images, perpendicular to the dispersion axis. We corrected each exposure for long-wavelength fringing by taking an average of the other 2 exposures and subtracting this average. The exposures were registered and combined. The most compact objects in the resulting mosaic are $`\sim `$0″.6 FWHM in the spatial direction, though they may not be point sources. A 1-d spectrum of each red galaxy was extracted using a 1″.3 (6 pixel) wide aperture and was wavelength-calibrated using HgKr and NeAr lamps taken immediately after the observations, with an RMS of 0.3–0.4 Å in the wavelength solutions. Zeropoint shifts of $`\sim `$2 Å were applied to the spectra based on the measured wavelengths of the night sky lines. Flux calibration was performed using the standard star G191B2B (Massey et al., 1988; Massey & Gronwall, 1990). ## 3 Results ### 3.1 An Excess of EROs Figure 1 presents the KPNO $`\mathcal{R}`$-band and $`K`$-band images of the 1213–0017 field, which cover an area of 11.2 arcmin<sup>2</sup>. Figure 2 shows the field’s color-magnitude diagram along with the no-evolution locus of local elliptical galaxies redshifted to $`z=1.3`$, computed from multicolor photometry of the Coma cluster as described in Stanford et al. (1998). If we were to account for passive stellar evolution, the locus would appear somewhat bluer and brighter. A population of objects with red ($`\mathcal{R}-K>5`$) colors is found starting at $`K\approx 18`$ mag, irregularly distributed on the sky. Most of the galaxies are somewhat bluer than the Coma locus at $`z=1.3`$, and the brightest of the EROs have $`K`$-band fluxes comparable to the brightest Coma cD galaxies. As we discuss below, there is a significant excess of red objects in this field compared to the surface density measured from blank sky surveys. However, before analyzing the counts of the reddest objects, we must first account for a few issues: (1) the effective wavelength of the $`\mathcal{R}`$ filter is significantly redder ($`\sim `$350 Å) than the standard Cousins $`R_C`$ filter; (2) our data become incomplete at $`K\approx 19.5`$, as gauged by the turnover in the number counts; and (3) the exposure time is not uniform over the field, with only about half the field receiving at least 50% of the maximum integration. (Also, there will be some added uncertainty since the abundance of the reddest objects has a strong dependence on the choice of color and magnitude cuts.) The most extensive $`R-K`$ color-selected counts in a blank field of the sky come from the CADIS survey of Thompson et al. (1999). They find a surface density of $`0.039\pm 0.016`$ arcmin<sup>-2</sup> in an area of 154 arcmin<sup>2</sup> for the bright EROs ($`R-K^{\prime }>6`$ and $`K^{\prime }<19`$), slightly revised to $`0.047\pm 0.009`$ arcmin<sup>-2</sup> in their final survey area of 600 arcmin<sup>2</sup> (D. Thompson, priv. communication). Direct comparison of our ERO counts with theirs is complicated as their $`R_{CADIS}`$ filter is $`\sim `$450 Å bluer than our $`\mathcal{R}`$ filter. Because EROs are very red and their number counts are a strong function of $`R-K`$ color, the difference is significant.
(The difference in IR filters, their $`K^{\prime }`$ versus our $`K_S`$, is much less important.) We computed the offset between $`R`$ filters as a function of redshift using the elliptical galaxy SED of Coleman et al. (1980). The true SEDs of the ERO population(s) are unknown, but we adopt this template as a reasonable guess. At $`z=0`$, $`(\mathcal{R}-R_{CADIS})`$ is –0.08 mag; the difference is negative due to the fact that $`\mathcal{R}`$ mags are AB and $`R_{CADIS}`$ mags are Vega-based (see footnote 1). This difference increases nearly monotonically with redshift out to $`z\sim 1.5`$, becoming as large as 0.4 mag. For redshifts of 0.5–1.7, the calculated offset is $`\ge `$0.17 mag. Since the few EROs with known redshifts lie in this range, we add 0.17 mag to our measured $`\mathcal{R}-K`$ colors to compare to the Thompson et al. results. This is the minimum expected amount for the transformation (i.e., an ERO at $`z=0.5`$) and therefore leads to conservative estimates for the 1213–0017 ERO overdensity. After accounting for this difference in filters, we find five $`K\le 19`$ objects with $`(R_{CADIS}-K)>6`$ in our images, four of which lie in the central 6.2 arcmin<sup>2</sup> area which received at least half of the maximum integration time. Since the counts over the whole mosaic are guaranteed to be incomplete, we consider only this central region. The counts of Thompson et al. (1999) predict 0.24 objects in a 6.2 arcmin<sup>2</sup> area, 16$`\times `$ fewer than observed. The Poisson probability of our observing four or more EROs given the expected surface density is only $`1.1\times 10^{-4}`$, i.e., the overabundance of EROs in the 1213–0017 field is highly statistically significant. The probability grows to 5% if the blank sky counts are increased by a factor of 5.7, i.e., the 1213–0017 ERO surface density is overabundant by this factor at the 95% confidence level. Similarly, with the less restrictive color criterion of $`(R_{CADIS}-K)=5`$–6, where the blue cutoff is about the color of a passively evolving single-burst population at $`z\sim 1`$ (Figure 9), the Thompson et al. (1999) data predict 2.4 objects with $`K\le 19`$ while we observe 7 objects. The Poisson probability of finding at least this many objects is $`1.2\times 10^{-3}`$, and the enhancement of such objects in the 1213–0017 field is at least a factor of 1.4 at the 95% confidence level. We can combine the counts in the $`(R_{CADIS}-K)>6`$ and $`(R_{CADIS}-K)=5`$–6 bins to compute the enhancement of the 1213–0017 field over the blank sky counts. We compute the joint probability of observing the 4 and 7 galaxies in these two color bins over a range of enhancement factors and find the most likely excess is a factor of 4.2, with the 95% confidence range being factors of 2.2–7.1. Note that this calculation assumes the enhancement is the same for both color bins, which may not be the case. (These Poisson estimates can be reproduced with the short calculation sketched below.) ### 3.2 Morphologies of the Red Galaxies Figures 3 and 4 show the reduced Keck/NIRC $`K`$-band mosaics for the fields NE and SW of QSO 1213–0017. Note that the images reach different limiting depths; the faintest objects in the NE and SW data are $`\sim `$20.5 and 21.3 mag, respectively. We compare these data with the HST $`F814W`$ images in Figure 5 to examine the observed optical and IR morphologies of the objects with $`\mathcal{R}-K>5`$. The HST images are centered on the QSO so we do not have images for the red galaxies farther away, including unfortunately R1 and R8 for which we have continuum-break spectroscopic redshifts (§ 3.3.2).
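The Poisson arguments of § 3.1 are easy to verify; the sketch below is our own cross-check, with the expected counts 0.24 and 2.4 taken from the text:

```python
from scipy.stats import poisson
from scipy.optimize import brentq

# P(N >= 4) for an expected count of 0.24 (the R_CADIS - K > 6 bin):
print(poisson.sf(3, 0.24))     # ~1.1e-4

# Enhancement factor at which P(N >= 4) grows to 5%:
print(brentq(lambda f: poisson.sf(3, 0.24 * f) - 0.05, 1.0, 50.0))  # ~5.7

# P(N >= 7) for an expected count of 2.4 (the R_CADIS - K = 5-6 bin):
print(poisson.sf(6, 2.4))      # ~1.2e-3
```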
For $`z=1.3`$, these datasets roughly correspond to the rest-frame near-UV/blue (3100 – 4100 Å) and far-red (8700 – 10400 Å). \[O II\] $`\lambda `$3727 emission lies in the $`F814W`$ filter bandpass for objects at $`z\approx 0.9`$–1.5. All the red objects are extended in the HST images meaning they cannot be galactic M dwarfs. A few of them appear to be E/S0 type. The HST images are relatively shallow ($`3\sigma `$ limiting surface brightness $`\mu _{F814W}\approx 25.7`$ mag arcsec<sup>-2</sup>), but we can still see faint emission around a number of the red galaxies. Several seem to have both a central spheroid and a faint component suggestive of a disk, while a few are entirely diffuse emission without any compact core/bulge. None show any strong evidence for interactions. Overall, the detected red galaxies show remarkable consistency in their morphologies between the $`F814W`$ and $`K`$-band images. We return to this issue in § 4.2.1. ### 3.3 Optical Spectroscopy From the 2-d reduced spectra, we were able to identify spectroscopic features in five of the red galaxies on our slitmask, both from emission lines and continuum breaks (Table 1). To more accurately determine the redshifts and their errors, we used the Fourier cross-correlation technique of Tonry & Davis (1979) as implemented in the FXCOR task of IRAF. We linearly interpolated over significant residuals in the spectra resulting from imperfect cancellation of the strongest telluric features. The three absorption-break galaxies were cross-correlated against the UV spectra of F and G dwarfs from the IUE Spectral Atlas of Wu et al. (1991), using an average spectrum for each spectral type. The spectra of the emission-line galaxies were cross-correlated against the emission-line template of an Sc galaxy from Kinney et al. (1996). #### 3.3.1 R6 and R7 — Emission-Line Galaxies The spectrum of R6 shows a single emission line with an observed equivalent width (EW) of $`\sim `$60 Å (Figure 6). There is ample continuum blueward of the line so we identify it as \[O II\] $`\lambda `$3727, placing R6 at $`z=1.203`$, slightly in the foreground of the other redshifts in this field. Assuming this emission originates from H II regions of massive young stars, we infer a star formation rate of $`\sim 10h_{50}^{-2}`$ $`M_{\odot }`$ yr<sup>-1</sup> using the \[O II\] calibration of Kennicutt (1992), comparable to the most active spirals in the local neighborhood (Kennicutt, 1983). The spectrum of R7 reveals the presence of an active galactic nucleus at $`z=1.319`$ (Figure 6). The data show strong lines of \[O II\] $`\lambda `$3727 and C II\] $`\lambda `$2326 with observed equivalent widths of $`\sim `$50 Å. The C III\] $`\lambda `$1909 line is possibly detected, though this line falls near the blue end of the spectrum where the signal is quite low. Examination of the reduced images clearly shows a weak \[Ne IV\] $`\lambda `$2424 emission line which lies on the red edge of the strong \[O I\] $`\lambda `$6300 telluric line, preventing a good measurement. Strangely, no Mg II $`\lambda `$2800 emission line is seen, even though it is typically at least as strong as \[Ne IV\] $`\lambda `$2424 and C II\] $`\lambda `$2326 in high-$`z`$ AGN (e.g., Stern et al., 1999). Presumably the emission line is absorbed by gas in this galaxy though no strong Mg II $`\lambda `$2800 absorption feature is seen.
Also, there are some possible weak absorption features in the 8800–9200 Å range near the expected location of the higher order Balmer lines, in particular at 8896 Å which matches the Balmer H9 transition. However, spectra become quite noisy in this region and the features are not overly compelling. Finally, there are no signs of an old stellar population as the strongest continuum breaks (B2640, B2900, D4000) and the Ca H+K lines are all absent. #### 3.3.2 R1, R8, and R10 — Old Continuum-Break Galaxies Three of the red galaxies on our slitmask — R1, R8, and R10 — are devoid of strong emission lines (Figure 7), but they do show continuum breaks identifiable as the rest-frame mid-UV (2000 – 3300 Å) breaks of B2640 and B2900 at $`z=`$ 1.317, 1.298, and 1.290, respectively. The breaks arise from metal-line blanketing on either side of the tophat-shaped feature from 2640–2720 Å. These features, which were first suggested by Morton et al. (1977) as possibly being useful for redshift determinations at $`z>0.75`$, have been used recently to identify old galaxies out to $`z\approx 1.55`$ (Dunlop et al., 1996; Dey et al., 2000). At least two of the red galaxies may also have Mg II $`\lambda `$2800 in absorption. These spectral features are characteristic of old stellar populations, such as Galactic and M31 globular clusters, M32, and the cores of local elliptical galaxies (e.g., Rose & Deng, 1999; Ponder et al., 1998, and references therein). Galaxies R8 and R10 also may show the 4000 Å break at $`\sim `$9200 Å. For these three galaxies, we determined the redshifts using the 4300–8000 Å region of the spectra in the cross-correlation analysis. At longer wavelengths, the random errors increase from enhanced telluric OH emission, and the systematic errors also increase from imperfect removal of the CCD fringing pattern, due to the small number of exposures we were able to acquire. In addition to the choice of wavelength region, another important aspect of the redshift determination is the choice of stellar template. We explored template spectra ranging from spectral types F0 to G8, using both the Fourier analysis and a direct computation of the cross-correlation between galaxy and template as a function of redshift. All templates produced fairly consistent redshifts using both methods. Figure 8 shows the rest-frame galaxy spectra with IUE stellar spectra overplotted for comparison. ## 4 Discussion ### 4.1 A Concentration of Red Galaxies at $`z`$ = 1.31 We have measured five redshifts for the red galaxies in the 1213–0017 field. Four of these lie near $`z=1.31`$ with the remaining one at $`z=1.203`$. The spectrum of QSO 1213–0017 also shows an Mg II absorption system at $`z=1.3196`$ (Lanzetta et al., 1987; Steidel & Sargent, 1992), similar to the redshift of $`1.319\pm 0.002`$ for galaxy R7. Could this galaxy be responsible for the absorption? It is located 15″ from the QSO, which is 130 $`h_{50}^{-1}`$ kpc. Based on preliminary results from a survey of $`z=0.3`$–0.9 Mg II absorbers by Steidel (1993) and Steidel & Dickinson (1995), the median projected impact parameter between QSOs and their best candidate absorber is 45 $`h_{50}^{-1}`$ kpc with an upper bound of 80 $`h_{50}^{-1}`$ kpc. Early results from their $`z>1`$ survey find the absorber properties are not dramatically different compared to the lower redshift sample (Steidel, 1995; Dickinson & Steidel, 1996).
The projected separation of R7 therefore seems too large for it to be the Mg II absorber, though the geometry and extent of Mg II absorption systems remain open questions (cf. Steidel, 1995; Charlton & Churchill, 1996). In addition, our HST images show a few faint galaxies with projected separations from the QSO of $`\lesssim 80h_{50}^{-1}`$ kpc, making them better candidates for the absorber. Since R7 is unlikely to be the Mg II absorber, we infer that there is at least one more galaxy at $`z=1.3196`$. Including the Mg II absorber, there are five galaxies within an angular region of 3′ and close together in velocity. Their redshifts have an unweighted mean of 1.309 and a standard deviation $`\sigma =1790\pm 570`$ km s<sup>-1</sup> in the mean rest frame.<sup>3</sup><sup>3</sup>3The above results use the standard formulae. Similar results are obtained using the central location (“mean”) and scale (“dispersion”) estimators recommended by Beers et al. (1990) for non-Gaussian velocity distributions. Their preferred central location measures for a sample of this size, the median and the biweight-estimated mean, give $`z=1.317`$ and 1.319, respectively. The gapper method they advocate for computing the scale gives 1840 km s<sup>-1</sup>. The full spread in redshift is 3800 km s<sup>-1</sup> in the mean rest frame; for comparison, consider that 95% of the galaxies in the central 6 $`h_{50}^{-1}`$ Mpc of the Coma cluster and with $`cz<12,000`$ km s<sup>-1</sup> lie within $`\pm 3500`$ km s<sup>-1</sup> (Postman et al., 1998). The considerable dispersion and range of the 1213–0017 redshifts are possibly due to velocity sub-structure in this field. There is a hint of this in the data: two of the galaxies (R8 and R10) lie at $`z=1.290`$–1.298 while the other three are at $`z=1.317`$–1.320, i.e., there is a “gap” of $`\sim `$2500 km s<sup>-1</sup> in the mean rest frame, although there is no clear segregation of the two subsets on the sky. We may be seeing two separate physical entities close in redshift (e.g., filaments/sheets/sub-clusters) which may later merge into one, but this speculation lies beyond the available data. Interestingly, the few known $`z\sim 1`$ massive clusters show large velocity dispersions and/or signs of sub-structure, e.g., the 3C 324 “cluster” at $`z=1.15`$ and 1.21 (Dickinson, 1997); RX J1716.6+6708 at $`z=0.81`$ (Henry et al., 1997; Gioia et al., 1999; Clowe et al., 1998); and CL 0023+0423 at $`z=0.84`$ (Postman et al., 1998; Lubin et al., 1998). Spectroscopically confirmed galaxies in these clusters have angular extents of a few arcminutes but are not strongly concentrated on the sky, which is also the case for the red galaxies in the 1213–0017 field. In the case of ClG J0848+4453 at $`z=1.27`$ and the neighboring RXJ0848.9+4452 at $`z=1.26`$ (Stanford et al., 1997; Rosati et al., 1999), each individual cluster is fairly compact but taken together these two add up to an entity with a large velocity range separated by only a few arcminutes. In addition to the galaxies with spectroscopic redshifts, we can use optical/IR colors to constrain the redshifts of objects in the SW field, the only one with photometric data in all filters (see § 2). The galaxies’ $`J-K`$ colors are consistent with Bruzual & Charlot (1996) models of old stellar populations at $`z=1`$–2 (Figure 9), with the identification largely based on the change in $`J-K`$ color. The $`JHK`$ colors provide less sensitivity, but they are consistent with the same redshift interval (Figure 10).
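The rest-frame velocity statistics quoted above follow directly from the five redshifts; the sketch below is our own arithmetic check:

```python
import numpy as np

C_KM_S = 2.99792458e5
# R1, R8, R10, R7, and the Mg II absorber:
z = np.array([1.317, 1.298, 1.290, 1.319, 1.3196])

zbar = z.mean()                                          # 1.309
sigma_v = C_KM_S * z.std(ddof=1) / (1.0 + zbar)          # ~1790 km/s
spread_v = C_KM_S * (z.max() - z.min()) / (1.0 + zbar)   # ~3800 km/s
print(zbar, sigma_v, spread_v)
```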
To summarize, the collection of red galaxies in the 1213–0017 field is concentrated in redshift at $`z\approx 1.31`$. The small number of measured redshifts prevents any definitive conclusions, but the redshifts do have a considerable spread. The red galaxies are spread over $`\sim 1.5h_{50}^{-1}`$ Mpc on the sky, with a surface density about a factor of four above $`R-K>5`$ counts and a factor of 16 above bright ERO counts in blank fields (§ 3.1). Although we do not know all their redshifts, it is quite plausible that most of the red galaxies are at $`z=1.31`$, as we discuss in the next section. ### 4.2 Old Stellar Populations at $`z`$ = 1.31 #### 4.2.1 Origin of the Extremely Red Galaxy Colors With the Keck and HST findings, we now examine more closely the origin of the colors of the 1213–0017 red galaxies and EROs. Their colors are consistent with $`z>1`$ ellipticals (Figures 2 and 9), and we have confirmed this inference for three of them using Keck spectroscopy. For these galaxies there is little doubt the large $`\mathcal{R}-K`$ colors are due to old stellar populations with a negligible amount of dust. For the red galaxies in the SW field, their $`JHK`$ colors are also consistent with unreddened $`z=1`$–2 ellipticals (Figures 9 and 10); the one SW galaxy with a Keck spectrum, R6, supports this idea as it lies at $`z=1.203`$, though its redshift is based on weak \[O II\] emission. (The spectrum has insufficient S/N to detect any continuum breaks, if present.) So even if R6 is composed of an old, red stellar population (its $`\mathcal{R}`$-band flux is only marginally brightened by the \[O II\] line), the emission line points to a small amount of simultaneous star formation or weak AGN activity. In fact, R6 has quite a different appearance in the optical as compared to the near-IR (Figure 5) which may suggest significant dust reddening in its central regions, unlike most of the other red galaxies (see below). The other galaxy with a spectroscopic redshift, R7, has an optically revealed AGN. Its $`\mathcal{R}-K`$ color is far too red compared to typical Seyferts (e.g., Kotilainen & Ward, 1994). Its redness might come from dust or from old stars, although its optical spectrum shows no signs of the latter. If the Balmer features in the spectrum are real, this would suggest R7 has experienced significant star formation in the last few $`\times 10^8`$ years and would hint that the red colors are due to dust reddening. For the remaining 1213–0017 red galaxies, we have no direct evidence about their redshift nor their stellar content; all that we know is their $`\mathcal{R}-K`$ colors. One unlikely cause for their redness would be a strong emission line redshifted into the $`K`$-band; the lowest possible redshift for this to occur would be for H$`\alpha `$ at $`z\approx 2.1`$–2.6. It seems implausible that all or even any of these galaxies are at this redshift given the very low surface density of strong high-redshift H$`\alpha `$ emitters (e.g., Thompson et al., 1996; Beckwith et al., 1998; Teplitz et al., 1998). In fact, the more likely case of emission-line contamination of the broad-band magnitudes is in the $`\mathcal{R}`$-band, either from H$`\alpha `$ or \[O II\]; more than half of $`z=0.5`$–1.3 field galaxies show \[O II\] emission (Hammer et al., 1997). In this case, the measured $`\mathcal{R}-K`$ colors would be slightly bluer than the true color of the underlying stellar continuum. We find the observed optical and near-IR morphologies are fairly regular and in good agreement with one another (Figure 5).
Both of these tell us that dust reddening in these galaxies is spatially uniform on scales of $`\sim 4h_{50}^{-1}`$ kpc, as small as $`0.9h_{50}^{-1}`$ kpc for the HST images. This is in sharp contrast to the highly wavelength-dependent appearance of the prototypical ERO HR 10: in the rest-frame near-UV/blue it has an inverted S-shape while in the rest-frame red it shows a smooth compact appearance (Dey et al., 1999). This implies HR 10 is an interacting/disturbed system whose central region has a large amount of dust, an interpretation augmented by its sub-mm emission which presumably springs from a large amount of ongoing dusty star formation. HST optical morphologies for the first sub-mm selected sources, which are possibly similar to HR 10, also show disturbed appearances (Smail et al., 1998). Likewise, the rest-frame UV appearance of local ultraluminous IRAS galaxies (ULIRGs) is significantly different than their appearance at longer wavelengths (Trentham et al., 1999). Therefore, we conclude the wavelength-independent appearance of the 1213–0017 red galaxies means they are unlikely to be dusty star-forming galaxies like HR10 or ULIRGs. Though dust reddening may still play a role, most of their redness probably originates from the light of old stars. Dust variations could still exist on spatial scales below our resolution limits. For this reason, attempts to fit the broad-band SEDs are potentially misleading given the uncertainties in the dust geometry and distribution — such fits would be highly degenerate. Near-IR images with comparable spatial resolution to the HST optical data are needed to address this possibility. #### 4.2.2 Rest-Frame UV Spectral Features Comparisons between the galaxy spectra and the IUE stellar spectra show the three continuum-break galaxies have real differences in their rest-frame mid-UV spectra. The spectrum of R10 rises most rapidly to the red and has the strongest 2640 Å and 2900 Å breaks, while R1’s continuum is noticeably flatter and its breaks are the weakest, especially the 2900 Å break. R8’s spectrum is intermediate between R10 and R1. Results from both cross-correlation analysis (§ 3.3.2) and examination by eye find R1 is best-matched by early-type F dwarfs, R8 by mid-type F dwarfs, and R10 by late-type F and early-type G dwarfs (Figure 8) in this wavelength region. Furthermore, even when the IUE spectra are normalized only to the galaxy flux longward of 3000 Å, there is still good agreement between the star and galaxy spectra all the way to the bluest wavelengths. This suggests any ongoing or recent star formation, which would be revealed by a rising blue continuum, is small. The UV continuum breaks at 2640 Å and 2900 Å are of special interest. This region of an old galaxy’s SED is expected to be dominated by light from stars near the main-sequence turnoff. As the stellar population ages, the main-sequence turnoff mass decreases, and later-type stars produce the UV SED, leading to larger break amplitudes. In this fashion the rest-frame UV spectrum might be used to age date the stellar population, or at least the portion contributing the bulk of the UV flux. Spinrad et al. (1997) attempted to do this in a robust fashion for the old galaxy LBDS 53W091 at $`z=1.55`$. Given difficulties of absolute age calibration for these features and their uncertain metallicity dependence (Heap et al., 1998; Yi et al., 1999a), we do not attempt to age date the populations of the 1213–0017 galaxies.
However, we can still study the stellar content of the spectroscopically old 1213–0017 galaxies by comparing the strength of these breaks with old populations in the local Universe. The spectra of R8 and R10 are barely amenable to this line of investigation, but the S/N of R1’s spectrum is too low so we exclude it. The standard indices for measuring these breaks, 2609/2660 and 2828/2921 from Fanelli et al. (1990), are too narrow for our spectra: these use $`\sim `$25 Å wide bandpasses, meaning they include only 2–3 spectral resolution elements for our data. Instead, we use our own broader custom indices, W2640 and W2900. We also measure the moderate-bandwidth 2600–3000 index of Fanelli et al. (1990) to track the UV continuum color. These are all calculated in the standard fashion, from the ratio of the fluxes in blue and red bandpasses and expressed in magnitudes: $$\mathrm{Index}\equiv -2.5\times \mathrm{log}_{10}\left[\frac{\overline{F_\lambda }(\mathrm{blue})}{\overline{F_\lambda }(\mathrm{red})}\right],$$ (1) where $`\overline{F_\lambda }`$ is the average flux density in the bandpasses. Objects with stronger breaks have larger values for the indices. The red bandpass of W2640 covers the entire top-hat feature redward of the 2640 Å break and ends before the Fe+Cr absorption at 2745 Å; the blue bandpass has the same width and includes Fe II $`\lambda `$2609 absorption and part of the broad BL $`\lambda `$2538 feature (e.g., see Ponder et al., 1998). The blue bandpass of W2900 spans the Mg II $`\lambda `$2800 and Mg I $`\lambda `$2852 absorption, and the red bandpass is chosen to have the same width. Definitions and measurements of the standard indices and our custom ones are listed in Table 2. Figure 11 plots the results juxtaposed against individual main-sequence stars (Fanelli et al., 1992), local old spheroids (Ponder et al., 1998), and the $`z=1.552`$ galaxy LBDS 53W091. The UV indices of R8 and R10 are consistent with local populations, though the W2640 strengths of R8 and R10 differ, with the former having a weaker 2640 Å break than even the local elliptical sample. Also, the two galaxies have different 2600–3000 colors, with R8 lying closer to the metal-poor local populations, which have bluer mid-UV colors. The similar break amplitudes of LBDS 53W091 and R10 suggest these two galaxies are comparably old. If the analysis of 53W091’s spectrum is correct, its inferred age is barely compatible with present cosmological parameters, even with the assumption of solar metallicity to reduce the inferred age (Spinrad et al., 1997). Therefore, R10 must have had a comparably high formation redshift, since the difference in look-back time is only 5% between $`z=1.55`$ and $`z=1.31`$, and also must have experienced little or no recent star formation. For the local populations, the offset of the ellipticals from the stars and globular clusters in plots of the 2640 Å break versus 2600–3000 color was noticed by Ponder et al. (1998), who suggested it was due to the hot UV-excess (UVX) population in elliptical galaxies. The UVX is thought to originate from post-main sequence stars, though its specific origin remains uncertain (O’Connell, 1999). Interestingly, galaxy R8 does appear to be more consistent with the local ellipticals than the local non-UVX populations which might suggest a hot component in R8.
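Equation (1) translates directly into code; the sketch below computes such an index from a rest-frame spectrum. The bandpass limits shown are placeholders for illustration only; the actual W2640 and W2900 definitions are those listed in Table 2 of the paper:

```python
import numpy as np

def uv_index(wave, flux, blue_band, red_band):
    """Break amplitude in magnitudes, following Eq. (1):
    Index = -2.5 * log10(<F_lambda>(blue) / <F_lambda>(red))."""
    def mean_flux(band):
        lo, hi = band
        in_band = (wave >= lo) & (wave <= hi)
        return flux[in_band].mean()
    return -2.5 * np.log10(mean_flux(blue_band) / mean_flux(red_band))

# Hypothetical bandpasses (the real definitions are in Table 2):
# w2640 = uv_index(wave, flux, (2560, 2640), (2645, 2725))
# w2900 = uv_index(wave, flux, (2790, 2870), (2910, 2990))
```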
If this could be confirmed for this galaxy and other high-redshift ellipticals, a detailed study of their UV spectra might uncover the origin of the UVX emission, since the potential UVX candidates evolve on different time scales (e.g., Brown et al., 1998; Yi et al., 1999b). However, ongoing star formation, even just a small amount by mass, can greatly brighten a galaxy’s UV flux, thereby complicating such an analysis. To summarize, comparison of the rest-frame UV spectra of R1, R8, and R10 shows a real scatter in their stellar content even though their observed $`K`$-band fluxes, which should roughly trace galaxy mass, are within a factor of two of each other. This may reflect differences in their star formation and/or merging histories. The UV color and break amplitudes of R8 and R10 are consistent with $`z=0`$ old stellar populations, though the data’s limited S/N and spectral resolution hinders any more detailed comparison, e.g., studying the weaker Mg II and Mg I absorption features. At a redshift of 1.3, we are observing these old galaxies when the Universe was only 30% of its present age. Therefore, improved data, both in S/N and spectral resolution, could probe the evolving composition of the oldest stellar populations from their relative youth at high redshift to their advanced age at $`z=0`$. #### 4.2.3 Relation to Galaxy Morphology and Implications for Formation The morphological results from the HST images are somewhat surprising if our suggestion that the 1213–0017 red galaxies contain old stars is correct. Most of the galaxies do not seem to be early-type ones but rather are disky or diffuse. Indeed, the one absorption-break galaxy with HST imaging, R10, looks like a compact edge-on disk — this galaxy does not resemble an elliptical galaxy morphologically even though it does so spectroscopically. These findings argue that not all old EROs are necessarily ellipticals, at least in their structure. This may reflect a fundamental ambiguity in identifying ellipticals at $`z>1`$ as compared to those at $`z=0`$: are elliptical galaxies properly defined by their morphological or spectrophotometric properties (cf. Kauffmann et al., 1996; Schade et al., 1999)? If they formed by monolithic collapse at very high redshift, there should be no ambiguity. But in current scenarios of hierarchical galaxy formation, massive early-type galaxies form late ($`z\approx 1`$–2 depending on the cosmology) by agglomeration of smaller older sub-units, so the stars of the resulting elliptical are actually much older than the equilibrium morphology. Simulations (Kauffmann & Charlot, 1998) suggest these scenarios can still explain the homogeneity of the early-type cluster galaxy population out to $`z\sim 0.9`$ (e.g., Aragon-Salamanca et al., 1993; Kodama et al., 1998; Gladders et al., 1998; Stanford et al., 1998). However, it is at $`z\gtrsim 1`$ that differences are predicted to become apparent, and there is little observational evidence in this regime due to the small number of $`z>1`$ clusters known. Spectroscopic follow-up of the remaining red galaxies in the 1213-0017 field combined with their HST images should prove very useful in addressing this issue. ## 5 Conclusions We have found an overdensity of EROs in the field of the $`z=2.69`$ quasar QSO 1213–0017 (UM 485), about a factor of 16 overdense compared to the blank field ERO surface density and at least a factor of 6 overdense at the 95% confidence level.
The optical/IR colors of the EROs and numerous other red galaxies in this field are consistent with those of passively-evolving elliptical galaxies at $`z>1`$. HST optical imaging shows a few of the red ($`\mathcal{R}-K>5`$) galaxies seem to have early-type morphologies while the remainder are either disk galaxies or diffuse objects without any obvious core. Their near-IR morphologies are consistent with those observed in the optical, unlike in the case of the prototypical ERO HR 10. This suggests that the dust extinction in these galaxies is either relatively smooth or that reddening by dust is not a significant cause of the colors. Follow-up Keck spectroscopy has measured redshifts for five red galaxies. Four lie at $`z\approx 1.31`$, and three of these have rest-frame UV absorption-line spectra similar to present-day elliptical galaxies, making this the most distant concentration of old galaxies spectroscopically confirmed to date. Including an Mg II absorber seen in the spectrum of the background quasar, there are five spectroscopic redshifts at $`z\approx 1.31`$ with a standard deviation of 1800 km s<sup>-1</sup> and a full range of 3800 km s<sup>-1</sup> in the mean rest frame. A number of lines of evidence possibly suggest the red galaxies in this field delineate the presence of a massive high-redshift cluster. The ERO surface density is enhanced above blank field counts, and there are five spectroscopic redshifts close together. The reddest 1213–0017 galaxies, three of which have the spectra of early-type galaxies, are nearly as luminous and as red as the brightest Coma cluster ellipticals. The red galaxies also have a large angular extent on the sky, and a considerable velocity spread, suggesting dynamical youthfulness. An impression of youthfulness is also conveyed by the roughly filamentary distribution of the red galaxies on the sky. Aside from the direct physical evidence, the presence of spectroscopically old EROs provides circumstantial support to the idea of a cluster. Elliptical galaxies in the local Universe, presumably the present-day counterparts of $`z>1`$ old EROs, are highly clustered and predominantly found in high density regions (e.g., Dressler, 1980). Such an analogy suggests that old EROs should be found with others of their ilk — this idea has yet to be fully explored. Since the hallmark of rich clusters is the presence of old ellipticals, this $`z=1.31`$ field may be one of the most distant rich cluster candidates to date. Moreover, regardless of whether this system is shown to be a genuine massive cluster, the concentration of EROs is likely a sign that this field is an uncommonly overdense region at high redshift, which warrants follow-up studies. Further observations are needed to develop a full physical picture of this field. A much larger sample of redshifts, inferred from multicolor photometric redshifts and directly measured from deep spectroscopy, will reveal the number of luminous galaxies at this redshift. An expanded spectroscopic sample will also be valuable to measure the galaxy velocity distribution and to search for dynamical sub-structure. X-ray observations will allow us to look for hot intracluster gas, a sign of a deep gravitational potential, and also for mass sub-structure. This issue can also be addressed with radio observations to search for a Sunyaev-Zeldovich decrement.
Extending galaxy cluster studies to $`z>1`$ has special importance since current hierarchical formation theories predict massive galaxies assemble from smaller sub-units during this epoch (e.g., Kauffmann & Charlot, 1998), so we can potentially test these models by witnessing the formation process in situ. Measuring the high-redshift cluster abundance using wide-area surveys can provide strong tests of cosmological models (Eke et al., 1996; Bahcall et al., 1997). However, even before a large number of such clusters have been found, the few known to date are valuable sites to investigate the processes which drive the evolution of the oldest stellar populations, galaxies, and galaxy clusters. Furthermore, these clusters can also be used as testing grounds for $`z>1`$ cluster-search strategies, which by necessity are derived from extrapolating the known properties of lower-redshift clusters. It may be that we have to abandon some common precepts formed from studying local rich clusters in order to develop a complete understanding of high-redshift clusters and their constituent galaxies at an epoch of less than half the current age of the Universe. Much of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. We are grateful to Hyron Spinrad, Aaron Barth, and Dave Burstein for discussions which improved this work. It is also a pleasure for us to acknowledge Rychard Bouwens for the stellar population model calculations, Adam Stanford for the redshifted Coma color-magnitude relation, Dave Thompson for results from the CADIS survey, and Jim Rose and Dave Burstein for providing UV spectra. We thank the NOAO staff for assistance with the Kitt Peak observing runs, and Greg Wirth, David Sprayberry, and Ron Quick for supporting the Keck/LRIS observations. Dan Stern and Andy Bunker provided helpful advice and software for the LRIS reductions. We also thank Emmanuel Bertin for distributing SExtractor, and Wayne Landsman and Jonathan Baker for contributing IDL routines. Jenny Graves helped with the initial near-IR reductions and photometry. This research has made use of the Digitized Sky Survey, NASA’s Astrophysics Data System Abstract Service, and the NASA/IPAC Extragalactic Database (NED). A. Dey acknowledges financial support from NASA grant HF-01089.01-97A. This work has also received support from HST NASA grant GO-06958.02-95A.
# Cancellation of GLONASS signals from Radio Astronomy Data ## 1 INTRODUCTION Many papers in the astronomical literature cite problems with interference from the Russian Global’naya Navigatsionnaya Sputnikovaya Sistema (GLONASS) system of navigational satellites when trying to observe 1612 MHz OH spectral line emission. Galt (1991) and Combrinck et al. (1994) both present data demonstrating the damaging effect of GLONASS signals on astronomy data. Some reports have stated that up to 50% of all observations have had to be discarded. The scientific merits of OH spectral line observations are discussed in detail elsewhere; however, there is no question that this is extremely valuable spectrum whose continued use is essential to radio astronomy. One possible solution to the problem is regulation; this is being addressed within international organisations such as the ITU and URSI. However, regulation cannot be expected to recover the spectrum into which the GLONASS system already transmits. The solution most often employed by radio astronomers in dealing with unwanted signals is to put their telescopes in remote locations. However, when dealing with signals that emanate from Earth-orbiting satellites, that method obviously fails. The next most obvious solution is not to observe when interfering signals are present, or simply to throw away affected data. Some “GLONASS aware” tools have been developed that allow dynamic scheduling of observations in order to minimise interference. However, the strategy of avoidance results in the loss of valuable telescope time, which often amounts to thousands of dollars per day; a better solution is therefore desired. Here we present a direct, technical solution to the problem. We have developed and demonstrated a parametric signal processing algorithm which identifies GLONASS signals present in the pre-detection, complex baseband telescope output, and removes them. This algorithm results in a high degree of suppression with negligible distortion of radio astronomical signals. We believe this approach can be applied to interference from the U.S. Global Positioning System (GPS) and possibly other sources as well. This technique is presented in Section 3 of this paper. First (Section 2), we describe the properties of GLONASS that are relevant to the operation of the canceller. In Sections 4–5, we present experimental results demonstrating the effectiveness of this approach. Section 4 describes the procedure used to collect GLONASS-corrupted data, whereas Section 5 shows the results before and after application of the canceller. In Section 6 we consider how this approach may be implemented on existing telescope systems. ## 2 Properties of GLONASS signals GLONASS satellites transmit at frequencies between 1602–1616 MHz and have shared primary user status with radio astronomy for the 1610.6–1613.8 MHz band. There are 24 carriers spread over the 14 MHz band at intervals of 0.5625 MHz. The carrier is modulated by a pair of noise-like, equal-power, pseudo-noise (PN) codes with chip rates of 0.511 and 5.11 MHz. Figure 1 of Combrinck et al. (1994) shows time-averaged spectra of these signals. The unfiltered sinc<sup>2</sup> side lobes of these signals have relative power levels as high as $`-25`$ dB, extending out to 20 MHz either side of the main carrier in some cases. GLONASS satellites launched more recently do have some band-limiting filters. GLONASS, despite its wide band spectrum, actually has a very simple structure. Consider the narrow band (0.511 MHz) GLONASS modulation.
This signal is simply a sinusoidal carrier which experiences a phase shift of $`0^{\circ }`$ or $`180^{\circ }`$ every $`(0.511\text{MHz})^{-1}`$. Each phase shift represents a modulation symbol, or chip. Each group of 511 chips represents a PN code, which is public knowledge, never changes, and is the same for every GLONASS satellite. GLONASS data bits are represented by changing the sign of a block of 10 PN codes, with 10 ms period. Parameters of the signal which are unknown when received are (1) the Doppler shift due to satellite motion, (2) the code phase, that is, the relative position within the 1 ms PN code period, and (3) the carrier phase, which rotates because the satellite is moving and the transmitter’s LO is not perfectly stable. However, the carrier phase, the current value of the data bit, and the complex gain due to the antenna pattern can all be combined into a single unknown complex magnitude parameter. Thus three parameters are sufficient to describe the GLONASS signal with high accuracy. Finally, we note that the modulation used by the coarse/acquisition (C/A) mode of the U.S. Global Positioning System (GPS) is very similar to the modulation used in the GLONASS 0.511 MHz transmission. The main differences are a longer code (1023 chips) and a higher chip rate (1.023 MHz); also, all GPS satellites transmit on the same centre frequency, but with different (but known) PN codes. Thus, techniques which are effective against the 0.511 MHz GLONASS modulation may be effective against GPS C/A transmissions as well. ## 3 CANCELLATION ALGORITHM ### 3.1 Theory Our technique for suppressing GLONASS signals in radio astronomy data is based on parametric signal modelling. Recall that the GLONASS signal can be described using a model consisting of just three parameters: Doppler, code phase, and complex magnitude. Given a block of data containing a GLONASS signal, one can then estimate the parameters. Given the parameters, it is possible to synthesise a noise-free copy of the GLONASS signal. This copy is then subtracted from the telescope output to achieve the suppression. This procedure is illustrated in Figure 1. The parametric solution proceeds on two time scales. Doppler frequency and code phase are difficult to estimate, but change slowly. Complex magnitude, on the other hand, is simple to estimate, and changes quickly. Our approach is to first acquire the GLONASS signal. This involves a joint search over the possible Doppler frequencies and the code phases. For each Doppler/code phase pair, a complex baseband (zero-IF) version of the signal is cross-correlated with the PN sequence. The correct Doppler/code phase pair is the one which maximises the magnitude of the cross-correlation. Although tedious, this is a simple procedure, and is essentially the same acquisition procedure used by hand-held GPS receivers. Once acquired, the Doppler and code phase can be tracked simply by sensing the drift in the correlation peak and adjusting the Doppler and code phase parameters accordingly. It appears that the Doppler and code phase estimates can be frozen for at least 0.1 s between updates without any significant effect on the results. Once the signal is acquired, we estimate the complex magnitude by cross-correlating the time- and frequency-aligned PN code with the complex zero-IF representation of the GLONASS signal. The magnitude and phase of the cross-correlation then represents the desired complex gain. The complex gain is expected to change quickly, so this procedure must be updated often.
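A minimal sketch of this acquisition search, in Python rather than the MATLAB of our actual implementation (`pn_chips` stands for the published 511-chip sequence mapped to $`\pm 1`$, and the Doppler grid is an assumed input):

```python
import numpy as np

FS = 8e6             # complex baseband sample rate (see Section 4)
CHIP_RATE = 0.511e6  # narrow-band GLONASS chip rate
CODE_LEN = 511       # chips in one 1 ms PN code period

def acquire(x, pn_chips, doppler_grid, fs=FS):
    """Joint Doppler/code-phase search: strip each trial Doppler, circularly
    cross-correlate against the oversampled PN code via FFTs, and keep the
    (Doppler, code phase) pair with the largest correlation magnitude.
    x should span an integer number of code periods for the circular
    correlation to be meaningful."""
    n = np.arange(len(x))
    code = pn_chips[(n * CHIP_RATE / fs).astype(int) % CODE_LEN]
    code_fft = np.conj(np.fft.fft(code))
    best = (0.0, None, None)
    for fd in doppler_grid:
        y = x * np.exp(-2j * np.pi * fd * n / fs)   # remove trial Doppler
        r = np.fft.ifft(np.fft.fft(y) * code_fft)   # all code phases at once
        k = int(np.argmax(np.abs(r)))
        if np.abs(r[k]) > best[0]:
            best = (float(np.abs(r[k])), fd, k)
    return best  # (peak magnitude, Doppler in Hz, code phase in samples)
```

A coarse grid such as `np.arange(-10e3, 10e3, 250.0)` Hz would be a plausible starting point; once locked, only small corrections around the current estimates are needed.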
In the example presented below, the complex gain is updated every 128 $`\mu `$s, using 1024 samples at 8 Msamples per second (conversion to a complex baseband signal has halved the sample rate). Given the Doppler frequency, code phase, and complex magnitude, one can then synthesise a noise-free estimate of the GLONASS signal. However, it has been found by experience that better cancellation is achieved by low-pass filtering the zero-IF version of the synthesised GLONASS signal before subtraction from the telescope output. This models the real-world low-pass effect which smoothes discontinuities in band-limited signals. This also has the desirable effect of suppressing the high-order side lobes of the synthesised signal, which may not be accurately represented by the proposed signal model. A suitable filter was found to be an order-32 (33-tap) finite impulse response (FIR) filter based on the Hamming window, with cutoff frequency equal to $`0.05F_S`$ at $`F_S=8`$ MSPS. Such a filter can be obtained using the MATLAB command fir1(32,0.1). ### 3.2 Implementation The results presented below were obtained using non-real-time post-processing software, written in MATLAB. On a 400 MHz Pentium the processing presently runs about 1000 times slower than real time. The MATLAB source code is freely available from the authors. Any practical system would, of course, require a real-time implementation. The maturity of GPS technology means that the techniques and hardware for the acquisition of GLONASS and GPS signal parameters are well developed. The design of the signal modulators in the satellites is also known. With the knowledge of these two areas a practical real-time implementation is within reach, and is discussed in Section 6. ## 4 DATA COLLECTION The astronomy data used in testing these algorithms is a single linear polarisation, 4-bit data stream from each of the six 22 m antennas of the CSIRO Australia Telescope Compact Array at Narrabri in Australia. The data was 4-bit sampled at 16 MHz and recorded on an S2 recorder. The resulting 8 MHz bandpass centred on 1610 MHz was wide enough to include signals from GLONASS-69 at $`\sim `$1609 MHz, an OH maser source (IRAS 1731-33) at $`\sim `$1612 MHz and some flat spectrum. The data were then extracted using the S2TCI system and demultiplexed. More details on this dataset and others that are freely available for conducting these kinds of experiments are in reports by Smegal et al. and Bell et al. The algorithm works on a single polarisation data stream from one antenna only. However, data from a second antenna were also used in cross-correlations as a test of how well the GLONASS signals were removed. ## 5 RESULTS TO DATE The results so far are encouraging, with GLONASS narrow-band signals being effectively removed in a way that is non-toxic to astronomy signals. Figure 2 (left plot, top curve) shows a spectrum of the raw data, with test tones added in software. The bottom curve shows the same 0.1 s ($`1.6\times 10^6`$ samples) of data with the cancellation technique applied, with no apparent GLONASS signals left. There is an OH maser source at $`\sim `$1612 MHz. The top two curves in the right plot of Figure 2 show a blow-up of this region. The bottom curve in the same plot shows the difference multiplied by a factor of 1000, indicating that no damage has been done in the spectral region of the OH source.
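Returning to the update step of Section 3.1, the per-block gain estimate and filtered subtraction can be sketched as follows (Python stand-ins for the MATLAB operations; `ref` denotes the time- and frequency-aligned PN reference from the tracking stage, and the filter delay is ignored in this sketch):

```python
import numpy as np
from scipy.signal import firwin, lfilter

BLOCK = 1024           # samples per gain update: 128 us at 8 MSPS
LPF = firwin(33, 0.1)  # ~ MATLAB fir1(32, 0.1): order-32 Hamming-window FIR

def cancel_block(x, ref):
    """One update interval (x and ref are BLOCK samples long): estimate the
    complex gain by least squares against the aligned PN reference, then
    subtract the low-pass-filtered, scaled replica."""
    g = np.vdot(ref, x) / np.vdot(ref, ref)
    replica = lfilter(LPF, 1.0, g * ref)
    return x - replica
```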
In order to examine the toxicity of this algorithm in the same frequency range as the GLONASS signal, we added a test tone to the data before the cancellation was applied and examined how it was affected by the processing. The left plot of Figure 3 shows the result of subtracting the test tone again, after the cancellation. There is no evidence that the test tone has been affected by the cancellation. However, a small residual spike remains directly under the middle of the GLONASS signal; it is unrelated to the test tones. This seems to be a result of break-through, or leakage, of the GLONASS carrier signal caused by imbalance in the GLONASS phase modulator. It should be possible to model and remove this as well, but we have not addressed that yet. A more sensitive test of the suppression is to cross-correlate with signals from another antenna. The right plot of Figure 3 shows some cross-correlations. The top curve is the cross-correlation of raw data from two antennas. The bottom curve is the cross-correlation of raw data from one antenna with GLONASS-cancelled data from another antenna. There are some extra ripples here that are not apparent in the other spectra, suggesting that there are some inaccuracies in the algorithm that need further investigation. The majority of the signal seen in this cross-correlation is probably due to the GLONASS wide-band signal, which we have not tried to suppress yet. The addition and subtraction of test tones give us some indication of how toxic the algorithm is to astronomy signals. However, astronomy signals are not coherent sine waves, but are more like band-limited noise in the case of spectral lines and wide-band noise in the case of continuum sources. We replaced the test tones with some synthetic band-limited noise, added it before the cancellation and subtracted it again after. As shown in Figure 4, the band-limited noise is not affected by the cancellation algorithm down to a part in 1000; in other words, we have achieved 30 dB of dynamic range in the suppression. Processing with and without the presence of an astronomy source makes very little difference to the effectiveness of the cancellation. ## 6 LIMITATIONS AND FUTURE IMPROVEMENTS The carrier break-through described earlier needs to be modelled and cancelled. While we have not investigated this carefully yet, we believe this should be possible. We do not know the cause of the ripples in the cross-correlation spectra, and some more investigation there may lead to an improvement in the algorithm. At present we have modelled the band-limiting filters of the GLONASS transmission system by a low-pass filter applied when the signal is centred on zero-IF. This is limiting the accuracy with which we estimate the GLONASS spectrum and therefore possibly limiting the cancellation. We aim to either obtain details of the band-limiting filters used on the GLONASS transmission system, or use the existing data to estimate them. Note that both GPS and GLONASS have in-band secondary channels that are spread 10 times wider, and therefore have power spectral density about 10 times weaker. This secondary channel for GPS and GLONASS is much harder to deal with, because the PN codes are more complex and may not be completely known. We do need to find a way to mitigate these signals, because they do still cause substantial problems for radio astronomy. In addition to GLONASS and GPS, many other satellite systems have well-specified modulation and coding schemes.
This opens up the possibility of removing these other classes of signals using digital signal processing. The techniques described here will be useful for some others, but not all. At a recent meeting a wide range of interference mitigation techniques applicable to many different undesired signals was presented and discussed. The usefulness of the method presented in this paper must ultimately be shown by conducting astronomy in the presence of GLONASS or GPS. Ideally the astronomy results with and without interference present would be indistinguishable. The use of recorded data and post-processing is useful for demonstrating the method. But with processing currently running at 1/1000 of real time, the amount of astronomy that can be demonstrated is very limited. It is therefore desirable to explore the use of dedicated hardware to achieve real-time processing of the signal. The most comprehensive approach is to build a complete receiver that incorporates the interference cancellation method described in this paper. This is the approach that must be taken in designing new interference-resistant receivers, but currently the best option is to build an ’add-on’ to existing receivers. The problem with this method is that the quantiser in current receivers is designed for adequate performance when processing noise-like signals. Typically a 1 or 2 bit quantiser is used. Techniques like adaptive noise cancellation will cause the number of bits needed to represent the signal to grow when the interference is suppressed. For example, consider an interferer whose peak amplitude is equivalent to one eighth of the least significant bit in the quantiser, and assume that 2 bits can accurately represent the interference. To generate an interference-free estimate of the signal it is necessary to subtract the estimate of the interferer from the signal. In doing this, the number of bits needed to represent the signal grows by 4 lower-order bits. The signal now has too many bits to be processed by receivers normally used for radio astronomy. The solution to this problem is to form auto- and cross-correlations of the measured signal and the estimate of the interference. If the estimate e(t) and the interference i(t) are related to each other by a linear transfer function, then in the frequency domain this can be written as I(f) = H(f)E(f), where H(f) is the frequency domain transfer function. In this context the astronomy signal can be considered to be noise. The system can then be redrawn with o(t) denoting the wanted astronomy signal plus interference. The cross-spectrum method can now be used to estimate the transfer function H(f). If the astronomy signal a(t) is uncorrelated with i(t), then H(f) is equal to the ratio of the cross-spectrum $`S_{oe}(f)`$ between o(t) and e(t) to the power spectrum $`S_{ee}(f)`$ of e(t): $`H(f)=S_{oe}(f)/S_{ee}(f)`$ The power spectrum $`S_{aa}(f)`$ of a(t) is then given by: $`S_{aa}(f)=S_{oo}(f)-|H(f)|^2S_{ee}(f)`$ Thus measurement of the power spectra of o(t) and e(t), plus the cross-spectrum of the two, gives enough information to derive the power spectrum of the uncorrupted astronomical signal a(t). These spectra can be obtained from the auto- and cross-correlations of the two signals. Correlators used for astronomy normally process at most 2-bit data. Straight 2-bit sampling may not be sufficient to accurately represent the estimated GLONASS/GPS signal. The addition of dither and the use of noise-shaped oversampling should solve this problem.
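A sketch of this cross-spectrum estimator using Welch-averaged spectra (scipy naming; `o` and `e` are the measured stream and the interference estimate):

```python
import numpy as np
from scipy.signal import csd, welch

def interference_free_spectrum(o, e, fs, nperseg=1024):
    """Estimate S_aa(f) = S_oo(f) - |H(f)|^2 S_ee(f), with
    H(f) = S_oe(f) / S_ee(f), from the two recorded streams."""
    f, S_oo = welch(o, fs=fs, nperseg=nperseg)
    _, S_ee = welch(e, fs=fs, nperseg=nperseg)
    _, S_oe = csd(o, e, fs=fs, nperseg=nperseg)
    H = S_oe / S_ee
    return f, S_oo - np.abs(H) ** 2 * S_ee
```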
The hardware itself will internally generate a multi-bit accurate representation of the interferer. The hardware used to synthesise the baseband GLONASS or GPS signal is comparatively simple and easy to emulate in an FPGA. The main difficulty is the estimation and tracking of signal phase and amplitude. This task is best left to software. Data needed to perform this task are correlations between the input and the current estimate of the interferer. If a reasonable initial estimate of the interferer has been found then very few correlations are needed to maintain tracking of carrier phase, carrier Doppler and code phase. The hardware could also be used to generate these correlations. The operations performed by the hardware are: 1. Generate the carrier digitally with adjustable carrier phase and Doppler. 2. Generate the GLONASS/GPS modulation with adjustable code phase. 3. Modulate the carrier with the GLONASS code. This gives an unscaled ’noise free’ estimate of the interference. 4. Generate the zero-delay and $`\pm 1/2`$ chip correlations between the input signal and the ’noise free’ estimate. 5. Scale the magnitude of the ’noise free’ estimate to match the interference. 6. Optionally delay data and estimate. This allows the magnitude and phase corrections to be applied to the estimate used in generating the corrections. This extra item is needed to fully emulate the current software. In practice it may be unnecessary. This hardware removes most of the intensive tasks from software and leaves the software to monitor the correlations. From this monitoring the software then needs to generate updates for the carrier phase, carrier Doppler, code phase and amplitude. With these updates the output of the hardware is a ’noise free’ estimate of the interference. ### Acknowledgements The authors gratefully acknowledge discussions with and help in obtaining data from Ron Ekers, Rick Smegal, Peter Hall, Bob Sault, Matthew Bailes, Willem van Stratten, Frank Briggs, Mike Kesteven, Warwick Wilson, Dick Ferris.
# Stripes in Quantum Hall Double Layer Systems ## I Introduction Recently, it has been established that quantum Hall systems in which several Landau levels are filled may support states with highly anisotropic transport properties. High mobility two-dimensional electron systems in perpendicular magnetic fields, such that the filling factor $`\nu `$ is in the vicinity of $`N+1/2`$, with $`N`$ an odd integer greater than 7, are found to have diagonal resistivity ratios $`\rho _{xx}/\rho _{yy}`$ as large as 3500. ($`\nu \equiv N_e/N_\varphi `$ is the ratio of the number of electrons to the orbital degeneracy of a Landau level $`N_\varphi =AB/\mathrm{\Phi }_0`$, where $`B`$ is the magnetic field, $`A`$ is the sample area and $`\mathrm{\Phi }_0`$ is the magnetic flux quantum.) While a good deal of this anisotropy is purely geometric, the experiments clearly indicate a groundstate for the electrons of a highly anisotropic nature. These results are very likely to be related to stripe phases that have been found in mean-field calculations and exact diagonalization studies of electrons with partially filled high Landau levels. When thermal and quantum fluctuations are included, it is useful to classify possible states of such systems according to their symmetries, which bear a close resemblance to those found in liquid crystals. Of particular interest is the smectic state, which, in the presence of some (hitherto unidentified) orientational ordering field, should exhibit strong macroscopic transport anisotropies as observed in experiment. Quantum fluctuations in this state have been studied, with some models indicating the possibility of metallic behavior in the groundstate. (See, however, Ref. .) The mean-field stripe states are a natural first approximation to such smectic states. It is by now well appreciated that for smaller filling factors, the introduction of a second layer in the quantum Hall system (i.e., double quantum well systems) leads to new phenomena and a rich phase diagram. For a partially filled lowest Landau level ($`N=0`$), at total filling factor $`\nu =1`$ and for small layer separation $`d`$, a spontaneously broken symmetry can occur in which interlayer phase coherence develops in the absence of interlayer tunneling. In this state the system supports a Goldstone mode, which has a roton minimum at a wavevector $`q\approx 1.2\ell ^{-1}`$, where $`\ell `$ is the magnetic length. For separations $`d>d_c\simeq \ell `$, this minimum goes to zero, indicating a phase transition into a unidirectional, interwell-phase-coherent charge density wave (UCCDW). At still higher separations, there is another phase transition into a phase-coherent Wigner crystal state (CWC) with two non-collinear primitive lattice vectors. For $`N`$=0, these states with broken translational symmetry are usually preempted by strongly correlated Fermi-liquid states in which translational invariance is restored by quantum fluctuations. However, in higher Landau levels the effective interaction among electrons in a single layer contains structure that favors such a mean-field state relative to uniform correlated states. This makes the double layer electron system in higher Landau levels an excellent candidate for observation of the interesting translational broken-symmetry states described above. In this article, we demonstrate within the Hartree-Fock approximation that such states do indeed occur for electron systems with high filling factors.
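For orientation, the basic quantities used throughout are straightforward to evaluate; a small sketch (the SI form $`\ell =\sqrt{\mathrm{\hbar }/eB}`$ of the Gaussian-units expression given below; the sample values are illustrative):

```python
import numpy as np

HBAR = 1.054571817e-34      # J s
E_CHARGE = 1.602176634e-19  # C

def magnetic_length_m(B_tesla):
    """Magnetic length l = sqrt(hbar / (e B)) in SI units (meters)."""
    return np.sqrt(HBAR / (E_CHARGE * B_tesla))

def filling_factor(n_2d, B_tesla):
    """nu = N_e / N_phi = n_2d * h / (e B) for areal density n_2d in m^-2."""
    h = 2.0 * np.pi * HBAR
    return n_2d * h / (E_CHARGE * B_tesla)

# e.g. n = 3e15 m^-2 at B = 2.5 T gives nu close to 5 = 4N+1 with N = 1:
print(magnetic_length_m(2.5), filling_factor(3e15, 2.5))
```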
We will focus on systems with $`\nu `$=$`2\nu _L`$=$`2\times (2N+1/2)`$, where $`N`$=0,1,2,…; $`N`$ is the index of the highest occupied orbital Landau level (LL) and $`\nu _L`$ is the filling factor in each layer. Unlike the situation for single layers, we find circumstances under which the system supports stripe ordering and a quantized Hall conductance simultaneously. Physically, this arises due to the development of coherence between the edges of the individual stripes, which opens up a gap in the single particle spectrum. This coherence has interesting consequences for the collective modes of the system, as well as for its low-energy charged excitations. The results of this work may be summarized as follows: i) For any value of $`N`$, and for small layer separation, the groundstate of this system has a uniform density and phase coherence between the layers. The system will support a quantized Hall conductance. ii) In the limit where tunneling may be neglected, the energy spectrum of the system contains a gapless linear (Goldstone) mode due to the spontaneous interlayer coherence. This mode contains a roton minimum at wavevectors $`q\sim 1/\ell `$, which vanishes when the layer separation $`d`$ exceeds some critical value $`d_c`$. The precise value of $`d_c`$ depends on $`N`$. iii) For $`d>d_c`$, a state with unidirectional charge density wave order and phase coherence between the layers is lower in energy than the uniform density state. For $`N\geq 1`$ and any value of the layer separation, the coherent unidirectional charge density wave has lower energy than the square or triangular Wigner crystal. For tunneling parameter $`t\neq 0`$, this state supports a quantized Hall effect. iv) For small values of $`t`$, the linear regions in which the density has significant magnitudes in both layers – i.e., the regions between the centers of stripes in different layers – will support low energy collective modes. These excitations are analogous to spin waves of one-dimensional $`XY`$ magnets in a magnetic field. They sustain soliton excitations which carry charge $`\pm e`$, whose energy scales as $`\sqrt{t}`$. v) For $`t=0`$, the charged excitations become gapless and no quantized Hall effect can occur. At zero temperature the system may undergo a Kosterlitz-Thouless transition, in which, in the equivalent 1+1 dimensional classical XY system, vortices unbind. The bound vortex state is very unusual, similar to quantum Hall states in having a vanishing tunneling conductance, but nevertheless being gapless and not supporting a plateau in the Hall conductivity. Because of the power law correlations inherent in systems with bound vortices, we call this a critical Hall state. vi) At higher values of the layer separation the electron system undergoes a first order phase transition to a non-coherent unidirectional charge-density wave state with a weak longitudinal modulation of the charge density. This state can also be viewed as a highly anisotropic Wigner crystal (i.e., a stripe crystal). This paper is organized as follows. In Section II we describe the uniform coherent state (UCS) and study its charge density excitations. From the softening of this excitation we obtain a phase diagram, in tunneling versus layer separation, showing where the UCS becomes unstable. In Section III we discuss in more detail the method of calculation used for studying the translational broken symmetry ground states, and introduce the different phases analyzed. In Section IV we present our numerical results.
Section V is devoted to a qualitative discussion of the excitations supported by the unidirectional coherent charge density wave state, and their consequences for quantum fluctuations. We conclude in Section VI with a summary. ## II Instabilities of the Uniform Coherent State. In what follows, we consider our double layer system at total filling factor $`\nu =4N+1`$. The $`4N`$ part of this filling refers to the fully filled Landau levels, each of which has two spin states and two “layer states”. In the large magnetic field limit, it is safe to assume these lower levels are filled and inert, and we do not explicitly include them in the calculations below. We also assume there is no spin texture in the groundstate, which is always true if the Zeeman coupling is large. If the tunneling amplitude $`t`$ is smaller than the Zeeman coupling, then our results apply to filling factors $`\nu =4N+3`$ mutatis mutandis. For small separations between the layers, the groundstate of the system is expected to be a uniform coherent state, with wave function of the form $$|\mathrm{\Psi }_N>=\underset{X}{\prod }\frac{1}{\sqrt{2}}\left(C_{N,X,l,\uparrow }^++e^{i\theta }C_{N,X,r,\uparrow }^+\right)|N-1>$$ (1) where $$|N-1>=\underset{X^{\prime };\sigma =\uparrow ,\downarrow ;N^{\prime }<N;i=l,r}{\prod }C_{N^{\prime },X^{\prime },i,\sigma }^+|0>.$$ (2) In these equations $`C_{N,X,l(r),\sigma }^+`$ creates an electron in the Landau level (LL) $`N`$, with guiding center orbital label $`X`$ and spin $`\sigma `$ in the left (right) well. $`|0>`$ is the electron vacuum and $`|N-1>`$ is the wavefunction which describes the system at filling factor $`4N`$, for which all the states with LL orbital index $`N^{\prime }<N`$ are occupied. Note that since in Eq.(1) the product runs over all the possible values of $`X`$, this state corresponds to filling factor $`\nu =4N+1`$. By construction this state has the same electronic charge in each well. Since we assume that the only active electron states are those in the highest LL with spin parallel to the field, in order to simplify the notation we drop the indices $`N`$ and $`\uparrow `$ in the creation and destruction operators in what follows. The state described by Eq.(1) has long-range order in the phase difference between electrons in the two layers, $`\theta `$. In the absence of tunneling between layers, the energy is independent of $`\theta `$, and the system has a continuous broken symmetry. For $`t\neq 0`$, the ground state of the system is composed of a full LL of single-particle orbitals which are a symmetric combination of the left and right layer orbitals ($`\theta =0`$). This wavefunction is exact for non-interacting electrons, and for $`d\to 0`$ we expect it to remain exact when interactions are included. For finite $`d`$, quantum fluctuations become important and the ground state described by Eq.(1) is only an approximation. Nevertheless, we expect a broken symmetry ground state to survive for $`d`$ smaller than a critical value of the layer separation $`d_c`$. In order to estimate $`d_c`$ we calculate the charge density excitations (CDE’s) of the system. The CDE’s are classified by a conserved wavevector $`𝐪`$, and their dispersion can be obtained from the poles of the charge density response function.
Neglecting mixing between LL’s, it has the form $$\omega _{cde}(q)=\left\{[\mathrm{\Delta }^{HF}-V_d(q)][\mathrm{\Delta }^{HF}-V_b(q)+V_a(q)-V_c(q)]\right\}^{1/2},$$ (3) where $`\mathrm{\Delta }^{HF}`$ $`=`$ $`2t+V_d(q=0),`$ (4) $`V_a(q)`$ $`=`$ $`\frac{e^2}{ϵ\ell }\frac{1}{q\ell }V(q),`$ (5) $`V_b(q)`$ $`=`$ $`\frac{e^2}{ϵ\ell }\int _0^{\mathrm{\infty }}d(q^{\prime }\ell )J_0(qq^{\prime }\ell ^2)V(q^{\prime }),`$ (6) $`V_c(q)`$ $`=`$ $`\frac{e^2}{ϵ\ell }\frac{1}{q\ell }V(q)e^{-qd}`$ (7) $`V_d(q)`$ $`=`$ $`\frac{e^2}{ϵ\ell }\int _0^{\mathrm{\infty }}d(q^{\prime }\ell )J_0(qq^{\prime }\ell ^2)V(q^{\prime })e^{-q^{\prime }d}`$ (8) with $$V(q)=e^{-q^2\ell ^2/2}\left(L_N(q^2\ell ^2/2)\right)^2,$$ (9) where $`L_N(x)`$ is a Laguerre polynomial. In these expressions $`\ell =\sqrt{\mathrm{\hbar }c/eB}`$ is the magnetic length and $`ϵ`$ is the dielectric constant of the host semiconductor. $`V_a`$ and $`V_b`$ are the direct and exchange intrawell Coulomb interactions, while $`V_c`$ and $`V_d`$ are the direct and exchange interwell Coulomb interactions. We have computed $`\omega _{cde}(q)`$ for different values of $`N`$, and in Fig.1 we plot several representative curves (with $`t=0`$), each for a layer separation near $`d_c`$. For $`t=0`$, at small wavevector $`q`$ and any value of $`N`$, the dispersion curves increase linearly. This acoustic excitation is the Goldstone mode associated with the continuous broken symmetry; i.e., the spontaneous interlayer coherence. At intermediate $`q`$ the dispersion curves develop a dip which becomes soft at a wavevector $`q_c`$ for values of $`d`$ bigger than a critical distance $`d_c(N)`$. As discussed above, this indicates an instability of the state. We can see in Fig.1 that, in magnetic length units, the value of $`d_c`$ decreases as the LL index increases. At larger values of $`q`$ (not shown in Fig.1) the spectra corresponding to $`N>0`$ develop some structure associated with the zeros of the Laguerre polynomials appearing in the definition of $`V(q)`$. Fig. 2 illustrates $`d_c`$ versus $`t`$ for a few values of $`N`$, and thus represents a phase diagram for the stability of the uniform coherent state (stable for small $`d`$ and large $`t`$) with respect to the formation of a density wave state (for large $`d`$ and small $`t`$). Note that the uniform state is stable for $`N=2`$ even at $`t=0`$. This is not the case for single layers, where at half filling of the valence LL the system forms stripes. Evidently, the condensation energy for spontaneous interlayer coherence outweighs the gain in energy associated with stripe formation in the individual layers. For large enough $`d`$, however, the single layer tendency to form stripes must eventually become important. In order to more precisely characterize the ground state after the instability sets in, we study different broken translational symmetry states. This is the subject of the next Section. ## III Broken Translational Symmetry States In this Section we study different translational broken symmetry states of the electron gas confined in a DQW system.
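Before describing these states, we note that the softening analysis of Section II is straightforward to reproduce numerically from Eqs. (3)–(9); a minimal sketch in dimensionless units ($`e^2/ϵ\ell =1`$, $`\ell =1`$), with the sign conventions as written in Eq. (3) and illustrative parameter values:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre, j0

def V(q, N):
    """Effective interaction of Eq. (9) in units e^2/(eps*l) = 1, l = 1."""
    x = 0.5 * q * q
    return np.exp(-x) * eval_laguerre(N, x) ** 2

def V_exch(q, N, d):
    """Exchange integrals: V_b(q) for d = 0, V_d(q) for d > 0."""
    f = lambda qp: j0(q * qp) * V(qp, N) * np.exp(-qp * d)
    return quad(f, 0.0, np.inf, limit=200)[0]

def omega_cde(q, N, d, t):
    """Eq. (3); a negative return value flags a soft (unstable) mode."""
    delta = 2.0 * t + V_exch(0.0, N, d)   # Delta^HF of Eq. (4)
    Va = V(q, N) / q
    Vc = Va * np.exp(-q * d)
    w2 = (delta - V_exch(q, N, d)) * (delta - V_exch(q, N, 0.0) + Va - Vc)
    return np.sign(w2) * np.sqrt(abs(w2))

# Scan the dip region; the instability sets in where omega_cde first reaches 0:
qs = np.linspace(0.2, 2.5, 47)
ws = [omega_cde(q, N=2, d=1.0, t=0.0) for q in qs]
```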
We adopt the Hartree-Fock approximation in the form introduced by Côté and MacDonald; in this method the important quantities are the expectation values of the operators $$\rho _{j,j^{\prime }}(𝐪)=\frac{1}{N_\varphi }\underset{X,X^{\prime }}{\sum }e^{-iq_x(X+X^{\prime })/2}\delta _{X,X^{\prime }-q_y\ell ^2}C_{X,j}^+C_{X^{\prime },j^{\prime }}.$$ (10) Here the quantum numbers $`j,j^{\prime }`$ are layer indices. Details of the application of this method to bilayer systems may be found in Ref. . The Hartree-Fock equations have a number of different solutions corresponding to different states of the electron gas in the DQW system. Each solution is characterized by a set of order parameters $`<\rho _{i,j}(𝐪)>`$; the ground state energy can be expressed solely in terms of these quantities. In the following calculations we consider a limited number of physically interesting solutions and compare their energies, and consider situations when transitions among these different states may occur. ### A Uniform Coherent State (UCS). This state is described by Eq.(1); it is characterized for $`\theta =0`$ by $`<\rho _{l,l}(0)>=<\rho _{r,r}(0)>=<\rho _{r,l}(0)>=<\rho _{l,r}(0)>=1/2`$, and all other order parameters zero. As discussed above, translational invariance is not broken in this phase, but there is interwell coherence which, at $`t=0`$, breaks a $`U(1)`$ symmetry of the Hamiltonian. In Eq. 1, this arises because the energy of the state is independent of $`\theta `$. The UCS always has lower energy than the incoherent ($`<\rho _{l,r}(0)>=0`$) uniform state. ### B Unidirectional Coherent CDW State (UCCDW) In this state $`<\rho _{i,j}(n𝐆_0)>\neq 0`$, $`n=0,\pm 1,\pm 2,\dots `$; the translational symmetry in one direction is broken and interwell coherence is allowed. The value of $`G_0`$ is chosen to minimize the energy of the system. Furthermore, in order to have an energy minimum in this class of states, the CDW in the two wells must be shifted by a distance $`\pi /G_0`$ with respect to one another. For $`d\to d_c`$ the value of $`G_0`$ which minimizes the energy approaches $`q_c`$. The UCCDW state may be written in the form $$|UCCDW>=\underset{X}{\prod }(\sqrt{\nu _l(X)}c_{X,l}^++\sqrt{\nu _r(X)}e^{i\theta }c_{X,r}^+)|N-1>,$$ (11) where $`\nu _{l(r)}(X)`$ is the occupation of the state $`X`$ in the left (right) well, and $`\nu _l(X)+\nu _r(X)=1`$. The local occupation factors $`\nu _i(X)`$ are periodic functions of $`X`$ with period $`2\pi /G_0`$. With an appropriate choice of origin, the charge density is an even function of $`X`$, so the state may be constructed with only odd Fourier components of $`<\rho _{i,i}(nG_0)>`$. The local coherence between wells is given by the quantity $`\sqrt{\nu _l(X)\nu _r(X)}e^{i\theta }`$, which is even in $`X`$ and periodic with period $`\pi /G_0`$; thus $`<\rho _{l,r}(nG_0)>\neq 0`$ only when $`n`$ is even. From the order parameters it is easy to obtain the band structure of the UCCDW phase. The eigenvalues depend on the guiding center coordinate $`X`$, and (up to an unimportant constant) are given by $$ϵ(X)=\pm \sqrt{A(X)^2+B(X)^2},$$ (12) where $`A(X)`$ $`=`$ $`\underset{n\neq 0}{\sum }e^{inG_0X}[V_a(nG_0)-V_b(nG_0)-V_c(nG_0)]<\rho _{i,i}(nG_0)>,`$ (13) $`B(X)`$ $`=`$ $`t+\underset{n}{\sum }V_d(nG_0)<\rho _{l,r}(nG_0)>.`$ (14) Since the total filling factor of the $`Nth`$ Landau level is 1, only the lowest energy band is occupied. In the UCCDW phase we define the gap, $`\mathrm{\Delta }_{UCCDW}`$, as the difference in energy between the highest occupied state and the lowest empty state.
In the coherent solution, $`<\rho _{l,r}(0)>`$ is always different from zero, so that $`B(X)\neq 0`$ for all $`X`$; in this case $`\mathrm{\Delta }_{UCCDW}>0`$. It is important to recognize that the Hartree-Fock approximation overestimates the actual gap to create well-separated particle-hole pairs above the UCCDW state, in particular because the groundstate band structure does not reflect the possible existence of soliton quasiparticles. As we discuss below, for $`t>0`$, a magnetic field-dependent gap is always present and a quantized Hall conductance will be observed; for $`t=0`$ we shall see that the quantized Hall effect is absent. In the limit $`d\to \mathrm{\infty }`$, $`\nu _l(X)\nu _r(X)=0`$ and the coherence between wells is lost; $`\nu _i(X)`$ can only be zero or one. This limit corresponds to two uncoupled 2DEG’s, each supporting an independent stripe state. ### C Coherent WC state (CWC) In this state $`\{<\rho _{i,j}(𝐆)>\}\neq 0`$, translational symmetry is broken along two non-collinear vectors, and interwell coherence is allowed. The reciprocal lattice vectors $`\{𝐆\}`$ define a two-dimensional lattice containing one electron per unit cell. At intermediate values of $`d`$ the square lattice has lower energy than the triangular lattice, because the interstitial regions in a square lattice are larger than those of a triangular lattice, allowing a particularly low interwell Hartree energy. At larger values of $`d`$ there will necessarily be a first order phase transition into a triangular lattice, since its intralayer Madelung energy is lower than that of the square lattice. However, because the Madelung energies of the two states are very close, this only occurs at a very large value of $`d`$. We thus do not further consider the triangular lattice for the purposes of this work; the Wigner crystals we discuss are square lattices, with unit cell lattice parameters $`a_x=a_y=2\sqrt{\pi }\ell `$. As in the case of the UCCDW, in order to minimize the interwell electrostatic energy, the lattices in the two wells are shifted by a vector $`(a_x/2,a_y/2)`$. For systems modulated in two directions, such as the CWC phase, there are gaps in the excitation spectrum, but the densities at which they occur are not $`B`$ dependent and the Hall conductance is not expected to be quantized. ### D Modulated Unidirectional CDW state (MUCDW) In this state the electron gas has modulations in the charge density along the direction of the stripes. Such modulations are known to significantly lower the energy of stripes in a single layer. While the two-dimensional long-range order that appears in such a state may not survive in the groundstate due to quantum fluctuations, the appearance of such modulations within Hartree-Fock reflects the tendency of the stripes to look like one-dimensional crystals at short length scales. In the calculation we use either an oblique or a rectangular lattice with one electron per unit cell. In either case, the primitive lattice vectors have the form $`𝐚_1`$ $`=`$ $`(X_0,Z_0)`$ (15) $`𝐚_2`$ $`=`$ $`(0,Y_0),`$ (16) with $`X_0=2\pi /G_0`$ and $`Y_0=4\pi \ell ^2/X_0`$. The parameter $`Z_0=Y_0/2`$ for an oblique lattice, and $`Z_0=0`$ for a rectangular lattice. In single layer systems, it is found that the energy is extremely insensitive to the parameter $`Z_0`$, but there is a very shallow minimum for $`Z_0=Y_0/2`$. This turns out to be the case for the present system as well.
For the two layer system under consideration here, to minimize the interwell Madelung energy the charge in the two wells is shifted with respect to one another by a vector $$(\frac{a_1^2}{2X_0},0).$$ (17) This particular shift vector is the analog of what one finds in hexagonal close packed structures. In this class of solutions we have never found a coherent state. Whenever $`<\rho _{i,i}(G_x,G_y)>\neq 0`$ for $`G_y\neq 0`$, we always find $`<\rho _{l,r}(𝐆)>=0`$ for all sets of $`𝐆`$’s. Physically, this arises because formation of modulations competes with interwell coherence. The latter lowers the energy by introducing admixtures of single particle states near the Fermi surface in different wells. Modulations, by contrast, destroy the Fermi surface: modulated stripes essentially represent a highly anisotropic Wigner crystal state, with gaps everywhere in the band structure between occupied and unoccupied states. Thus, there are no low-energy states that may be admixed between wells. The MUCDW is a solution without coherence between the wells, and we find that it is not possible to change continuously from the UCCDW phase to the MUCDW phase. As in the CWC phase, the gap in the excitation spectrum which appears in the MUCDW phase is present over a range of densities. This phase does not exhibit a quantized Hall effect. ## IV Results. We now present our numerical results for the states discussed in the last Section. In Fig.3 we plot the energy difference $`E_{CWC}-E_{UCCDW}`$ as a function of the layer separation for different values of $`N`$. A negative (positive) value of this quantity implies that the CWC has lower (higher) energy than the UCCDW state. Because the transition at $`d_c`$ is continuous, both energies, $`E_{CWC}`$ and $`E_{UCCDW}`$, tend to the energy of the UCS at $`d=d_c`$. For any value of $`d`$ near but higher than $`d_c(N)`$, we find the UCCDW phase has lower energy than the CWC, indicating the instability at $`d_c`$ is a transition from the UCS to the UCCDW state. The driving energy for this transition comes from the difference between intra- and inter-well interactions, which increases with layer separation. For small $`d`$, interwell and intrawell interactions are rather similar; at larger layer separations lower energies can be attained by improving intrawell correlations via $`<\rho _{i,i}(G_0)>`$ taking on a non-zero value. This can be accomplished only by allowing the phase relationship between electrons in different wells to fluctuate, i.e., by lowering the expectation value of $`<\rho _{l,r}(0)>`$ (cf. Eq. 7). For $`N=0`$ a second transition occurs from the UCCDW state to the CWC state at $`d\approx 1.65\ell `$. As commented above, in the $`N=0`$ case the fluctuations will probably melt this CWC state, forming two non-coherent highly correlated Fermi liquids. For $`N\geq 1`$, the situation is different: there is no UCCDW$`\to `$CWC transition. We find that the UCCDW phase always has lower energy than the CWC state. The difference in energy between these states increases with $`N`$. Obviously this is closely related to the fact that for $`N\geq 1`$, in the HF approximation the stripe phase is lower in energy than the WC state. We now discuss the properties of the UCCDW state in more detail. For concreteness, we focus on the case $`N=2`$, $`d=1.2\ell `$ and $`t=0`$. Most of the qualitative results, however, apply to all the UCCDW states for other parameters.
Fig. 4 illustrates the local filling factor in the left well $`\nu _l(X)`$ and the local coherence factor $`\sqrt{\nu _l(X)\nu _r(X)}`$ as a function of $`X`$. For this case $`G_0\approx \ell ^{-1}`$, and the width of a unit cell is $`2\pi \ell `$. Note that $`\nu _l(X)`$ is the particle-hole conjugate of $`\nu _r(X)`$, i.e., $`\nu _r(X)=1-\nu _l(X)`$. The coherence between the wells is maximum in the region where the charge density is shared by both wells. Note that, as commented above, the interwell coherence order parameter has period $`\pi /G_0`$. For smaller values of $`d`$, the oscillations of the density between wells decrease until, at $`d_c`$, the charge density becomes uniform, with $`\nu _l(X)=\nu _r(X)=1/2`$. At this point the coherence factor becomes constant and takes on its maximum value of $`1/2`$. For larger values of the layer separation, $`\nu _{l(r)}(X)`$ changes more rapidly, and the coherence is significantly different from zero in thinner regions. As $`d\to \mathrm{\infty }`$, the width of the coherent regions tends to zero, and the local occupation is just zero or unity. In Fig. 5 we plot the band structure, Eq.(12), in the UCCDW case. For the parameters used in the calculations the Hartree-Fock results show a gap for the charged excitations. The gap is maximum in the regions where the charge is localized in one of the wells and minimum in the regions where the charge is shared by both wells. For small values of $`d`$, the band structure becomes flatter and the energy gap increases until, at $`d_c`$, the gap is constant in the Brillouin zone and takes its maximum value $`2(t+V_d(0))`$. For larger values of $`d`$ the gap decreases, reaching its minimum value $`2t`$ when $`d\to \mathrm{\infty }`$. In the UCCDW phase, $`<\rho _{l,r}(0)>\neq 0`$, so that there is always a finite energy gap, although in some circumstances this gap is much smaller than the typical experimental temperatures. It should be noted that the spatial structure of this gap suggests that charged quasiparticles, either due to thermal fluctuations or to doping away from total filling factor $`\nu =4N+1`$, will be much more mobile along the direction of the stripes than perpendicular to them. It would be most interesting if such anisotropy in double well transport could be observed experimentally. It should be noted that in Eq. (11), the wavefunction for the UCCDW has a free parameter $`\theta `$, whose value for $`t=0`$ does not affect the energy. This implies that the UCCDW supports gapless modes (beyond phonons), due to the spontaneous coherence. The interesting aspect of this system is that, for large enough $`d`$, the coupling between the coherent regions will become negligible, and each linear region may support its own gapless mode. As discussed below, the resulting system is similar to a collection of one-dimensional $`XY`$ ferromagnets. The fluctuations of this degree of freedom greatly reduce the charge gap for any value of $`t`$ compared to what is found in the band structure, and for $`t=0`$ we predict that such fluctuations will drive the gap to zero. Returning to the Hartree-Fock results, the physical origin for the existence of a CDW in DQW systems is easy to understand. For $`d`$ near $`d_c`$ the Hartree-Fock one body potential has a minimum at a wavevector smaller than the reciprocal lattice vector associated with a WC, and the electron gas can attain a lower energy by forming a UCCDW. As $`d`$ increases, the minimum in the effective potential moves to larger wavevectors.
Because of this, for $`N=0`$ the WC becomes more stable than the unidirectional CDW. For $`N\geq 1`$, by contrast, the effective two-dimensional Coulomb interaction, $`V(q)`$, contains Laguerre polynomials that have zeros at decreasing wavevectors when $`N`$ increases. This produces a zero in the repulsive Hartree potential where the exchange interaction is strong, making the UCDW more stable than the WC at larger layer separations. In Fig.6 we plot the order parameter $`<\rho _{l,r}(0)>`$ and the energy gap $`\mathrm{\Delta }_{UCCDW}`$ as a function of the layer separation. Note that in the UCCDW phase the coherence is non-zero even for large values of $`d`$. As $`d`$ increases the charge energy gap goes to its minimum value, but this is not so much due to the decrease in the interwell coherence as it is to the increase of the intrawell coherence $`<\rho _{i,i}(G_0)>`$. Finally, we address the question of the value of the layer separation for which the two electron layers become decoupled, forming, at $`N>0`$, a pair of noncoherent MUCDW states. To find this, one must compute the energies of the MUCDW and the UCCDW as functions of the layer separation. These may be found in Figs. 7 and 8 for the cases $`N=2`$ and $`N=1`$, respectively. As commented above, as a function of $`d`$, there is a crossing (first order phase transition) in energies between the MUCDW and the UCCDW phases. The transition occurs at $`d\approx 1.6\ell `$ in the $`N=2`$ case and at $`d\approx 1.4\ell `$ in the $`N=1`$ case, indicating that the UCDW phase becomes more stable as the LL index increases. From the value of the charged excitation gap, we expect that the QHE in double layer systems for $`t\neq 0`$ will disappear for large enough $`d`$ due to the UCCDW$`\to `$MUCDW transition and not the UCS$`\to `$MUCDW transition. ## V Collective Modes and Solitons in the UCCDW State As mentioned above, the UCCDW state contains an interesting degree of freedom, the phase of the coherence factor in the regions where there is significant occupation of both the left and right wells (cf. Fig. 4). As in the case of the UCS, this phase may be thought of as a function of position, and from it low energy excitations may be constructed. Interesting new physics emerges from this degree of freedom in the limit of large $`d`$, when the linear coherent regions become very narrow, and the magnitude of the coherence factor nearly vanishes between these regions. In this situation we may view the coherent regions as a collection of one-dimensional systems which are, in a first approximation, uncoupled. Each coherent region thus has its own phase, $`\theta _i(y)`$, where $`i`$ labels the individual coherent region, and we assume the stripes are parallel to the $`\widehat{y}`$ direction. Once we adopt a model in which the $`\theta _i`$’s are uncoupled, the low energy statics and dynamics become identical to those of a domain wall between spins of opposite orientation in a filled Landau level. This system has been studied in Ref. , and we may use those results to understand what will happen in the double layer system. At long wavelengths, the energy functional for the phase takes the form $$E_i=\int dy\left[\frac{1}{2}\rho _s\left(\frac{\partial \theta _i}{\partial y}\right)^2-gt\mathrm{cos}\theta _i\right].$$ (18) In the above equation, $`\rho _s`$ is a stiffness for the effective $`XY`$ spin degree of freedom, and $`g`$ is analogous to a Landé $`g`$ factor, with $`t`$ playing the role of a magnetic field.
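For reference, the kink of the sine-Gordon functional (18) connecting adjacent minima, $`\theta _i(-\mathrm{\infty })=0`$ and $`\theta _i(+\mathrm{\infty })=2\pi `$, is the standard textbook solution, quoted here for completeness:
$$\theta _i(y)=4\mathrm{arctan}\left[e^{(y-y_0)/\xi }\right],\xi =\sqrt{\rho _s/(gt)},$$
and its energy relative to the uniform configuration is
$$\epsilon _s=\int dy\left[\frac{1}{2}\rho _s\left(\frac{\partial \theta _i}{\partial y}\right)^2+gt\left(1-\mathrm{cos}\theta _i\right)\right]=8\sqrt{\rho _sgt},$$
which is the soliton energy quoted below.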
The values of both $`\rho _s`$ and $`g`$ depend on the precise form of the UCCDW wavefunction, and require a microscopic calculation to evaluate. Note both $`\rho _s`$ and $`g`$ remain finite in the limit $`t\to 0`$. Eq. 18 has the form of a sine-Gordon model, and a number of interesting properties immediately follow. Of special importance is the fact that the system supports soliton excitations, where $`\theta _i`$ changes by $`2\pi `$ in $`y`$ going from $`-\mathrm{\infty }`$ to $`\mathrm{\infty }`$. The energy of these excitations has the form $`\epsilon _s=8\sqrt{\rho _sgt}`$. For reasons very similar to what occurs in double layer quantum Hall systems near $`\nu =1`$, or analogously for systems with spin, the sine-Gordon model solitons carry a net charge of $`\pm e`$. This means the linear coherent regions support charged excitations much lower in energy than predicted by the Hartree-Fock theory for small $`t`$: as discussed above, the energy to create a well-separated particle-hole pair tends to a constant as $`t\to 0`$, while the soliton energy vanishes as $`\sqrt{t}`$. For small $`t`$, at finite temperature the solitons should dominate the transport properties of the system. In the limit of vanishing tunneling amplitude, a very interesting possibility arises for the behavior of the linear coherent regions. As stated above, the system will have gapless modes associated with the spontaneously broken symmetry; these lead to a set of collective modes dispersing linearly: $`\omega (q)=Aq`$. The action consistent with this has the form $$S=\int d\tau \int dy\left[\frac{1}{2}\rho _\tau \left(\frac{\partial \theta _i}{\partial \tau }\right)^2+\frac{1}{2}\rho _s\left(\frac{\partial \theta _i}{\partial y}\right)^2\right],$$ (19) where $`\tau `$ is imaginary time, and $`\rho _\tau `$ is an effective moment of inertia that depends on the precise form of the UCCDW wavefunction. At zero temperature, Eq. 19 may be reinterpreted as an energy functional in a 1+1 dimensional classical system. This system will undergo a Kosterlitz-Thouless transition if $`\sqrt{\rho _s\rho _\tau }`$ passes through $`2/\pi `$. Above this value, vortices in the $`\theta _i`$ field are bound in pairs; below it, they are unbound. The meaning of this phase transition may be understood by examining what happens for small but non-vanishing $`t`$. In this case, vortices are bound by a linear potential, and there will be a “string” attaching them in which $`\theta _i`$ rotates by $`2\pi `$. From the spin-charge coupling inherent in such multicomponent systems, this means the vortices “cap” a region in spacetime in which the total charge in the $`i`$th linear coherent region fluctuates away from the average by $`\pm e`$. For $`t>0`$ the vortices always remain bound, and these fluctuations are controlled. Thus, fluctuations in the charge density of the 2DEG are suppressed, and the system exhibits a quantized Hall conductance. For $`t=0`$ and $`\sqrt{\rho _s\rho _\tau }>2/\pi `$, the vortices also remain bound, albeit logarithmically. This suggests that the charge density fluctuations are still suppressed, and the system bears a resemblance to a quantum Hall phase. However, since charged excitations are not gapped at $`t=0`$, the Hall conductance should not exhibit a plateau. The difference between states with bound and unbound vortices would be most directly probed by a tunneling experiment: in the bound-vortex phase, one expects a tunneling current $`I`$ that vanishes as a power of the voltage $`V`$ that is greater than one, so that the tunneling conductance vanishes.
In the unbound state, the presence of free vortices means charge may be injected into the system, resulting in a finite tunneling conductance. Note that for a quantized Hall state, the tunneling current should vanish as $`I\propto e^{-V_0/V}`$, where $`V_0`$ is proportional to the quasiparticle gap. The differing behaviors of the low-voltage $`IV`$ characteristics indicate that the bound vortex state is qualitatively different from either a conducting state or a quantum Hall state. We tentatively call this new state a critical Hall state. In closing this Section, we note that in real systems, the true groundstate of the DQW can never be a critical Hall state, because the tunneling amplitude can never be made to completely vanish in practice. In addition, there may be small but relevant gradient couplings among the different linear coherent regions, driving the system away from the critical Hall state at low enough temperature. In practice, both these perturbations may be made very small, so that at experimentally realizable temperatures their effects might be unimportant. It is important to evaluate whether the system parameters (i.e., $`t`$, $`N`$ and $`d`$) can be adjusted so that properties of a critical Hall state may be observed in practice. This will be the subject of a future study. ## VI Summary In this paper, we studied double layer quantum Hall systems in which each layer has a high-index Landau level that is half-filled. Hartree-Fock calculations revealed that for small layer separations, the groundstate is a uniform coherent state (UCS). Above a critical layer separation $`d_c(N)`$, we found a continuous phase transition to a unidirectional coherent charge density wave (UCCDW), which is related to stripe states in single layer systems. This UCCDW state supports a quantized Hall effect when there is tunneling between layers ($`t\neq 0`$), and is always stable against formation of a coherent Wigner crystal (CWC) for Landau indices $`N\geq 1`$. The state does become unstable to the formation of a modulated unidirectional charge density wave (MUCDW) state for large enough layer separation, which should lead to the destruction of the quantum Hall effect. The UCCDW state supports interesting low-energy modes associated with interlayer coherence, which become gapless in the limit $`t\to 0`$. For relatively large layer separations, in a model where the Coulomb interaction is screened, the regions of coherence may be treated as independent one-dimensional systems with an $`XY`$ degree of freedom. The resulting effective Hamiltonian is a sine-Gordon model, which supports charged soliton excitations, whose energy vanishes in the zero tunneling limit. At zero temperature, the equivalent 1+1 dimensional system may be in a state with either bound or unbound vortices. Finally, we argued that the former possibility is an unusual situation which we call a “critical Hall state”, and is characterized by a power law tunneling $`IV`$ characteristic. We thank A. H. MacDonald, C. Tejedor, L. Martín-Moreno and J. J. Palacios for useful discussions. This work was supported by the CICyT of Spain under Contract No. PB96-0085, and by the NSF under Grant No. DMR98-70681.
no-problem/0002/cond-mat0002293.html
ar5iv
text
# Dynamics of Excited Electrons in Copper: Role of Auger Electrons ## I Introduction Two-photon photoemission (2PPE) has been applied intensively to study the electron-electron interaction in alkali, noble and transition metals. The experimental results are often compared with the Fermi-liquid theory (FLT), but such a comparison is only valid if the influence of secondary-electron generation and transport of excited carriers is negligible. Indeed, a series of 2PPE experiments involving photoexcitation of electrons from the $`d`$ band in Cu has shown an unusual non-monotonic behavior of the relaxation time. The relaxation time shows a broad peak at energies just above the $`d`$-band threshold which shifts linearly with photon energy. Both the non-monotonic behavior and the dependence on photon energy are incompatible with the FLT description or with recent calculations based on the Green’s-function formalism. There is currently a controversy in the literature on the origin of the peak and especially on the role of secondary electrons. One of the key questions concerns the amount of Auger electrons contributing to the hot-electron distribution in the region of the peak. On the basis of simple model calculations, Petek et al. have estimated that the Auger contribution is less than 10%, while Knoesel, Hotzel, and Wolf could reproduce their correlation measurement by assuming that the hot electron population above the $`d`$-band threshold was entirely generated via the Auger process. Note that the experiments were performed using different pulse durations (12 and 70 fs, respectively). In a previous work, we developed a theory for the dynamics of excited electrons in metals, which includes both the effects of secondary electrons and transport. We used a Boltzmann-type equation in the random-k approximation for the calculation of the distribution function of excited electrons. A detailed description is given in Ref. . The calculations have shown a peak in the relaxation time at the right energy reflecting the $`d`$-band threshold and a linear shift with photon frequency, in agreement with experiments. However, the height of the peak $`\mathrm{\Delta }\tau _{th}\approx 10`$ fs calculated using $`\tau _h=9`$ fs was smaller than the observed one, $`\mathrm{\Delta }\tau _{exp}\approx 17`$ fs. Furthermore, a simplified procedure was used to extract the relaxation time, which also led to an overestimate of the peak height. Indeed, for simplicity, the relaxation time was extracted by fitting the distribution of excited electrons, $`f(t)`$, which was very sensitive to the influence of secondary electrons causing a deviation from a simple exponential decay. To make our results more reliable, in this work we use the same fitting procedure as the one used by experimentalists, i.e., we fit $`I^{2\mathrm{P}\mathrm{P}\mathrm{E}}(\mathrm{\Delta }t)`$, given by the convolution of the probe laser profile with the distribution of excited electrons. In this paper, we will analyze in detail the structure and height of the peak and the role of secondary electrons, especially focusing on the Auger contribution and the $`d`$-hole lifetime. In order to shed light on the contribution of Auger electrons, we have started by analyzing their weight in the total hot-electron distribution.
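In passing, the fitting procedure just described is simple to emulate. The minimal sketch below builds a model population by convolving a Gaussian pump profile with an exponential decay, cross-correlates it with the probe profile to obtain $`I^{2\mathrm{P}\mathrm{P}\mathrm{E}}(\mathrm{\Delta }t)`$, and fits the trailing wing with a single exponential; the time constants are illustrative assumptions, not outputs of the full Boltzmann calculation.

```python
import numpy as np

# Sketch of the cross-correlation fit: Gaussian pulses, single-exponential
# population decay.  All time constants below are illustrative assumptions.
t = np.arange(-400.0, 400.0, 1.0)          # time grid [fs]
tau_l, tau_true = 70.0, 25.0               # pulse FWHM and test lifetime [fs]
sigma = tau_l / 2.3548                     # Gaussian sigma from FWHM
pulse = np.exp(-0.5 * (t / sigma) ** 2)
decay = np.where(t >= 0.0, np.exp(-t / tau_true), 0.0)

population = np.convolve(pulse, decay, mode="same")   # excited electrons
i2ppe = np.correlate(population, pulse, mode="same")  # pump-probe signal

# Fit the decaying wing (probe well after the pump) with one exponential.
wing = (t > 100.0) & (t < 250.0)
slope, _ = np.polyfit(t[wing], np.log(i2ppe[wing]), 1)
print(f"input tau = {tau_true} fs, fitted tau = {-1.0 / slope:.1f} fs")
```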
For this purpose we have performed calculations for different hole lifetimes ($`\tau _h`$) and laser pulse durations ($`\tau _l`$) and found that their relative contribution can be expressed in a simple way as a product of a function of $`\tau _l/\tau _h`$ and a phase space function which is energy-dependent. In addition, we show that the height of the peak in the relaxation time depends mainly on the hole lifetime and, surprisingly, not on the amount of Auger electrons. It will be shown that the relaxation time for energies above the $`d`$-band threshold scales linearly with $`\tau _h`$ and its structure is determined by the same phase space function as mentioned above. Finally, we study how the height of the peak depends on the optical transition matrix elements and on the laser pulse duration. ## II Results and discussion As said in the introduction, the composition of the hot-electron distribution in terms of different contributions ($`f(E)=\sum _if_i(E)`$, $`i=P,A,OS`$ for primary, Auger, and other secondary electrons) is an open question. We determine their respective weights and compare the distribution of hot electrons for different cases in Fig. 1. We especially focus our attention on the variation as a function of the hole lifetime at the top of the $`d`$ band, $`\tau _h`$. The hole lifetime plays an important role, since it determines the generation rate of Auger electrons, which may strongly influence the dynamics of excited electrons above the Fermi energy. Note that in our model, the hole lifetime is the inverse of the Auger scattering rate; thus we cannot vary independently the hole lifetime and the amount of Auger electrons generated. One should mention that experimentally, lower bounds of 24 fs and 26 fs have been reported for the hole lifetime at the top of the $`d`$ band in Cu. Since there is up to now no clear consensus on the value of $`\tau _h`$, in our calculation we will consider it as a free parameter. In the absence of a $`d`$ band, the electron-hole symmetry would hold, and our calculation then gives $`\tau _h=9`$ fs for the hole lifetime at $`E-E_F=-2.2`$ eV. Due to the localized nature of the $`d`$-wave functions, the $`d`$-hole lifetime is expected to be significantly larger than the $`s`$-hole lifetime, which will hence be considered as a reasonable lower bound. We start with the case of a laser pulse of photon energy $`h\nu =3.3`$ eV and duration $`\tau _l=70`$ fs (FWHM) as used in Ref. . Later, we will discuss the effect of changing the laser pulse duration. In Fig. 1(a) we show results of the distribution of hot electrons for different cases: with and without secondary electrons and for different hole lifetimes. We observe a peak in the primary electron distribution at $`E-E_F=1.1`$ eV which is due to the sharp feature in the $`d`$-band density of states; at this energy, primary electrons dominate. To simplify the discussion, in Fig. 1(b), we plot $`f_i/f_P`$ for $`i=A,OS`$. In the region of the peak in $`f`$, the primary electron contribution is dominating, which appears now as a dip in $`f_i/f_P`$. Below the peak, there is a continuous rise of both the Auger and other secondary electron contributions. On the other hand, just above the peak, there is a sharp increase of the weight of the secondary electrons, largely dominated by the Auger contribution, with a maximum at $`E-E_F=1.3`$ eV. We observe that a change of the hole lifetime leads to a significant variation of the Auger contribution.
For example, for $`\tau _h=35`$ fs, $`f_A/f_P=2.2`$, whereas for the shorter lifetime $`\tau _h=9`$ fs, $`f_A/f_P=3.6`$. In our model, a shorter hole lifetime corresponds to a larger Auger scattering rate and therefore to a larger amount of Auger electrons. Note that $`f_A/f_P`$ has the same shape for any given $`\tau _h`$; only the relative amplitude is changing. This will be analyzed further when discussing the effect of the laser pulse duration. Furthermore, we find that for $`E-E_F>2.5`$ eV, the contribution of secondary electrons is always negligible. In Fig. 2, we show the relaxation time as a function of energy for different cases (see Fig. 1). The relaxation time is determined by a fit to the 2PPE signal as discussed in connection with Fig. 6. It will be interesting to see how the peak in the relaxation time depends on both $`f_A/f_P`$ and on the hole lifetime. First, we observe that when considering only primary processes ($`\tau (E)=\tau _P(E)`$), the relaxation time is a monotonic function, as expected on the basis of FLT. However, when including other secondary processes (but no Auger process), we immediately observe some significant changes for $`E-E_F<2.3`$ eV and some new structure at intermediate energy. When comparing with Fig. 1(b), the changes in $`\tau (E)`$ due to the presence of other secondary electrons coincide with the variation of $`f_{OS}/f_P`$ discussed previously. Indeed, at $`E-E_F=1.1`$ eV (dip in $`f_{OS}/f_P`$), $`\tau `$ is almost unchanged. Then a small peak appears at $`E-E_F=1.3`$ eV (the region where the secondary contribution is the highest), and for $`E-E_F>2.5`$ eV, $`\tau (E)=\tau _P(E)`$ (negligible secondary electron contribution). This picture remains qualitatively unchanged when including the Auger process. Let us now comment on the quantitative aspect. First of all, a clear and well-defined peak is observable when $`\tau _h`$ gets sufficiently large (typically $`\tau _h>17`$ fs). It is also interesting to note that for small $`\tau _h`$ (large Auger scattering rate), the effect is very weak, in spite of the fact that the Auger contribution to the electron distribution is large \[see Fig. 1(b)\]. This is a clear indication that the amplitude of the peak in $`\tau `$ is mainly controlled by the hole lifetime. This is indeed illustrated in the inset of Fig. 2, where $`\tau `$ at $`E-E_F=1.3\mathrm{eV}`$ is plotted as a function of the hole lifetime. We see clearly that $`\tau `$ scales linearly with $`\tau _h`$. It is important to mention that we have found that this linear scaling behavior is also valid for other energies. This suggests that $`\tau (E)`$ could be expressed as a sum of two terms: $`\tau _0(E)`$ (including primary and other secondary processes), and a term proportional to $`\tau _h`$ (the Auger contribution), where the coefficient is a phase space factor depending on the DOS only. To be more explicit, we suggest that the relaxation time can be written as $`\tau (E)\approx \tau _0(E)+\tau _hF(E)`$, where $`F(E)`$ is a phase space factor defined as the ratio between a) the available phase space for Auger scattering and b) the phase space for the optical excitation. To test the validity of this expression, in Fig. 3, we have plotted in the same figure $`[\tau (E)-\tau _0(E)]/\tau _h`$ as a function of $`E-E_F`$ and the calculated $`F(E)`$. Clearly, for $`E-E_F>1.3`$ eV, one can see that the values of $`[\tau (E)-\tau _0(E)]/\tau _h`$ lie almost on the same curve, which depends only on the energy.
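In practice, such a decomposition can be extracted with a simple straight-line fit: computing $`\tau `$ at fixed energy for several hole lifetimes, the intercept gives $`\tau _0(E)`$ and the slope gives $`F(E)`$. A minimal sketch (the $`\tau `$ values below are invented stand-ins for the calculated ones):

```python
import numpy as np

# Stand-in data: tau(1.3 eV) for several assumed hole lifetimes [fs].
tau_h = np.array([9.0, 17.0, 24.0, 35.0])
tau_E = np.array([12.0, 15.2, 17.9, 22.3])

F, tau0 = np.polyfit(tau_h, tau_E, 1)   # slope = F(E), intercept = tau_0(E)
print(f"tau_0(1.3 eV) ~ {tau0:.1f} fs, phase-space factor F ~ {F:.2f}")
```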
A deviation is observed at smaller energies, where Auger electrons are no longer the dominant contribution to the hot-electron distribution \[see Fig. 1(b)\]. However, the position of the dip is always found at 1.1 eV. Within some deviations at high and low energy, we observe a fair agreement between $`F(E)`$ and $`[\tau (E)-\tau _0(E)]/\tau _h`$. Note that a peak appears in $`F(E)`$ at $`E-E_F\approx 1.7`$ eV, whereas in $`[\tau (E)-\tau _0(E)]/\tau _h`$, one finds only a shoulder at this position. This can be attributed to the fact that by definition $`F(E)`$ contains the explicit details of the DOS, whereas the relaxation time is not so sensitive to the details of the DOS. To summarize this discussion, the important result is that the amplitude of the broad peak in $`\tau (E)`$ is controlled by $`\tau _h`$ and its shape by the phase-space factor $`F(E)`$ (in the random-k approximation, it depends only on the DOS). When comparing Fig. 3 and Fig. 1, we find that $`f_A/f_P`$ is also proportional to $`F(E)`$, which will be commented on later. As was shown in Fig. 3, the shape of the peak in the relaxation time is determined by the function $`F(E)`$, which contains the information on the optical transition matrix elements. Therefore, in Fig. 4 we analyze how sensitively the relaxation time $`\tau (E)`$ depends on the optical transition matrix elements. Until now, we have presented data for the case of equal matrix elements, $`M_{d\rightarrow sp}=M_{sp\rightarrow sp}`$. However, on the basis of the different degree of localization of the $`d`$ and $`sp`$-type wave functions, it is expected that $`M_{d\rightarrow sp}`$ should be larger than $`M_{sp\rightarrow sp}`$. For example, in the calculation of the dielectric function for Ag, a ratio $`|M_{d\rightarrow p}|^2/|M_{p\rightarrow s}|^2=2.21`$ was estimated. In Fig. 4, we have plotted $`\tau (E)`$ for different values of the ratio $`R=|M_{d\rightarrow sp}|^2/|M_{sp\rightarrow sp}|^2`$. If we define the height of the peak as $`\mathrm{\Delta }\tau =\tau (1.3\mathrm{eV})-\tau (1.2\mathrm{eV})`$, we find that when changing $`R`$ from 1 to 2, $`\mathrm{\Delta }\tau `$ varies significantly from 9 fs to 14 fs (an increase of 60%). However, for a further increase of $`R`$ from 2 to 4, the change is much weaker (14 fs to 17 fs, an increase of only about 20%). Assuming that for Cu, $`R`$ is of the same order of magnitude as for Ag, one can get an estimate of the hole lifetime required to get fair agreement with the observed height of the peak for polycrystalline Cu, $`\mathrm{\Delta }\tau _{exp}\approx 17`$ fs. We have found that for $`R=2`$, $`\tau _h=35`$ fs gives $`\mathrm{\Delta }\tau _{th}=14`$ fs. Note that our results obtained in the random-k approximation are most suitable for comparison with experiments performed on polycrystalline material. Measurements on single crystals for different orientations have provided values ranging from $`\mathrm{\Delta }\tau _{exp}=15`$ to 40 fs, which is of the same order of magnitude. It is interesting to note that $`\tau _h=35`$ fs coincides with the value used by Knoesel, Hotzel, and Wolf in their simulation. Note that our conclusions are not restricted to the case of Cu only, but hold also for other noble and transition metals. Indeed, let us discuss the case of Ag, Au and the 3$`d`$ transition metals Fe, Co and Ni. In the case of Ag, no peak is observed in the relaxation time, due to the fact that the $`d`$-band threshold is approximately 4 eV below the Fermi level and no $`d`$ electrons are excited for the photon energies widely used in 2PPE experiments ($`h\nu <4`$ eV).
The case of Au is more intriguing, since the location of the $`d`$-band threshold is very similar to the one in Cu. Indeed, a very weak structure in the relaxation time was also observed for Au. The fact that the peak in Au is so weak can be attributed to the small degree of localization of the $`d`$ hole. It is expected that a 5$`d`$ hole in Au should be less localized and therefore have a larger Auger scattering rate than a 3$`d`$ hole in Cu. Thus one expects $`\tau _h^{\mathrm{Au}}<\tau _h^{\mathrm{Cu}}`$ and hence, in accordance with our result that the peak is governed by $`\tau _h`$, a less pronounced structure in Au than in Cu. Another possible explanation is the fact that the peak in the $`d`$-band DOS in Au is much less pronounced than in Cu (again due to the less localized $`d`$ electrons in Au), leading to a less sharp separation between excitations from the $`d`$ and $`sp`$ bands and therefore to a weaker peak. Let us now comment on the fact that in the ferromagnetic transition metals Fe, Co and Ni, no peak is observed. There are two main reasons for the absence of this feature. (i) A threshold for $`d\rightarrow sp`$ transitions is not observable due to the band structure. The majority $`d`$ band extends to energies above $`E_F`$ and a $`d\rightarrow sp`$ threshold does not exist. The upper minority $`d`$-band edge is located very close to $`E_F`$ and the threshold occurs only at the highest excitation energies, where secondary (Auger) electron contributions are not important. (ii) The $`d`$ holes have a large available phase space for decay within the $`d`$ band, which leads to a very short $`d`$-hole lifetime. All the results presented up to now were for $`\tau _l=70`$ fs. However, experiments were performed with laser pulse durations ranging from 70 fs down to 12 fs. Hence, it is interesting to calculate how $`\tau (E)`$ depends on $`\tau _l`$. First, let us show that the relative weight of the Auger contribution to the hot electron distribution is indeed strongly sensitive to the laser pulse duration. In Fig. 5, we have plotted $`f_A/f_P`$ as a function of $`1/\tau _h`$ for laser pulses varying from 12 to 120 fs duration. The first observation is that when reducing the laser pulse duration for a given $`\tau _h`$, $`f_A/f_P`$ is strongly reduced. For instance, for $`\tau _h=35`$ fs, $`f_A/f_P`$ goes from 3 to 0.5 (6 times smaller) when reducing the laser pulse from 120 to 12 fs. This might already explain the origin of the controversy about the amount of the Auger electron contribution mentioned before. It is clear that in experiments with very short laser pulses (12 fs in Ref. ), the Auger electron contribution is much smaller than in experiments with longer laser pulses (70 fs in Ref. ). At this level, one can conclude that the laser pulse duration $`\tau _\mathrm{l}`$ is a new relevant time scale. However, we have found that the only relevant parameter for the variation of $`f_A/f_P`$ is in fact $`\tau _\mathrm{l}/\tau _h`$. In the inset of Fig. 5, we have plotted $`f_A/f_P`$ as a function of $`\tau _\mathrm{l}/\tau _h`$ and we see that all the data points lie to a very good approximation on the same curve, i.e., $`f_A/f_P(\tau _\mathrm{l},\tau _h)\approx g(\tau _\mathrm{l}/\tau _h)`$. In this figure, the data are plotted for $`E-E_F=1.3`$ eV; however, it has to be stressed that this scaling is also found for higher energies.
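Why the ratio should depend on $`\tau _\mathrm{l}/\tau _h`$ alone can be made plausible with a toy rate-equation model: holes created by a Gaussian pump decay at the rate $`n_h/\tau _h`$, generating Auger electrons with a delay, while primaries appear instantaneously; weighting both by the probe (taken equal to the pump) and assuming prompt detection yields a ratio that depends only on $`\tau _\mathrm{l}/\tau _h`$. This is, of course, a caricature of the full Boltzmann calculation.

```python
import numpy as np

def auger_to_primary(tau_l, tau_h, dt=0.2):
    """Toy model: probe-weighted Auger vs. primary generation."""
    t = np.arange(-600.0, 1200.0, dt)
    G = np.exp(-0.5 * (t / (tau_l / 2.3548)) ** 2)   # Gaussian pump
    n_h = np.zeros_like(t)                           # d-hole population
    for i in range(1, len(t)):                       # dn_h/dt = G - n_h/tau_h
        n_h[i] = n_h[i - 1] + dt * (G[i - 1] - n_h[i - 1] / tau_h)
    primary = np.sum(G * G) * dt                     # prompt primaries
    auger = np.sum(G * n_h / tau_h) * dt             # delayed Auger source
    return auger / primary

for tau_l, tau_h in [(12, 35), (70, 35), (120, 35), (24, 70), (140, 70)]:
    r = auger_to_primary(tau_l, tau_h)
    print(f"tau_l={tau_l:4d} fs, tau_h={tau_h:3d} fs, "
          f"tau_l/tau_h={tau_l / tau_h:5.2f} -> f_A/f_P ~ {r:.2f}")
# Pairs with equal tau_l/tau_h give (nearly) identical results, and short
# pulses strongly suppress the Auger share, as in the full calculation.
```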
The ratio $`f_A/f_P`$ can thus be reduced to an expression of the form $`f_A/f_P(E,\tau _\mathrm{l},\tau _h)\approx g(\tau _\mathrm{l}/\tau _h)F(E)`$, where $`F(E)`$ is again the phase-space factor discussed in the previous section. Since the relaxation time is determined from $`I^{2\mathrm{P}\mathrm{P}\mathrm{E}}(\mathrm{\Delta }t)`$, which contains explicitly the information about the laser pulse, let us now analyze how $`\tau `$ is affected by changing the pulse duration. In Fig. 6, we show the dependence of the relaxation time $`\tau (E)`$ on the pulse duration for durations varying from 120 to 12 fs. We observe a significant effect in the region of the peak. Interestingly, the peak gets higher for the shorter pulses, although the amount of Auger electrons is much smaller; see Fig. 5. Note that the dependence on the pulse duration disappears for high energies, where the secondary-electron contribution is negligible. In order to understand the sensitivity to the pulse duration, in the inset of Fig. 6, we present the calculated $`I^{2\mathrm{P}\mathrm{P}\mathrm{E}}`$ and the fit by an exponential model function used to extract the relaxation time. We see that the agreement is very good for pulses of 120 and 70 fs duration, while there is a clear deviation from the single-exponential fit for shorter pulses of 40 and 12 fs. Indeed, for the 12 fs pulse, there is a delayed rise in $`I^{2\mathrm{P}\mathrm{P}\mathrm{E}}`$ which is due to the delayed generation of Auger electrons. Such a delayed rise for very short laser pulses was recently observed experimentally. As an additional remark, the observed deviations underline the difficulties in extracting a relaxation time in the limit of very short pulses and indicate that an improved definition of the relaxation time may be appropriate in this case. ## III Conclusion To conclude, this work presents a detailed analysis of the role of secondary electrons in 2PPE experiments on Cu. The $`d`$-hole lifetime plays a crucial role. Our conclusions are more general and can also be applied to other noble and transition metals, as discussed. We have found that in the case of Cu, the secondary-electron distribution is dominated by the Auger contribution (for $`\tau _h\approx 35`$ fs) for longer laser pulses ($`\tau _l>40`$ fs), but not for a very short laser pulse of $`\tau _l=12`$ fs. For a given $`\tau _h`$, the Auger contribution is much larger for a longer laser pulse duration than for a shorter one. The parameter which controls the variation of $`f_A/f_P`$ is $`\tau _l/\tau _h`$. Concerning the structure in the relaxation time, we have shown that the relaxation time at the peak depends linearly on $`\tau _h`$, but surprisingly not on the amount of secondary electrons generated. The shape depends on a phase-space factor (in the random-k approximation, on the DOS). We provided an expression for the relaxation time $`\tau (E)`$ as a sum of two terms: the first for primary (and other secondary) electrons and the second for the Auger contribution. We have also found that the height of the peak depends sensitively on the optical transition matrix element ratio, $`R=|M_{d\rightarrow sp}|^2/|M_{sp\rightarrow sp}|^2`$, and on the laser pulse duration. For a value of $`R=2`$ and $`\tau _h=35\mathrm{fs}`$, the calculated height of the peak is $`\mathrm{\Delta }\tau _{th}=14`$ fs, in fair agreement with a measurement on polycrystalline Cu giving $`\mathrm{\Delta }\tau _{exp}\approx 17`$ fs.
Note that preliminary results indicate that transport changes the magnitude of the relaxation time, but not the structure and height of the peak. The influence of transport will be studied in detail in a separate publication. In view of the importance of the $`d`$-hole lifetime, it would be highly desirable to perform further experiments as well as theoretical calculations on the $`d`$-hole lifetime in both Cu and Au. ###### Acknowledgements. We would like to thank M. Aeschlimann, M. Wolf and E. Matthias for interesting discussions. Financial support by the Deutsche Forschungsgemeinschaft, Sfb 290 and 450, is gratefully acknowledged.
no-problem/0002/astro-ph0002038.html
ar5iv
text
# A Search for Resonant Structures in the Zodiacal Cloud with COBE DIRBE: The Mars Wake and Jupiter’s Trojan Clouds ## 1 Introduction A planet interacting with a circumstellar dust cloud can produce a variety of dynamical structures in the dust. Planets can clear central holes and create large-scale asymmetries, such as arcs and warps (Roques et al. 1994). Planets can also detain dust in mean motion resonances, forming structures such as the circumsolar ring and wake of dust trailing the Earth in its orbit (Jackson and Zook 1989; Dermott et al. 1994) and clouds of dust at the planet’s Lagrange points (Liou and Zook 1995). Understanding the structures of circumstellar debris disks is vital to the search for extra-solar analogs of our solar system. Concentrations in circumstellar dust clouds may confuse planet-finding interferometers like the Keck Interferometer or the proposed Terrestrial Planet Finder (Beichman 1999). Smooth exo-zodiacal clouds can be identified by their symmetry and subtracted from the signal of a Bracewell interferometer (MacPhie and Bracewell 1979), but cloud asymmetries can be difficult to distinguish from planets (Beichman 1998). On the other hand, planet-induced asymmetries can serve to reveal the presence of a planet that is otherwise undetectable (Wyatt et al. 1999). If we understand the inhomogeneities in our own zodiacal dust cloud, we will be better prepared to interpret observations of other planetary systems. The Diffuse Infrared Background Experiment (DIRBE) aboard the Cosmic Background Explorer (COBE) satellite has provided detailed, revealing images of the zodiacal cloud (e.g., Spiesman et al. 1995, Reach et al. 1997). It surveyed the entire sky from near-Earth orbit in 10 broad infrared bands simultaneously, with a $`0.7^{\circ }`$ by $`0.7^{\circ }`$ field of view, over a period of 41 weeks (Boggess et al. 1992), and imaged the Earth’s ring and wake (Reach et al. 1995). We investigated the COBE DIRBE data set as a source of information about structure in the solar zodiacal cloud associated with planets other than Earth. ## 2 The Data Set We worked with a version of the DIRBE data set which contains the sky brightness with a model for the background zodiacal emission subtracted: the zodi-subtracted weekly data set from the DIRBE Sky and Zodi Atlas (DSZA). The DIRBE team created the DSZA by fitting an 88-parameter model of the zodiacal dust emission to the observed sky brightness (Kelsall et al. 1998). The model includes a smooth widened-fan component, three pairs of dust bands near the ecliptic plane, and the Earth’s ring and trailing wake, but no structures associated with Mars, Jupiter, or any other planets. The zodi-subtracted weekly data set contains 41 files for each DIRBE band, spanning a period from 10 December 1989 to 21 September 1990, covering 10 bands centered at 1.25, 2.2, 3.5, 4.9, 12, 25, 60, 100, 140, and 240 microns. Galactic emission dominates the zodi-subtracted maps in the mid and far-infrared near the galactic plane. Near the ecliptic plane, the zodi-subtracted maps are dominated by residuals from the subtraction of the dust bands that are associated with prominent asteroid families (Reach et al. 1997; Kelsall et al. 1998). The presence of these bands makes searching for smooth, faint heliocentric rings of dust near the ecliptic plane impossible. However, we could hope to distinguish a blob of dust following a planet across the sky from other cloud components and from the galactic background by the apparent motion of the blob during the COBE mission.
The COBE satellite orbited the Earth near the day/night terminator and repeatedly mapped a swath of the sky extending about 30 degrees before and behind the terminator (see the COBE DIRBE Explanatory Supplement (1997) for details). Each weekly map contains a robust average of all the week’s data and covers a region a little larger than the daily viewing swath. This weekly averaging tends to exclude transient events that would contaminate our final maps, but should not otherwise significantly affect a search for large features that move only a few degrees per week. Figure 1 is a schematic view of the solar system during week 34 of the mission (9–16 July 1990) showing the positions of Earth, Mars and Jupiter, and the DIRBE viewing swath for that week. Because DIRBE never imaged the sky within $`60^{\circ }`$ of the sun, the orbits of Mercury and Venus, for instance, do not appear in the data. Mars appeared in the DIRBE viewing swath for 25 weeks of the mission, and moved $`111^{\circ }`$ in ecliptic longitude during those weeks. Jupiter moved only $`40^{\circ }`$ in ecliptic longitude during the entire mission, but this is sufficient to allow some crude background subtraction. More distant planets moved less. Based on these constraints, we decided to search the weekly maps for dust features following the orbital paths of Mars and Jupiter. Figure 2 shows the intersection of the DIRBE viewing swath with the ecliptic plane throughout the 41 weeks of the mission, and the ecliptic longitudes of Mars, Jupiter and the Sun during those weeks. ## 3 The Mars Wake A ring of zodiacal dust particles detained in near-Earth resonances follows the Earth around the sun (Jackson and Zook 1989; Dermott et al. 1994). This ring consists mainly of dust in mean-motion resonances where the particles orbit the sun $`j`$ times every $`j+1`$ Earth years ($`j`$ is a whole number). Smaller trapped particles experience greater Poynting-Robertson acceleration, so the equilibrium locations of their orbital pericenters shift closer to the Earth on the trailing side, where the component of Earth’s gravity that opposes Poynting-Robertson drag is stronger. The result, averaged over many particles, appears as a density enhancement in the ring behind the Earth: a trailing dust wake. The Earth’s wake was detected by IRAS (Dermott et al. 1988, Reach 1991), and later by DIRBE as an asymmetry in the near-Earth dust brightness of $`1.1`$ MJy ster<sup>-1</sup> at 12 microns and $`1.7`$ MJy ster<sup>-1</sup> at 25 microns (Reach et al. 1995). We searched the DIRBE data set for a similar wake of dust trailing Mars. Blackbody dust at the heliocentric distance of Mars has a typical temperature of $`220`$ K; it emits most strongly in the 12 and 25 micron DIRBE bands. We restricted our exploration to data from these two bands. We began by assembling a composite map of the emission from beyond the solar system, mostly due to stars and dust in the Galactic plane, by averaging together all the zodi-subtracted weekly maps in their native COBE quadrilateralized spherical cube coordinates, a coordinate system that is stationary on the celestial sphere. We subtracted this composite map from each of the zodi-subtracted weekly maps, effectively removing most of the galactic emission and any other stationary emission except within a few degrees of the galactic plane, where the emission is so high that detector and pointing instabilities make our linear subtraction method ineffective.
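For orientation, the semimajor axes of these trapping resonances follow directly from Kepler's third law; a short sketch:

```python
# A particle in an exterior j:(j+1) resonance completes j orbits per j+1
# Earth years, so P = (j+1)/j yr and a = P**(2/3) AU by Kepler's third law.
for j in range(1, 8):
    a = ((j + 1) / j) ** (2.0 / 3.0)
    print(f"{j}:{j + 1} resonance -> a = {a:.3f} AU")
```

The trapped orbits therefore sit in a narrow band just outside 1 AU, which is why the ring hugs the Earth's orbit.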
The remaining maps, with outlying data removed, had surface brightness residuals in the range of -1.7 to +1.0 MJy ster<sup>-1</sup> at 12 microns, and -1.6 to 2.1 MJy ster<sup>-1</sup> at 25 microns. For comparison, the typical total zodiacal background near Mars during the mission is $`35`$ MJy ster<sup>-1</sup> at 12 microns and $`66`$ MJy ster<sup>-1</sup> at 25 microns. The most prominent remaining features were the stripes within a few degrees of the ecliptic plane that are associated with the asteroidal dust bands. The next most prominent remaining features were wide bands extending $`\pm 30`$ degrees from the ecliptic that appeared to follow the sun. The 12–25 micron color temperature of the wide bands was $`280`$ K; they are probably residuals resulting from imperfect subtraction of the Earth’s ring and wake. We assembled a crude map of the residual near-Earth flux by averaging together the galaxy-subtracted maps in geocentric ecliptic coordinates referenced to the position of the Sun. Subtracting this from the weekly maps cancelled most of the signal in the wide bands. Mars moved $`87^{\circ }`$ with respect to the Sun during the mission, allowing us to subtract this composite map without subtracting a significant flux from a wake moving with Mars. Figure 3a shows our map of the galactic background; Figure 3b shows the near-Earth residuals. Next we chose subframes of each weekly map centered on the ecliptic coordinates of Mars in the middle of the week, and inspected them visually. No structure in the data appeared to move with Mars from week to week. In order to understand the data better, we constructed a simple model of the Mars wake from the empirical model of the Earth’s trailing wake fit to the DIRBE data by Kelsall et al. (1998). The model has the following form: $$n=n_0\mathrm{exp}\left[-\frac{(r-r_0)^2}{2\sigma _r^2}-\frac{|z|}{\sigma _z}-\frac{(\theta -\theta _0)^2}{2\sigma _\theta ^2}\right]$$ (1) where $`n`$ is the local average of particle number density times particle cross section, and $`r`$, $`z`$, and $`\theta `$ are cylindrical coordinates in the plane of the orbit of Mars centered on the sun. Mars is located at $`r=r_0`$, $`z=0`$, $`\theta =0`$. The parameters of the model, $`\theta _0`$, $`\sigma _r`$, $`\sigma _z`$, $`\sigma _\theta `$, and $`n_0`$, are the same as the corresponding parameters for the Earth’s wake: $`\theta _0=-10^{\circ }`$, $`\sigma _r=0.10`$ AU, $`\sigma _z=0.091`$ AU, $`\sigma _\theta =12.1^{\circ }`$. The shaded area trailing Mars in Figure 1 shows how this model would appear viewed from above the ecliptic plane. The Kelsall et al. (1998) Earth wake has $`n_0=1.9\times 10^{-8}\mathrm{AU}^{-1}`$, but we chose $`n_0=1.08\times 10^{-8}\mathrm{AU}^{-1}`$ so that the density of the model would be proportional to the local background dust density at the orbit of Mars. The model represents what the Earth wake would look like if it were trailing Mars instead of Earth. We evaluated the model’s surface brightness by computing the line-of-sight integral $$I_\lambda =E_\lambda \int n(r,z,\theta )B_\lambda (T)\,ds$$ (2) where $`B_\lambda (T)`$ is the Planck function and $`E_\lambda `$ is an emissivity modification factor prescribed by the COBE model to account for the deviation of the Earth wake’s spectrum from a blackbody; $`E_{12\mu \mathrm{m}}=1.06`$, $`E_{25\mu \mathrm{m}}=1.00`$. The temperature of the dust varies with heliocentric distance, $`R`$, as $`T=286\mathrm{K}R^{-0.467}`$, following the DIRBE model.
This expression is similar to what you would expect for grey-body dust ($`T=278\mathrm{K}R^{-0.5}`$). In Figure 4, we compare a synthesized image of the model wake with a background-subtracted image of the infrared sky around Mars. The image shows the flux in the 25 micron band averaged over weeks 26–34 (14 May 1990 to 15 July 1990) in ecliptic coordinates referenced to the position of Mars. The subset of the weekly 25-micron data used in this image is indicated by the horizontal stripes labeled “M” in Figure 2. Mars moved 40 degrees in ecliptic longitude over this period. The DIRBE team blanked the data within a square about $`2.5^{\circ }`$ on a side centered on Mars, and within a $`1.5^{\circ }`$ radius circle centered on Jupiter. A software mask in Figure 4 covers the region around Mars affected by this processing. This whole region, up to $`40^{\circ }`$ behind Mars, shows no sign of a brightness enhancement that we would associate with a wake of dust trailing Mars. Maps from the later weeks suffer from an oversubtraction due to imperfections in the Kelsall et al. (1998) zodi model, visible as the dark region to the lower right. Weeks earlier in the mission suffer from a similar undersubtraction. These artifacts, our primary source of noise, appear to arise from dust bands at latitudes of $`\pm 10^{\circ }`$, where bands associated with the Eos asteroids are prominent in the raw DIRBE data (Reach et al. 1997). As the dust from the asteroid belt spirals towards the sun, perturbations from planets deform the bands. The Kelsall et al. model includes a simple model of this dust band which could not take these perturbations into account. We chose the span of weeks used to create Figure 4 to minimize these artifacts, which are easily discernible by their extent in latitude and longitude. To better compare the model with the data, we focused on a narrow strip with a height of $`3^{\circ }`$ in ecliptic latitude, extending from $`8^{\circ }`$ ahead of Mars to $`39^{\circ }`$ behind Mars in ecliptic longitude. This strip contains most of the flux in the model wake. We averaged together maps from weeks 26–34 prepared as described above to produce an image of this strip. In Figure 5, we plot a cut through this strip, and we compare it with the model, processed in the same manner as the data. The data are dominated by residuals from the ecliptic bands and the Earth’s ring, smeared out in the ecliptic plane by the orbital motion of Mars. The standard deviation of the data is 0.54 MJy ster<sup>-1</sup>; although the distribution of the residuals is not Gaussian, based on this comparison we can place a rough 3-$`\sigma `$ upper limit on the central peak of the Mars wake of 18% of the flux expected from our simple model. The empirical model of the Earth’s wake we have used for comparison to the Mars dust environment is not an ideal model for the Mars wake. It may not even be a good representation of the Earth’s wake. Since COBE viewed the Earth wake from near the Earth only, the observations constrain the product $`n_0\sigma _\theta `$ for the Earth wake, but do not provide good constraints on either of these parameters alone. Kelsall et al. (1998) quote a formal error of 28% on the determination of $`\sigma _\theta `$. Calculations for 12 micron particles suggest that $`\sigma _\theta `$ for the Earth wake might be 40% lower than the Kelsall et al. (1998) number; this figure is based on Figure 5 in Dermott et al. (1994).
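The surface-brightness integral of equations (1) and (2) is straightforward to reproduce; the sketch below evaluates it along a single ray for one illustrative viewing geometry (placing Earth 55 degrees ahead of Mars is an assumption, as is the unit emissivity factor), so the number it prints should be read as an order-of-magnitude check only.

```python
import numpy as np

h, c, k = 6.626e-27, 2.998e10, 1.381e-16   # cgs

def planck_nu(lam_um, T):                  # B_nu [erg s^-1 cm^-2 Hz^-1 sr^-1]
    nu = c / (lam_um * 1e-4)
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

# Wake model parameters from Eq. (1); the geometry below is an assumption.
n0, r0 = 1.08e-8, 1.524                    # AU^-1, AU
sig_r, sig_z, sig_th = 0.10, 0.091, np.radians(12.1)
th0 = np.radians(-10.0)                    # wake centre trails Mars
earth = np.array([np.cos(np.radians(55.0)), np.sin(np.radians(55.0)), 0.0])

def brightness(look_dir, lam_um=25.0, smax=4.0, ns=4000):
    s = np.linspace(1e-3, smax, ns)                 # path length [AU]
    p = earth[None, :] + s[:, None] * look_dir      # points along the ray
    R = np.hypot(p[:, 0], p[:, 1])
    th = np.arctan2(p[:, 1], p[:, 0])               # longitude from Mars
    n = n0 * np.exp(-(R - r0)**2 / (2 * sig_r**2) - np.abs(p[:, 2]) / sig_z
                    - (th - th0)**2 / (2 * sig_th**2))    # Eq. (1), AU^-1
    T = 286.0 * R**-0.467
    # n [AU^-1] times ds [AU] is dimensionless, so I has the units of B_nu;
    # the final factors convert erg s^-1 cm^-2 Hz^-1 sr^-1 to MJy sr^-1.
    return np.sum(n * planck_nu(lam_um, T)) * (s[1] - s[0]) * 1e23 / 1e6

target = np.array([r0 * np.cos(th0), r0 * np.sin(th0), 0.0])
d = (target - earth) / np.linalg.norm(target - earth)
print(f"model wake brightness along this ray ~ {brightness(d):.2f} MJy/sr")
```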
Since we are sensitive to the wake’s surface brightness peak as seen from the Earth, not Mars, using a more compact wake model affects our upper limits. Holding $`n_0\sigma _\theta `$ constant and decreasing $`\sigma _\theta `$ by 40% translates into a decrease of our upper limit to 11% of one Earth wake. Mars has 11% of the mass of the Earth, so we expect it to trap less dust than the Earth, but not simply 11% as much dust. In fact, there is no simple scaling law that describes how the density of a dust ring relates to the size of the planet that traps it. The density of the Mars ring is proportional to the capture probability times the trapping time for each resonance, summed over all relevant resonances and the distribution of particle sizes. In the adiabatic theory for resonant capture due to Poynting-Robertson drag, the capture probabilities depend on the mass of the planet compared to the mass of the star and on the eccentricity of the particle near resonance (Beauge and Ferraz-Mello 1994). So one complicating factor is that the orbits of the dust particles are slightly more eccentric when they pass Mars than when they pass the Earth; a particle released on the orbit of a typical asteroid, at 2.7 AU with an eccentricity of 0.14, will have an eccentricity of 0.07 as it passes Mars, and an eccentricity of 0.04 when it passes the Earth (Wyatt & Whipple 1950). The higher eccentricity makes these particles harder to trap. The trapping time scale is proportional to the time it takes for the resonant interaction to significantly affect the eccentricity and libration amplitude of the particle. When the planet has a circular orbit, these time scales are of the order of the local Poynting-Robertson decay time (Liou and Zook 1997), which scales as $`r_0^2/\beta `$, where $`\beta `$ is the ratio of the Sun’s radiation-pressure force on a particle to the Sun’s gravitational force on the particle. Compared to its value at the heliocentric distance of the Earth, the Poynting-Robertson drag force at the orbit of Mars is smaller for a given particle by a factor of $`1.52^2=2.31`$. The small mass of Mars and the higher eccentricities of the orbits of the incoming particles work against the formation of a dense ring, but the greater heliocentric distance of Mars compared to the Earth works in favor of the formation of the ring. So far our discussion has assumed that the trapping is adiabatic, i.e., that the orbital elements of the particles change on time scales much longer than the orbital period. This approximation may not be as good for trapping by Mars as it is for trapping by the Earth. Mars has a greater orbital eccentricity ($`e=0.093`$) than the Earth ($`e=0.017`$). This increases the widths of the zones of resonance overlap, and makes a larger fraction of dust orbits chaotic (Murray and Holman 1997). Predicting the density of the Mars wake is another step more complex than predicting the density of the Mars ring. Compared to the Earth wake, the Mars wake may form closer to the planet and have a smaller $`\sigma _\theta `$. Since Mars is less massive than the Earth, a given particle would need to have a closer interaction with Mars than with the Earth to receive an impulse from the planet’s gravity that would balance the Poynting-Robertson drag on the particle (Weidenschilling and Jackson 1993).
For this reason, we expect the trapped particles which form the Mars wake to prefer resonant orbits with higher $`j`$ and lower $`\varphi `$ than similar particles trapped by the Earth, where $`\varphi `$ is the angle between the perihelion of the orbit of a particle and the longitude of conjunction of the particle and the planet. Our upper limit shows that the Mars wake is less dense than the Earth wake by more than the simple factor of the mass ratio times the square of the ratio of the semimajor axes ($`0.11\times 2.31=0.25`$). However, a thorough numerical simulation which includes the effects we mentioned and others, such as resonant interactions with Jupiter, may be the only good way to relate our upper limit to the dynamical properties of the dust near Mars. ## 4 Trojan Dust While the Earth and Mars can collect abundant low eccentricity particles from all different orbital phases spiraling in from the asteroid belt, Jupiter orbits in a distinctly different dust environment. Outside the asteroid belt, the dust background probably consists mainly of small particles with high orbital eccentricities: submicron particles released by asteroids or comets that are kicked by radiation pressure into more eccentric orbits than their parent bodies (Berg and Grün 1973; Mann and Grün 1995). There is also a stream of submicron particles from the interstellar medium (Grün et al. 1994, Grogan et al. 1996), and there are probably a few particles near Jupiter that originated in the Kuiper belt (Liou et al. 1996). Jupiter probably traps many of the small particles in 1:1 mean motion resonances (Liou and Zook 1995). However, these small trapped particles should occupy both “tadpole” and “horseshoe” orbits, without a strong preference for either, and the locations of their Lagrange points vary with $`\beta `$ (Murray 1994). They probably form large, diffuse ring-like clouds which are difficult for us to detect. But there is another potential source of dust that could form concentrated clouds we could hope to detect against the asteroid bands in the DIRBE data: the Trojan asteroids. This population of asteroids orbits the Sun at $`5.2`$ AU in 1:1 resonances with Jupiter, librating about Jupiter’s L4 and L5 Lagrange points, roughly $`60^{\circ }`$ before and behind the planet. They number about as many as the main-belt asteroids. Marzari et al. (1997) have simulated the collisional evolution of the Trojan asteroids, and concluded that collisions in the L4 swarm produce on the order of 2000 fragments in the 1–40 km diameter range every million years. If we simplistically assume an equilibrium size distribution for the produced particles, $`dn\propto a^{-3.5}da`$, where $`a`$ is the particle radius (Dohnanyi 1969), we find that there are roughly $`10^{23}`$ particles in the 10–100 micron diameter size range produced every million years. These large particles are likely to stay in roughly the same orbits as their parent bodies, trapped by Jupiter in “tadpole” orbits, which librate around a single Lagrange point. They could conceivably form detectable clouds at L4 and L5. Liou and Zook (1995) calculated that 2-micron diameter particles will stay trapped in 1:1 resonances for $`5000`$ years. A 20-micron diameter particle at Jupiter’s orbit experiences $`1/10`$ of the Poynting-Robertson acceleration of 2 micron particles, and will typically stay trapped for 10 times as long (Schuerman 1980).
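Before turning to the expected fluxes, the $`10^{23}`$ figure above can be checked in a few lines by normalizing the assumed Dohnanyi distribution to the Marzari et al. (1997) fragment production rate:

```python
# dn = C a^-3.5 da integrates to -(2/5) C a^-2.5, dominated by the small end.
def count(a1_cm, a2_cm):                   # unnormalised count in [a1, a2]
    return 0.4 * (a1_cm**-2.5 - a2_cm**-2.5)

km, um = 1e5, 1e-4                         # cm per km and per micron
C = 2000.0 / count(0.5 * km, 20.0 * km)    # 2000 bodies, 1-40 km diameters
N = C * count(5.0 * um, 50.0 * um)         # grains with 10-100 um diameters
print(f"~{N:.1e} grains per Myr")          # of order 1e23, as quoted
```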
Assuming a trapping time of $`5000`$ years $`\times `$ the dust grain diameter/2 microns, and an emissivity appropriate for amorphous icy grains (e.g., Backman and Paresce 1993), the 10–100 micron diameter particles in the Trojan cloud will emit a total flux, as viewed from the Earth, of $`3\times 10^{-4}`$ MJy at 60 microns, a few orders of magnitude below our detection limit. However, this is a drastic extrapolation and probably a poor guess at the actual cloud brightness; the size-frequency distribution of the Trojan asteroids is not well known, and the dust cloud is probably not near collisional equilibrium. Moreover, the total amount of trapped dust is subject to severe transients, such as the events that produced the dust bands associated with main belt asteroid families (Sykes and Greenberg 1986). For example, a 20-km diameter Trojan asteroid ground entirely into 10-micron diameter dust corresponds to a transient cloud which, as viewed from the Earth, would produce a 60 micron flux of $`6`$ MJy. A similarly enhanced cloud might be visible a few percent of the time. Unfortunately, Jupiter’s Lagrange points do not move far with respect to the galactic background during the COBE mission; L4 moves 10 degrees and L5 moves 50 degrees, as shown in Figure 2. Only L5, the trailing Lagrange point, moves far enough during the mission to make subtracting the galactic background feasible. There are about half as many L5 Trojans known as L4 Trojans, but this is probably because the L5 region has been searched less intensely than the L4 region, not because the L4 and L5 populations are significantly different (Shoemaker et al. 1989). To make a background-subtracted image of the L5 region, we chose two subsets from the zodi-subtracted data set. The first, subset A, is from the beginning of the mission (weeks 5-10) when L5 was in the viewing swath and approximately stationary on the sky. The second, subset B, is the same region of sky, but contains data from half a year later in the mission (weeks 33–38), when L5 had moved 45 degrees away, out of the viewing swath. The average distance from Earth to L5 is approximately the same during each time period. These data sets are depicted in Figure 2. We focused on data in the 60 micron band, the band which contains the emission peak for dust at the local blackbody temperature at 5.2 AU. To minimize the residuals from the zodiacal dust model, we used only data from solar elongations between 65 and 115 degrees (or between 245 and 295 degrees). Figure 6 shows an image constructed from data set A and an image constructed from data set B, and the difference, A$`-`$B, which is dominated by residuals from dust bands associated with asteroid families and shows no obvious evidence of enhanced emission at L5. We made a simple model for a Trojan cloud of large dust particles by assuming that they occupy the same dynamical space as the Trojan asteroids themselves, following Sykes (1990). Sykes modeled the asteroidal dust bands by showing how particles constrained to orbits with a given inclination, eccentricity, and semimajor axis form a cloud when the remaining three orbital elements are randomized. He then convolved the shapes of these clouds with the distributions of orbital elements of the asteroids. In our case, however, the distribution of orbital elements is much broader and more important in determining the shape of the final distribution of particles.
Since there are only about 70 Trojan asteroids whose orbits are well studied, the distributions of Trojan asteroid orbital parameters have severe statistical uncertainties. Therefore we settle for a simple Gaussian model for the Trojan cloud, using the orbital parameters as a guide to the parameters of the Gaussian. In the following calculations, we will neglect the inclination of Jupiter’s orbit relative to the Earth’s orbit ($`1.305^{\circ }`$). A typical Trojan asteroid librates around its Lagrange point with a period of 148 years. The mean longitude of the asteroid with respect to Jupiter, $`\varphi `$, oscillates within limits $`\varphi _{min}`$ and $`\varphi _{max}`$, which can be calculated, according to Yoder et al. (1983), from $$\mathrm{sin}\frac{\varphi _{min}}{2}=\frac{\mathrm{sin}(\alpha /3)}{B}\qquad \mathrm{sin}\frac{\varphi _{max}}{2}=\frac{\mathrm{sin}(\alpha /3+120^{\circ })}{B}$$ (3) where $`B=\eta _0(3\mu /2E)^{1/2}`$, $`\mathrm{sin}\alpha =B^3`$, $`\mu =\mathrm{M}_{\mathrm{Jupiter}}/\mathrm{M}_{\odot }=0.000955`$ and $`\eta _0`$ = mean motion of Jupiter = 0.01341 rad yr<sup>-1</sup>. These limits are set by $`E`$, which is a constant of the motion in the absence of Poynting Robertson drag: $$E=\frac{1}{6}\left(\frac{d\varphi }{dt}\right)^2+\frac{\mu \eta _0^2}{2x}(1+4x^3)$$ (4) where $`x=|\mathrm{sin}(\varphi /2)|`$. The libration amplitude, $`D`$, is $`\varphi _{max}-\varphi _{min}`$. We find that the energy constant is approximately $$E\approx \frac{3}{2}\mu \eta _0^2\left(1-0.0133D+0.2266D^2-0.0392D^3\right)$$ (5) for $`D\lesssim 1.3`$. The fraction of time a particle spends at a given phase, or equivalently, the distribution in phase of an ensemble of particles, is given by $$P_\varphi \propto \frac{1}{d\varphi /dt}.$$ (6) We can evaluate this as a function of $`D`$ with the aid of equations (4) and (5). For the distribution of dust libration amplitudes, $`P_D`$, we used a simple analytic function that approximates the distribution of libration amplitudes for Trojan asteroids shown in Figure 5 of Shoemaker et al. (1989). When we average $`P_\varphi `$ over $`P_D`$, we find that the L5 dust cloud is distributed in orbital phase roughly as a Gaussian centered at $`\theta _0=59.5^{\circ }`$ behind Jupiter with a dispersion $`\sigma _\theta =10^{\circ }`$. The distribution of the dust in heliocentric latitude can be approximated in a similar way. If a particle in an orbit of given inclination, $`i`$, with small eccentricity, spends a fraction of its time, $`f`$, at latitude $`\beta `$, an ensemble of particles with small eccentricities and evenly distributed ascending nodes will have a distribution, at a fixed orbital phase, of $$P_\beta \propto f\propto (\mathrm{cos}^2\beta -\mathrm{cos}^2i)^{-1/2}.$$ (7) We take the inclination distribution of the particles, $`P_i`$, to be a simple analytic function that approximates the data for “independently discovered Trojans” shown in Figure 3 of Shoemaker et al. (1989). When we average $`P_\beta `$ over $`P_i`$, we find the distribution in latitude is roughly a Gaussian with dispersion $`\sigma _\beta =10^{\circ }`$, and the distribution in height above the ecliptic has a dispersion $`\sigma _z=0.94`$ AU. The radial distribution of Trojans is more complicated to model, since both librations and epicycles include radial excursions. The average L5 Trojan eccentricity is 0.063; a particle with this eccentricity orbits at a range of heliocentric distances, $`\mathrm{\Delta }r\approx `$ 0.66 AU.
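As an aside, the libration limits implied by equations (3)–(5) are easy to evaluate; the short sketch below does so for a few amplitudes (taking $`D`$ in radians inside equation (5), and noting that $`\eta _0`$ cancels out of $`B`$):

```python
import numpy as np

for D_deg in (15.0, 29.0, 60.0):
    D = np.radians(D_deg)
    E_fac = 1.0 - 0.0133 * D + 0.2266 * D**2 - 0.0392 * D**3  # E/(1.5 mu eta0^2)
    B = 1.0 / np.sqrt(E_fac)              # from B = eta0 (3 mu / 2E)^(1/2)
    a3 = np.arcsin(B**3) / 3.0            # alpha / 3
    phi_min = 2.0 * np.arcsin(np.sin(a3) / B)
    phi_max = 2.0 * np.arcsin(np.sin(a3 + np.radians(120.0)) / B)
    print(f"D = {D_deg:4.1f} deg -> phi_min = {np.degrees(phi_min):5.1f} deg, "
          f"phi_max = {np.degrees(phi_max):5.1f} deg")
# For D = 29 deg this returns roughly [47, 76] deg, i.e. a libration of
# about 29 deg around the 60 deg Lagrange point, as it should.
```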
In the course of its librations, a particle with a typical Trojan libration amplitude, $`D=29^{\circ }`$, oscillates in semi-major axis over a range of $`\mathrm{\Delta }a\approx `$ 0.14 AU. We are not sensitive to the radial structure of the Trojan clouds, so we simply model the radial distribution as a Gaussian with a full width at half maximum of 0.66 AU, or a dispersion $`\sigma _r=0.24`$ AU. Our final model has the form: $$n=n_0\mathrm{exp}\left[-\frac{(r-r_0)^2}{2\sigma _r^2}-\frac{z^2}{2\sigma _z^2}-\frac{(\theta -\theta _0)^2}{2\sigma _\theta ^2}\right]$$ (8) where $`r`$, $`z`$, and $`\theta `$ are cylindrical coordinates in the plane of the orbit of Jupiter, and the parameters are: $`r_0=5.203`$ AU, $`\sigma _r=0.24`$ AU, $`\sigma _z=0.94`$ AU, $`\theta _0=59.5^{\circ }`$, and $`\sigma _\theta =9.7^{\circ }`$. We calculated the surface brightness in the same way as we calculated the surface brightness of the model Mars wake, using an emissivity $`E_{60\mu \mathrm{m}}=1`$ because we are not considering small grains. The shaded region at L5 in Figure 1 represents this model as viewed from above the ecliptic plane. In Figure 6, we compare the difference image A$`-`$B to a synthesized image of our model cloud. For this image, $`n_0`$ is $`3.4\times 10^{-8}\mathrm{AU}^{-1}`$, corresponding to an effective emitting surface area at 60 microns of $`3.3\times 10^{18}`$ cm<sup>2</sup>, or one 3-km diameter asteroid ground entirely into 10-micron diameter dust. Figure 7 compares the difference image A$`-`$B and the model image in a different way; it shows the region within $`\pm 10^{\circ }`$ of the ecliptic plane averaged in ecliptic latitude. The 1-$`\sigma `$ noise in the data in Figure 7 is 0.09 MJy ster<sup>-1</sup>. Based on this, we can place a rough 3-$`\sigma `$ upper limit on the effective surface area of the large dust grains at L5 of $`6\times 10^{17}\mathrm{cm}^2`$. ## 5 Conclusions The zodiacal cloud near the ecliptic plane is a complex tapestry of dynamical phenomena. We could not detect the Mars wake or Jupiter’s Trojan clouds among the asteroidal dust bands in the DIRBE maps, despite the efforts of the DIRBE team to subtract these bands from the maps. We would have detected the Mars wake if it had 18% of the overdensity of the Earth wake, based on our empirical model for the Earth wake. This upper limit illustrates the complexity of relating resonant structures in circumstellar dust disks to the properties of perturbing planets. For instance, we would have detected the Mars wake if the surface area of the dust in the wake scaled simply with the mass of the planet times the Poynting-Robertson time scale. The Trojan clouds, by our crude estimation, would have been a few orders of magnitude too faint to detect if the dust concentration in these clouds were at its mean level. However, a transient cloud created by a recent collision of Trojan asteroids might have been detectable. We measured that the total 60-micron flux from large (10–100 micron diameter) dust particles trapped at Jupiter’s L5 Lagrange point is less than $`30`$ kJy. We thank Antonin Bouchez, Eric Gaidos, Peter Goldreich, Renu Malhotra and Ingrid Mann for helpful discussions, and our referees for their thoughtful comments.
no-problem/0002/astro-ph0002212.html
ar5iv
text
# The porous atmosphere of $`\eta `$-Carinae ## 1 Introduction $`\eta `$-Carinae is probably one of the most remarkable stellar objects ever documented. About 150 years ago, the star began a 20 year long giant eruption during which it radiated a supernova-like energy of roughly $`3\times 10^{49}`$ ergs (Davidson & Humphreys 1997). Throughout the eruption it also shed some 1–2 $`M_{\odot }`$ of material carrying approximately $`6\times 10^{48}`$ ergs as kinetic energy (Davidson & Humphreys 1997), while expanding at a velocity of $`650`$ km/sec (Hillier & Allen 1992, Currie et al. 1996). $`\eta `$-Carinae can therefore serve as a good laboratory for the study of atmospheres under extreme luminosity conditions. At first glance, it appears that the star shed a large amount of material. Indeed, the inferred mass loss rate during the great eruption, $`0.1M_{\odot }`$/yr, is significantly larger than the mass loss rate inferred for the star today ($`\lesssim 10^{-3}M_{\odot }`$/yr, Davidson & Humphreys 1997 and references therein). However, considering that the luminosity during the great eruption is estimated to be significantly above the Eddington limit, we shall show that the star should have had a much higher mass loss rate. In fact, it should have lost more mass during the 20 year eruption than its total mass, giving rise to an obvious discrepancy. A review of our current knowledge of $`\eta `$ Car can be found in Davidson & Humphreys (1997). In section 2 we summarize how a wind solution for the star $`\eta `$ Car should be constructed. Since the luminosity is very high, the effects of convection must be taken into account. In section 3 we integrate the wind equations to show that no consistent solution for $`\eta `$ Car exists within the possible range of observed parameters. Section 4 is devoted to possible classical solutions to the discrepancy, showing that no such possibility exists. In section 5, we show that a porous atmosphere is a simple and viable solution to the wind discrepancy. ## 2 Solving for the Wind Since the mass of $`\eta `$-Car is estimated to be of order 100–120 $`M_{\odot }`$ (Davidson & Humphreys 1997), the average luminosity in the great eruption was clearly super-Eddington (of the order of 5 times the Eddington limit). That is to say, the radiative force upwards, assuming the smallest possible opacity (for ionized matter) given by Thomson scattering, was significantly larger than the gravitational pull downwards. Optically thin winds formally diverge at the Eddington limit (e.g., Kudritzki et al. 1989 and references therein). Consequently, a consistent wind solution requires an optically thick wind. We thus look for a wind in which the sonic point (which is the point at which the local speed of the outflow equals the speed of sound) is below the photosphere. Moreover, since the duration of the eruption is longer than the sound crossing time of the star by about a factor of 50, a stationary wind appears to be a good approximation. In practically all super-sonic wind theories which describe super-sonic outflows from an object at rest, a consistent stationary solution is obtained only when the net driving force of the wind (excluding the pressure gradient) vanishes at the sonic point<sup>1</sup><sup>1</sup>1 The exception is line driven winds, in which the force is explicitly a function of $`dv/dr`$; this is actually an approximation to the line transfer equations.
If we had written the proper radiation transfer equations for this case, which only implicitly depend on $`dv/dr`$, we would have recovered that the sonic point coincides with the point at which the total force vanishes (cf. Mihalas & Weibel Mihalas 1984, §107). Moreover, line driven winds are important only under optically thin conditions, while we describe the optically thick part of the wind. Thus, material experiencing a super-Eddington flux necessarily has to be above the sonic point. If most of the envelope carries a super-Eddington flux, then no consistent stationary wind solution can be obtained and, in fact, the object will evaporate on a dynamical time scale. In most systems, however, this need not be the case. For example, in very hot systems (e.g., hot neutron stars during strong X-ray bursts, Quinn & Paczynski 1985), the opacity in the deep layers is lowered owing to the reduced Klein-Nishina cross section for Compton scattering at high temperatures. Thus, the sonic point in these objects is found where the temperature is high enough to reduce the opacity to the point where the flux corresponds to the local Eddington limit. Another important effect, which should be taken into account, is convection. Deep inside the atmosphere, convection can carry a significant part (or almost all) of the energy flux, thus reducing the radiative pressure to a sub-Eddington value. In fact, as the radiative flux approaches the Eddington limit, convection generally arises and carries the lion’s share of the total energy flux (if it can) to keep the system at a sub-Eddington level (Joss et al. 1973). Although the total flux in the entire envelope (or almost all of it) can be equivalent to a super-Eddington flux, up to some depth below the photosphere convection carries most of the flux, so as to reduce the radiative flux alone to a sub-Eddington value. A consistent wind solution should therefore have its sonic point at the location where even the most efficient convection cannot carry enough flux any more. As we shall soon see, the problem in $`\eta `$-Car is that this point is relatively deep within the atmosphere, where the density is so high that the expected mass loss is significantly higher than the observed one. To see this in a robust way, we numerically integrated the wind equations starting from the photosphere inwards. The equations are those that describe optically thick spherically symmetric winds (Quinn & Paczynski 1985; Żytkow 1972; Kato & Hachisu 1994). The equations of mass conservation, momentum conservation and the temperature gradient are $$4\pi r^2\rho v=\dot{M}=\mathrm{const}$$ (1) $$v\frac{dv}{dr}+\frac{GM}{r^2}+\frac{1}{\rho }\frac{dP_g}{dr}-\frac{\chi \mathcal{L}_r}{4\pi r^2c}=0$$ (2) $$\frac{dT}{dr}=-\frac{3\chi \rho \mathcal{L}_r}{16\pi r^2caT^3}(1+\frac{2}{3\chi \rho r}),$$ (3) with standard notation. The parenthesized term in the last equation is a simple approximate interpolation that has the correct asymptotic limits for optical depths much larger and much smaller than unity (Quinn & Paczynski 1985). The last equation needed is the integrated form of the energy conservation equation. Unlike the aforementioned references, we specifically include advection by a maximally efficient convection.
Thus, the integrated form of the energy conservation equation becomes $$\mathcal{L}_r+\dot{M}\left(\frac{v^2}{2}+w-\frac{GM}{r}\right)+\mathcal{L}_{conv}\equiv \mathcal{L}_r-\frac{GM\dot{M}}{r}+\mathcal{L}_{adv}+\mathcal{L}_{conv}=\mathrm{\Lambda }_{tot}=\mathcal{L}_{obs}+\mathcal{L}_{kin,\mathrm{\infty }},$$ (4,5) where $`\mathrm{\Lambda }_{tot}`$, $`\dot{M}`$, $`\mathcal{L}_{obs}`$ and $`\mathcal{L}_{kin,\mathrm{\infty }}`$ are the total energy output of the star, the wind mass loss rate, the observed luminosity at infinity and the kinetic energy flux at infinity, while $`\mathcal{L}_r`$, $`v`$, $`w`$, $`\mathcal{L}_{conv}`$ and $`\mathcal{L}_{adv}`$ are, respectively, the local radiative luminosity, velocity, specific enthalpy, convective flux and advected flux (as internal and kinetic energies). The expression adopted for $`\mathcal{L}_{conv}`$ is $`4\pi r^2uv_s`$, where $`u`$ is the internal energy per unit volume and $`v_s`$ is the speed of sound. By no means can convection be more efficient than this expression, since highly dissipative shocks are unavoidable at higher speeds. It is likely that the maximally efficient convection is somewhat less efficient than this expression, but this would only aggravate the problem that we shall soon expose. Detailed calculations of the wind were carried out. The calculations include the latest version of the OPAL opacities (Iglesias & Rogers 1996). It is found that the total opacity below the photosphere has comparable contributions from Thomson scattering and absorption processes. This implies that the modified Eddington limit, in which the Thomson opacity is replaced by the local total opacity, is somewhat lower than the classical Eddington limit. Since $`T_{\mathrm{eff}}\approx 9000`$ K<sup>2</sup><sup>2</sup>2This is the typical observed effective temperature for LBVs during outbursts (Humphreys & Davidson 1994). If the temperature is higher than this value, the inferred bolometric magnitude of $`\eta `$ Car during the eruption would be more negative, increasing the Eddington factor. If the temperature is lower than $`7000`$ K, the opacity at the photosphere and outwards rises abruptly (Davidson 1987), thus reducing the modified Eddington limit. In both cases, the discrepancy would be aggravated., the average luminosity implies a photospheric radius of $`10^{14}`$ cm. Note that since the wind is optically thick, the exact definition of the photosphere is ambiguous. Nevertheless, different definitions do not change the results by more than 10–20%. Knowing that the observed mass loss rate is roughly $`0.1M_{\odot }/yr`$ (which gives the observed $`2M_{\odot }`$ of shed material in 20 years, Davidson & Humphreys 1997), a specified flow speed at the photosphere can be translated into a required density. We can therefore integrate the wind equations inwards. If a consistent wind solution can be obtained for some value of the imposed velocity at the photosphere (which has to be between $`v_s`$ and $`v_{\mathrm{\infty }}`$), then the integration inwards should reach a sonic point at which the total force on the gas vanishes. This will be attained if the convective and advective fluxes can carry a significant amount of the total flux, so as to reduce the residual radiative flux to a sub-Eddington value.
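To make the procedure concrete, the following is a minimal sketch of such an inward integration. It is not the calculation described above: the OPAL opacities are replaced by a constant Thomson value, the thermodynamics is a simple ideal-gas-plus-radiation form, and the parameter values (mass, total luminosity, mass loss rate, photospheric radius, velocity and temperature) are round illustrative numbers rather than fitted ones.

```python
# Minimal sketch of the inward wind integration (illustrative only).
# Simplifications not in the paper: constant Thomson opacity, ideal
# gas + radiation thermodynamics, and round parameter values.
import numpy as np

G, c, a_rad = 6.674e-8, 3.0e10, 7.566e-15      # cgs constants
k_B, m_H, mu = 1.381e-16, 1.673e-24, 0.6       # mu is an assumed value
chi = 0.34                                      # Thomson opacity [cm^2/g]
M = 120 * 1.989e33                              # stellar mass [g]
Mdot = 0.1 * 1.989e33 / 3.156e7                 # 0.1 M_sun/yr in [g/s]
Lam_tot = 5 * (4 * np.pi * G * M * c / chi)     # ~5x Eddington (assumed)

def rhs(r, y):
    """Right-hand sides (dv/dr, dT/dr) built from eqs. (1)-(3) and (5)."""
    v, T = y
    rho = Mdot / (4 * np.pi * r**2 * v)          # mass conservation, eq. (1)
    a2 = k_B * T / (mu * m_H)                    # isothermal sound speed^2
    w = 2.5 * a2 + 4 * a_rad * T**4 / (3 * rho)  # specific enthalpy
    u = 1.5 * rho * a2 + a_rad * T**4            # internal energy density
    L_conv = 4 * np.pi * r**2 * u * np.sqrt(a2)  # maximal convective flux
    L_r = Lam_tot + G * M * Mdot / r - Mdot * (v**2 / 2 + w) - L_conv
    dTdr = (-3 * chi * rho * L_r / (16 * np.pi * r**2 * c * a_rad * T**3)
            * (1 + 2 / (3 * chi * rho * r)))     # eq. (3)
    g_rad = chi * L_r / (4 * np.pi * r**2 * c)   # radiative acceleration
    # eq. (2) with dP_g/dr expanded via eq. (1); singular where v^2 = a2
    dvdr = (-G * M / r**2 + 2 * a2 / r - (k_B / (mu * m_H)) * dTdr
            + g_rad) / (v - a2 / v)
    return np.array([dvdr, dTdr])

r, y, dr = 1e14, np.array([2.0e7, 9000.0]), -1e10   # start at photosphere
while r > 1e12:                                      # simple RK4 stepper
    k1 = rhs(r, y); k2 = rhs(r + dr / 2, y + dr / 2 * k1)
    k3 = rhs(r + dr / 2, y + dr / 2 * k2); k4 = rhs(r + dr, y + dr * k3)
    y, r = y + dr / 6 * (k1 + 2 * k2 + 2 * k3 + k4), r + dr
    if y[0]**2 <= k_B * y[1] / (mu * m_H):           # crossed the sonic point
        print(f"sonic point near r = {r:.3e} cm")
        break
```

In the full calculation the same structure is retained, but the opacity comes from the OPAL tables and the imposed photospheric velocity is scanned over the allowed range between $`v_s`$ and $`v_{\mathrm{\infty }}`$.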
## 3 The Discrepancy We define the luminosity that needs to be carried by convection and advection in order to bring about the vanishing of the total local force as $`\mathcal{L}_{crit}`$. If enough energy is advected and convected then $`\mathcal{L}_r`$ will be reduced to the local modified Eddington flux: $$\mathcal{L}_{Edd,mod}=\frac{4\pi cGM}{\chi }$$ (6) with $`\chi `$ the local opacity, which can be larger than the Thomson opacity. Thus, from eq. (5), the critical advective+convective flux can be written as $$\mathcal{L}_{crit}=\mathrm{\Lambda }_{tot}+\frac{GM\dot{M}}{r}-\mathcal{L}_{Edd,mod}.$$ (7) Figure 1 shows the fraction $`\eta \equiv (\mathcal{L}_{adv}+\mathcal{L}_{conv})/\mathcal{L}_{crit}`$ at the sonic point. A consistent solution can be found only if $`\eta =1`$ at the sonic point. Inspection of the figure clearly shows that the space of possible observed values does not contain a viable and consistent solution. This is of course irrespective of whether a solution from the photosphere outward can or cannot be obtained. The discrepancy arises because a wind corresponding to the observed low mass loss rate necessarily has a sonic point that is not deep enough for either convection or advection to be an efficient means of transporting energy. This can be seen from the optical depth at which the sonic point is obtained. In all cases, $`1\lesssim \tau <300`$. However, convection is efficient only up to an optical depth of $`\tau \approx c/v_s\approx 300`$ for $`p_{rad}\approx p_{gas}`$ (Shaviv 2000b), or even deeper for larger radiation pressures (i.e., when close to the Eddington limit). ## 4 Unfeasible Solutions to the Discrepancy Can the discrepancy be resolved with a classical assumption? Since the discrepancy is rather large, assuming the wind to emerge from an angular fraction $`f`$ of the star does not relax the problem (it actually aggravates it, because more material will be blown away from the higher luminosity regions). Another possibility that fails is having a higher velocity at the photosphere than the one observed today for the shed material. This might be the case if the wind collides with previously ejected slow moving material. Even if such material did exist, the necessarily reduced mass loss rate inferred from the present day observed momentum aggravates the problem. The problem is not mitigated if we relax the assumption that the mass loss rate and the luminosity are constant in time throughout the eruption. If one wishes to solve the problem using magnetic fields, then a solution can be found only if the magnetic energy density at the photosphere is significantly larger (by several orders of magnitude) than the equipartition value with the gas pressure. This of course seems unlikely. Another option is to have the distance estimate to $`\eta `$-Car be three times smaller than the accepted 2300 pc. A shorter distance would place $`\eta `$-Car outside the cluster Tr 16 of massive stars with which it is observed, and leave it instead roaming the space between Galactic spiral arms. Considering the short lifetime of the star, just a few million years, this possibility appears very unlikely. The problem could also be solved if the mass of the star corresponded to a sub-Eddington luminosity. This proposed solution requires $`\eta `$-Car to be at least a $`1000M_{\odot }`$ star. However, this suggestion is at variance with much lower estimates (see for instance Davidson & Humphreys 1997 and references therein). Nevertheless, having such a massive star is in fact not completely unrealistic and would have far reaching consequences if found to be true. ## 5 A Viable Solution to the Discrepancy As the title suggests, there is a clear solution to the discrepancy. As the results show, the sonic point appears to lie between optical depths of $`1`$ and $`300`$. The exact value cannot be obtained, since it requires the integration outward from the photosphere, which, owing to the relatively inaccurate effective temperature and therefore opacity, yields a wide range of results.
If the mean radiative force between the point where $`\eta =1`$ and the optical depth found above is smaller than classically estimated, then a solution to the discrepancy can be found. Such a reduction in the mean radiative force is a natural result if the atmosphere is inhomogeneous. Shaviv (1998) has shown that in an inhomogeneous atmosphere, the effective opacity used to calculate the average force is reduced relative to the effective opacity used for the radiation transfer in a homogeneous medium. The effective opacity used for the average force should be a volume flux weighted average of the opacity per unit volume<sup>3</sup><sup>3</sup>3When the flux is frequency dependent, a similar average should be taken in order to find the radiative force. However, one then takes a flux weighted mean over frequency space. $`\chi _v\equiv \chi \rho `$. Namely, $$\chi _{\mathrm{eff}}=\frac{\langle \chi \rho F\rangle }{\langle F\rangle \langle \rho \rangle }.$$ (8) The effect is universal and arises in inhomogeneous systems that conduct heat or electricity. Extensive discussions exist in the literature under a different terminology (Isichenko 1992).
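The direction of the effect can be seen in a toy two-phase example; the numbers below are arbitrary and only illustrate that when the flux concentrates in the rarefied phase, the flux-weighted opacity of eq. (8) drops below the microscopic value.

```python
# Toy illustration of eq. (8): in a two-phase medium where the flux F
# preferentially passes through the rarefied phase, the effective
# opacity is smaller than the microscopic one.  Numbers are arbitrary.
import numpy as np

chi = 0.34                        # microscopic opacity [cm^2/g], uniform
rho = np.array([0.2, 1.8])        # densities of the two phases (mean = 1)
F = np.array([1.7, 0.3])          # fluxes: most flux in the rarefied phase
vol = np.array([0.5, 0.5])        # equal volume fractions

mean = lambda q: np.sum(vol * q)  # volume average
chi_eff = mean(chi * rho * F) / (mean(F) * mean(rho))
print(chi_eff / chi)              # -> 0.44, average force reduced ~2x
```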
The only requirement is therefore that, close to the Eddington limit, the star develops inhomogeneities. The transformation from a homogeneous to an inhomogeneous atmosphere at luminosities close to but below the Eddington luminosity was recently found to take place generically, even in Thomson scattering atmospheres (Shaviv 1999; Spiegel & Tao 1999; Shaviv 2000a). It was found that two different types of instabilities arise naturally when the luminosity approaches the Eddington limit (Shaviv 2000a). One instability is a phase transition into a stationary nonlinear pattern of “fingers” that facilitate the escape of the radiation. The second type of instability allows the growth of a propagating wave, from which one expects a propagating nonlinear pattern to form. The two possibilities are summarized in figure 2. Both instabilities bring about a reduction of the average radiative force on the matter and a significant reduction of the mass loss rate, since the sonic surface can sit near (or not much below) the photosphere. In both cases, the nonlinear pattern is necessarily expected to form in the region between the photosphere and the radius $`r_{conv}`$ at which $`\eta =1`$, in other words, where $`\mathcal{L}_{conv}+\mathcal{L}_{adv}`$ is large enough to have $`\mathcal{L}_r\lesssim \mathcal{L}_{Edd,mod}`$. When the pattern is stationary, the rarefied regions have a larger than Eddington flux, and the sonic surface in these regions is near $`r_{conv}`$. On the other hand, if the pattern is propagating, the flux may locally be larger than the Eddington limit, but the time average of the force on a mass element is less than Eddington. Since the instability does not occur above the photosphere, the flow there should be homogeneous and hence super-Eddington and supersonic. Further analysis of the instabilities is needed to know which one will dominate, though it is more likely to be the phase transition, since it is dynamically more important. ## 6 Summary To summarize, the super-Eddington luminosity emitted by $`\eta `$-Car should have generated a much thicker wind, with a sonic point placed significantly deeper than what can be directly inferred from the observations. A solution which lives in harmony with observations and theoretical modeling is a porous atmosphere, which allows more radiation to escape while exerting a smaller average force. It also means that the Eddington limit is not as destructive as one would a priori think it must be, even in a globally spherically symmetric case. Namely, all astrophysical analyses that employ the Eddington limit as a strict limit should be reconsidered carefully, even if they involve only unmagnetized Thomson scattering material. If $`\eta `$-Carinae could have been super-Eddington for such a long duration without “evaporating”, other systems may display a similar behavior.
# Pulsation in two Herbig Ae stars: HD 35929 and V351 Ori (Based on observations carried out at the European Southern Observatory, La Silla, Chile, under proposals 62-I-0533 and 63-I-0053.) ## 1 Introduction Young pre–main-sequence (PMS) stars of mass greater than about 1.5 $`M_{\odot }`$ cross the region of pulsational instability during their contraction toward the main-sequence. The location of the instability strip in the H-R diagram has been recently identified by means of nonlinear models for the first three radial modes (Marconi & Palla 1998). The time spent by intermediate-mass stars within the boundaries of the strip represents a small fraction of the Kelvin-Helmholtz time scale, varying between about 10% for a star of 1.5 $`M_{\odot }`$ and about 5% for a 4 $`M_{\odot }`$ star. Despite the brevity of this phase, however, a number of known Herbig Ae stars have the appropriate combination of luminosity and effective temperature to become pulsationally unstable. In our previous study, we suggested looking for $`\delta `$ Scuti-type photometric variations with periods of minutes to several hours and amplitudes less than a few tenths of a magnitude in a sample of Herbig Ae stars whose position in the H-R diagram coincides with the instability strip. The identification of a few PMS objects pulsating as $`\delta `$ Scuti stars (Breger 1972), the prototype being the star HR 5999 (Kurtz & Marang 1995), has provided some support to the connection between variability and stellar pulsation. Of particular interest is the Ae star HD 104237, which shows both short- and long-term velocity changes of spectral lines (Donati et al. 1997). These variations indicate that the star is undergoing radial pulsations with a period of approximately 40 minutes and an amplitude of about 1 km s<sup>-1</sup> (see also Böhm et al. 2000, in preparation). Interestingly, HD 104237 is the first intermediate mass PMS star with a measured magnetic field (Donati et al. 1997). A few new PMS candidate pulsators have been recently identified by Pigulski et al. (2000). Stimulated by these initial results, we have started a photometric investigation of a sample of seven Herbig Ae stars with spectral types in the range A5 to F5, located within or near the boundaries of the instability strip. For some of them, large time scale variations have been observed during the long term monitoring program of variable stars conducted at ESO (LTPV project: Sterken et al. 1995 and references therein). However, no information is available on their variability on time scales shorter than 2 or 3 days. The main goal of our study is to detect and characterize the pulsation properties of young stars. This way, we can improve our knowledge of their internal structure and obtain unique constraints on the theoretical predictions of the models. Ultimately, the analysis of the pulsation characteristics can yield an indirect estimate of the stellar mass. This represents a powerful method for stars that are not part of the restricted group of spectroscopic binary systems. In this Letter, we report the discovery of two additional Herbig Ae stars which show evidence for variability of the $`\delta `$ Scuti type. ## 2 Selection of the sample, observations and data reduction The selection of the stars was based on their spectral type. As an initial choice, we have adopted values published in the literature. To have an independent check on the effective temperature of the selected stars, we have used the Strömgren photometry provided by the ESO catalogues of the LTPV project.
Then, we have placed the stars in the dereddened color-color \[m1\]–\[c1\] diagram (see Strömgren 1966), and compared their position with the theoretical colors from model atmospheres (Kurucz 1992). As a result, we derived a sample of seven stars with spectral types between F5 and A5: V346 Ori, V351 Ori, BF Ori, BN Ori, NX Pup, HD 35929, and AK Sco. The photometric observations were performed in two runs with different telescopes at ESO, La Silla. In the first period (Dec. 23–29, 1998), we used the 0.5k$`\times `$0.5k CCD (Tektronix TK512CB) attached to the 0.9m DUTCH telescope. Five of the seven nights were photometric. In the second period (June 30 to July 4, 1999) we used the 2k$`\times `$2k CCD (LORAL/LESSER) attached to the 1.5m DANISH telescope. Unfortunately, weather conditions were excellent for accurate photometric observations only during one night. Due to their brightness, all of the sources are suitable for observations in the Strömgren $`uvby`$ system. Two comparison stars were used for each candidate variable; whenever possible, we observed the same comparison stars as in the LTPV program. Data reduction and analysis to derive instrumental aperture magnitudes were performed following the standard procedures under the MIDAS environment. The typical intrinsic instrumental photometric error in each Strömgren filter is of the order of a few thousandths of a magnitude. ## 3 Results: $`\delta `$ Scuti candidates The stars V346 Ori, V351 Ori, BF Ori, HD 35929, and AK Sco have been found to show photometric variability. For V351 Ori and HD 35929, we could derive reliable pulsational periods that fall in the expected range of $`\delta `$ Scuti-type variability<sup>1</sup><sup>1</sup>1As for V346 Ori and AK Sco, the variation seems $`\delta `$ Scuti-like, with $`u`$ amplitudes of about 0.035 and 0.08 mag respectively, but no period could be derived due to the poor phase coverage. As for BF Ori, its variation is clear ($`\mathrm{\Delta }u\approx 0.11`$ mag), but seems rather monotonic. In Table 1 we report their main properties. HD 35929 was observed during two nights (Dec. 28 and 29, 1998). The stars HD 37399 and HD 37210 were used for comparison. Unfortunately, HD 37210 turned out to be variable, with a maximum amplitude in the $`u`$ filter of about 0.03 mag and a rather monotonic variation. Therefore, the differential light curve of HD 35929 was derived using the data points relative to HD 37399 only. The instrumental phased light curves in $`\mathrm{\Delta }u`$ and $`\mathrm{\Delta }b`$ are shown in the two upper panels of Fig. 1. The frequency spectrum of HD 35929 obtained from the $`u`$-filter is shown in the lower panel of Fig. 1. The frequency spectrum peaks at 5.1 d<sup>-1</sup>, i.e. a period of 0.196 d. We note that no significant differences in the calculated period are found using light curves in other filters. V351 Ori was observed during the nights of Dec. 23 and 25, 1998. HD 38155 and HD 35298 were used as comparison stars. The latter turned out to be a variable star, with a maximum amplitude in $`u`$ of about 0.04 mag and a quasi-periodic behaviour. Thus, we use only HD 38155 for comparison. The differential magnitudes in each filter ($`\mathrm{\Delta }u,\mathrm{\Delta }v,\mathrm{\Delta }b,\mathrm{\Delta }y`$, computed as variable minus comparison) are shown in the upper panels of Fig. 2. The light curve in the $`u`$-band, where the largest amplitude is observed, was used to determine the observed period of V351 Ori.
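To illustrate the kind of period search involved, the sketch below computes a discrete Fourier power spectrum of a differential light curve; the file name and column layout are invented for the example, and this simple periodogram merely stands in for the actual analysis performed on the data.

```python
# Sketch of a period search on a differential light curve.  The data
# file and its columns are hypothetical; the classical power spectrum
# below stands in for whatever period-search method was actually used.
import numpy as np

t, dmag = np.loadtxt("v351ori_du.dat", unpack=True)  # HJD, delta-u [mag]
dmag = dmag - dmag.mean()

freqs = np.linspace(0.5, 40.0, 20000)     # trial frequencies [1/day]
power = np.empty_like(freqs)
for i, f in enumerate(freqs):             # discrete Fourier power
    phase = 2 * np.pi * f * t
    power[i] = (np.sum(dmag * np.cos(phase))**2 +
                np.sum(dmag * np.sin(phase))**2) / len(t)

f_best = freqs[np.argmax(power)]
print(f"best frequency: {f_best:.2f} c/d, period: {1/f_best:.4f} d")
```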
The frequency spectrum of V351 Ori obtained from the $`u`$-filter is shown in the lower panel of Fig. 2. The main maximum in this spectrum corresponds to a period of 0.058 d. Again, no significant differences in the derived periods occur when using light curves in other filters. ## 4 Evolutionary state of HD 35929 and V351 Ori Although more systematic observations are needed to define the precise pulsational periods of HD 35929 and V351 Ori, the available data already provide some preliminary constraints on their position in the H-R diagram, stellar mass and evolutionary state. As remarked by the referee, the discussion presented in this section is useful as an illustration of how well-defined periods for these stars could indeed tightly constrain their evolutionary states. Fig. 3 shows the location of HD 35929 in the H-R diagram. The dotted box accounts for the uncertainty in the spectral type (A5 to F0: Malfait et al. 1998, Miroshnichenko et al. 1997, van den Ancker et al. 1998) and distance ($`d=`$360 to 430 pc: van den Ancker et al. 1998). The instability strip for the first three radial modes, as predicted by Marconi & Palla (1998), is also shown, together with the PMS evolutionary tracks computed by Palla & Stahler (1993) and the post-MS evolutionary tracks for 2.5 and 3.0 $`M_{\odot }`$ of Castellani et al. (1999). The two circles indicate the best combinations of the stellar parameters ($`M`$, $`L`$, $`T_{\mathrm{eff}}`$) that yield a period equal to the observed one, $`P=0.196\pm 0.005`$ d. These values are listed in Table 2. The two solutions indicate a mass of 3.4 or 3.8 $`M_{\odot }`$, pulsating in the first overtone (FO) and second overtone (SO) respectively: in both cases, HD 35929 can be considered a PMS pulsator, as expected. A combination of parameters for a post-MS stellar mass can also reproduce the pulsational period observed in HD 35929. In this case, the best choice would be a post-MS model of a 2.7 $`M_{\odot }`$ star with $`L=83L_{\odot }`$ and $`T_{\mathrm{eff}}=6900`$ K, pulsating in the SO mode. The location of this solution is quite close to the lower circle shown in Fig. 3. Although only a few specific studies exist on HD 35929, some evidence supports the fact that HD 35929 is a young star associated with the Ori OB-1c association. For example, Malfait et al. (1998) have discussed the infrared excess observed toward this star, whereas Miroshnichenko et al. (1997) have suggested that the star might be in a transition phase between a PMS Herbig Ae star and a $`\beta `$ Pictoris-type object. We also note that the pulsational character (period and H-R diagram location) of HD 35929 is similar in many respects to that of the well known Herbig Ae star HR 5999. From these considerations, we favor the conclusion that HD 35929 is a PMS star with a mass in the narrow range 3.4–3.8 $`M_{\odot }`$, pulsating in the FO or SO. Finally, we note that if the Strömgren indices \[c1\] and \[m1\] measured for this star are taken into account, together with published values for the $`H\beta `$ index, one derives an effective temperature $`\mathrm{log}T_{\mathrm{eff}}=3.84`$ and a luminosity varying between 26 and 36 $`L_{\odot }`$. This means that, according to the Strömgren photometry, HD 35929 should be located on a 2.5 $`M_{\odot }`$ evolutionary track, and the period based on present data would be too long even for a pulsation in the fundamental mode.
A possible explanation for this inconsistency could be that HD 35929 is also a rapid rotator (about 150 km s<sup>-1</sup>), so that the assumption of radial pulsation may not be completely correct. Future observations and numerical simulations are needed in order to properly address this problem. Fig. 4 shows the location of V351 Ori in the H-R diagram. Here, the uncertainty on the luminosity is larger than for HD 35929, because of the distance ambiguity. The lower value corresponds to the minimum distance of 260 pc given in the Hipparcos catalogue (van den Ancker et al. 1998). The upper limit (inverted triangle) assumes that V351 Ori is located in the Orion molecular cloud at a distance of 460 pc. The dashed box corresponds to an uncertainty of $`\pm 0.01`$ dex in $`\mathrm{log}T_{\mathrm{eff}}`$. Finally, the filled circle marks the position estimated by van den Ancker et al. (1996) with the associated error bar. As already pointed out, present data for V351 Ori suggest a pulsation period of $`0.058\pm 0.002`$ d. Using the constraints provided by this preliminary period and by the topology of the instability strip, we have computed linear nonadiabatic models to find the best set of stellar parameters that reproduce the pulsation of V351 Ori. The results are shown in Fig. 4 and the stellar parameters are given in Table 2. The solutions yield a stellar mass of 1.85 $`M_{\odot }`$ or 2.15 $`M_{\odot }`$, respectively, pulsating in the SO mode (open square in Fig. 4) or the third overtone (TO) mode (open circle in Fig. 4). For lower modes, the luminosity of the model would be lower than the estimated lower limit for V351 Ori, whereas higher modes are probably excluded by the closeness of the observational box to the second overtone blue boundary, so we did not consider their occurrence. These solutions would tend to favor a distance to V351 Ori smaller than that of the young stellar population of the Orion complex. Recently, Koval'chuk & Pugach (1998) have argued, on the basis of several peculiar photospheric properties, that V351 Ori is in fact more evolved than previously thought, and conclude that this star does not belong to the group of Herbig stars. From Fig. 4, we see that the post-MS track of a 2 $`M_{\odot }`$ star intersects the corresponding PMS track at about the position of the best pulsational models. The degeneracy of the tracks does not allow us to use the pulsational analysis to discriminate between the two evolutionary phases. However, this uncertainty would disappear if the distance of V351 Ori were the same as that of the Orion population stars. Then, the difference between the pre- and post-MS tracks would be large enough that the period analysis would rule out one of the two solutions. Future studies of V351 Ori should address this important aspect. As in the case of HD 35929, we used the measured Strömgren indices \[c1\] and \[m1\] to provide an independent evaluation of the location of this star in the HR diagram. The results, $`\mathrm{log}T_{\mathrm{eff}}=3.89`$ and $`L\approx 30L_{\odot }`$, now suggest a stellar mass of $`2.25M_{\odot }`$ for this pulsator, in which case the observed period would be consistent with an oscillation in the TO pulsation mode. In conclusion, the present observations yield compelling evidence for the occurrence of $`\delta `$ Scuti-type pulsation in two Herbig Ae stars, HD 35929 and V351 Ori. The comparison with evolutionary and pulsational models provides independent, even if quite preliminary, constraints on the mass and evolutionary state of these stars. ###### Acknowledgements. We thank the referee, J.
Matthews, for his very helpful report. We also thank Dr. R. Silvotti for useful discussions. This research has made use of the Simbad database, operated by CDS, Strasbourg, France.
# Depinning and dynamic phases in driven three-dimensional vortex lattices in anisotropic superconductors ## Abstract We use three-dimensional molecular dynamics simulations of magnetically interacting pancake vortices to study the dynamic phases of vortex lattices in highly anisotropic materials such as BSCCO. Our model treats the magnetic interactions of the pancakes exactly, with long-range logarithmic interactions both within and between planes. The pancake vortices decouple at low drives and show two-dimensional plastic flow. The vortex lattice both recouples and reorders as the driving current is increased, eventually forming a recoupled crystalline-like state at high drives. We construct a phase diagram as a function of interlayer coupling and show the relationship between the recoupling transition and the single-layer reordering transitions. In highly anisotropic superconductors such as BSCCO, the vortex lattice is composed of individual pancake vortices that may be either coupled or decoupled between layers, depending on such factors as the material stoichiometry or the magnitude and angle of the applied magnetic field. Of particular interest is the possible relationship between a coupling/decoupling transition and the widely studied second peak or fishtail effect. As a function of interlayer coupling strength $`s`$, there are two limits of vortex behavior in a system containing pointlike disorder. For zero interlayer coupling, $`s=0`$, each plane behaves as an independent two-dimensional (2D) system. For infinite interlayer coupling, $`s=\mathrm{\infty }`$, the vortices form perfectly straight three-dimensional (3D) lines, and all of the planes move in unison. At finite coupling strength, $`0<s<\mathrm{\infty }`$, a transition between these two types of behavior should occur as a function of coupling strength, but it is unclear whether this transition is sharp or if an intermediate state of the lattice exists. Furthermore, it is known that 2D systems with pointlike pinning can exhibit dynamic reordering under the influence of an applied driving current, passing from a liquid-like state at zero drive to a recrystallized state at high current. Thus, in a 3D system, a dynamically driven recoupling transition could be expected, but it is unclear where this transition falls in relation to the 2D reordering transitions already seen. To study the coupling transitions, we have developed a simulation containing the correct magnetic interactions between pancakes. This interaction is long range both in and between planes, and is treated according to Ref. . The overdamped equation of motion (at $`T=0`$) for vortex $`i`$ is given by $`𝐟_i=-\sum _{j=1}^{N_v}\nabla U(\rho _{i,j},z_{i,j})+𝐟_i^{vp}+𝐟_d=𝐯_i`$, where $`N_v`$ is the number of vortices, and $`\rho `$ and $`z`$ are the in-plane and inter-plane separations between pancakes in cylindrical coordinates. The magnetic energy between pancakes is $$U(\rho _{i,j},0)=2dϵ_0\left(\left(1-\frac{d}{2\lambda }\right)\mathrm{ln}\frac{R}{\rho }+\frac{d}{2\lambda }E_1(\rho )\right)$$ $$U(\rho _{i,j},z)=-s\frac{d^2ϵ_0}{\lambda }\left(\mathrm{exp}(-z/\lambda )\mathrm{ln}\frac{R}{\rho }-E_1(R)\right)$$ where $`R=\sqrt{z^2+\rho ^2}`$, $`E_1(\rho )=\int _\rho ^{\mathrm{\infty }}\mathrm{exp}(-\rho ^{}/\lambda )\,d\rho ^{}/\rho ^{}`$ and $`ϵ_0=\mathrm{\Phi }_0^2/(4\pi \lambda )^2`$. The pointlike pins are randomly distributed in each layer and modeled by parabolic traps. We vary the relative strength of the interlayer coupling using the prefactor $`s`$.
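The structure of such a simulation can be sketched as follows. This is a deliberately stripped-down illustration: the true long-range Clem-model forces derived from $`U`$, the pinning force $`𝐟^{vp}`$, and the driving protocol are all replaced by schematic stand-ins, and pancakes with equal index are assumed to form a stack.

```python
# Highly simplified sketch of one overdamped MD step for pancake
# vortices.  The Clem-model forces, the long-range sums, and the
# parabolic pinning term f^vp are replaced by schematic short-range
# stand-ins; only the structure of the update is meant to be accurate.
import numpy as np

L, n_layers, n_vort, s, f_d, dt = 16.0, 4, 89, 1.0, 0.5, 0.01
rng = np.random.default_rng(0)
pos = rng.uniform(0, L, size=(n_layers, n_vort, 2))  # (layer, vortex, xy)

def pbc(d):
    """Minimum-image convention for a square L x L cell."""
    return d - L * np.round(d / L)

def step(pos):
    f = np.zeros_like(pos)
    for lay in range(n_layers):
        d = pbc(pos[lay, :, None, :] - pos[lay, None, :, :])
        r2 = (d**2).sum(-1) + 1e-9
        np.fill_diagonal(r2, np.inf)
        f[lay] += (d / r2[..., None]).sum(1)          # ~1/rho repulsion
        for nb in (lay - 1, lay + 1):                 # interlayer coupling:
            if 0 <= nb < n_layers:                    # pull toward the
                dz = pbc(pos[nb] - pos[lay])          # same-index pancake
                f[lay] += s * dz * np.exp(-np.hypot(*dz.T)[:, None])
    f[..., 0] += f_d                                  # drive along x
    return (pos + dt * f) % L                         # overdamped update

for _ in range(1000):
    pos = step(pos)
```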
We have simulated a $`16\lambda \times 16\lambda `$ system containing 89 vortices and 4 layers, with a total of 356 pancake vortices. Further work on systems containing up to 16 layers will be reported elsewhere. In Fig. 1(a) we present a phase diagram as a function of interlayer coupling strength $`s`$ and driving force $`f_d`$. At zero drive, we find a recoupling transition at a coupling strength of $`s>4.5`$. In samples with $`s\ge 5`$, the pancakes remain coupled into lines at all drives and show the same transitions seen in previous work, exhibiting plastic flow of stiff lines above depinning, and reordering into first a smectic state and then a recrystallized state of stiff lines at higher drives. For samples with weaker interlayer coupling, $`s<5`$, the vortex lattice is broken into decoupled planes at zero drive. Upon application of a driving current, the samples exhibit 2D plastic flow in which each layer moves independently of the others. Once the individual layers reach the driving force at which a transition to a smectic state occurs, the vortices simultaneously form the smectic state and recouple, as can be seen from the measure shown in Fig. 1(c) of the $`z`$-axis correlation $`C_z=1-\langle (𝐫_{i,L}-𝐫_{i,L+1})\mathrm{\Theta }(a_0/2-|𝐫_{i,L}-𝐫_{i,L+1}|)\rangle `$, where $`a_0`$ is the vortex lattice constant. The dynamic recoupling transition line follows the smectic transition line down to $`s=2`$ and is associated with a peak in the $`dV/dI`$ curve seen in Fig. 1(b). Both the static and dynamic transition lines between decoupled 2D and recoupled 3D behavior are sharp. As a function of the number of layers, we observe the same behavior, but the depinning current in the 3D stiff state drops with the number of layers. The depinning current in the 2D decoupled phase is not affected, since in this case the individual planes behave as isolated entities. The recoupling transition sharpens as the number of layers is increased. We acknowledge helpful discussions with L. N. Bulaevskii, A. Kolton, C. Reichhardt, R. T. Scalettar, and G. T. Zimányi. This work was supported by CLC and CULAR (LANL/UC) and by the Director, Office of Adv. Scientific Comp. Res., Div. of Math., Information, and Comp. Sciences, U.S. DoE contract DE-AC03-76SF00098.
# Phase synchronization in coupled nonidentical excitable systems and array enhanced coherence resonance ## Abstract We study the dynamics of a lattice of coupled nonidentical FitzHugh-Nagumo systems subject to independent external noise. It is shown that these stochastic oscillators can achieve global synchronization behavior without an external signal. With the increase of the noise intensity, the system exhibits coherence resonance behavior. Coupling can greatly enhance the noise-induced coherence in the system. The study of coupled oscillators is one of the fundamental problems with applications in various fields. Mutual synchronization of the oscillators is of great interest and importance among the collective dynamics of coupled oscillators. The notion of synchronization has been extended to include a variety of phenomena in the context of interacting chaotic oscillators, such as complete synchronization, generalized synchronization, phase synchronization and lag synchronization. Synchronization phenomena have also been studied in stochastic systems. Stochastic resonance can be understood from the viewpoint of frequency locking and phase synchronization of the noise-induced motion to the external signal. Due to this synchronization, stochastic bistable elements subject to the same periodic signal, when appropriately coupled, can display global synchronization to the external periodic signal. This synchronization has the effect of enhancing the stochastic resonance in the array. Noise-induced coherent motion has been a topic of great interest recently. For a system at a Hopf bifurcation point, an optimal amount of noise can induce the most coherent motion in the system. More generally, noise-induced coherent motion has been demonstrated in a variety of excitable systems and systems with delay. Although described by different terms, such as stochastic resonance without external periodic force, autonomous stochastic resonance, or coherence resonance, a common feature of this type of system is the increased coherence of the motion and the resonant-like behavior of the coherence, induced purely by noise without an external signal. An interesting question about this type of system is how these stochastic elements behave when coupled. Do elements subjected to independent noise display synchronization, and can the synchronization enhance the coherence of the system's motion? A recent study demonstrated noise-induced global oscillation and resonant behavior in a subexcitable medium. In a related study, it was shown that two interacting coherence oscillators can be synchronized. However, whether the coherence of the motion can be enhanced by the interaction between the elements is not clear. This paper studies these problems with the following $`N`$ coupled nonidentical FitzHugh-Nagumo (FHN) neurons, a simple but representative model of excitable systems and nerve pulses, $$ϵ\dot{x_i}=x_i-\frac{x_i^3}{3}-y_i+g(x_{i+1}+x_{i-1}-2x_i),$$ (1) $$\dot{y_i}=x_i+a_i+D\xi _i(t),$$ (2) where $`a_i`$ is a parameter of the $`i`$th element. For a single FHN model, if $`|a|>1`$ the system has only a stable fixed point, while for $`|a|<1`$ it has a limit cycle. The system with fixed point dynamics ($`|a|`$ slightly larger than one) is excitable, because it will return to the fixed point only after a large excursion (“near limit cycle”) when perturbed away from it. To make the study more general, we suppose $`a_i`$ is not the same for all elements, but has a uniform distribution $`a_i\in (1,1.1)`$. This implementation of the model is physically significant, because many physical systems are diffusively coupled, and nonidentity is more natural in physical situations. With nonidentical $`a_i`$, the uncoupled elements will have different responses to the same level of noise $`\xi _i(t)`$, e.g., different average firing frequencies. The Gaussian noise is uncorrelated in different elements, i.e., $`\langle \xi _i(t)\xi _j(t^{})\rangle =\delta _{ij}\delta (t-t^{})`$. Periodic boundary conditions are employed in our studies. The time scale parameter is fixed at $`ϵ=0.01`$.
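A minimal integration of Eqs. (1)-(2) could look like the sketch below; the Euler-Maruyama scheme, the time step, and the initial conditions are choices made for this illustration and are not specified by the model itself.

```python
# Minimal Euler-Maruyama sketch for the coupled FHN lattice of
# Eqs. (1)-(2).  The scheme, time step and initial conditions are
# choices made for this illustration only.
import numpy as np

N, eps, g, D, dt = 100, 0.01, 0.1, 0.1, 1e-4
rng = np.random.default_rng(1)
a = rng.uniform(1.0, 1.1, N)            # nonidentical parameters a_i
x = np.full(N, -1.0)                    # start near the fixed points
y = x - x**3 / 3

for step in range(200000):
    lap = np.roll(x, 1) + np.roll(x, -1) - 2 * x   # periodic boundaries
    dx = (x - x**3 / 3 - y + g * lap) / eps
    dy = x + a
    x = x + dt * dx
    y = y + dt * dy + D * np.sqrt(dt) * rng.standard_normal(N)
```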
To characterize synchronization behavior in the lattice of nonidentical excitable systems, we introduce the phase of the elements $$\varphi _i(t)=2\pi \frac{t-\tau _k}{\tau _{k+1}-\tau _k}+2\pi k,$$ (3) where $`\tau _k`$ is the time of the $`k`$th firing of the element, defined in simulations by threshold crossing of $`x_i(t)`$ at $`x=1.0`$. The quantity $$s_i=\mathrm{sin}^2\left(\frac{\varphi _i-\varphi _{i+1}}{2}\right)$$ (4) measures the phase synchronization of neighbouring elements. A spatiotemporal average of $`s_i`$, i.e., $$S=\underset{T\to \mathrm{\infty }}{lim}\frac{1}{T}\int _0^T\left(\frac{1}{N}\sum _{i=1}^{N}s_i\right)dt$$ (5) gives a measure of the degree of phase synchronization in the coupled system. For completely unsynchronized motion $`S\approx 0.5`$, while for a globally synchronized system $`S\approx 0`$. To measure the temporal coherence of the noise-induced motion, we examine the distribution of the pulse duration $`T_k=\tau _{k+1}-\tau _k`$. For a single element subject to noise, the distribution $`P(T)`$ has a peak at a certain value of $`T_k`$ and an exponential tail at large values. A measure of the sharpness of the distribution, for example, $$R=\langle T_k\rangle /\sqrt{\text{Var}(T_k)},$$ (6) which can be viewed as a signal-to-noise ratio, provides an indication of the coherence of the firing events. Biologically, this quantity is of importance because it is related to the timing precision of information processing in neural systems. For a single element, it has been shown that $`R`$ possesses an optimal value at a certain level of noise. Here we adopt the same measure of coherence, with the distribution $`P(T)`$ constructed from the pulse durations of all $`N`$ elements during a long enough period of time.
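Given the recorded firing times $`\tau _k`$ of each element, the measures defined in Eqs. (3)-(6) could be evaluated as in the following sketch; `spike_times` is assumed to be a list of arrays of threshold-crossing times collected during the simulation.

```python
# Sketch of the coherence and synchronization measures, Eqs. (3)-(6).
# `spike_times[i]` is assumed to hold the upward crossings of x_i
# through x = 1.0 recorded during the simulation.
import numpy as np

def R_measure(spike_times):
    """R = <T_k>/sqrt(Var(T_k)) over the pulse durations of all elements."""
    T = np.concatenate([np.diff(tau) for tau in spike_times])
    return T.mean() / T.std()

def phase(tau, t):
    """Linearly interpolated phase, Eq. (3); t must lie within tau's span."""
    k = np.searchsorted(tau, t) - 1          # firing interval containing t
    return 2*np.pi*(t - tau[k])/(tau[k+1] - tau[k]) + 2*np.pi*k

def S_measure(spike_times, t_grid):
    """Time and lattice average of sin^2((phi_i - phi_{i+1})/2), Eq. (5)."""
    N = len(spike_times)
    phi = np.array([[phase(spike_times[i], t) for t in t_grid]
                    for i in range(N)])
    s = np.sin((phi - np.roll(phi, -1, axis=0)) / 2) ** 2  # periodic lattice
    return s.mean()
```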
Similarly, for a fixed noise level $`D`$, $`R`$ increases with increasing $`g`$ until it reaches an optimal value; after that, it decrease again. From Figure 1, one can observe several dynamical regimes in the systems. For very weak coupling ($`g<10^2`$), the firing of the elements are essentially independent, because a noise-induced firing of an element cannot excite its neighbouring lattices. Due to the independent firing, the phase difference has a uniform random distribution on $`(0,2\pi )`$, resulting $`S0.5`$. In this region, with the increase of $`D`$, the temporal behavior of the system display coherence resonance similar to that observed in a single element. The enhancement of the coherence by the noise is not very pronounced. A maximal $`R4.5`$ is found around $`D10^1`$. A typical spatiotemporal pattern of $`x_i`$ and $`s_i`$ for weakly coupled elements subject to relatively weak noise is shown in Fig. 2(a). Both the spatial and temporal behaviors are quite irregular. Each element has its individual average firing frequency due to the nonidentity (smaller frequency for larger $`a_i`$). With the increase of the coupling strength, the system becomes sensitive to weak noise because the firing events induced by noise now become the source of excitation of the neighbouring elements. This mutual excitation enhances the coherence of the motion in the coupled system, as indicated by increasing $`R`$ and decreasing $`S`$. The lattice displays clusters of synchronization. The clusters breaks and reunites during the evolution, so that each element has slightly different firing frequency. Typical behavior of this partial synchronization regime is shown in Fig. 2(b). However, if the external noise is strong enough so that the noise dominates over the coupling, firing of each element is governed mainly by its individual noise, and synchronization clusters cannot survive, as seen in Fig. 2(c). The coherence of the temporal behavior is relatively low because quite large noise deforms greatly the “near limit” cycle. Note that firing frequency is not locked, but the difference is not as pronounced as in weak noise region (Fig. 2(a)). The next regime where $`R`$ takes large values ($`R18`$) while $`S`$ is very close to zero, is the most interesting, because the system performs quite regular motion globally, as seen in Fig. 2(d). All elements are locked to a relatively large firing frequency, and the distribution of the pulse duration becomes very sharp. After that, with stronger coupling, the system keeps global synchronization, however, the temporal behavior becomes irregular again, as indicated by decreasing $`R`$. This can be understood qualitatively from the global dynamics $`X=x_i_N`$ and $`Y=Y_i_N`$, with $`_N`$ denoting average over the lattice. From Eqs. (1-2), we get approximately $`ϵ\dot{X}`$ $`=`$ $`X{\displaystyle \frac{X^3}{3}}r^2XY,`$ (7) $`\dot{Y}`$ $`=`$ $`X+a_0+{\displaystyle \frac{D}{\sqrt{N}}}\xi (t),`$ (8) where $`r^2=(x_iX)^2_N`$ is the fluctuation level of local dynamics and higher order terms of this fluctuation are ignored; $`a_0=a_i_N`$. The summation of the independent noise is still a Gaussian noise $`\xi (t)`$, but with a weaker strength $`D/\sqrt{N}`$. For strong enough coupling, the system achieves global synchronization, $`x_iX`$, and $`r^20`$, and the coupled system can be viewed as a single element subject to a white noise with strength $`D/\sqrt{N}`$. For a single element, a rather weak noise may not excite the system, or the excited motion is quite irregular . 
This explains the globally synchronized but irregular motion of the lattice, as shown in Fig. 2(e). The locations of the above five representative dynamical regimes shown in Fig. 2 are indicated by the black dots in Fig. 1. These regimes are typical for different sizes of the lattice. For a larger lattice, the regime of global regular motion ($`R`$ large and $`S\approx 0`$) is wider in the parameter space. Now let us discuss the phenomenon of array enhanced coherence resonance. For a fixed lattice size and a certain value of the coupling $`g`$, there is an optimal level of noise at which the system has a maximal value $`R_{max}`$. Figure 3 shows $`R_{max}`$ as a function of $`g`$ for different sizes of the lattice. The dashed line represents $`R_{max}`$ for a single element with $`a=1.05`$. As seen from this figure, in the weak coupling region, $`R_{max}`$ of the coupled lattice is lower than that of a single element, because the nonidentity makes the distribution of the pulse duration broader. However, in some intermediate coupling region, even only two coupled elements can enhance the coherence of the temporal behavior, even though they are nonidentical. The enhancement is larger for larger $`N`$. For a large enough lattice, $`R_{max}`$ seems to saturate, but the region of coupling in which $`R_{max}`$ takes large values is broader for a larger lattice. As pointed out above, for strong enough coupling, the coupled system tends to act as a single element, and $`R_{max}`$ converges to that of the single element at large values of $`g`$. Clearly, for a larger lattice, stronger coupling is needed to make the whole lattice act as a single element, due to the local diffusive coupling, resulting in a slower convergence of $`R_{max}`$ to that of the single element. The properties observed here have some similarity to those in coupled bistable stochastic resonance oscillators subject to the same periodic signal. The difference is that in those studies global synchronization is observed only at an optimal level of noise, while in the present system global synchronization occurs for strong enough coupling if the noise level is not too high. The global motion can be regular or irregular. Here we should emphasize that there is no periodic signal in our system, and the coherent motion is purely induced by noise and enhanced by coupling. To conclude, we demonstrate global synchronization and array enhanced coherence in a lattice of locally coupled, nonidentical FHN model neurons. The results are similar for identical neurons. Interaction between the elements not only renders the firing processes induced by independent noise synchronous, but also greatly improves the coherence of the noise-induced motion. The phenomena demonstrated may be of importance in neurophysiology, where background noise may play an active role in increasing the order and timing precision of a large ensemble of interacting neurons in biological information processing. Zhou thanks Prof. Gang Hu and Dr. Jinghua Xiao for helpful discussions. This work is supported in part by grants from the Hong Kong Research Grants Council (RGC) and the Hong Kong Baptist University Faculty Research Grant (FRG).
# The superbubble model for LiBeB production and Galactic evolution ## 1. Introduction Galactic nucleosynthesis and chemical evolution are about how a given element is produced in the universe and how its abundance has evolved from the primordial universe on. In the case of the light elements, it is widely agreed that the nucleosynthesis occurs through spallation reactions induced by energetic particles (EPs) interacting with the interstellar medium (ISM). In these reactions, a heavier nucleus (most significantly C, N or O) is ‘broken into pieces’ and transmuted into one of the lighter <sup>6</sup>Li, <sup>7</sup>Li, <sup>9</sup>Be, <sup>10</sup>B or <sup>11</sup>B nuclei. Except for <sup>7</sup>Li, this spallative nucleosynthesis is thought to be the main (if not the only) light element production mechanism. The case of <sup>11</sup>B is slightly more complicated, as neutrino-induced spallation in supernovae (the so-called $`\nu `$-process) is sometimes invoked to increase the B/Be and $`{}_{}{}^{11}\mathrm{B}/^{10}\mathrm{B}`$ ratios above what one would expect if the light elements were produced by nucleo-spallation alone. Concerning the Galactic evolution of light element abundances, Fields et al. (2000) have recently re-analyzed the available data as a function of O/H, discussing the uncertainties associated with the methods used to derive the O abundance, the stellar parameters and the incompleteness of the samples. According to their results, Be and B evolution can be described in terms of two distinct production processes: i) a primary process dominating at low metallicity and leading to a linear increase of the Be and B abundances with respect to O – ‘slope 1’ – followed by ii) a secondary process compatible with the standard expectations of the Galactic cosmic ray nucleosynthesis scenario (GCRN) – ‘slope 2’. This behaviour is characterized by a transition metallicity, $`Z_\mathrm{t}\equiv (\mathrm{O}/\mathrm{H})_\mathrm{t}`$, below which the Be/O and B/O ratios are constant and above which they are proportional to O/H. Although the value of $`Z_\mathrm{t}`$ is rather uncertain, because very few data points have yet been reported at $`Z<Z_\mathrm{t}`$, energetics arguments show that a primary process *is* indeed required below, say, $`10^{-2}Z_{\odot }`$ (Parizot & Drury 1999a,b, 2000b, Ramaty et al. 2000), and therefore the very existence of a transition metallicity separating a primary from a secondary evolution scheme seems reasonably well established. In spite of the current large uncertainties on the exact value of $`Z_\mathrm{t}`$, Fields et al. find a range of possible values between $`10^{-1.9}`$ and $`10^{-1.4}(\mathrm{O}/\mathrm{H})_{\odot }`$ (see also Olive, this conference). This two-slope picture seems to reconcile the two competing theories for light element nucleosynthesis, namely GCRN, which predicts a secondary behaviour for Be and B evolution (e.g. Vangioni-Flam et al. 1990, Fields & Olive 1999), and the superbubble model, which predicts a primary behaviour at low metallicity (Parizot & Drury 1999b, 2000b, Ramaty et al. 2000). However, we show here that when applied to the whole lifetime of the Galaxy (not only to the early Galaxy), the superbubble model *alone* actually predicts the entire two-slope behaviour inferred from observations, and accounts for all the qualitative and quantitative constraints currently available. Implications for particle acceleration inside a superbubble (SB) are also analyzed.
## 2. Description of the SB model The superbubble model is based on the observation that most massive stars are born in associations, and evolve quickly enough to explode as SNe in the vicinity of their parent molecular cloud. The dynamical effect of repeated SN explosions in a small region of the Galaxy is to blow large bubbles – superbubbles – of hot, rarefied material, surrounded by shells of swept-up and compressed ISM. The interior of superbubbles consists of the ejecta and stellar winds of evolved massive stars *plus* a given amount of ambient ISM evaporated off the shell and dense clumps passing through the bubble. The exact fraction of the ejecta material inside the SB is not well known, and can be expected to vary with time and from one SB to another. However, this fraction, which we denote $`x`$, is all we need to know in order to fully determine the mean composition of the matter inside SBs. Noting $`\alpha _{\mathrm{ej}}(\mathrm{X})`$ and $`\alpha _{\mathrm{ISM}}(\mathrm{X})`$ the abundances of element X among the SN ejecta and in the ISM, respectively, we can write the abundance of X inside the SB as: $$\alpha _{\mathrm{SB}}(\mathrm{X})=x\alpha _{\mathrm{ej}}(\mathrm{X})+(1-x)\alpha _{\mathrm{ISM}}(\mathrm{X}).$$ (1) The second assumption of the SB model is that the material inside superbubbles is efficiently accelerated by a combination of shocks produced by SN explosions and supersonic stellar winds, secondary shocks reflected by other shocks or clumps of denser material, and a strong magnetic turbulence created by the global activity of all the massive stars. Two different SB models have been proposed so far, assuming different EP compositions and energy spectra. In our model (Parizot & Drury 1999b, 2000b), we follow Bykov & Fleishman (1992) and Bykov (1995, 1999) and argue that the SB acceleration process produces a rather flat spectrum at low energy, namely in $`E^{-1}`$, as expected from multiple shock acceleration theory (Marcowith & Kirk 1999), up to a few hundreds of MeV/n, say. Above this value, the spectrum of the superbubble EPs (SBEPs) is either cut off through a steep power-law or turned into the standard cosmic ray source spectrum (CRS), in $`E^{-2}`$. The exact behaviour of this so-called ‘SB spectrum’ at high energy is important in itself and should be derived from a detailed calculation of the particle acceleration, but we do not consider it here, as it is not relevant to our problem (most of the LiBeB production arises from the most numerous low-energy particles anyway). The other SB model proposed so far (Ramaty & Lingenfelter 1999, Ramaty et al. 2000) assumes that the SBEPs *are* actually the cosmic rays, and thus their energy spectrum is the standard CRS spectrum ($`Q(p)\propto p^{-2}`$). To summarize, the essence of the SB model is that repeated SN explosions occurring in OB associations lead to the acceleration of EPs having either the CRS spectrum or the SB spectrum, and a composition given by Eq. (1), where the only free parameter is the proportion of the ejecta inside the SB: $`x`$. In principle, $`x`$ can be derived from the study of SB evolution dynamics, coupled with a gas evaporation model. But we shall first study LiBeB evolution for itself, with no external prejudice about the value of $`x`$, and therefore consider it as a free parameter which we vary from 0 (i.e. SBEPs have the ambient ISM composition) to 1 (i.e. SBEPs are made of pure SN ejecta). Later, we compare the value derived from the LiBeB constraints with the value expected from standard SB dynamical models.
## 3. Be and B Galactic evolution ### 3.1. Qualitative features Having parameterized our problem as above, we can easily calculate the Be/O production ratio in the Galaxy as a function of $`Z_{\mathrm{ISM}}\equiv (\mathrm{O}/\mathrm{H})_{\mathrm{ISM}}`$. We consider SBs blown by 100 SNe exploding continuously over a lifetime of 30 Myr. We then integrate the Be production rates induced by the SBEPs and divide the result by the total O yield (added up assuming a Salpeter IMF and SN yields from Woosley & Weaver 1995). The result is plotted in Fig. 1 for various values of $`x`$ and the two investigated spectra. The main difference between the latter is the Be production efficiency, i.e. the number of Be nuclei produced per erg of SBEPs. Apart from the SB spectrum being more efficient, both figures distinctly show the sought-for two-slope behaviour, with a transition metallicity $`Z_\mathrm{t}`$ depending on the actual value of $`x`$. This behaviour derives directly from Eq. (1). Replacing X by O there, we see that the abundance of O among the SBEPs is essentially $`x\alpha _{\mathrm{ej}}(\mathrm{O})`$ at low metallicity, and $`(1-x)\alpha _{\mathrm{ISM}}(\mathrm{O})`$ above a transition metallicity $`Z_\mathrm{t}\simeq \frac{x}{1-x}Z_{\mathrm{ej}}`$ (where $`Z_{\mathrm{ej}}\approx 10Z_{\odot }`$). Therefore, remembering that O is the main progenitor of Be, we find that the SB model predicts a primary behaviour below $`Z_\mathrm{t}`$ (production efficiency independent of $`Z_{\mathrm{ISM}}`$), and a secondary behaviour above $`Z_\mathrm{t}`$, since the SB model is then essentially identical to the GCRN model (except maybe for the assumed energy spectrum). Incidentally, it is interesting to note that the SB model can be considered as a correction of the GCRN scenario, taking into account the chemical inhomogeneity of the early Galaxy. Indeed, since particle acceleration occurs precisely in those places where metals are released (i.e. superbubbles), the SBEP composition is considerably richer in O than the average ISM, as long as the SN ejecta dominate the O content of the SBs. Afterwards, it makes little difference, as far as composition is concerned, whether the EPs producing LiBeB are accelerated inside SBs or in the regular ISM. Finally, we see from Fig. 1 that the predicted slope 1 and slope 2 correlations between Be and O are limiting behaviours for very low and very high metallicity, respectively. Depending on the value of $`x`$, any intermediate value of the Be-O slope is reached over a given range of stellar metallicity. This is in contrast with what would arise if the two-slope behaviour were to be explained in terms of two different mechanisms (e.g. the SB model at low $`Z`$ and GCRN at high $`Z`$). In that case, indeed, one would have a sharp change of slope at the precise metallicity where the secondary process becomes dominant, with no intermediate values. Of course, expected physical fluctuations of the parameters would weaken this effect, and current observational error bars prevent us from distinguishing conclusively between the two pictures. But we argue that the observed ‘slope 1.45’ behaviour reported by Boesgaard & Ryan (this conference) can be explained (in principle) only if there is a continuous *transition* from slope 1 to slope 2 within a *unique* model (as in the SB model above), rather than two unrelated models with a slope 2 eventually superseding a slope 1.
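The origin of the two slopes can be made explicit with a small numerical sketch of Eq. (1); here the Be production rate is taken as simply proportional to the total O abundance involved in the collisions (EP oxygen on ambient gas plus EP protons on ISM oxygen), and the normalization and $`Z_{\mathrm{ej}}\approx 10Z_{\odot }`$ are illustrative placeholders rather than values from the full calculation.

```python
# Sketch of the two-slope Be/O behaviour implied by Eq. (1).  The Be
# production is taken as proportional to the total O abundance in the
# collisions (SBEP oxygen on ambient gas + SBEP protons on ISM oxygen);
# the normalization K and Z_ej = 10 Z_sun are illustrative only.
import numpy as np

x, Z_ej, K = 0.03, 10.0, 1.0          # ejecta fraction; solar units
Z_ism = np.logspace(-4, 0, 200)       # ISM metallicity (O/H, solar units)

alpha_sbep = x * Z_ej + (1 - x) * Z_ism   # O abundance of the EPs, Eq. (1)
Be_per_O = K * (alpha_sbep + Z_ism)       # direct + inverse spallation

# below Z_t ~ x/(1-x) * Z_ej the ratio is constant (primary behaviour);
# above it, Be/O grows linearly with Z (secondary behaviour)
Z_t = x / (1 - x) * Z_ej
print(f"transition metallicity ~ {Z_t:.2f} Z_sun")
```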
### 3.2. Quantitative features Quantitatively, the Be/O ratio at low metallicity derived from the observations is about $`4\times 10^{-9}`$ (Parizot & Drury 2000b). This can be achieved either by the CRS spectrum model, provided $`x\approx 50\%`$, or by the SB spectrum model, provided $`x\approx 2`$–$`3\%`$ (see Fig. 1). So far, both models are equally acceptable, since we chose not to accept any prejudice about the value of $`x`$ from outside the restricted field of LiBeB evolution. But when considering the transition metallicity associated with the two possible models, we see that the CRS spectrum implies $`Z_\mathrm{t}>10^{-1}Z_{\odot }`$, well outside the range derived by Fields et al. On the other hand, the value of $`Z_\mathrm{t}`$ predicted by the SB spectrum model falls exactly in the required range. In conclusion, the SB model is fully consistent with the observations provided that i) the SBEP spectrum is flattened at low energy (in $`E^{-1}`$), and ii) the SN ejecta amount to a few percent of all the matter present inside SBs. Now let us extend the scope of our study. Quite remarkably, the first condition above is exactly what is expected from the SB acceleration model developed by Bykov et al. As for the second condition, it is in perfect agreement with the dynamical model for SB evolution worked out by Mac Low & McCray (1988). In other words, had we looked beforehand for a theoretically preferred value of $`x`$, we would have chosen just the particular value which turns out to account for the various constraints of Be Galactic evolution. Therefore, our results actually bring support not only to the SB model as the natural framework for Be and B evolution studies, but also to the SB acceleration model and standard SB dynamics. Concerning B, unfortunately, only qualitative constraints can be checked against the SB model (successfully in this instance), since either a significant $`\nu `$-process or a LECR component is required anyway to account for the observed B/Be and $`{}_{}{}^{11}\mathrm{B}/^{10}\mathrm{B}`$ ratios. However, Li does provide additional quantitative constraints. First, in order not to break the Spite plateau, the Li/Be production ratio must be lower than about 100. This is shown in Fig. 2a to be satisfied for any value of $`x`$ greater than about 1%. Second, the measurement of the <sup>6</sup>Li abundance in two halo stars of metallicity $`Z\approx 10^{-2.3}Z_{\odot }`$ indicates that the <sup>6</sup>Li/<sup>9</sup>Be ratio in these stars should be in the range 20–80 (see Vangioni-Flam, Cassé, & Audouze 2000 and references therein), in contrast with the solar value of about 6. This could not be explained if the proportion of SN ejecta inside SBs were of the order of 50% (CRS spectrum model). However, it is quite remarkable again that the value of a few percent derived from the SB spectrum model is totally consistent with the observed value of the <sup>6</sup>Li/<sup>9</sup>Be ratio, both at low metallicity and at solar metallicity. ## 4. Conclusion The SB model described above has been shown to be fully consistent with the qualitative and quantitative constraints of LiBeB Galactic evolution: 1) it explains the inferred two-slope behaviour in the framework of one single model; 2) it provides the correct value of Be/O at low metallicity; 3) it predicts the correct value of the transition metallicity; 4) it does not break the Spite plateau; 5) it is consistent with the <sup>6</sup>Li/<sup>9</sup>Be ratio at any metallicity.
Most importantly, these successes rely on the value of only one free parameter, namely the proportion of SN ejecta inside a SB. The value which we find is of the order of a few percent, i.e. exactly in the range derived from standard SB dynamical evolution. Likewise, the SB model is found to be successful only if the SBEPs have the SB spectrum, i.e. a flattened shape at low energy (in $`E^{-1}`$). But this is exactly what is predicted by the SB acceleration model of Bykov et al. In conclusion, the SB model appears to account for all the available constraints on LiBeB evolution by making only the most standard assumptions about the involved models relating to other fields of astrophysics. This may be considered as lending support to these models as well.

#### Acknowledgments.

This work was supported by the TMR programme of the European Union under contract FMRX-CT98-0168.

## References

Bykov A. M. 1995, Space Sci. Rev., 74, 397

Bykov A. M. 1999, in “LiBeB, Cosmic Rays and Gamma-Ray Line Astronomy”, ASP Conf. Ser., eds. R. Ramaty, E. Vangioni-Flam, M. Cassé, & K. Olive

Bykov A. M., & Fleishman G. D. 1992, MNRAS, 255, 269

Fields B. D., & Olive K. A. 1999, ApJ, 516, 797

Fields B. D., Olive K. A., Vangioni-Flam E., & Cassé M. 2000, ApJ, submitted (astro-ph/9911320)

Mac Low M. M., & McCray R. 1988, ApJ, 324, 776

Marcowith A., & Kirk J. G. 1999, A&A, 347, 391

Parizot E., & Drury L. 1999a, A&A, 346, 686

Parizot E., & Drury L. 1999b, A&A, 349, 673

Parizot E., & Drury L. 2000a, A&A, submitted

Parizot E., & Drury L. 2000b, A&A, submitted

Ramaty R., & Lingenfelter R. E. 1999, in “LiBeB, Cosmic Rays and Gamma-Ray Line Astronomy”, ASP Conf. Ser., eds. R. Ramaty, E. Vangioni-Flam, M. Cassé, & K. Olive

Ramaty R., Scully S. T., Lingenfelter R. E., & Kozlovsky B. 2000, ApJ, in press (astro-ph/9909021)

Vangioni-Flam E., Cassé M., Audouze J., & Oberto Y. 1990, ApJ, 364, 586

Vangioni-Flam E., Cassé M., & Audouze J. 2000, Phys. Rep., submitted

Woosley S. E., & Weaver T. A. 1995, ApJS, 101, 181
# Congested Traffic States in Empirical Observations and Microscopic Simulations

## I Introduction

Recently, there has been much interest in the dynamics of traffic breakdowns behind bottlenecks. Measurements of traffic breakdowns on various freeways in the USA, Germany, Holland, and Korea suggest that many dynamic aspects are universal and therefore accessible to a physical description. One common property is the capacity drop (typically of the order of 20%) associated with a breakdown, which leads to hysteresis effects and is the basis of applications like dynamic traffic control with the aim of avoiding the breakdown. In the majority of cases, traffic breaks down upstream of a bottleneck and the congestion has a stationary downstream front at the bottleneck. The type of bottleneck, e.g., on-ramps, lane closings, or uphill gradients, seems not to be of importance. Several types of congested traffic have been found, among them extended states with a relatively high traffic flow. These states, sometimes referred to as “synchronized traffic”, can be more or less homogeneously flowing, or show distinct oscillations in the time series of detector data. Very often, the congested traffic flow is, apart from fluctuations, homogeneous near the bottleneck, but oscillations occur further upstream. In other cases, one finds isolated stop-and-go waves that propagate in the upstream direction with a characteristic velocity of about 15 km/h. Finally, there is also an observation of a traffic breakdown to a pinned localized cluster near an on-ramp.

There are several possibilities to describe traffic mathematically, among them macroscopic models describing the dynamics in terms of aggregate quantities like density or flow, and microscopic models describing the motion of individual vehicles. The latter include continuous-in-time models (car-following models) and cellular automata. Traffic breakdowns behind bottlenecks have been simulated with the non-local, gas-kinetic-based traffic model (GKT model), the Kühne-Kerner-Konhäuser-Lee model (KKKL model), and with a new car-following model, which will be reported below. For a direct comparison with empirical data, one would prefer car-following models. As the position and velocity of each car is known in such models, one can reconstruct the way data are obtained by the usual induction-loop detectors. To this end, one introduces “virtual” detectors recording passage times and velocities of crossing vehicles and compares this output with the empirical data. Because traffic density is not a primary variable, this avoids the problems associated with determining the traffic density by temporal averages.

The present study builds on an earlier publication in which, based on a gas-kinetic-based macroscopic simulation model, it was concluded that there should be five different congested traffic states on freeways with inhomogeneities like on-ramps. The kind of congested state depends essentially on the inflow into the considered freeway section and on the “bottleneck strength” characterizing the inhomogeneity. This can be summarized by a phase diagram depicting the kind of traffic state as a function of these two parameters. A similar phase diagram has been obtained for the KKKL model. The question is whether this finding holds for some macroscopic models only, or is universal for a larger class of traffic models and confirmed by empirical data.
The relative positions of some of the traffic states in this phase space have been qualitatively confirmed for a Korean freeway, but only one type of extended state has been measured there. Furthermore, this state did not have the characteristic properties of extended congested traffic on most other freeways. It is an open question to confirm the relative positions of the other states. Moreover, to our knowledge, there are no direct simulations of the different breakdowns using empirical data as boundary conditions, neither with microscopic nor with macroscopic models.

Careful investigations with a new follow-the-leader model (the IDM model) show that (i) the conclusions of the earlier macroscopic study are also valid for certain microscopic traffic models (at least deterministic models with a metastable density range), (ii) the results can be systematically transferred to more general (in particular flow-conserving) kinds of bottlenecks, and a formula allowing one to quantify the bottleneck strength is given \[see Eq. (15)\], (iii) the existence of all predicted traffic states is empirically supported, and finally, (iv) all different kinds of breakdowns can be simulated with the IDM model with empirically measured boundary conditions, varying only one parameter (the average time headway $`T`$), which is used to specify the capacity of the stretch.

The applied IDM model belongs to the class of deterministic follow-the-leader models like the optimal velocity model by Bando et al., but it has the following advantages: (i) it behaves accident-free because of the dependence on the relative velocity, (ii) for similar reasons and because of metastability, it shows the self-organized characteristic traffic constants demanded by Kerner et al. (see Fig. 4), hysteresis effects, and complex states, (iii) all model parameters have a reasonable interpretation, are known to be relevant, are empirically measurable, and have the expected order of magnitude, (iv) the fundamental diagram and the stability properties of the model can be easily (and separately) calibrated to empirical data, (v) it allows for a fast numerical simulation, and (vi) an equivalent macroscopic version of the model is known, which is not the case for most other microscopic traffic models.

These aspects are discussed in Sec. II, while Sec. III is not model-specific at all. Section III presents ways to specify and quantify bottlenecks, as well as the traffic states resulting for different traffic volumes and bottleneck strengths. The analytical expressions for the phase boundaries of the related phase diagram allow us to conclude that similar results will be found for any other traffic model with a stable, metastable, and unstable density range. Even such a subtle feature as tristability, first found in macroscopic models, is observed. It would certainly be interesting to investigate in the future whether the same phenomena are also found for CA models or stochastic traffic models like the one by Krauss. Section IV discusses empirical data using representative examples out of a sample of about 100 investigated breakdowns. Thanks to a new method for presenting the cross-section data (based on a smoothing and interpolation procedure), it is possible to present 3D plots of the empirical density or average velocity as a function of time and space. This allows a good visual impression of the traffic patterns and a direct comparison with simulation results.
In the IDM microsimulations, we used a very restricted data set, namely only the measured flows and velocities at the upstream and downstream boundaries, omitting the data of the up to eight detectors in between. Although the simulated sections were up to 13 km long and the boundaries were typically outside of congestions, the simulations reproduced qualitatively the sometimes very complex observed collective dynamics. All in all, the study supports the idea of the suggested phase diagram of congested traffic states quite well and suggests ways to simulate real traffic breakdowns at bottlenecks with empirical boundary conditions.

## II The Microscopic “Intelligent Driver Model” (IDM)

For about fifty years now, researchers have modeled freeway traffic by means of continuous-in-time microscopic models (car-following models). Since then, a multitude of car-following models have been proposed, both for single-lane and multi-lane traffic including lane changes. We restrict ourselves here to phenomena for which lane changes are not important and only consider single-lane models. To motivate our traffic model, we first give an overview of the dynamical properties of some popular microscopic models.

### A Dynamic Properties of Some Car-Following Models

Continuous-time single-lane car-following models are defined essentially by their acceleration function. In many of the earlier models, the acceleration $`\dot{v}_\alpha (t+T_r)`$ of vehicle $`\alpha `$, delayed by a reaction time $`T_r`$, can be written as

$$\dot{v}_\alpha (t+T_r)=-\frac{\lambda v_\alpha ^m\mathrm{\Delta }v_\alpha }{s_\alpha ^l}.$$ (1)

The deceleration $`\dot{v}_\alpha (t+T_r)`$ is assumed to be proportional to the approaching rate

$$\mathrm{\Delta }v_\alpha (t):=v_\alpha (t)-v_{\alpha -1}(t)$$ (2)

of vehicle $`\alpha `$ with respect to the leading vehicle $`(\alpha -1)`$. In addition, the acceleration may depend on the own velocity $`v_\alpha `$ and decrease with some power of the net (bumper-to-bumper) distance

$$s_\alpha =x_{\alpha -1}-x_\alpha -l_\alpha $$ (3)

to the leading vehicle (where $`l_\alpha `$ is the vehicle length). Since, according to Eq. (1), the acceleration depends on a leading vehicle, these models are not applicable for very low traffic densities. If no leading vehicle is present (corresponding to $`s_\alpha \to \mathrm{}`$), the acceleration is either not determined $`(l=0)`$ or zero $`(l>0)`$, regardless of the own velocity. However, one would expect in this case that drivers accelerate to an individual desired velocity.

The car-following behavior in dense traffic is also somewhat unrealistic. In particular, the gap $`s_\alpha `$ to the respective front vehicle does not necessarily relax to an equilibrium value. Even small gaps will not induce braking reactions if the velocity difference $`\mathrm{\Delta }v_\alpha `$ is zero. These problems are solved by the car-following model of Newell. In this model, the velocity at time $`(t+T_r)`$ depends adiabatically on the gap, i.e., the vehicle adapts exactly to a distance-dependent function $`V`$ within the reaction time $`T_r`$,

$$v_\alpha (t+T_r)=V\left(s_\alpha (t)\right).$$ (4)

The “optimal velocity function” $`V(s)=v_0\{1-\mathrm{exp}[-(s-s_0)/(v_0T)]\}`$ includes both a desired velocity $`v_0`$ for vanishing interactions ($`s\to \mathrm{}`$) and a safe time headway $`T`$ characterizing the car-following behavior in dense (equilibrium) traffic.
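As an illustration of Eq. (4), the following sketch implements the Newell update rule with the optimal velocity function quoted above; all parameter values are assumptions chosen for demonstration purposes only.

```python
import math

def newell_optimal_velocity(s, v0=30.0, s0=2.0, T=1.5):
    """Newell's optimal velocity function V(s) = v0 * (1 - exp(-(s - s0)/(v0*T)))."""
    return max(0.0, v0 * (1.0 - math.exp(-(s - s0) / (v0 * T))))

def newell_update(positions, v0=30.0, s0=2.0, T=1.5, Tr=1.0, length=5.0):
    """One step of Eq. (4): each vehicle adopts V(s) after the reaction time Tr.
    positions are ordered front to back; the leader keeps the free speed v0."""
    new_pos = positions[:]
    for i, xpos in enumerate(positions):
        if i == 0:
            v = v0                                     # free leader
        else:
            gap = positions[i - 1] - xpos - length     # net (bumper-to-bumper) gap
            v = newell_optimal_velocity(gap, v0, s0, T)
        new_pos[i] = xpos + v * Tr                     # advance by one reaction time
    return new_pos

cars = [200.0, 170.0, 140.0]      # front-to-back positions in meters
cars = newell_update(cars)        # iterated map: repeat for successive time steps
```

Treating the model as an iterated map with time step $`T_r`$, as here, makes the instantaneous velocity adaptation (and hence the unrealistically large implied accelerations criticized below) explicit.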
The Newell model is collision-free, but the immediate dependence of the velocity on the density leads to very high accelerations of the order of $`v_0/T_r`$. Assuming a typical desired velocity of 30 m/s and $`T_r=1`$ s, this would correspond to 30 m/s<sup>2</sup>, which is clearly unrealistic. More than 30 years later, Bando et al. suggested a similar model,

$$\dot{v}_\alpha =\frac{V(s_\alpha )-v_\alpha }{\tau }$$ (5)

with a somewhat different optimal velocity function. This “optimal-velocity model” has been widely used by physicists because of its simplicity, and because some results could be derived analytically. The dynamical behavior does not differ greatly from the Newell model, since the reaction time delay $`T_r`$ of the Newell model can be compared with the velocity relaxation time $`\tau `$ of the optimal-velocity model. However, realistic velocity relaxation times are of the order of 10 s (city traffic) to 40 s (freeway traffic) and therefore much larger than reaction delay times (of the order of 1 s). For typical values of the other parameters of the optimal-velocity model, crashes are only avoided if $`\tau <0.9`$ s, i.e., the velocity relaxation time must be of the order of the reaction time, leading again to unrealistically high values $`v_0/\tau `$ of the maximum acceleration. The reason for this unstable behavior is that effects of velocity differences are neglected. However, they play an essential stabilizing role in real traffic, especially when approaching traffic jams. Moreover, in models (4) and (5), accelerations and decelerations are symmetric with respect to the deviation of the actual velocity from the equilibrium velocity, which is unrealistic. The absolute value of braking decelerations is usually stronger than that of accelerations.

A relatively simple model with a generalized optimal velocity function, incorporating both reactions to velocity differences and different rules for acceleration and braking, has been proposed rather recently. This “generalized-force model” could successfully reproduce the time-dependent gaps and velocities measured by a sensor-equipped car in congested city traffic. However, the acceleration and deceleration times in this model are still unrealistically small, which requires inefficiently small time steps for the numerical simulation. Besides these simple models intended for basic investigations, there are also highly complex “high-fidelity models” like the Wiedemann model or MITSIM, which try to reproduce traffic as realistically as possible, but at the cost of a large number of parameters. Other approaches that incorporate “intelligent” and realistic braking reactions are the simple and fast stochastic models proposed by Gipps and Krauss. Despite their simplicity, these models show a realistic driver behavior, have asymmetric accelerations and decelerations, and produce no accidents. Unfortunately, they lose their realistic properties in the deterministic limit. In particular, they show no traffic instabilities or hysteresis effects for vanishing fluctuations.
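The stability problem of the optimal-velocity model can be illustrated in a few lines. The sketch below integrates Eq. (5) on a ring road with an exponential optimal velocity function; all parameter values are illustrative, the crash criterion is simply a non-positive net gap, and the claimed outcomes (crash for large $`\tau `$, none for small $`\tau `$) are what this setup typically produces, not a general statement.

```python
import math

def ov_ring(n=50, circumference=1500.0, tau=5.0, v0=30.0, s0=2.0, T=1.5,
            length=5.0, dt=0.05, steps=20000):
    """Euler integration of the optimal-velocity model (5) on a ring road,
    starting near equilibrium with one slightly displaced vehicle."""
    V = lambda s: max(0.0, v0 * (1.0 - math.exp(-(s - s0) / (v0 * T))))
    x = [i * circumference / n for i in range(n)]
    x[0] += 2.0                                   # small localized perturbation
    v = [V(circumference / n - length)] * n
    for _ in range(steps):
        gaps = [(x[(i + 1) % n] - x[i]) % circumference - length
                for i in range(n)]
        if min(gaps) <= 0.0:
            return "crash"                        # bumper-to-bumper contact
        v = [max(0.0, v[i] + (V(gaps[i]) - v[i]) / tau * dt) for i in range(n)]
        x = [(x[i] + v[i] * dt) % circumference for i in range(n)]
    return f"no crash; final gap spread {max(gaps) - min(gaps):.1f} m"

print(ov_ring(tau=0.5), "|", ov_ring(tau=5.0))    # small vs. realistic tau
```

With a realistically large relaxation time, the small perturbation grows into oscillations and eventually a collision, in line with the restriction $`\tau <0.9`$ s quoted above.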
### B Model Equations

The acceleration assumed in the IDM is a continuous function of the velocity $`v_\alpha `$, the gap $`s_\alpha `$, and the velocity difference (approaching rate) $`\mathrm{\Delta }v_\alpha `$ to the leading vehicle:

$$\dot{v}_\alpha =a^{(\alpha )}\left[1-\left(\frac{v_\alpha }{v_0^{(\alpha )}}\right)^\delta -\left(\frac{s^{*}(v_\alpha ,\mathrm{\Delta }v_\alpha )}{s_\alpha }\right)^2\right].$$ (6)

This expression is an interpolation of the tendency to accelerate with $`a_f(v_\alpha ):=a^{(\alpha )}[1-(v_\alpha /v_0^{(\alpha )})^\delta ]`$ on a free road and the tendency to brake with deceleration $`b_{\mathrm{int}}(s_\alpha ,v_\alpha ,\mathrm{\Delta }v_\alpha ):=a^{(\alpha )}(s^{*}/s_\alpha )^2`$ when vehicle $`\alpha `$ comes too close to the vehicle in front. The deceleration term depends on the ratio between the “desired minimum gap” $`s^{*}`$ and the actual gap $`s_\alpha `$, where the desired gap

$$s^{*}(v,\mathrm{\Delta }v)=s_0^{(\alpha )}+s_1^{(\alpha )}\sqrt{\frac{v}{v_0^{(\alpha )}}}+T^{(\alpha )}v+\frac{v\mathrm{\Delta }v}{2\sqrt{a^{(\alpha )}b^{(\alpha )}}}$$ (7)

is dynamically varying with the velocity and the approaching rate. In the rest of this paper, we will study the case of identical vehicles whose model parameters $`v_0^{(\alpha )}=v_0`$, $`s_0^{(\alpha )}=s_0`$, $`T^{(\alpha )}=T`$, $`a^{(\alpha )}=a`$, $`b^{(\alpha )}=b`$, and $`\delta `$ are given in Table I. Here, our emphasis is on basic investigations with models as simple as possible, and therefore we will set $`s_1^{(\alpha )}=0`$, resulting in a model where all parameters have an intuitive meaning with plausible and often easily measurable values. While the empirical data presented in this paper can nevertheless be reproduced, a distinction of different driver-vehicle types and/or a nonzero $`s_1`$ is necessary for a more quantitative agreement. A nonzero $`s_1`$ would also be necessary for features requiring an inflection point in the equilibrium flow-density relation, e.g., for certain types of multi-scale expansions.
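Equations (6) and (7) translate directly into code. In the following sketch, the values of $`a`$, $`b`$, $`T`$, $`v_0`$, and $`\delta `$ are the ones quoted in Sec. III, while $`s_0`$ and $`s_1`$ are placeholders, since Table I is not reproduced here.

```python
import math

# IDM parameters: a, b, T, v0, delta as quoted in Sec. III;
# s0 and s1 are placeholder values (Table I is not reproduced here).
V0, DELTA, T, A, B, S0, S1 = 120 / 3.6, 4.0, 1.6, 0.73, 1.67, 2.0, 0.0

def desired_gap(v, dv, v0=V0, T=T, a=A, b=B, s0=S0, s1=S1):
    """Dynamically desired gap s*(v, dv), Eq. (7)."""
    return (s0 + s1 * math.sqrt(v / v0) + T * v
            + v * dv / (2.0 * math.sqrt(a * b)))

def idm_acceleration(s, v, dv, v0=V0, delta=DELTA, a=A):
    """IDM acceleration, Eq. (6); dv = v - v_leader is the approaching rate."""
    return a * (1.0 - (v / v0) ** delta - (desired_gap(v, dv) / s) ** 2)

# example: following at 20 m/s with a 30 m gap and no speed difference
print(f"{idm_acceleration(s=30.0, v=20.0, dv=0.0):+.2f} m/s^2")
```

Note that the single expression (6) covers all driving regimes; the special cases discussed next correspond to different terms dominating.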
### C Dynamic Single-Vehicle Properties

Special cases of the IDM acceleration (6) with $`s_1=0`$ include the following driving modes:

#### a Equilibrium traffic:

In equilibrium traffic of arbitrary density ($`\dot{v}_\alpha =0`$, $`\mathrm{\Delta }v_\alpha =0`$), drivers tend to keep a velocity-dependent equilibrium gap $`s_e(v_\alpha )`$ to the front vehicle given by

$$s_e(v)=s^{*}(v,0)\left[1-\left(\frac{v}{v_0}\right)^\delta \right]^{-\frac{1}{2}}=(s_0+vT)\left[1-\left(\frac{v}{v_0}\right)^\delta \right]^{-\frac{1}{2}}.$$ (8)

In particular, the equilibrium gap of homogeneous congested traffic (with $`v_\alpha \ll v_0`$) is essentially equal to the desired gap, $`s_e(v)\approx s_0+vT`$, i.e., it is composed of a bumper-to-bumper space $`s_0`$ kept in standing traffic and an additional velocity-dependent contribution $`vT`$ corresponding to a constant safe time headway $`T`$. This high-density limit is of the same functional form as that of the Newell model, Eq. (4). Solving Eq. (8) for $`v:=V_e(s)`$ leads to simple expressions only for $`\delta =1`$, $`\delta =2`$, or $`\delta \to \mathrm{}`$. In particular, the equilibrium velocity for $`\delta =1`$ and $`s_0=0`$ is

$$V_e(s)|_{\delta =1,s_0=0}=\frac{s^2}{2v_0T^2}\left(-1+\sqrt{1+\frac{4T^2v_0^2}{s^2}}\right).$$ (9)

Further interesting cases are

$$V_e(s)|_{\delta =2,s_0=0}=\frac{v_0}{\sqrt{1+\frac{v_0^2T^2}{s^2}}},$$ (10)

and

$$V_e(s)|_{\delta \to \mathrm{}}=\text{min}\{v_0,(s-s_0)/T\}.$$ (11)

From a macroscopic point of view, equilibrium traffic consisting of identical vehicles can be characterized by the equilibrium traffic flow $`Q_e(\rho )=\rho V_e(\rho )`$ (vehicles per hour and per lane) as a function of the traffic density $`\rho `$ (vehicles per km and per lane). For the IDM, this “fundamental diagram” follows from one of the equilibrium relations (8) to (11), together with the micro-macro relation between gap and density:

$$s=1/\rho -l=1/\rho -1/\rho _{\mathrm{max}}.$$ (12)

Herein, the maximum density $`\rho _{\mathrm{max}}`$ is related to the vehicle length $`l`$ by $`\rho _{\mathrm{max}}l=1`$. Figure 1 shows the fundamental diagram and its dependence on the parameters $`\delta `$, $`v_0`$, and $`T`$. In particular, the fundamental diagram for $`s_0=0`$ and $`\delta =1`$ is identical to the equilibrium relation of the macroscopic GKT model, if the GKT parameter $`\mathrm{\Delta }A`$ is set to zero (cf. Eq. (23) in the GKT model reference), which is a necessary condition for a micro-macro correspondence.
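Since Eq. (8) cannot be inverted in closed form for general $`\delta `$, the fundamental diagram is most easily constructed parametrically: sweep the equilibrium velocity, compute the gap from Eq. (8) and the density from Eq. (12). A sketch, with the parameter values used earlier ($`s_0`$ again a placeholder):

```python
import numpy as np

def fundamental_diagram(v0=120/3.6, T=1.6, delta=4.0, s0=2.0, length=5.0, n=400):
    """Parametric construction of the equilibrium flow-density relation:
    for each equilibrium velocity v, the gap follows from Eq. (8) (s1 = 0)
    and the density from the micro-macro relation (12); Q_e = rho * v."""
    v = np.linspace(1e-3, v0 * (1 - 1e-6), n)
    s_e = (s0 + v * T) / np.sqrt(1.0 - (v / v0) ** delta)   # Eq. (8)
    rho = 1.0 / (s_e + length)                              # Eq. (12)
    return rho, rho * v

rho, Q = fundamental_diagram()
i = np.argmax(Q)
print(f"static capacity ~ {Q[i] * 3600:.0f} veh/h at rho ~ {rho[i] * 1000:.1f} veh/km")
```

The maximum of this curve is the static capacity $`Q_{\mathrm{max}}`$ discussed in Sec. III, to be distinguished from the self-organized outflow from congested traffic.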
#### b Acceleration to the desired velocity:

If the traffic density is very low ($`s`$ is large), the interaction term is negligible and the IDM acceleration reduces to the free-road acceleration $`a_f(v)=a[1-(v/v_0)^\delta ]`$, which is a decreasing function of the velocity with a maximum value $`a_f(0)=a`$ and $`a_f(v_0)=0`$. In Fig. 2, this regime applies for times $`t\lesssim 60`$ s. The acceleration exponent $`\delta `$ specifies how the acceleration decreases when approaching the desired velocity. The limiting case $`\delta \to \mathrm{}`$ corresponds to approaching $`v_0`$ with a constant acceleration $`a`$, while $`\delta =1`$ corresponds to an exponential relaxation to the desired velocity with the relaxation time $`\tau =v_0/a`$. In the latter case, the free-traffic acceleration is equivalent to that of the optimal-velocity model (5) and also to the acceleration functions of many macroscopic models like the KKKL model or the GKT model. However, the most realistic behavior is expected in between the two limiting cases of exponential acceleration (for $`\delta =1`$) and constant acceleration (for $`\delta \to \mathrm{}`$), which is confirmed by our simulations with the IDM. Throughout this paper, we will use $`\delta =4`$.

#### c Braking as reaction to high approaching rates:

When approaching slower or standing vehicles with sufficiently high approaching rates $`\mathrm{\Delta }v>0`$, the equilibrium part $`s_0+vT`$ of the dynamical desired distance $`s^{*}`$, Eq. (7), can be neglected with respect to the nonequilibrium part, which is proportional to $`v\mathrm{\Delta }v`$. Then, the interaction part $`a(s^{*}/s)^2`$ of the acceleration equation (6) is given by

$$b_{\mathrm{int}}(s,v,\mathrm{\Delta }v)\approx \frac{(v\mathrm{\Delta }v)^2}{4bs^2}.$$ (13)

This expression implements anticipative “intelligent” braking behavior, which we discuss now for the special case of approaching a standing obstacle ($`\mathrm{\Delta }v=v`$). Anticipating a constant deceleration during the whole approaching process, a minimum kinematic deceleration $`b_k:=v^2/(2s)`$ is necessary to avoid a collision. The situation is assumed to be “under control” if $`b_k`$ is smaller than the “comfortable” deceleration given by the model parameter $`b`$, i.e., $`\beta :=b_k/b\le 1`$. In contrast, an emergency situation is characterized by $`\beta >1`$. With these definitions, Eq. (13) becomes

$$b_{\mathrm{int}}(s,v,v)=\frac{b_k^2}{b}=\beta b_k.$$ (14)

While in safe situations the IDM deceleration is less than the kinematic collision-free deceleration, drivers overreact in emergency situations to get the situation again under control. It is easy to show that in both cases the acceleration approaches $`\dot{v}=-b`$ under the deceleration law (14). Notice that this stabilizing behavior is lost if one replaces the braking term $`a(s^{*}/s)^2`$ in Eq. (6) by $`a(s^{*}/s)^{\delta ^{\prime}}`$ with $`\delta ^{\prime}\le 1`$, corresponding to $`b_{\mathrm{int}}(s,v,v)=\beta ^{\delta ^{\prime}-1}b_k`$. The “intelligent” braking behavior of drivers in this regime makes the model collision-free. The right parts of the plots of Fig. 2 ($`t>70`$ s) show the simulated approach of an IDM vehicle to a standing obstacle. As expected, the maximum deceleration is of the order of $`b`$. For low velocities, however, the equilibrium term $`s_0+vT`$ of $`s^{*}`$ cannot be neglected, as assumed when deriving Eq. (14). Therefore, the maximum deceleration is somewhat lower than $`b`$, and the deceleration decreases immediately before the stop, while under the dynamics (14) one would have $`\dot{v}=-b`$. Similar braking rules have been implemented in the model of Krauss, where the model is formulated in terms of a time-discretized update scheme (iterated map), in which the velocity at timestep $`(t+1)`$ is limited to a “safe velocity” calculated on the basis of the kinematic braking distance at a given “comfortable” deceleration.

#### d Braking in response to small gaps:

The fourth driving mode is active when the gap is much smaller than $`s^{*}`$ but there are no large velocity differences. Then, the equilibrium part $`s_0+vT`$ of $`s^{*}`$ dominates over the dynamic contribution proportional to $`\mathrm{\Delta }v`$. Neglecting the free-road acceleration, Eq. (6) reduces to $`\dot{v}\approx -a(s_0+vT)^2/s^2`$, corresponding to a Coulomb-like repulsion. Such braking interactions are also implemented in other models, e.g., in the model of Edie, the GKT model, or in certain regimes of the Wiedemann model. The dynamics in this driving regime is not qualitatively different if one replaces $`a(s^{*}/s)^2`$ by $`a(s^{*}/s)^{\delta ^{\prime}}`$ with $`\delta ^{\prime}>0`$. This is in contrast to the approaching regime, where collisions would be provoked for $`\delta ^{\prime}\le 1`$. Figure 3 shows the car-following dynamics in this regime. For the standard parameters, one clearly sees a non-oscillatory relaxation to the equilibrium distance (solid curve), while for very high values of $`b`$, the approach to the equilibrium distance would occur with damped oscillations (dashed curve). Notice that, for the latter parameter set, the collective traffic dynamics would already be extremely unstable.
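The braking behavior described by Eqs. (13) and (14) can be checked with a one-vehicle experiment similar to Fig. 2: an IDM vehicle approaching a standing obstacle. The sketch below uses the same illustrative parameter values as before; consistent with the discussion above, the maximum deceleration should come out somewhat below $`b`$.

```python
import math

# One IDM vehicle approaching a standing obstacle (dv = v); parameters as in
# the sketches above, with s0 = 2 m again a placeholder value.
v0, delta, T, a, b, s0 = 120 / 3.6, 4.0, 1.6, 0.73, 1.67, 2.0

x, v, x_obstacle, dt = 0.0, v0, 700.0, 0.05
max_dec = 0.0
while v > 0.01:
    s = x_obstacle - x                                     # net gap to obstacle
    s_star = s0 + T * v + v * v / (2 * math.sqrt(a * b))   # Eq. (7) with dv = v
    acc = a * (1 - (v / v0) ** delta - (s_star / s) ** 2)  # Eq. (6)
    v = max(0.0, v + acc * dt)
    x += v * dt
    max_dec = max(max_dec, -acc)

print(f"stops {x_obstacle - x:.1f} m behind the obstacle "
      f"(about s0), max deceleration {max_dec:.2f} m/s^2 (b = {b})")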
### D Collective Behavior and Stability Diagram

Although we are interested in realistic open traffic systems, it turned out that many features can be explained in terms of the stability behavior in a closed system. Figure 4(a) shows the stability diagram of homogeneous traffic on a circular road. The control parameter is the homogeneous density $`\rho _\mathrm{h}`$. We applied both a very small and a large localized perturbation to check for linear and nonlinear stability, and plotted the resulting minimum ($`\rho _{\mathrm{out}}`$) and maximum ($`\rho _{\mathrm{jam}}`$) densities after a stationary situation was reached. The resulting diagram is very similar to that of the macroscopic KKKL and GKT models. In particular, it displays the following realistic features: (i) Traffic is stable for very low and high densities, but unstable for intermediate densities. (ii) There is a density range $`\rho _{\mathrm{c1}}\le \rho _\mathrm{h}\le \rho _{\mathrm{c2}}`$ of metastability, i.e., only perturbations of sufficiently large amplitudes grow, while smaller perturbations disappear. Note that, for most IDM parameter sets, there is no second metastable range at higher densities, in contrast to the GKT and KKKL models. Rather, traffic flow becomes stable again for densities exceeding the critical density $`\rho _{\mathrm{c3}}`$, or congested flows below $`Q_{\mathrm{c3}}=Q_\mathrm{e}(\rho _{\mathrm{c3}})`$. (iii) The density $`\rho _{\mathrm{jam}}`$ inside of traffic jams and the associated flow $`Q_{\mathrm{jam}}=Q_\mathrm{e}(\rho _{\mathrm{jam}})`$, cf. Fig. 4(b), do not depend on $`\rho _\mathrm{h}`$. For the parameter set chosen here, we have $`\rho _{\mathrm{jam}}=\rho _{\mathrm{c3}}=140`$ vehicles/km and $`Q_{\mathrm{jam}}=0`$, i.e., there is no linearly stable congested traffic with a finite flow and velocity. For other parameters, especially for a nonzero IDM parameter $`s_1`$, both $`Q_{\mathrm{jam}}`$ and $`Q_{\mathrm{c3}}`$ can be nonzero and different from each other.

As further “traffic constants”, at least in the density range 20 veh./km $`\le \rho _\mathrm{h}\le `$ 50 veh./km, we observe a constant outflow $`Q_{\mathrm{out}}=Q_\mathrm{e}(\rho _{\mathrm{out}})`$ and a constant propagation velocity $`v_\mathrm{g}=(Q_{\mathrm{out}}-Q_{\mathrm{jam}})/(\rho _{\mathrm{out}}-\rho _{\mathrm{jam}})\approx -15`$ km/h of traffic jams, i.e., jams propagate upstream. Figure 4(b) shows the stability diagram for the flows. In particular, we have $`Q_{\mathrm{c1}}<Q_{\mathrm{out}}\le Q_{\mathrm{c2}}`$, where $`Q_{\mathrm{c}i}=Q_\mathrm{e}(\rho _{\mathrm{c}i})`$, i.e., the outflow from congested traffic is at the margin of linear stability, which is also the case in the GKT for most parameter sets. For other IDM parameters, the outflow $`Q_{\mathrm{out}}\in [Q_{\mathrm{c1}},Q_{\mathrm{c2}}]`$ is metastable, or even at the margin of nonlinear stability.

In open systems, a third type of stability becomes relevant. Traffic is convectively stable if, after a sufficiently long time, all perturbations are convected out of the system. Both in the macroscopic models and in the IDM, there is a considerable density region $`\rho _{\mathrm{cv}}\le \rho _\mathrm{h}\le \rho _{\mathrm{c3}}`$ where traffic is linearly unstable but convectively stable. For the parameters chosen in this paper, congested traffic is always linearly unstable, but convectively stable for flows below $`Q_{\mathrm{cv}}=Q_\mathrm{e}(\rho _{\mathrm{cv}})=1050`$ vehicles/h. A nonzero jam distance $`s_1`$ is required for linearly stable congested traffic with nonzero flows, at least if the model should simultaneously show traffic instabilities.
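The numerical experiment behind Fig. 4 can be sketched as follows: perturb homogeneous ring traffic and observe whether the perturbation grows. Everything here (ring size, perturbation shape, integration scheme, run time) is a simplified stand-in for the actual simulations; the clamping of $`s^{*}`$ to non-negative values is a pragmatic numerical guard, not part of the model definition.

```python
import math

def ring_stability(rho_h, amplitude, n=100, t_max=1000.0, dt=0.1,
                   v0=120/3.6, delta=4.0, T=1.6, a=0.73, b=1.67,
                   s0=2.0, length=5.0):
    """Perturb homogeneous IDM ring traffic of density rho_h (veh/m) by
    displacing one vehicle by `amplitude` (m) and return the final gap
    spread; small amplitudes probe linear, large ones nonlinear stability."""
    C = n / rho_h
    s_h = C / n - length
    lo, hi = 0.0, v0                    # bisection for V_e(s_h) from Eq. (8)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        s_needed = (s0 + mid * T) / math.sqrt(1.0 - (mid / v0) ** delta)
        lo, hi = (mid, hi) if s_needed < s_h else (lo, mid)
    x = [i * C / n for i in range(n)]
    x[0] -= amplitude                   # localized perturbation
    v = [lo] * n
    for _ in range(int(t_max / dt)):
        acc = []
        for i in range(n):
            s = (x[(i + 1) % n] - x[i]) % C - length
            dv = v[i] - v[(i + 1) % n]
            s_star = s0 + T * v[i] + v[i] * dv / (2 * math.sqrt(a * b))
            acc.append(a * (1 - (v[i] / v0) ** delta - (max(s_star, 0) / s) ** 2))
        v = [max(0.0, v[i] + acc[i] * dt) for i in range(n)]
        x = [(x[i] + v[i] * dt) % C for i in range(n)]
    gaps = [(x[(i + 1) % n] - x[i]) % C - length for i in range(n)]
    return min(gaps), max(gaps)

# in the metastable range (assumed here to be around 25 veh/km; check against
# Fig. 4) a small perturbation should decay while a large one grows into a jam:
print(ring_stability(rho_h=0.025, amplitude=1.0))
print(ring_stability(rho_h=0.025, amplitude=15.0))
```

Comparing the final gap spreads for the two amplitudes distinguishes stable, metastable, and linearly unstable densities in the spirit of Fig. 4(a).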
### E Calibration

Besides the vehicle length $`l`$, the IDM has seven parameters, cf. Table I. The fundamental relations of homogeneous traffic are calibrated with the desired velocity $`v_0`$ (low density), the safe time headway $`T`$ (high density), and the jam distances $`s_0`$ and $`s_1`$ (jammed traffic). In the low-density limit $`\rho \ll (v_0T)^{-1}`$, the equilibrium flow can be approximated by $`Q_e\approx v_0\rho `$. In the high-density regime and for $`s_1=0`$, one has a linear decrease of the flow, $`Q_e\approx [1-\rho (l+s_0)]/T`$, which can be used to determine $`(l+s_0)`$ and $`T`$. Only for nonzero $`s_1`$ does one obtain an inflection point in the equilibrium flow-density relation $`Q_e(\rho )`$. The acceleration coefficient $`\delta `$ influences the transition region between the free and congested regimes. For $`\delta \to \mathrm{}`$ and $`s_1=0`$, the fundamental diagram (equilibrium flow-density relation) becomes triangular-shaped: $`Q_e(\rho )=\text{min}(v_0\rho ,[1-\rho (l+s_0)]/T)`$. For decreasing $`\delta `$, it becomes smoother and smoother, cf. Fig. 1(a).

The stability behavior of traffic in the IDM is determined mainly by the maximum acceleration $`a`$, the desired deceleration $`b`$, and by $`T`$. Since the accelerations $`a`$ and $`b`$ do not influence the fundamental diagram, the model can be calibrated essentially independently with respect to traffic flows and stability. As in the GKT model, traffic becomes more unstable for decreasing $`a`$ (which corresponds to an increased acceleration time $`\tau =v_0/a`$), and with decreasing $`T`$ (corresponding to reduced safe time headways). Furthermore, the instability increases with increasing $`b`$. This is also plausible, because an increased desired deceleration $`b`$ corresponds to a less anticipative or less defensive braking behavior. The density and flow in jammed traffic and the outflow from traffic jams are also influenced by $`s_0`$ and $`s_1`$. In particular, for $`s_1=0`$, the traffic flow $`Q_{\mathrm{jam}}`$ inside of traffic jams is typically zero after a sufficiently long time \[Fig. 4(b)\], but nonzero otherwise. The stability of the self-organized outflow $`Q_{\mathrm{out}}`$ depends strongly on the minimum jam distance $`s_0`$. It can be unstable (small $`s_0`$), metastable, or stable (large $`s_0`$). In the latter case, traffic instabilities can only lead to single localized clusters, not to stop-and-go traffic.

## III Microscopic Simulation of Open Systems with an Inhomogeneity

We simulated identical vehicles of length $`l=5`$ m with the typical IDM model parameters listed in Table I. Moreover, although the various congested states discussed in the following were observed on different freeways, all of them were qualitatively reproduced with very restrictive variations of one single parameter (the safe time headway $`T`$), while we always used the same values for the other parameters (see Table I). This indicates that the model is quite realistic and robust. Notice that all parameters have plausible values. The value $`T=1.6`$ s for the safe time headway is slightly lower than suggested by German authorities (1.8 s). The acceleration parameter $`a=0.73`$ m/s<sup>2</sup> corresponds to a free-road acceleration from $`v=0`$ to $`v=100`$ km/h within 45 s, cf. Fig. 2(b). This value is obtained by integrating the IDM acceleration $`\dot{v}=a[1-(v/v_0)^\delta ]`$ with $`v_0=120`$ km/h, $`\delta =4`$, and the initial value $`v(0)=0`$. While this is considerably above minimum acceleration times (10 s - 20 s for average-powered cars), it should be characteristic of everyday accelerations. The comfortable deceleration $`b=1.67`$ m/s<sup>2</sup> is also consistent with empirical investigations and with parameters used in more complex models.
With efficient numerical integration schemes, we obtained a numerical performance of about $`10^5`$ vehicles in real time on a usual workstation.

### A Modelling of Inhomogeneities

Road inhomogeneities can be classified into flow-conserving local defects, like narrow road sections or gradients, and those that do not conserve the average flow per lane, like on-ramps, off-ramps, or lane closings. Non-conserving inhomogeneities can be incorporated into macroscopic models in a natural way by adding a source term to the continuity equation for the vehicle density. An explicit microscopic modelling of on-ramps or lane closings, however, would require a multi-lane model with an explicit simulation of lane changes. Another possibility, opened by the recently formulated micro-macro link, is to simulate the ramp section macroscopically with a source term in the continuity equation, and to simulate the remaining stretch microscopically. In contrast, flow-conserving inhomogeneities can be implemented easily in both microscopic and macroscopic single-lane models by locally changing the values of one or more model parameters, or by imposing external decelerations. Suitable parameters for the IDM and the GKT model are the desired velocity $`v_0`$ or the safe time headway $`T`$. Regions with a locally decreased desired velocity can be interpreted either as sections with speed limits, or as sections with uphill gradients (limiting the maximum velocity of some vehicles). Increased safe time headways can be attributed to more careful driving behavior along curves, on narrow, dangerous road sections, or to a reduced range of visibility.

Local parameter variations act as a bottleneck if the outflow $`Q_{\mathrm{out}}^{\prime}`$ from congested traffic (dynamic capacity) in the downstream section is reduced with respect to the outflow $`Q_{\mathrm{out}}`$ in the upstream section. This outflow can be determined from fully developed stop-and-go waves in a closed system, whose outflow is constant in a rather large range of average densities $`\rho _\mathrm{h}\in `$ \[20 veh./km, 60 veh./km\], cf. Fig. 4(b). It will turn out that the outflow $`Q_{\mathrm{out}}^{\prime}`$ is the relevant capacity for understanding congested traffic, and not the maximum flow $`Q_{\mathrm{max}}`$ (static capacity), which can be reached in (spatially homogeneous) equilibrium traffic only. The capacities are decreased, e.g., for a reduced desired velocity $`v_0^{\prime}<v_0`$, an increased safe time headway $`T^{\prime}>T`$, or both. Figure 5 shows $`Q_{\mathrm{out}}`$ and $`Q_{\mathrm{max}}`$ as functions of $`T`$. For $`T^{\prime}>4`$ s, traffic flow is always stable, and the outflow from jams is equal to the static capacity. For extended congested states, all types of flow-conserving bottlenecks result in a similar traffic dynamics if $`\delta Q=(Q_{\mathrm{out}}-Q_{\mathrm{out}}^{\prime})`$ is identical, where $`Q_{\mathrm{out}}^{\prime}`$ is the outflow for the changed model parameters $`v_0^{\prime}`$, $`T^{\prime}`$, etc. Qualitatively the same dynamics is also observed in macroscopic models including on-ramps, if the ramp flow satisfies $`Q_{\mathrm{rmp}}\approx \delta Q`$. This suggests the following general definition of the “bottleneck strength” $`\delta Q`$:

$$\delta Q:=Q_{\mathrm{rmp}}+Q_{\mathrm{out}}-Q_{\mathrm{out}}^{\prime}.$$ (15)

In particular, we have $`\delta Q=Q_{\mathrm{rmp}}`$ for on-ramp bottlenecks and $`\delta Q=(Q_{\mathrm{out}}-Q_{\mathrm{out}}^{\prime})`$ for flow-conserving bottlenecks, but formula (15) is also applicable to a combination of both.
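Eq. (15) is trivial to encode, but writing it out makes the two limiting cases explicit; the flow values in the usage example are placeholders, not measured capacities.

```python
def bottleneck_strength(Q_rmp=0.0, Q_out=1800.0, Q_out_prime=1800.0):
    """Generalized bottleneck strength, Eq. (15), in vehicles/h: the ramp flux
    plus the capacity drop of a flow-conserving inhomogeneity."""
    return Q_rmp + Q_out - Q_out_prime

# a pure on-ramp bottleneck ...
print(bottleneck_strength(Q_rmp=300.0))                        # -> 300.0
# ... and an equally strong flow-conserving one
print(bottleneck_strength(Q_out=1800.0, Q_out_prime=1500.0))   # -> 300.0
```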
### B Phase Diagram of Traffic States in Open Systems

In contrast to closed systems, in which the long-term behavior and stability are essentially determined by the average traffic density $`\rho _\mathrm{h}`$, the dynamics of open systems is controlled by the inflow $`Q_{\mathrm{in}}`$ to the main road (i.e., the flow at the upstream boundary). Furthermore, traffic congestions depend on road inhomogeneities and, because of hysteresis effects, on the history of previous perturbations. In this paper, we will implement flow-conserving inhomogeneities by a variable safe time headway $`T(x)`$. We chose $`T`$ as the variable model parameter because it influences the flows more effectively than $`v_0`$, which was varied in the earlier macroscopic study. Specifically, we increase the local safe time headway according to

$$T(x)=\{\begin{array}{cc}T\hfill & x\le -L/2\hfill \\ T^{\prime}\hfill & x\ge L/2\hfill \\ T+(T^{\prime}-T)\left(\frac{x}{L}+\frac{1}{2}\right)\hfill & |x|<L/2,\hfill \end{array}$$ (16)

where the transition region of length $`L=600`$ m is analogous to the ramp length for inhomogeneities that do not conserve the flow. The bottleneck strength $`\delta Q(T^{\prime})=[Q_{\mathrm{out}}(T)-Q_{\mathrm{out}}(T^{\prime})]`$ is an increasing function of $`T^{\prime}`$, cf. Fig. 5(a).

We investigated the traffic dynamics for various points $`(Q_{\mathrm{in}},\delta Q)`$ or, alternatively, $`(Q_{\mathrm{in}},T^{\prime}-T)`$ in the control-parameter space. Due to hysteresis effects and multistability, the phase diagram, i.e., the asymptotic traffic state as a function of the control parameters $`Q_{\mathrm{in}}`$ and $`\mathrm{\Delta }T`$, depends also on the history, i.e., on initial conditions and on past boundary conditions and perturbations. Since we cannot explore the whole functional space of initial conditions, boundary conditions, and past perturbations, we used the following three representative “standard” histories.

* History A: Assuming very low values for the initial density and flow, we slowly increased the inflow to the prescribed value $`Q_{\mathrm{in}}`$.
* History B: We started the simulation with a stable pinned localized cluster (PLC) state and a consistent value for the inflow $`Q_{\mathrm{in}}`$. Then, we adiabatically changed the inflow to the values prescribed by the point in the phase diagram.
* History C: After running History A, we applied a large perturbation at the downstream boundary. If traffic at the given phase point is metastable, this initiates an upstream propagating localized cluster which finally crosses the inhomogeneity, see Fig. 7(a)-(c). If traffic is unstable, the breakdown already occurs during time period A, and the additional perturbation has no dynamic influence.

For a given history, the resulting phase diagram is unique. The solid lines of Fig. 6 show the IDM phase diagram for History C. Spatio-temporal density plots of the congested traffic states themselves are displayed in Fig. 7. To obtain the spatio-temporal density $`\rho (x,t)`$ from the microscopic quantities, we generalize the micro-macro relation (12) to define the density at discrete positions $`x_\alpha +\frac{l+s_\alpha }{2}`$ centered between vehicle $`\alpha `$ and its predecessor,

$$\rho \left(x_\alpha +\frac{l+s_\alpha }{2}\right)=\frac{1}{l+s_\alpha },$$ (17)

and interpolate linearly between these positions.
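A sketch of the density construction of Eq. (17), with linear interpolation between the midpoints; the grid resolution is an arbitrary choice.

```python
import numpy as np

def density_field(positions, length=5.0, grid=None):
    """Spatio-temporal density from microscopic data, Eq. (17): the density
    1/(l + s_alpha) is attached to the midpoint between vehicle alpha and its
    leader and interpolated linearly in between."""
    x = np.sort(np.asarray(positions, dtype=float))   # back to front
    gaps = np.diff(x) - length                        # net gaps s_alpha
    midpoints = x[:-1] + (length + gaps) / 2.0
    rho = 1.0 / (length + gaps)
    if grid is None:
        grid = np.linspace(midpoints[0], midpoints[-1], 200)
    return grid, np.interp(grid, midpoints, rho)

x_grid, rho = density_field([0.0, 45.0, 80.0, 110.0, 150.0])
```

Evaluating this at every output time step yields the $`\rho (x,t)`$ fields shown in Fig. 7 without ever averaging detector counts over time.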
Depending on $`Q_{\mathrm{in}}`$ and $`\delta T:=(T^{\prime}-T)`$, the downstream perturbation (i) dissipates, resulting in free traffic (FT), (ii) travels through the inhomogeneity as a moving localized cluster (MLC) and neither dissipates nor triggers new breakdowns, or (iii) triggers a traffic breakdown to a pinned localized cluster (PLC), which remains localized near the inhomogeneity for all times and is either stationary (SPLC), cf. Fig. 8(a) for $`t<0.2`$ h, or oscillatory (OPLC). (iv) Finally, the initial perturbation can induce extended congested traffic (CT), whose downstream boundary is fixed at the inhomogeneity, while the upstream front propagates further upstream in the course of time. Extended congested traffic can be homogeneous (HCT), oscillatory (OCT), or consist of triggered stop-and-go waves (TSG). We also include in the HCT region a complex state (HCT/OCT) where traffic is homogeneous only near the bottleneck, but growing oscillations develop further upstream. In contrast to OCT, where there is permanently congested traffic at the inhomogeneity (“pinch region”), the TSG state is characterized by a series of isolated density clusters, each of which triggers a new cluster as it passes the inhomogeneity.

The maximum perturbation of History C, used also in the earlier macroscopic study, seems always to select the stable extended congested phase. We also scanned the control-parameter space $`(Q_{\mathrm{in}},\delta Q)`$ with Histories A and B, exploring the maximum phase space of the (meta)stable FT and PLC states, respectively, cf. Fig. 6. In multistable regions of the control-parameter space, the three histories can be used to select the different traffic states, see below.

### C Multistability

In general, the phase transitions between free traffic, pinned localized states, and extended congested states are hysteretic. In particular, in all four examples of Fig. 7, free traffic is possible as a second, metastable state. In the regions between the two dotted lines of the phase diagram, Fig. 6, both free and congested traffic are possible, depending on the previous history. In particular, for all five indicated phase points, free traffic would persist without the downstream perturbation. In contrast, the transitions PLC-OPLC and HCT-OCT-TSG seem to be non-hysteretic, i.e., the type of pinned localized cluster or of extended congested traffic is uniquely determined by $`Q_{\mathrm{in}}`$ and $`\delta Q`$.

In a small subset of the metastable region, labelled “TRI” in Fig. 6, we even found tristability with the possible states FT, PLC, and OCT. Figure 8(a) shows that a single moving localized cluster passing the inhomogeneity triggers a transition from PLC to OCT. Starting with free traffic, the same perturbation would trigger OCT as well \[Fig. 8(b)\], while we never found reverse transitions OCT $`\to `$ PLC or OCT $`\to `$ FT (without a reduction of the inflow). That is, FT and PLC are metastable in the tristable region, while OCT is stable. We obtained qualitatively the same result for the macroscopic GKT model with an on-ramp as inhomogeneity \[Fig. 8(c)\]. Furthermore, tristability between FT, OPLC, and OCT has been found for the IDM model with variable $`v_0`$, and for the KKKL model. Such a tristability can only exist if the (self-organized) outflow $`Q_{\mathrm{out}}^{\mathrm{OCT}}=\stackrel{~}{Q}_{\mathrm{out}}`$ from the OCT state is lower than the maximum outflow $`Q_{\mathrm{out}}^{\mathrm{PLC}}`$ from the PLC state.
A phenomenological explanation of this condition can be inferred from the positions of the downstream fronts of the OCT and PLC states shown in Fig. 8(a). The downstream front of the OCT state ($`t>1`$ h) is at $`x\approx 300`$ m, i.e., at the downstream boundary of the $`L=600`$ m wide transition region in which the safe time headway, Eq. (16), increases from $`T`$ to $`T^{\prime}>T`$. Therefore, the local safe time headway at the downstream front of OCT is $`T^{\prime}`$, or $`Q_{\mathrm{out}}^{\mathrm{OCT}}\approx Q_{\mathrm{out}}^{\prime}`$, which was also used to derive Eq. (18) below. In contrast, the PLC state is centered at about $`x=0`$, so that an estimate for the upper boundary $`Q_{\mathrm{out}}^{\mathrm{PLC}}`$ of the outflow is given by the self-organized outflow $`Q_{\mathrm{out}}`$ corresponding to the local value $`T(x)=(T+T^{\prime})/2`$ of the safe time headway at $`x=0`$. Since $`(T+T^{\prime})/2<T^{\prime}`$, this outflow is higher than $`Q_{\mathrm{out}}^{\mathrm{OCT}}`$ (cf. Fig. 5). It is an open question, however, why the downstream front of the OCT state is further downstream compared to the PLC state. Possibly, it can be explained by the close relationship of OCT with the TSG state, for which the newly triggered density clusters even enter the region downstream of the bottleneck \[cf. Fig. 7(c)\]. In accordance with its relative location in the phase diagram, it is plausible that the OCT state has a “penetration depth” into the downstream area that lies in between those of the PLC and TSG states.

### D Boundaries between and Coexistence of Traffic States

Simulations show that the outflow $`\stackrel{~}{Q}_{\mathrm{out}}`$ from the nearly stationary downstream fronts of OCT and HCT satisfies $`\stackrel{~}{Q}_{\mathrm{out}}\le Q_{\mathrm{out}}^{\prime}`$, where $`Q_{\mathrm{out}}^{\prime}`$ is the outflow from fully developed density clusters in homogeneous systems for the downstream model parameters. If the bottleneck is not too strong (in the phase diagram of Fig. 6, it must satisfy $`\delta Q<350`$ vehicles/h), we have $`\stackrel{~}{Q}_{\mathrm{out}}\approx Q_{\mathrm{out}}^{\prime}`$. Then, for all types of bottlenecks, the congested traffic flow is given by $`Q_{\mathrm{cong}}=\stackrel{~}{Q}_{\mathrm{out}}-Q_{\mathrm{rmp}}\approx Q_{\mathrm{out}}^{\prime}-Q_{\mathrm{rmp}}`$, or

$$Q_{\mathrm{cong}}\approx Q_{\mathrm{out}}-\delta Q.$$ (18)

Extended congested traffic (CT) only persists if the inflow $`Q_{\mathrm{in}}`$ exceeds the congested traffic flow $`Q_{\mathrm{cong}}`$. Otherwise, it dissolves to PLC. This gives the boundary

$$\text{CT}\to \text{PLC}:\delta Q\approx Q_{\mathrm{out}}-Q_{\mathrm{in}}.$$ (19)

If the traffic flow of CT states is linearly stable (i.e., $`Q_{\mathrm{cong}}<Q_{\mathrm{c3}}`$), we have HCT. If, for higher flows, it is linearly unstable but convectively stable, $`Q_{\mathrm{cong}}\in [Q_{\mathrm{c3}},Q_{\mathrm{cv}}]`$, one has a spatial coexistence HCT/OCT of states, with HCT near the bottleneck and OCT further upstream. If, for yet higher flows, congested traffic is also convectively unstable, the resulting oscillations lead to TSG or OCT. In summary, the boundaries of the non-hysteretic transitions are given by

$$\begin{array}{cc}\text{HCT}\to \text{HCT/OCT}:\hfill & \delta Q\approx Q_{\mathrm{out}}-Q_{\mathrm{c3}},\hfill \\ \text{OCT}\to \text{HCT/OCT}:\hfill & \delta Q\approx Q_{\mathrm{out}}-Q_{\mathrm{cv}}.\hfill \end{array}$$ (20)

Congested traffic of the HCT/OCT type is frequently found in empirical data. In the IDM, this frequent occurrence is reflected by the wide range of flows falling into this regime.
For the IDM parameters chosen here, we have $`Q_{\mathrm{c3}}=0`$ and $`Q_{\mathrm{cv}}=1050`$ vehicles/h, i.e., all congested states are linearly unstable and oscillations will develop further upstream, while $`Q_{\mathrm{c3}}`$ is nonzero for the parameters of the earlier macroscopic study. Free traffic is (meta)stable in the overall system if it is (meta)stable in the bottleneck region. This means a breakdown necessarily takes place if the inflow $`Q_{\mathrm{in}}`$ exceeds the critical flow $`[Q_{\mathrm{c2}}^{\prime}(\delta Q)-Q_{\mathrm{rmp}}]`$, where the linear stability threshold $`Q_{\mathrm{c2}}^{\prime}(\delta Q)`$ in the downstream region is some function of the bottleneck strength. For the IDM with the parameters chosen here, we have $`Q_{\mathrm{c2}}^{\prime}\approx Q_{\mathrm{out}}^{\prime}`$, see Fig. 4(b). Then, the condition for the maximum inflow allowing for free traffic simplifies to

$$\text{FT}\to \text{PLC or FT}\to \text{CT}:Q_{\mathrm{in}}\approx Q_{\mathrm{out}}-\delta Q,$$ (21)

i.e., it is equivalent to relation (19). In the phase diagram of Fig. 6, this boundary is given by the dotted line. For bottleneck strengths $`\delta Q\le 350`$ vehicles/h, this line coincides with that of the transition CT $`\to `$ PLC, in agreement with Eqs. (21) and (19). For larger bottleneck strengths, the approximation $`\stackrel{~}{Q}_{\mathrm{out}}\approx Q_{\mathrm{out}}^{\prime}`$ used to derive relation (19) is not fulfilled.
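The boundaries (18)-(21) suggest a rough classifier of the expected traffic state for given $`(Q_{\mathrm{in}},\delta Q)`$. The sketch below ignores hysteresis (in the bistable strip the outcome depends on the history) and uses a placeholder value for $`Q_{\mathrm{out}}`$; $`Q_{\mathrm{c3}}`$ and $`Q_{\mathrm{cv}}`$ are the values quoted above.

```python
def classify_state(Q_in, dQ, Q_out=1800.0, Q_c3=0.0, Q_cv=1050.0):
    """Rough traffic-state classification from Eqs. (18)-(21); all flows in
    vehicles/h. Q_out is a placeholder; Q_c3 and Q_cv mimic the quoted values."""
    if Q_in < Q_out - dQ:
        return "FT (or PLC after a large perturbation)"  # relations (19), (21)
    Q_cong = Q_out - dQ                                   # congested flow, Eq. (18)
    if Q_cong < Q_c3:
        return "HCT"                                      # linearly stable
    if Q_cong < Q_cv:
        return "HCT/OCT"                                  # convectively stable only
    return "OCT or TSG"                                   # convectively unstable

print(classify_state(Q_in=1600.0, dQ=800.0))   # strong bottleneck -> HCT/OCT
```

For a real application, $`Q_{\mathrm{out}}`$, $`Q_{\mathrm{c3}}`$, and $`Q_{\mathrm{cv}}`$ would have to be measured or computed from the calibrated model, as described in Secs. II D and III A.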
## IV Empirical Data of Congested Traffic States and their Microscopic Simulation

We analyzed one-minute averages of detector data from the German freeways A5-South and A5-North near Frankfurt, A9-South near Munich, and A8-East from Munich to Salzburg. Traffic breakdowns occurred frequently on all four freeway sections. The data suggest that the congested states depend not only on the traffic situation but also on the specific infrastructure. On the A5-North, we mostly found pinned localized clusters (ten times during the observation period). Besides, we observed moving localized clusters (two times), triggered stop-and-go traffic (three times), and oscillating congested traffic (four times). All eight recorded traffic breakdowns on the A9-South were to oscillatory congested traffic, and all emerged upstream of intersections. The data of the A8-East showed OCT with a more heavily congested HCT/OCT state propagating through it. Besides this, we found breakdowns to HCT/OCT on the A5-South (two times), one of them caused by lane closing due to an external incident. In contrast, HCT states are often found on the Dutch freeway A9 from Haarlem to Amsterdam behind an on-ramp with a very high inflow. Before we present representative data for each traffic state, some remarks about the presentation of the data are in order.

### A Presentation of the Empirical Data

In all cases, the traffic data were obtained from several sets of double-induction-loop detectors recording, separately for each lane, the passage times and velocities of all vehicles. Only aggregate information was stored. On the freeways A8 and A9, the numbers of cars and trucks that crossed a given detector on a given lane in each one-minute interval, and the corresponding average velocities, were recorded. On the freeways A5-South and A5-North, the data are available in the form of a histogram of the velocity distribution. Specifically, the measured velocities are divided into $`n_r`$ ranges ($`n_r=15`$ for cars and 12 for trucks), and the numbers of cars and trucks driving in each range are recorded for every minute. This has the advantage that more “microscopic” information is available compared to one-minute averages of the velocity. In particular, the local traffic density $`\rho ^{\prime}(x,t)=Q/V^{\prime}`$ could be estimated using the “harmonic” mean $`V^{\prime}=1/\langle v_\alpha ^{-1}\rangle `$ of the velocity, instead of the arithmetic mean $`\rho =Q/V`$ with $`V=\langle v_\alpha \rangle `$. Here, $`Q`$ is the traffic flow (number of vehicles per time interval), and $`\langle \cdot \rangle `$ denotes the temporal average over all vehicles $`\alpha `$ passing the detector within the given time interval. The harmonic mean value $`V^{\prime}`$ corrects for the fact that the spatial velocity distribution differs from the locally measured one. However, for better comparison with those freeway data where this information is not available, we will always use the arithmetic velocity average $`V`$ in this paper. Unfortunately, the velocity intervals of the A5 data are coarse. In particular, the lowest interval ranges from 0 to 20 km/h. Because we used the centers of the intervals as estimates for the velocity, there is an artificial cutoff in the corresponding flow-density diagrams 10(b) and 15(c) below the line $`Q_{\mathrm{min}}(\rho )=V_{\mathrm{min}}\rho `$ with $`V_{\mathrm{min}}=10`$ km/h.

Besides time series of flow and velocity and flow-density diagrams, we present the data also in the form of three-dimensional plots of the locally averaged velocity and traffic density as functions of position and time. This representation is particularly useful to distinguish the different congested states by their qualitative spatio-temporal dynamics. Two points are relevant here. First, the smallest time scale of the collective effects (i.e., the smallest period of density oscillations) is of the order of 3 minutes. Second, the spatial resolution of the data is restricted to the typical distance between two neighboring detectors, which is of the order of 1 kilometer. To smooth out the small-timescale fluctuations, and to obtain a continuous function $`Y_{\mathrm{emp}}(x,t)`$ from the one-minute values $`Y(x_i,t_j)`$ of detector $`i`$ at time $`t_j`$, with $`Y=\rho `$, $`V`$, or $`Q`$, we applied for all three-dimensional plots of an empirical quantity $`Y`$ the following smoothing and interpolation procedure:

$$Y_{\mathrm{emp}}(x,t)=\frac{1}{N}\underset{x_i}{}\underset{t_j}{}Y(x_i,t_j)\mathrm{exp}\left\{-\frac{(x-x_i)^2}{2\sigma _x^2}-\frac{(t-t_j)^2}{2\sigma _t^2}\right\}.$$ (22)

The quantity

$$N=\underset{x_i}{}\underset{t_j}{}\mathrm{exp}\left\{-\frac{(x-x_i)^2}{2\sigma _x^2}-\frac{(t-t_j)^2}{2\sigma _t^2}\right\}$$ (23)

is a normalization factor. We used smoothing times and length scales of $`\sigma _t=1.0`$ min and $`\sigma _x=0.2`$ km, respectively. For consistency, we applied this smoothing operation also to the simulation results. Unless explicitly stated otherwise, we will understand all empirical data as lane averages.
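The smoothing and interpolation procedure of Eqs. (22) and (23) amounts to a normalized Gaussian kernel average over the detector grid; a direct transcription, vectorized over the output grid:

```python
import numpy as np

def smooth_field(x_det, t_det, Y, x_grid, t_grid, sigma_x=0.2, sigma_t=1.0):
    """Gaussian smoothing/interpolation of detector data, Eqs. (22)-(23).
    x_det (km) and t_det (min) are the detector positions and sampling times;
    Y[i, j] is the one-minute value of detector i at time t_j."""
    X, Tg = np.meshgrid(np.asarray(x_grid, float), np.asarray(t_grid, float),
                        indexing="ij")
    num = np.zeros_like(X)
    den = np.zeros_like(X)
    for i, xi in enumerate(x_det):
        for j, tj in enumerate(t_det):
            w = np.exp(-((X - xi) ** 2) / (2 * sigma_x ** 2)
                       - ((Tg - tj) ** 2) / (2 * sigma_t ** 2))
            num += w * Y[i, j]
            den += w
    return num / den      # Eq. (22) divided by the normalization N, Eq. (23)
```

Applying the same kernel to the "virtual detector" output of the simulations, as done in the text, guarantees that empirical and simulated 3D plots are directly comparable.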
### B Homogeneous Congested Traffic

Figure 9 shows data of a traffic breakdown on the A5-South on August 6, 1998. Sketch 9(a) shows the considered section. The flow data at cross-section D11 in Fig. 9(b) illustrate that, between 16:20 h and 17:30 h, the traffic flow on the right lane dropped to nearly zero. For a short time interval between 17:15 h and 17:25 h, the flow on the middle lane also dropped to nearly zero. Simultaneously, there was a sharp drop of the velocity at this cross-section on all lanes, cf. Fig. 10(d). In contrast, the velocities at the downstream cross-section D12 remained relatively high during the same time. This suggests a closing of the right lane at a location somewhere between the detectors D11 and D12.

Figures 10(d) and (f) show that, in most parts of the congested region, there was little variation of the velocity. The traffic flow remained relatively high, which is a signature of synchronized traffic. In the immediate upstream (D11) and downstream (D12) neighbourhood of the bottleneck, the amplitude of the fluctuations of the traffic flow was low as well; in particular, it was lower than in free traffic (time series at D11 and D12 for $`t<`$ 16:20 h or $`t>`$ 18:00 h). Further upstream in the congested region (D10), however, the fluctuation amplitude increases. After the bottleneck was removed at about 17:35 h, the previously fixed downstream front started moving in the upstream direction at a characteristic velocity of about 15 km/h. Simultaneously, the flow increased to about 1600 vehicles/h, see plots 10(e) and 10(g). After the congestion dissolved at about 17:50 h, the flow dropped to about 900 vehicles/h/lane, which was the inflow at that time.

Figure 10(a) shows the flow-density diagram of the lane-averaged one-minute data. In agreement with the absence of large oscillations (like stop-and-go traffic), the regions of data points of free and congested traffic are clearly separated. Furthermore, the transition from the free to the congested state and the reverse transition show a clear hysteresis. The spatio-temporal plot of the local velocity in Fig. 11(a) shows that the incident induced a breakdown to an extended state of essentially homogeneous congested traffic. Only near the upstream boundary were there small oscillations. While the upstream front (where vehicles entered the congested region) propagated upstream, the downstream front (where vehicles could accelerate into free traffic) remained fixed at the bottleneck at $`x\approx 478`$ km. In the spatio-temporal plot of the traffic flow, Fig. 11(b), one can clearly see the flow peak in the region $`x>476`$ km after the bottleneck was removed.

We now estimate the point in the phase diagram to which this situation belongs. The average inflow $`Q_{\mathrm{in}}`$, ranging from 1100 vehicles/h at $`t=`$ 16:00 h to about 900 vehicles/h ($`t=`$ 18:00 h), can be determined from an upstream cross-section which is not reached by the congestion, in our case D6. Because the congestion emits no stop-and-go waves, we conclude that the free traffic in the inflow region is stable, $`Q_{\mathrm{in}}(t)<Q_{\mathrm{c1}}`$. We estimate the bottleneck strength $`\delta Q=Q_{\mathrm{out}}-\stackrel{~}{Q}_{\mathrm{out}}\approx `$ 700 vehicles/h by identifying the time- and lane-averaged flow at D11 during the time of the incident (about 900 vehicles/h) with the outflow $`\stackrel{~}{Q}_{\mathrm{out}}`$ from the bottleneck, and the average flow of 1600 vehicles/h during the flow peak (when the congestion dissolved) with the (universal) dynamical capacity $`Q_{\mathrm{out}}`$ on the homogeneous freeway (in the absence of a bottleneck-producing incident). For the short time interval where two lanes were closed, we even have $`\stackrel{~}{Q}_{\mathrm{out}}\approx 500`$ vehicles/h, corresponding to $`\delta Q\approx 1100`$ vehicles/h. (Notice that the lane averages were always carried out over all three lanes, also when lanes were closed.) Finally, we conclude from the oscillations near the upstream boundary of the congestion that the congested traffic flow $`Q_{\mathrm{cong}}=(Q_{\mathrm{out}}-\delta Q)`$ is linearly unstable, but convectively stable.
Thus, the breakdown corresponds to the HCT/OCT regime. We simulated the situation with the IDM parameters from Table I, with upstream boundary conditions taken from the data of cross-section D6 \[cf. Fig. 10(h) and (i)\] and homogeneous von Neumann downstream boundary conditions. We implemented the temporary bottleneck by locally increasing the model parameter $`T`$ to some value $`T^{\prime}>T`$ in a 1 km long section centered around the location of the incident. This section represents the actually closed road section and the merging regions upstream and downstream of it. During the incident, we chose $`T^{\prime}`$ such that the outflow $`\stackrel{~}{Q}_{\mathrm{out}}^{\prime}`$ from the bottleneck agrees roughly with the data of cross-section D11. At the beginning of the simulated incident, we increased $`T^{\prime}`$ abruptly from $`T=1.6`$ s to $`T^{\prime}=5`$ s, and decreased it linearly to 2.8 s during the time interval (70 minutes) of the incident. Afterwards, we assumed again $`T^{\prime}=T=1.6`$ s.

The grey lines of Figs. 10(c) to 10(j) show time series of the simulated velocity and flow at some detector positions. Figures 11(c) and (d) show plots of the smoothed spatio-temporal velocity and flow, respectively. Although, in the microscopic picture, the modelled increase of the safe time headway is quite different from lane changes before a bottleneck, the qualitative dynamics is essentially the same as that of the data. In particular, (i) the breakdown occurred immediately after the bottleneck had been introduced. (ii) As long as the bottleneck was active, the downstream front of the congested state remained stationary and fixed at the bottleneck, while the upstream front propagated further upstream. (iii) Most of the congested region consisted of HCT, but oscillations appeared near the upstream front. The typical period of the simulated oscillations ($`\approx 3`$ min), however, was shorter than that of the measured data ($`\approx `$ 8 min). (iv) As soon as the bottleneck was removed, the downstream front propagated upstream with the well-known characteristic velocity of 15 km/h, and there was a flow peak in the downstream regions until the congestion had dissolved, cf. Figs. 10(c), 10(e), and 11(d). During this time interval, the velocity increased gradually to the value for free traffic.

Some remarks on the apparently non-identical upstream boundary conditions in the empirical and simulated plots 11(a) and 11(c) are in order. In the simulation, the velocity relaxes quickly from its prescribed value at the upstream boundary to a value corresponding to free equilibrium traffic at the given inflow. This is a rather generic effect which also occurs in macroscopic models. The relaxation takes place within the boundary region $`3\sigma _x=0.6`$ km needed for the smoothing procedure (22) and is, therefore, not visible in the figures. Consequently, the boundary conditions for the velocity look different, although they have been taken from the data. In contrast, the traffic flow cannot relax, because of the conservation of the number of vehicles, and the boundary conditions in the corresponding empirical and simulated plots therefore look consistent \[see Figs. 11(b) and (d)\]. These remarks apply also to all other simulations below.
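For reference, the time dependence of the bottleneck parameter used in this simulation can be written as a small helper; the time origin and the choice of minutes as unit are conventions introduced here, not taken from the original setup.

```python
def safe_time_headway(t, t_start=0.0, t_end=70.0, T=1.6,
                      T_jump=5.0, T_final=2.8):
    """Time-dependent safe time headway T'(t) in the 1 km incident section,
    as described above: an abrupt jump to T_jump when the incident begins,
    a linear decrease to T_final while it lasts (times in minutes), and the
    unperturbed value T afterwards."""
    if t < t_start or t >= t_end:
        return T
    frac = (t - t_start) / (t_end - t_start)
    return T_jump + frac * (T_final - T_jump)
```

Combined with the spatial profile of Eq. (16), this fully specifies the flow-conserving stand-in for the temporary lane closure.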
Between I1 and I2 there are three further small junctions, which did not appear to be dynamically relevant. The intersections, however, were major bottleneck inhomogeneities. On virtually every weekday, traffic broke down to oscillatory congested traffic upstream of intersection I2. In addition, we recorded two breakdowns to OCT upstream of I1 during the observation period of 14 days. Figure 12(b) shows a spatio-temporal plot of the smoothed velocity of the OCT state occurring upstream of I2 during the morning rush hour of October 29, 1998. The oscillations with a period of about 12 min are clearly visible in both the time series of the velocity data, plots 12(d)-(f), and the flow, 12(g)-(i). In contrast to the observations of Ref. , the density waves apparently did not merge. Furthermore, the velocity in the OCT state rarely exceeded 50 km/h, i.e., there was no free traffic between the clusters, a signature of OCT in comparison with triggered stop-and-go waves. The clusters propagated upstream at a remarkably constant velocity of 15 km/h, which is nearly the same propagation velocity as that of the detached downstream front of the HCT state described above. Figure 12(c) shows the flow-density diagram of this congested state. In contrast to the diagram 10(b) for the HCT state, there is no separation between the regions of free and congested traffic. Investigation of flow-density diagrams of many other occurrences of HCT and OCT showed that this difference can also be used to empirically distinguish HCT from OCT states. Now we show that this breakdown to OCT can be qualitatively reproduced by a microsimulation with the IDM. As in the previous simulation, we used empirical data for the upstream boundary conditions. (Again, the velocity relaxes quickly to a local equilibrium, and only for this reason does it look different from the data.) We implemented the bottleneck by locally increasing the safe time headway in the downstream region. In contrast to the previous simulations, the local defect causing the breakdown was a permanent inhomogeneity of the infrastructure (namely an intersection and a reduction from three to two lanes) rather than a temporary incident. Therefore, we did not assume any time dependence of the bottleneck. As upstream boundary conditions, we chose the data of D20, the only cross section where there was free traffic during the whole time interval considered here. Furthermore, we used homogeneous von Neumann boundary conditions at the downstream boundary. Without assuming a higher-than-observed level of inflow, the simulations showed no traffic breakdowns at all. Obviously, on the freeway A9 the capacity per lane is lower than on the freeway A5 (which is several hundred kilometers away). This lower capacity has been taken into account by a site-specific, increased value of $`T=2.2`$ s in the upstream region $`x<-0.2`$ km. An even higher value of $`T^{\prime }=2.5`$ s was used in the bottleneck region $`x>0.2`$ km, with a linear increase in the 400 m long transition zone. The corresponding microsimulation is shown in Fig. 13. In this way, we obtained a qualitative agreement with the A9 data. In particular, (i) traffic broke down at the bottleneck spontaneously, in contrast to the situation on the A5. (ii) Similar to the situation on the A5, the downstream front of the resulting OCT state was fixed at the bottleneck while the upstream front propagated further upstream. (iii) The oscillations showed no mergers and propagated upstream at about 15 km/h.
Furthermore, their period (8-10 min) is comparable with that of the data, and the velocity in the OCT region was always much lower than that of free traffic. (iv) After about 1.5 h, the upstream front reversed its propagation direction and eventually dissolved. The downstream front always remained fixed at the permanent inhomogeneity. Since, at no time, is there a clear transition from congested to free traffic in the region upstream of the bottleneck (from which one could determine $`Q_{\mathrm{out}}`$ and compare it with the outflow $`\stackrel{~}{Q}_{\mathrm{out}}=Q_{\mathrm{out}}^{\prime }`$ from the bottleneck), an estimate of the empirical bottleneck strength $`(Q_{\mathrm{out}}-Q_{\mathrm{out}}^{\prime })`$ is difficult. Only at D26, for times around 10:00 h, is there a region where the vehicles accelerate. Using the corresponding traffic flow as a coarse estimate for $`Q_{\mathrm{out}}`$, and the minimum smoothed flow at D26 (occurring between $`t\approx `$ 8:00 h and 8:30 h) as an estimate for $`Q_{\mathrm{out}}^{\prime }`$, leads to an empirical bottleneck strength $`\delta Q`$ of about 400 vehicles/h. This is consistent with the OCT regime in the phase diagram Fig. 6. However, estimating the theoretical bottleneck strength directly from the difference $`(T^{\prime }-T)`$, using Fig. 5, would lead to a smaller value. To obtain full quantitative agreement, it would probably be necessary to calibrate more than just one IDM parameter to the site-specific driver-vehicle behavior, or to explicitly model the bottleneck by on- and off-ramps. ### D Oscillating Congested Traffic Coexisting with Jammed Traffic Figure 14 shows an example of a more complex traffic breakdown that occurred on the freeway A8 East from Munich to Salzburg during the evening rush hour on November 2, 1998. Two different kinds of bottlenecks were involved: (i) a relatively steep uphill gradient from $`x=38`$ km to $`x=40`$ km (“Irschenberg”), and (ii) an incident leading to the closing of one of the three lanes between the cross sections D23 and D24 from $`t=`$ 17:40 h until $`t=`$ 18:10 h. The incident was deduced from the velocity and flow data of the cross sections D23 and D24 as described in Section IV B. As a further inhomogeneity, there is a small junction at about $`x=41.0`$ km. However, since the involved ramp flows were very small, we assumed that the junction had no dynamical effect. The OCT state caused by the uphill gradient had the same qualitative properties as that on the A9 South. In particular, the breakdown was triggered by a short flow peak corresponding to a velocity dip in plot 14(b), the downstream front was stationary while the upstream front moved, and all oscillations propagated upstream with a constant velocity. The combined HCT/OCT state caused by the incident had properties similar to those of the state on the A5 South. In particular, there was HCT near the location of the incident, corresponding to the downstream boundary of the velocity plot 14(b), while oscillations developed further upstream. Furthermore, similarly to the incident on the A5, the downstream front propagated upstream as soon as the incident was cleared. The plot clearly shows that the HCT/OCT state propagated seemingly unperturbed through the OCT state upstream of the permanent uphill bottleneck.
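The stationary bottlenecks in these scenarios (the intersection and lane reduction on the A9, and the uphill stretch here) are implemented as a space-dependent safe time headway. A minimal sketch with the A9 values quoted above; placing the 400 m transition zone between $`x_1=-0.2`$ km and $`x_2=0.2`$ km is an assumption consistent with the text.

```python
def T_profile(x_km, T_up=2.2, T_bn=2.5, x1=-0.2, x2=0.2):
    """Space-dependent safe time headway T(x) [s]: T_up upstream of x1,
    T_bn in the bottleneck region beyond x2, and linear interpolation in
    the 400 m transition zone in between (x in km)."""
    if x_km <= x1:
        return T_up
    if x_km >= x2:
        return T_bn
    return T_up + (T_bn - T_up) * (x_km - x1) / (x2 - x1)
```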
The upstream propagation velocity $`v_g=15`$ km/h of all perturbations in the complex state was remarkably constant, in particular that of (i) the upstream and downstream fronts separating the HCT/OCT state from free traffic (for $`x>40`$ km at $`t\approx `$ 17:40 h and $`\approx `$ 18:10 h, respectively), (ii) the fronts separating the HCT/OCT from the OCT state (35 km $`\le x\le `$ 40 km), and (iii) the oscillations within both the OCT and HCT/OCT states. In contrast, the propagation velocity and direction of the front separating the OCT state from free traffic varied with the inflow. We simulated this scenario using empirical (lane-averaged) data both for the upstream and downstream boundaries. For the downstream boundary, we used only the velocity information. Specifically, when, at some time $`t`$, a simulated vehicle $`\alpha `$ crosses the downstream boundary of the simulated section $`x\in [0,L]`$, we set its velocity to that of the data, $`v_\alpha =V_{D15}(t)`$, if $`x_\alpha \ge L`$, and use the velocity and positional information of this vehicle to determine the acceleration of the vehicle $`\alpha +1`$ behind. Vehicle $`\alpha `$ is taken out of the simulation as soon as vehicle $`\alpha +1`$ has crossed the boundary. Then, the velocity of vehicle $`(\alpha +1)`$ is set to the actual boundary value, and so on; a minimal sketch of this boundary treatment is given below. The downstream boundary conditions are only relevant for the time interval around $`t=`$ 18:00 h where traffic near this boundary is congested. For other time periods, the simulation result is equivalent to using homogeneous von Neumann downstream boundary conditions. We modelled the stationary uphill bottleneck in the usual way by increasing the parameter $`T`$ to a constant value $`T^{\prime }>T`$ in the downstream region. The incident was already reflected by the downstream boundary conditions. Figure 14(c) shows the simulation result in the form of a spatio-temporal plot of the smoothed velocity. Notice that, by using only the boundary conditions and a stationary bottleneck as specific information, we obtained a qualitative agreement with nearly all dynamical collective aspects of the whole complex scenario described above. In particular, for all times $`t<`$ 17:50 h, there was free traffic at both the upstream and downstream detector positions. Therefore, the boundary conditions (the detector data) did not contain any explicit information about the breakdown to OCT inside the road section, which nevertheless was reproduced correctly as an emergent phenomenon. ### E Pinned and Moving Localized Clusters Finally, we consider a 30 km long section of the A5-North depicted in Fig. 15(a). On this section, we found one or more traffic breakdowns on six out of 21 days, all of them Thursdays or Fridays. On three out of 20 days, we observed one or more stop-and-go waves separated by free traffic. The stop-and-go waves were triggered near an intersection and agreed qualitatively with the TSG state of the phase diagram. On one day, two isolated density clusters propagated through the considered region and did not trigger any secondary clusters, which is consistent with moving localized clusters (MLC) and will be discussed below. Moving localized clusters were observed quite frequently on this freeway section. Again, they had a constant upstream propagation speed of about 15 km/h, and a characteristic outflow. In addition, we found four breakdowns to OCT, and ten occurrences of pinned localized clusters (PLC).
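Returning briefly to the congested downstream boundary treatment of the previous subsection, here is a minimal sketch; vehicle objects with attributes `x` and `v` and an interpolating function `V_data` for the measured boundary velocity are assumptions of this illustration.

```python
def update_downstream_boundary(vehicles, L, V_data, t):
    """Congested downstream boundary of Sec. IV D: the foremost vehicle,
    once past x = L, is forced to the measured velocity V_data(t) and still
    acts as leader for the vehicle behind; it is removed as soon as its
    follower has also crossed the boundary. `vehicles` is ordered from the
    rearmost to the foremost vehicle."""
    if not vehicles:
        return
    leader = vehicles[-1]
    if leader.x >= L:
        leader.v = V_data(t)              # pin the leader to boundary data
        if len(vehicles) > 1 and vehicles[-2].x >= L:
            vehicles.pop()                # follower crossed: retire leader
```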
The PLC states emerged either at the intersection I1 (Nordwestkreuz Frankfurt), or 1.5 km downstream of intersection I2 (Bad Homburg) at cross section D13. Furthermore, the downstream fronts of all four OCT states were fixed at the latter location. On August 6, 1998, we found an interesting transition from an OCT state whose downstream front was at D13, to a TSG state with a downstream front at intersection I2 (D15). Consequently, we conclude that, around detector D13, there is a stationary flow-conserving bottleneck with a stronger effect than the intersection itself. Indeed, there is an uphill section and a relatively sharp curve at this location of the A5-North, which may be the reason for the bottleneck. The sudden change of the active bottleneck on August 6 can be explained by perturbations and the hysteresis associated with breakdowns. The different types of traffic breakdowns are consistent with the relative locations of the traffic states in the $`(Q_{\mathrm{in}},\delta Q)`$ space of the phase diagram in Fig. 6. Three of the four occurrences of OCT and two of the three TSG states were on Fridays (August 14 and August 21, 1998), on which traffic flows were about 5% higher than on our reference day (Friday, August 7, 1998), which will be discussed in detail below. Apart from the coexistent PLCs and MLCs observed on the reference day, all PLC states occurred on Thursdays, on which average traffic flows were about 5% lower than on the reference day. No traffic breakdowns were observed on Saturdays to Wednesdays, on which the traffic flows were at least 10% lower than on the reference day. As will be shown below, for complex bottlenecks like intersections, the coexistence of MLCs and PLCs is only possible for flows just above those triggering pure PLCs, but below those triggering OCT states. So we have, with increasing flows, the sequence of FT, PLC, MLC-PLC, and OCT or TSG states, in agreement with the theory. Now, we discuss the traffic breakdowns on August 7, 1998 in detail. Figure 15(b) shows the situation from $`t=`$ 13:20 h until 17:00 h in the form of a spatio-temporal plot of the smoothed density. During the whole time interval, there was a pinned localized cluster at cross section D13. Before $`t=`$ 14:00 h, the PLC state showed distinct oscillations (OPLC), while it was essentially stationary (HPLC) afterwards. Furthermore, two moving localized clusters (MLC) of unknown origin propagated through nearly the whole displayed section and also through a 10 km long downstream section (not shown here), giving a total of at least 30 km. Remarkably, as they crossed the PLC at D13, neither of the congested states seemed to be affected. This complements the observations of Ref. , describing MLC states that propagated unaffected through intersections in the absence of PLCs. As soon as the first MLC state reached the location of the on-ramp of intersection I1 ($`x=488.8`$ km, $`t\approx `$ 15:10 h), it triggered an additional pinned localized cluster, which dissolved at $`t\approx `$ 16:00 h. The second MLC dissolved as soon as it reached the on-ramp of I1 at $`t\approx `$ 16:40 h. Figure 15(c) demonstrates that the MLC and PLC states also have characteristic signatures in the empirical flow-density diagram. As is the case for HCT and OCT, the PLC state is characterized by a two-dimensional flow-density regime (grey squares). In contrast to the former states, however, there is no flow reduction (capacity drop) with respect to free traffic (black bullets).
As is the case for the flow-density diagrams of HCT compared to OCT, it is expected that HPLCs are characterized by an isolated region, while the points of OPLC states lie in a region which is connected to the region for free traffic. During the periods when the MLCs crossed the PLC at D13, the high traffic flow of the PLC state dropped drastically, and the traffic flow had essentially the properties of the MLC, see also the velocity plot 15(e). Therefore, we omitted from the PLC data the points corresponding to these intervals. The black bullets for densities $`\rho >30`$ vehicles/km indicate the region of the MLC (or TSG) states. Due to the aforementioned difficulties in determining the traffic density for very low velocities, the theoretical line $`J`$ given by $`Q_\mathrm{J}(\rho )=Q_{\mathrm{out}}-(Q_{\mathrm{out}}-Q_{\mathrm{jam}})(\rho -\rho _{\mathrm{out}})/(\rho _{\mathrm{jam}}-\rho _{\mathrm{out}})`$ (see Ref. and Fig. 15(b)) is hard to find empirically. In any case, the data suggest that the line $`J`$ would lie below the PLC region. To simulate this scenario, it is important to note that the PLC states occurred in or near the freeway intersections. Because at both intersections the off-ramp is upstream of the on-ramp \[Fig. 15(a)\], the local flow at these locations is lower. In the following, we will investigate the region around I2. During the considered time interval, the average traffic flow of both the on-ramp and the off-ramp was about 300 vehicles per hour and lane. With the exception of the time intervals during which the two MLCs passed by, we have about 1200 vehicles per hour and lane at I2 (D15), and 1500 vehicles per hour and lane upstream (D16) and downstream (D13) of I2. This corresponds to an increase of the effective capacity by $`\delta Q\approx 300`$ vehicles per hour and lane in the region between the off-ramp and the subsequent on-ramp. In the simulation, we captured this qualitatively by decreasing the parameter $`T`$ in a section $`x\in [x_1,x_2]`$ upstream of the empirically observed PLC state. The hypothetical bottleneck located at D13, i.e., about 1 km upstream of the on-ramp, was neglected. Using real traffic flows as upstream and downstream boundary conditions and varying only the model parameter $`T`$ within and outside of the intersection, we could not obtain satisfactory simulation results. This is probably because of the relatively high and fluctuating traffic flow on this highway. It remains to be shown whether simulations with other model parameters can successfully reproduce the empirical data when applying real boundary conditions. Now, we show that the main qualitative feature on this highway, namely the coexistence of pinned and moving localized clusters, can nevertheless be captured by our model. For this purpose, we assume a constant inflow $`Q_{\mathrm{in}}=1390`$ vehicles per hour and lane to the freeway, with the corresponding equilibrium velocity. We initialize the PLC by a triangular-shaped density peak in the initial conditions, and initialize the MLCs by reducing the velocity at the downstream boundary to $`V=12`$ km/h during two five-minute intervals (see the caption of Fig. 16 and the sketch below). Again, we obtained a qualitative agreement with the observed dynamics. In particular, the simulation showed that an increase of the local capacity in a bounded region can also lead to pinned localized clusters. Furthermore, the regions of the MLC and PLC states in the flow-density diagram were reproduced qualitatively, in particular the coexistence of pinned and moving localized clusters.
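A minimal sketch of this initialization; the peak height, peak position, peak width and dip start times are placeholders (the actual values are given in the caption of Fig. 16), while the 12 km/h dip level and the two five-minute intervals are taken from the text.

```python
import numpy as np

def initial_density(x, rho_bg, rho_peak, x0, half_width):
    """Triangular density peak on top of the background density rho_bg,
    centred at x0, used to seed the pinned localized cluster; rho_peak,
    x0 and half_width are illustrative placeholders."""
    bump = np.clip(1.0 - np.abs(x - x0) / half_width, 0.0, None)
    return rho_bg + (rho_peak - rho_bg) * bump

def boundary_velocity(t_min, v_free, dips=(10.0, 60.0), dip_len=5.0, v_dip=12.0):
    """Downstream boundary velocity [km/h]: reduced to 12 km/h during two
    five-minute intervals (start times in `dips` are placeholders) in order
    to trigger the two moving localized clusters; t_min in minutes."""
    for t0 in dips:
        if t0 <= t_min < t0 + dip_len:
            return v_dip
    return v_free
```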
We did not observe such a coexistence in the simpler system underlying the phase diagram in Fig. 6, which did not include a second low-capacity stretch upstream of the high-capacity stretch. To explain the coexistence of PLCs and MLCs in the more complex system consisting of one high-capacity stretch in the middle of two low-capacity stretches, it is useful to interpret the inhomogeneity not in terms of a local capacity increase in the region $`x\in [x_1,x_2]`$, but as a capacity decrease for $`x<x_1`$ and $`x>x_2`$. (For simplicity, we will not explicitly include the 400 m long transition regions of capacity increase at $`x_1`$ and decrease at $`x_2`$ in the following discussion.) Then, the location $`x=x_2`$ can be considered as the beginning of a bottleneck, as in the system underlying the phase diagram. If the width $`(x_2-x_1)`$ of the region with locally increased capacity is larger than the width of PLCs, such clusters are possible under the same conditions as in the standard phase diagram. In particular, traffic in the standard system is stable in regions upstream of a PLC, which is the reason why any additional MLC, triggered somewhere in the downstream region $`x>x_2`$ and propagating upstream, will vanish as soon as it crosses the PLC at $`x=x_2`$. However, this disappearance is not instantaneous; rather, the MLC will continue to propagate upstream for an additional flow-dependent “dissipation distance” or “penetration depth”. If the width $`(x_2-x_1)`$ is smaller than the dissipation distance for MLCs, crossing MLCs will not fully disappear before they reach the upstream region $`x<x_1`$. There, traffic is metastable again, so that the MLCs can persist. Since, in the metastable regime, the outflow of MLCs is equal to their inflow (in this regime, MLCs are equivalent to “narrow” clusters, cf. Ref. ), the passage of the MLC does not change the traffic flow at the position of the PLC, which can, therefore, persist as well. We performed several simulations varying the inflow within the range where PLCs are possible. For smaller inflows, the dissipation distance became smaller than $`(x_2-x_1)`$, and the moving localized cluster was absorbed within the inhomogeneity. An example of this can be seen in Fig. 15(b) at $`t\approx `$ 16:40 h and $`x\approx `$ 489 km. Larger inflows led to an extended OCT state upstream of the capacity-increasing defect, which is also in accordance with the observations. ## V Conclusion In this paper, we investigated to what extent the phase diagram of Fig. 6 can serve as a general description of collective traffic dynamics in open, inhomogeneous systems. The original phase diagram was formulated for on-ramps and resulted from simulations with macroscopic models. By simulations with a new car-following model we showed that one can obtain the same phase diagram from microsimulations. This includes even such subtle details as the small region of tristability. The proposed intelligent-driver model (IDM) is simple, has only a few intuitive parameters with realistic values, reproduces a realistic collective dynamics, and also leads to a plausible “microscopic” acceleration and deceleration behaviour of single drivers. An interesting open question is whether the phase diagram can also be reproduced with cellular automata. We generalized the phase diagram from on-ramps to other kinds of inhomogeneities.
Microsimulations of a flow-conserving bottleneck realized by a locally increased safe time headway suggest that, with respect to collective effects outside of the immediate neighbourhood of the inhomogeneity, all types of bottlenecks can be characterized by a single parameter, the bottleneck strength. This means that the type of traffic breakdown depends essentially on only the two control parameters of the phase diagram, namely the traffic flow and the bottleneck strength. However, in some multistable regions, the history (i.e., the previous traffic dynamics) matters as well. We checked this also by macroscopic simulations with the same type of flow-conserving inhomogeneity and with microsimulations using a locally decreased desired velocity as bottleneck. In all cases, we obtained qualitatively the same phase diagram. What remains to be done is to confirm the phase diagram for microsimulations of on-ramps as well. These can be implemented either by explicit multi-lane car-following models, or, in the framework of single-lane models, by placing additional vehicles in suitable gaps between vehicles in the “ramp” region. By presenting empirical data of congested traffic, we showed that all congested states proposed by the phase diagram have been observed in reality, among them localized and extended states, which can be stationary as well as oscillatory, and moving or pinned localized clusters (MLCs or PLCs, respectively). The data suggest that the typical kind of traffic congestion depends on the specific freeway. This is in accordance with other observations, for example, moving localized clusters on the A5-North, or homogeneous congested traffic (HCT) on the A5-South. In contrast to another empirical study, the frequent oscillating states (OCT) in our empirical data did not show mergers of density clusters, although these can be reproduced with our model for other parameter values. The relative positions of the various observed congested states in the phase diagram were consistent with the theoretical predictions. In particular, when the traffic flow on the freeway is increased, the phase diagram predicts (hysteretic) transitions from free traffic to PLCs, and then to extended congested states. When the various forms of congestion on the A5-North are ordered with respect to the average traffic flow, the observations agree with these predictions. Moreover, given an extended congested state, the phase diagram predicts, with increasing bottleneck strength, (non-hysteretic) transitions from triggered stop-and-go waves (TSG) to OCT, and then to HCT. To show the qualitative agreement with the data, we had to estimate the bottleneck strength $`\delta Q`$. This was done directly, by identifying the bottleneck strength with ramp flows, e.g., on the A5-North, or indirectly, by comparing the outflows from congested traffic with and without a bottleneck, e.g., for the incident on the A5-South. With OCT and TSG on the A5-North, but HCT on the A5-South, where the bottleneck strength was much higher, we again obtained the right behavior. However, one needs a larger base of data to determine an empirical phase diagram, in particular its boundaries between the different traffic states. Such a phase diagram has been proposed for a Japanese highway. Besides PLC states, many breakdowns on this freeway led to extended congested states with fixed downstream and upstream fronts.
We did not observe such states on German freeways and believe that the fixed upstream fronts were the result of a further inhomogeneity, but this remains to be investigated. Our traffic data indicate that the majority of traffic breakdowns are triggered by some kind of stationary inhomogeneity, so that the phase diagram is applicable. Such inhomogeneities can be of a very general nature. They include not only ramps, gradients, lane narrowings or closings, but also incidents in the oppositely flowing traffic. In the latter case, the bottleneck is constituted by a temporary loss of concentration and by braking maneuvers of curious drivers at a fixed location. Of the more than 100 breakdowns on various German and Dutch freeways investigated by us, there were only four cases (among them the two moving localized clusters in Fig. 15) in which we could not explain the breakdowns by some sort of stationary bottleneck within the road sections for which data were available to us. Possible explanations for the breakdowns in the four remaining cases are not only spontaneous breakdowns, but also breakdowns triggered by a nonstationary perturbation, e.g., moving “phantom bottlenecks” caused by two trucks overtaking each other, or inhomogeneities outside the considered sections. Our simulations showed that stationary downstream fronts are a signature of non-moving bottlenecks. Finally, we could qualitatively reproduce the collective dynamics of several rather complex traffic breakdowns by microsimulations with the IDM, using empirical data for the boundary conditions. We varied only a single model parameter, the safe time headway, to adapt the model to the individual capacities of the different roads, and to implement the bottlenecks. We also performed separate macrosimulations with the GKT model and could reproduce the observations as well. Because both models are effective single-lane models, this suggests that lane changes are not essential for reproducing the collective dynamics causing the different types of congested traffic. Furthermore, we assumed identical vehicles and therefore conclude that the heterogeneity of real traffic is also not necessary for the basic mechanism of traffic instability. We expect, however, that other, as yet unexplained, aspects of congested traffic require a microscopic treatment of both multi-lane and heterogeneous traffic. These aspects include the wide scattering of flow-density data (see Fig. 17), the description of platoon formation, and the realistic simulation of speed limits, for which a multi-lane generalization of the IDM seems to be promising. Acknowledgments: The authors thank the BMBF (research project SANDY, grant No. 13N7092) and the DFG (grant No. He 2789) for financial support. We are also grateful to the Autobahndirektion Südbayern and the Hessisches Landesamt für Straßen und Verkehrswesen for providing the freeway data.
# Thermodynamic stability of Fe/O solid solution at inner-core conditions ## Abstract We present a new technique which allows the fully ab initio calculation of the chemical potential of a substitutional impurity in a high-temperature crystal, including harmonic and anharmonic lattice vibrations. The technique uses the combination of thermodynamic integration and reference models developed recently for the ab initio calculation of the free energy of liquids and anharmonic solids. We apply the technique to the case of the substitutional oxygen impurity in h.c.p. iron under Earth’s core conditions, which earlier static ab initio calculations indicated to be thermodynamically very unstable. Our results show that entropic effects arising from the large vibrational amplitude of the oxygen impurity give a major reduction of the oxygen chemical potential, so that oxygen dissolved in h.c.p. iron may be stabilised at concentrations up to a few mol % under core conditions. The thermodynamic stability of oxygen dissolved in iron is a key factor in considering the physics and chemistry of the Earth’s core. We present here a new technique which allows the ab initio calculation of the chemical potential of an impurity in a high-temperature solid solution, including harmonic and anharmonic lattice vibrations. We report the application of the technique to substitutional oxygen dissolved in hexagonal close-packed (h.c.p.) iron at Earth’s core conditions, and we show that the Fe/O solid solution is thermodynamically far more stable than expected from earlier work. The new technique should find wide application to a range of other earth-science problems. It has long been recognised that the liquid outer core must contain a substantial fraction of light impurities, since its density is 6–10 % less than that estimated for pure liquid Fe; similar arguments suggest that the inner core contains a smaller, but still appreciable impurity fraction. The leading impurity candidates are S, Si and O, and arguments have been presented for and against each of them. Ringwood and others have argued strongly on grounds of geochemistry that oxygen must account for a large part of the impurity content. However, it has proved difficult to assess these ideas, because the Fe/O phase diagram is so poorly known at Earth’s core conditions. (For reference, we note that the pressures at the core-mantle boundary, the inner-core boundary (ICB) and the centre of the Earth are 136, 330 and 364 GPa respectively; the temperatures at the core-mantle boundary and the ICB are poorly established, but are believed to be in the region of 4000 and 6000 K respectively.) The thermodynamic stability of dissolved oxygen is governed by the free energy change in the reaction $$(n-1)\,\mathrm{Fe}(\mathrm{solid})+\mathrm{FeO}(\mathrm{solid})\rightarrow \mathrm{Fe}_n\mathrm{O}(\mathrm{solid}\;\mathrm{solution})$$ (1) Let $`\mathrm{\Delta }G`$ be the increase of Gibbs free energy as this reaction goes from left to right, excluding the configurational contribution associated with the randomness of the lattice sites occupied by dissolved O. Then the maximum concentration (number of O atoms per crystal lattice site) at which dissolved O is thermodynamically stable with respect to precipitation of FeO is $`c_{\mathrm{max}}=\mathrm{exp}(-\mathrm{\Delta }G/k_\mathrm{B}T)`$. Several years ago, Sherman used ab initio calculations based on density functional theory (DFT) to calculate the zero-temperature limit of $`\mathrm{\Delta }G`$, i.e.
the enthalpy $`\mathrm{\Delta }H`$ of reaction (1). He found that $`\mathrm{\Delta }H`$ is very large ($`\sim 5`$ eV at the ICB pressure of 330 GPa), and concluded that the concentration of dissolved O in the inner core must be completely negligible. His argument has been widely cited. However, these were static, zero-temperature calculations, which entirely ignored entropic effects. We shall show here that the high-temperature entropy of dissolved O produces such a large reduction of free energy that Sherman’s argument should be treated with caution when considering core temperatures. Our ab initio calculations are based on the well established DFT methods used in virtually all ab initio investigations of solid and liquid Fe, including Sherman’s. We employ the generalised gradient approximation for exchange-correlation energy, as formulated by Perdew et al. , which is known to give very accurate results for the low-pressure elastic, vibrational and magnetic properties of body-centred cubic (b.c.c.) Fe, the b.c.c. $`\rightarrow `$ h.c.p. transition pressure, and the pressure-volume relation for h.c.p. Fe up to over 300 GPa. We use the ultra-soft pseudopotential implementation of DFT with plane-wave basis sets, an approach which has been demonstrated to give results for solid Fe that are virtually identical to those of all-electron DFT methods. Our calculations are performed using the VASP code, which is exceptionally stable and efficient for metals. The technical details of pseudopotentials, plane-wave cut-offs, etc. are the same as in our previous work on the Fe/O system. We first report static zero-temperature results for the enthalpy $`\mathrm{\Delta }H`$ of the reaction (1). Sherman’s results, later confirmed by the present authors, were obtained for the high O concentration of 25 mol %, corresponding to $`n=3`$, but here we wish to focus on the dilute limit, and we take $`n=63`$, which gives a mole fraction of 1.5 %. To do this, we treat a 64-atom supercell with the h.c.p. structure containing a single O substitutional, and we calculate the total ground-state energy and pressure for a sequence of atomic volumes, with all atoms relaxed to their equilibrium positions at each volume. The enthalpy of the pure iron system is obtained from total-energy and pressure calculations for a single unit cell of the h.c.p. crystal. For the FeO crystal, we obtain the enthalpy from total-energy and pressure calculations on a unit cell of the NiAs structure. (The high-pressure stable structure of FeO is believed to be either NiAs or inverse-NiAs; the relative stability of the two structures has been controversial, but our own ab initio calculations indicate that the NiAs structure is slightly more stable at pressures above ca. 145 GPa.) The enthalpy $`\mathrm{\Delta }H`$ is reported as a function of pressure in Fig. 1, where we also show Sherman’s results and our own for the 25 mol % case. We see that $`\mathrm{\Delta }H`$ at 1.5 mol % is between 0.5 and 1.0 eV lower than at 25 mol %, having a value of ca. 4.7 eV at 330 GPa, but this is still very large, and Sherman’s arguments would remain valid if this represented a good estimate of the free energy of reaction (1) at Earth’s core temperatures.
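To make the stability criterion concrete, here is a minimal sketch evaluating $`c_{\mathrm{max}}=\mathrm{exp}(-\mathrm{\Delta }G/k_\mathrm{B}T)`$ at an ICB-like temperature, both for the static estimate above and for the high-temperature value of ca. 1.7 eV obtained later in this paper.

```python
import math

K_B = 8.617333e-5   # Boltzmann constant in eV/K

def c_max(delta_G_eV, T_K):
    """Stability-limit concentration c_max = exp(-Delta G / k_B T)."""
    return math.exp(-delta_G_eV / (K_B * T_K))

# Static (zero-temperature) estimate: Delta G ~ Delta H ~ 4.7 eV
print(c_max(4.7, 6000.0))   # ~ 1e-4, i.e. essentially negligible
# High-temperature result of this work: Delta G ~ 1.7 eV
print(c_max(1.7, 6000.0))   # ~ 0.04, i.e. a few mol %
```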
To get an idea of the freedom of movement of the substitutional O atom, and hence its vibrational entropy, we now perform a series of calculations in which the O atom is displaced by different amounts from its equilibrium site, with all other atoms held fixed at the equilibrium positions they have when the O atom is at its own equilibrium position. Results for the energy variation with displacement along the $`c`$-axis and in the basal plane for the crystal volume of 6.97 Å<sup>3</sup>/atom are shown in Fig. 2. Since we are interested in core temperatures, we also mark the energy $`k_\mathrm{B}T`$ for $`T=6000`$ K. We see that in the energy range set by this $`T`$ the energy surface is extremely anharmonic, with almost vanishing curvature at the equilibrium site and large curvatures for large displacements. Clearly, the vibrational root-mean-square (r.m.s.) O displacement will be $`\sim 0.45`$ Å (our direct ab initio molecular-dynamics calculations confirm this value). For comparison, we estimate the r.m.s. O displacement in FeO at the same $`P`$ and $`T`$ to be $`\sim 0.23`$ Å. This implies a substantial vibrational entropy for the O impurity, because it fits so loosely into the Fe lattice. For the same reason, the vibrations of the 12 Fe neighbours of the O impurity will be softened, and this will also increase the entropy. To make quantitative statements about high-temperature behaviour, we need to calculate the ab initio Gibbs free energies, rather than zero-temperature enthalpies. The ab initio calculation of the Helmholtz free energies $`F`$ of the Fe and FeO perfect crystals is straightforward in the harmonic approximation, since this requires only the energy of the static lattice and the ab initio lattice vibrational frequencies, which we calculate by the small-displacement method, as discussed in several previous papers. From $`F`$, we then directly obtain the Gibbs free energy $`G=F+PV`$, by calculating the pressure $`P`$ as $`-(\partial F/\partial V)_T`$, with $`V`$ the volume. The only difficult part of the present problem is therefore the calculation of $`F`$ (and hence $`G`$) for the $`\mathrm{Fe}_n\mathrm{O}`$ crystal containing the substitutional O atom. This free energy must include the vibrations of many shells of neighbours of the O impurity. The harmonic approximation will clearly not suffice. We meet this challenge by drawing on recently developed ab initio methods for calculating the free energies of liquids and anharmonic solids. These methods rely on two things: empirical reference models, parameterised to accurately mimic the ab initio energies; and the technique of ‘thermodynamic integration’, used to determine free energy differences. Our overall strategy will be to obtain the ab initio free energy $`F_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{AI}}`$ of the O-substitutional system by starting from the ab initio free energy $`F_{\mathrm{Fe}}^{\mathrm{AI}}`$ of pure Fe and using thermodynamic integration to compute the free energy change $`F_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{AI}}-F_{\mathrm{Fe}}^{\mathrm{AI}}`$ that results from converting a single Fe atom into an O atom. We recall briefly that thermodynamic integration is a general technique for calculating the difference of free energies $`F_1-F_0`$ of two systems containing the same number $`N`$ of atoms but having different total-energy functions $`U_0(𝐫_1,𝐫_2,\ldots ,𝐫_N)`$ and $`U_1(𝐫_1,𝐫_2,\ldots ,𝐫_N)`$, with $`𝐫_i`$ $`(i=1,2,\ldots ,N)`$ the atomic positions.
The technique relies on the equivalence between the free energy difference and the reversible work done in switching the total energy function continuously from $`U_0`$ to $`U_1`$. The work done is: $$F_1-F_0=\int _0^1d\lambda \left\langle U_1-U_0\right\rangle _\lambda ,$$ (2) where the thermal average $`\langle \cdots \rangle _\lambda `$ is evaluated in the canonical ensemble generated by the switched energy function $`U_\lambda `$ defined by: $$U_\lambda =(1-\lambda )U_0+\lambda U_1.$$ (3) To apply this in practice, we use molecular dynamics simulation to evaluate the average $`\langle U_1-U_0\rangle _\lambda `$ at a sequence of $`\lambda `$ values and we perform the integration over $`\lambda `$ numerically. In principle, we could calculate $`F_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{AI}}-F_{\mathrm{Fe}}^{\mathrm{AI}}`$ by identifying $`U_0`$ and $`U_1`$ as the ab initio total energy functions $`U_{\mathrm{Fe}}^{\mathrm{AI}}`$ and $`U_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{AI}}`$ of the pure-Fe and O-substitutional systems, but this brute-force approach is computationally prohibitive at present. It is also unnecessary, since exactly the same result can be achieved much more cheaply by using empirical reference models. In our recent work on liquid Fe, we have found that a simple inverse-power pair potential $`\varphi (r)=A/r^\alpha `$ reproduces the ab initio total energy very accurately; for the anharmonic high-temperature Fe crystal, a linear combination of this pair-potential model with an ab initio harmonic description has been very effective. We denote the total energy of this latter anharmonic model by $`U_{\mathrm{Fe}}^{\mathrm{ref}}(𝐫_1,\ldots ,𝐫_N)`$, where $`𝐫_i`$ are the atomic positions. In order to make a reference system for the Fe/O system containing a single substitutional O atom, whose total energy is $`U_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{ref}}(𝐫_1,\ldots ,𝐫_N)`$ ($`𝐫_1`$ is the position of the O atom), we simply delete all terms in $`U_{\mathrm{Fe}}^{\mathrm{ref}}`$ involving atom 1 and replace them with a pair interaction potential $`\chi (r)=B/r^\beta `$. All parts of $`U_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{ref}}`$ involving the $`N-1`$ Fe atoms $`2,3,\ldots ,N`$ remain precisely as they are in $`U_{\mathrm{Fe}}^{\mathrm{ref}}`$. Our procedure for determining $`B`$ and $`\beta `$ starts by requiring that the pair potential $`\chi (r)`$ should reproduce as well as possible the dependence of the total energy on the position of the O atom with all Fe atoms held fixed, i.e. the curves shown in Fig. 2. The values for $`B`$ and $`\beta `$ thus obtained give us an initial form for $`U_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{ref}}`$. This initial $`U_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{ref}}`$ is then used in a classical MD simulation to generate a long trajectory at the temperature of interest, from which we take 100 statistically independent configurations. The full ab initio energies are calculated for these configurations, and the $`B`$ and $`\beta `$ parameters are readjusted to give a least squares fit to these energies. Finally, the $`U_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{ref}}`$ obtained from the new $`B`$, $`\beta `$ is used to generate a further 100 statistically independent configurations, and $`B`$ and $`\beta `$ are adjusted once more to fit the ab initio energies of these configurations. The $`B`$ and $`\beta `$ produced by this final step are found to be essentially identical to those in the previous step, and we accept them as the optimal values for the assumed form of $`\chi (r)`$.
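A minimal numerical sketch of Eq. (2): the thermal averages at each $`\lambda `$ would come from MD runs, and the values used here are purely illustrative.

```python
import numpy as np

def thermodynamic_integration(lambdas, mean_dU):
    """F_1 - F_0 = integral_0^1 dlambda <U_1 - U_0>_lambda, evaluated by
    trapezoidal quadrature over the sampled switching-parameter values."""
    return np.trapz(mean_dU, lambdas)

# Five switching points; the <U_1 - U_0>_lambda values [eV] are hypothetical:
lams = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
dU = np.array([1.30, 1.10, 0.95, 0.85, 0.80])
print(thermodynamic_integration(lams, dU))   # -> 0.9875 eV (illustrative)
```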
The free energy difference $`F_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{AI}}-F_{\mathrm{Fe}}^{\mathrm{AI}}`$ between the Fe/O and pure Fe systems is now expressed as the sum of three contributions: the difference $`F_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{ref}}-F_{\mathrm{Fe}}^{\mathrm{ref}}`$ between the reference systems, and the two differences $`F_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{AI}}-F_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{ref}}`$ and $`F_{\mathrm{Fe}}^{\mathrm{ref}}-F_{\mathrm{Fe}}^{\mathrm{AI}}`$ between the ab initio and reference systems. All three of these differences are calculated by thermodynamic integration. We emphasise that, although the reference systems play a vital role, the final result for $`F_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{AI}}-F_{\mathrm{Fe}}^{\mathrm{AI}}`$ does not depend on how they are chosen. Thermodynamic integration to get $`F_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{ref}}-F_{\mathrm{Fe}}^{\mathrm{ref}}`$ is easy and rapid, since only simple potential models are involved. We have tested size effects by using simulated systems containing up to 768 atoms, but we find that with only 288 atoms the size-effect errors are less than the statistical errors of ca. 30 meV. For the ab initio/reference differences $`F_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{AI}}-F_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{ref}}`$ and $`F_{\mathrm{Fe}}^{\mathrm{AI}}-F_{\mathrm{Fe}}^{\mathrm{ref}}`$, the fluctuations of $`U_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{AI}}-U_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{ref}}`$ and $`U_{\mathrm{Fe}}^{\mathrm{AI}}-U_{\mathrm{Fe}}^{\mathrm{ref}}`$ are so small that explicit thermodynamic integration over $`\lambda `$ is unnecessary, and we can use instead the small-$`\lambda `$ approximation explained elsewhere. We have studied the size errors for these ab initio/reference differences, and we find that results obtained with 36, 64 and 96 atoms are identical within statistical errors. The overall statistical error on the ab initio difference $`F_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{AI}}-F_{\mathrm{Fe}}^{\mathrm{AI}}`$ is ca. 90 meV. We have repeated all the above calculations at the four volumes 6.86, 6.97, 7.20 and 7.40 Å<sup>3</sup>/atom, and from the dependence on volume we obtain the pressure change on replacing Fe by O, and hence the Gibbs free energy difference $`G_{\mathrm{Fe}/\mathrm{O}}^{\mathrm{AI}}-G_{\mathrm{Fe}}^{\mathrm{AI}}`$. Our calculated Gibbs free energies $`\mathrm{\Delta }G`$ for reaction (1) are displayed in Fig. 1 for the two temperatures 5000 and 7000 K. We note the very large entropic lowering of $`\mathrm{\Delta }G`$, which, at $`P=330`$ GPa, comes down from 4.7 eV to ca. 1.7 eV at the temperature $`T\approx 6000`$ K expected at the ICB. This is still a substantial positive value, but it implies that the stability-limit concentration $`c_{\mathrm{max}}=\mathrm{exp}(-\mathrm{\Delta }G/k_\mathrm{B}T)`$ is ca. 3 mol %, which is far from negligible. In assessing our $`c_{\mathrm{max}}`$ value, one should note the remaining uncertainties in our calculations. First, we have ignored anharmonicity in the pure Fe and FeO crystals. Our recent work on the effect of anharmonicity in pure Fe showed that, at the melting point, anharmonicity can contribute as much as 70 meV/atom to the free energy; the same might be true of FeO. These effects could shift $`\mathrm{\Delta }G`$ by perhaps 0.15 eV. Second, there is the question of strong electronic correlation in FeO, which is a prototypical Mott insulator at low pressures.
Such correlation effects will be much weakened at Earth’s core pressures, but could still shift $`\mathrm{\Delta }G`$ by a few tenths of an eV. This means that our prediction for $`c_{\mathrm{max}}`$ at a given temperature is probably not reliable to better than a factor of 3. We are therefore cautious about the detailed numerical value of $`c_{\mathrm{max}}`$, and claim only that it could be a few mol % at ICB pressure and temperature. In summary, we conclude that, because the substitutional oxygen atom fits so loosely into the Fe lattice and has so much freedom of movement, it undergoes a very large entropic lowering of free energy at high temperatures, this lowering being as much as 3 eV at 6000 K and 330 GPa. The consequence is that substitutional O dissolved in h.c.p. Fe may be thermodynamically stabilised at concentrations up to a few mol %. Earlier ab initio calculations which ignored entropic effects should therefore not be taken at face value. Finally, we point out that a wide range of geological problems depend on an understanding of chemical potentials – for example, all problems concerned with the partitioning of elements between coexisting phases. The ab initio techniques for calculating chemical potentials outlined here should therefore be of wide interest. Acknowledgments. The work of D.A. was supported by NERC Grant No. GR3/12083. Allocations of time on the Cray T3E machines at the Manchester CSAR service and at Edinburgh Parallel Computer Centre were provided by the UK Car-Parrinello Consortium (EPSRC grant GR/M01753) and the Minerals Physics Consortium (NERC grant GST/02/1002). Some of the calculations were performed at the UCL HiPerSPACE Centre, partially funded by the Joint Research Equipment Initiative.
## 1 Introduction Recently there have been proposals to reconstruct the quantum state of electromagnetic fields inside cavities. The reconstruction of non-classical states is a central topic in quantum optics and related fields, and there have been a number of proposals to achieve it (see for instance ). In fact, the full reconstruction of nonclassical field states as well as of (motional) states of an ion have been experimentally accomplished. The reconstruction is normally achieved through a finite set of either field homodyne measurements, or selective measurements of atomic states. However, the presence of noise and dissipation normally has destructive effects. In fact, the reconstruction schemes themselves also indicate loss of coherence in quantum systems. Schemes for compensation of losses have already been proposed, and the relation between losses and $`s`$-parametrized quasiprobabilities has been pointed out in ref . A method of reconstruction of the Wigner function that takes into account losses (at $`T=0`$) has also been presented. We consider here a single mode high-$`Q`$ cavity where we suppose that a nonclassical field is prepared. The first step of our method consists in driving the generated state by a coherent pulse. The reconstruction of the field is done after turning off the driving field, i.e. at a time when the cavity field has interacted with its environment at finite temperature. We show that by measuring the density matrix diagonal elements and properly weighting them, we can obtain the Wigner function directly, even at $`T\ne 0`$. We should remark that to know a state, one has to have information about all the density matrix elements (diagonal and off-diagonal); however, with the method presented here (see also ), it is only necessary to have information about diagonal matrix elements. ## 2 Master equation and its solution The master equation in the interaction picture for the reduced density operator $`\widehat{\rho }`$ relative to a driven cavity mode, taking into account cavity losses at non-zero temperature and under the Born-Markov approximation, is given by (in a frame rotating at the field frequency $`\omega `$) $$\frac{\partial \widehat{\rho }}{\partial t}=(\widehat{\mathcal{L}}+\widehat{\mathcal{D}})\widehat{\rho },$$ (1) where $$\widehat{\mathcal{L}}\widehat{\rho }=(\widehat{\mathcal{L}}_1+\widehat{\mathcal{L}}_2)\widehat{\rho }$$ (2) with $$\widehat{\mathcal{L}}_1\widehat{\rho }=\frac{\gamma (\overline{n}+1)}{2}\left(2\widehat{a}\widehat{\rho }\widehat{a}^{\dagger }-\widehat{a}^{\dagger }\widehat{a}\widehat{\rho }-\widehat{\rho }\widehat{a}^{\dagger }\widehat{a}\right),\qquad \widehat{\mathcal{L}}_2\widehat{\rho }=\frac{\gamma \overline{n}}{2}\left(2\widehat{a}^{\dagger }\widehat{\rho }\widehat{a}-\widehat{a}\widehat{a}^{\dagger }\widehat{\rho }-\widehat{\rho }\widehat{a}\widehat{a}^{\dagger }\right),$$ (3) and $$\widehat{\mathcal{D}}\widehat{\rho }=-\frac{i}{\hbar }[\widehat{H},\widehat{\rho }],$$ (4) where $$\widehat{H}=i\hbar \left(\alpha ^{*}\widehat{a}-\alpha \widehat{a}^{\dagger }\right).$$ (5) $`\widehat{a}`$ and $`\widehat{a}^{\dagger }`$ are the annihilation and creation operators, $`\gamma `$ the (cavity) decay constant, $`\overline{n}`$ is the mean number of thermal photons and $`\alpha `$ the amplitude of the driving field. ### 2.1 Displacing the field The formal solution to Eq. (1) is given by (see for instance ) $$\widehat{\rho }(t)=\mathrm{exp}\left[(\widehat{\mathcal{L}}+\widehat{\mathcal{D}})t\right]\widehat{\rho }(0).$$ (6) It is not difficult to show that Eq.
(6) can be factorized into the product of two exponentials, one containing the reservoir (super)operators and the other the interaction (5), the latter one yielding an effective displacement on the initial field. In order to show this we calculate the commutator $$[\widehat{\mathcal{L}},\widehat{\mathcal{D}}]\widehat{\rho }=\frac{\gamma }{2}\widehat{\mathcal{D}}\widehat{\rho },$$ (7) which allows the factorization $$\widehat{\rho }(t)=\mathrm{exp}(\widehat{\mathcal{L}}t)\mathrm{exp}\left[\frac{2\widehat{\mathcal{D}}}{\gamma }(1-e^{-\gamma t/2})\right]\widehat{\rho }(0).$$ (8) After driving the initial field for a time $`t`$, the resulting field density operator will read $$\widehat{\rho }(t)=e^{\widehat{\mathcal{L}}t}\widehat{\rho }_\beta (0),$$ (9) where $$\widehat{\rho }_\beta (0)=\widehat{D}^{\dagger }(\beta )\widehat{\rho }(0)\widehat{D}(\beta ),$$ (10) with the effective (displacing) amplitude $$\beta =2\alpha \frac{1-e^{-\gamma t/2}}{\gamma }.$$ (11) ### 2.2 Solution to the Master Equation at finite temperature We now evaluate the density matrix of Eq. (9) by defining $$\widehat{J}_{-}\widehat{\rho }=\widehat{a}\widehat{\rho }\widehat{a}^{\dagger },\qquad \widehat{J}_{+}\widehat{\rho }=\widehat{a}^{\dagger }\widehat{\rho }\widehat{a},\qquad \widehat{J}_3\widehat{\rho }=\widehat{a}^{\dagger }\widehat{a}\widehat{\rho }+\widehat{\rho }\widehat{a}^{\dagger }\widehat{a}+\widehat{\rho },$$ (12) where the superoperators $`\widehat{J}_{-}`$, $`\widehat{J}_{+}`$ and $`\widehat{J}_3`$ obey the commutation relations $`[\widehat{J}_{-},\widehat{J}_{+}]\widehat{\rho }=\widehat{J}_3\widehat{\rho }`$ and $`[\widehat{J}_3,\widehat{J}_\pm ]\widehat{\rho }=\pm 2\widehat{J}_\pm \widehat{\rho }`$. The density matrix can then be written as $$\widehat{\rho }(t)=e^{\frac{\gamma t}{2}}e^{\mathrm{\Gamma }_{\overline{n}}(t)\widehat{J}_{+}}\left[\frac{e^{-\gamma t/2}}{1+N_t}\right]^{\widehat{J}_3}e^{\mathrm{\Gamma }_{\overline{n}+1}(t)\widehat{J}_{-}}\widehat{\rho }_\beta (0),$$ (13) where we have defined $$\mathrm{\Gamma }_{\overline{n}}(t)=\frac{\overline{n}(1-e^{-\gamma t})}{1+N_t},\qquad \mathrm{\Gamma }_{\overline{n}+1}(t)=\frac{(\overline{n}+1)(1-e^{-\gamma t})}{1+N_t},$$ (14) with $`N_t=\overline{n}(1-e^{-\gamma t})`$.
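As a small numerical illustration of Eq. (11), the following sketch evaluates the effective displacement accumulated during the driving stage; the parameter values are chosen arbitrarily.

```python
import math

def effective_displacement(alpha, gamma, t):
    """beta = 2*alpha*(1 - exp(-gamma*t/2))/gamma, Eq. (11): the coherent
    displacement accumulated while the damped mode is driven for a time t."""
    return 2.0 * alpha * (1.0 - math.exp(-gamma * t / 2.0)) / gamma

# For gamma*t << 1 the displacement reduces to alpha*t, as expected:
print(effective_displacement(1.0, 0.01, 1.0))   # ~ 0.9975 ~ alpha*t
```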
## 3 The reconstruction method We now calculate the diagonal density matrix elements $`\langle m|\widehat{\rho }(t)|m\rangle `$ from (13): $$\langle m|\widehat{\rho }_\beta (t)|m\rangle =\frac{1}{1+N_t}\sum _{k=0}^{\infty }\sum _{n=0}^{\infty }\left(\begin{array}{c}k\\ n\end{array}\right)\left(\begin{array}{c}m\\ n\end{array}\right)[\mathrm{\Gamma }_{\overline{n}}(t)]^{m-n}[\mathrm{\Gamma }_{\overline{n}+1}(t)]^{k-n}\frac{e^{-n\gamma t}}{[1+N_t]^{2n}}\langle k|\widehat{\rho }_\beta (0)|k\rangle .$$ (15) Multiplying the expression above by powers of the weight function $`\chi _s`$, where $$\chi _s=\frac{\frac{s+1}{s-1}-\mathrm{\Gamma }_{\overline{n}+1}(t)}{\frac{e^{-\gamma t}}{[1+N_t]^2}+\mathrm{\Gamma }_{\overline{n}}(t)\left(\frac{s+1}{s-1}-\mathrm{\Gamma }_{\overline{n}+1}(t)\right)},$$ (16) and summing over $`m`$, we obtain $$F(\beta ;s)=\sum _{m=0}^{\infty }\chi _s^m\langle m|\widehat{\rho }_\beta (t)|m\rangle =\frac{1}{[1+N_t][1-\chi _s\mathrm{\Gamma }_{\overline{n}}(t)]}\sum _{k=0}^{\infty }\left(\frac{s+1}{s-1}\right)^k\langle k|\widehat{\rho }_\beta (0)|k\rangle ,$$ (17) where we have used the fact that $$\sum _{m=0}^{\infty }\left(\begin{array}{c}m\\ n\end{array}\right)x^m=\frac{x^n}{[1-x]^{n+1}}.$$ (18) If we now multiply $`F(\beta ;s)`$ by the quantity $$\frac{2[1+N_t][1-\chi _s\mathrm{\Gamma }_{\overline{n}}(t)]}{\pi (1-s)},$$ (19) we finally obtain $$W(\beta ;s)=\frac{2}{\pi (1-s)}\sum _{k=0}^{\infty }\left(\frac{s+1}{s-1}\right)^k\langle k|\widehat{\rho }_\beta (0)|k\rangle ,$$ (20) which is the $`s`$-parametrized quasiprobability distribution. Therefore, by measuring the diagonal elements of the evolved field density matrix, Eq. (15), we may obtain complete information on the initial state. We should remark that in this case (thermal environment), more information on the $`P_m(t)=\langle m|\widehat{\rho }(t)|m\rangle `$ is required than in the zero-temperature case. Nevertheless, it is possible to find a weight function that allows the reconstruction of the initial field. ### 3.1 Measuring the photon distribution For completeness, we suggest a way to measure the photon number distribution $`P_m(\beta ,t)`$ of Eq. (17). It is not difficult to show that the atomic inversion for the case of a three-level atom in a cascade configuration, with the upper and the lower levels having the same parity and satisfying the two-photon resonance condition, is given by (see for instance ) $$W(\beta ;t+\tau )\propto \sum _{m=0}^{\infty }P_m(\beta ,t)\mathrm{cos}([2m+3]\lambda \tau ),$$ (21) where $`\lambda `$ is the atom-field coupling constant. In order to obtain $`P_m`$ from a family of measured population inversions, we invert the Fourier series in Eq. (21), or $$P_m(\beta ,t)=\frac{2\lambda }{\pi }\int _0^{\tau _{max}}d\tau \,W(t+\tau )\mathrm{cos}([2m+3]\lambda \tau ).$$ (22) We need a maximum interaction time $`\tau _{max}=\pi /\lambda `$ much shorter than the cavity decay time, which implies we must be in the strong-coupling regime, i.e. $`\lambda \gg \gamma `$. ## 4 Conclusions We have presented a method to reconstruct the Wigner function (and in general any quasiprobability distribution) of an initial nonclassical state at times when the field would normally have lost its quantum coherence because of the interaction with an environment at finite temperature. This is an extension of our previous work, where we considered the interaction with the environment at zero temperature. The crucial point of our approach is the appropriate weighting of the evolved (driven and decayed) photon number distribution.
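A minimal numerical sketch of this weighting, Eqs. (16)-(20), assuming the measured photon statistics $`P_m`$ are already available as an array; the sum over $`m`$ is simply truncated at the array length.

```python
import numpy as np

def weight_and_params(s, gamma_t, n_bar):
    """chi_s of Eq. (16), together with N_t and Gamma_nbar(t) of Eq. (14)."""
    decay = 1.0 - np.exp(-gamma_t)
    N_t = n_bar * decay
    Gam_n = n_bar * decay / (1.0 + N_t)
    Gam_n1 = (n_bar + 1.0) * decay / (1.0 + N_t)
    A = (s + 1.0) / (s - 1.0) - Gam_n1
    chi = A / (np.exp(-gamma_t) / (1.0 + N_t) ** 2 + Gam_n * A)
    return chi, N_t, Gam_n

def quasiprobability(P_m, s, gamma_t, n_bar):
    """W(beta; s) from the measured diagonals P_m = <m|rho_beta(t)|m>,
    following Eqs. (17)-(20); truncation error is neglected."""
    chi, N_t, Gam_n = weight_and_params(s, gamma_t, n_bar)
    F = np.sum(np.asarray(P_m) * chi ** np.arange(len(P_m)))
    return 2.0 * (1.0 + N_t) * (1.0 - chi * Gam_n) * F / (np.pi * (1.0 - s))
```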
Driving the initial field immediately after preparation is not only useful for covering a region in phase space; together with the weight function $`\chi _s`$, it also makes it possible to store quantum coherences in the diagonal elements of the time-evolved density matrix. We have shown here that it is possible to find a weight function that allows the reconstruction of the initial field even at finite temperature, and this is the main result of our paper. Similar conclusions may be reached by employing the method of generating functions. The possibility of reconstructing quantum states when the system interacts with an environment may be relevant for applications in quantum computing. Loss of coherence from such interactions is likely to occur in such devices, and our method could be used, for instance, as a scheme to refresh the state of a quantum computer in order to minimize the destructive action of a hot environment. Acknowledgments This work was partially supported by CONACYT (Mexico), FAPESP and CNPq (Brazil).
# Fast Quantum Search Algorithms in Protein Sequence Comparison - Quantum Biocomputing ## Abstract Quantum search algorithms are considered in the context of protein sequence comparison in biocomputing. Given a sample protein sequence of length $`m`$ (i.e. $`m`$ residues), the problem considered is to find an optimal match in a large database containing $`N`$ residues. Initially, Grover’s quantum search algorithm is applied to a simple illustrative case - namely where the database forms a complete set of states over the $`2^m`$ basis states of an $`m`$-qubit register, and thus is known to contain the exact sequence of interest. This example demonstrates explicitly the typical $`O(\sqrt{N})`$ speedup on the classical $`O(N)`$ requirements. An algorithm is then presented for the (more realistic) case where the database may contain repeat sequences, and may not necessarily contain an exact match to the sample sequence. In terms of minimizing the Hamming distance between the sample sequence and the database subsequences, the algorithm finds an optimal alignment, in $`O(\sqrt{N})`$ steps, by employing an extension of Grover’s algorithm, due to Boyer, Brassard, Høyer and Tapp, for the case when the number of matches is not a priori known. The fantastic possibilities of quantum parallelism in computing, suggested by the convergence of quantum mechanics and information theory in the past two decades, are fast being enumerated in the guise of quantum algorithms. First and foremost among these is the factoring algorithm of Shor, which provided great impetus to the field of quantum computing. Shor’s algorithm applied to a given number $`N`$ requires $`O((\mathrm{log}N)^3)`$ steps, and represents an exponential speed-up over the best classical algorithms. Another important result, due to Grover, was the discovery of a quantum search algorithm for finding a particular element in an unordered set of $`N`$ elements in only $`O(\sqrt{N})`$ steps - a significant improvement over the classical cost $`O(N)`$. In this paper the application of quantum search algorithms to an important problem at the heart of biocomputing (or bioinformatics), that of protein sequence comparison and alignment, is considered. As the mapping and sequencing of the human genome (some $`3\times 10^9`$ base pairs) nears completion, the importance of the relatively new field of biocomputing (or bioinformatics) to the quantitative analysis of this vast amount of data has become obvious. Some fundamental tasks in biocomputing involving sequence analysis include: searching databases in order to compare a new sub-sequence to existing sequences, inferring protein sequence from DNA sequence, and the calculation of sequence alignments in the analysis of protein structure and function. A tremendous amount of computing is required, much of which is devoted to search-type problems, either directly in large databases, or in the configuration space of alignment possibilities. While it is possible that all of these problems may be amenable to quantum algorithmic speed-up, it is explicitly demonstrated in this work how the fundamental task of sequence alignment can be approached using a quantum computer. Indeed, this problem is a very natural application of the quantum search algorithm (perhaps a strange reflection of the possibility that the machinery of DNA itself may actually function using quantum search algorithms). In general terms, Grover’s search algorithm relies on the existence of a quantum computer $`Q`$ operating using an oracle function $`F`$.
The set of search possibilities is represented by states in the Hilbert space of $`Q`$. The oracle function simply tests whether a given state is the actual target state. Grover found a unitary operator $`U`$ (involving the oracle function test) which evolves the quantum computer in such a way that the amplitude of the target state in the wave function of $`Q`$ is amplified. Furthermore, Grover showed that there exists a number $`k<\sqrt{N}`$ such that, after $`k`$ applications of $`U`$, the probability of finding the target state is at least 1/2. Subsequently, Boyer, Brassard, Høyer and Tapp (BBHT) proved a tighter bound: one must iterate the algorithm on average at least $`\left(\mathrm{sin}\frac{\pi }{8}\right)\sqrt{N}`$ times to achieve a probability of 1/2 of finding the target.

To begin the application of quantum search algorithms to protein sequence analysis, the problem of aligning a sequence against a large database of sequence domains is considered. That is, given a sample sequence, the task is to find the location in the database of an exact or closest match (with respect to some defined measure). Applying the Grover algorithm directly to this search task causes trouble immediately because, by definition, it is not known whether the target exists in the database, or whether it actually occurs multiple times. If there are actually $`N_t`$ solutions, the number of iterations required to find a solution with probability 1/2 is $`\left(\mathrm{sin}\frac{\pi }{8}\right)\sqrt{N/N_t}`$. Thus, if one does not know the number of solutions at the outset, the computer may inadvertently be halted when the amplitude of the target states is very small. This happens because the process of amplitude amplification is not monotonic, but rather oscillates with the number of iterations. Fortunately, this impasse has been resolved by BBHT, who provide an algorithm, based on Grover's algorithm as a subroutine, for finding a solution in the case where the number of solutions is unknown. This result allows for the application of quantum search algorithms to the field of biocomputing.

In terms of protein sequences, the human genome is composed of about 150,000 domains, each containing on average 300 residues (amino acids). An interesting feature of approaching the sequence analysis problem using a quantum computer is that the entire database could in principle be stored in a single wave-function superposition, and then be presented simultaneously for inspection. To illustrate the basic idea, a very simple case of sequence comparison is considered first, followed by a more realistic problem later. Consider a database, $`D`$, constructed from the domains of the human genome placed end-to-end, so that a continuous list of $`N`$ residues, $`D=\{R_0,R_1,\mathrm{},R_{N-1}\}`$, is created. Independently, a sample sequence $`s=\{r_0,r_1,\mathrm{},r_{m-1}\}`$ composed of $`m`$ residues is given; the task is to compare this with the database. Each residue is labeled by a letter of the 20-letter amino acid alphabet, so 5 bits per residue are needed to encode the database. Thus, the residues $`R_i`$ and $`r_i`$ are represented by bit strings, $`\bigotimes _{\alpha =0}^{4}B_{i\alpha }`$ and $`\bigotimes _{\alpha =0}^{4}b_{i\alpha }`$, respectively. The quantum computer to analyze this system is composed of two registers, of $`Q_1`$ and $`Q_2`$ qubits respectively. The bit-wise representation of the protein sequences will be encoded into the qubits of this system.
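To make this bookkeeping concrete, here is a minimal classical sketch of the 5-bit residue encoding and the register sizing (the amino-acid ordering and the function names are illustrative assumptions, not part of the paper):

```python
# Sketch of the 5-bit residue encoding and register-size bookkeeping.
import math

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"          # the 20-letter alphabet
CODE = {aa: format(i, "05b") for i, aa in enumerate(AMINO_ACIDS)}

def encode(seq):
    """Concatenate 5-bit codes: a length-m sequence -> a 5m-bit string."""
    return "".join(CODE[r] for r in seq)

def register_sizes(N, m):
    """Q1 holds one 5m-bit sub-sequence; Q2 must index N-m+1 positions."""
    Q1 = 5 * m
    Q2 = math.ceil(math.log2(N - m + 1))
    return Q1, Q2

# Genome-scale numbers quoted in the text: ~150,000 domains x 300 residues.
print(register_sizes(150_000 * 300, 300))      # -> (1500, 26)
```

Running this with the genome-scale numbers quoted above reproduces the estimate, made below, that 26 position qubits suffice for the second register.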
Leaving issues of data transfer aside, the entire database is represented by a quantum superposition over the two registers:
$$|\mathrm{\Psi }_D\rangle \equiv \frac{1}{\sqrt{N-m+1}}\sum _{i=0}^{N-m}|\varphi _i\rangle |i\rangle ,$$ (1)
where all the consecutive sub-sequences of length $`m`$ in the database are encoded in the first register, with $`Q_1=5m`$, as
$$|\varphi _i\rangle =\bigotimes _{\alpha =i}^{i+m-1}\bigotimes _{\beta =0}^{4}|B_{\alpha \beta }\rangle \equiv \bigotimes _{\alpha =0}^{5m-1}|q_{i\alpha }\rangle .$$ (2)
That is, from the database of $`N`$ residues, $`N-m+1`$ sub-sequences of length $`m`$ are constructed by stepping along from the first position (allowing domain crossing). Position information for the sub-sequences is meanwhile tagged explicitly by binary numbers, $`|i\rangle `$, in the second register, and is accessed by an operator, $`\widehat{X}`$, acting in the Hilbert space of the second register, which gives the position as $`\widehat{X}|i\rangle =i|i\rangle `$ ($`0\le i\le N-m`$). In order that this register can encode all positions, $`Q_2`$ must satisfy $`2^{Q_2}>N-m`$. The number of qubits required in this register is relatively small: taking the database size to be the number of residues in the human genome implies that $`Q_2=26`$ suffices. In the first register, typical sequence comparison problems require $`m\sim O(300)`$.

The next step in the initialization process is the coding of a table, $`T[0\mathrm{}N-m]`$, into the quantum state, which measures the difference between the database states $`|\varphi _i\rangle `$ and the sample sequence state in terms of the total number of bit flips required to transform any database state into the sample sequence. In other words, $`T[0\mathrm{}N-m]`$ is the set of Hamming distances. Remarkably, the set of Hamming distances for the entire database can be created by simply acting on each qubit of the computer with a CNOT operation with respect to the sample sequence state:
$$|\mathrm{\Psi }_H\rangle =U_{\mathrm{CNOT}}(s)|\mathrm{\Psi }_D\rangle \equiv \frac{1}{\sqrt{N-m+1}}\sum _{i=0}^{N-m}|\overline{\varphi _i}\rangle |i\rangle .$$ (3)
Denoting the individual qubits of the "Hamming states" $`|\overline{\varphi _i}\rangle `$ by
$$|\overline{\varphi _i}\rangle =\bigotimes _{\alpha =0}^{5m-1}|\overline{q}_{i\alpha }\rangle ,$$ (4)
an operator, $`\widehat{T}`$, is introduced which, acting on a state $`|\overline{\varphi _i}\rangle `$, gives the Hamming distance table value $`T[i]`$:
$$\widehat{T}|\overline{\varphi _i}\rangle =T[i]\,|\overline{\varphi _i}\rangle ,\qquad T[i]=\sum _{\alpha =0}^{5m-1}\overline{q}_{i\alpha }.$$ (5)
With the computer design completed and initialized, a simple search problem can be defined in order to demonstrate how the computer works. First, the database is taken to be of length $`N=2^m+m-1`$, so that there are exactly $`2^m`$ states in the superposition; furthermore, all these states are demanded to be distinct. The problem is to search the database for the sub-sequence $`s`$, which occurs exactly once, but at an unknown location. Classically, this would require $`O(N)`$ steps. However, by using Grover's search algorithm, the match can be found in $`O(\sqrt{N})`$ steps. In this example, the database decomposition has been artificially arranged to be over a complete set of states of the first register, which means that Grover's search algorithm can be applied directly. The problem defined by Grover has been modified slightly, but the applicability of the search algorithm remains.
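Classically, the table $`T[i]`$ that the CNOT construction of Eq. (3) prepares in superposition is just an XOR followed by a bit count, as the following self-contained sketch may clarify (again an illustration, not the paper's code):

```python
# Classical computation of the Hamming-distance table T[0..N-m].
AA = "ACDEFGHIKLMNPQRSTVWY"
enc = lambda seq: int("".join(format(AA.index(r), "05b") for r in seq), 2)

def hamming_table(database, sample):
    m, s_bits = len(sample), enc(sample)
    # XOR plays the role of the qubit-wise CNOT of Eq. (3); counting the
    # set bits sums the flipped qubits, i.e. T[i] of Eq. (5).
    return [bin(enc(database[i:i + m]) ^ s_bits).count("1")
            for i in range(len(database) - m + 1)]

print(hamming_table("ACDEFGHIKLACDEF", "GHIKL"))  # T[5] = 0 at the exact match
```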
The original problem was defined in terms of an oracle function, $`F(x)`$, over a set of values $`x\in \{0,\mathrm{},N-1\}`$, which is zero everywhere except at some value $`t`$, the target of the search, where $`F(t)=1`$. The sequence comparison problem here has been re-structured so that a value of $`x`$ represents a subsequence of the database, and the oracle function is just a direct comparison with the sample sequence. In a sense, the black-box nature of the oracle function has been simplified, at the cost of increasing the complexity of the initial wave function with position information. It remains to be seen whether this is a feasible way of coding a sequence database. Of course, an alternative is to sweep all details of the database look-up and comparison into the oracle function. The difference is subtle, and perhaps non-trivial in practice. The advantage of the latter approach might be in the initialization of the quantum computer state. The algorithms presented here would still apply in this case.

In the computer design defined here, Grover's search algorithm is applied to the first register containing the sub-sequence state superposition. The problem is to find the state $`|\overline{s}\rangle =U_{\mathrm{CNOT}}|s\rangle =|0\mathrm{}0\rangle `$ (zeros in all $`5m`$ qubits of the first register) with table value $`T[i_s]=0`$, occurring at the (as yet unknown) position $`i_s`$. Once the state is found, the location of the sequence in the database can be determined by making a measurement of $`\widehat{X}`$ on the second register. To illustrate the working of the algorithm, the geometrical picture, which is particularly transparent, is applied to this framework. The search algorithm is initiated by decomposing the state $`|\mathrm{\Psi }_H\rangle `$ into orthogonal components with respect to $`|\overline{s}\rangle `$ as
$$|\mathrm{\Psi }_H\rangle =\sqrt{\frac{N-m}{N-m+1}}\,|R\rangle +\frac{1}{\sqrt{N-m+1}}\,|S\rangle ,$$ (6)
where
$$|S\rangle =|\overline{s}\rangle |i_s\rangle ,$$ (7)
$$|R\rangle =\frac{1}{\sqrt{N-m}}\sum _{i\ne i_s}|\overline{\varphi _i}\rangle |i\rangle .$$ (8)
The evolution of the quantum computer representing the search algorithm occurs in the first register, the second register lying dormant, yet through quantum entanglement carrying the position information required at the end. The operator $`U`$ is constructed from reflection operators in the Hilbert space of the first register,
$$I_S=1-2|S\rangle \langle S|,$$ (9)
$$I_H=1-2|\mathrm{\Psi }_H\rangle \langle \mathrm{\Psi }_H|.$$ (10)
The operator $`I_S`$ contains the query to the oracle function, $`F(i)`$, and acts on the Hamming states $`|\overline{\varphi _i}\rangle `$ with a phase shift dependent on the search criterion $`T[i_s]=0`$:
$$I_S|\overline{\varphi _i}\rangle =(-1)^{F(i)}|\overline{\varphi _i}\rangle =\{\begin{array}{cc}-|\overline{\varphi _i}\rangle &\mathrm{if}\ T[i]=0\\ +|\overline{\varphi _i}\rangle &\mathrm{otherwise}.\end{array}$$ (11)
In terms of these reflection operators, the unitary operator evolving the system through one step of the search algorithm is given by $`U=I_HI_S`$. The evolution of the computer proceeds through repeated application of the operator $`U`$ to the initial state, $`|\mathrm{\Psi }_H\rangle `$. The effect of this evolution is to amplify the component of the target state, $`|S\rangle `$, in the superposition. It is important to understand the nature of this process in order to appreciate how the quantum computer functions; a small classical simulation of the amplification is sketched below, before the analytic treatment.
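The following numpy emulation of the reflections in Eqs. (9)-(11) marks the single index whose table entry is zero and checks the result against the analytic amplitude derived below (Eq. (20)). The toy register size is an illustrative assumption, and a global sign of $`U`$ is dropped, which leaves all probabilities unchanged:

```python
# Classical statevector emulation of the amplitude amplification.
import numpy as np

Nsub = 2**10                                  # plays the role of N - m + 1
marked = 123                                  # i_s, unknown to the searcher
psi0 = np.full(Nsub, 1.0 / np.sqrt(Nsub))     # |Psi_H>: uniform superposition

def grover_step(psi):
    psi = psi.copy()
    psi[marked] *= -1.0                       # I_S: phase flip where T[i] = 0
    # reflection about |Psi_H> (overall sign dropped; probabilities agree)
    return 2.0 * psi0 * (psi0 @ psi) - psi

theta = np.arcsin(2.0 * np.sqrt(Nsub - 1) / Nsub)
alpha = np.arccos(1.0 / np.sqrt(Nsub))
psi = psi0.copy()
for k in range(1, int(np.pi / 4 * np.sqrt(Nsub)) + 1):
    psi = grover_step(psi)
# simulated success probability vs. cos^2(k*theta - alpha) of Eq. (20)
print(psi[marked] ** 2, np.cos(k * theta - alpha) ** 2)   # both ~ 0.999
```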
To see this point analytically, it is convenient to express $`U`$ in the representation of the subspace $`\{|S\rangle ,|R\rangle \}`$:
$$U=\left[\begin{array}{cc}\frac{N-m-1}{N-m+1}&\frac{2\sqrt{N-m}}{N-m+1}\\ -\frac{2\sqrt{N-m}}{N-m+1}&\frac{N-m-1}{N-m+1}\end{array}\right]=\left[\begin{array}{cc}\mathrm{cos}\,\theta &\mathrm{sin}\,\theta \\ -\mathrm{sin}\,\theta &\mathrm{cos}\,\theta \end{array}\right],$$ (18)
where $`\mathrm{sin}\,\theta \equiv 2\sqrt{N-m}/(N-m+1)`$. After $`k`$ steps of the algorithm the state of the computer is given by
$$|\mathrm{\Psi }_k\rangle =U^k|\mathrm{\Psi }_H\rangle =\sum _{i=0}^{N-m}c_i^{(k)}|\overline{\varphi _i}\rangle |i\rangle .$$ (19)
The amplitude of the target state, $`c_{i_s}^{(k)}`$, can easily be calculated using the matrix representation of $`U`$. One obtains:
$$c_{i_s}^{(k)}=\mathrm{cos}\left(k\theta -\alpha \right),\qquad \mathrm{cos}\,\alpha \equiv \frac{1}{\sqrt{N-m+1}}.$$ (20)
The component along $`|S\rangle `$ is amplified to near unity at $`k_{\mathrm{max}}\approx \frac{\pi }{4}\sqrt{N}`$ (for $`N\gg m`$). A measurement of $`\widehat{T}`$ on the first register will give a result $`T[i]`$ with probability $`|c_i^{(k)}|^2`$. If $`T[i]=0`$ then the algorithm has succeeded - i.e. the sample sequence has been found - and a subsequent measurement of $`\widehat{X}`$ on the second register will give the position, $`i_s`$, of the sequence in the database. A crucial point is that one has to be careful in interpreting the number of steps required to obtain a successful outcome - merely increasing the number of steps beyond $`k_{\mathrm{max}}`$ does not improve the chances of success, because the amplification is not monotonic. Indeed, the probability of success actually decreases when $`k_{\mathrm{max}}`$ is exceeded. The search may therefore have to be run several times; however, for large $`N`$ the savings in computer time compared to a classical computer are clear, even if the search is repeated several times.

While the above example serves to display the potential of quantum search algorithms in the context of sequence matching to a large database, it does not contain an important concept in bioinformatics - optimal alignment. Generally, the sample sequence may not be contained exactly in the database, and so one is interested in how close the best match (or matches) is, with respect to a well-defined distance measure. Often this measure involves editing of strings by insertion of gaps in order to minimize the distance; in practice this process is very complicated. In the first instance, the problem is extended to that of finding an optimal alignment with respect to the Hamming distance, without editing of sequences (which can be incorporated at a later stage). Let us first define the problem using, as far as possible, the same notation as previously. The database is taken to be of size $`N\gg m`$, but the restriction that the number of database sub-sequence states equals $`2^m`$ is relaxed, and the possibility is allowed that the set of sub-sequences may contain repeats, and, more importantly, may or may not contain the sample sequence. The problem then is to find an optimal alignment of the sample sequence to a sub-sequence in the database. An optimal alignment here is defined in the sense of finding the smallest Hamming distance $`T[i]`$ with respect to the sample sequence state. In terms of our quantum computer, the database state in this case is again described by the state $`|\mathrm{\Psi }_D\rangle `$.
An important point is that the state is still normalised by the factor $`1/\sqrt{N-m+1}`$, because the repeats occur at different locations, and thus each state in the product space of the two registers is distinct. The introduction of the position register $`Q_2`$ has ensured this. Using the CNOT operation on $`|\mathrm{\Psi }_D\rangle `$, the superposition, $`|\mathrm{\Psi }_H\rangle `$, of Hamming states is once again obtained. The algorithm strategy is to search for alignments of increasing Hamming distance. At the start of each search it is not known how many solutions exist, or whether any matches exist at all, and so Grover's algorithm cannot be used directly. However, the extension of Grover's algorithm due to Boyer, Brassard, Høyer and Tapp can now be used, which performs a search with an a priori unknown number of solutions $`N_t`$, and finds a match (if it exists) in $`O(\sqrt{N/N_t})`$ steps. During the course of the algorithm the computer's evolution must be tailored to accommodate the fact that the search is now based on all the target states that satisfy $`T[i]=n`$, where $`n`$ is some pre-defined Hamming distance determined by the algorithm. In order to apply the search algorithm in this case, the operator $`I_S`$ is modified to $`I_S(n)`$ such that:
$$I_S(n)|\overline{\varphi _i}\rangle =\{\begin{array}{cc}-|\overline{\varphi _i}\rangle &\mathrm{if}\ T[i]=n\\ +|\overline{\varphi _i}\rangle &\mathrm{otherwise}.\end{array}$$ (21)
At each iteration the BBHT algorithm is employed, with a repeat index $`r`$ as a pre-determined measure of the search confidence level. The optimal alignment algorithm is as follows (a classical sketch of this search loop is given below):

1. 0th iteration: search for an occurrence of the state with zero Hamming distance, $`T[i]=0`$. If successful, measure the position and exit; if unsuccessful after $`r`$ repeats of the BBHT search algorithm, go to the next iteration.
2. $`n`$th iteration: search for a state with $`T[i]=n`$ using $`U=I_HI_S(n)`$. If successful, measure the position and exit; if unsuccessful after $`r`$ repeats of the BBHT search algorithm, go to the next iteration by setting $`n\to n+1`$.
3. Upon exit at some iteration $`n=k`$, one optimal alignment, $`T[i_k]=k`$, and its position $`i_k`$ have been found.

The total number of steps required is $`O(rk\sqrt{N})`$, discounting the effect of sequence repeats (which reduces the required number of iterations). At additional cost, a sub-loop may be introduced to search for the other optimally aligned sequences. In practice, the number of iterations required is $`k\ll m`$, as one would determine a maximum Hamming distance on biological grounds, beyond which searching for an aligned state is pointless. While the focus of this paper has been on protein sequence comparison, the framework can easily be translated into that for nucleotide sequence comparison in DNA. In this case, representing the four-letter nucleotide alphabet requires only two qubits per base. Although only the algorithmic aspect of the application of quantum computing to sequence analysis has been dealt with here, an obvious point to raise is the feasibility of building such a device. With the ever-increasing ability to manipulate systems at the quantum level, there has been great progress in the demonstration of quantum computation at the two-qubit level. Quantum logic gates were demonstrated using ion traps in 1995, and two years later in nuclear magnetic resonance (NMR) systems. In 1998 the actual experimental realization of a quantum computer solving Deutsch's problem was reported by two groups using NMR.
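As flagged in the algorithm list above, the control flow of the iterative search can be sketched classically as follows; here `bbht_search` merely stands in for one BBHT run looking for any $`i`$ with $`T[i]=n`$, and the names and the failure model are illustrative assumptions:

```python
# Classical control-flow sketch of the optimal-alignment loop.
import random

def bbht_search(table, n):
    """Fake BBHT run: return some index i with table[i] == n, or None."""
    hits = [i for i, t in enumerate(table) if t == n]
    return random.choice(hits) if hits else None

def optimal_alignment(table, r=3, n_max=20):
    for n in range(n_max + 1):          # 0th, 1st, ... iteration
        for _ in range(r):              # r repeats per Hamming distance
            i = bbht_search(table, n)
            if i is not None:
                return n, i             # distance T[i_k] = k and position i_k
    return None                         # no alignment within n_max

print(optimal_alignment([4, 2, 7, 2, 5]))   # -> (2, 1) or (2, 3)
```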
This was closely followed by NMR implementations of the quantum search algorithm. Of course, a realistic quantum computer needs to be scaled up significantly beyond these two-qubit configurations. Perhaps the most promising prospect for a scalable quantum computer capable of running the algorithms presented here is based on the solid-state design of Kane. The creation of a superposition representing the human genome database would be another considerable challenge.

To conclude, in this work the application of quantum search algorithms in the context of biocomputing has been studied, at least at the rather simple level of sequence alignment with respect to the Hamming distance. Actual alignment problems would include alignment through editing of sequences - i.e. insertion of gaps. It is quite possible that this procedure can be achieved using a multi-qubit representation (which includes gap characters) within the quantum search algorithm process, by suitable choice of qubit evolution operators. Work in this direction is in progress.

The author wishes to thank S. O'Donoghue for many helpful discussions. Thanks also to G. Akemann and L. Benet Fernandez for kindly reading through the manuscript, and to the theory group at MPIK for their hospitality. This work was supported by the Alexander von Humboldt Foundation.
# The ages of quasar host galaxies.

## 1 Introduction

Determining the nature of the host galaxies of powerful active galactic nuclei (AGN) is of importance not only for improving our understanding of different manifestations of AGN activity, but also for exploring possible relationships between nuclear activity and the evolution of massive galaxies. The recent affirmation that black-hole mass appears approximately proportional to spheroid mass in nearby inactive galaxies (Magorrian et al. 1998) has further strengthened the motivation for exploring the link between active AGN and the dynamical and spectral properties of their hosts. Over the last few years, improvements in the imaging resolution offered by both space-based and ground-based optical/infrared telescopes have stimulated a great deal of research activity aimed at determining the basic structural parameters (i.e. luminosity, size and morphological type) of the hosts of radio-loud quasars, radio-quiet quasars, and lower-luminosity X-ray-selected and optically-selected AGN (e.g. Disney et al. 1995; Hutchings & Morris 1995; Bahcall et al. 1997; Hooper et al. 1997; McLure et al. 1999; Schade et al. 2000). However, relatively little corresponding effort has been invested in spectroscopic investigations of AGN hosts, despite the fact that this offers an independent way of classifying these galaxies, as well as a means of estimating the age of their stellar populations.

Our own work in this field to date has focussed on the investigation of the hosts of matched samples of radio-quiet quasars (RQQs), radio-loud quasars (RLQs) and radio galaxies (RGs) at relatively modest redshift ($`z=0.2`$). Details of these samples can be found in Dunlop et al. (1993). In brief, the sub-samples of quasars (i.e. RQQs and RLQs) have been selected to be indistinguishable in terms of their two-dimensional distribution on the $`V{-}z`$ plane, while the sub-samples of radio-loud AGN (i.e. RLQs and RGs) have been selected to be indistinguishable in terms of their two-dimensional distribution on the $`P_{5\mathrm{GHz}}{-}z`$ plane (as well as having indistinguishable spectral-index distributions). Deep infrared imaging of these samples (Dunlop et al. 1993; Taylor et al. 1996) has recently been complemented by deep WFPC2 HST optical imaging (McLure et al. 1999), the final results of which are reported by Dunlop et al. (2000). As well as demonstrating that, dynamically, the hosts of all three types of luminous AGN appear indistinguishable from normal ellipticals, this work has enabled us to deduce crude spectral information on the host galaxies in the form of optical-infrared colours. However, broad-baseline colour information can clearly be most powerfully exploited if combined with detailed optical spectroscopy. Over the past few years we have therefore attempted to complement our imaging studies with a programme of deep optical off-nuclear spectroscopy of this same sample of AGN. Details of the observed samples, spectroscopic observations, and the basic properties of the observed off-nuclear spectra are given in a companion paper (Hughes et al. 2000). As discussed by Hughes et al. (2000), the key feature of this study (other than its size, depth, and sample control) is that we have endeavoured to obtain spectra from positions further off-nucleus ($`\sim 5`$ arcsec) than previous workers, in an attempt to better minimize the need for accurate subtraction of contaminating nuclear light.
This approach was made possible by our deep infrared imaging data, which allowed us to select slit positions $`\sim 5`$ arcsec off-nucleus which still intercepted the brighter isophotes of the host galaxies (slit positions are shown, superimposed on the infrared images, in Hughes et al. 2000). In this paper we present the results of attempting to fit the resulting off-nuclear spectra with evolutionary synthesis models of galaxy evolution. Our primary aim was to determine whether, in each host galaxy, the optical spectrum could be explained by the same model as the optical-infrared colour, and, if so, to derive an estimate of the age of the dynamically dominant stellar population. However, this study also offered the prospect of determining whether the hosts of different classes of AGN differ in terms of their more recent star-formation activity. It is worth noting that our off-nuclear spectroscopy obviously does not enable us to say anything about the level of star-formation activity in the nucleus of a given host galaxy. Rather, any derived estimates of the level of star-formation activity refer to the region probed by our observations $`\sim 5`$ arcsec off-nucleus. However, given the large scalelengths of the hosts, their relatively modest redshift, and the fact that our spectra are derived from long-slit observations, it is reasonable to regard our conclusions as applying to fairly typical regions, still located well within the bulk of the host galaxies under investigation.

The layout of this paper is as follows. In section 2 we provide details of the adopted models, and how they have been fitted to the data. The results are presented in section 3, along with detailed notes on the fitting of individual spectra. The implications of the model fits are then discussed in section 4, focussing on a comparison of the typical derived host-galaxy ages in the three AGN subsamples. Finally, our conclusions are summarized in section 5. The detailed model fits, along with the corresponding chi-squared plots, are presented in Appendix A.

## 2 Spectral fitting.

The stellar population synthesis models adopted for age-dating the AGN host galaxy stellar populations are the solar metallicity, instantaneous starburst models of Jimenez et al. (2000). We have endeavoured to fit each off-nuclear host galaxy optical spectrum by a combination of two single starburst components. For the first component, age is fitted as a free parameter, while for the second component the age is fixed at 0.1 Gyr, and the normalization relative to the first component is the only free parameter. This dual-component approach was adopted because single-age models are not able to adequately represent the data, and because it allows the age of the dynamically dominant component to be determined in a way which is not overly reliant on the level of ultraviolet flux, which might be contributed either by a recent burst of star-formation or by contamination of the slit by scattered light from the quasar nucleus. After experimentation it was found that the spectral shape of the blue light was better represented by the 0.1 Gyr-old (solar metallicity) model of Jimenez than by models of greater (intermediate) age (e.g. 1 Gyr). Moreover, the further addition of a third, intermediate-age component, $`\sim `$1 Gyr, did not significantly improve the quality of the fits achieved.
In general, our data are of insufficient quality to tell whether the component which dominates at $`\lambda \lesssim 3000`$Å really is due to young stars, or is produced by direct or scattered quasar light. However, while the origin of the blue light is obviously of some interest, it has little impact on the main results presented in this paper, which refer to the age of the dynamically dominant stellar population which dominates the spectrum from longward of the 4000Å break through to the near infrared. The robustness of the age determination of the dominant population is demonstrated by comparing the results of the two stellar-population model fit with the results of a fit allowing for a nuclear contribution as well as the two stellar populations. The model parameters determined were therefore the age of the dominant stellar population, which we can reasonably call the age of the galaxy, and the fraction, by (visible) baryonic mass, of the 0.1 Gyr component. For the fit including a nuclear contribution, the fraction of the total flux contributed by quasar light was a third parameter. The red end of each SED was further constrained by fitting the $`R-K`$ colour simultaneously with the optical spectral energy distribution. The fitting process is described below.

First, the observed off-nuclear spectra were corrected for redshift and transformed to the rest frame. They were then rebinned to the spectral resolution of the model spectra. The rebinned flux is then the mean flux per unit wavelength in the new bin, and the statistical error on each new bin is the standard error in this mean. The data were normalised to a mean flux per unit wavelength of unity across the wavelength range 5020-5500 $`\mathrm{\AA }`$. The two-component model was built from the instantaneous-burst stellar-population synthesis model SEDs, so that
$$F_{\lambda ,age,\alpha }=const\left(\alpha f_{\lambda ,0.1}+(1-\alpha )f_{\lambda ,age}\right)$$
where $`f_{\lambda ,age}`$ is the mean flux per unit wavelength in the bin centred on wavelength $`\lambda `$ for a single-burst model of age $`age`$ Gyr, $`\alpha `$ is the fraction by mass of the young (0.1 Gyr) component, and $`F_{\lambda ,age,\alpha }`$ is then the new, mean, twin-stellar-population flux per unit wavelength in the bin centred on wavelength $`\lambda `$ for a model of age $`age`$ Gyr. This composite spectrum was then normalised in the same way as the observed spectra. A $`\chi ^2`$ fit was used to determine the age of the older stellar population and the mass fraction, $`\alpha `$, of the younger population, for each host galaxy in the sample. The whole parameter space was searched, with the best-fit values quoted being those parameter values at the point on the grid with the minimum calculated $`\chi ^2`$. The normalization of the model spectra was allowed to float during the fitting process, to allow the best-fitting continuum shape to be determined in an unbiased way. The models were fitted across the observed (rest-frame) spectral range, within the wavelength ranges listed in Table 1. As a result of the optimisation of the instruments with which the objects were observed, some of the galaxy spectra contain a ‘splice’ region where the red and blue ends of the spectrum have been observed separately and then joined together (see Hughes et al. (2000) for details). These splice regions, defined in Table 1, were masked out of the fit in order to guard against the fitting procedure being dominated by data-points whose flux calibration was potentially less robust.
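A schematic sketch of this grid search follows - not the authors' pipeline. Here `model_sed` stands in for the Jimenez et al. (2000) single-burst SEDs, `mask` for the splice/emission-line exclusion, and for simplicity the model is normalised exactly like the data (the paper instead lets the model normalisation float freely):

```python
# Two-component (age, alpha) chi-squared grid search, schematically.
import numpy as np

def normalise(wave, flux):
    """Scale to mean flux of unity over rest-frame 5020-5500 A."""
    band = (wave >= 5020.0) & (wave <= 5500.0)
    return flux / flux[band].mean()

def fit_two_component(wave, data, err, mask, ages, alphas, model_sed):
    f_young = model_sed(0.1)                      # fixed 0.1 Gyr burst
    best = (np.inf, None, None)
    for age in ages:
        f_old = model_sed(age)
        for a in alphas:
            model = normalise(wave, a * f_young + (1.0 - a) * f_old)
            chi2 = np.sum(((data[mask] - model[mask]) / err[mask]) ** 2)
            if chi2 < best[0]:
                best = (chi2, age, a)
    return best                                   # (min chi2, age/Gyr, alpha)
```

The three-component variant described below simply adds an $`\eta f_{\lambda }^{0054}`$ nuclear term inside the bracket, and the $`R-K`$ constraint adds to the $`\chi ^2`$ the error-weighted, squared difference between the observed colour and that of the appropriately redshifted composite.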
The main emission lines, present due to nuclear light contamination or nebular emission from within the host, were also masked out, over the wavelength ranges given in Table 2. The fit including a nuclear component was carried out in the same way, with the model flux in this case being
$$F_{\lambda ,age,\alpha ,\eta }=const\left(\alpha f_{\lambda ,0.1}+(1-\alpha )f_{\lambda ,age}+\eta f_{\lambda }^{0054}\right)$$
where $`\eta `$ is the fraction contributed to the total model flux by the nucleus, and $`f_{\lambda }^{0054}`$ is the observed flux of the nucleus of the radio-quiet quasar 0054+144. $`\alpha `$ is the fraction by mass of the total stellar population contributed by the 0.1 Gyr population. The wavelength range of the observed nuclear flux of 0054+144 is 3890-6950 $`\mathrm{\AA }`$. This was extended to 3500-8500 $`\mathrm{\AA }`$ by smooth extrapolation over the wavelength ranges 3500-3890 $`\mathrm{\AA }`$ and 6950-8500 $`\mathrm{\AA }`$, in order to carry out the $`\chi ^2`$ fit across the full wavelength range of the observed off-nuclear spectra.

The $`R-K`$ colour was fitted in both cases, with a typical error of a few per cent. The observed $`R-K`$ colours for the host galaxies are obtained from UKIRT and HST images (McLure et al. 1999; Dunlop et al. 2000), and define the basic shape of the host galaxy SED out to $`\lambda \sim 2\,\mu `$m. The composite model spectra were appropriately redshifted before calculating the colour, so that they could be compared to the observed colours without introducing uncertainties in k-correction. The $`R`$ band was simulated using the filter function, including system response and CCD quantum efficiency, for the HST WFPC2 F675W filter, and the K band was reproduced using the filter data for the IRCAM3 K OCLI filter at 77 K combined with the Mauna Kea atmosphere.

## 3 Results

The plots showing fits to individual spectra and $`\chi ^2`$ as a function of fitted age are given in Appendix A. The plots for the two-component fit are presented in Figure A1, and those for the two stellar components plus nuclear contribution are in Figure A2. The results for each object are summarized below, under their IAU names, with alternative names given in parentheses. Objects are listed in order of increasing right ascension, within each AGN class (radio-loud quasars, radio-quiet quasars and radio galaxies). The telescopes with which the spectra were obtained are also noted; M4M denotes the Mayall 4m Telescope at Kitt Peak, and WHT denotes the 4.2m William Herschel Telescope on La Palma. Where there are two spectra, the first plot is for the spectrum observed with the Mayall 4m Telescope, and the second is for the spectrum taken with the William Herschel Telescope.

### 3.1 Notes on individual objects

#### 3.1.1 Radio loud quasars

0137+012 (L1093) M4M

The models give a good fit at 13 Gyr, which is clearly improved by the inclusion of a small percentage (0.25%) of young stars. There is no significant nuclear contribution to the spectrum. HST imaging has shown that this host galaxy is a large elliptical, with a half-light radius $`r_e=13`$ kpc (McLure et al. 1999).

0736+017 (S0736+01, OI061) M4M,WHT

0736+017 has been observed with both telescopes, and fits to the two observed spectra are in good agreement. Both indicate an age of 12 Gyr. The M4M spectrum requires a somewhat larger young blue population (0.75%) than the WHT spectrum (0.125%).
This may be due to poorer seeing at Kitt Peak leading to slightly more nuclear contamination of the slit, or to the use of slightly different slit positions at the two telescopes. However, this difference between the observed spectra shortward of 4000Å leaves the basic form of $`\chi ^2`$ versus age, and the best-fitting age of 12 Gyr, unaffected. Inclusion of a nuclear component gives a much better fit to the blue end of the M4M spectrum, without changing the age estimation. The sizes of the fitted young populations are in much better agreement in this case. HST imaging has shown that, morphologically, this host galaxy is a large elliptical, with a half-light radius $`r_e=13`$ kpc (McLure et al. 1999).

1004+130 (S1004+13, OL107.7, 4C13.41) WHT

The spectrum of this luminous quasar certainly appears to display significant nuclear contamination below the 4000$`\mathrm{\AA }`$ break. As a result, a relatively large young population is required to attempt (not completely successfully) to reproduce the blue end of the spectrum. However, the models predict that the underlying stellar population is old (12 Gyr). Allowing a nuclear component to be fitted reproduces the blue end of the spectrum much more successfully, without changing the best-fit age estimation. HST $`R`$-band imaging indicates the morphology of the host galaxy is dominated by a large ($`r_e=8`$ kpc) spheroidal component, but subtraction of this best-fit model reveals two spiral-arm-type features on either side of the nucleus (McLure et al. 1999), which may be associated with the young stellar component required to explain the spectrum.

1020-103 (S1020-103, OL133) M4M

This object has the second bluest $`R-K`$ colour of this sample, which leads to a much younger inferred age than the majority of the rest of the sample (5 Gyr), despite the presence of a rather clear 4000Å break in the optical spectrum. Ages greater than $`\sim `$10 Gyr are rejected by Jimenez’ models, primarily on the basis of the $`R-K`$ colour. HST imaging has shown that this host has an elliptical morphology, and a half-light radius of $`r_e=7`$ kpc (Dunlop et al. 2000).

1217+023 (S1217+02, UM492) WHT

Nuclear contamination can again be seen bluewards of 4000$`\mathrm{\AA }`$, with a correspondingly large young population predicted for the purely stellar population model, which still fails to account for the very steep rise towards 3000Å. Hence, a large nuclear contribution is required to reproduce the blue end of the spectrum. The fit achieved by the models suggests that the dominant population is old, with a best-fit age of 12 Gyr. HST imaging has shown that this host has an elliptical morphology, and a half-light radius of $`r_e=11`$ kpc (Dunlop et al. 2000).

2135-147 (S2135-14, PHL1657) WHT

2135-147 has a very noisy spectrum, but a constrained fit has still been achieved, and an old population is preferred. 2135-147 requires a large $`\alpha `$, even when a nuclear contribution is fitted. HST imaging has shown that this host has an elliptical morphology, and a half-light radius of $`r_e=12`$ kpc (Dunlop et al. 2000).

2141+175 (OX169) WHT

This is another noisy spectrum, which has a relatively large quasar light contribution. An old population is again indicated by the model fits. From optical and infrared imaging this object is known to be complex, but HST images indicate that it is dominated by a moderate-sized ($`r_e=4`$ kpc) elliptical component (see McLure et al. (1999) for further details).
2247+140 (PKS2247+14, 4C14.82) M4M,WHT

2247+140 has been observed with both telescopes. The model fitting indicates that an old population is required by both spectra - although the two observations do not agree precisely, the general level of agreement is very good, the two $`\chi ^2`$ plots have a very similar form, and the difference in $`\chi ^2`$ between the alternative best-fitting ages of 8 Gyr and 12 Gyr is very small. No significant nuclear contribution to the flux is present. HST imaging has shown that this host has an elliptical morphology, and a half-light radius of $`r_e=14`$ kpc (Dunlop et al. 2000).

2349-014 (PKS2349-01, PB5564) WHT

This is a very good fit to a good-quality spectrum, showing an obvious improvement when the low-level young population is added. Jimenez’ models clearly predict that the dominant population is old, with a well-constrained age of 12 Gyr. A very small nuclear contribution ($`\eta `$ = 0.050) does not significantly change the results. HST imaging of this object strongly suggests that it is involved in a major interaction, with a massive tidal tail extending to the north of the galaxy. However, the dominant morphological component is a spheroid with a half-light radius of $`r_e=18`$ kpc (McLure et al. 1999).

#### 3.1.2 Radio quiet quasars

0054+144 (PHL909) M4M,WHT

There is evidence of relatively large contamination from nuclear emission in the spectra of this luminous quasar taken on both telescopes, and the age is not well constrained, although the fit to the WHT spectrum derived from the models again suggests an old age. The $`\chi ^2`$ plots serve to emphasize how similar the two spectra of this object actually are (as also discussed by Hughes et al. 2000). Inclusion of a nuclear component in the model better constrains the age and improves the goodness of the fit. HST imaging of this object has shown that, morphologically, it is undoubtedly an elliptical galaxy, with a half-light radius $`r_e=8`$ kpc (McLure et al. 1999).

0157+001 (Mrk 1014) M4M,WHT

The age inferred from both the M4M and WHT spectra of 0157+001 is again 12 Gyr. The apparently more nuclear-contaminated WHT spectrum does not give such a good fit, but 0157+001 is a complex object known to have extended regions of nebular emission, and the slit positions used for the two observations were not identical (Hughes et al. 2000). The age is much better constrained from the more passive M4M spectrum, to which Jimenez’ models provide a very good fit. Again, it seems that the nuclear contamination does not have a great influence on the predicted age of the old population, although the fit to the WHT spectrum is greatly improved by including a nuclear component in the model. Despite its apparent complexity in both ground-based and HST images, this host galaxy does again seem to be dominated by a large spheroidal component, of half-light radius $`r_e=8`$ kpc (McLure et al. 1999).

0204+292 (3C59) WHT

Jimenez’ models fit the spectrum of this object well, indicating an old underlying stellar population ($`>6`$ Gyr), with a best-fit age of 13 Gyr for the stellar-population-plus-nuclear-component model, and a very small young population. HST imaging has shown this galaxy to be an elliptical, with half-light radius $`r_e=9`$ kpc (Dunlop et al. 2000).

0244+194 (MS 02448+19) WHT

The colour derived from the optical and infrared imaging of this host galaxy is rather blue ($`(R-K)_{obs}`$ = 2.34), and this in part leads to a fairly young (5 Gyr) age prediction.
However, the spectrum is very noisy, and, as indicated by the very flat $`\chi ^2`$ plot, the age is not strongly constrained. No nuclear flux contamination is fitted. HST imaging has shown this galaxy to have an elliptical morphology, with half-light radius $`r_e=9`$ kpc (McLure et al. 1999).

0923+201 WHT

The spectrum of 0923+201 is noisy, and it also appears to have some nuclear contamination. The fit is therefore improved by inclusion of a nuclear component. An old age is strongly preferred by the form of the $`\chi ^2`$ / age plot, with a best-fit value of 12 Gyr. HST imaging has shown this galaxy to have an elliptical morphology, with half-light radius $`r_e=8`$ kpc (McLure et al. 1999).

1549+203 (1E15498+203, LB906, MS 15498+20) WHT

This is a good fit, which is clearly improved by the addition of the younger population. The slope of the $`\chi ^2`$ / age plot strongly indicates an old dominant population, with a best-fit age of 12 Gyr. There is very little evidence of nuclear contamination. HST imaging has shown this galaxy to be a moderate-sized elliptical, $`r_e=5`$ kpc (Dunlop et al. 2000).

1635+119 (MC1635+119, MC2) WHT

This is another very successful fit. An old age (12 Gyr) is inferred. Again, HST imaging has shown this galaxy to be a moderate-sized elliptical, $`r_e=6`$ kpc (McLure et al. 1999).

2215-037 (MS 22152-03, EX2215-037) WHT

2215-037 has a noisy spectrum, to which an acceptable fit has nevertheless been possible. Jimenez’ models suggest an old population, with a best-fitting age of 14 Gyr, but this age is not well constrained. This is the only object where the inclusion of a nuclear contribution to the fitted spectrum substantially changes the predicted age of the dominant stellar population. However, the $`\chi ^2`$ plots are very flat beyond an age of 5 Gyr, and the age is not strongly constrained in either case. HST imaging has shown this galaxy to have an elliptical morphology, with $`r_e=7`$ kpc (Dunlop et al. 2000).

2344+184 (E2344+184) M4M,WHT

2344+184 has been observed with both telescopes, and the fits to both spectra are in good agreement - although, formally, two different ages are predicted. This is because the $`\chi ^2`$ / age plots are fairly flat after about 8 Gyr. The fits to both observations suggest an old dominant population, with a small young blue population improving the fit. There is no significant change in the predictions when a nuclear flux component is included in the model. HST imaging has shown this to be one of the few host galaxies in the current sample to be disc-dominated. However, the nuclear component is in fact sufficiently weak that this object should really be classified as a Seyfert galaxy rather than an RQQ (McLure et al. 1999).

#### 3.1.3 Radio galaxies

0230-027 (PKS0230-027) WHT

0230-027 has a very noisy spectrum, and the colour of the host as derived from optical and infrared imaging is very blue ($`(R-K)_{obs}`$ = 2.09). Consequently, the best-fitting age derived using the models is 1 Gyr, with no younger component, but it is clear that little reliance can be placed on the accuracy of this result. Allowing for a contribution from quasar light does not improve the fit. HST imaging has shown this galaxy to have an elliptical morphology, with $`r_e=8`$ kpc (Dunlop et al. 2000).

0345+337 (3C93.1) WHT

0345+337 requires no young component or nuclear flux contribution at all, and an old age, of 12 Gyr, is clearly indicated by the models.
HST imaging has shown this galaxy to be a large elliptical, with $`r_e=13`$ kpc (McLure et al. 1999).

0917+459 (3C219, 3C219.0) WHT

This is an excellent fit, with an old age produced by the models (12 Gyr), together with a very small young population. HST imaging has shown this galaxy to be a large elliptical, with $`r_e=11`$ kpc (McLure et al. 1999).

1215-033 WHT

Jimenez’ models suggest that the population of 1215-033 is universally old (best-fit age 13 Gyr), with no young component or nuclear contribution required. HST imaging has shown this galaxy to be a large elliptical, with $`r_e=9`$ kpc (Dunlop et al. 2000).

1330+022 (3C287.1) M4M

An excellent fit to the data is produced by Jimenez’ models, indicating that the dominant population is old, with a best-fit age of 8 Gyr. HST imaging has shown this galaxy to be a large elliptical, with $`r_e=16`$ kpc (Dunlop et al. 2000).

2141+279 (3C436) M4M,WHT

This is another very successful, and well-constrained, fit, with an inferred age of 12 Gyr. HST imaging has shown this galaxy to be a large elliptical, with $`r_e=21`$ kpc (McLure et al. 1999).

### 3.2 Sample overview

The results illustrated in Appendix A are summarised in Table 3. It should be noted that the 4000$`\mathrm{\AA }`$ break typical of evolved stellar populations is present in the majority of the observed spectra (see Appendix A and Hughes et al. 2000), so we can be confident in fitting stellar population models to the data. The plots clearly show that the addition of even a very small amount of secondary star formation to the simple, near-instantaneous starburst models reproduces the blue end of the observed host galaxy spectra much more successfully (and in most cases very well) than does a single stellar population. Including a nuclear component further improves the fit to the blue end, especially for those spectra not well fitted by purely stellar light. At the same time, the red end of the spectra, plus the observed $`R-K`$ colours, generally require that the underlying stellar populations are old. The most meaningful output from the model fitting is a constraint on the minimum age of the host galaxies; the $`\chi ^2`$ plots show that often the best-fit age is not strongly constrained, but the trend is clearly towards old ($`\gtrsim `$ 8 Gyr) stellar populations. In general, young populations are strongly excluded. The peaks and troughs in $`\chi ^2`$ as a function of model age appear to be the result of real features of the population evolution synthesis, rather than being due to, for example, poor sampling (Jimenez, private communication).

## 4 Discussion

Figure 1 shows the distribution of best-fit ages estimated using Jimenez’ models. The left-hand panel shows the ages of the dominant stellar component which result from fitting the two-component (stars only) model, while the right-hand panel shows the ages of the dominant stellar component in the case of the three-component model (i.e. two stellar components + a contribution from scattered nuclear light). The host galaxies of the AGN in each sample are predominantly old, and this result is unaffected by whether or not one chooses to include some nuclear contribution to assist in fitting the blue end of the spectrum. This general result also appears to be relatively unaffected by the precise choice of stellar population model; the models of Bruzual and Charlot (2000), fitted to the host galaxies with the same process, reproduce the results for Jimenez’ models to within a typical accuracy of 1-3 Gyr.
We have not attempted to fit the stellar population models of Yi et al. (2000), because of problems we have found in their main-sequence (MS) rate of evolution (Nolan et al. 2000). Inclusion of a nuclear contribution in the three-component models obviously raises the question of whether a young stellar population component is really necessary at all. We thus also explored the results of fitting a two-component model consisting of a single stellar population plus a nuclear contribution. Such a model adequately reproduces the two-population-plus-nuclear-component results for the redder galaxies, but in fact the spectra of the bluest galaxies cannot be adequately reproduced by these models; the resulting serious increase in minimum $`\chi ^2`$ demonstrates that inclusion of a young population component is necessary to achieve a statistically acceptable fit to these data.

Figure 2 shows the distribution of the percentage contribution (by mass), $`\alpha `$, of the young (0.1 Gyr) component to the spectra of the hosts. The left-hand panel shows the values of $`\alpha `$ as derived from fitting the two-component (stars only) model, while the right-hand panel shows the values of $`\alpha `$ produced by the three-component model (i.e. two stellar components + a contribution from scattered nuclear light). Where two spectra of the same object have been obtained with alternative telescopes/instruments, the derived ages of the dominant stellar components are reproduced reassuringly well. There are, however, small discrepancies in the estimated percentage of young population present (see Table 3). This effect suggests that at least some of the blue light might be due to a scattered nuclear contribution, the strength of which would be highly dependent on the seeing at the time of observation, and on the precise repeatability of slit placement relative to the galaxy core. Interestingly, when a nuclear component is included with the stellar flux model, an even smaller percentage of 0.1 Gyr stellar population is required to fit the blue end of the spectra, and (more importantly) the difference in $`\alpha `$ between two spectra of the same object is generally reduced. This provides further support for the suggestion that some of the bluest quasar host spectra remain contaminated by quasar light at the shortest wavelengths, and indicates that the right-hand panel of Figure 2 provides a more realistic estimate of the level of on-going star-formation in the host galaxies. While this figure still appears to suggest that at least some quasar hosts display higher levels of ongoing star-formation activity than do radio galaxies, statistically this ‘result’ is not significant.

There are three galaxies which have very low age estimates, namely 0230-027, 0244+194 and 1020-103. These objects have the bluest observed $`R-K`$ colours, so it may be expected that the fitted ages would be younger than the rest of the sample, and that these populations are genuinely young. As discussed above, it may be that these objects are bluer because of scattered nuclear light contaminating the host galaxy spectrum. However, it seems unlikely that their fitted ages are low simply because of this, because elsewhere in our sample, where two spectra have been obtained of the same object, the amount of nuclear contamination present does not significantly affect the age estimation (e.g. 0736+017 and 0157+001). Moreover, the inclusion of a nuclear component in the fit does not change the estimated age distribution of the host galaxies.
If these ages are in error, then a more likely explanation, supported by the relative compactness of these particular host galaxies, is that nuclear and host contributions have been imperfectly separated in the $`K`$-band images, leading to an under-estimate of the near-infrared luminosity of the host. The result of the three-component fitting process, which also allows a contribution from scattered nuclear light, is that there are in fact only 3 host galaxies in the sample for which there is evidence that $`\alpha >0.5`$. One of these is the host of a radio-loud quasar (2135-147) but, as can be seen from Fig A2, this spectrum is one of the poorest (along with 0230-027, the only apparently young radio galaxy) in the entire dataset. The 2 convincing cases are both the hosts of radio-quiet quasars, namely 0157+001 and 2344+184. Within the somewhat larger sample of 13 RQQs imaged with the HST by McLure et al. (1999) and Dunlop et al. (2000), 4 objects showed evidence for a disk component in addition to a bulge, namely 0052+251, 0157+001, 0257+024, and 2344+184. Since we do not possess spectra of 0052+251 and 0257+024, this means that there is a 1:1 correspondence between the objects which we have identified on the basis of this spectroscopic study as having recent star-formation activity, and those which would be highlighted on the basis of HST imaging as possessing a significant disk component. This straightforward match clearly provides us with considerable confidence that the spectral decomposition attempted here has been effective and robust. Finally, we note that it is almost certainly significant that 0157+001, which has the largest starburst component ($`\alpha =1.1`$) based on this spectroscopic analysis, is also the only IRAS source in the sample.

## 5 Conclusions

We conclude that the hosts of all three major classes of AGN contain predominantly old stellar populations ($`\sim 11`$ Gyr) by $`z\sim 0.2`$. This agrees well with the results of McLure et al. (1999) and Dunlop et al. (2000), who compare host galaxy morphologies, luminosities, scale lengths and colours in the same sample, and conclude that the hosts are, to first order, indistinguishable from ‘normal’ quiescent giant elliptical galaxies. The best-fitting age of the dominant stellar population is not a function of AGN class. For the purely stellar models, the fitted percentage contribution of the blue component is, however, greater in the quasar hosts than in the radio galaxies; the median values are 0.6% for the 9 radio-loud quasars, 0.6% for the 9 radio-quiet quasars, and 0.05% for the 6 radio galaxies. However, when a nuclear component is included, the median values are 0.3% for the radio-loud quasars, 0.3% for the radio-quiet quasars, and 0.00% for the radio galaxies. Performing a Kolmogorov-Smirnov test on these results yields a probability greater than 0.2 that the percentage of young stellar population in host galaxies is in fact also not a function of AGN class. These results strongly support the conclusion that the host galaxies of all three major classes of AGN are massive ellipticals, dominated by old stellar populations.

ACKNOWLEDGEMENTS

Louisa Nolan acknowledges the support provided by the award of a PPARC Studentship. Marek Kukula and David Hughes acknowledge the support provided by the award of PPARC PDRAs, while Raul Jimenez acknowledges the award of a PPARC Advanced Fellowship.
We thank an anonymous referee for perceptive comments which helped to clarify the robustness of our results and improved the clarity of the paper.

REFERENCES

Bahcall J.N., Kirhakos S., Saxe D.H., Schneider D.P., 1997, ApJ, 479, 642
Bruzual A.G., Charlot S., 2000, in preparation
Disney M.J. et al., 1995, Nat, 376, 150
Dunlop J.S., 2000, In: ‘The Hy-Redshift Universe: Galaxy Formation and Evolution at High Redshift’, ASP Conf. Ser., Vol 193, eds. A.J. Bunker & W.J.M. van Breugel, in press (astro-ph/9912380)
Dunlop J.S., McLure R.J., Kukula M.J., Baum S.A., O’Dea C.P., Hughes D.H., 2000, MNRAS, in press
Dunlop J.S., Taylor G.L., Hughes D.H., Robson E.I., 1993, MNRAS, 264, 455
Hooper E.J., Impey C.D., Foltz C.B., 1997, ApJ, 480, L95
Hughes D.H., Kukula M.J., Dunlop J.S., Boroson T., 2000, MNRAS, in press
Hutchings J.B., Morris S.C., 1995, AJ, 109, 1541
Jimenez R., Dunlop J.S., Peacock J.A., Padoan P., MacDonald J., Jørgensen U.G., 2000, MNRAS, in press
Jimenez R., Padoan P., Matteucci F., Heavens A.F., 1998, MNRAS, 299, 123
Magorrian J., et al., 1998, AJ, 115, 2285
McLure R.J., Kukula M.J., Dunlop J.S., Baum S.A., O’Dea C.P., Hughes D.H., 1999, MNRAS, in press (astro-ph/9809030)
Nolan L.A., Dunlop J.S., Jimenez R., 2000, MNRAS, submitted (astro-ph/0004325)
Schade D.J., Boyle B.J., Letawsky M., 2000, MNRAS, in press
Taylor G.T., Dunlop J.S., Hughes D.H., Robson E.I., 1996, MNRAS, 283, 930
Yi S., Brown T.M., Heap S., Hubeny I., Landsman W., Lanz T., Sweigart A., 2000, ApJ, submitted

## Appendix A Spectra and $`\chi ^2`$ plots

The fits for all the off-nuclear spectra are given. In Fig A1, the rest-frame spectra are in the first column (black), with the best-fitting two-component model spectra (Jimenez et al. 2000) superimposed (green). The spectrum of the single-aged old population (red) is given for comparison. In Fig A2, the fits allowing for an additional contribution to the flux from the nucleus are presented. The key is the same as in A1, with the additional blue line representing the best-fitting two-component model flux plus the nuclear flux contribution. The second column of plots shows the $`\chi ^2`$ evolution with age for the dominant older population. The third column shows the best-fit $`\chi ^2`$ as a function of percentage young population, $`\alpha `$, for fixed ages of the dominant component. The models have solar metallicity. The subscript $`\eta `$ denotes results obtained by including the nuclear contribution. Where there are two spectra of the same object, the spectrum given first is the one observed on the Mayall 4m Telescope, and the second is that observed using the William Herschel Telescope. The data for the following objects have been smoothed using a Hanning function: 2135-147 (RLQ), 2141+175 (RLQ), 0244+194 (RQQ), 0923+201 (RQQ), 1549+203 (RQQ), 2215-037 (RQQ), 0230-027 (RG) and 0345+337 (RG).
# Modern Mathematical Physics: what it should be

When somebody asks me what I do in science, I call myself a specialist in mathematical physics. As I have been there for more than 40 years, I have some definite interpretation of this combination of words: “mathematical physics.” Cynics or purists can insist that this is neither mathematics nor physics, adding comments with varying degrees of malice. Naturally, this calls for an answer, and in this short essay I want to explain briefly my understanding of the subject. It can be considered as my contribution to the discussion about the origin and role of mathematical physics, and thus to be relevant for this volume.

The matter is complicated by the fact that the term “mathematical physics” (often abbreviated by MP in what follows) is used in different senses and can have rather different content. This content changes with time, place and person. I have not properly studied the history of science; however, it is my impression that, in the beginning of the twentieth century, the term MP was practically equivalent to the concept of theoretical physics. Not only Henri Poincaré, but also Albert Einstein, were called mathematical physicists. Newly established theoretical chairs were called chairs of mathematical physics. It follows from the documents in the archives of the Nobel Committee that MP had a right to appear both in the nominations and in the discussion of the candidates for the Nobel Prize in physics. Roughly speaking, the concept of MP covered theoretical papers where mathematical formulae were used.

However, during an unprecedented bloom of theoretical physics in the 20s and 30s, an essential separation of the terms “theoretical” and “mathematical” occurred. For many people, MP was reduced to the important but auxiliary course “Methods of Mathematical Physics”, comprising a set of useful mathematical tools. The monograph of P. Morse and H. Feshbach is a classical example of such a course, addressed to a wide circle of physicists and engineers.

On the other hand, MP in the mathematical interpretation appeared as a theory of partial differential equations and variational calculus. The monographs of R. Courant and D. Hilbert, and of S. Sobolev, are outstanding illustrations of this development. The theorems of existence and uniqueness based on variational principles, a priori estimates, and imbedding theorems for functional spaces comprise the main content of this direction. As a student of O. Ladyzhenskaya, I was immersed in this subject from the 3rd year of my undergraduate studies at the Physics Department of Leningrad University. My fellow student N. Uraltseva now holds the chair of MP exactly in this sense. MP in this context has as its source mainly geometry and such parts of classical mechanics as hydrodynamics and elasticity theory.

Since the 60s a new impetus to MP in this sense was supplied by Quantum Theory. Here the main apparatus is functional analysis, including the spectral theory of operators in Hilbert space, the mathematical theory of scattering, and the theory of Lie groups and their representations. The main subject is the Schrödinger operator. Though the methods and concrete content of this part of MP are essentially different from those of its classical counterpart, the methodological attitude is the same. One sees the quest for rigorous mathematical theorems about results which physicists understand in their own way.

I was born as a scientist exactly in this environment.
I graduated from the unique chair of Mathematical Physics, established by V.I. Smirnov at the Physics Department of Leningrad University already in the 30s. In this venture V.I. Smirnov got support from V. Fock, the world famous theoretical physicist with very wide mathematical interests. Originally this chair played the auxiliary role of being responsible for the mathematical courses for physics students. However, in 1955 it got permission to supervise its own diploma projects, and I belonged to the very first group of students using this opportunity. As I already mentioned, O.A. Ladyzhenskaya was our main professor. Although her own interests were mostly in nonlinear PDE and hydrodynamics, she decided to direct me to quantum theory. During the last two years of undergraduate studies I was to read the monograph of K.O. Friedrichs, “Mathematical Aspects of Quantum Field Theory,” and present it to our group of 5 students and our professor in a special seminar. At the same time my student friends from the chair of Theoretical Physics were absorbed in reading the first monograph on Quantum Electrodynamics by A. Ahieser and V. Berestevsky. The difference in attitudes and language was striking, and I had to become accustomed to both. After my graduation O.A. Ladyzhenskaya remained my tutor, but she left me free to choose research topics and literature to read. I read both mathematical papers (e.g. on direct and inverse scattering problems by I.M. Gelfand and B.M. Levitan, V.A. Marchenko, M.G. Krein, A.Ya. Povzner) and the “Physical Review” (e.g. on formal scattering theory by M. Gell-Mann, M. Goldberger, J. Schwinger and H. Ekstein) as well. Papers by I. Segal, L. Van-Hove and R. Haag added to my first impressions of Quantum Field Theory taken from K. Friedrichs. In the process of this self-education my own understanding of the nature and goals of MP gradually deviated from the prevailing views of the members of the V. Smirnov chair. I decided that it is more challenging to do something which is not known to my colleagues from theoretical physics rather than to supply rigorous substantiation for results they already understood in their own way. My first work, on the inverse scattering problem (especially for the many-dimensional Schrödinger operator) and on the three body scattering problem, confirms that I really tried to follow this line of thought. This attitude became even firmer when I began to work on Quantum Field Theory in the middle of the 60s. As a result, my understanding of the goal of MP was drastically modified. I consider as the main goal of MP the use of mathematical intuition for the derivation of really new results in fundamental physics. In this sense, MP and Theoretical Physics are competitors. Their goals in unraveling the laws of the structure of matter coincide. However, the methods and even the estimates of the importance of the results of work may differ quite significantly. Here it is time to say in what sense I use the term “fundamental physics.” The adjective “fundamental” has many possible interpretations when applied to the classification of science. In a wider sense it is used to characterize research directed at unraveling new properties of physical systems. In the narrow sense it is kept only for the search for the basic laws that govern and explain these properties. Thus, all chemical properties can be derived from the Schrödinger equation for a system of electrons and nuclei. Alternatively, we can say that the fundamental laws of chemistry in the narrow sense are already known.
This, of course, does not deprive chemistry of the right to be called a fundamental science in the wide sense. The same can be said about classical mechanics and the quantum physics of condensed matter. Whereas the largest part of physical research now lies in the latter, it is clear that all its successes, including the theory of superconductivity and superfluidity, Bose-Einstein condensation and the quantum Hall effect, have a fundamental explanation in the nonrelativistic quantum theory of many body systems. The physics of elementary particles remains an unfinished fundamental problem in the narrow sense. This puts this part of physics into a special position. And it is here that modern MP has the most probable chances for a breakthrough. Indeed, until recently all physics developed along the traditional cycle: experiment, theoretical interpretation, new experiment. So the theory traditionally followed the experiment. This imposes a severe censorship on theoretical work. Any idea, however bright, which is not supported by the experimental knowledge of the time when it appears is to be considered wrong and as such must be abandoned. Characteristically, the role of censors might be played by theoreticians themselves, and the great L. Landau and W. Pauli were, as far as I can judge, the most severe ones. And, of course, they had very good reason. On the other hand, the development of mathematics, which is also to a great extent influenced by applications, nevertheless has its own internal logic. Ideas are judged not by their relevance but more by esthetic criteria. The totalitarianism of theoretical physics gives way to a kind of democracy in mathematics and its inherent intuition. And exactly this freedom could prove useful for particle physics. This part of physics traditionally is based on the progress of accelerator techniques. The very high cost and restricted possibilities of the latter will soon become an insurmountable obstacle to further development. And it is here that mathematical intuition could give an adequate alternative. This was already stressed by famous theoreticians with mathematical inclinations. Indeed, let me cite a paper by P. Dirac from the early 30s:

> The steady progress of physics requires for its theoretical formulation a mathematics that gets continually more advanced. This is only natural and to be expected. What, however, was not expected by the scientific workers of the last century was the particular form that the line of advancement of the mathematics would take, namely, it was expected that the mathematics would get more complicated, but would rest on a permanent basis of axioms and definitions, while actually the modern physical developments have required a mathematics that continually shifts its foundations and gets more abstract. Non-euclidean geometry and non-commutative algebra, which were at one time considered to be purely fictions of the mind and pastimes for logical thinkers, have now been found to be very necessary for the description of general facts of the physical world. It seems likely that this process of increasing abstraction will continue in the future and that advance in physics is to be associated with a continual modification and generalization of the axioms at the base of mathematics rather than with logical development of any one mathematical scheme on a fixed foundation.
> There are at present fundamental problems in theoretical physics awaiting solution, *e.g.*, the relativistic formulation of quantum mechanics and the nature of atomic nuclei (to be followed by more difficult ones such as the problem of life), the solution of which problems will presumably require a more drastic revision of our fundamental concepts than any that have gone before. Quite likely these changes will be so great that it will be beyond the power of human intelligence to get the necessary new ideas by direct attempts to formulate the experimental data in mathematical terms. The theoretical worker in the future will therefore have to proceed in a more indirect way. The most powerful method of advance that can be suggested at present is to employ all the resources of pure mathematics in attempts to perfect and generalise the mathematical formalism that forms the existing basis of theoretical physics, and *after* each success in this direction, to try to interpret the new mathematical features in terms of physical entities.

Similar views were expressed by C.N. Yang. I did not find a compact citation, but the whole spirit of his commentaries on his own collection of papers shows this attitude. He also used to tell me this in private discussions. I believe that the dramatic history of establishing gauge fields as a basic tool in the description of interactions in Quantum Field Theory gives a good illustration of the influence of mathematical intuition on the development of fundamental physics. Gauge fields, or Yang–Mills fields, were introduced to the wide audience of physicists in 1954 in a short paper by C.N. Yang and R. Mills , dedicated to the generalization of the electromagnetic field and the corresponding principle of gauge invariance. The geometric sense of this principle for the electromagnetic field was made clear as early as the late 20s due to the papers of V. Fock and H. Weyl . They underlined the analogy between the gauge (or gradient, in the terminology of V. Fock) invariance of electrodynamics and the equivalence principle of Einstein's theory of gravitation. The gauge group in electrodynamics is commutative and corresponds to the multiplication of the complex field (or wave function) of the electrically charged particle by a phase factor depending on the space–time coordinates. Einstein's theory of gravity provides an example of a much more sophisticated gauge group, namely the group of general coordinate transformations. Both H. Weyl and V. Fock used the language of the moving frame with spin connection, associated with local Lorentz rotations. Thus the Lorentz group became the first nonabelian gauge group, and one can see there essentially all the formulas characteristic of nonabelian gauge fields. However, in contradistinction to the electromagnetic field, the spin connection enters the description of space-time and not of the internal space of electric charge. In the middle of the 30s, after the discovery of isotopic spin in nuclear physics and the formulation of Yukawa's idea of the intermediate boson, O. Klein tried to geometrize these objects. His proposal was based on his 5-dimensional picture. The proton and neutron (as well as the electron and neutrino; there was no clear distinction between strong and weak interactions at the time) were put together in an isovector, and the electromagnetic field and the charged vector meson comprised a $`2\times 2`$ matrix. However, the noncommutative $`SU(2)`$ gauge group was not mentioned. Klein's proposal was not received favorably, and N.
Bohr did not recommend that he publish a paper. So the idea remained only in the form of a contribution to the proceedings of the Warsaw Conference “New Theories in Physics” . The noncommutative group, acting in the internal space of charges, appeared for the first time in the paper of C.N. Yang and R. Mills in 1954. It is no wonder that Yang received a cool reaction when he presented his work at Princeton in 1954. The dramatic account of this event can be found in his commentaries . Pauli was in the audience and immediately raised the question about mass. Indeed, gauge invariance forbids the introduction of mass for the charged vector fields, and masslessness leads to long range interactions, which contradict experiment. The only known massless particles (and accompanying long range interactions) are the photon and the graviton. It is evident from Yang's text that Pauli was well acquainted with the differential geometry of nonabelian vector fields, but his own censorship did not allow him to speak about them. As we know now, the boldness of Yang and his esthetic feeling were finally vindicated. And it can rightly be said that C.N. Yang proceeded according to mathematical intuition. In 1954 the paper of Yang and Mills did not move to the forefront of high energy theoretical physics. However, the idea of the charge space with a noncommutative symmetry group acquired more and more popularity due to the increasing number of elementary particles and the search for a universal scheme of their classification. And at that time the decisive role in the promotion of the Yang–Mills fields was also played by mathematical intuition. At the beginning of the 60s, R. Feynman worked on the extension of his own scheme of quantization of the electromagnetic field to the gravitation theory of Einstein. A purely technical difficulty, the abundance of tensor indices, made his work rather slow. Following the advice of M. Gell-Mann, he exercised first on the simpler case of the Yang–Mills fields. To his surprise, he found that a naive generalization of his diagrammatic rules designed for electrodynamics did not work for the Yang-Mills field. The unitarity of the $`S`$-matrix was broken. Feynman restored the unitarity in one loop by reconstructing the full scattering amplitude from its imaginary part and found that the result could be interpreted as the subtraction of the contribution of some fictitious particle. However, his technique became quite cumbersome beyond one loop. His approach was gradually developed by B. DeWitt . It must be stressed that the apparent physical senselessness of the Yang–Mills field did not preclude Feynman from using it for a mathematical construction. The work of Feynman became one of the starting points for my own work in Quantum Field Theory, which I began in the middle of the 60s together with Victor Popov. Another equally important point was the mathematical monograph by A. Lichnerowicz , dedicated to the theory of connections in vector bundles. From Lichnerowicz's book it followed clearly that the Yang–Mills field has a definite geometric interpretation: it defines a connection in a vector bundle, the base being space-time and the fiber the linear space of a representation of the compact group of charges.
Thus, the Yang–Mills field finds its natural place among the fields of geometrical origin, between the electromagnetic field (which is its particular example for a one-dimensional charge) and Einstein's gravitational field, which deals with the tangent bundle of the Riemannian space-time manifold. It became clear to me that such a possibility could not be missed and that, notwithstanding the unsolved problem of zero mass, one must actively tackle the problem of the correct quantization of the Yang–Mills field. The geometric origin of the Yang–Mills field gave a natural way to resolve the difficulties with the diagrammatic rules. The formulation of the quantum theory in terms of Feynman's functional integral happened to be the most appropriate from the technical point of view. Indeed, to take the gauge equivalence principle into account one has to integrate over classes of gauge equivalent fields rather than over every individual configuration. As soon as this idea is understood, the technical realization is rather straightforward. As a result, V. Popov and I came out at the end of 1966 with a set of rules valid for all orders of perturbation theory. The fictitious particles appeared as auxiliary variables giving an integral representation for the nontrivial determinant entering the measure over the set of gauge orbits. The correct diagrammatic rules for the quantization of the Yang-Mills field, obtained by V. Popov and me in 1966–1967 , did not immediately attract the attention of physicists. Moreover, the time when our work was done was not favorable for it. Quantum Field Theory was virtually forbidden, especially in the Soviet Union, due to the influence of Landau. “The Hamiltonian is dead”: this phrase from his paper , dedicated to the anniversary of W. Pauli, shows the extreme of Landau's attitude. The reason was quite solid: it was based not on experiment but on the investigation of the effects of renormalization, which led Landau and his coworkers to believe that the renormalized physical coupling constant is inevitably zero for all possible local interactions. So there was no way for Victor Popov and me to publish an extended article in a major Soviet journal. We opted for a short communication in “Physics Letters” and were happy to be able to publish the full version in the preprint series of the newly opened Kiev Institute of Theoretical Physics. This preprint was finally translated into English by B. Lee as a Fermilab preprint in 1972, and from the preface to the translation it follows that it was already known in the West in 1968. A decisive role in the successful promotion of our diagrammatic rules into physics was played by the works of G. ’t Hooft , dedicated to the Yang–Mills field interacting with the Higgs field (and which ultimately led to a Nobel Prize for him in 1999), and by the discovery of dimensional transmutation (the term of S. Coleman ). The problem of mass was solved in the first case via spontaneous symmetry breaking. The second development was based on asymptotic freedom. There exists a vast literature dedicated to the history of this dramatic development. I refer to the recent papers of G. ’t Hooft and D. Gross , where the participants in this story share their impressions of this progress. As a result, the Standard Model of unified interactions got its main technical tool. From the middle of the 70s until our time it has remained the fundamental basis of high energy physics.
For our discourse it is important to stress once again that the paper based on mathematical intuition preceded the works made in the traditions of theoretical physics. The Standard Model did not complete the development of fundamental physics, in spite of its unexpected and astonishing experimental success. The gravitational interactions, whose geometrical interpretation is slightly different from that of the Yang–Mills theory, are not included in the Standard Model. The unification of quantum principles, Lorentz–Einstein relativity and Einstein gravity has not yet been accomplished. We have every reason to conjecture that modern MP and its mode of working will play the decisive role in the quest for such a unification. Indeed, the new generation of theoreticians in high energy physics have received an incomparably higher mathematical education. They are not subject to the pressure of old authorities maintaining the purity of physical thinking and/or terminology. Furthermore, many professional mathematicians, tempted by the beauty of the methods used by physicists, have moved to the position of modern mathematical physics. Let me cite from the manifesto written by R. MacPherson during the organization of the Quantum Field Theory year at the School of Mathematics of the Institute for Advanced Study at Princeton:

> The goal is to create and convey an understanding, in terms congenial to mathematicians, of some fundamental notions of physics, such as quantum field theory. The emphasis will be on developing the intuition stemming from functional integrals.
>
> One way to define the goals of the program is by negation, excluding certain important subjects commonly pursued by mathematicians whose work is motivated by physics. In this spirit, it is not planned to treat except peripherally the magnificent new applications of field theory, such as that of the Seiberg-Witten equations to Donaldson theory. Nor is it planned to consider fundamental new constructions within mathematics that were inspired by physics, such as quantum groups or vertex operator algebras. Nor is the aim to discuss how to provide mathematical rigor for physical theories. Rather, the goal is to develop the sort of intuition common among physicists for those who are used to thought processes stemming from geometry and algebra.

I propose to call the intuition to which MacPherson refers that of mathematical physics. I also recommend that the reader look at the instructive drawing by R. Dijkgraaf on the dust cover of the volumes of lectures given at the School . The union of these two groups constitutes an enormous intellectual force. In the next century we will learn if this force is capable of substituting for the traditional experimental base of the development of fundamental physics and the pertinent physical intuition.
# Precise Measurement of Cosmic-Ray Proton and Helium Spectra with the BESS Spectrometer

## 1 INTRODUCTION

The absolute fluxes and spectra of primary cosmic-ray protons and helium nuclei are fundamental reference information in cosmic-ray physics. They are needed to calculate secondary anti-protons, positrons, and diffuse gamma radiation, which in turn provide important knowledge of particle propagation and the matter distribution in interstellar space. They are also indispensable for studying atmospheric neutrinos. Although measurements of the proton and helium energy spectra have been performed in various experiments, the resultant absolute fluxes show discrepancies of up to a factor of two at 50 GeV. This ambiguity causes large uncertainty in calculations of the atmospheric neutrinos, as well as of secondary anti-protons, positrons, and diffuse gamma-rays. We report a new precise measurement of the cosmic-ray proton and helium spectra over the energy ranges of 1 to 120 GeV for protons and 1 to 54 GeV/nucleon for helium nuclei, based on half of the data from a BESS balloon flight in 1998. The covered energy range is relevant to the atmospheric neutrinos observed as “fully contained events” in Super-Kamiokande. In the BESS–98 flight, a new trigger mode was implemented with a silica-aerogel Cherenkov counter to record all energetic particles instead of sampling the protons at a ratio of 1/60 as done in the previous BESS flights. This drastically improved the statistics in the high-energy region above 6 GeV/nucleon, as reported here.

## 2 THE BESS SPECTROMETER

The BESS detector is a high-resolution spectrometer with a large acceptance, designed to perform highly sensitive searches for rare cosmic-ray components as well as precise measurements of the absolute fluxes of various cosmic-ray particles (Orito, 1987; Yamamoto et al., 1994; Ajima et al., 2000). As shown in Figure 1, all detector components are arranged in a simple cylindrical configuration with a thin superconducting solenoidal magnet. In the central region, the solenoid provides a uniform magnetic field of 1 Tesla ($`\pm 7`$ % in a fiducial volume). The trajectory of an incoming charged particle is measured by using the tracking system, which consists of a central JET chamber and two inner drift chambers (IDC's) in a volume of 0.84 m in diameter and 1 m in length. The magnetic rigidity ($`R\equiv pc/Ze`$) is reliably determined by a simple circular fitting of the deflection ($`R^{-1}`$) using up to 28 hit points, each with a spatial resolution of 200 $`\mu `$m . Figure 2 shows the deflection uncertainty for protons evaluated in the track fitting procedure. Since the magnetic field is highly uniform, the distribution has a narrow and sharp peak around 0.005 GV<sup>-1</sup>, which corresponds to a maximum detectable rigidity (MDR) of 200 GV, and has only a small tail. The outermost detector is a set of time-of-flight (TOF) hodoscopes with 2 cm thick plastic scintillators. It provides the velocity ($`\beta \equiv v/c`$) and energy loss ($`dE/dx`$) measurements. The time resolution for energetic protons in each counter is 55 ps rms, resulting in a $`\beta ^{-1}`$ resolution of 1.4 %. The data acquisition sequence is initiated by a first-level TOF trigger, a simple coincidence of signals in the top and bottom scintillators with the threshold level set at 1/3 of the pulse height of minimum ionizing particles (MIP's).
If the pulse height exceeds 2.5 times the MIP signal, the TOF trigger is labeled as “helium-trigger,” otherwise as “proton-trigger.” The TOF trigger efficiency was evaluated to be $`>99.95`$ % from muon data taken at sea level; thus, the systematic error caused by the TOF trigger inefficiency is negligibly small. In order to build a sample of unbiased triggers, one of every 60 “proton-triggered” and one of every 25 “helium-triggered” events was recorded irrespective of the subsequent online selections. The instrument has a threshold-type Cherenkov counter with a silica-aerogel radiator just below the top TOF hodoscope (Asaoka et al., 1998). The radiator was newly developed prior to the BESS–98 flight and has a refractive index of 1.022. An auxiliary trigger was generated by a signal from the Cherenkov counter to record all of the high energy particles above $`\sim `$ 6 GeV without bias or sampling. The efficiency of the Cherenkov trigger for energetic particles was determined to be $`92.1\pm 3.0`$ % by using the data sample of unbiased triggers. The BESS spectrometer was flown on 1998 July 29-30 from Lynn Lake, Manitoba, Canada. It floated at an altitude of 37 km (residual atmosphere of $`5\mathrm{g}/\mathrm{cm}^2`$) with a cutoff rigidity of 0.5 GV or smaller. The solar activity was close to the minimum.

## 3 DATA ANALYSIS

In the first stage of data reduction, we selected events with a single track fully contained inside a fiducial volume defined by the central four columns out of the eight columns in the JET chamber. This definition of the fiducial volume reduced the effective geometrical acceptance down to $`1/3`$ of the full acceptance, but it ensured the longest track fitting and thus the highest resolution in the rigidity measurement. The single-track selection eliminated rare interacting events. In order to verify the selection, events were scanned randomly in the unbiased trigger sample, and it was confirmed that 995 out of 1,000 visually-identified single-track events passed these selection criteria and that interacting events were fully eliminated. Thus, the track reconstruction efficiency was $`99.5\pm 0.2`$ % for a single-track event. In order to assure the accuracy of the rigidity measurements, quality cuts, such as one on the $`\chi ^2`$ of the track fitting procedure, were imposed on the single-track events. The efficiency of this quality check was estimated from loose cuts on the flight data to be $`93.8\pm 0.3`$ % and $`93.0\pm 0.9`$ % for protons and helium nuclei, respectively. It was almost constant over the whole energy range. In the final stage of data reduction, particle identification was performed, as shown in Figure 3, by requiring proper $`dE/dx`$ and 1/$`\beta `$ as a function of rigidity. According to a study of another sample of $`5\times 10^5`$ protons and $`4\times 10^4`$ helium nuclei selected by using the independent information of the energy loss inside the JET chamber, the $`dE/dx`$ selection efficiencies were $`99.3\pm 0.2`$ % for protons and $`98.2\pm 0.7`$ % for helium nuclei, and the contamination probabilities should be less than $`3\times 10^{-5}`$ for protons and $`4\times 10^{-4}`$ for helium nuclei. Since the 1/$`\beta `$ distribution is well described by a Gaussian and the half width of the 1/$`\beta `$ selection band was set at 3.89 $`\sigma `$, the efficiency is very close to unity (99.99 % for a pure Gaussian). Very clean proton samples were obtained below 3 GV. However, deuterons start to contaminate the proton band around 3 GV, where the contamination was observed to be 2 %.
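As a rough consistency check of the quoted trigger threshold, one can work out the kinematic Cherenkov threshold implied by the refractive index. The sketch below uses only standard kinematics (radiation requires $`\beta >1/n`$, i.e. $`p_{th}=m/\sqrt{n^2-1}`$); the particle masses are standard values inserted by hand, not numbers from this paper.

```python
# Cherenkov threshold kinematics for a silica-aerogel radiator (illustrative;
# masses in GeV are standard values).  Radiation requires beta > 1/n, i.e.
# threshold momentum p_th = m / sqrt(n^2 - 1).
import math

def threshold(mass_gev, n):
    p_th = mass_gev / math.sqrt(n * n - 1.0)
    t_th = math.sqrt(p_th**2 + mass_gev**2) - mass_gev  # kinetic energy at threshold
    return p_th, t_th

n = 1.022
for name, m, a in [("proton", 0.938, 1), ("helium-4", 3.727, 4)]:
    p, t = threshold(m, n)
    print(f"{name}: p_th = {p:.2f} GeV/c, T_th = {t / a:.2f} GeV/nucleon")
```

For $`n=1.022`$ the kinematic threshold comes out near 3.6 GeV of kinetic energy per nucleon, below the quoted $`\sim `$ 6 GeV, which is consistent with the trigger selecting particles well above the onset of Cherenkov radiation.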
No subtraction was made for this contamination, because it was as small as the statistical errors, and the deuteron-to-proton ratio decreases with energy (Seo et al., 1997), following the decrease in the escape path lengths of primary cosmic-ray nuclei (Engelmann et al., 1990). In conformity with previous experiments, all doubly charged particles were treated as <sup>4</sup>He. With the data reduction described above, 826,703 protons and 77,325 helium nuclei were finally identified. The combined efficiencies were $`92.7\pm 1.0`$ % for protons and $`90.8\pm 1.5`$ % for helium nuclei. The geometrical acceptance defined for this analysis was calculated to be $`0.0851\pm 0.0003`$ $`\mathrm{m}^2\mathrm{sr}`$ for energetic particles by using a simulation technique (Sullivan, 1971). The simple cylindrical shape and the uniform magnetic field make it trivial to determine the acceptance precisely. The error arises from the uncertainty of the detector alignment, which is known to within 1 mm. The live-time fraction of data taking was measured exactly to be 86.4 % by counting 1 MHz clock pulses. The energy of each particle at the top of the atmosphere was calculated by summing up the ionization energy losses while tracing back the event trajectory. Detection efficiencies were studied by using a Monte Carlo (M.C.) simulation. The M.C. code was developed by incorporating a detailed description of the various interactions of helium nuclei into GEANT (Brun et al., 1994), where the cross-sections and angular distributions of the nuclear interactions were evaluated by fitting experimental data (Bizard et al., 1977; Abdurakhimov et al., 1981; Gasparyan et al., 1982; Ableev et al., 1985; Grebenjuk et al., 1989; Glagolev et al., 1993; Abdullin et al., 1994) to energy-dependent empirical formulas (Bradt & Peters, 1950). The electromagnetic processes, mainly due to $`\delta `$–rays, are also treated properly. They are more significant in helium nuclei interactions, because the cross-sections of electromagnetic processes ($`\sigma _{em}`$) behave as $`Z^2`$, whereas those of hadronic processes ($`\sigma _{had}`$) are approximately proportional to $`(2Z)^{2/3}`$. The M.C. reproduced the observed event shape well. The simulated and observed numbers of hit counters in the bottom TOF hodoscope, for instance, agreed to within 0.9 % and 1.8 % for protons and helium, respectively. The systematic errors in the M.C. originate mainly in the uncertainties of $`\sigma _{had}`$ and $`\sigma _{em}`$. We attributed relative errors of $`\pm 5`$ % to $`\sigma _{had}(\mathrm{p}+\mathrm{A})`$, $`\pm 5`$ % to $`\sigma _{em}(\mathrm{p}+\mathrm{A})`$, $`\pm 15`$ % to $`\sigma _{had}(\mathrm{He}+\mathrm{A})`$, $`\pm 20`$ % to $`\sigma _{em}(\mathrm{He}+\mathrm{A})`$, and $`\pm 20`$ % to $`\sigma _{had}(\mathrm{CNO}+\mathrm{A})`$. Another source of systematic error in the M.C. was the uncertainty of the material distribution inside the BESS spectrometer, which was estimated to be $`\pm 10`$ %. The probabilities that cosmic-ray protons and helium nuclei, respectively, pass through the whole detector without interaction were $`87.6\pm 2.3`$ % and $`74.5\pm 6.8`$ % at 1 GeV, and $`79.7\pm 2.9`$ % and $`64.6\pm 7.5`$ % at 100 GeV. According to similar M.C. studies, the probabilities that primary cosmic-rays penetrate the residual atmosphere of 5 g/cm<sup>2</sup> are about 94 % and 90 % for protons and helium, respectively, over the entire energy range discussed here.
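The quantities above combine into the absolute flux in the standard way. The sketch below shows the bookkeeping only; the event count, energy bin and flight duration are hypothetical placeholders, not published values, while the acceptance, live-time fraction and combined efficiency are the numbers quoted in the text.

```python
# Schematic flux computation (illustrative; n_obs, the energy bin and the
# flight duration are placeholders, not the published BESS-98 values).
# The differential flux in a bin [E1, E2] is
#     Phi = N_obs / (S_Omega * T_live * eps * (E2 - E1)).

def differential_flux(n_obs, acceptance_m2sr, t_live_s, efficiency, e1_gev, e2_gev):
    """Return flux in particles / (m^2 sr s GeV)."""
    return n_obs / (acceptance_m2sr * t_live_s * efficiency * (e2_gev - e1_gev))

phi = differential_flux(
    n_obs=1000,                      # hypothetical proton count in the bin
    acceptance_m2sr=0.0851,          # quoted geometrical acceptance
    t_live_s=0.864 * 12 * 3600,      # 86.4% live fraction of an assumed ~12 h exposure
    efficiency=0.927,                # quoted combined proton efficiency
    e1_gev=10.0, e2_gev=12.0,
)
print(f"flux ~ {phi:.3e} m^-2 sr^-1 s^-1 GeV^-1")
```

In the real analysis, corrections for atmospheric secondaries and survival probabilities, described next, also enter before the flux at the top of the atmosphere is quoted.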
Atmospheric secondary protons, which account for about 3.5 % of the observed protons at 1 GeV and less than 1.5 % above 10 GeV, were subtracted based on the calculation by Papini et al. (1996). Atmospheric secondary helium above 1 GeV/nucleon is dominated by fragments of heavier cosmic-ray nuclei (mainly carbon and oxygen). The flux ratio of atmospheric secondary helium to primary C+O was calculated to be 0.14 at a depth of $`5\mathrm{g}/\mathrm{cm}^2`$, based on the total inelastic cross-sections of CNO + Air interactions and the helium multiplicity in $`{}_{}{}^{12}\mathrm{C}+\mathrm{CNO}`$ interactions (Ahmad et al., 1989). The total correction for atmospheric secondary helium due to all nuclei with $`Z>2`$ was estimated to be about 2 % over the entire energy range. The ambiguity in this estimation arises from the uncertainties in $`\sigma _{had}(\mathrm{CNO}+\mathrm{A})`$ and in the absolute fluxes of heavier cosmic-ray nuclei, to which we attributed a relative error of $`\pm 20`$ %. The systematic errors originating in the correction of the residual air effect were estimated to be $`\pm 0.3`$ % for protons and $`\pm 2.0`$ % for helium nuclei. Because of the finite resolution of the rigidity measurement and the very steep spectral shape, the observed spectrum may suffer deformation. The effect of the finite resolution was estimated by a simulation in which the error in the rigidity measurement was tuned to reproduce the distribution shown in Figure 2. The effect was found to be smaller than 1 % below 25 GV, but it becomes visible with increasing rigidity: the observed spectrum gradually falls below the original spectrum, by up to 2.5 % at 70 GV, and then rapidly returns to zero deviation around 120 GV. No correction was made for this deformation, because the effect is as small as the statistical errors over the energy range discussed here.

## 4 EXPERIMENTAL RESULTS AND CONCLUSION

The proton and helium fluxes at the top of the atmosphere have been obtained from the BESS–98 flight data, as summarized in Table 1 and as shown in Figure 4 in comparison with previous experiments. The first and second errors in Table 1 represent the statistical and systematic errors, respectively. The overall errors, including both, are less than $`\pm 5`$ % for protons and $`\pm 10`$ % for helium nuclei. The dotted lines in Figure 4 indicate the spectra assumed in the calculation of atmospheric neutrinos (Honda et al., 1995). Our results, as well as other recent measurements, favor lower fluxes than those assumed in the atmospheric neutrino calculation, especially above a few tens of GeV. This suggests that the atmospheric neutrino flux predictions may need to be reconsidered. Precise measurements of primary cosmic-rays will help to improve the accuracy of the atmospheric neutrino calculations. The authors would like to thank NASA and NSBF for the balloon flight operation. This experiment was supported by Grants-in-Aid from Monbusho and the Heiwa Nakajima Foundation in Japan and by NASA in the U.S.A. The analysis was performed with the computing facilities at ICEPP, University of Tokyo.
# A Model Independent Determination of $`|V_{ub}|`$

## Abstract

It is shown that measuring the lepton invariant mass spectrum in inclusive semileptonic $`\overline{B}\to X_u\ell \overline{\nu }`$ decay yields a model independent determination of $`|V_{ub}|`$. Unlike the lepton energy and hadronic invariant mass spectra, nonperturbative effects are only important in the resonance region, and play a parametrically suppressed role when $`\mathrm{d}\mathrm{\Gamma }/\mathrm{d}q^2`$ is integrated over $`q^2>(m_B-m_D)^2`$, which is required to eliminate the charm background. Perturbative and nonperturbative corrections are presented to order $`\alpha _s^2\beta _0`$ and $`\mathrm{\Lambda }_{\mathrm{QCD}}^2/m_b^2`$, and the $`\mathrm{\Lambda }_{\mathrm{QCD}}^3/m_b^3`$ corrections are used to estimate the uncertainty in our results. The utility of the $`\overline{B}\to X_s\ell ^+\ell ^{-}`$ decay rate above the $`\psi (2S)`$ resonance is discussed.

preprint: UTPT-00-02 FERMILAB-Pub-00/039-T hep-ph/0002161

A precise and model independent determination of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element $`V_{ub}`$ is important for testing the Standard Model at $`B`$ factories via the comparison of the angles and the sides of the unitarity triangle. The first extraction of $`|V_{ub}|`$ from experimental data relied on a study of the lepton energy spectrum in inclusive charmless semileptonic $`B`$ decay . Recently $`|V_{ub}|`$ was also measured from exclusive semileptonic $`\overline{B}\to \rho \ell \overline{\nu }`$ and $`\overline{B}\to \pi \ell \overline{\nu }`$ decay , and from inclusive decays using the reconstruction of the invariant mass of the hadronic final state . These determinations suffer from large model dependence. The exclusive $`|V_{ub}|`$ measurements rely on form factor models or quenched lattice calculations at the present time.<sup>*</sup><sup>*</sup>*A model independent determination of $`|V_{ub}|`$ from exclusive decays is possible without first order heavy quark symmetry or chiral symmetry breaking corrections , but it requires data on $`\overline{B}\to K^{*}\ell ^+\ell ^{-}`$. A model independent extraction is also possible from decays to wrong-sign charm , but this is very challenging experimentally. See also for a discussion of extracting $`|V_{ub}|`$ from a comparison of photon spectra in $`B`$ and $`D`$ radiative leptonic decays. Inclusive $`B`$ decay rates are currently on a better theoretical footing, since they can be computed model independently in a series in $`\mathrm{\Lambda }_{\mathrm{QCD}}/m_b`$ and $`\alpha _s(m_b)`$ using an operator product expansion (OPE) . However, the predictions of the OPE are only model independent for sufficiently inclusive observables, while the $`\overline{B}\to X_u\ell \overline{\nu }`$ decay rate can only be measured by imposing severe cuts on the phase space to eliminate the $`\sim 100`$ times larger $`\overline{B}\to X_c\ell \overline{\nu }`$ background. For both the charged lepton and hadronic invariant mass spectra, these cuts spoil the convergence of the OPE, and the most singular terms must be resummed into a nonperturbative $`b`$ quark distribution function. While it may be possible to extract this from the photon spectrum in $`B\to X_s\gamma `$ , it would clearly be simpler to find an observable for which the OPE does not break down in the region of phase space free from charm background. In this Letter we show that this is the situation for the lepton invariant mass spectrum.
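Before turning to the OPE analysis, it is useful to fix the kinematic quantities that recur below. The following minimal sketch is illustrative only; the meson masses are standard values inserted by hand, not results of this paper.

```python
# Kinematics of the charm-background cuts (illustrative; masses in GeV are
# standard values inserted by hand).
m_B = 5.279   # B meson mass
m_D = 1.869   # D meson mass

q2_cut = (m_B - m_D) ** 2               # lepton invariant mass cut, ~11.6 GeV^2
El_cut = (m_B**2 - m_D**2) / (2 * m_B)  # lepton endpoint cut, ~2.31 GeV
dEl_endpoint = m_D**2 / (2 * m_B)       # size of the endpoint region, ~0.33 GeV

print(f"q0^2 = {q2_cut:.1f} GeV^2")
print(f"E_l cut = {El_cut:.2f} GeV, endpoint region = {dEl_endpoint:.2f} GeV")
```

These are the numbers quoted below: $`q_0^2\simeq 11.6\mathrm{GeV}^2`$ for the lepton invariant mass cut and $`\simeq 0.33`$ GeV for the lepton endpoint region.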
At leading order in the $`\mathrm{\Lambda }_{\mathrm{QCD}}/m_b`$ expansion the $`B`$ meson decay rate is equal to the $`b`$ quark decay rate. Nonperturbative effects are suppressed by at least two powers of $`\mathrm{\Lambda }_{\mathrm{QCD}}/m_b`$. Corrections of order $`\mathrm{\Lambda }_{\mathrm{QCD}}^2/m_b^2`$ are characterized by two heavy quark effective theory (HQET) matrix elements , which are defined by $`\lambda _1`$ $`=`$ $`\langle B(v)|\overline{h}_v^{(b)}(iD)^2h_v^{(b)}|B(v)\rangle /2m_B,`$ (1) $`\lambda _2`$ $`=`$ $`\langle B(v)|\frac{g_s}{2}\overline{h}_v^{(b)}\sigma _{\mu \nu }G^{\mu \nu }h_v^{(b)}|B(v)\rangle /6m_B.`$ (2) These matrix elements also occur in the expansion of the $`B`$ and $`B^{*}`$ masses in powers of $`\mathrm{\Lambda }_{\mathrm{QCD}}/m_b`$, $$m_B=m_b+\overline{\mathrm{\Lambda }}-\frac{\lambda _1+3\lambda _2}{2m_b}+\mathrm{\dots },m_B^{*}=m_b+\overline{\mathrm{\Lambda }}-\frac{\lambda _1-\lambda _2}{2m_b}+\mathrm{\dots }.$$ (3) Similar formulae hold for the $`D`$ and $`D^{*}`$ masses. The parameters $`\overline{\mathrm{\Lambda }}`$ and $`\lambda _1`$ are independent of the heavy $`b`$ quark mass, while there is a weak logarithmic scale dependence in $`\lambda _2`$. The measured $`B^{*}-B`$ mass splitting fixes $`\lambda _2(m_b)=0.12\mathrm{GeV}^2`$. At $`\mathcal{O}(\mathrm{\Lambda }_{\mathrm{QCD}}^3/m_b^3)`$ seven additional parameters arise in the OPE , and varying these parameters is often used to estimate the theoretical uncertainty in the OPE . In inclusive semileptonic $`B`$ decay, for a particular hadronic final state $`X`$, the maximum lepton energy is $`E_\ell ^{(\mathrm{max})}=(m_B^2-m_X^2)/2m_B`$ (in the $`B`$ rest frame), so to eliminate charm background one must impose a cut $`E_\ell >(m_B^2-m_D^2)/2m_B`$. The maximum lepton energy in semileptonic $`b`$ quark decay is $`m_b/2`$, which is less than the physical endpoint $`m_B/2`$. Their difference, $`\overline{\mathrm{\Lambda }}/2`$, is comparable in size to the endpoint region $`\mathrm{\Delta }E_\ell ^{(\mathrm{endpoint})}=m_D^2/2m_B\simeq 0.33`$ GeV. The effects which extend the lepton spectrum beyond its partonic endpoint appear as singular terms in the prediction for $`\mathrm{d}\mathrm{\Gamma }/\mathrm{d}E_\ell `$ involving derivatives of delta functions, $`\delta ^{(n)}(1-2E_\ell /m_b)`$. The lepton spectrum must be smeared over a region of energies $`\mathrm{\Delta }E_\ell `$ near the endpoint before theory can be compared with experiment. If the smearing region $`\mathrm{\Delta }E_\ell `$ is much smaller than $`\mathrm{\Lambda }_{\mathrm{QCD}}`$, then higher dimension operators in the OPE become successively more important and the OPE is not useful for describing the lepton energy spectrum. For $`\mathrm{\Delta }E_\ell \gg \mathrm{\Lambda }_{\mathrm{QCD}}`$, higher dimension operators become successively less important and a useful prediction for the lepton spectrum can be made using the first few terms in the OPE. When $`\mathrm{\Delta }E_\ell \sim \mathrm{\Lambda }_{\mathrm{QCD}}`$, there is an infinite series of terms in the OPE which are all equally important. Since $`\mathrm{\Delta }E_\ell ^{(\mathrm{endpoint})}`$ is about $`\mathrm{\Lambda }_{\mathrm{QCD}}`$, it seems unlikely that predictions based on a few low dimension operators in the OPE can successfully determine the lepton spectrum in this region. It was shown in that the leading singularities in the OPE may be resummed into a nonperturbative light-cone distribution function $`f(k_+)`$ for the heavy quark.
To leading order in $`1/m_b`$, the effects of the distribution function may be included by replacing $`m_b`$ by $`m_b^{}\equiv m_b+k_+`$, and integrating over the light-cone momentum $$\frac{\mathrm{d}\mathrm{\Gamma }}{\mathrm{d}E_\ell }=\int \mathrm{d}k_+f(k_+)\frac{\mathrm{d}\mathrm{\Gamma }_\mathrm{p}}{\mathrm{d}E_\ell }|_{m_b\to m_b^{}},$$ (4) where $`\mathrm{d}\mathrm{\Gamma }_\mathrm{p}/\mathrm{d}E_\ell `$ is the parton-level spectrum. Analogous formulae hold for other differential distributions . For purposes of illustration, we will use a simple model for the structure function given by the one-parameter ansatz $$f(k_+)=\frac{32}{\pi ^2\mathrm{\Lambda }}(1-x)^2\mathrm{exp}\left[-\frac{4}{\pi }(1-x)^2\right]\mathrm{\Theta }(1-x),x\equiv \frac{k_+}{\mathrm{\Lambda }},$$ (5) taking the model parameter $`\mathrm{\Lambda }=0.48`$ GeV, corresponding to $`m_b=4.8`$ GeV. The charm background can also be eliminated by reconstructing the invariant mass of the hadronic final state, $`m_X`$, since decays with $`m_X<m_D`$ must arise from the $`b\to u`$ transition. While this analysis is challenging experimentally, the $`m_X<m_D`$ cut allows a much larger fraction of $`b\to u`$ decays than the $`E_\ell >(m_B^2-m_D^2)/2m_B`$ constraint. This is expected to result in a reduction of the theoretical uncertainties , although both the lepton endpoint region, $`E_\ell >(m_B^2-m_D^2)/2m_B`$, and the low hadronic invariant mass region, $`m_X<m_D`$, receive contributions from the same set of hadronic final states (but with very different weights). However, the same nonperturbative effects which lead to the breakdown of predictive power in the lepton endpoint region also give large uncertainties in the hadron mass spectrum over the range $`m_X^2\lesssim \overline{\mathrm{\Lambda }}m_b`$ . In other words, nonperturbative effects yield formally $`\mathcal{O}(1)`$ uncertainties in both cases, because numerically $`m_D^2\sim \mathrm{\Lambda }_{\mathrm{QCD}}m_B`$. The situation is illustrated in Fig. 1. The situation is very different for the lepton invariant mass spectrum. Decays with $`q^2\equiv (p_\ell +p_{\overline{\nu }})^2>(m_B-m_D)^2`$ must arise from the $`b\to u`$ transition. Such a cut forbids the hadronic final state from moving fast in the $`B`$ rest frame, and so the light-cone expansion which gives rise to the shape function is not relevant in this region of phase space.<sup>†</sup>The fact that the $`b`$ quark distribution function is not relevant for large $`q^2`$ was pointed out in in the context of $`\overline{B}\to X_s\ell ^+\ell ^{-}`$ decay and in for semileptonic $`\overline{B}\to X_u`$ decay. This is clear from the kinematics: the difference between the partonic and hadronic values of the maximum $`q^2`$ is $`m_B^2-m_b^2\simeq 2\overline{\mathrm{\Lambda }}m_b`$, and nonperturbative effects are only important in a region of comparable size. For example, the most singular term in the OPE at order $`(\mathrm{\Lambda }_{\mathrm{QCD}}/m_b)^3`$ is of order $`\left(\mathrm{\Lambda }_{\mathrm{QCD}}/m_b\right)^3\delta (1-q^2/m_b^2)`$. This contribution to the decay rate is not suppressed compared to the lowest order term in the OPE only if the spectrum is integrated over a small region of width $`\mathrm{\Delta }q^2\sim \mathrm{\Lambda }_{\mathrm{QCD}}m_b`$ near the endpoint. This is the resonance region where only hadronic final states with masses $`m_X\sim \mathrm{\Lambda }_{\mathrm{QCD}}`$ can contribute, and the OPE is not expected to work anyway.
In contrast, nonperturbative effects are important in the $`E_\ell `$ and $`m_X^2`$ spectra in a parametrically much larger region, where final states with masses $`m_X^2\sim \mathrm{\Lambda }_{\mathrm{QCD}}m_b`$ contribute.<sup>‡</sup>Similar arguments also show that the light-cone distribution function is not relevant for small hadron energy, $`E_X`$, but it does enter for $`E_X`$ near $`m_b/2`$. If $`m_b/2-m_c\gg \mathrm{\Lambda }_{\mathrm{QCD}}`$, then the constraint $`E_X<m_D`$ in the $`B`$ rest frame would also give a model independent determination of $`|V_{ub}|`$. The better behavior of the $`q^2`$ spectrum compared to the $`E_\ell `$ and $`m_X^2`$ spectra is also reflected in the perturbation series. There are Sudakov double logarithms near the phase space boundaries in the $`E_\ell `$ and $`m_X^2`$ spectra, whereas there are only single logarithms in the $`q^2`$ spectrum. The effect of smearing the $`q^2`$ spectrum with the model distribution function in Eq. (5) is illustrated in Fig. 2. In accord with our previous arguments, it is easily seen to be subleading over the region of interest. Table I compares qualitatively the utility of the lepton energy, the hadronic invariant mass, and the lepton invariant mass spectra for the determination of $`|V_{ub}|`$. We now proceed to calculate the $`\overline{B}\to X_u\ell \overline{\nu }`$ decay rate with lepton invariant mass above a given cutoff, working to fixed order in the OPE (i.e., ignoring the light-cone distribution function, which is irrelevant for our analysis). The lepton invariant mass spectrum including the leading perturbative and nonperturbative corrections is given by $$\frac{1}{\mathrm{\Gamma }_0}\frac{\mathrm{d}\mathrm{\Gamma }}{\mathrm{d}\widehat{q}^2}=\left(1+\frac{\lambda _1}{2m_b^2}\right)2(1-\widehat{q}^2)^2(1+2\widehat{q}^2)+\frac{\lambda _2}{m_b^2}(3-45\widehat{q}^4+30\widehat{q}^6)+\frac{\alpha _s(m_b)}{\pi }X(\widehat{q}^2)+\left(\frac{\alpha _s(m_b)}{\pi }\right)^2\beta _0Y(\widehat{q}^2)+\mathrm{\dots },$$ (7) where $`\widehat{q}^2=q^2/m_b^2`$, $`\beta _0=11-2n_f/3`$, and $$\mathrm{\Gamma }_0=\frac{G_F^2|V_{ub}|^2m_b^5}{192\pi ^3}$$ (8) is the tree level $`b\to u`$ decay rate. The ellipses in Eq. (7) denote terms of order $`(\mathrm{\Lambda }_{\mathrm{QCD}}/m_b)^3`$ and order $`\alpha _s^2`$ terms not enhanced by $`\beta _0`$. The function $`X(\widehat{q}^2)`$ is given analytically in Ref. , whereas $`Y(\widehat{q}^2)`$ was computed numerically in Ref. . The order $`1/m_b^3`$ nonperturbative corrections were computed in Ref. . The matrix element of the kinetic energy operator, $`\lambda _1`$, enters the $`\widehat{q}^2`$ spectrum only in a very simple form, because the unit operator and the kinetic energy operator are related by reparameterization invariance . Any quantity which can be written independently of the heavy quark velocity $`v`$ must depend only on the combination $`(1+\lambda _1/2m_b^2)`$. The $`\widehat{q}^2`$ spectrum (and the total rate written in terms of $`m_b`$) are invariant under a redefinition of $`v`$, but, for example, the lepton energy spectrum is not, since $`E_\ell =v\cdot p_e`$. (Equivalently, the $`\lambda _1`$ term is a time-dilation effect, and hence is universal in any quantity that is independent of the rest frame of the $`B`$ meson .)
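The smearing in Eqs. (4)–(5) is straightforward to reproduce numerically. The sketch below only checks two properties of the ansatz of Eq. (5) that the discussion relies on: it is normalized to unity, and its first moment vanishes, so on average the smearing does not shift $`m_b`$. The integration grid is an arbitrary numerical choice.

```python
# Numerical check of the one-parameter ansatz of Eq. (5): normalization and
# first moment of f(k+).  The grid below is an arbitrary numerical choice.
import numpy as np

LAM = 0.48  # GeV, model parameter Lambda (corresponding to m_b = 4.8 GeV)

def f(kp):
    x = kp / LAM
    out = 32.0 / (np.pi**2 * LAM) * (1 - x) ** 2 * np.exp(-4.0 / np.pi * (1 - x) ** 2)
    return np.where(x < 1.0, out, 0.0)   # Theta(1 - x): support is k+ < Lambda

kp = np.linspace(-10 * LAM, LAM, 200001)  # lower limit far into the (Gaussian) tail
dk = kp[1] - kp[0]
print(f"normalization = {np.sum(f(kp)) * dk:.6f}")          # -> 1.000000 (up to grid error)
print(f"<k+>          = {np.sum(kp * f(kp)) * dk:.6f} GeV")  # -> 0 numerically
```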
We shall compute the fraction of $`\overline{B}\to X_u\ell \overline{\nu }`$ events with $`q^2>q_0^2`$, $`F(q_0^2)`$, as the relation between the total $`\overline{B}\to X_u\ell \overline{\nu }`$ decay rate and $`|V_{ub}|`$ has been extensively discussed in the literature , and is known including the full $`\alpha _s^2`$ corrections . After integrating the spectrum in Eq. (7), we can eliminate the dependence on the $`b`$ quark mass in favor of the spin averaged meson mass $`\overline{m}_B=(m_B+3m_B^{*})/4\simeq 5.313`$ GeV, following . We find $`F(q_0^2)=1-2Q_0^2+2Q_0^6-Q_0^8-\frac{4\overline{\mathrm{\Lambda }}}{\overline{m}_B}\left(Q_0^2-3Q_0^6+2Q_0^8\right)-\frac{6\overline{\mathrm{\Lambda }}^2}{\overline{m}_B^2}\left(Q_0^2-7Q_0^6+6Q_0^8\right)`$ (9) $`+\frac{2\lambda _1}{\overline{m}_B^2}\left(Q_0^2-3Q_0^6+2Q_0^8\right)-\frac{12\lambda _2}{\overline{m}_B^2}\left(Q_0^2-2Q_0^6+Q_0^8\right)`$ (10) $`+\frac{\alpha _s(m_b)}{\pi }\stackrel{~}{X}(Q_0^2)+\left(\frac{\alpha _s(m_b)}{\pi }\right)^2\beta _0\stackrel{~}{Y}(Q_0^2)+\mathrm{\dots },`$ (11) where $`Q_0\equiv q_0/\overline{m}_B`$. The functions $`\stackrel{~}{X}(Q_0^2)`$ and $`\stackrel{~}{Y}(Q_0^2)`$ can be calculated from $`X(\widehat{q}^2)`$ and $`Y(\widehat{q}^2)`$. Converting to the physical $`B`$ meson mass has introduced a strong dependence on the parameter $`\overline{\mathrm{\Lambda }}`$, the mass of the light degrees of freedom in the $`B`$ meson. For $`q_0^2=(m_B-m_D)^2\simeq 11.6\mathrm{GeV}^2`$, we find $`F(11.6\mathrm{GeV}^2)=0.287+0.027\alpha _s(m_b)-0.016\alpha _s^2(m_b)\beta _0-0.20\overline{\mathrm{\Lambda }}/(1\mathrm{GeV})-0.02\overline{\mathrm{\Lambda }}^2/(1\mathrm{GeV}^2)+0.02\lambda _1/(1\mathrm{GeV}^2)-0.13\lambda _2/(1\mathrm{GeV}^2)+\mathrm{\dots }`$ . The order $`\alpha _s\overline{\mathrm{\Lambda }}`$ term is negligible and has been omitted. Using $`\overline{\mathrm{\Lambda }}=0.4`$ GeV, $`\lambda _1=-0.2\mathrm{GeV}^2`$ and $`\alpha _s(m_b)=0.22`$, we obtain $`F(11.6\mathrm{GeV}^2)=0.186`$. There are several sources of uncertainties in the value of $`F`$. The perturbative uncertainties are negligible, as can be seen from the size of the $`\mathcal{O}(\alpha _s^2\beta _0)`$ contributions. At the present time there is a sizable uncertainty since $`\overline{\mathrm{\Lambda }}`$ is not known accurately. In the future, a $`\pm 50`$ MeV error in $`\overline{\mathrm{\Lambda }}`$ will result in a $`\pm 5\%`$ uncertainty in $`F`$. Finally, uncertainties from $`1/m_b^3`$ operators can be estimated by varying the matrix elements of the dimension six operators within the range expected by dimensional analysis, as discussed in detail in . This results in an additional $`\pm 4\%`$ uncertainty in $`F`$. We note that this is a somewhat ad hoc procedure, since there is no real way to quantify the theoretical error due to unknown higher order terms. Therefore, these estimates should be treated as nothing more than (hopefully) educated guesses. They do allow, however, for a consistent comparison of the uncertainties in different quantities. If $`q_0^2`$ has to be chosen larger, then the uncertainties increase.
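As a quick numerical cross-check of the expansion just quoted, one can evaluate it with the stated inputs. In the sketch below, $`\beta _0`$ is computed with $`n_f=4`$ light flavors (our assumption; the paper does not state the value used), and the small difference from the quoted 0.186 reflects the rounding of the displayed coefficients.

```python
# Evaluate the expansion of F(q0^2 = 11.6 GeV^2) quoted above.
# beta_0 = 11 - 2*n_f/3 with n_f = 4 (our assumption); differences from the
# quoted 0.186 reflect rounding of the displayed coefficients.
alpha_s = 0.22
beta0 = 11.0 - 2.0 * 4 / 3.0
lam_bar = 0.4        # GeV
lambda1 = -0.2       # GeV^2
lambda2 = 0.12       # GeV^2

F = (0.287 + 0.027 * alpha_s - 0.016 * alpha_s**2 * beta0
     - 0.20 * lam_bar - 0.02 * lam_bar**2
     + 0.02 * lambda1 - 0.13 * lambda2)
print(f"F(11.6 GeV^2) = {F:.3f}")   # ~0.18, consistent with the quoted 0.186
```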
For example, for $`q_0^2=15\mathrm{GeV}^2`$, we obtain $`F(15\mathrm{GeV}^2)=0.158+0.024\alpha _s(m_b)-0.012\alpha _s^2(m_b)\beta _0-0.18\overline{\mathrm{\Lambda }}/(1\mathrm{GeV})+0.01\overline{\mathrm{\Lambda }}^2/(1\mathrm{GeV}^2)+0.02\lambda _1/(1\mathrm{GeV}^2)-0.13\lambda _2/(1\mathrm{GeV}^2)+\mathrm{\dots }\simeq 0.067`$, using the previous values of $`\overline{\mathrm{\Lambda }}`$ and $`\lambda _1`$. The perturbative uncertainties are still negligible, while the uncertainties due to a $`\pm 50`$ MeV error in $`\overline{\mathrm{\Lambda }}`$ and unknown dimension six matrix elements increase to $`\pm 14\%`$ and $`\pm 13\%`$, respectively. (This uncertainty may be reduced using data on the rare decay $`\overline{B}\to X_s\ell ^+\ell ^{-}`$ in the large $`q^2`$ region, as discussed below.) Another possible method to compute $`F(q_0^2)`$ uses the upsilon expansion . By expressing $`\widehat{q}^2`$ in terms of the $`\mathrm{{\rm Y}}`$ mass instead of $`\overline{m}_B`$, the dependence of $`F(q_0^2)`$ on $`\overline{\mathrm{\Lambda }}`$ and $`\lambda _1`$ is eliminated. Instead, the result is sensitive to unknown nonperturbative contributions to $`m_\mathrm{{\rm Y}}`$. The uncertainty related to these effects can be systematically taken into account and has been estimated to be small . One finds $$|V_{ub}|=(3.04\pm 0.06\pm 0.08)\times 10^{-3}\left(\frac{\mathcal{B}(\overline{B}\to X_u\ell \overline{\nu })|_{q^2>q_0^2}}{0.001\times F(q_0^2)}\frac{1.6\mathrm{ps}}{\tau _B}\right)^{1/2}.$$ (12) The errors explicitly shown in Eq. (12) are the estimates of the perturbative and nonperturbative uncertainties in the upsilon expansion, respectively. For $`q_0^2=(m_B-m_D)^2`$ we find $`F(11.6\mathrm{GeV}^2)=0.168+0.016ϵ+0.014ϵ_{\mathrm{BLM}}^2-0.17\lambda _2+\mathrm{\dots }\simeq 0.178`$, where $`ϵ\equiv 1`$ denotes the order in the upsilon expansion. This result is in good agreement with the value 0.186 obtained from Eq. (9). The uncertainty due to $`\overline{\mathrm{\Lambda }}`$ is absent in the upsilon expansion; however, the size of the perturbative corrections has increased. The uncertainty due to $`1/m_b^3`$ operators is estimated to be $`\pm 7\%`$. For $`q_0^2=15\mathrm{GeV}^2`$, we obtain $`F(15\mathrm{GeV}^2)=0.060+0.011ϵ+0.011ϵ_{\mathrm{BLM}}^2-0.14\lambda _2+\mathrm{\dots }\simeq 0.064`$, which is again in good agreement with the value 0.067 obtained earlier. For this value of $`q_0^2`$, the $`1/m_b^3`$ uncertainties increase to $`\pm 21\%`$. $`F(q_0^2)`$ calculated in the upsilon expansion is plotted in Fig. 3, where the shaded region shows our estimate of the uncertainty due to the $`1/m_b^3`$ corrections. Concerning experimental considerations, measuring the $`q^2`$ spectrum requires reconstruction of the neutrino four-momentum, just like measuring the hadronic invariant mass spectrum. Imposing a lepton energy cut, which may be required for this technique, is not a problem. The constraint $`q^2>(m_B-m_D)^2`$ automatically implies $`E_\ell >(m_B-m_D)^2/2m_B\simeq 1.1`$ GeV in the $`B`$ rest frame. Even if the $`E_\ell `$ cut has to be slightly larger than this, the utility of our method will not be affected, but a dedicated calculation including the effects of arbitrary $`E_\ell `$ and $`q^2`$ cuts may be warranted. If the experimental resolution on the reconstruction of the neutrino momentum necessitates a significantly larger cut than $`q_0^2=(m_B-m_D)^2`$, then the uncertainties in the OPE calculation of $`F(q_0^2)`$ increase.
In this case, it may instead be possible to obtain useful model independent information on the $`q^2`$ spectrum in the region $`q^2>m_{\psi (2S)}^2\simeq 13.6\mathrm{GeV}^2`$ from the $`q^2`$ spectrum in the rare decay $`\overline{B}\to X_s\ell ^+\ell ^{-}`$, which may be measured in the upcoming Tevatron Run-II. There are four contributions to this decay rate, proportional to the combinations of Wilson coefficients $`\stackrel{~}{C}_9^2`$, $`C_{10}^2`$, $`C_7\stackrel{~}{C}_9`$, and $`C_7^2`$. $`\stackrel{~}{C}_9`$ is a $`q^2`$-dependent effective coefficient which takes into account the contribution of the four-quark operators. Its $`\widehat{q}^2`$-dependence yields negligible uncertainties if we use a mean $`\stackrel{~}{C}_9`$ obtained by averaging it in the region $`0.5<\widehat{q}^2<1`$ weighted with the $`b`$ quark decay rate $`(1-\widehat{q}^2)^2(1+2\widehat{q}^2)`$. The resulting numerical values of the Wilson coefficients are $`\stackrel{~}{C}_9=4.47+0.44i`$, $`C_{10}=-4.62`$, and $`C_7=-0.31`$, corresponding to the scale $`\mu =m_b`$. In the $`q^2>m_{\psi (2S)}^2`$ region the $`C_7^2`$ contribution is negligible, and the $`C_7\stackrel{~}{C}_9`$ term makes about a 20% contribution to the rate. For the $`\stackrel{~}{C}_9^2+C_{10}^2`$ contributions nonperturbative effects are identical to those which occur in $`\overline{B}\to X_u\ell \overline{\nu }`$ decay, up to corrections suppressed by $`|\stackrel{~}{C}_9+C_{10}|/|\stackrel{~}{C}_9-C_{10}|\simeq 0.02`$. Therefore, the relation $$\frac{\mathrm{d}\mathrm{\Gamma }(\overline{B}\to X_u\ell \overline{\nu })/\mathrm{d}\widehat{q}^2}{\mathrm{d}\mathrm{\Gamma }(\overline{B}\to X_s\ell ^+\ell ^{-})/\mathrm{d}\widehat{q}^2}=\frac{|V_{ub}|^2}{|V_{ts}V_{tb}|^2}\frac{8\pi ^2}{\alpha ^2}\frac{1}{|\stackrel{~}{C}_9|^2+|C_{10}|^2+12\mathrm{Re}(C_7\stackrel{~}{C}_9)/(1+2\widehat{q}^2)},$$ (13) is expected to hold to a very good accuracy. There are several sources of corrections to this formula which need to be estimated: i) nonperturbative effects that enter the $`C_7\stackrel{~}{C}_9`$ term differently, ii) mass effects from the strange quark and muon, iii) higher $`c\overline{c}`$ resonance contributions in $`\overline{B}\to X_s\ell ^+\ell ^{-}`$, and iv) scale dependence. Of these, i) and ii) are expected to be small unless $`q^2`$ is very close to $`m_B^2`$. The effects of iii) have also been estimated to be at the few percent level , although these uncertainties are very hard to quantify and could be comparable to the $`\pm 8\%`$ scale dependence of the $`\overline{B}\to X_s\ell ^+\ell ^{-}`$ rate. Integrating over a large enough range of $`q^2`$, $`q_0^2<q^2<m_B^2`$ with $`m_{\psi (2S)}^2<q_0^2\lesssim 17\mathrm{GeV}^2`$, the result implied by Eq. (13), $$\frac{\mathcal{B}(\overline{B}\to X_u\ell \overline{\nu })|_{q^2>q_0^2}}{\mathcal{B}(\overline{B}\to X_s\ell ^+\ell ^{-})|_{q^2>q_0^2}}=\frac{|V_{ub}|^2}{|V_{ts}V_{tb}|^2}\frac{8\pi ^2}{\alpha ^2}\frac{1}{|\stackrel{~}{C}_9|^2+|C_{10}|^2+12\mathrm{Re}(C_7\stackrel{~}{C}_9)B(q_0^2)},$$ (14) is expected to hold at the $`\sim 15\%`$ level. Here $`Q_0\equiv q_0/\overline{m}_B`$, and $`B(q_0^2)=2/[3(1+Q_0^2)]-4(\overline{\mathrm{\Lambda }}/\overline{m}_B)Q_0^2/[3(1+Q_0^2)^2]+\mathrm{\dots }`$ . For $`q_0^2`$ significantly above $`(m_B-m_D)^2`$, this formula may lead to a determination of $`|V_{ub}|`$ with smaller theoretical uncertainty than the one using the OPE calculation of $`F(q_0^2)`$.
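To see the size of the normalization factor in Eq. (14), one can insert the Wilson coefficients quoted above. In the sketch below, the value of the electromagnetic coupling at this scale, $`\alpha \simeq 1/133`$, is our assumption (the paper does not quote a value), and the cut $`q_0^2=14\mathrm{GeV}^2`$ is a hypothetical choice just above the $`\psi (2S)`$.

```python
# Illustrative evaluation of the factor relating the two rates in Eq. (14),
# using the Wilson coefficients quoted in the text.  alpha ~ 1/133 at mu ~ m_b
# is our assumption, and q0^2 = 14 GeV^2 is a hypothetical cut.
import math

C9 = complex(4.47, 0.44)
C10 = -4.62
C7 = -0.31
alpha = 1.0 / 133.0
mB_bar = 5.313   # GeV, spin-averaged B meson mass
lam_bar = 0.4    # GeV

def B(q0sq):
    Q0sq = q0sq / mB_bar**2
    return (2.0 / (3.0 * (1 + Q0sq))
            - 4.0 * (lam_bar / mB_bar) * Q0sq / (3.0 * (1 + Q0sq) ** 2))

q0sq = 14.0  # GeV^2
denom = abs(C9) ** 2 + C10**2 + 12.0 * (C7 * C9).real * B(q0sq)
factor = 8.0 * math.pi**2 / alpha**2 / denom
print(f"B(q0^2) = {B(q0sq):.3f}")
print(f"rate ratio ~ {factor:.2e} * |V_ub|^2 / |V_ts V_tb|^2")
```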
In conclusion, we have shown that the $`q^2`$ spectrum in inclusive semileptonic $`\overline{B}\to X_u\ell \overline{\nu }`$ decay gives a model independent determination of $`|V_{ub}|`$ with small theoretical uncertainty. Nonperturbative effects are only important in the resonance region, and play a parametrically suppressed role when $`\mathrm{d}\mathrm{\Gamma }/\mathrm{d}q^2`$ is integrated over $`q^2>(m_B-m_D)^2`$, which is required to eliminate the charm background. This is a qualitatively better situation than the extraction of $`|V_{ub}|`$ from the endpoint region of the lepton energy spectrum, or from the hadronic invariant mass spectrum. ###### Acknowledgements. We thank Craig Burrell for discussions and Adam Falk for comments on the manuscript. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada and the Sloan Foundation. Fermilab is operated by Universities Research Association, Inc., under DOE contract DE-AC02-76CH03000.
# Sonoluminescence as a physical vacuum excitation ## Abstract Excitation of the physical vacuum is discussed as a mechanism for the sonoluminescence of a gas bubble. Schwinger's theory of this effect was based on the assumption that a sudden change of the collapse rate of a bubble in water leads to a jump of the dielectric constant of the gas. It is shown that the dependence of the dielectric constant on the gas density indeed leads to such a jump when a shock wave propagates in a collapsing gas bubble. Sonoluminescence is the transformation of sound into light. A liquid (distilled water, for example) is exposed to acoustic waves, and photons are radiated. The radiated light energy is comparable to the input sound energy. The sound energy density is of the order of $`10^{-11}`$ eV/atom, while the energy of the radiated photons is of the order of 10 eV, so one can say that the energy density is concentrated by a factor of about $`10^{12}`$. Sonoluminescence has been known and studied for 60 years. The cavitation nature of the radiation has been established . The dependence of the sonoluminescence intensity on the chemical composition of the liquid and of the gas dissolved in it, on their thermal conductivities, on the temperature, and on the sound frequency and amplitude has been studied. Although the observation of stable luminescence from a single gas bubble attracted the attention of many investigators [4-12], no conclusive explanation of the physical mechanism of this effect has been found so far. In Schwinger's works sonoluminescence is considered a manifestation of the nonstationary Casimir effect , which results from the change of vacuum properties under various external influences. In sonoluminescence studies, gas bubbles (cavities of radius $`r`$ in the water, filled with gas) are subjected to compression and expansion in response to the positive and negative pressure changes of the applied acoustic field. Schwinger's theory is based on the assumption that the sudden change of the bubble collapse rate in the water leads to a jump of its dielectric constant. This is accompanied by excitation of the electromagnetic vacuum and by photon radiation. There the probability of photon creation was calculated, and numerical estimates were made which qualitatively explain part of the experimental data. The task of this paper is to give some arguments in support of Schwinger's idea and to show that the dielectric constant of the gas is indeed a jump function of time at the nonadiabatic stage of the bubble collapse. Schwinger's well-known expression for the total number of photons created in the volume V, $$N=\frac{(\sqrt{\epsilon }-1)^2}{4\sqrt{\epsilon }}\int \frac{d^3k}{(2\pi )^3}V,$$ (1) can be obtained as the solution of an equation describing the scalar field of photons . This fact is evidence for the validity of Schwinger's idea. To remove the divergence of the integral in formula (1) we must introduce a cut-off wave-number $`k_{max}`$, which corresponds to the minimum wavelength $`\lambda _{min}<r_m`$ ($`r_m`$ is the bubble radius at the moment of photon radiation). 
Disregarding the dispersion, Schwinger obtained the following expression for the energy $`E`$ emitted in unite volume: $$\frac{E}{V}=\frac{(\sqrt{\epsilon }1)^2}{4\sqrt{\epsilon }}\stackrel{k_{max}}{}\mathrm{}\omega \frac{d^3k}{(2\pi )^3}=2\pi ^2\frac{(\sqrt{\epsilon }1)^2}{4\sqrt{\epsilon }}\frac{\mathrm{}c}{\lambda _{min}^4}.$$ (2) The expression (2) is in qualitative accordance with the result of Ref , which was obtained at the divergency removal by taking into account the contribution of the surface energy. This is the serious substantiation of the validity for conducted by Shwinger removal of the divergency. The value $`\epsilon 1`$ for the many gases in normal conditions at adiabatic compression. From formula (2) one can see that the number of photons and the emitting energy are little as $`(\sqrt{\epsilon }1)^2`$. But the value of gas temperature within the bubble ( $`10^5`$ K) which observed in the experiment, compelled to remember the Jarman’s idea about excitation of gathering shock wave at the nonadiabatic stage of bubble collapse . The numerical solution of Rayleigh-Plesset equation of a bubble surface, of van der Waals equations for the gas inside bubble and of tasks about generation and motion of the shock wave was carried out in a work . At a first stage of a bubble growth it was used an adiabatic approximation: by the time $`16,65\mu s`$, the bubble radius increased from the initial value $`r_0=4,5\mu m`$ to the maximum value $`37,09\mu m`$ and the gas density decreased to the value $`0,0023kgm^3`$. At the second stage of a bubble collapse calculations were done in a nonadiabatic approximation, which leads to the Guderley’s decision for an ideal gas . As it was shown in , at nonadiabatic stage of collapse at the focusing shock wave in the centre of bubble the density of nonideal gas increases in $`10^5`$ times by the time $`3,84\mu s`$, and achieving the maximum value $`\rho \rho _m794kgm^3`$. This conducts by the change of the rate of bubble collapse to $`210^4ms^1`$. Further it will be shown that it leads to the jump of the dielectric constant of gas. The value $`\epsilon `$ for gas is determined by Clausis-Mossotti formula $$\epsilon 1=\frac{4\pi p\rho }{w(1\frac{4\pi p\rho }{3w})},$$ (3) where $`p`$ is the molar polarizability of gas, $`\rho `$ is the gas density and $`w`$ is his molecular weight. The liquid which surrounds the gas bubble (in the experiment this is the water with small addition of glycerin) leads to the additional polarization and to the molar polarizability of gas dependence on the dielectric constant of liquid $`\epsilon _1`$: $$p=\frac{p_0}{3(1+\frac{\epsilon }{2\epsilon _1})}.$$ (4) where $`p_0`$ is the molar polarizability of gas in normal conditions. Even taking into account that $`\rho _m\rho `$ at focusing shock wave, the ratio $`\epsilon /\epsilon _1<<1`$, so $$\epsilon (t)1\frac{4\pi p_0\rho (t)}{3w}.$$ (5) Such dependence of $`\epsilon (t)`$ on $`\rho (t)`$ leads to the jump of the dielectric constant at the jump of the gas density as shock wave focuses at the centre of bubble. As we can see from (2) this jump accompanies by the vacuum excitation and to the photons radiation, the intensity of which depends from the characteristic time of the jump of the gas density.
# The NASA Astrophysics Data System: The Search Engine and its User Interface ## 1 Introduction The Astrophysics Data System (ADS) provides access to the astronomical literature through the World Wide Web (WWW). It is widely used in the astronomical community. It is accessible to anybody world-wide through a forms based WWW interface. A detailed description of the history of the ADS is presented in the ADS Overview article (Kurtz et al. (2000), hereafter OVERVIEW). The system contains information from many sources (journals, other data centers, individuals). A detailed description of the data that we get and how they are included in the ADS is presented in the ADS Data article (Grant et al. (2000), hereafter DATA). The incoming data are processed and indexed with custom-built software to take advantage of specialized knowledge of the data and the astronomical context. A description of this processing is given in the ADS Architecture article (Accomazzi et al. (2000), hereafter ARCHITECTURE). This article describes the development and the current status of the ADS Abstract Service user interface and search engine. The ADS was created as a system to provide access to astronomical data (Murray et al. (1992)). In 1993 the ADS started to provide access to a set of abstracts obtained from the NASA/STI (National Aeronautics and Space Administration/Scientific and Technical Information) project (Kurtz et al. (1993)). The user interface was built with the proprietary software system that the ADS used at that time. The search engine of this first implementation used a commercial database system. A description of the system at that time is in Eichhorn (1994). In 1994, the World Wide Web (WWW, www.w3.org (1999)) became widely useful through the NCSA Mosaic Web Browser (Schatz & Hardin (1994)). The design of the ADS Abstract Service with a clean separation between the user interface and the search engine made it very easy to move the user interface from the proprietary ADS system to the WWW. In February 1994, a WWW interface to the ADS Abstract Service was made available publicly. The WWW interface to the ADS is described by Eichhorn et al. 1995b and Eichhorn et al. 1995a . Within one month of the introduction of the WWW interface, the usage of the Abstract Service tripled, and it has continued to rise ever since (Eichhorn (1997)). With the increased usage of the system due to the easy access through the WWW, severe limitations of the underlying commercial database system very quickly became apparent. We soon moved to an implementation of the search engine that was custom-built and tailored to the specific requirements of the data that we used. In January 1995 we started to provide access to scanned journal articles (Accomazzi et al. (1996)). The user interface to these scans provided the user with the capability to access the scans in various formats, both for viewing and for printing. With time, other interfaces to the abstracts and scanned articles were developed to provide other data systems the means to integrate ADS data into their system (Eichhorn et al. 1996b ). With the adoption of the WWW user interface and the development of the custom-built search engine, the current version of the ADS Abstract Service was basically in place. The following sections describe the current status of the different access capabilities (sections 2 and 3), the search engine (sections 4 and 5), access statistics for the ADS system (section 6), and future plans for the ADS interface and search engine (section 7). 
## 2 Data Access The ADS services can be accessed through various interfaces. Some of these interfaces use WWW based forms, others allow direct access to the database and search system through Application Program Interfaces (APIs). This section describes the various interfaces and their use, as well as the returned results. ### 2.1 Forms Based Interfaces #### 2.1.1 Abstract Service a. User Interface The main query forms (figures 1, 2, and 3) provide access to the different abstract databases. These forms are generated on demand by the ADS software. This allows the software to check the user identification through the HTTP (HyperText Transfer Protocol) cookie mechanism (see section 3), so that the software can return a customized query form if one has been defined by the user. It also adapts parts of the form according to the capabilities of the user’s web browser. The query form allows the user to specify search terms in different fields. The input parameters in each query field can be combined in different ways, as can the results obtained from the different fields (figure 1). The user can specify how the results are combined through settings on the query form (figure 3). The combined results can then be filtered according to various criteria (figure 2). The database can be queried for author names, astronomical object names, title words, and words in the abstract text. References can be selected according to the publication date. The author name, title, and text fields are case insensitive. The object field is case sensitive when the IAU (International Astronomical Union) Circulars (IAUC) object name database is searched, since the IAU object names are case sensitive. In the author and object name fields, the form expects one search term per line since the terms can contain blanks. In the title and text fields line breaks are not significant. ##### Author Name Field The author names are indexed by last name and by a combination of last name and first initial, separated by a comma. To account for differences in the spelling of the same author name, the search system contains a list of author names that are spelled differently but are in fact names of the same author. This allows all versions of common spelling differences to be retrieved automatically. This is useful, for instance, for German umlauts (e.g. Müller spelled as Muller or Mueller) and for variations in the transliteration of names from non-English alphabets like Cyrillic. An example of such an entry in the author synonym list is: ``` AFANASJEV, V AFANAS’EV, V AFANAS’IEV, V AFANASEV, V AFANASYEV, V AFANS’IEV, V AFANSEV, V ``` Without this synonym replacement capability, author searches would obviously be much less effective. On user request we also include name changes (e.g. due to marriage) in the author synonym list. Combinations of search results within the author field use “OR”, “AND”, or simple logic (see below), depending on user selection. Author names are quite often spelled differently in different publications. First names are sometimes spelled out, sometimes only first initials are given, and sometimes middle initials are left out. This makes it impossible to index all different spellings of a name together automatically. To handle these different requirements, author names are indexed three times, once with the last name only, once with the last name and first initial, and once with the complete name as it is specified in the article. To access these different indexes, we provide two user interfaces for author queries. 
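A minimal sketch of how such an author-synonym expansion might work is shown below; the data structure is purely illustrative and not the actual ADS implementation.

```python
# Illustrative author-synonym expansion (not the ADS source code).
# Each group lists spellings that are indexed and searched together.
SYNONYM_GROUPS = [
    ["AFANASJEV, V", "AFANAS'EV, V", "AFANAS'IEV, V",
     "AFANASEV, V", "AFANASYEV, V", "AFANS'IEV, V", "AFANSEV, V"],
]

# Map every spelling to its full group for O(1) lookup at query time.
SYNONYMS = {name: group for group in SYNONYM_GROUPS for name in group}

def expand_author(name):
    """Return all spellings to search for, falling back to the name itself."""
    key = name.upper().strip()
    return SYNONYMS.get(key, [key])

print(expand_author("Afanasev, V"))  # all seven spellings of the group
```

The two author-query interfaces described next differ mainly in whether this grouping is applied automatically.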
The regular user interface allows the user to search for either a last name or a last name combined with the first initial. This allows for fairly discriminating author searches. It is a compromise between the need to discriminate between different authors, and the need to find all instances of a given author. It identifies all different versions of a given author quite reliably, but it indexes together different authors with the same first initial. For cases where this search method is not discriminating enough, we provide a second user interface to the index of the full names, which does not attempt to index different spellings of the same author together. When the user selects “Exact Author Search” and specifies an author’s last name or last name and first initial, a form is returned with all distinct full author names that match the specified name. The user then selects all the different spellings of the desired name and queries the database for articles that contain any one of these different versions of an author’s name. For instance specifying: ``` Eichhorn, G ``` in the exact author name form returns the list: ``` EICHHORN, G. EICHHORN, GERHARD EICHHORN, GUENTHER EICHHORN, GUNTHER ``` Selecting the first, third, and fourth author name from that list will return all articles by the first author of this article. Any articles by the second author containing only the first initial will also be returned, but this is unavoidable. ##### Object Name Field This field allows the user to query different databases for references with different astronomical objects. The databases that provide object information are: SIMBAD (Set of Identifications, Measurements and Bibliographies for Astronomical Data) at the Centre des Données Astronomique de Strasbourg (CDS), France (Egret et al. (1991)); the NASA Extragalactic Database (NED) at the Infrared Processing and Analysis Center (IPAC), Jet Propulsion Laboratory (JPL), Pasadena, CA (Madore et al. (1992)); the IAU Circulars (IAUC) and the Minor Planet Electronic Circulars (MPEC), both provided by the Central Bureau for Astronomical Telegrams (CBAT) at the Harvard-Smithsonian Center for Astrophysics in Cambridge, MA (Marsden (1980)); and a database with objects from publications from the Lunar and Planetary Institute (LPI) in Houston (mainly Lunar sample numbers and meteorite names). The user can select which of these databases should be queried. If more than one database is searched, the results of these queries are merged. The LPI database does not have any entries in common with the other databases. The SIMBAD, NED, and IAUC databases sometimes have information about the same objects. ##### Title and Abstract Text Fields These fields query for words in the titles of articles or books, and in the abstracts of articles or descriptions of books respectively. The words from the title of each reference are also indexed in the text field so they will be found through either a title or a text search. Before querying the database the input in these fields is processed as follows: 1. Apply translation rules. This step merges common expressions into a single word so that they are searched as one expression. Regular expression matching is used to convert the input into a standard format that is used to search the database. For instance M 31 (with a space) is translated to M31 (without a space) for searching as one search term. 
In order to make this general translation, a regular expression matching and substitution is performed that translates all instances of an ‘M’ followed by one or more spaces or a hyphen followed by a number into ‘M’ directly followed by the number. Other translation rules include the conversion of NGC 1234 to NGC1234, contractions of T Tauri, Be Star, Shoemaker Levy, and several others (see ARCHITECTURE). 2. Remove punctuation. In this step all non-alphanumeric characters are removed, unless they are significant (for instance symbols used in the simple logic (see below), ‘+’ and ‘–’ before numbers, or ‘.’ within numbers). 3. Translate to uppercase. All information in the index files is in uppercase, except for object names from the IAU Circulars. 4. Remove kill words. This step removes all non-significant words. This includes words like ‘and’, ‘although’, ‘available’, etc (for more details see ARCHITECTURE). In the title and text fields, searching for phrases can be specified by enclosing several words in either single or double quotes, or concatenating them with periods (‘.’) or hyphens (‘–’). All these accomplish the same goal of searching the database for references that contain specified sequences of words. The database is indexed for two-word phrases in addition to single words. Phrases with more than two words are treated as a search for sets of two-word phrases containing the first and second word in the first phrase, the second and third word in the second phrase, etc. b. Searching After the search terms are pre-processed, the databases of the different fields are searched for the resulting list of words, the results are combined according to the selected combination rules, and the resulting score is calculated according to the selected scoring criteria. These combination rules provide the means for improving the selectivity of a query. ##### Search Word Selection The database is searched for the specified words as well as for words that are synonymous with the specified term. One crucial part to successful searches in a free text search system is the ability to not only find words exactly as specified, but also similar words. This starts with simply finding singular and plural forms of a word, but then needs to be extended to different words with the same meaning in the normal usage of words in a particular field of science. In Astronomy for instance “spectrograph” and “spectroscope” have basically the same meaning and both need to be found when one of these words is specified in the query. Even further reaching, more discipline-specific synonyms are necessary for efficient searches such as “metallicity” and “abundance” which have the same meaning in astronomical word usage. In order to exhaustively search the database for a given term, it is important to search for all synonyms of a given word. The list of synonyms was developed manually by going through the list of words in the database and grouping them according to similar meanings. This synonym list is a very important part of the ADS search system and is constantly being improved (see ARCHITECTURE). The list of synonyms also contains non-English words associated with their English translations. These words came from non-English reference titles that we included in the database. This allows searches with either the English or non-English words to find references with either the English word or the non-English translation. 
We are in the process of extending this capability by including translations of most of the words in our database into several languages (German, French, Italian, Spanish). This will allow our users to phrase queries in any of these languages. We expect to complete this project sometime in 2000. By default a search will return references that contain the search word or any of its synonyms. The user can choose to disable this feature if for some reason a specific word needs to be found. The synonym replacement can be turned off completely for a field in the “Settings” section of the query form. This can be used to find a rare word that is a synonym of a much more frequent word, for instance if you want to look for references to “dateline”, which is a synonym to “date”. Synonym replacement can also be enabled or disabled for individual words by prefixing a word with ‘=’ to force an exact match without synonym replacement. When synonym replacement is disabled for a field, it can be turned on for a particular word by prefixing it with ‘#’. ##### Selection Logic Within a Field There are four different types of combinations of results for searches within a field possible. ``` 1. OR 2. AND 3. Simple logic 4. Full boolean logic ``` 1. Combination by ‘OR’: The resulting list contains all references that contain at least one of the search terms. 2. Combination by ‘AND’: The resulting list has only references that contain every one of the search terms. 3. Combination by simple logic: The default combination in this logic is by ‘OR’. Individual terms can be either required for selection by prefixing them with a ‘+’, or can be selected against by prefixing them with a ‘–’. In the latter case only references that do not contain the search term are returned. If any of the terms in the search is prefixed by a ‘+’, any other word without a prefix does not influence the resulting list of references. However, the final score (see below) for each reference will depend on whether the other search terms are present. 4. Combination by full boolean logic: In this setting, the user specifies a boolean expression containing the search terms and the boolean operators ‘and’, ‘or’, and ‘not’, as well as parentheses for grouping. A boolean expression could for instance look like: (pulsar or “neutron star”) and (“red shift” distance) and not 1987A This expression searches for references that contain either the word pulsar or the phrase “neutron star” and either the phrase “red shift” or the word distance (“or” being the default), but not the word 1987A. ##### Selection Logic Between Fields In the settings part of the query form, the user can specify fields that will be required for selection. If a field is selected as “Required for Selection” only references that were selected in the search specified in that field will be returned. If one field is selected as “Required for Selection”, the searches in fields that are not set as “Required for Selection” do not influence the resulting list, but they influence the final score. c. Scoring The list of references resulting from a query is sorted according to a “score” for each reference. This score is calculated according to how many of the search items were matched. The user has the choice between two scoring algorithms: ``` 1. proportional scoring 2. weighted scoring ``` These scoring algorithms have been analyzed by Salton & McGill (1983). In proportional scoring, the score is directly proportional to the number of terms found in the reference. 
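Before turning to weighted scoring, a small sketch may help make the ‘simple logic’ selection and the proportional score concrete. This is an illustrative reconstruction, not the ADS engine: reference lists are plain Python sets rather than the engine’s sorted lists, and, as described above, the ‘+’ and ‘–’ terms control selection while only the unprefixed terms contribute to the score.

```python
# Illustrative "simple logic" selection with proportional scoring.
def simple_logic(terms, postings):
    required = [postings[t[1:]] for t in terms if t.startswith("+")]
    excluded = [postings[t[1:]] for t in terms if t.startswith("-")]
    plain = [postings[t] for t in terms if t[0] not in "+-"]

    # Selection: every '+' term is mandatory, '-' terms always exclude;
    # with no '+' terms, any unprefixed match qualifies (default OR).
    selected = set.intersection(*required) if required else set().union(*plain)
    for ex in excluded:
        selected -= ex

    # Proportional score from the unprefixed terms only, normalized so
    # that a reference matching all of them scores 1.
    return {ref: sum(ref in p for p in plain) / max(len(plain), 1)
            for ref in selected}

postings = {"pulsar": {1, 2, 3}, "distance": {2, 3, 4}, "1987A": {3}}
print(simple_logic(["+pulsar", "distance", "-1987A"], postings))  # {1: 0.0, 2: 1.0}
```

Weighted scoring, described next, replaces this simple match count with frequency-dependent word weights.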
In weighted scoring, the score is proportional to the inverse logarithm of the frequency of the matched word. This weighting gives higher scores for words that are less frequent in the database and therefore presumably more important indicators of the relevance of a match. In the settings section of the query form the user can select which type of scoring should be used for each query field separately. The default setting for title and text searches is the weighted scoring. For author searches proportional scoring is the default. Once the score for each query field is calculated, the scores are normalized so that a reference that matches all words in a field receives a score of 1. The normalized scores from the different fields are then combined to calculate a total score. Again the result is normalized so that a reference that matches all words in each query field has a score of 1. The user can influence this combining of scores from the different search fields by assigning weights to the different fields. This allows the user to put more emphasis in the selection process on, for instance, the object field by assigning a higher weight to that field. Another use of the weight field is to select against a field. For instance specifying an object name and an author name and selecting a negative weight to the author field will select articles about that object that were not written by the specified author. The relative weights for the different search fields can be set by the user. The ADS provides default weights as follows: ``` Authors: 1.0 Objects: 1.0 Title: 0.3 Text: 3.0 ``` These default weights were determined on theoretical grounds, combined with trial and error experimentation. We used different search inputs from known research fields and different weights and ranked the resulting lists according to how well they represented articles from these research fields. The weights listed above gave the best results. d. Filtering of Selected References The selected references can be filtered according to different criteria (see section 4.5) in order to reduce the number of returned references. The user can select references according to their entry date in the database, a minimum score (see above), the journal they are published in, whether they have pointers to selected external data sources, or whether they belong to one or more of several groups of references. This allows a user for instance to select only references from refereed journals or from one particular journal by specifying its abbreviation. It also allows a user to select only references that have links to external data sets, on-line articles, or that have been scanned and are available through the ADS Article Service. e. Display of Search Results The ADS system returns different amounts of information about a reference, depending on what the user request was. This section describes the different reference formats. ##### Short Reference Display The list of references returned from a query is displayed in a tabular format. The returned references are sorted by score first. For equal scores, the references are sorted by publication date with the latest publications displayed first. A typical reference display is shown in figure 4. The fields in such a reference are shown in figure 5. They are as follows: 1. Bibliographic Code: This code identifies the reference uniquely (see DATA and Schmitz et al. (1995)). 
Two important properties of these codes are that they can be generated from a regular journal reference, and that they are human readable and can be understood and interpreted. 2. Score: The score is determined during the search according to how well each reference fits the query. 3. Date: The publication date of the reference is displayed as mm/yyyy. 4. Links: The links are an extremely important aspect of the ADS. They provide access to information correlated with the article. Table 1 shows the links that we currently provide when available. A more detailed description of resources in the ADS that these links point to is provided in DATA. Some of these links (for instance the ‘D’ links) can point to more than one external information provider. In such cases the link points to a page that lists the available choices of data sources. The user can then select the more convenient site for that resource, depending on the connectivity between the user site and the data site. 5. Authors: This is the list of authors for the reference. Generally these lists are complete. For some of the older abstracts that we received from NASA/STI, the author lists were truncated at 5 or 10 authors, but every effort has been made to correct these abbreviated author lists (see DATA). 6. Title: The complete title of the reference. The reference lists are returned as forms if table display is selected (see section 3). The user can select some or all of the references from that list to be returned in any one of several formats: i. HTML format: The HTML (HyperText Markup Language) format is for screen viewing of the formatted record. ii. Portable Format: This is the format that the ADS uses internally and for exchanging references with other data centers. A description of this format is available on-line at: ``` http://adsabs.harvard.edu/abs_doc/ abstract_format.html ``` iii. BibTeX format: This is a standard format that is used to build reference lists for TeX (a typesetting language especially suited for mathematical formulas) formatted articles. iv. ASCII format: This is a straight ASCII text version of the abstract. All formatting is done with spaces, not with tabs. v. User Specified Format: This allows the user to specify in which format to return the reference. The default format for this selection is the bibitem format from the AASTeX macro package. The user can specify an often used format string in the user preferences (see section 3). This format string will then be used as the default in future queries. The user can select whether to return the selected abstracts to the browser, a printer, a local file for storage, or email it to a specified address. ##### Full Abstract Display In addition to the information in the short reference list, the full abstract display (see figure 6) includes, where available, the journal information, author affiliations, language, objects, keywords, abstract category, comments, origin of the reference, a copyright notice, and the full abstract. It also includes all the links described above. For abstracts that are displayed as a result of a search, the system will highlight all search terms that are present in the returned abstract. This makes it easy to locate the relevant parts in an abstract. Since the highlighting is somewhat resource intensive, it can be turned off in the user preference settings (see section 3). For convenience, the returned abstract contains links that allow the user to directly retrieve the BibTex or the custom formatted version of the abstract. 
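As an illustration of the export formats, a short reference record could be rendered into BibTeX roughly as follows. The field mapping and the sample values are hypothetical; the exact BibTeX fields emitted by the ADS are not specified here.

```python
def to_bibtex(ref):
    """Render a reference dict as a BibTeX entry (illustrative field mapping)."""
    return ("@ARTICLE{%s,\n"
            "  author  = {%s},\n"
            "  title   = {%s},\n"
            "  journal = {%s},\n"
            "  year    = %s\n"
            "}\n") % (ref["bibcode"], " and ".join(ref["authors"]),
                      ref["title"], ref["journal"], ref["year"])

print(to_bibtex({
    "bibcode": "1999Jour...12...34E",  # hypothetical bibliographic code
    "authors": ["Example, A.", "Sample, B."],
    "title": "An example title",
    "journal": "Example Journal",
    "year": 1999,
}))
```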
The full abstract display also includes a form that provides the capability to use selected information from the reference to build a new query to find similar abstracts. The query feedback mechanism makes in-depth literature searches quick and easy. The user can select which parts of the reference to use for the feedback query (e.g. authors, title, or abstract). The feedback query can either be executed directly, or be returned as a query form for further modification before executing it, for instance to change the publication date range or limit the search to specific journals. This query feedback mechanism is a very powerful means to do exhaustive literature searches and distinguishes the ADS system from most other search systems. A query feedback ranks the database against the record used for the feedback and sorts it according to how relevant each reference is to the search record. The query feedback can be done across databases. For instance a reference from the Astronomy database can be used as query feedback in the preprint database to see the latest work in the field of this article. If the article for the current reference has been scanned and is available through the ADS Article Service (see below), printing options are available in the abstract display as well. These printing options allow the printing of the article without having to retrieve the article in the viewer first. #### 2.1.2 Article Service This part of the ADS provides access to the scanned images of articles. We have received permission from most astronomy journals to scan their volumes and make them available on-line free of charge. A more detailed description of these data is in the DATA. The most common access to the scanned articles is through the ADS Abstract Service via the ‘G’-links (see above). However they can also be accessed directly through the article query page by publication year and month or by volume and page at: ``` http://adsabs.harvard.edu/article_service.html ``` This form returns the specified article in the user specified format (see section 3). If a page within an article is specified, and the single page display is selected, the specified page within the article is returned with the links to the other available pages as usual. The article display normally shows the first page (figure 7) of an article at the selected resolution and quality (see section 3). The user can select resolutions of 75, 100, or 150 dots per inch (dpi) and image qualities of 1, 2, 3, or 4 bits of greyscale per pixel. These gif images are produced on demand from the stored tiff images (see DATA). The default version of the gif images (100 dpi, 3 bit greyscale) is cached on disk. The cache of these gif images is managed to stay below a maximum size. Any time the size of the cached gif images exceeds the preset cache size, the gif images of pages that have not been accessed recently are deleted. Below the page image on the returned page are links to every page of the article individually. This allows the user to directly access any page in the article. Wherever possible, plates that have been printed separately in the back of the journal volume have been bundled with the articles to which they belong for ease of access. The next part of the displayed document provides access to plates in that volume if the plates for this journal are separate from the articles. Another link retrieves the abstract for this article. The next part of the page allows the printing of the article. 
If the browser works with HTTP persistent cookies (see section 3), there is just one print button in that section with a selection to print either the whole paper or individual pages. This print button will print the article in the format that the user has specified in the user preferences. If the browser does not handle cookies, several of the more commonly used print options are made available here. All possible printing options can be accessed through the next link called “More Article Retrieval Options”. This page allows the user to select all possible retrieval options. These include: i. Postscript: Access to two resolutions is provided (200 dpi and 600 dpi). For compatibility with older printers, there is also an option to retrieve Postscript Level 1 files. ii. PCL (Printer Control Language): This language is for printing on PCL printers such as the HP desk jets and compatibles. iii. PDF (Portable Document Format): PDF can be viewed with the Adobe Acrobat reader (Adobe, Inc ). From the Acrobat reader the article can be printed. iv. TIFF (Tagged Image File Format): The original images can be retrieved for local storage. This would allow further processing like extraction of figures, or Optical Character Recognition (OCR) in order to translate the article into ASCII text. v. Fax retrieval: Within the USA, articles can be retrieved via fax at no cost. Again, the retrieval is greatly facilitated through the preferences setting capability. The preferences allow the user to store a fax number that will be used for the fax service. vi. Email retrieval: Articles can be retrieved through email instead of through a WWW browser. MIME (Multipurpose Internet Mail Extension, Vaudreuil (1992)) capable email systems should be able to send the retrieved images directly to the printer, to a file, or to a viewer, depending on what retrieval option was selected by the user. For most of the retrieval options, the data can optionally be compressed before they are sent to the user. Unix compress and GNU gzip are supported compression algorithms. Instead of displaying the first page of an article together with the other retrieval links, the user has the option (selected through the preferences system, see section 3) to display thumbnails of all article pages simultaneously. This allows an overview of the whole article at once. One can easily find specific figures or sections within an article without having to download every page. This should be especially useful for users with slow connections to the Internet. Each thumbnail image ranges in size from only 700 bytes to 3000 bytes, depending on the user selected thumbnail image quality. The rest of this type of article page is the same as for the page-by-page display option. #### 2.1.3 Other Forms Based User Interfaces There are several forms available to directly access references or articles and other relevant information. All abstract query forms return the short reference format as described above. One form allows access to references through bibliographic codes: ``` http://adsabs.harvard.edu/bib_abs.html ``` This form allows the user to retrieve abstracts by specifying directly a bibliographic code or the individual parts of a bibliographic (year, journal, volume, page). This can be very useful in retrieving references from article reference lists, since such reference lists generally contain enough information to build the bibliographic codes for the references. 
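Since the form accepts the year, journal, volume, and page separately, the 19-character code can be assembled mechanically. The sketch below infers the layout from codes shown in this article (e.g. 1989ApJ...342L..71R); real bibcodes have additional rules (see DATA) that are not reproduced here.

```python
def make_bibcode(year, journal, volume, page, qualifier=".", author_initial="."):
    """Assemble a 19-character bibcode: year, journal, volume, qualifier, page, initial.

    Simplified sketch; the padding and qualifier rules of real bibcodes
    are more involved (see the DATA paper).
    """
    code = (str(year)
            + journal.ljust(5, ".")
            + str(volume).rjust(4, ".")
            + qualifier
            + str(page).rjust(4, ".")
            + author_initial)
    assert len(code) == 19
    return code

print(make_bibcode(1989, "ApJ", 342, 71, qualifier="L", author_initial="R"))
# -> 1989ApJ...342L..71R
```

A code built this way can then be entered in the form described above, complete or with ‘?’ wildcards in place of unknown characters.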
This form also accepts partial codes and returns all references that match the partial code. It accepts the wildcard character ‘?’. The ‘?’ wildcard stands for one character in the code. For partial codes that are shorter than 19 characters, matching is done on the first part of the bibliographic codes. For instance: 1989ApJ...341?...1 will retrieve the articles on page 1 of the ApJ (Astrophysical Journal) and ApJ Letters volume 341, regardless of the author name. Another form allows access to the Tables of Contents (ToCs) of selected journals by month/year or volume: ``` http://adsabs.harvard.edu/toc_service.html ``` One option on that form is to retrieve the latest published issue of a particular journal. Access to the last volumes of a set of journals is also available through a page with a graphical display of selected journals’ cover pages: ``` http://adsabs.harvard.edu/tocs.html ``` By clicking on a journal cover page either the last published volume of that journal or the last volume that the user has not yet read is returned, depending on the user preference settings (see section 3). The information necessary for that service is stored with the user preferences in our internal user preferences database. A customized ToC query page is available at: ``` http://adsabs.harvard.edu/custom_toc.html ``` It will display only icons for journals that have issues available that have not been read by the user. This allows a user to see at a glance which new issues for this set of journals have been published. The set of journals that is included in the customized ToC query page can be specified in the user preferences (see section 3). As mentioned in section 2.1.1 and in ARCHITECTURE, one important aspect of the ADS search system is the list of synonyms. Sometimes it is important for our users to be able to see what words are in a particular synonym group to properly interpret the search results. Another question that is asked is what words are in the database and how often. The list query page (linked to the words “Authors”, “Title Words”, and “Text Words” above the corresponding entry fields on the main query form) allows the user to find synonym groups and words in the database. The user can specify either a complete word in order to find its synonyms, or a partial word with wildcard characters to find all matching words in the database. When a word without wildcard characters is specified, the list query form returns all of its synonyms (if any). To find words matching a given pattern, users can specify a partial word with either or both of two wildcard characters. The question mark ‘?’ stands for any single character, the asterisk ‘*’ stands for zero or more characters (see section 4.2.3). For a wildcard search, the list query form returns all words in the database that match the specified pattern, together with the frequencies of these words in the database. #### 2.1.4 Journal specific access forms The regular query forms search the complete database. The user can select the return of only specific journals in the “Filter” section of the query form. In order to allow different journals to use the ADS system for searching their references, journal specific pages are available. 
The URL (Uniform Resource Locator) for an abstract search page for a specific journal is: ``` http://adsabs.harvard.edu/Journals/ search/bibstem ``` The page for retrieving scanned articles of a specific journal is: ``` http://adsabs.harvard.edu/Journals/ articles/bibstem ``` and the page for retrieving the tables of contents by volume or publication date is: ``` http://adsabs.harvard.edu/Journals/toc/bibstem ``` In each case, bibstem is the abbreviation for the selected journal. For instance: ``` http://adsabs.harvard.edu/Journals/search/ApJ ``` returns a query form for searching only references of the Astrophysical Journal. These forms are available for linking by anybody. ### 2.2 Direct Access Interfaces Both abstracts and articles can be accessed directly through HTML hyperlinks. The references are identified through the bibliographic codes (or bibcodes for short) mentioned above and described in detail in DATA. The syntax for such links to access abstracts is: ``` http://adsabs.harvard.edu/cgi-bin/ bib_query?bibcode=1989ApJ...342L..71R ``` Scanned articles can be accessed directly through links of the form: ``` http://adsabs.harvard.edu/cgi-bin/ article_query?bibcode=1989ApJ...342L..71R ``` These links will return the abstract or scanned article respectively for the specified bibliographic code. They are guaranteed to work in this form. You may see other URLs while you use the ADS. These are internal addresses that are not guaranteed to work in the future. They may change names or parameters. Please use only links of the form described above to directly access abstracts and articles. ### 2.3 Embedded Queries Embedded queries can be used to build hyperlinks that return the results of a pre-formulated query. One frequently used example is a link that returns all articles written by a specific author. The syntax for such a link is: ``` <a href="http://adsabs.harvard.edu/cgi-bin/ abs_connect?return_req=no_params& param1=val1&param2=val2&...">...</a> ``` There are no spaces allowed in a URL. All blanks need to be encoded as ‘+’. The parameter return_req=no_params sets all the default settings. Individual settings can be changed by including the name of the specific setting and its value after the return_req=no_params parameter in the URL. A list of available parameters can be accessed at: ``` http://adsdoc.harvard.edu/cgi-bin/ get_field_names.pl ``` We try to make changes to parameters backward compatible, but that is not always possible. We encourage you to use this capability, but it is advisable to use only the more basic parameters. This type of interface allows users to link to the ADS for a comprehensive list of references on a specific topic. Many users use this to provide an up-to-date publication list for themselves by encoding an author query into an embedded query. ### 2.4 Perl Script Queries The ADS database can be used by other systems to include ADS data in documents returned from that site. This allows programmers at other sites to dynamically include the latest available information in their pages. An example is the interface that SPIE (the International Society for Optical Engineering) provides to our database. It is available at: ``` http://www.spie.org/incite/ ``` This site uses Perl scripts to query our database and format the results according to their conventions. These Perl scripts are available at: ``` http://adsabs.harvard.edu/www/adswww-lib/ ``` The Perl scripts allow the programmer to specify all the regular parameters. The results are returned in Perl arrays. 
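For illustration, an embedded query like the ones above can be assembled programmatically; the sketch below uses Python rather than Perl, and the author parameter name is an assumption (the authoritative parameter list is behind the get_field_names.pl URL above).

```python
from urllib.parse import urlencode

BASE = "http://adsabs.harvard.edu/cgi-bin/abs_connect"

def embedded_query(**params):
    """Build an embedded-query URL; urlencode encodes blanks as '+'."""
    query = {"return_req": "no_params", **params}
    return BASE + "?" + urlencode(query)

print(embedded_query(author="Eichhorn, G"))
# http://adsabs.harvard.edu/cgi-bin/abs_connect?return_req=no_params&author=Eichhorn%2C+G
```

For anything beyond such simple links, the Perl scripts mentioned above take care of the encoding and of parsing the results.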
If you use these scripts, we would appreciate it if you would credit the ADS somewhere on your pages. ### 2.5 Email Interface The ADS Abstract Service can be accessed through an email interface. This service is somewhat difficult to use since it involves an interface between two relatively incompatible interface paradigms. This makes describing it quite difficult as well. It is intended for users who do not have access to web browsers. If you have questions about how to use this access, please contact the ADS at ads@cfa.harvard.edu. To get information about this capability, send email to: adsquery@cfa.harvard.edu with the word “help” in the message body. This interface accepts email messages with commands in the message body. The subject line is ignored. The commands that are available are: ``` 1) help (see above) 2) action=URL ``` The second command allows a user to retrieve a document at the specified URL. Three qualifiers allow the user to specify what retrieval method to use, what format to return, and to which address to return the results: ``` a) method=‘method’ b) return=‘return-type’ c) address=‘e-mail-address’ ``` a. ‘method’ is either ‘get’ or ‘post’ (without the quotes). This determines what kind of query will be executed. To retrieve a form for further queries, use the ‘get’ method. To execute a forms query you need to know what type of query the server can handle. If you execute a forms query after retrieving the form through this service, the correct method line will already be in place. Default method is ‘get’. b. ‘return-type’ is either ‘text’, ‘form’, or ‘raw’ (without the quotes). If text return is requested, only the text of the query result is returned, formatted as if viewed by a WWW browser. If form return is requested, the text of the result is returned as well as a template of the form that can again be executed with this service. If raw return is requested, the original document is returned without any processing. Default return is ‘form’. The capability to return MIME encoded files is in preparation. c. ‘e-mail-address’ specifies to which e-mail address the result should be sent. This line is optional. If no address is specified, the result is sent to the address from where the request came. To execute a query via email, the user first retrieves the abstract query form with: ``` action=http://adsabs.harvard.edu/ abstract_service.html ``` This will return an executable form. This form can be returned to the ADS in an email message to execute the query. The user enters input for the different fields as required in the forms template. For forms tags like checkboxes or radio buttons, the user can uncomment the appropriate line in the form. Comments in the form that are included with each forms field provide guidance for completing the form before submission. The text part of the form is formatted as comment lines so that the user does not have to modify any irrelevant parts of the form. The retrieval method is already set appropriately. ### 2.6 Z39.50 Interface The recently implemented Z39.50 interface (Lynch (1997)) conforms to the Library of Congress Z39.50 conventions (http://lcweb.loc.gov/z3950/lcserver.html). This allows any library that uses this protocol to access the ADS through their interface. The databases supported through this interface are listed in table 2, search fields supported are listed in table 3, supported relationship attributes are listed in table 4 and the supported structure attributes are listed in table 5. 
Table 6 lists the supported record formats and table 7 shows the supported record syntax. Each table notes which search fields support which attribute. The relationship attributes ‘equal’ and ‘relevance’ are used to search without and with synonym replacement respectively. If the structure attribute ‘Phrase’ is selected, the input is considered to be one phrase and is not separated into individual words. If the structure attribute ‘Word’ is selected and several words are specified, the input is treated as if the structure attribute ‘Word List’ were specified. As output, brief and full records are supported. These record formats are the same as in the short results list and in the full abstract display as described above. The supported record syntax is either SUTRS (Simple Unstructured Text Record Syntax), HTML (HyperText Markup Language), or the ADS tagged format. In the HTML record syntax, links to other supported ADS internal and external data sources are included. A description of this interface is available on-line at: ``` http://adsabs.harvard.edu/abs_doc/ads_server.html ``` An example of the ADS Z39.50 interface can be accessed from the Library of Congress Z39.50 Gateway at: ``` http://lcweb.loc.gov/z3950/ ``` ## 3 Preferences The ADS user interface is customized through the use of so-called HTTP persistent cookies (see Kristol & Montulli (1997)). These “cookies” are a means of identifying individual users. They are strings that are created by the server software. Web browsers that accept cookies store these identification strings locally on the user’s computer. Anytime a user makes a request, the ADS software checks whether the requesting browser is capable of accepting cookies. If so, it sends a unique string to the browser and asks it to store this string as an identifier for that user. From then on, every time the same user accesses the ADS from that account, the browser sends this cookie back to the ADS server. The ADS software contains a database with a data structure for each cookie that the ADS has issued. The data structure associated with each cookie contains information such as the type of display the user prefers, whether tables should be used to format data, which mirror sites the user prefers for certain external data providers, the preferred printing format for ADS articles, and which journal volumes the user has read. It also can store the email address of the user and a fax number, in case the user wants to retrieve articles via email or fax. The preference settings form includes a field for the user name as well as the email address. However, neither is necessary for the functioning of any of the features of the ADS. The system is completely anonymous. None of the information stored through this cookie system is made available to anybody outside the ADS. There is no open interface to this database and the database files are not accessible through the WWW. Any particular user can only access their own preferences, not the preferences set by any other user. Most of these preferences can be set by the user through a WWW forms interface. Some fields in a user’s preference record are for ADS internal use only. For instance the system is being used to display announcements to users in a pop-up window. The cookie database remembers when the message was last displayed, so that each message is displayed only once to each user. This preference saving system also allows each user to store a query form with filled-in fields. 
This enables users to quickly query the ADS for a frequently used set of criteria. The cookie identification system is implemented as a Berkeley DBM (DataBase Manager) database with the cookie as the record key. The data block that is stored in the database is a C structure. The binary settings (e.g. “Use Tables”, or “Use graphical ToC Page”) are stored as bits in several preference bytes. Other preferences are stored as bytes, short integer, or long integer, depending on their dynamic range. HTTP cookies are based on host names. Whenever a particular host is accessed by the browser, the browser sends the cookie for that host to the server software. Since the ADS uses several host names as well as mirror sites, we developed software that, on first contact between a new user and the ADS, sets the same cookie for all our host names at the main sites, as well as for the mirror hosts. This is essential in order to provide a seamless system with different servers handling different tasks. ## 4 Search Engine Details ### 4.1 General The search engine software is written in C. It accepts as input a structure that contains all the search fields and flags for the user specified settings and filters. For each search field that contains user input a separate POSIX (Portable Operating System Interface) thread is started that searches the database for the terms specified in that field. The results obtained for each term in that field are combined in the thread according to the specified combination logic. The resulting list of references is returned to the main thread. The main thread combines the results from the different field searches and calculates the final score for each reference. The final combined list with the scores is returned to the user interface routines that format the results according to the user specified output format. ### 4.2 Searching #### 4.2.1 Database Files The abstracts are indexed in separate fields: Author names, titles, abstract text, and objects. Each of these fields is indexed similarly into an index file and a list file (see ARCHITECTURE). The index file contains a list of all terms in the field together with the frequency of the term in the database, and the position and length of two blocks of data in the list file. One block contains all references that include the exact word as specified. The other block contains all references that contain either the word or any of its synonyms. A search for a particular word in the index file is done through a binary search in the index. The indexes are resident in memory, loaded during boot time (see section 5). Once the word is found in the index, the position and length of the data block is used to directly access the data block in the list file. This data block contains the identifier for each reference that contains the search word. If a quick update has occurred (see ARCHITECTURE) since the list file was last built (indicated by a negative in part of the last identifier), an auxiliary block of reference identifiers is read. Its position is contained in the structure of the last reference identifier. This auxiliary block is merged with the original one. The identifier for each reference is the position of the bibliographic code for that reference in the list of all bibliographic codes. This system saves one lookup of the identifier in a list of identifiers, since the number can be used directly as an index in the array of bibliographic codes. 
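A sketch of this two-file lookup may be helpful: a binary search over the in-memory index yields the position and length of a posting block in the list file, and each posting is itself an offset into the bibcode array, so no second identifier lookup is needed. The record layout below is invented for illustration, and the quick-update auxiliary block is ignored.

```python
import bisect
import struct

def lookup(word, index_words, index_blocks, list_file, bibcodes, synonyms=True):
    """Illustrative index/list-file lookup (not the actual ADS file layout).

    index_words  : sorted list of indexed terms, resident in memory
    index_blocks : per-term dict with (offset, length) of the exact-word
                   block and of the synonym-group block in the list file
    """
    i = bisect.bisect_left(index_words, word)   # binary search in the index
    if i == len(index_words) or index_words[i] != word:
        return []
    offset, length = index_blocks[i]["syn" if synonyms else "exact"]
    list_file.seek(offset)
    data = list_file.read(length)
    ids = struct.unpack("<%di" % (len(data) // 4), data)
    # Each id is a position in the bibcode array: one array access suffices.
    return [bibcodes[j] for j in ids]
```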
#### 4.2.2 Synonym Searches As mentioned above, the index files contain information about two blocks of data, the data for the individual word and for the synonym group to which this word belongs. When a search with synonym lookup enabled is requested, the block of data for the whole synonym group is used, otherwise the data for only the individual word is returned. All the processing that enables these two types of searches is done during indexing time, therefore the speeds for both searches are similar. Even though our synonym list is quite extensive (see ARCHITECTURE), our users will sometimes use words that are not in the database or the synonym list. In these cases the search software uses a stemming algorithm from the Unix utility ispell to find the stem of the search word and then searches for the word stem. The indexing software has indexed the stems of all words in the database together with their original words (see ARCHITECTURE). This word stemming is done as a last resort if no regular match has been found, in an attempt to find any relevant references. #### 4.2.3 Wildcard Searches In order to be able to search for families of words, a limited wildcard capability is available. Two wildcard characters are defined: the question mark ‘?’ is used to specify a single wildcard character and the asterisk ‘*’ is used to specify zero or more wildcard characters. The ‘?’ can be used anywhere in a word. For instance a search for M1? will find all Messier objects between M10 and M19. A search for a?sorb will find references with absorb as well as adsorb. The asterisk can only be used at the beginning or at the end of a word. For instance 3C* searches for all 3C objects. *sorb searches for words that end in sorb like absorb, desorb, etc. When synonym replacement is on, all their synonyms (e.g. absorption) will be found as well. The ‘?’ and the ‘*’ can be combined in the same search string. ### 4.3 Results Combining within a Field #### 4.3.1 Combining results As mentioned above, the user can select between four types of combination methods: “OR”, “AND”, simple logic, and full boolean logic. For the first three cases, the references for all search terms are retrieved and sorted first. The reference lists are then merged by going through the sorted reference lists sequentially and synchronously and selecting references according to the chosen logic. The search algorithm for the full boolean logic is different. The boolean query is parsed from left to right. For each search term a function is called that finds the references for this term. A search term is either a search word, a phrase, or an expression enclosed in parentheses. If the search term contains other terms (if it is enclosed in parentheses), the parsing function is called recursively. The next step is to determine the boolean operator that follows the search term, and then to evaluate the next search term after the operator. Once the reference lists for the two terms and the combining operator are determined, the two lists are combined according to the operator. This new list is then used as the first term of the next expression. If the boolean operator is ‘OR’, the combining of the lists is deferred, and the next operator and search term are evaluated. This ensures the correct precedence of ‘OR’ and ‘AND’ operators. The ‘NOT’ operator is implemented by getting the reference list for the term, making a copy of all references in the database, and then removing the references for the search term from the complete list. 
The ‘NOT’ operator is implemented by getting the reference list for the term, making a copy of all references in the database, and then removing the references from the search term from the complete list. This yields a very large list of references. If the first search term in a boolean expression is a ‘NOT’ term, the search will take a very long time, because this large list has to be propagated through all the subsequent parsing of the boolean expression. Care should therefore be taken to put a ‘NOT’ term to the right of at least one other term, since processing is done left to right. As an example, figure 8 shows the processing of the boolean expression mentioned in section 2: ``` (pulsar or ‘‘neutron star’’) and (‘‘red shift’’ distance) and not 1987A ```

#### 4.3.2 Scoring

In addition to the information about the references for each word, the index file also contains its frequency in the database. The frequency is already pre-calculated as int(10000/(log(frequency))) during indexing (see ARCHITECTURE). This saves considerable time during execution of the search engine, since all scoring calculations can be done as integer computations; no floating point operations are necessary. During the first part of the search, this frequency is attached to each retrieved reference. In the next step, the retrieved references are combined according to the selected combination logic for that field. For ‘OR’ combination logic, the lists retrieved for each word are merged and uniqued. As described in section 5, this is done by going through the sorted reference lists synchronously and adding each new reference to the output list. The score for that reference is determined by adding up the frequencies from each of the lists for weighted scoring, or by setting a score equal to the number of matched words for proportional scoring. For ‘AND’ combination logic, only references that appear in every one of the lists are selected. Each of these references receives a score of 1. For simple logic and full boolean logic, the score for the returned references is determined only from the search terms that were combined with ‘OR’. All words that have mandatory selection criteria (prefixed by ‘+’ or ‘–’ in simple logic, and combined with ‘AND’ or ‘NOT’ in full boolean logic) do not affect the final score.

### 4.4 Combining Results among Fields

#### 4.4.1 Combining

After the POSIX threads for each search field are started, the main program waits for all threads to complete the search. When all searches are completed, the search engine combines the results of the different searches according to the selected settings. If for instance one field was selected as required for selection in the settings section of the query form, only references that were found in the search for that field will be in the final reference list. The combined list is then uniqued and sorted by score. The resulting list of references is passed back to the user interface software. If the user did not specify any search terms, a date range has to be selected. The software queries the database for all references in the selected date range and uses this list for further processing, e.g. filtering (see section 4.5).

#### 4.4.2 Scoring

The score for each reference in the final results list is determined by adding the scores from each list multiplied by the user-specified weight for each field and then normalizing the score such that a reference that matches all search terms from all fields receives a score of 1.
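As an illustration of the weighted scoring of section 4.3.2, the following hypothetical C fragment shows how the integer word scores could be accumulated while the sorted lists are merged. It assumes identifiers arrive in nondecreasing order, as they do in a synchronous merge.

```
#include <stddef.h>

typedef struct { long id; long score; } ScoredRef;

/* Hypothetical accumulation step of a weighted-"OR" merge.  The   */
/* identifiers arrive in sorted order, so repeats of the same      */
/* reference are adjacent.  word_score is the pre-computed integer */
/* int(10000/log(frequency)) from the index file; rare words       */
/* therefore contribute larger scores than common ones.            */
static void add_scored(ScoredRef *out, size_t *n, long id, long word_score)
{
    if (*n > 0 && out[*n - 1].id == id)
        out[*n - 1].score += word_score;  /* reference seen before */
    else {
        out[*n].id = id;                  /* first occurrence      */
        out[*n].score = word_score;
        (*n)++;
    }
}
```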
### 4.5 Selection Filters

After the search is completed according to the specified search words and the settings that control the combination logic and the scoring algorithms, the resulting list of references can be filtered according to several criteria. During the design of the software a decision had to be made whether to filter the results while selecting the references or after completing the search. The first approach has the advantage that the combining of the selected references will be faster because fewer references need to be combined. The second approach has the advantage that the first selection is faster. We chose the second approach because, except for selecting by publication date, only a small number of queries use filtering (see section 6). Because of that, filtering by publication date is done during selection of the references, while the other filtering is done after the search is completed. References can be filtered by five criteria: ``` 1. Entry date in the database 2. Minimum score 3. Journal 4. Available links 5. Group membership ``` 1. + 2. Entry date and minimum score. These two filters can be used to query the database automatically on a regular basis for new information that is relevant to a selected topic. The user can build a query form that returns relevant references, and then save this query form locally. This query form can then be sent to the ADS email interface (see above) on a weekly or monthly basis. By specifying an entry date of -7 for instance, the query will retrieve all references that fit the query and that were entered within the last seven days. The minimum score can be used to limit the returned references to only the ones that are really relevant. The references are returned via email as described in section 2.5 about the email interface. 3. Journal filter. This filter allows the user to select references from individual journals or groups of journals. Available options for this filter are: ``` a. All journals (default) b. Refereed journals only c. Non-refereed journals only d. Selected journals ``` If the last option is selected, the user can specify one or more journal abbreviations (e.g. ApJ, AJ (Astronomical Journal)) that should be selected. More than one abbreviation can be specified by separating them with semicolons or blanks. The filter for journals can also include the volume number (but not the page number). The journal abbreviation is compared with the bibliographic codes over the length of the specified abbreviation; a sketch of this comparison is shown after this list. For instance, if the user specifies ApJ, the system selects all articles published in the ApJ, ApJ Supplement and ApJ Letters. ApJ.. will select only articles from the ApJ and the ApJ Letters. A special abbreviation, ApJL, will select only articles from the ApJ Letters. If a journal abbreviation is specified with a prepended ‘–’, all references that are NOT from that journal are returned. The journal abbreviations (or bibstems) used in the ADS are available at: ``` http://adsdoc.harvard.edu/abs_doc/ journal_abbr.html ``` 4. Available links. This filter allows the user to select references that have specific other information available. The returned references can be filtered, for instance, to include only references that have data links or scanned articles available. As an example, a user needs to find on-line data about a particular object. A search for that object in the object field and a filter for references with on-line data returns all articles about that object that link to on-line data. 5. Groups. We provide the capability to build a reference collection for a specific subset of references. This can be either articles written by members of a particular research institute or articles about a particular subject. Currently there are 5 groups in the ADS. We encourage larger institutes or groups to compile a list of their references and send it to us to be included in the list of groups.
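The journal abbreviation comparison referenced under item 3 above might look like the following minimal C sketch.

```
#include <string.h>

/* Hypothetical journal filter.  The abbreviation is compared with */
/* the journal part of the bibliographic code over the length of   */
/* the abbreviation, so "ApJ" matches ApJ, ApJ Supplement and ApJ  */
/* Letters, while "ApJ.." matches only ApJ and ApJ Letters.  A     */
/* leading '-' inverts the selection.  The fixed layout assumed    */
/* here (4-digit year, then the journal abbreviation) is a common  */
/* bibcode convention, not a detail given in the text.             */
static int journal_matches(const char *bibcode, const char *abbrev)
{
    int negate = (abbrev[0] == '-');
    if (negate) abbrev++;
    int hit = strncmp(bibcode + 4, abbrev, strlen(abbrev)) == 0;
    return negate ? !hit : hit;
}
```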
## 5 Optimizations

The search engine is entirely custom-built software. As mentioned in the introduction, the first version of the Abstract Service used commercial database software. Because of too many restrictions and serious performance problems, a custom-designed system was developed. The main design goal was to make the search engine as fast as possible. The most important feature that helped speed up the system was the use of permanent shared memory segments for the search index tables. In order to make searching fast, these index tables need to be in Random Access Memory. Since they are tens of megabytes long, they cannot be loaded for each search. The use of permanent shared memory segments allows the system to have all the index tables in memory all the time. They are loaded during system boot. When a search engine process is started, it attaches to the shared segments and has the data available immediately, without any loading delays. The shared segments are attached as read-only, so even if the search engine has serious bugs, it cannot compromise the integrity of the shared segments. Using shared segments with the custom-built software improved the speed of a search by a factor of 2 – 20, depending on the type of search. Access to the list files (see section 4.2) was optimized too. These files cannot be loaded into memory since they are too large (each is over one hundred megabytes in size). To optimize access to these files, they are memory mapped when they are accessed for the first time. From then on they can be accessed as if they were arrays in memory. The data blocks specified in the index tables can be accessed directly. Access is still from file, but it is handled through the paging mechanism of the operating system rather than through the regular I/O system, which makes it much more efficient. Once the search engine was completed and worked as designed, it was further optimized by profiling it and then reworking the modules that used significant amounts of time. Further analysis of the performance of the search engine revealed instances where operations were done for each search that could instead be done during indexing of the data and during loading of the shared segments. Overall these optimizations resulted in speed improvements of a factor of more than 10 over the performance of the first custom-built version. These optimizations were crucial for the acceptance of the ADS search system by the users. In order to further speed up the execution, the search engine uses POSIX threads to exploit the inherent parallel nature of the search task. The search for each field, and in the case of the object field for each database, is handled by a separate POSIX thread. These threads execute in parallel, which provides speedups on our multiprocessor server. Even on single processor systems this decreases the search time, since each thread at times needs to wait for I/O to complete; during these waits other threads can execute and therefore decrease the overall execution time of a search.
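Concretely, attaching to the permanent shared segments and memory mapping the list files, as described above, might look like the following simplified C sketch; keys, sizes, and error handling are illustrative only.

```
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

/* Attach to a permanent shared segment that holds an index table. */
/* The attach is read-only, so even a buggy search process cannot  */
/* corrupt the tables.  Key and size are illustrative values.      */
void *attach_index(key_t key, size_t size)
{
    int id = shmget(key, size, 0444);       /* existing segment only */
    if (id < 0) return NULL;
    void *p = shmat(id, NULL, SHM_RDONLY);  /* read-only attach      */
    return p == (void *)-1 ? NULL : p;
}

/* Memory-map a list file on first access.  Afterwards the data    */
/* blocks named in the index tables can be addressed like array    */
/* elements; I/O happens through the operating system's paging.    */
void *map_list_file(const char *path, size_t len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;
    void *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                              /* mapping stays valid   */
    return p == MAP_FAILED ? NULL : p;
}
```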
Another important part of the optimization was the decision on how to structure the index and list files. The index files contain the word frequency information that is used to calculate the scores for the weighted scoring (see section 2.1.1c). The score for a matching reference is calculated from the inverse logarithm of the frequency of the word in the database. This requires time-consuming floating point calculations. To avoid these calculations during the searches, the floating point arithmetic is done at indexing time. The index file contains the inverse log of the word frequency multiplied by a normalization factor of 10,000. This allows all subsequent calculations to be done in integer arithmetic, which is considerably faster than floating point calculations. Another optimization was to pre-compile the translation rules (see section 2.1.1). These translation rules are pre-compiled and stored in a shared memory segment to which the search process attaches. This allows for faster execution of these pattern matching routines. Overall, these optimizations improved the speed of the searches by two orders of magnitude between the original design using a commercial database and the current software.

## 6 Access Statistics

The ADS software keeps extensive logs about the use of the search and access software. In this section, usage statistics for the search software and for access patterns to the Article Service are reported. If not otherwise indicated, the statistics in this section are for the one-year period from 1 April 1998 through 31 March 1999.

### 6.1 Abstract Service

The ADS is accessed by users from many different countries. In the one-year period covered by this section, the ADS was accessed by 127,000 different users, from 100,000 different hosts in 112 different countries. An individual user is defined as having a unique cookie (see section 3). Users without cookies are distinguished by the hostnames from which the requests came. This may overestimate the number of users, since some users may have more than one cookie, for instance when accessing the ADS from home. The number of different hosts is a lower limit on the number of users. Many hosts are used by multiple users, so the real number is certainly considerably higher than that. The development of the number of users and queries over the life of the ADS is described in OVERVIEW. This section describes some more detailed investigations of the access statistics. The total number of users at first comes as quite a surprise. The number of working astronomers in the world is probably between 10,000 and 20,000. The number of ADS users is much larger than that. This is probably due to several factors. First, there are certainly many accidental users. They somehow find our search page through some link, execute a query to see what they get back, and then never come back because it is of no interest to them. Other possible users are media people. There are certainly many reporters occasionally looking up something in astronomy. I have spoken with several of them who use the ADS occasionally for that. Another group of users are amateur astronomers. The ADS was described in Sky & Telescope by Eichhorn 1996a a few years ago. This has certainly made amateur astronomers aware of this resource. The number of amateur astronomers world-wide is certainly in the millions, so they comprise a potentially large number of users. Another large group of users visits the ADS through links from other web sites. One particularly popular one is NASA’s Image of the Day, which frequently includes links to abstracts or articles in the ADS. Since this NASA page is visited by millions of people, a large number of them will access the ADS through these links.
The use of the ADS in different countries depends on several factors. One of these is certainly the population of the country. Figure 9 shows the number of queries per capita as a function of the population of the country (CIA (1999)). There seems to be an upper limit of about 0.1 – 0.2 queries per person per year. The one exception is the Vatican, with almost 3 queries per person per year. This is understandable since the Vatican has an active astronomy program, which generates a large number of queries for a small population. Another factor in querying the ADS is the funding available for astronomy in a country, and the available infrastructure to do astronomical research. Figure 10 shows the number of references retrieved per capita as a function of the Gross Domestic Product (GDP, CIA (1999)) of the country. The symbols are the Internet codes for each country. The highly industrialized countries cluster in the upper right part of the plot (area 1). A closeup of this region is shown in figure 11. Other clusters are the countries of the former Soviet Union (area 2), and Central and South American countries (area 3). The high number of references retrieved per capita combined with the lower GDP per capita of the former Soviet Union is probably due to a recent decline in GDP, but a still existing infrastructure for astronomical research. The ADS is used 24 hours per day. The distribution of queries throughout the day is shown in figure 12. This figure shows the number of queries at the two largest mirror sites, as well as the queries at the main ADS site. The usage distribution data are for the time period from 1 November 1998 to 31 March 1999, not the full year, to avoid complications due to different periods where daylight saving time is in effect. The queries at the main site are separated into US users and non-US users on the basis of their Internet hostnames. All the individual curves show a distinct two-peaked basic shape, with additional smaller peaks in some cases. This distribution of queries over the day shows the usage throughout a workday, with a small minimum during lunch hour. The SAO-US distribution does not show a real minimum between the two peaks, presumably because of the distribution of US researchers over three time zones. There are three features in this figure that deserve special notice. The first is that the shape of the accesses to the ADS mirror in France is the same as the shape of the non-US access to the SAO site. This indicates that the large majority of the non-US use on the SAO site is from European users. This non-US usage is about 50% higher than the total usage of the ADS mirror site at the CDS in France. The reason for this is most probably that the connectivity within Europe is not yet very good. We know, for instance, that our users in England and Sweden have better access to the main ADS site in the USA than to our mirror site in France. The same is true for other parts of Europe. Another reason for the use of the USA site by European users is that our European mirror sites do not yet have the complete set of scanned articles on-line. This forces some users to access the main ADS site in order to retrieve scanned articles. Second, there is a slight peak in the distribution of queries to the NAO mirror in Japan around 21:00 UTC (Coordinated Universal Time, formerly Greenwich Mean Time). This is probably due to US west coast users using the Japanese mirror site instead of the US site.
The access to Japan is frequently very fast, and response times from Japan may be better than from SAO during peak traffic times. Third, there is a distinct peak in the SAO-US usage at 9:00 UTC. This feature was so unusual that we tracked down the reason for it. It turns out that one of our users had set up web pages that include about 200 links to ADS abstracts. He had also set up a link verifier that every night at 9:00 UTC checked all the links on his pages. This meant that the link verifier executed 200 queries every night at the same time, which showed up in this evaluation of our access statistics. The following section shows statistics on how our users use the different capabilities of the ADS query system. Figure 13 shows a histogram of the relative usage of the different search fields (authors, objects, title, text). It shows clearly that the majority of queries are queries by author name (66%). Object names are used in fewer than 5% of the queries. The title field is used in about 21% of the queries, and the text field in 26%. Queries that use more than one field make up about 18% of the total. This usage pattern justifies, for instance, including tables of contents (ToCs) in the database even though they have no abstracts to search. Since a large part of the usage is through author and title queries, such ToC entries will still be found. Figure 14 shows the number of queries as a function of the number of query items in each input field. The query frequency generally decreases exponentially with increasing number of search terms. For title and text queries, the frequency is approximately constant up to 3 query words, before the frequency starts to decrease. For abstract queries there is a significant increase in the frequency of queries with more than 20 query words; for title queries there is a similar increase for queries with more than about 8 query words. This is due to queries generated through the query feedback mechanism, which allows the user to use a given abstract and its title as a new query. Figures 15 and 16 show the usage of non-default query settings (see section 4.2). The default settings were chosen to suffice for most queries. Figure 15 shows the percentage of non-default settings for the different settings and query fields available. It shows that 29% of author queries, 78% of title queries, and 85% of text queries use non-default settings. This was at first disappointing, because it suggested that the default settings might not be a reasonable selection of settings. The two main settings that were non-default were combining words with “AND” (see section 2.1.1.b.ii), and disabled weighted scoring (see section 2.1.1.c). On closer examination of the statistics it turns out that the straight weighting settings come mainly from two systems, the NASA Techreports and the International Society for Optical Engineering (SPIE). Both of these systems use our Perl scripts (see section 2.4) to access the ADS database. They do not set our normal default settings during these queries. Figure 16 shows the non-default settings for all queries that did not come from either of these two servers. There is still a small percentage of queries that use straight weighting, probably mostly due to other systems that use our Perl script interface routines. The one remaining non-default setting that is used frequently is the combination of words with “AND”. We believe that the “OR” combination as default is more useful since it returns more information.
The beginning of the list of returned references is the same, regardless of whether “AND” or “OR” combination is selected, since references that match all words are sorted to the beginning of the list. When “OR” combination is selected, partial matches will be returned after the perfect matches. This is desirable since there may be relevant references that for some reason do not match all query words. The other selection mechanism that is available is the filtering of references according to what other information is available for a reference. The usage of the filtering is shown in table 8. About 10% of the total queries use the filter option. Almost all of these filter by journal or select refereed journals only. The sum of the numbers for required data types adds up to more than the number for “Required data”, since more than one data type can be selected. Table 9 shows the number of links available and the usage pattern of the data links that the ADS provides. The highest usage is access to the abstracts, followed by the links to full text articles, links to citations, and links to on-line electronic articles. Reference links and links to SIMBAD objects are next.

### 6.2 Article Access Statistics

The ADS Article Service provides access to full journal articles. The usage statistics should show how astronomy researchers read and use journal articles. In this section we describe a few of the statistics of the article server. More statistics on the usage of the scanned articles are described in OVERVIEW. Figure 17 shows the number of pages of scanned articles retrieved over the life of the ADS; figure 18 shows the number of articles retrieved. The number of articles represents the sum of the selected links to on-line electronic articles, PDF and Postscript articles at the journals, and scanned articles at the ADS. Both the number of pages and the number of articles retrieved are steadily increasing. This is due to both the increased coverage of scanned journals in the ADS and the increase in the number of users of the system. Table 10 shows the number of retrievals in the various formats. Postscript is a page description language developed by Adobe (see Adobe Postscript (1990)). Postscript Level 1 is the first version of the Postscript language. It generates much larger files than Level 2 Postscript. Some older printers can process only Level 1 Postscript files. PDF (Portable Document Format) is a newer page description format, also developed by Adobe. PCL (Printer Control Language) is a printer control language developed by Hewlett Packard. It is used in low-end PC printers. Low resolution is 200 dpi for Postscript and PDF, and 150 dpi for PCL. High resolution is 600 dpi for Postscript and PDF, and 300 dpi for PCL. The majority of retrievals are of medium resolution Postscript files. This is the default setting in the ADS Article Service. The number of Postscript Level 1 articles (compatible with older printers, but much larger file sizes) retrieved is low compared with Level 2 Postscript articles, and slowly declining. The number of PCL articles retrieved is even smaller and also declining. The number of PDF articles retrieved was slowly increasing throughout 1998. It has increased much more rapidly in 1999. In early 1998 less than 15% of the high resolution articles were retrieved as PDF files. This fraction increased to 40% by March 1999.
## 7 Future Plans

The ADS Abstract Service is only seven years old, but it is already an indispensable tool for the astronomical research community. We get regular feedback from our users and we implement any reasonable suggestions. In this section we mention some of the planned improvements that we are currently working on to provide even more functionality to our users.

### 7.1 Historical Literature

One important part of the ADS Digital Library will be the historical literature from the 19th century and earlier. This part of the early literature is especially suited to being on-line in a central digital library. It is not available in many libraries, and where available it is often in dangerously deteriorating condition. The access statistics of the ADS show that even old journal articles are accessed regularly (see OVERVIEW). The ADS is working on scanning this historical literature through two approaches: scanning the journals, and scanning the observatory literature. 1. We are in the process of scanning the historical journal literature. We already have most of the larger journals scanned completely. Table 11 shows how much we have scanned of each of the journals and conference proceedings series for which we have permission. The oldest journal we have scanned completely is the Astronomical Journal (Vol. 1, 1849). We plan to have the Monthly Notices of the Royal Astronomical Society on-line completely by early in 2000. After that we plan to scan the oldest astronomical journal, Astronomische Nachrichten (Vol. 1, 1821), Icarus, Solar Physics, Zeitschrift für Astrophysik, Bulletin of the American Astronomical Society, and L’Observateur, and the conference series of IAU Symposia. Other journals that we plan to scan are the other precursor journals of Astronomy & Astrophysics, if we can obtain permission to do so. We will also scan individual conference proceedings for which we can obtain permission. 2. One very important part of the astronomical literature in the 19th century and earlier was the observatory publications. Much of the astronomical research before the 20th century was published in such reports. We are currently collaborating with the Harvard library in a project to make this part of the astronomical literature available through the ADS. The Harvard library has a grant from the National Endowment for the Humanities to make preservation microfilms of this (and other) literature. This project is generating an extra copy of each microfilm. We will scan these microfilms and produce electronic images of all the microfilmed volumes. The resolution of the microfilm and the scanning process is approximately equivalent to our 600 dpi scans. This project, once completed, will provide access to all astronomical observatory literature from the 19th century and earlier that is available at the Harvard library and the library of the Smithsonian Astrophysical Observatory. For the more recent observatory literature we will have to resolve copyright issues before we can put it on-line. In order to complete our data holdings, we still need issues of several journals for scanning. A list of journals and volumes that we need is on-line at: ``` http://adsabs.harvard.edu/pubs/ missing_journals.html ``` In order to feed the pages through a sheet feeder, we need to cut the backs of the volumes to be scanned. If they have not been bound before, they can be bound after the scanning. If they have been bound before, there is not enough margin left to bind them again.
If you have any of the journals/volumes that we need, and you are willing to donate them to us, please contact the first author of this article. We would like to have even single volumes of any of the missing journals.

### 7.2 Search Capabilities

The capabilities of the search system and user interface have been developed in close cooperation with our users. We always welcome suggestions for improvements and usually implement reasonable suggestions very quickly (within days or a few weeks). Because of this rapid implementation of new features we have hardly any backlog of improvements that we want to implement. There are currently two larger projects that we are investigating. 1. We plan to convert all our scanned articles into electronic text through Optical Character Recognition (OCR). In order to be able to search full text articles we will need to develop new search algorithms. Our current search system depends on the abstracts being of fairly uniform length. This caused some problems at one time when we included sets of data with abstracts that were 4-5 times as long as our regular abstracts. These long abstracts would be found disproportionately often in searches with many query words (for instance in query feedback searches), since they generally matched more words. We had to reduce the sizes of these abstracts in order to make the searching work consistently with these data sets. OCR’d full text will require new search algorithms to make the search work at all. We currently estimate that the implementation of the full text search capability will take at least 2 years. 2. The scanned articles frequently contain plots of data. For most of these plots the numerical data are not available. We plan to develop an interface that will allow our users to select a plot, display it at high resolution, and digitize the points in the plot by clicking on them. This will allow our users to convert plots into data tables with as much precision as is available on the printed pages. At this time we do not know how long it will take us to implement this new capability.

### 7.3 Article Access

We are currently investigating several new data compression schemes that would considerably reduce the size of our scanned articles. This could considerably improve the utility of the ADS Article Service, especially over slow links. We plan to be quite conservative in our approach to this, since we do not want to be locked into proprietary compression algorithms that could be expensive to utilize. At this point we do not have a time frame in which this might be accomplished.

## 8 Summary

The ADS Abstract Service has been instrumental in changing the way astronomers search and access the literature. One of the reasons for the immediate acceptance of the ADS by the astronomical community when the WWW-based version became available was the ease of use of the interface. It provided access to many advanced features, while still making it extremely easy to execute simple queries. Most of the use is for simple queries, but a significant number of queries utilize the more sophisticated capabilities. The search engine that provides this access was crucial to the success of the ADS as well. The most important aspects of a search engine are its speed and its flexibility to accommodate special features. The custom-designed software of the ADS search engine proved that, at least in some instances, a custom design has considerable advantages over a general purpose system.
The search speed that we were able to achieve and the flexibility of the custom design, which allows us to quickly adapt to our users’ needs, have justified the effort of developing from scratch a system that is tailored to the specific data set. In November 1999, 31,533 ADS users issued 780,711 queries and retrieved 15,238,646 references, 615,181 abstracts, and 1,119,122 scanned pages. Since the start of the ADS we have served 311,594,261 references and 17,146,370 scanned article pages. These numbers speak for themselves about the success of the ADS.

## 9 Acknowledgment

Funding for this project has been provided by NASA under NASA Grant NCC5-189.
# Q-balls in Underground Experiments

## 1 INTRODUCTION

Dark Matter (DM) is one of the most intriguing problems in particle physics and cosmology. Several types of stable particles hypothesized in theories beyond the Standard Model of particle physics have been considered as candidates for DM. One example of such particles is the lightest supersymmetric particle (LSP) coming from a supersymmetric extension of the Standard Model. In theories where scalar fields carry a conserved global quantum number, $`Q`$, there may exist non-topological solitons which are stabilized by global charge conservation. These particles are spherically symmetric, and for large values of $`Q`$ their masses and volumes grow linearly with $`Q`$. Thus they act like homogeneous balls of ordinary matter, with $`Q`$ playing the role of particle number; Coleman called this type of matter Q-balls. The conditions for the existence of absolutely stable Q-balls are satisfied in supersymmetric theories with low energy supersymmetry breaking. According to ref. , abelian non-topological solitons with baryon and/or lepton quantum numbers naturally appear in the spectrum of the Minimal Supersymmetric Standard Model. The role of the conserved quantum number is played by the baryon number. The same reasoning applies to sleptons for the lepton number and also to scalar Higgs particles. Q-balls can thus be considered as coherent states of squarks, sleptons and Higgs fields. Under certain assumptions about the internal self-interactions of these particles and fields, the Q-balls are absolutely stable. In this note we recall the main physical and astrophysical properties of these particles, the interaction of Q-balls with matter and their energy losses in matter, and the possibility of traversing the Earth to reach the MACRO detector. We neglect the possibility of: $`(i)`$ electromagnetic radiation emitted by Q-balls of high $`\beta `$; $`(ii)`$ strong interactions of Q-balls in the upper atmosphere capable of destroying the Q-ball; this last point deserves further investigation.

## 2 PROPERTIES OF Q-BALLS

Q-balls could have been produced in the Early Universe, and could contribute to the DM. Several mechanisms could have led to the formation of Q-balls in the Early Universe. They may have been created in the course of a phase transition, which is sometimes called “solitogenesis”, or they could have been produced via fusion in processes reminiscent of big bang nucleosynthesis, which have been called “solitosynthesis”; small Q-balls can also be pair-produced in high energy collisions. The astrophysical consequences of Q-balls in many ways resemble those of strange quark matter, “nuclearites”; the peculiarity of Q-balls is that their mass grows as $`Q^{3/4}`$, while for nuclearites the mass grows linearly with the baryon number. The Q-ball mass and size are related to its baryon number. For a supersymmetric potential $`U(\varphi )\approx M_S^4=\mathrm{const}`$ at large values of the scalar field $`\varphi `$, the Q-ball mass $`M`$ and radius $`R`$ are given by $$M=\frac{4\pi \sqrt{2}}{3}M_SQ^{3/4}$$ (1) $$R=\frac{1}{\sqrt{2}}M_S^{-1}Q^{1/4}$$ (2) The parameter $`M_S`$ is the energy scale of the SUSY symmetry breaking. A stability condition was found in ref. : the Q-ball mass $`M`$ is related to the nucleon mass $`M_N`$ by $$M\le QM_N$$ (3) From Eq. 1 and Eq. 3 one has the stability constraint: $$Q\ge 1.6\times 10^{15}\left(\frac{M_S}{1\,TeV}\right)^4$$ (4) In Fig. 1 the allowed region for stable Q-balls is indicated.
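The stability constraint of Eq. (4) follows directly from Eqs. (1) and (3): requiring the Q-ball to be lighter than $`Q`$ free nucleons gives $$\frac{4\pi \sqrt{2}}{3}M_SQ^{3/4}\le QM_N\hspace{1em}\text{i.e.}\hspace{1em}Q\ge \left(\frac{4\pi \sqrt{2}}{3}\frac{M_S}{M_N}\right)^4\simeq 1.6\times 10^{15}\left(\frac{M_S}{1\,TeV}\right)^4,$$ where $`M_N\simeq 0.938\,GeV`$. As a rough orientation (these numbers are our own evaluation of Eqs. (1) and (2) at the threshold, for $`M_S=1\,TeV`$), the lightest stable Q-ball then has $`M\approx 1.5\times 10^{15}\,GeV\approx 2.7\times 10^{-9}\,g`$ and $`R\approx 10^{-13}\,cm`$.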
Q-balls are expected to concentrate in galactic halos and to move with the typical galactic velocity $`v=\beta c\approx 10^{-3}c`$. Assuming that Q-balls constitute the cold galactic dark matter with $`\rho _{DM}\approx 0.3\,GeV/cm^3`$, their number density is $$N_Q\approx \frac{\rho _{DM}}{M}\approx 5\times 10^{-5}\,Q^{-3/4}\left(\frac{1\,TeV}{M_S}\right)cm^{-3}$$ (5) The corresponding flux is $$\varphi \approx 10^2\,Q^{-3/4}\left(\frac{M_S}{1\,TeV}\right)^{-1}cm^{-2}s^{-1}sr^{-1}$$ (6) If we assume that $`\beta _{Qball}\approx 10^{-3}`$, $`\rho _{DM}\approx 0.3\,GeV/cm^3`$ and $`M_S=1\,TeV`$, the Q-ball flux cannot be greater than $`4\times \frac{10^{-19}}{M_Q}\,cm^{-2}s^{-1}sr^{-1}`$ ($`M_Q`$ in g). Q-balls are expected to be distributed uniformly in our part of the galaxy, and there should not be enhancements in the solar system, such as, for example, a cloud of Q-balls around the sun. Q-balls can be classified in two classes: SECS (Supersymmetric Electrically Charged Solitons) and SENS (Supersymmetric Electrically Neutral Solitons). The interaction of Q-balls with matter and their detection differ drastically for SECS and SENS.

## 3 INTERACTION WITH MATTER

### 3.1 Interaction with matter of Q-balls type SECS

SECS are Q-balls with a net positive electric charge in the interior. The charge of SECS originates from the unequal rates of absorption into the condensate of quarks (squarks) and electrons (selectrons). This positive electric charge is neutralized by a surrounding cloud of electrons. The positive core interacts via elastic or quasi-elastic collisions. The positive electric charge can vary from one to several tens; the cross section is the Bohr cross section for Q-ball interactions with hydrogen, $$\sigma =\pi a_0^2\approx 10^{-16}\,cm^2$$ (7) where $`a_0`$ is the Bohr radius. The formula is valid for $`R\lesssim a_0`$, which happens for $`Q\lesssim 2.7\times 10^{34}\left(\frac{M_S}{1\,TeV}\right)^4`$. The main energy losses of SECS passing through matter with velocities in the range $`10^{-4}<\beta <10^{-2}`$ are due to two contributions: the energy losses due to the interaction of the SECS core $`(i)`$ with the nuclei (nuclear contribution) and $`(ii)`$ with the electrons of the traversed medium (electronic contribution). The total energy loss is the sum of the two contributions. SECS could cause the catalysis of proton decay, but only if they are large and have large velocities; this possibility does not concern our range of interest for the velocities, masses and radii of SECS. Electronic losses of SECS: The electronic contribution to the energy loss of SECS is calculated with the following formula $$\frac{dE}{dx}=\frac{8\pi a_0e^2\beta }{\alpha }\frac{Z_1^{7/6}N_e}{(Z_1^{2/3}+Z_2^{2/3})^{3/2}}\hspace{1em}\text{for }Z_1\ge 1$$ (8) where $`\alpha `$ is the fine structure constant, $`\beta =v/c`$, $`Z_1`$ is the positive core charge of the SECS, $`Z_2`$ is the atomic number of the medium and $`N_e`$ is the density of electrons in the medium. Electronic losses dominate for $`\beta >10^{-4}`$. Nuclear losses of SECS: The nuclear contribution to the energy loss of SECS is due to the interaction of the SECS positive core with the nuclei of the medium, and it is given by $$\frac{dE}{dx}=\frac{\pi a^2\gamma NE}{\epsilon }S_n(\epsilon )$$ (9) where $$S_n(\epsilon )\approx \frac{0.56\,\mathrm{log}(1.2\epsilon )}{1.2\epsilon -(1.2\epsilon )^{-0.63}},\hspace{1em}\epsilon =\frac{aM_2E}{Z_1Z_2e^2M_1}$$ (10) and $$a=\frac{0.885a_0}{(\sqrt{Z_1}+\sqrt{Z_2})^{2/3}},\hspace{1em}\gamma =\frac{4M_2}{M_1}$$ (11) $`M_1=M`$ is the mass of the incident Q-ball; $`M_2`$ is the mass of the target nuclei; $`Z_1e`$ and $`Z_2e`$ are their electric charges; we assume that $`M_1>M_2`$. Nuclear losses dominate for $`\beta \lesssim 10^{-4}`$. The energy losses of SECS in the Earth mantle and Earth core: these have been computed for different $`\beta `$-regions and for different charges of the Q-ball core, using the same general procedures used in the past for computing the energy losses in the Earth of magnetic monopoles and nuclearites. Fig. 2 presents the energy losses of Q-balls of type SECS in the Earth mantle.
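To set the scale of Eq. (8), one can make a rough numerical estimate for a SECS with core charge $`Z_1=1`$ in standard rock; the medium parameters used here ($`Z_2\approx 11`$, $`N_e\approx 8\times 10^{23}\,cm^{-3}`$) and the resulting number are assumptions of this sketch, not values quoted above. Using $`e^2=\alpha \hbar c`$, Eq. (8) becomes $$\frac{dE}{dx}=8\pi a_0\hbar c\,\beta \,\frac{N_e}{(1+Z_2^{2/3})^{3/2}}\approx 0.15\,GeV/cm\hspace{1em}\text{for}\hspace{1em}\beta =10^{-3},$$ i.e. electronic losses of the order of $`10^2\,MeV/cm`$ at typical galactic velocities.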
The energy losses of SECS in the earth mantle and earth core: The energy losses of SECS in the earth mantle and earth core have been computed for different $`\beta `$-regions and for different charges of the Q-ball core, using the same general procedures used in the past for computing the energy losses in the earth of magnetic monopoles and nuclearites . In Fig. 2 is presented the energy losses of Q-balls type SECS in the Earth mantle. ### 3.2 Interaction of Q-balls type SENS The Q-ball interior of SENS is characterized by a large Vacuum Expectation Value (VEV) of certain squarks, and may be sleptons and Higgs fields . The $`SU(3)_c`$ symmetry is broken and deconfinement takes place inside the Q-ball. If a nucleon enters this region of deconfinement, it dissociates into three quarks, some of which may then become absorbed in the condensate via the reaction . $$qq\stackrel{~}{q}\stackrel{~}{q}$$ (12) In practice the reaction looks like $$(Q)+Nucleon(Q+1)+pions$$ (13) and sometimes as $$(Q)+Nucleon(Q+1)+Kaons$$ (14) If it is assumed that the energy released in (13) is the same as in typical hadronic processes (about 1 GeV per nucleon), this energy is carried by 2 or 3 pions (or 2 kaons). The cross section for reactions (13) and (14) is determined by the Q-ball radius $`R`$ $$\sigma 6\times 10^{34}Q^{1/2}\left(\frac{1TeV}{M_S}\right)^2cm^2$$ (15) The corresponding mean free path $`\lambda `$ is $$\lambda =\frac{1}{\sigma N}$$ (16) According to References \[5-8\] the energy loss of SENS moving with velocities in the range $`10^4<\beta <10^2`$ is constant and is given by $$\frac{dE}{dx}\frac{\zeta }{\lambda }=\zeta N10^{34}Q^{1/2}\left(\frac{1TeV}{M_S}\right)^2cm^2$$ (17) where $`Nisthenumberofatoms/cm^3`$ and $`\zeta `$ is the energy released in the decay. The energy loss of SENS is due to the energy released from the absorbed nucleon mass. SENS lose a very small fraction of their kinetic energy and are able to traverse the Earth without attenuation for all masses of our interest. ## 4 ENERGY LOSSES IN DETECTORS ### 4.1 Light Yield of Q-balls type SECS For SECS we distinguish two contributions to the Light Yield in scintillators: the primary Light Yield and the secondary Light Yield. The primary Light Yield: is due to the direct excitation and ionization produced by the SECS in the medium. The energy losses in the MACRO liquid scintillator is computed from the energy losses of protons in hydrogen and carbon $$\left(\frac{dE}{dx}\right)_{SECS}=\frac{1}{14}[2\left(\frac{dE}{dx}\right)_H+12\left(\frac{dE}{dx}\right)_C]$$ (18) $$=SP=\frac{SL\times SH}{SL+SH}$$ (19) where $`SP`$ is the stopping power of SECS, which reduces to $`SL`$ at low $`\beta `$ and to $`SH`$ at high $`\beta `$, so at very high $`\beta `$; the $`SP`$ stopping power coincides with the Bethe Bloch formula for electric energy losses. 1. For $`Q=1`$ the energy losses of SECS in hydrogen and carbon is computed from adding an exponential factor due to the experimental data . i) For $`10^5<\beta <5\times 10^3`$ we obtained the following formula $$\left(\frac{dE}{dx}\right)_{SECS}=C[1exp(\frac{\beta }{7\times 10^4})^2]\frac{MeV}{cm}$$ (20) where $`C=1.3\times 10^5\beta `$. 
ii) For $`5\times 10^{-3}<\beta <10^{-2}`$ we used the following formula $$SP=SP_H+SP_C=\left(\frac{dE}{dx}\right)_{SECS}$$ (21) where $$SP_H=\frac{SL_H\times SH_H}{SL_H+SH_H}$$ (22) $$SP_C=\frac{SL_C\times SH_C}{SL_C+SH_C}$$ (23) and $$SL=A_1E^{0.45},\hspace{1em}SH=A_2\,\mathrm{ln}\left(1+\frac{A_3}{E}+A_4E\right)$$ (24) where the $`A_{i=1,\mathrm{},4}`$ are constants obtained from experimental data, and $`E`$ is the energy of a proton with velocity $`\beta `$. 2. For SECS with charge $`Q=Z_1e`$ the energy losses for $`10^{-5}<\beta <10^{-2}`$ are given by $$\left(\frac{dE}{dx}\right)_{SECS}=F(Z_1,Z_2)\left[1-\mathrm{exp}\left(-\left(\frac{\beta }{7\times 10^{-2}}\right)^2\right)\right]$$ (25) where $$F(Z_1,Z_2)=\frac{8\pi e^2a_0\beta }{\alpha }\frac{Z_1^{7/6}N_e}{(Z_1^{2/3}+Z_2^{2/3})^{3/2}}$$ (26) and where $`Z_2`$ is the atomic number of the target atom, $`N_e`$ the density of electrons and $`\alpha `$ the fine structure constant. The primary light yield of SECS is given by $$\left(\frac{dL}{dx}\right)_{SECS}=A\left[\frac{1}{1+AB\frac{dE}{dx}}\right]\frac{dE}{dx}$$ (27) where $`\frac{dE}{dx}`$ is the energy loss of SECS, and $`A`$ and $`B`$ are parameters depending only on the velocity of the SECS. The secondary light yield: here we considered the elastic or quasi-elastic recoil of hydrogen and carbon nuclei. The light yield $`L_p`$ from a hydrogen or carbon nucleus of given initial energy $`E`$ is computed as $$L_p(E)=\int _0^E\frac{dL}{dx}(\epsilon )\,S_{\text{tot}}^{-1}\,d\epsilon $$ (28) where $`S_{\text{tot}}`$ is the sum of the electronic and nuclear energy losses. The nuclear energy losses are given in ref. . The secondary light yield is then $$\left(\frac{dL}{dx}\right)_{\text{secondary}}=N\int _0^{T_m}L_p(T)\frac{d\sigma }{dT}\,dT$$ (29) where $`T_m`$ is the maximum energy transferred and $`\frac{d\sigma }{dT}`$ is the differential scattering cross section, given in ref. . Fig. 3 presents the light yield of SECS in the MACRO liquid scintillator as a function of the SECS velocity $`\beta `$.

### 4.2 Energy losses of Q-balls type SECS in streamer tubes

The composition of the gas in the MACRO limited streamer tubes is 73% helium and 27% n-pentane by volume. The pressure is about one atmosphere, and the resulting density is low in comparison with the density of the other detectors: $`\rho _{gas}=0.856\,\text{mg/cm}^3`$. The energy losses of MMs in the streamer tubes have been discussed in ref. . The ionization energy losses of SECS in the streamer tubes of the MACRO experiment for $`10^{-3}<\beta <10^{-2}`$ were computed with the same procedure used for scintillators, but using the density and the chemical composition of the streamer tube gas. For $`Q=13e`$ the energy losses are calculated as in ref. , but we have omitted the exponential factor which takes into account the energy gap in organic scintillators. The Drell effect does not occur because SECS are not magnetically charged. The threshold for ionizing n-pentane occurs at $`\beta \approx 10^{-3}`$.

### 4.3 Restricted Energy Losses of SECS in the Nuclear Track Detector CR39

The quantity of interest for the CR39 nuclear track detector is the Restricted Energy Loss (REL), that is, the energy deposited within a diameter of $`\sim 100`$ Å around the track. The REL in CR39 has already been computed for MMs of $`g=g_D`$ and $`g=3g_D`$ and for dyons with $`q=e`$, $`g=g_D`$ . We have checked these calculations and extended them to other cases of interest. The chemical composition of CR39 is $`\text{C}_{12}\text{H}_{18}\text{O}_7`$, and the density is 1.31 $`\text{g/cm}^3`$. For the computation of the REL, only energy transfers to atoms above 12 eV are considered, because it is estimated that 12 eV are necessary to break the molecular bonds.
At low velocities ($`3\times 10^{-5}<\beta <10^{-2}`$) there are two contributions to the REL: the ionization and the atomic recoil contributions. The ionization contribution was computed with Ziegler’s fit to the experimental data. The atomic recoil contribution to the REL was calculated using the interaction potential between an atom and a SECS, which is equal to the screened electric potential $$V(r)=\frac{Z_1Z_2e^2}{r}\varphi (r)$$ (30) where $`r`$ is the distance between the core of the SECS and the target atom, $`Z_1e`$ is the electric charge of the core, and $`Z_2`$ is the atomic number of the target atom. The function $`\varphi (r)`$ is the screening function given by $$\varphi (r)=\sum _{i=1}^3C_i\,\mathrm{exp}\left(-\frac{b_ir}{a}\right)$$ (31) where $`a`$ is the screening length $$a=0.8853\frac{a_0}{(Z_1^{\frac{1}{2}}+Z_2^{\frac{1}{2}})^{\frac{2}{3}}}$$ (32) $`a_0`$ is the Bohr radius, and the coefficients $`C_i`$ are restricted such that $$\sum _{i=1}^3C_i=1$$ (33) Assuming the validity of this potential, we calculated the relation between the scattering angle $`\theta `$ and the impact parameter $`b`$. From this relation, the differential cross section $`\sigma (\theta )`$ is readily obtained as $$\sigma (\theta )=(db/d\theta )\,b/\mathrm{sin}\theta $$ (34) The relation between the transferred kinetic energy $`K`$ and the scattering angle $`\theta `$ is given by $$K=4E_{\text{inc}}\mathrm{sin}^2(\theta /2)$$ (35) where $`E_{\text{inc}}`$ is the energy of the atom relative to the SECS in the center of mass system. The Restricted Energy Losses are finally obtained by integrating over the transferred energies: $$\frac{dE}{dx}=N\int K\,\sigma (K)\,dK$$ (36) where $`N`$ is the number density of atoms in the medium and $`\sigma (K)`$ is the differential cross section as a function of the transferred kinetic energy $`K`$.

## 5 CONCLUSION

Supersymmetric generalizations of the Standard Model allow for stable non-topological solitons of the Q-ball type, which may be considered bags of squarks and sleptons and thus have non-zero baryon and lepton numbers, as well as electric charge \[1-3\]. These solitons can be produced in the Early Universe, can affect the nucleosynthesis of light elements, and can lead to a variety of other cosmological consequences. In this paper we computed the energy losses of Q-balls of type SENS and type SECS. Using these energy losses and a rough model of the Earth’s composition and density profiles, we have computed the geometrical acceptance of the MACRO detector for Q-balls of type SECS with $`v=250\,km/s`$ as a function of the Q-ball mass $`M`$. We have calculated the accessible region in the (mass, velocity) plane for SECS reaching MACRO from above and from below. We also presented a systematic analysis of the energy deposited by SECS in scintillators, streamer tubes and CR39 nuclear track detectors, in forms useful for their detection. In particular, we computed the light yield in scintillators, the ionization in streamer tubes and the REL in nuclear track detectors. MACRO is sensitive to both SECS and SENS. A good upper flux limit may be obtained at the level of a few times $`10^{-16}\,cm^{-2}s^{-1}sr^{-1}`$. Acknowledgements: I gratefully acknowledge Prof. G. Giacomelli for his continuous availability and for very useful critical discussions. This work was supported by ICTP and INFN grants.
# Number of distinct sites visited by 𝑁 random walkers on a Euclidean lattice

## I INTRODUCTION

Usually, the extremely successful theory of random walks is only concerned with problems that involve a single ($`N=1`$) random walker. A solid reason for this is the understanding that the average properties of the single diffusing walker serve to describe the global properties of systems formed by many walkers. However, there are other interesting diffusion problems that involve many random walkers, for which the diffusive behavior of every walker of the total of $`N`$ is relevant, i.e., diffusion processes that cannot be described by averaging over the properties of a single walker. The problem of evaluating the time spent by the first $`j`$ particles out of a total of $`N`$ to escape from a given region is a clear example. Another important example, which is the subject of this paper, is the problem of evaluating the average number $`S_N(t)`$ of distinct sites visited (or territory explored) by a set of $`N`$ independently diffusing random walkers up to time $`t`$. The case $`N=1`$ has been studied in detail since it was posed by Dvoretzky and Erdös and is discussed in many general references. However, the multiparticle ($`N>1`$) version of this problem has been systematically treated only after the pioneering works of Larralde et al. These authors addressed the problem of evaluating the territory covered by a set of $`N`$ independent random walkers, all initially placed at the same point, that diffuse with steps of finite variance on Euclidean lattices. They found asymptotic expressions for $`S_N(t)`$ for $`N\gg 1`$, and described the existence of three time regimes. Their results can be summarized as follows: $$S_N(t)\sim \{\begin{array}{cc}t^d,\hfill & \hfill t\ll t_\times \\ \multicolumn{2}{c}{}\\ t^{d/2}\mathrm{ln}^{d/2}\left(x\right),\hfill & \hfill t_\times \ll t\ll t_\times ^{\prime }\\ \multicolumn{2}{c}{}\\ NS_1(t),\hfill & \hfill t_\times ^{\prime }\ll t\end{array}$$ (1) where $`x=N`$ for $`d=1`$, $`x=N/\mathrm{ln}t`$ for $`d=2`$ and $`x=N/\sqrt{t}`$ for $`d=3`$. The properties of $`S_1(t)`$ are well known; in particular, $`S_1(t)\sim t^{1/2}`$ for $`d=1`$, $`S_1(t)\sim t/\mathrm{ln}t`$ for $`d=2`$ and $`S_1(t)\sim t`$ for $`d=3`$. In the very short-time regime ($`t\ll t_\times `$), or regime I, there are so many particles at every site that all the nearest neighbors of the already visited sites are reached at the next step, so that the number of distinct sites visited grows as the volume of a hypersphere of radius $`t`$: $`S_N(t)\sim t^d`$. Regime III ($`t\gg t_\times ^{\prime }`$), or the long-time regime, corresponds to the final stage in which the walkers have moved so far away from each other that their trails (almost) never overlap, and $`S_N(t)\approx NS_1(t)`$. The crossover time from regime I to regime II is given by $`t_\times \sim \mathrm{ln}N`$ for every lattice. This can be easily understood if we take into account that the number of particles on the outermost visited sites at very short times decreases as $`N/z^t`$, where $`z`$ is the coordination number of the lattice, so that the fully overlapping regime breaks down approximately when $`N/z^t\approx 1`$ or, equivalently, when $`t_\times \sim \mathrm{ln}N`$. Regime III never appears in the one-dimensional case (i.e., $`t_\times ^{\prime }\rightarrow \infty `$), but $`t_\times ^{\prime }\sim e^N`$ for $`d=2`$ and $`t_\times ^{\prime }\sim N^2`$ for $`d=3`$. These crossover times will be obtained readily from the mathematical formalism discussed in the present paper.
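The crossover estimate just quoted can be made explicit. With all $`N`$ walkers starting at the origin, the number of walkers available at the outermost visited sites after $`t`$ steps falls off as $`N/z^t`$, so the compact-growth condition fails when $$\frac{N}{z^t}\approx 1,\hspace{1em}\text{i.e.}\hspace{1em}t_\times \approx \frac{\mathrm{ln}N}{\mathrm{ln}z}\sim \mathrm{ln}N,$$ independently of the dimension: the coordination number $`z`$ of the particular lattice enters only through the prefactor $`1/\mathrm{ln}z`$.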
The most interesting regime is regime II ($`t_\times \ll t\ll t_\times ^{\prime }`$), or the intermediate regime. For this time regime we will obtain explicitly the main term and the first two corrective terms of the asymptotic expression of $`S_N(t)`$ for $`N\gg 1`$. Higher corrective terms could be calculated, as our method allows them to be obtained in a systematic way. The contribution of these corrective terms cannot be ignored even for very large values of $`N`$, because they decay logarithmically with $`N`$. However, as we will see in Sec. IV, the use of two corrective terms leads to very good agreement with simulation results for relatively small values of $`N`$ ($`N\sim 100`$). The paper is organized as follows. The asymptotic evaluation of $`S_N(t)`$ for a $`d`$-dimensional Euclidean lattice is discussed in detail in Sec. II. Some geometric implications of this result are discussed in Sec. III. In Sec. IV, we compare our zeroth- (i.e., main), first- and second-order term approximation for $`S_N(t)`$ with other approximations and with computer simulations for one-, two- and three-dimensional simple Euclidean cubic lattices. The paper ends with some remarks on the applicability of this method to other diffusion problems and different media. Some technical details are discussed in an Appendix.

## II THE NUMBER OF DISTINCT SITES VISITED

We consider a group of $`N`$ random walkers starting from an origin site $`𝐫=0`$ at time $`t=0`$. A survival probability, $`\mathrm{\Gamma }_N(t,𝐫)`$, is defined as the probability that site $`𝐫`$ has not been visited by any of the random walkers before time $`t`$. Similarly, we can define a mortality function, $`1-\mathrm{\Gamma }_N(t,𝐫)`$, as the probability that site $`𝐫`$ has been visited by at least one walker in the time interval $`(0,t)`$. The relationship between the number of distinct sites visited, $`S_N(t)`$, and the survival probability is $$S_N(t)=\sum _𝐫\left\{1-\mathrm{\Gamma }_N(t,𝐫)\right\}.$$ (2) For independent random walkers, we have $`\mathrm{\Gamma }_N(t,𝐫)=\left[\mathrm{\Gamma }_t(𝐫)\right]^N`$, where $`\mathrm{\Gamma }_t(𝐫)\equiv \mathrm{\Gamma }_1(t,𝐫)`$ is the one-particle survival probability. Next, the discrete sum in Eq. (2) is replaced by a continuous one. Thus, we write $$S_N(t)=\int _0^{\infty }\left[1-\mathrm{\Gamma }_t^N(r)\right]\,d\,v_0r^{d-1}\,dr,$$ (3) where $`v_0`$ is the volume (i.e., the number of lattice sites) of the hypersphere with unit radius and $`d`$ is the lattice dimension. It has been found for Euclidean lattices that $$\mathrm{\Gamma }_t(r)\approx \stackrel{~}{\mathrm{\Gamma }}_t(\xi )=1-A\xi ^{-2\mu }e^{-d\xi ^2/2}\left(1+\sum _{n=1}^{\infty }h_n\xi ^{-2n}\right),$$ (4) for $`\xi \equiv r/\sqrt{2dDt}\gg 1`$. Here $`D`$ is the diffusion coefficient defined through the Einstein relation $`r^2\sim 2dDt`$, $`t\rightarrow \infty `$, with $`r^2`$ being the mean-square displacement of a single random walker. The values of $`A`$, $`\mu `$ and $`h_1`$ for $`d=1`$, $`2`$ and $`3`$ are shown in Table I.
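For the one-dimensional lattice, the form of Eq. (4) can be checked explicitly. The single-walker survival probability is $`\mathrm{\Gamma }_t(\xi )=\text{erf}(\xi /\sqrt{2})`$ (it will be used again in Fig. 1 below), and the standard large-argument expansion of the error function gives $$\text{erf}\left(\frac{\xi }{\sqrt{2}}\right)=1-\sqrt{\frac{2}{\pi }}\,\xi ^{-1}e^{-\xi ^2/2}\left(1-\xi ^{-2}+3\xi ^{-4}-\cdots \right),$$ which is precisely Eq. (4) with $`A=(2/\pi )^{1/2}`$ (the constant quoted for $`d=1`$ at the end of this section), $`\mu =1/2`$ and $`h_1=-1`$.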
A change to the new variable $`\xi `$ and an integration by parts (taking into account that $`\stackrel{~}{\mathrm{\Gamma }}_t(\infty )=1`$) yield $$S_N(t)=v_0(2dDt)^{d/2}J_N(d;0,\infty ),$$ (5) where $$J_N(d;a,b)=\int _a^bN\mathrm{\Gamma }_t^{N-1}(\xi )\frac{d\mathrm{\Gamma }_t(\xi )}{d\xi }\xi ^d\,d\xi .$$ (6) In order to evaluate the asymptotic behavior of $`J_N(d;0,\infty )`$ it is convenient to make the decomposition $$J_N(d;0,\infty )=J_N(d;0,\xi _\times )+J_N(d;\xi _\times ,\infty ),$$ (7) where $`\xi _\times `$ is a value that should satisfy the following two conditions: (a) $`\xi _\times `$ is large enough so that $`\mathrm{\Gamma }_t(r)`$ can be well approximated by its asymptotic approximation $`\stackrel{~}{\mathrm{\Gamma }}_t(\xi )`$ for $`\xi \ge \xi _\times `$, and (b) small enough so that $$\stackrel{~}{\mathrm{\Gamma }}_t^N(\xi _\times )=1/N^p,$$ (8) with $`p>1`$ (say $`p=2`$). From Eq. (4) it is straightforward to see that $$\xi _\times ^2\sim \mathrm{ln}N$$ (9) satisfies both conditions. On the other hand, because at most $`d\mathrm{\Gamma }/d\xi =𝒪(1)`$ and $`\mathrm{\Gamma }_t(\xi )`$ is a monotonically growing function, $`J_N(d;0,\xi _\times )`$ is bounded by a term that goes as $`N\stackrel{~}{\mathrm{\Gamma }}_t^{N-1}(\xi _\times )\xi _\times ^d`$ or, equivalently, from Eq. (8), by a term that goes mainly as $`N^{1-p}`$. But shortly we will show that $`J_N(d;\xi _\times ,\infty )`$ goes essentially as $`\mathrm{ln}^{d/2}N`$; this means that $`J_N(d;0,\xi _\times )`$ is asymptotically smaller than any term in the asymptotic expansion for $`J_N(d;0,\infty )`$, and thus we can write $$J_N(d;0,\infty )\approx J_N(d;\xi _\times ,\infty ),\hspace{1em}N\gg 1.$$ (10) The previous discussion is illustrated in Fig. 1 for the one-dimensional case. In this figure we have represented the integrand of $`J_N(1;0,\infty )`$ for increasing values of $`N`$, using as survival probability both the exact value $`\mathrm{\Gamma }_t(\xi )=\text{erf}(\xi /\sqrt{2})`$ and the asymptotic expression of Eq. (4) up to first order ($`n=1`$). Notice that the area below the solid \[broken\] curve is just the exact \[asymptotic approximate\] value of $`S_N(t)/(8Dt)^{1/2}=J_N(1;0,\infty )`$. The value of $`\xi _\times `$ as given by Eq. (8) with $`p=2`$ is marked with a symbol. It is clear from the figure that, for large $`N`$, (a) the integrand of $`J_N(1;\xi _\times ,\infty )`$ is well represented by the asymptotic expression of $`\mathrm{\Gamma }_t(\xi )`$, and (b) that, as stated for the general case in Eq. (10), $`J_N(1;0,\xi _\times )\ll J_N(1;\xi _\times ,\infty )\approx J_N(1;0,\infty )`$. From Eq. (4), one easily finds that $$\frac{d\stackrel{~}{\mathrm{\Gamma }}_t(\xi )}{d\xi }\left[1-\stackrel{~}{\mathrm{\Gamma }}_t(\xi )\right]^{-1}=2\xi \sum _{n=0}^{\infty }j_n\xi ^{-2n},$$ (11) with $`j_0=d/2`$, $`j_1=\mu `$, $`j_2=h_1`$, … By inserting Eq. (11) into Eq. (6) one has the following expansion for $`J_N(d;\xi _\times ,\infty )`$: $$J_N(d;\xi _\times ,\infty )\approx 2N\sum _{n=0}^{\infty }j_nK_{N-1}\left(d-2n+1\right),$$ (12) with $$K_N(\alpha )=\int _{\xi _\times }^{\infty }\xi ^\alpha \stackrel{~}{\mathrm{\Gamma }}_t^N(\xi )\left[1-\stackrel{~}{\mathrm{\Gamma }}_t(\xi )\right]d\xi .$$ (13) By means of the substitution $$\stackrel{~}{\mathrm{\Gamma }}_t(\xi )=e^{-z},$$ (14) we get a more convenient expression for $`K_N(\alpha )`$: $$K_N(\alpha )=\int _0^{z_\times }e^{-Nz}\left(e^z-1\right)\xi ^\alpha \frac{d\xi }{dz}\,dz,$$ (15) where, from Eq. (8), $`z_\times \sim \mathrm{ln}N/N`$.
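The scaling of $`z_\times `$ follows at once from the substitution (14). Writing condition (8) in terms of the new variable, $$\stackrel{~}{\mathrm{\Gamma }}_t^N(\xi _\times )=e^{-Nz_\times }=N^{-p}\hspace{1em}\text{gives}\hspace{1em}z_\times =\frac{p\,\mathrm{ln}N}{N}\sim \frac{\mathrm{ln}N}{N},$$ so that for the choice $`p=2`$ the upper integration limit in Eq. (15) shrinks as $`2\mathrm{ln}N/N`$ when $`N`$ grows.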
The integral in (15) is of Laplace type but it is not possible to use Watson’s lemma directly to get its asymptotic behavior because $`\xi ^\alpha (d\xi /dz)`$ has a logarithmic singularity at $`z=0`$ . The evaluation of $`K_N(\alpha )`$ requires the inversion of (14) to obtain $`\xi (z)`$. By using Eqs. (4) and (14) we get $$-\frac{d}{2}\xi ^2+\ln A-\mu \ln \xi ^2+\ln \left(1+\sum _{n=1}^{\infty }h_n\xi ^{-2n}\right)=\ln \left(1-e^{-z}\right).$$ (16) The function $`\xi (z)`$ can be readily obtained from this equation to first approximation: Notice that, as long as $$\xi ^2\gg |\ln A|,$$ (17) the left-hand side of Eq. (16) can be approximated by $`-d\xi ^2/2`$, so that the first-order solution to Eq. (16) is $`\xi ^2(z)\approx -2\ln [1-\mathrm{exp}(-z)]/d`$. Equation (16) can be systematically solved in order to get higher-order approximations (see Appendix). The result is $$\xi =x^{-1/2}\sum _{n=0}^{\infty }\delta _nx^n,$$ (18) where $`x=-(d/2)/\ln [1-\mathrm{exp}(-z)]`$. The substitution of Eq. (18) into Eq. (15) (see Eq. (47) in the Appendix) yields $$K_N(\alpha )=\sum _{n=0}^{\infty }\sum _{m=0}^{n}\frac{2^{(\alpha -1)/2}}{d^{(\alpha +1)/2}}\,k_m^{(n)}\,I\left(\frac{\alpha }{2}-n-\frac{1}{2},m;N\right),$$ (19) where $$I(n,m;N)\equiv \int _0^{z_\times }dz\,e^{-Nz}(-\ln z)^n\ln ^m(-\ln z).$$ (20) The evaluation of $`I(n,m;N)`$ for $`N\to \infty `$ has been discussed in . For the sake of completeness, we give here explicitly their expressions up to the order required to find $`S_N(t)`$ to second order in $`1/\ln N`$: $$I(n,0;N)\approx \frac{1}{N}\ln ^nN\left[1+\frac{n\gamma }{\ln N}+\frac{n(n-1)}{2}\frac{\gamma ^2+\pi ^2/6}{\ln ^2N}+\mathrm{\dots }\right],$$ (21) $$I(n,1;N)\approx \frac{1}{N}\ln ^nN\left[\ln \ln N\left(1+\frac{n\gamma }{\ln N}\right)+\frac{\gamma }{\ln N}+\mathrm{\dots }\right],$$ (22) $$I(n,2;N)\approx \frac{1}{N}\left(\ln ^nN\right)\ln ^2\ln N+\mathrm{\dots }$$ (23) where $`\gamma \approx 0.577215`$ is the Euler constant. Using these results we get from Eqs. (5), (12) and (19) the following expansion for the average number of distinct sites visited on a Euclidean lattice of dimension $`d`$: $$S_N(t)\approx \widehat{S}_N(t)(1-\mathrm{\Delta })$$ (24) with $$\widehat{S}_N(t)=v_0\left(4Dt\ln N\right)^{d/2},$$ (25) $$\mathrm{\Delta }\equiv \mathrm{\Delta }(N,t)=\frac{1}{2}\sum _{n=1}^{\infty }\ln ^{-n}N\sum _{m=0}^{n}s_m^{(n)}\ln ^m\ln N$$ (26) and where, up to second order ($`n=2`$), $$s_0^{(1)}=-d\omega ,$$ (27) $$s_1^{(1)}=d\mu ,$$ (28) $$s_0^{(2)}=d\left(1-\frac{d}{2}\right)\left(\frac{\pi ^2}{12}+\frac{\omega ^2}{2}\right)-d\left(\frac{dh_1}{2}-\mu \omega \right),$$ (29) $$s_1^{(2)}=d\left(1-\frac{d}{2}\right)\mu \omega -d\mu ^2,$$ (30) $$s_2^{(2)}=\frac{d}{2}\left(1-\frac{d}{2}\right)\mu ^2.$$ (31) Here $`\omega =\gamma +\ln A+\mu \ln (d/2)`$, and $`A`$, $`\mu `$ and $`h_1`$ are given in Table I for $`d=1`$, $`2`$ and $`3`$.
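The truncated expansion (24)–(31) is straightforward to evaluate numerically once the constants of Table I are supplied. The sketch below is ours and should be read with two caveats: the sign conventions follow our reconstruction of Eqs. (27)–(31), and the arguments A, mu, h1 are placeholders for the Table I values (with $`A`$ evaluated at the time of interest for $`d=2`$, $`3`$), which are not reproduced here:

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def delta2(N, d, A, mu, h1):
    """Delta(N,t) of Eq. (26) truncated at second order, with Eqs. (27)-(31).

    A, mu, h1 are the lattice-dependent constants of Table I
    (A is time dependent for d = 2, 3), supplied by the caller.
    """
    w = EULER_GAMMA + np.log(A) + mu * np.log(d / 2.0)      # omega
    s01, s11 = -d * w, d * mu
    s02 = d * (1 - d / 2.0) * (np.pi ** 2 / 12.0 + w ** 2 / 2.0) \
        - d * (d * h1 / 2.0 - mu * w)
    s12 = d * (1 - d / 2.0) * mu * w - d * mu ** 2
    s22 = (d / 2.0) * (1 - d / 2.0) * mu ** 2
    L, LL = np.log(N), np.log(np.log(N))
    return 0.5 * ((s01 + s11 * LL) / L + (s02 + s12 * LL + s22 * LL ** 2) / L ** 2)

def S_N_second_order(N, t, d, A, mu, h1, D, v0):
    """Eq. (24): S_N(t) ~ v0 (4 D t ln N)^{d/2} (1 - Delta)."""
    return v0 * (4.0 * D * t * np.log(N)) ** (d / 2.0) * (1.0 - delta2(N, d, A, mu, h1))
```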
Notice that the time dependence of $`\mathrm{\Delta }(N,t)`$ comes from the term $`\omega `$ through the function $`A(t)`$. However, this function does not depend on time for the one-dimensional case and thus $`\mathrm{\Delta }`$ only depends on $`N`$. Recently, Sastry and Agmon have found an approximate formula for $`S_N(t)`$ for the one-dimensional case. The straightforward method used by these authors is based on the fact that the function $`\Gamma _t^N(r)`$ that appears in the integrand of Eq. (2) approaches a step function when $`N\to \infty `$. In this way they found $$S_N(t)\approx 4\sqrt{Dt}\sqrt{\ln N-\ln \sqrt{\alpha \ln N}}$$ (32) $$\approx 4\sqrt{Dt\ln N}\left[1-\frac{1}{4}\frac{\ln \ln N+\ln \alpha }{\ln N}+\mathcal{O}\left(\frac{\ln ^2\ln N}{\ln ^2N}\right)\right]$$ (33) where $`\alpha `$ is given by $`\alpha =\pi \mathrm{exp}(-2/\pi )\approx 1.66`$. It is instructive to compare this formula with the first-order approximation of Eq. (24) for the one-dimensional case: $$S_N(t)\approx 4\sqrt{Dt\ln N}\left[1-\frac{1}{4}\frac{\ln \ln N-2\omega }{\ln N}+\mathcal{O}\left(\frac{\ln ^2\ln N}{\ln ^2N}\right)\right].$$ (34) Note that the prefactor $`4(Dt\ln N)^{1/2}`$ of the formula of Sastry and Agmon is in agreement with that of Eq. (34). In Ref. , they found it “amusing” that the value $`\alpha =1`$ produces very good agreement between the approximation of Eq. (32) and the exact numerical integration. Our Eq. (34) enlightens this point: Comparing Eqs. (33) and (34) one sees that $`\ln \alpha `$ is playing the role of $`-2\omega `$. But $`\omega =\gamma -\frac{1}{2}\ln \pi =0.0048507\mathrm{\dots }`$ for the one-dimensional lattice, so that $`\ln \alpha =0`$, i.e., $`\alpha =1`$, leads to a good approximation to the rigorous coefficient $`-2\omega `$. The equation of Sastry and Agmon for $`\alpha =1`$ and our first-order approximation should thus be very close. This is clearly confirmed in Fig. 4. A question to be answered is why Eq. (24) is valid for time regime II only, i.e., why it is not always valid for arbitrarily large values of time. The reason is that our formulas have been obtained by assuming that the condition (17) holds for those values of $`\xi `$ which are inside the integration interval $`[\xi _\times ,\infty ]`$ of the relevant integral $`J_N(d;\xi _\times ,\infty )`$ that is responsible for the asymptotic behavior of $`S_N(t)`$. This implies that for our procedure to work, it is necessary that $`\xi _\times ^2\gg |\ln A|`$ or, from Eq. (9), that $$\ln N\gg |\ln A|.$$ (35) Thus we can estimate the time $`\tau _\times `$ for which our method breaks down by solving $`|\ln A(\tau _\times )|\sim \ln N`$. From the expressions for $`A`$ quoted in Table I one finds that $`\tau _\times \sim e^N`$ for $`d=2`$ and $`\tau _\times \sim N^2`$ for $`d=3`$. For $`d=1`$ and large $`N`$, the condition (35) always holds because $`A=(2/\pi )^{1/2}`$ is a constant and then $`\tau _\times =\infty `$. We see that the upper times $`\tau _\times `$ beyond which Eq. (24) is no longer valid coincide with the crossover times $`t_\times ^{}`$ defined in Sec. I, so we can say that the expressions for $`S_N(t)`$ given in this paper are valid only in the time regime II.
This means that our procedure marks its own limit of validity as that of regime II, thus predicting the existence of a crossover time in a natural way, i.e., as a consequence of the mathematical formalism. ## III GEOMETRIC PROPERTIES OF THE EXPLORED REGION In this section we will give a geometric interpretation of the main result of this paper, namely, Eq. (24). The quantity $`S_N(t)`$ is by definition the volume of the region $`\mathrm{\Omega }`$ explored by $`N`$ random walkers after a time $`t`$ from their initial deposition on a given site of the lattice (if the length of the lattice bonds is taken as the unit). For very short times (regime I or $`t\lesssim \ln N`$) the exploration is performed in a compact way because all the neighbor sites of any visited site are always visited at the next time step. Therefore, the explored region $`\mathrm{\Omega }`$ is a hypersphere whose radius grows ballistically and its volume is proportional to $`t^d`$. After the regime II is reached, the development of two qualitatively different zones in the explored volume is observed: (i) a hyperspherical compact core of visited sites, and (ii) a corona of dendritic nature characterized by filaments created by those relatively few walkers that are wandering in the outer regions, i.e., wandering at distances significantly larger than the root-mean-square displacement $`\langle r^2\rangle ^{1/2}=\sqrt{2dDt}`$ of a single walker. Figure 2 shows a snapshot of the set of sites visited by $`N=1000`$ random walkers at time $`t=900`$ (every walker makes a jump at each time unit) for dimension two. The visited sites are in white and the inner black and outer white circles delimit the corona. The radius $`R_+`$ of the outer circle is equal to the maximum displacement from the origin reached by any of the walkers at time $`t`$. It has been argued in that the volume of this outer circle is on average given by the main term of (24), i.e., by $`\widehat{S}_N(t)=v_0(4Dt\ln N)^{d/2}`$. From this statement we can draw two conclusions: First, that the average radius of this outer circle is $$R_+\approx \left(4Dt\ln N\right)^{1/2},$$ (36) and second, that the asymptotic corrective terms (given by $`\mathrm{\Delta }`$) to $`S_N(t)`$ account for the number of unvisited sites that are inside the corona. In other words, $`\mathrm{\Delta }`$ is the fraction of the volume inside the external circumference that has not been visited by any of the $`N`$ random walkers. This result can be used to easily estimate that the thickness of the dendritic corona is approximately given by $`R_+\mathrm{\Delta }`$. It is also noteworthy that $`\mathrm{\Delta }(N,t)`$ depends on $`t`$ very smoothly in the time regime II as this dependence is due to terms proportional to powers of $`\ln A(t)`$ \[and $`A(t)`$ does not change exponentially: see Table I\]. For the two-dimensional case, this statement is especially valid because $`A(t)\sim 1/\ln t`$. Therefore, the ratio (given by $`\mathrm{\Delta }`$) between the radial size of the corona of $`\mathrm{\Omega }`$ and the radial size of $`\mathrm{\Omega }`$ itself remains almost constant throughout the time regime II. This implies that a conveniently scaled sequence of snapshots of the set of visited sites should be very similar (in a statistical sense), i.e., we find that $`\mathrm{\Omega }`$ grows, to a large extent, in a self-similar way inside time regime II. This property is illustrated in Fig. 3. As Eq. (36) shows, the appropriate scale factor must be proportional to $`\sqrt{t}`$.
This “almost” self-similar behavior disappears as the regime III is approached because the correction to the main term of $`S_N(t)`$ becomes as large as this main term, i.e., because $`\mathrm{\Delta }(N,t)`$ approaches the value 1. This transition takes place when $`t\sim \tau _\times `$ as follows from (26), i.e., this value coincides with the threshold for regime III deduced in the previous section. From the geometric point of view this transition corresponds to the breaking of the self-similar growing behavior by the appearance of a corona of filaments as large as the compact core, which finally gives rise to a set of separated trails that (almost) never overlap again. For the two-dimensional case the transition time from regime II to regime III is so great for any significant number of walkers that it cannot be studied by numerical simulation. ## IV NUMERICAL RESULTS We carried out numerical simulations for the number of distinct sites visited by $`N=2^m`$, with $`m=0,1,\mathrm{\dots },14`$ in two and three dimensions. For the one-dimensional case it is not necessary to carry out simulations because the survival probability is exactly known on this lattice, $`\Gamma _t(\mathbf{r})=\text{erf}\left(\xi /\sqrt{2}\right),`$ and therefore the integral for $`S_N(t)`$ as given by Eq. (3) can be computed numerically. In our simulations, the random walkers are placed initially at the center of a hypercubic box of side $`L`$. The regime II is reached almost immediately with the number of random walkers we have used ($`t_\times \approx 10`$ for $`N=2^{14}`$). The simulations were carried out only to a maximum time $`t=200`$ which is sufficient for the stabilization of regime II conditions. The square box side for $`d=2`$ was taken to be $`L=400`$ to avoid any random walker reaching the edge of the box before the maximum time $`t=200`$. Memory limitations forced us to reduce the box side to $`L=200`$ for the three-dimensional case. While this implies a possible appearance of finite-size effects, we can consider them to be negligible because the average displacement of the random walkers at the maximum time is small compared with $`L/2`$. Each experiment was repeated $`10^4`$ times in order to achieve reasonable statistics. Results are plotted in Fig. 4 for one, two and three dimensions. The dots are the simulation results (numerical integration results in one dimension) and the broken and solid lines are the prediction of Eq. (24) to first and second order, respectively. The crosses are the results of Sastry and Agmon given by Eq. (32) with $`\alpha =1`$. The dotted lines correspond to the result of Larralde et al. given by Eq. (1) using the correct amplitude of the main term (see ). The quantity plotted is $$\mathbf{S}\equiv \frac{1}{d}\left[\frac{S_N}{\widehat{S}_N}\right]^{2/d},$$ (37) versus $`1/\ln N`$. From Eq. (24) one sees that the theoretical prediction for this quantity is $`\mathbf{S}\approx (1/d)(1-\mathrm{\Delta })^{2/d}`$. The agreement between the second-order approximation and the simulations is found to be excellent for $`N\gtrsim 100`$. Good agreement for lower values of $`N`$ would be expected if higher-order terms in the series were included. The importance of the corrective terms is evident. For example, for the one-dimensional case, we would need to use values of $`N`$ as large as $`10^{25}`$ in order to obtain the same precision with the main term as we get with the main and two corrective terms for values of $`N`$ as small as $`2^6`$. Similar statements can be made for the other lattices, as Fig. 4 shows.
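A stripped-down version of the two-dimensional simulation just described (our sketch; it uses far fewer realizations than the $`10^4`$ quoted above, and evaluates the main term with $`v_0=\pi `$ and $`D=1/4`$ for the unit-step square lattice):

```python
import numpy as np

rng = np.random.default_rng(0)

def distinct_sites_2d(N, t_max, L=400, runs=50):
    """Average number of distinct sites visited by N walkers on the square lattice.

    All walkers start at the center of an L x L box; each makes one
    nearest-neighbor jump per time unit.
    """
    steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    S = np.zeros(t_max)
    for _ in range(runs):
        pos = np.full((N, 2), L // 2)
        visited = {(L // 2, L // 2)}
        for t in range(t_max):
            pos += steps[rng.integers(0, 4, size=N)]
            visited.update(map(tuple, pos))
            S[t] += len(visited)
    return S / runs

S = distinct_sites_2d(N=2 ** 10, t_max=200)
main_term = np.pi * 4 * 0.25 * 200 * np.log(2 ** 10)   # v0 (4 D t ln N)^{d/2} with d = 2
print(S[-1], main_term)   # the simulation lies below the main term by the factor (1 - Delta)
```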
## V REMARKS In this paper we have developed a method for calculating the mean number of distinct sites visited by $`N`$ independent random walkers on Euclidean lattices. The method allows the systematic calculation of the main and corrective asymptotic terms to any order for large $`N`$. These corrective terms are generally non-negligible as they (essentially) decay as powers of $`1/\ln N`$. However, we found that the main and first two corrective terms lead to reasonably good results when relatively small values of $`N`$ are used (say, for $`N\gtrsim 2^7`$). In Sec. III we proposed a geometric meaning for the main and corrective terms: the main term would account for the volume of the set of visited sites if the exploration of the random walkers were compact, and the corrective terms just improve this rough estimate because, in the outer regions, the exploration performed by the (relatively few) random walkers that move there is really not compact, thus leading to the formation of a non-compact (dendritic) external ring in the set of visited sites. We hope the above results and ideas could serve as a basis to gain insight into problems with interacting random walkers. The method developed here for calculating $`S_N(t)`$ is also useful for evaluating other statistical quantities related to the diffusion of a set of independent random walkers. An example is the number $`S_{N+}(t)`$ of sites visited by $`N`$ random walkers on a one-dimensional lattice along a given direction . It turns out that the moments (of arbitrary order) of $`S_{N+}(t)`$ can be readily obtained through a slight modification of Eq. (24). Another example is the first passage time $`t_{1,N}(r)`$ to a distance $`r`$ of the first random walker of a set of $`N`$. First passage times are relevant statistical quantities in the study of diffusion processes where the arrival of the first particles at a given site produces a significant effect (a “trigger” effect). These quantities have been calculated for one dimension (and for some classes of fractals ) but little is known for dimensions greater than one . The approximate compact form of the set of visited sites allows one to estimate the first passage time via the relation $`S_N(t_{1,N}(r))\approx v_0r^d`$ , which means geometrically that we consider the region inside the hypersphere of radius $`r`$ where a random walker has arrived by time $`t_{1,N}(r)`$ as completely visited (a compact exploration in the sense of de Gennes ). Results on $`S_{N+}(t)`$ and $`t_{1,N}(r)`$ obtained using the above ideas will be reported elsewhere. The function $`S_N(t)`$ we have studied is indeed an important quantity concerning the diffusion of $`N`$ independent random walkers but there are still many open questions in this problem. One can think, for example, of the absorption probability of the set of $`N`$ random walkers on a lattice with a random distribution of point-like traps. This problem can be formulated in terms of the moments of the number of distinct sites visited by the set of $`N`$ walkers. A prediction for the variance of the number of visited sites is a necessary prerequisite to tackle this interesting problem as the first-order approximation based only on its first moment, i.e., on $`S_N(t)`$, seems to be very imprecise . As no relationship is known for moments of order higher than one, the absorption problem remains unsolved. Finally, it should be pointed out that the expression for $`S_N(t)`$ given in this paper can be extended to fractal media with some slight changes.
We are currently running simulations for deterministic (Sierpinski gasket) and stochastic (percolation aggregate) fractals. Results for these substrates will be published elsewhere. ###### Acknowledgements. Partial support from the DGICYT (Spain) through grant no. PB97-1501 and from the Junta de Extremadura-Fondo Social Europeo through grant IPR99C031 is acknowledged. ## APPENDIX We will show in this Appendix how to get Eq. (19) from Eq. (15). Let us start by showing that the solution $`\xi (z)`$ of Eq. (16) for $`z\to 0`$ has the form given in Eq. (18). For simplicity of notation we will write $`u=\xi ^{-2}`$, $`\varphi =1-\mathrm{exp}(-z)`$ and $`c=d/2`$. Hence, Eq. (16) takes the form $$-\frac{c}{u}+\mu \ln u+\ln A+\ln \left(1+\sum _{n=1}^{\infty }h_nu^n\right)=\ln \varphi .$$ (38) In the limit $`z\to 0`$, it is clear that $`u\to 0`$ and $`\varphi \to 0`$. This means that, as long as $`1/u\gg |\ln A|`$, the first term on the left-hand side of (38) is the most divergent one so that, as a first approximation, we have $$u\approx -\frac{c}{\ln \varphi }\equiv x.$$ (39) This first-order approximation was already obtained in Sec. II \[see below Eq. (17)\]. A better approximation is achieved by writing $`u=x(1+ϵ)`$, with $`ϵ`$ a small quantity. The substitution of this expression in (38) yields $$ϵ-ϵ^2+\frac{\mu x}{c}\ln x+\frac{x}{c}\ln A+\frac{\mu x}{c}ϵ-\frac{\mu x}{2c}ϵ^2+\frac{h_1x^2}{c}+\frac{h_1x^2ϵ}{c}+\mathrm{\dots }=0,$$ (40) where (39) has been taken into account. This equation can be solved by writing $`ϵ`$ as $$ϵ=\sum _{n=1}^{\infty }ϵ_nx^n,$$ (41) and inserting it in (40). We thus find the following values for $`ϵ_n`$ up to $`n=2`$: $$ϵ_1=-\frac{1}{c}\ln \left(Ax^\mu \right),$$ (42) $$ϵ_2=\frac{1}{c^2}\ln ^2\left(Ax^\mu \right)+\frac{\mu }{c^2}\ln \left(Ax^\mu \right)-\frac{h_1}{c}.$$ (43) Therefore $$\xi (z)=u^{-1/2}=x^{-1/2}\left(1+ϵ\right)^{-1/2}=x^{-1/2}\sum _{n=0}^{\infty }\delta _nx^n,$$ (44) where $`\delta _0=1`$ and $$\delta _1=\frac{\ln \left(Ax^\mu \right)}{2c},$$ (45) $$\delta _2=-\frac{1}{8c^2}\left[\ln ^2\left(Ax^\mu \right)+4\mu \ln \left(Ax^\mu \right)-4ch_1\right].$$ (46) The evaluation of the integral for $`K_N(\alpha )`$ in (15) requires the expression of $`\xi ^\alpha d\xi /dz`$ as a function of $`z`$. From Eq. (44) and taking into account that $`d\xi /dz=(d\xi /dx)(dx/dz)`$ and $`dx/dz=[x^2/(c\varphi )]d\varphi /dz`$, we find that $$\xi ^\alpha \frac{d\xi }{dz}=-\frac{1}{2c\varphi }\frac{d\varphi }{dz}x^{(1-\alpha )/2}\left[1+\sum _{n=1}^{\infty }x^n\sum _{m=0}^{n}\widehat{k}_m^{(n)}\ln ^m\left(Ax^\mu \right)\right],$$ (47) where the coefficients $`\widehat{k}_m^{(n)}`$, $`m=0,\mathrm{\dots },n`$ for $`n=1`$, $`2`$ are $$\widehat{k}_0^{(1)}=-\frac{\mu }{c},$$ (48) $$\widehat{k}_1^{(1)}=\frac{\alpha -1}{2c},$$ (49) $$\widehat{k}_0^{(2)}=\frac{(\alpha -3)h_1}{2c}+\frac{\mu ^2}{c^2},$$ (50) $$\widehat{k}_1^{(2)}=\frac{\mu (2-\alpha )}{2c^2},$$ (51) $$\widehat{k}_2^{(2)}=\frac{\alpha (\alpha -4)+3}{8c^2}.$$ (52) Let us use $`\widehat{K}_N(\alpha ,z)`$ to denote the integrand of Eq. (15), i.e., $$K_N(\alpha )=\int _0^{z_\times }\widehat{K}_N(\alpha ,z)\,dz.$$ (53) Then, from Eq.
(47), $$\widehat{K}_N(\alpha ,z)=\frac{1}{2c}e^{-Nz}e^{-z}x^{(1-\alpha )/2}\left[1+\sum _{n=1}^{\infty }x^n\sum _{m=0}^{n}\widehat{k}_m^{(n)}\ln ^m\left(Ax^\mu \right)\right].$$ (54) Writing $`e^{-z}=1+\mathcal{O}(z)`$, $`x=-(c/\ln z)[1+\mathcal{O}(z/\ln z)]`$ and $`\ln (Ax^\mu )=\ln A-\mu \ln (-\ln z)+\mu \ln c+\mathcal{O}(z/\ln z)`$, Eq. (54) becomes $$\widehat{K}_N(\alpha ,z)=[1+\mathcal{O}(z)]\frac{1}{2c^{(\alpha +1)/2}}e^{-Nz}(-\ln z)^{(\alpha -1)/2}\sum _{n=0}^{\infty }\sum _{m=0}^{n}k_m^{(n)}(-\ln z)^{-n}\ln ^m(-\ln z)$$ (55) where the coefficients $`k_m^{(n)}`$ up to second order ($`n=2`$) are $$k_0^{(1)}=(\alpha -1)\frac{\omega }{2}-\mu ,$$ (56) $$k_1^{(1)}=(1-\alpha )\frac{\mu }{2},$$ (57) $$k_0^{(2)}=(3-\alpha )(1-\alpha )\frac{\omega ^2}{8}+\mu (2-\alpha )\omega +\mu ^2+\frac{h_1c}{2}\left(\alpha -3\right),$$ (58) $$k_1^{(2)}=\mu \left[(\alpha -2)\mu +(\alpha -3)(1-\alpha )\frac{\omega }{4}\right],$$ (59) $$k_2^{(2)}=\frac{\mu ^2}{8}(\alpha -3)(\alpha -1),$$ (60) and $`\omega =\gamma +\ln A+\mu \ln c`$. Finally, inserting Eq. (55) into Eq. (53) we get Eq. (19). It should be noted that we have approximated the factor $`1+\mathcal{O}(z)`$ of Eq. (55) by 1. This can be done safely because the contribution of the neglected terms to the asymptotic behavior of $`K_N(\alpha )`$ decays at least as $`(\ln N)^{(\alpha -1)/2}/N^2`$, i.e., decays to zero faster than the contribution of the retained terms by (roughly) a factor $`N`$ \[see Eqs. (19)-(23)\].
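As a closing consistency check of the Appendix, Eq. (14) can be inverted exactly in one dimension through the inverse error function and compared with the truncated series (44)–(45). In the sketch below (ours), $`A=(2/\pi )^{1/2}`$ is the constant quoted earlier, while $`\mu =1/2`$ is our assumed reading of Table I:

```python
import numpy as np
from scipy.special import erfinv

c, A, mu = 0.5, np.sqrt(2.0 / np.pi), 0.5   # d = 1; mu = 1/2 is our assumed Table I value

for z in (1e-4, 1e-3, 1e-2):
    x = -c / np.log(1.0 - np.exp(-z))             # Eq. (39)
    delta1 = np.log(A * x ** mu) / (2.0 * c)      # Eq. (45)
    xi_series = x ** -0.5 * (1.0 + delta1 * x)    # Eq. (44) truncated at n = 1
    xi_exact = np.sqrt(2.0) * erfinv(np.exp(-z))  # exact inversion of Gamma(xi) = e^{-z}
    print(z, xi_exact, xi_series)
```

Already at $`z\sim 10^{-4}`$ the two-term series reproduces the exact inversion to about one part in $`10^3`$, which supports the sign assignments used above.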
# ASCA and BeppoSAX observations of the peculiar X–ray source 4U1700+24/HD154791 ## I Introduction In optical astronomy the identification of a binary system comes in most cases from the observation of photometric and/or radial velocity variations. As not all X-ray binaries have known optical counterparts, a further effective criterion in galactic X–ray astronomy for the identification of a binary system with an accreting compact object was often based on the observed X–ray luminosity. For X–ray binaries harbouring a neutron star or possibly a black hole, luminosities $`L_X`$ of the order of $`10^{34}`$–$`10^{35}`$ erg s$`^{-1}`$ are easily reached. The diagnosis of the presence of a neutron star in most cases is directly confirmed by the observation of pulsations or thermonuclear bursts, apart from bright persistent Low Mass X–Ray Binaries (LMXRBs). X–ray binaries harbouring white dwarfs also show some distinctive features. As an example, in polars and intermediate polars optical and UV observations often reveal the distinctive signatures of the presence of a white dwarf in the system. Orbital periods and light curves also add unambiguous and reliable evidence of the presence of white dwarfs in this class of X–ray binaries. For a number of X–ray sources the identification of a class or even the diagnosis of binarity is rather difficult, especially when the observed X–ray luminosity is $`\lesssim 10^{33}`$ erg s$`^{-1}`$. 4U1700+24/HD154791 belongs to this class. The optical counterpart was identified by Garcia et al. garcia as a late type giant on the basis of the positional coincidence with a HEAO1–A3 error box. The optical spectrum of this giant looks quite normal ddf ; garcia , even if Gaudenzi and Polcaro gaudenzi find some interesting and variable features in its spectrum. Variable UV line emission was detected ddf ; garcia in different IUE pointings, showing at last some unusual features in the emission from this otherwise normal giant. These high excitation lines are likely linked to the same mechanism that produces the observed X–ray emission. In spite of various attempts, no evidence of a binary orbit was obtained from radial velocity analysis of optical spectra. The X–ray source shows extreme erratic variability, but no pulsations were detected. The rapid (10–1000 s) time variability is strongly suggestive of turbulent accretion, often observed in X–ray binaries. The X–ray spectrum is rather energetic and was measured up to 10 keV. The X–ray luminosity $`L_X\sim 10^{33}`$ erg s$`^{-1}`$ at an assumed distance of 730 pc garcia may be marginally consistent with coronal emission, even if an evolved giant is not expected to be a strong X-ray emitter. Therefore the picture emerging from observations gives only hints in favour of a binary system, given that no “classical” feature to be associated to the presence of a compact object was found. We have observed this source for $``$15 years both with X–ray satellites (EXOSAT, ASCA and BeppoSAX ) and with ground-based optical observations from the Loiano 1.5 m and 0.6 m telescopes of the Bologna Astronomical Observatory. Here we report on the ASCA and BeppoSAX observations, performed respectively on March 8, 1995 and on March 27, 1998. We also report on photometric optical UBVRI monitoring. ## II Observations In Figure 1 we show the observed 1.5–9 keV count rate from the GIS2 and GIS3 instruments on board ASCA and the 1.5–10 keV observed count rate from the MECS2 and MECS3 instruments on board BeppoSAX .
A clear increase of the 4U1700+24 count rate was detected in November 1997 by RXTE/ASM (http://space.mit.edu/XTE/ASM_lc.html). The BeppoSAX observation was performed approximately five months after this event, when the source had already recovered its quiescent flux. The substantial erratic variability already detected with EXOSAT ddf is clearly present also in both observations. The source flux in the BeppoSAX observation is significantly lower than that in the ASCA observation. The erratic source variability is clearly visible in the Power Spectral Density (PSD) shown in Figure 2, calculated on the time series of GIS2 and GIS3 count rates binned at 0.1 s. The spectra were calculated for runs with a typical duration of 3000 s. The PSD shown in Figure 2 is obtained by averaging the spectra of different runs and by summing adjacent frequencies with a logarithmic rebinning. The observed X–ray source luminosity (2–10 keV) was $`L_X=1.7\times 10^{33}`$ erg s$`^{-1}`$ in the ASCA observation and $`L_X=6\times 10^{32}`$ erg s$`^{-1}`$ in the BeppoSAX observation assuming a distance of $``$700 pc garcia . The X–ray energy spectrum cannot be fitted by simple single component models. The high energy ($`>`$2 keV) spectrum can be fitted by an absorbed thermal continuum, but the extrapolation of such a model at lower energies lies significantly below the measured spectrum, both in ASCA and in BeppoSAX observations. For a thermal model, similar to that used for the low luminosity source $`\gamma `$ Cas (a suspected Be/white dwarf binary kubo ; owens ), the addition of a complex absorber (e.g. a partial absorber) is needed to model the low energy part of the spectrum. The lack of an Fe emission line, however, requires a very low Fe abundance. As an example, the count rate spectra from the ASCA and BeppoSAX observations fitted with an optically thin thermal bremsstrahlung continuum with a partial absorber (“bremss” and “pcfabs” models in XSPEC) are shown in Figure 3. The BeppoSAX spectrum is softer than that observed with ASCA , and very similar to that observed with EXOSAT ddf at almost exactly the same flux level as the BeppoSAX observation. Optical observations were performed at the Loiano 1.5 m telescope of the Bologna Astronomical Observatory during the last 15 years. HD154791, the optical counterpart of the X–ray source, is an M2–M3 giant garcia ; gaudenzi with a rather normal optical spectrum. A simple comparison with M1–M3 III templates shows a close match with the M2 template of HD104216. In Figure 4 we report the long-term UBVRI photometry of HD154791. No clear long-term trend is visible. Some variability is present, in particular in the U measurements, that may be intrinsic to the source. The long-term spectral/photometric monitoring of the source is continuing. ## III Discussion The observations we report still cannot be used to perform a “classical” diagnosis of binarity. We nevertheless note some interesting similarities with other low luminosity X–ray sources. In particular, some interesting similarities can be found with the X–ray emission from $`\gamma `$ Cas. The power spectrum is strikingly similar and the energy spectrum shows a similar shape, even if no iron line is detected in 4U1700+24 . However this close resemblance of the properties of the X–ray emission does not help to determine the presence of a compact object in a binary system, as for $`\gamma `$ Cas itself the diagnosis of binarity is not completely assessed. In fact Owens et al.
owens favour the hypothesis of a WD binary, but a completely different point of view is based on recent UV/X–ray observations of $`\gamma `$ Cas smith . Smith et al. support the hypothesis that the X–ray emission of $`\gamma `$ Cas comes from continuous flaring from the Be star. This hypothesis cannot be easily adapted to the case of 4U1700+24/HD154791 , as the much colder M giant star should not be expected to have strong and persistent X–ray flaring activity. If this is the case, i.e., if the observed X–ray emission from 4U1700+24 is coronal, HD154791 should be an exception in its own class. If the similarity of the properties of the X–ray emission from 4U1700+24 and $`\gamma `$ Cas comes from a common origin, we suggest that the WD binary hypothesis is much more natural and more easily satisfied. Acknowledgements. This research is supported by the Agenzia Spaziale Italiana (ASI) and the Consiglio Nazionale delle Ricerche (CNR) of Italy. BeppoSAX is a joint program of ASI and of the Netherlands Agency for Aerospace Programs (NIVR). The ASCA observation was performed as part of the joint ESA/Japan scientific program. CB, AG and AP acknowledge a grant from “Progetti di ricerca ex-quota 60%” of Bologna University.
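As an implementation note (ours, not the pipeline actually used for Figure 2): the PSD estimate described in Sec. II, periodograms of individual $``$3000 s runs of the 0.1 s binned light curve, averaged and then rebinned logarithmically in frequency, can be sketched in Python as follows:

```python
import numpy as np

def averaged_psd(rate, dt=0.1, seg_seconds=3000.0, nbins=30):
    """Average the periodograms of fixed-length runs, then rebin logarithmically.

    rate: count-rate series binned at dt seconds.
    """
    seg = int(seg_seconds / dt)
    nseg = len(rate) // seg
    psd = np.zeros(seg // 2)
    for i in range(nseg):
        chunk = rate[i * seg:(i + 1) * seg]
        spec = np.fft.rfft(chunk - chunk.mean())     # remove the mean (DC) level
        psd += np.abs(spec[1:seg // 2 + 1]) ** 2
    psd /= max(nseg, 1)
    freq = np.fft.rfftfreq(seg, d=dt)[1:seg // 2 + 1]
    edges = np.logspace(np.log10(freq[0]), np.log10(freq[-1]), nbins + 1)
    idx = np.digitize(freq, edges)
    keep = [k for k in range(1, nbins + 1) if np.any(idx == k)]
    fb = np.array([freq[idx == k].mean() for k in keep])   # log-binned frequencies
    pb = np.array([psd[idx == k].mean() for k in keep])    # log-binned power
    return fb, pb
```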
# Simple denoising algorithm using wavelet transform
Footnote 1: e-mail address for correspondence: ravi@che.ncl.res.in
Keywords: Noise, Discrete wavelet transform, Chaos, Differentiation
Application of wavelets and multiresolution analysis to reaction engineering systems from the point of view of process monitoring, fault detection, systems analysis etc. is an important topic and of current research interest (see Bakshi and Stephanopoulos, 1994; Safavi et al., 1997; Luo et al., 1998; Carrier and Stephanopoulos, 1998). In the present paper we focus on one such important application, where we propose a new and simple algorithm for the reduction of noise from scalar time series data. Presence of noise in a time–varying signal restricts one’s ability to obtain meaningful information from the signal. Measurement of correlation dimension can get affected by a noise level as small as 1% of the signal, making estimation of invariant properties of a dynamical system, such as the dimension of the attractor and Lyapunov exponents, almost impossible (Kostelich and Yorke, 1988). Noise in experimental data can also lead to misleading conclusions (Grassberger et al., 1991). A host of literature exists on various techniques for noise reduction (Kostelich and Yorke, 1988; Härdle, 1990; Farmer and Sidorowich, 1991; Sauer, 1992; Cawley and Hsu, 1992; Cohen, 1995; Donoho and Johnstone, 1995; Kantz and Schreiber, 1997). For instance, the Fast Fourier Transform (FFT) reduces noise effectively in those cases where the frequency distribution of noise is known (Kostelich and Yorke, 1988; Cohen, 1995; Kantz and Schreiber, 1997); singular value analysis methods (Cawley and Hsu, 1992) project the original time–series onto an optimal subspace, whereby noise components are left behind in the remaining orthogonal directions, etc. In the existing wavelet–based denoising methods (Donoho and Johnstone, 1995) two types of denoising are introduced: linear denoising and nonlinear denoising. In linear denoising, noise is assumed to be concentrated only on the fine scales and all the wavelet coefficients below these scales are cut off. Nonlinear denoising, on the other hand, treats noise reduction by either cutting off all coefficients below a certain threshold (so called ‘hard–thresholding’), or reducing all coefficients by this threshold (so called ‘soft–thresholding’). The threshold values are obtained by statistical calculations and have been seen to depend on the standard deviation of the noise (Nason, 1994). The noise reduction algorithm that we propose here makes use of the wavelet transform (WT) which in many ways complements the well-known Fourier Transform (FT) procedure. We apply our method, firstly, to three model flow systems, viz. Lorenz, Autocatalator, and Rössler systems, all exhibiting chaotic dynamics. The reasons for choosing these systems are the following: Firstly, all of them are simplified models of well–studied experimental systems. For instance, Lorenz is a simple realization of convective systems (Lorenz, 1963), while the Autocatalator and Rössler have their more complicated analogs in chemical multicomponent reactions (Rössler, 1976; Lynch, 1992). Secondly, chaotic dynamics is extremely nonlinear, highly sensitive, possesses only short–time correlations and is associated with a broad range of frequencies (Guckenheimer and Holmes, 1983; Strogatz, 1994).
Because of these properties it is well known that FT methods are not applicable in a straightforward way to chaotic dynamical systems (Abarbanel, 1993). On the other hand, WT methods are particularly suited to handle not only nonlinear but nonstationary signals (Strang and Nguyen, 1996). This is because the properties of the data are studied at varying scales with superior time localization analysis when compared to the FT technique. Our noise reduction algorithm is advantageous because, as shall be shown, the threshold level for noise is identified automatically. In this study, we have used the discrete analog of the wavelet transform (DWT) which involves transforming a given signal with orthogonal wavelet basis functions by dilating and translating in discrete steps (Daubechies, 1990; Holschneider, 1995). For study purposes we corrupt one variable $`x(t)`$ for each of these systems with noise of zero mean, and then apply our algorithm for denoising. We analyze the performance of this method in all the three systems for a wide range of noise strengths, and show its effectiveness. Importantly, we then validate the applicability of the method to experimental data obtained from two chemical systems. In one system the time series data was obtained from pressure fluctuation measurements of the hydrodynamics in a fluidized bed. In the other the conductivity measurements in a liquid surfactant manufacturing experiment were analyzed. Methodology The noise reduction algorithm based on DWT consists of the following five steps: Step 1: In the first step, we differentiate the noisy signal $`x(t)`$ to obtain the data $`x_d(t)`$, using the method of central finite differences with fourth order correction to minimize error (Constantinides, 1987), i.e., $$x_d(t)=\frac{dx(t)}{dt}.$$ (1) Step 2: We then take the DWT of the data $`x_d(t)`$ and obtain wavelet coefficients $`W_{j,k}`$ at various dyadic scales $`j`$ and displacements $`k`$. A dyadic scale is a scale whose numerical magnitude is equal to 2 (two) raised to an integer exponent, and is labeled by the exponent. Thus, the dyadic scale $`j`$ refers to a scale of magnitude $`2^j`$. In other words, it indicates a resolution of $`2^j`$ data points. Thus a low value of $`j`$ implies finer resolution while high $`j`$ analyzes the signal at coarser resolution. This transform is the discrete analog of the continuous WT (Holschneider, 1995), and is given by the formula $$W_{j,k}=\int _{-\infty }^{+\infty }x_d(t)\psi _{j,k}(t)\,dt,$$ (2) with $`\psi _{j,k}(t)=2^{-j/2}\psi (2^{-j}t-k)`$ where $`j,k`$ are integers. As for the wavelet function $`\psi (t)`$, we have chosen the Daubechies compactly supported orthogonal function with four filter coefficients (Daubechies, 1990; Press et al., 1996). Step 3: In this step we estimate the power $`P_j`$ contained in different dyadic scales $`j`$, via $$P_j(x)=\sum _{k=-\infty }^{+\infty }|W_{j,k}|^2\qquad (j=1,2,\mathrm{\dots })$$ (3) By plotting the variation of $`P_j`$ with $`j`$, we see that it is possible to identify a scale $`j_m`$ at which the power due to noise falls off rapidly. This is important because, as we shall see from the studies of the case examples, it provides a means for automation in the detection of the threshold. The identification of the scale $`j_m`$ at which the power due to noise shows the first minimum allows us to reset all $`W_{j,k}`$ up to scale $`j_m`$ to zero, i.e., $`W_{j,k}=0,`$ for $`j=1,2,\mathrm{\dots },j_m`$.
Step 4: In the fourth step, we reconstruct the denoised data $`\widehat{x}_d(t)`$ by taking the inverse transform of the coefficients $`W_{j,k}`$: $$\widehat{x}_d(t)=c_\psi \sum _{j=0}^{\infty }\sum _{k=-\infty }^{\infty }W_{j,k}\psi _{j,k}(t),$$ (4) where $`c_\psi `$ is a normalization constant given by $`c_\psi =1/\int _{-\infty }^{\infty }\frac{|\widehat{\psi }(\omega )|^2}{|\omega |}d\omega <\infty ,`$ with $`\widehat{\psi }(\omega )`$ as the Fourier transform of the wavelet function $`\psi (t)`$. Step 5: In the fifth and final step $`\widehat{x}_d(t)`$ is integrated to yield the cleansed signal $`\widehat{x}(t)`$: $$\widehat{x}(t)=\int \widehat{x}_d(t)\,dt.$$ (5) There exists a commutativity property between the operations of differentiation/integration and the wavelet transform. Therefore first differentiating the signal and then taking the DWT is equivalent to carrying out the two operations in reverse order. This implies that the same result can be obtained by switching the order between the first and second steps, and then between the fourth and fifth. The effectiveness of the method lies in the following observations. Upon differentiation, the contribution due to white noise moves towards the finer scales because the process of differentiation converts the uncorrelated stochastic process to a first-order moving average process and thereby distributes more energy to the finer scales. That the differentiation of white noise brings about this behavior is known in the Fourier spectrum (Box et al., 1994). It may be noted that the nature and effectiveness of the separation depend on the wavelet basis function chosen and also on the properties of the derivatives of the WT, which is in itself a highly interesting and not fully understood subject (Strang and Nguyen, 1996). For the signals studied in this paper, model as well as experimental, the wanted signal features lie in the coarser wavelet scales while the unwanted signal features after differentiation lie in the finer resolution wavelet scales. This is because the size of the data set handled decides the total number of scales available and a suitable choice can bring out the noisy signal WT features lying in the coarser scales. This justifies the assumption that fine scale features can be removed by setting the corresponding wavelet coefficients to zero and coarse scale features retained after differentiation. For this reason we also see a clear separation between the scales attributed to noise and those for the signal. A threshold scale for noise removal is thus identified and this leads to automation of the noise removal. Results and Discussion We first take up the three model systems and discuss the observations. For test purposes the pure signals obtained from these systems were corrupted with noise of a certain strength. For the systems chosen for study, viz., Lorenz, Autocatalator, and Rössler, Table 1 summarizes the details, i.e., the equations governing their dynamics, the values chosen for the parameters, and the nature of the evolution of these systems for these sets of parameter values. These values were chosen appropriately so that the dynamics is chaotic. In our initial studies, purely for testing purposes, we studied situations where we ensure that all scales are affected by noise. In the wavelet domain this can be conveniently carried out by perturbing the wavelet coefficients in the following way.
The differential equations are first numerically integrated to obtain the pure signal $`x^0(t_i)`$ at equidistant time steps $`t_i`$. We then take the DWT of the signal, and add white noise $`\eta `$ of zero mean and certain strength, i.e., $`W_{j,k}=W_{j,k}^0+\eta `$, where $`W_{j,k}^0,W_{j,k}`$ are the wavelet coefficients of the pure and noisy signals respectively. We take the strength of noise as the relative percentage of the difference between the maximum and minimum of the signal value. Since each coefficient $`W_{j,k}`$ is individually affected by the noise, this procedure ensures equal weightage for the presence of noise at all scales. Reconstructing the time series signal with this perturbed set of wavelet coefficients gave us the noisy signal to be cleansed. Our studies in this fashion did show that noise and signal separation was achieved. For the subsequent studies we followed the usual way of corrupting the signal by additive noise, i.e., $$x(t_i)=x^0(t_i)+\eta (t_i),$$ (6) where $`\eta (t_i)\in [-0.5,0.5]`$ is the noise with zero mean and uniform distribution, and $`J`$ the number of available dyadic scales. We have taken a data size of 16384 ($`=2^{14}`$) points for all these three systems, and so $`J=14`$. In Fig. 1 we plot our observations for the Lorenz system. Fig. 1 (a) shows the power at different scales in the pure signal $`x^0(t_i)`$. In Fig. 1 (b) we plot the scalewise power distribution after numerically differentiating the pure signal. We see that almost the entire power of the differentiated data is accumulated within the dyadic scales 4 and 9 (the signal power between scales 10 and 14 has disappeared by the process of differentiation). Fig. 1 (c) plots the scalewise power in the noisy signal $`x(t_i)`$ when the pure signal is infected with noise $`\eta (t_i)`$ of a typical strength of 5% of the signal (that is, 5% of the difference in the maximum and minimum values of $`x^0(t_i)`$). Because of the relatively larger contribution at all scales from the pure signal, compared to that from noise, it is impossible to distinguish between the two components, and the figure looks qualitatively very similar to Fig. 1 (a). However, when we plot the scalewise power distribution of the differentiated noisy data in Fig. 1 (d), the signal contribution can easily be identified and also compared with the plot in Fig. 1 (b). It is to be noted that the difference in the values of the two peaks in Figs. 1 (b) and (d) arises because the power is normalized by the respective total signal power. The contribution due to noise shows up in the finer scales. A clear minimum with a value close to zero separates out two distinct regions. Fig. 1 (e) exhibits a small segment of the signal after the noise has been successfully removed following the procedure outlined above. All the three signals – pure, noisy, and cleansed – are overlaid for the sake of comparison. In Fig. 2 we show the results for the Autocatalator and the Rössler reacting systems. Fig. 2 (a) and (c) plot, respectively, the scalewise power distribution for noise–infected signals obtained from the two systems. Like in the case of the Lorenz system, it is evident that here also one cannot distinguish the noise and signal components. Fig. 2 (b) and (d) exhibit the scalewise power profile for the differentiated data of the two signals respectively. The clear separation is again obvious.
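The five steps and the test protocol above translate almost directly into code. The sketch below is ours, not the authors’ implementation: it assumes the PyWavelets package (whose ‘db2’ wavelet is a four-filter-coefficient Daubechies function), replaces the fourth-order-corrected differences of Step 1 with numpy’s central-difference gradient, takes the standard chaotic Lorenz parameters ($`\sigma =10`$, $`r=28`$, $`b=8/3`$) as a stand-in for the unreproduced Table 1, and treats the 5% noise ‘strength’ as a uniform amplitude of 5% of the signal range:

```python
import numpy as np
import pywt
from scipy.integrate import solve_ivp

def wavelet_denoise(x, dt=1.0, wavelet='db2'):
    """Steps 1-5: differentiate, DWT, zero scales 1..j_m, inverse DWT, integrate."""
    xd = np.gradient(x, dt)                        # Step 1 (central differences)
    coeffs = pywt.wavedec(xd, wavelet)             # Step 2: [cA_J, cD_J, ..., cD_1]
    details = coeffs[1:][::-1]                     # detail coefficients for j = 1, 2, ...
    P = np.array([np.sum(d ** 2) for d in details])        # Step 3: P_j, Eq. (3)
    # j_m = scale of the first minimum of P_j (first index at which P starts rising)
    jm = next((j for j in range(1, len(P)) if P[j] > P[j - 1]), 1)
    for j in range(jm):                            # W_{j,k} = 0 for j = 1, ..., j_m
        details[j][:] = 0.0
    xd_hat = pywt.waverec([coeffs[0]] + details[::-1], wavelet)[:len(x)]  # Step 4
    # Step 5: cumulative trapezoidal integration; the noisy initial value fixes the constant
    return x[0] + np.concatenate(([0.0], np.cumsum(0.5 * (xd_hat[1:] + xd_hat[:-1]) * dt)))

def lorenz(t, s, sigma=10.0, r=28.0, b=8.0 / 3.0):  # assumed chaotic parameter values
    x, y, z = s
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

t = np.linspace(0.0, 163.84, 2 ** 14)               # 16384 = 2**14 samples, as in the paper
x0 = solve_ivp(lorenz, (t[0], t[-1]), [1.0, 1.0, 1.0], t_eval=t, rtol=1e-8).y[0]
amp = 0.05 * (x0.max() - x0.min())                  # 5% of the signal range, cf. Eq. (6)
x = x0 + np.random.default_rng(1).uniform(-amp, amp, x0.size)
x_hat = wavelet_denoise(x, dt=t[1] - t[0])

E_hat = np.sqrt(np.mean((x_hat - x0) ** 2))         # rms deviations, cf. Eq. (7) below
E = np.sqrt(np.mean((x - x0) ** 2))
print(E_hat / E)
```

If the scheme behaves as described in the text, the printed error ratio (defined formally in the next paragraph) should come out well below unity.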
In order to quantitatively estimate the efficiency of our denoising method, we have made the following error estimation (Kostelich and Schreiber, 1993) for the above three model systems. Since in all these cases the pure signal is known, a measure of the amount of error present in the cleaned data is obtained by taking the rms deviation of the cleaned signal $`\widehat{x}(t_i)`$ from the pure signal $`x^0(t_i)`$ as follows, $$\widehat{E}=\left(\frac{1}{N}\sum _{i=1}^{N}\left(\widehat{x}(t_i)-x^0(t_i)\right)^2\right)^{1/2},$$ (7) where $`N`$ is the length of the time series. A similar quantity $`E`$ for the noisy data $`x(t_i)`$ is also computed. The condition $`\widehat{E}/E<1`$ guarantees that noise has been successfully reduced. The error estimator $`\widehat{E}/E`$ is a natural measure for noise reduction when the original dynamics is known (Kostelich and Schreiber, 1993). In Fig. 3 we plot $`\widehat{E}/E`$ against noise strength, for the three model systems. We see that for the entire range of noise values, and even with a noise level as high as 10% of the signal exhibiting chaotic dynamics, $`\widehat{E}/E`$ remains appreciably below unity. Thus the plot demonstrates the efficiency of the approach. Different wavelet basis functions may change the nature and also improve the efficiency further. We now discuss our method when applied to raw data obtained from two real chemical systems. In the first system, the time series data was obtained from the measurements of the pressure fluctuations in a fluidized bed, which consists of a vertical chamber inside of which a bed of solid particles is supported by an upwardly moving gas. Our system used a bed of silica sand particles (of mean diameter 200 microns) with a settled height of 500 mm, fluidized by ambient air in a transparent vessel 430 mm across and 15 mm wide. Beyond a critical inlet gas velocity, viz. the minimum bubbling velocity, the gas passes through the bed in the form of bubbles, thereby churning the solid and gas mixture in a turbulent manner. The time series data have been taken by measuring the pressure fluctuations inside this mixture, relative to atmospheric pressure, using a pressure transducer attached to a probe inserted into the fluid bed. The bed was operated at an inlet gas velocity of 0.85 m/sec, and the pressure fluctuations were recorded at a sampling rate of 333 Hz (333 data points per second). As a standard procedure, we normalize the data by subtracting the mean and dividing by the standard deviation (Constantinides, 1987; Bai et al., 1997). In Fig. 4 we show the results obtained after the data have been subjected to denoising. Fig. 4 (a) shows the power distribution at different scales in the original experimental signal, while in Fig. 4 (b) we plot the scalewise power profile of the differentiated data. Again one clearly sees the two distinct contributions due to the noise and signal components. Fig. 4 (c) shows short segments of the denoised signal and the original signal which is overlaid for comparison. The cleaned signal is seen to be smooth, indicating that the noise has been removed. In the second chemical system, the time series data was obtained by sampling a measure related to the conductivity in a 3 liter liquid surfactant manufacturing experiment, at a sampling rate of 500 Hz. The time series is highly nonstationary since at various stages the operational parameters are altered (increasing the temperature for a certain duration, then adding actives to the liquid, etc.).
We studied unfiltered noisy data sets from the experiments, in order to check if our method can filter the noise out and also bring forth some intrinsic features of the system. We used our denoising algorithm to treat this data set in a slightly different way. The aim was to remove the finer scales from the differentiated data one by one, starting from the lowest (dyadic scale 1) and gradually going up, so that at each stage (after integrating the data) the observable frequencies in the filtered signal may be related to identifiable physical sources. Fig. 5 (a) shows a small segment (1 second long) of the noisy data. In Fig. 5 (b) we plot, on the same scale as in the earlier figure, the filtered data, using our method to remove the lowest dyadic scale 1. One can now clearly identify a 50 Hz component, due to the signal from the electrical power supply (the ‘net frequency’). By removing scale 2 along with scale 1, the net frequency goes away, and the filtered data exhibit a 13 Hz component superimposed with occasional spikes. This 13 Hz signal shows up clearly in the filtered data with scale 3 also removed. The same Fig. 5 (b) shows this data, overlaid on the data with scale 1 removed. This 13 Hz component may have arisen from the stirring device, which has two blades and revolves at 260 rpm, corresponding to approximately 10 Hz. The electronic signal had an antialiasing feature of no more than 250 Hz and therefore aliasing (beating) may be ruled out. It may also be mentioned here that the Fourier power spectrum of the denoised signal shows a spike at 50 Hz frequency, whose removal resulted in a residual spectrum consisting mainly of a background continuum without any appreciable peak around 13 Hz. This study with the present example suggests that the wavelet transform methodology offers considerable benefits in the recovery of intrinsic signal components. Summary We have presented a new and alternative algorithm for noise reduction using the discrete wavelet transform. We believe that our algorithm will be beneficial in various noise reduction applications, and that it shows promise in developing techniques which can resolve an observed signal into its various intrinsic components. In our method the threshold for reducing noise comes out automatically. The algorithm has been applied to three model flow systems - Lorenz, Autocatalator, and Rössler systems - all evolving chaotically. The method is seen to work quite well for a wide range of noise strengths, even as large as 10% of the signal level. We have also applied the method successfully to noisy time series data obtained from the measurement of pressure fluctuations in a fluidized bed, and also to that obtained by conductivity measurement in a liquid surfactant experiment. In all the illustrations we have been able to observe that there is a clean separation in the frequencies covered by the differentiated signal and white noise. However, if the noise is colored, a certain degree of overlap between the signal and noise may exist even after differentiation. For this complex situation, the method needs to be improved upon. Acknowledgement The authors acknowledge Unilever Research, Port Sunlight, for financial and other assistance. Part of the work has been carried out under the aegis of the Indo–Australian S&T program DST/INT/AUS/I–94/97. Literature cited
Abarbanel, H. D. I., “The Observance of Chaotic Data in Physical Systems”, Rev. Mod. Phys. 65, 1340 (1993)
Bai, D., T. Bi, and J. R. Grace, “Chaotic Behavior of Fluidized Beds Based on Pressure and Voidage Fluctuations”, AIChE Journal 43, 1357 (1997)
Bakshi, B., and G. Stephanopoulos, “Representation of Process Trends. Part IV. Induction of Real–time Patterns from Operating Data for Diagnoses and Supervisory Control”, Computers chem. Engng. 18, 303 (1994)
Box, G. E. P., G. M. Jenkins and G. C. Reinsel, “Time Series Analysis: Forecasting and Control”, Prentice Hall, Englewood Cliffs (1994)
Carrier, J. F., and G. Stephanopoulos, “Wavelet–Based Modulation in Control–Relevant Process Identification”, AIChE Journal 44, 341 (1998)
Cawley, R., and G.–H. Hsu, “Local–Geometric–Projection Method for Noise Reduction in Chaotic Maps and Flows”, Phys. Rev. A 46, 3057 (1992)
Cohen, L., “Time Frequency Analysis”, Prentice Hall, Englewood Cliffs (1995)
Constantinides, A., “Applied Numerical Methods with Personal Computers”, McGraw–Hill Book Company, USA (1987)
Daubechies, I., “Ten Lectures on Wavelets”, SIAM, Philadelphia (1990)
Donoho, D. L., and I. M. Johnstone, “Adapting to Unknown Smoothness via Wavelet Shrinkage”, J. American Statistical Association 90, 1200 (1995)
Farmer, J. D., and J. J. Sidorowich, “Optimal Shadowing and Noise Reduction”, Physica D 47, 373 (1991)
Grassberger, P., T. Schreiber, and C. Schaffrath, “Non–linear Time Sequence Analysis”, Int. J. Bifurcation Chaos 1, 521 (1991)
Guckenheimer, J. and P. Holmes, “Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields”, Springer Verlag, Berlin (1983)
Härdle, W., “Applied Nonparametric Regression”, Econometric Society Monographs, Cambridge University Press (1990)
Holschneider, M., “Wavelets: An Analysis Tool”, Clarendon Press, Oxford (1995)
Kantz, H. and T. Schreiber, “Nonlinear Time Series Analysis”, Cambridge University Press, Cambridge (1997)
Kostelich, E. J., and J. A. Yorke, “Noise Reduction in Dynamical Systems”, Phys. Rev. A 38, 1649 (1988)
Lorenz, E., “Deterministic Nonperiodic Flow”, J. Atmos. Sci. 20, 130 (1963)
Luo, R., M. Misra, S. J. Qin, R. Barton, and D. M. Himmelblau, “Sensor Fault Detection via Multiscale Analysis and Nonparametric Statistical Inference”, Ind. Eng. Chem. Res. 37, 1024 (1998)
Lynch, D. T., “Chaotic Behavior of Reaction Systems: Mixed Cubic and Quadratic Autocatalators”, Chem. Engg. Sci. 47, 4435 (1992)
Nason, G. P., “Wavelet Regression by Cross–validation”, Dept. of Mathematics, University of Bristol (1994)
Press, W. H., B. P. Flannery, S. A. Teukolsky and W. T. Vetterling, “Numerical Recipes”, Cambridge University Press, Cambridge (1987)
Rössler, O. E., “Chaotic Behavior in Simple Reaction Systems”, Z. Naturforsch. 31 a, 259 (1976)
Safavi, A. A., J. Chen, and J. A. Romagnoli, “Wavelet–Based Density Estimation and Application to Process Monitoring”, AIChE Journal 43, 1227 (1997)
Sauer, T., “A Noise Reduction Method for Signals from Nonlinear Systems”, Physica D 58, 193 (1992)
Strang, G. and T. Nguyen, “Wavelets and Filter Banks”, Wellesley–Cambridge Press (1996)
Strogatz, S. H., “Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry and Engineering”, Addison–Wesley (1994)
Figure Captions * Plots for Lorenz system, with parameter values as stated in Table 1. 16384 ($`=2^{14}`$) data points are considered. Scalewise power distribution is plotted against the dyadic scale, (a) for the pure signal (the interpolated line through the data points is drawn for visualization), (b) using the data obtained after differentiating the signal, (c) for the signal corrupted by noise, and (d) using differentiated data of noisy signal.
A segment of the cleansed signal is shown in (e), along with the pure and noisy signals overlaid for comparison. * Plots for Autocatalator and Rössler systems, in (a), (b) and (c), (d) respectively, for parameter values as in Table 1. Scalewise power profile plotted, (a) for the noisy autocatalytic signal, (b) using data after differentiating the signal, (c) for the noisy Rössler signal, and (d) using the differentiated data of the noisy signal. * The error estimator $`\widehat{E}/E`$ plotted against the noise strength for all the three systems. * Plots for the fluidized bed experiment. Scalewise power profile is shown for (a) the experimental signal, and (b) the data after the signal has been numerically differentiated. A small segment of the cleansed signal is shown in (c), along with the original signal for comparison. * Plots for the liquid surfactant experiment. (a) A segment of the original noisy data, 1 second long. (b) The filtered data, with scale 1 removed, and with scales 1, 2 and 3 removed (on the same axes–scales as (a)).
# Fluctuating spin $`g`$-tensor in small metal grains With the advance of nanoparticle technology, it has become possible to resolve individual energy levels for electrons in ultrasmall metal grains. Recent experiments addressed their Zeeman splitting under the application of a magnetic field $`\stackrel{}{B}`$ . The splitting of a level $`\epsilon _\mu `$ is described by a $`g`$-factor, $`\delta \epsilon _\mu =\pm \frac{1}{2}\mu _BgB_z`$, where $`\mu _B`$ is the Bohr magneton. A free electron has $`g=2`$, but in small metal grains the effective $`g`$-factor may be reduced as a result of spin-orbit scattering . In order to study this reduction, Salinas et al. have doped Al grains (which do not have significant spin-orbit scattering) with Au (which has). For small concentrations of Au, the effective $`g`$-factor was seen to drop from 2 to around 0.7. Even lower values $`g\sim 0.3`$ were reported in experiments on Au grains . For disordered systems with spin-orbit scattering, the splitting of a level $`\epsilon _\mu `$ depends not only on the magnitude of the magnetic field $`\stackrel{}{B}`$, but also on its direction. Hence, an analysis in terms of a “$`g`$-tensor” is more appropriate . To be precise, the Zeeman field splits the Kramers’ doublet $`\epsilon _\mu \to \epsilon _\mu \pm \delta \epsilon _\mu `$ with $$\delta \epsilon _\mu ^2=(\mu _B/2)^2\,\stackrel{}{B}\cdot \mathcal{G}_\mu \stackrel{}{B},$$ (1) where $`\mathcal{G}_\mu `$ is a $`3\times 3`$ tensor. In the absence of spin-orbit scattering, the tensor $`\mathcal{G}_\mu `$ is isotropic, $`(\mathcal{G}_\mu )_{ij}=4\delta _{ij}`$. The effect of spin-orbit scattering on $`\mathcal{G}_\mu `$ is threefold: It leads to a decrease of the typical magnitude of $`\mathcal{G}_\mu `$, it makes the tensor structure of $`\mathcal{G}_\mu `$ important (i.e., it introduces an anisotropic response to the magnetic field $`\stackrel{}{B}`$), and it causes $`\mathcal{G}_\mu `$ to be different for each level $`\epsilon _\mu `$. Hence $`\mathcal{G}_\mu `$ becomes a fluctuating quantity, and it is important to know its statistical distribution. The latter problem was addressed in a recent paper by Matveev et al. , however without considering the tensor structure of $`\mathcal{G}_\mu `$. The anisotropy of the $`g`$-tensor is a measurable quantity and we here consider the distribution of the entire tensor $`\mathcal{G}_\mu `$. The distribution $`P(\mathcal{G}_\mu )`$ is defined with respect to an ensemble of small metal grains of roughly equal size. The same distribution applies to the fluctuations of $`\mathcal{G}_\mu `$ as a function of the level $`\epsilon _\mu `$ in the same grain. In general, $`\mathcal{G}_\mu `$ has a contribution $`\mathcal{G}_\mu ^{\mathrm{spin}}`$ from the magnetic moment of electron spins, and a contribution $`\mathcal{G}_\mu ^{\mathrm{orb}}`$ from the orbital angular moment of the state $`|\psi _\mu \rangle `$. In Ref. , the typical sizes of both contributions were estimated as $`\mathcal{G}^{\mathrm{spin}}\sim \tau _{\mathrm{so}}\mathrm{\Delta }`$ and $`\mathcal{G}^{\mathrm{orb}}\sim \ell /L`$, where $`\tau _{\mathrm{so}}`$ is the mean spin-orbit scattering time, $`L`$ is the grain size, $`\mathrm{\Delta }\propto L^{-3}`$ is the mean level spacing, and $`\ell \lesssim L`$ is the elastic mean free path. We restrict ourselves to the spin contribution $`\mathcal{G}^{\mathrm{spin}}`$, which should be dominant for small grain sizes, provided $`\tau _{\mathrm{so}}`$ does not depend on system size, as should be the case for the experiments of Ref. . When orbital contributions are important, the anisotropy of $`\mathcal{G}`$ will be affected by the shape of the grain. In that case, our main conclusions apply only to a roughly spherical grain.
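Equation (1) is easy to exercise numerically once a $`g`$-tensor is given; in its principal-axis frame it reduces to Eq. (2) below. A small sketch of ours, with purely illustrative numbers:

```python
import numpy as np

mu_B = 1.0   # Bohr magneton in arbitrary units (illustrative)

def zeeman_splitting(G, B):
    """Eq. (1): delta_eps = (mu_B/2) sqrt(B . G B) for a 3x3 tensor G."""
    return 0.5 * mu_B * np.sqrt(B @ G @ B)

# Isotropic case without spin-orbit scattering: G_ij = 4 delta_ij, so
# delta_eps = mu_B |B|, i.e., g = 2 for any field direction.
print(zeeman_splitting(4.0 * np.eye(3), np.array([0.0, 0.0, 1.0])))

# An anisotropic example: assumed principal g-factors (g1, g2, g3) = (1.2, 0.6, 0.3)
G = np.diag(np.array([1.2, 0.6, 0.3]) ** 2)
for B in np.eye(3):                               # unit field along each principal axis
    print(2.0 * zeeman_splitting(G, B) / mu_B)    # recovers g1, g2, g3, cf. Eq. (2)
```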
As the typical magnitude of $`𝒢`$ (we drop the superscript “spin” and the subscript $`\mu `$ if there is no ambiguity) depends on the microscopic parameters $`\tau _{\mathrm{so}}`$ and $`\mathrm{\Delta }`$, which are in most cases not known accurately, we choose to have the typical magnitude of $`𝒢`$ serve as an external parameter in our theory.

We first present our main results. With a suitable choice of the coordinate axes (“principal axes”), the tensor $`𝒢`$ can be diagonalized. Writing its eigenvalues as $`g_j^2`$ and denoting the components of the magnetic field along the principal axes by $`B_j`$, $`j=1,2,3`$, Eq. (1) takes a particularly simple form,

$$\delta \epsilon _\mu ^2=\frac{1}{4}\mu _B^2(g_1^2B_1^2+g_2^2B_2^2+g_3^2B_3^2).$$ (2)

We refer to the numbers $`g_1`$, $`g_2`$, and $`g_3`$ as principal $`g`$-factors. For a generic metal grain of a cubic material, rotational symmetry implies that, for a given level $`\epsilon _\mu `$, the positioning of the principal axes is entirely random in space, as long as they are mutually orthogonal. Hence, it remains to study the distribution $`P(g_1,g_2,g_3)`$ of the principal $`g`$-factors $`g_1`$, $`g_2`$, and $`g_3`$ for the level $`\epsilon _\mu `$. Our main result is that, for sufficiently strong spin-orbit scattering, $`P(g_1,g_2,g_3)`$ is given by the distribution

$$P(g_1,g_2,g_3)\propto \prod _{i<j}|g_i^2-g_j^2|\prod _ie^{-3g_i^2/2\langle g^2\rangle },$$ (3)

where $`g^2=\frac{1}{3}(g_1^2+g_2^2+g_3^2)`$ is the average of $`(2\delta \epsilon _\mu /\mu _BB)^2`$ over all directions of $`\stackrel{}{B}`$ and $`\langle g^2\rangle `$ is its average over the ensemble of grains. In random matrix theory , this distribution is known as the Laguerre ensemble. Without loss of generality we may assume that $`g_1^2\le g_2^2\le g_3^2`$. Figure 1 shows the averages $`\langle g_j^2\rangle `$ and a realization of the principal $`g`$-factors $`g_1`$, $`g_2`$, and $`g_3`$ for a specific sample, as a function of a parameter $`\lambda \sim (\tau _{\mathrm{so}}\mathrm{\Delta })^{-1/2}`$ measuring the strength of the spin-orbit scattering. (A formal definition of $`\lambda `$ in a random-matrix model will be given below.) From the figure, one readily observes that, typically, the three principal $`g`$-factors differ by a factor 2–3. This implies that, in spite of the average rotational symmetry of the grains, the response of a given level $`\epsilon _\mu `$ to an applied magnetic field is highly anisotropic because of mesoscopic fluctuations. The mathematical origin of this effect is the “level repulsion” factor $`|g_i^2-g_j^2|`$ in the probability distribution (3), which signifies that, to a certain extent, $`𝒢_\mu `$ can be viewed as a “random matrix”.

Let us now turn to a more detailed discussion of our results. Without magnetic field, the Hamiltonian $`\mathcal{H}`$ of the grain is invariant under time-reversal, so that all eigenstates come in doublets $`|\psi _\mu \rangle `$ and $`|𝒯\psi _\mu \rangle `$, where $`𝒯\psi =i\sigma _2\psi ^{*}`$ is the time-reversal operator. To study the splitting of the doublets by a magnetic field, we add a term $`\mu _B\stackrel{}{B}\cdot \stackrel{}{\sigma }`$ to $`\mathcal{H}`$, $`\stackrel{}{\sigma }=(\sigma _1,\sigma _2,\sigma _3)`$ being the vector of Pauli matrices. From degenerate perturbation theory we find that a level $`\epsilon _\mu `$ is split into $`\epsilon _\mu \pm \delta \epsilon _\mu `$, with $`\delta \epsilon _\mu `$ of the form (1).
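To make Eq. (2) concrete, here is a minimal numerical illustration of the doublet splitting; the principal $`g`$-factors and field direction below are arbitrary illustrative values, not data from the experiments cited above (Python/NumPy is used for all code sketches in this document):

```python
import numpy as np

# Splitting of one Kramers doublet via Eq. (2): delta-epsilon is set by the
# principal g-factors g_j and the field components B_j along the principal
# axes. All numbers below are arbitrary illustrations.
g = np.array([0.4, 0.9, 1.3])                    # principal g-factors g_1..g_3
B = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)     # 1 T field, oblique direction
mu_B = 5.788e-5                                  # Bohr magneton (eV/T)
delta_eps = 0.5 * mu_B * np.sqrt(np.sum(g**2 * B**2))
print(delta_eps)     # splitting in eV; it changes with the field direction
```

Rotating $`\stackrel{}{B}`$ between the principal axes sweeps the effective $`g`$-factor between $`g_1`$ and $`g_3`$, which is precisely the anisotropy discussed above.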
For the real symmetric $`3\times 3`$ matrix $`𝒢_\mu `$ one has

$$𝒢_\mu =G_\mu ^\mathrm{T}G_\mu ,$$ (4)

where $`G_\mu `$ is a real $`3\times 3`$ matrix with elements

$`(G_\mu )_{1j}+i(G_\mu )_{2j}=2\langle 𝒯\psi _\mu |\sigma _j|\psi _\mu \rangle ,`$ (5)

$`(G_\mu )_{3j}=2\langle \psi _\mu |\sigma _j|\psi _\mu \rangle .`$ (6)

We use random-matrix theory (RMT) to compute the distribution of $`𝒢_\mu `$. In RMT, the microscopic Hamiltonian $`\mathcal{H}`$ is replaced by a $`2N\times 2N`$ random Hermitian matrix $`H`$, where at the end of the calculation the limit $`N\to \infty `$ is taken. (The factor $`2`$ accounts for spin.) The wavefunction $`\psi _\mu (\stackrel{}{r})`$ is replaced by an $`N`$-component spinor eigenvector $`\psi _{\mu n}`$ of $`H`$, where $`n`$ is a vector index. To study the effect of spin-orbit scattering, we take $`H`$ of the form

$$H(\lambda )=S\otimes \mathbb{1}_2+i\frac{\lambda }{\sqrt{4N}}\sum _jA_j\otimes \sigma _j,$$ (8)

where $`S`$ ($`A_j`$) is a real symmetric (antisymmetric) $`N\times N`$ matrix with the Gaussian distribution

$`P(S)\propto e^{-(\pi ^2/4N\mathrm{\Delta }^2)\mathrm{tr}S^\mathrm{T}S},`$ (9)

$`P(A_j)\propto e^{-(\pi ^2/4N\mathrm{\Delta }^2)\mathrm{tr}A_j^\mathrm{T}A_j},\qquad j=1,2,3.`$ (10)

The Hamiltonian $`H(\lambda )`$ is similar to the Pandey-Mehta Hamiltonian used to describe the effect of time-reversal symmetry breaking in a system of spinless particles . In Eq. (9), $`\mathrm{\Delta }`$ is the average spacing between the Kramers doublets near $`\epsilon =0`$. The amount of spin-orbit scattering is measured by the parameter $`\lambda \sim (\tau _{\mathrm{so}}\mathrm{\Delta })^{-1/2}`$ . The case $`\lambda =0`$ corresponds to the absence of spin-orbit scattering, when $`H=S`$ is a member of the Gaussian Orthogonal Ensemble (GOE) of random matrix theory. The case $`\lambda =(4N)^{1/2}`$ corresponds to the case of strong spin-orbit scattering, when $`H`$ is a member of the Gaussian Symplectic Ensemble (GSE). The ensemble of Hamiltonians $`H(\lambda )`$ corresponds to a crossover from the GOE to the GSE. Similar crossovers were studied previously in the literature, in particular for the cases GOE–GUE and GSE–GUE (GUE is the Gaussian Unitary Ensemble) . The distribution of the tensor $`𝒢_\mu `$ for an eigenvalue $`\epsilon _\mu `$ of the matrix $`H(\lambda )`$ is related to the statistics of eigenvectors of $`H(\lambda )`$ in this crossover ensemble.

To deal with the twofold degeneracy of the eigenvalue $`\epsilon _\mu `$, we combine the two $`N`$-component spinor eigenvectors $`\psi _\mu `$ and $`𝒯\psi _\mu `$ into a single $`N`$-component vector of quaternions $`\overline{\psi }=(\psi ,𝒯\psi )`$ . The quaternion vector $`\overline{\psi }`$ can be parameterized as

$$\overline{\psi }=\sum _{k=0}^3\alpha _ku_k\varphi _k,$$ (11)

where the $`u_k`$ are quaternion numbers with $`\text{tr}u_k^{\dagger }u_l=2\delta _{kl}`$ (“quaternion phase factors”), the $`\varphi _k`$ are $`N`$-component real orthonormal vectors, and the $`\alpha _k`$ are positive numbers such that $`\sum _k\alpha _k^2=1`$ ($`k,l=0,1,2,3`$). An eigenvector in the GOE corresponds to $`\alpha _0=1`$, $`\alpha _1=\alpha _2=\alpha _3=0`$, while an eigenvector in the GSE typically has $`\alpha _0\approx \alpha _1\approx \alpha _2\approx \alpha _3\approx \frac{1}{2}`$. A similar parameterization has been applied to the GOE–GUE crossover .
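The crossover model of Eqs. (8)–(10) is straightforward to realize numerically. The sketch below (our own illustration, not the authors’ code; matrix size and seed are arbitrary) draws one realization of $`H(\lambda )`$ and checks the exact twofold Kramers degeneracy of its spectrum:

```python
import numpy as np

def sample_H(N, lam, Delta=1.0, seed=0):
    """One realization of H(lambda), Eq. (8): S real symmetric, A_j real
    antisymmetric, drawn with the Gaussian weights of Eqs. (9)-(10)
    (off-diagonal matrix-element variance N*Delta^2/pi^2)."""
    rng = np.random.default_rng(seed)
    sig = np.sqrt(N) * Delta / np.pi
    pauli = [np.array([[0, 1], [1, 0]], complex),
             np.array([[0, -1j], [1j, 0]], complex),
             np.array([[1, 0], [0, -1]], complex)]
    M = rng.normal(0.0, sig, (N, N))
    H = np.kron((M + M.T) / np.sqrt(2.0), np.eye(2)).astype(complex)
    for s in pauli:
        M = rng.normal(0.0, sig, (N, N))
        H += 1j * (lam / np.sqrt(4.0 * N)) * np.kron((M - M.T) / np.sqrt(2.0), s)
    return H

E = np.linalg.eigvalsh(sample_H(N=40, lam=3.0))
print(np.abs(E[0::2] - E[1::2]).max())   # ~1e-13: levels come in Kramers pairs
```

The self-dual (quaternion-real) structure of Eq. (8) guarantees this degeneracy at any $`\lambda `$; only a magnetic field lifts it.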
Orthogonal invariance of the distributions of $`S`$ and $`A_j`$, together with the freedom to choose the overall quaternion phase of $`\overline{\psi }`$, give a distribution of the $`u_k`$ and $`\varphi _k`$ that is as random as possible, provided the above-mentioned orthogonality constraints are obeyed. Hence, all nontrivial information about the eigenvector statistics is encoded in the numbers $`\alpha _k`$. Substitution of the parameterization (11) into Eq. (6) yields

$`g_1=2(\alpha _0^2+\alpha _1^2-\alpha _2^2-\alpha _3^2),`$ (12)

$`g_2=2(\alpha _0^2-\alpha _1^2+\alpha _2^2-\alpha _3^2),`$ (13)

$`g_3=2(\alpha _0^2-\alpha _1^2-\alpha _2^2+\alpha _3^2).`$ (14)

While the squares $`\alpha _k^2`$ ($`k=0,1,2,3`$) are all positive, the principal $`g`$-factors as given by Eqs. (12)–(14) can also be negative. Permutations of the $`\alpha _k`$ alter the signs of the individual $`g_j`$, but not of their product $`g_1g_2g_3`$. [The product $`g_1g_2g_3=\det G`$ also follows from Eq. (6); one verifies that it does not change when $`|\psi \rangle `$ is replaced by a linear combination of $`|\psi \rangle `$ and $`|𝒯\psi \rangle `$.] Without loss of generality, we may assume that $`g_1^2\le g_2^2\le g_3^2`$, and that $`g_2`$ and $`g_3`$ are positive. Then equations (12)–(14) provide the constraint $`g_2+g_3\le 2+g_1`$, which poses a bound on the occurrence of negative values for the product $`g_1g_2g_3`$. We conclude that all information on the eigenvector statistics in the GOE–GSE crossover is encoded in the magnitudes of $`g_1`$, $`g_2`$, and $`g_3`$ and the sign of their product. Since for the level splitting $`\delta \epsilon _\mu (\stackrel{}{B})`$ only the squares $`g_j^2`$ are of relevance, we disregard the sign of $`g_1g_2g_3`$ in the remainder of the paper. The sign of $`g_1g_2g_3`$ may be determined in principle, however, by a spin-resonance experiment .

In order to calculate the distribution $`P(g_1,g_2,g_3)`$ one has, in principle, to carry out the same program as was done in Refs. for the GOE–GUE crossover. However, it turns out that in the present case the calculation is considerably more complicated. This can already be seen from the mere observation that the wavefunction statistics in the GOE–GSE crossover is governed by three variables $`g_1`$, $`g_2`$, and $`g_3`$, whereas in the case of the GOE–GUE crossover only one variable was needed . In the field-theoretic language of Ref. , one has to use a nonlinear sigma model of $`16\times 16`$ supermatrices, instead of the usual $`8\times 8`$ for the GOE–GUE crossover . Here we refrain from such a truly heroic enterprise. Instead we focus on the regimes of strong and weak spin-orbit coupling, and study the intermediate regime by means of numerical simulations of the model (8).

Before we address the case of strong spin-orbit scattering $`\lambda \gg 1`$ in the crossover Hamiltonian, we first consider the GSE, corresponding to $`\lambda ^2=4N`$. In the GSE, the wavefunction $`\psi `$ is a vector of independently Gaussian distributed complex numbers. Then, one easily verifies that, for large $`N`$, the elements of the matrix $`G`$ of Eq. (6) are real random variables, independently distributed, with a Gaussian distribution of zero mean and variance $`2/N`$. Hence $`G`$ is a random real matrix with distribution

$$P(G)\propto \mathrm{exp}(-N\mathrm{tr}G^\mathrm{T}G/4).$$ (15)

The squared principal $`g`$-factors $`g_j^2`$ are the eigenvalues of the product $`𝒢=G^\mathrm{T}G`$.
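This GSE limit is easy to check by direct sampling: draw real $`3\times 3`$ matrices $`G`$ with independent Gaussian elements of variance $`2/N`$, as in Eq. (15), and diagonalize $`𝒢=G^\mathrm{T}G`$. A short sketch of ours (sample sizes arbitrary) reproduces the ensemble average $`\langle g^2\rangle =6/N`$ quoted below and the typical factor 2–3 anisotropy:

```python
import numpy as np

N, samples = 200, 50000
rng = np.random.default_rng(1)
G = rng.normal(0.0, np.sqrt(2.0 / N), size=(samples, 3, 3))    # Eq. (15)
gsq = np.linalg.eigvalsh(np.transpose(G, (0, 2, 1)) @ G)       # g_j^2, ascending
print(gsq.sum(axis=1).mean() * N / 3.0)            # -> 6, i.e. <g^2> = 6/N
print(np.median(np.sqrt(gsq[:, 2] / gsq[:, 0])))   # typical anisotropy, ~2-3
```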
The distribution of the eigenvalues of such a matrix product is known in the literature . It is given by Eq. (3) with $`\langle g^2\rangle =6/N`$.

Let us now turn to the Hamiltonian $`H(\lambda )`$ for large $`\lambda \gg 1`$, but still $`\lambda \ll N^{1/2}`$. In that case, spin-rotation invariance is broken globally (so that a wavefunction as a whole does not have a well-defined spin), but not locally; on short length scales, the particle keeps a well-defined spin. We then argue that, in the random matrix language, one may think of the quaternion wavevector $`\overline{\psi }`$ as consisting of $`\lambda ^2\gg 1`$ components, each with a well-defined spin (or “quaternion phase”), but with uncorrelated spins for each component. The distribution of $`𝒢`$ is then given by the distribution for the GSE with $`N`$ replaced by a number $`\sim \lambda ^2`$ . We have found that the precise correspondence is $`N\to 2\lambda ^2`$, by estimating the exponential term in the exact distribution, along the lines of Ref. . In order to verify this statement we have numerically generated random matrices of the form (8). The comparison with the GSE distribution with $`N`$ replaced by $`2\lambda ^2`$ is excellent, see Fig. 2.

In order to further analyze $`P(𝒢)`$ for strong spin-orbit scattering, we introduce the orientationally averaged $`g`$-factor,

$$g^2=\frac{1}{3}(g_1^2+g_2^2+g_3^2)=\langle (2\delta \epsilon _\mu /\mu _B|B|)^2\rangle _\mathrm{\Omega },$$ (16)

where the brackets $`\langle \cdots \rangle _\mathrm{\Omega }`$ indicate an average over all directions of the magnetic field. Further, we introduce the ratios $`r_{12}=|g_1/g_2|`$ and $`r_{23}=|g_2/g_3|`$ to characterize the anisotropy of $`𝒢`$. Changing variables in Eq. (3), we find that $`P(g,r_{12},r_{23})`$ reads

$`P\propto {\displaystyle \frac{r_{23}^3(1-r_{23}^2)(1-r_{23}^2r_{12}^2)(1-r_{12}^2)}{(1+r_{23}^2+r_{23}^2r_{12}^2)^{9/2}}}g^8e^{-9g^2/2\langle g^2\rangle }.`$ (17)

Note that the distribution of $`r_{12}`$ and $`r_{23}`$ does not depend on $`\langle g^2\rangle `$ (provided the spin-orbit scattering is sufficiently strong). The “$`g`$-factor” $`g_z`$ for a magnetic field in the $`z`$-direction (which is a random direction with respect to the principal axes) is given by $`g_z=(𝒢_{zz})^{1/2}`$. Its distribution follows from Eq. (15) as $`P(g_z)\propto g_z^2\mathrm{exp}(-3g_z^2/2\langle g^2\rangle )`$, in agreement with Ref. .

The case of weak spin-orbit scattering can be addressed by treating the terms proportional to $`\lambda `$ in Eq. (8) as a small perturbation. To second order in $`\lambda `$ we find,

$$𝒢=4-4\lambda ^2\sum _{\nu \ne \mu }a_{\mu \nu }^\mathrm{T}a_{\mu \nu }\frac{1}{(\epsilon _\nu -\epsilon _\mu )^2},$$ (18)

where $`a_{\mu \nu }`$ is an antisymmetric $`3\times 3`$ matrix proportional to the matrix elements of the perturbation in the eigenbasis $`\{|\psi _\nu \rangle \}`$ of $`H(0)=S`$, $`(a_{\mu \nu })_{ij}=N^{-1/2}\sum _k\langle \psi _\mu |A_k|\psi _\nu \rangle \epsilon _{kij}`$, where $`\epsilon _{kij}`$ is the antisymmetric tensor. We first consider the change in the principal $`g`$-factors due to the matrix element $`a_{\mu \nu }`$ coupling the level $`\epsilon _\mu `$ to a close neighboring level $`\epsilon _\nu `$, where $`\nu =\mu +1`$ or $`\mu -1`$. (Level repulsion rules out the possibility that both levels $`\epsilon _{\mu \pm 1}`$ are very close.) In view of the energy denominators in Eq. (18), we may expect that this contribution is dominant.
Taking only the relevant matrix element $`a_{\mu \nu }`$ into account, we find

$$g_3=2,\qquad g_1=g_2=2-\frac{1}{2}\lambda ^2(\epsilon _\mu -\epsilon _\nu )^{-2}\mathrm{tr}\,a_{\mu \nu }^\mathrm{T}a_{\mu \nu },$$ (19)

where $`\nu =\mu \pm 1`$. Since the spacing distribution $`P(|\epsilon _\mu -\epsilon _\nu |)\approx \pi \mathrm{\Delta }^{-2}|\epsilon _\mu -\epsilon _\nu |`$ for small $`\epsilon _\mu -\epsilon _\nu `$ , we find that the distribution $`P(g)`$ of both $`g_1`$ and $`g_2`$ has tails $`P(g)=(3\lambda ^2/2\pi )(2-g)^{-2}`$ for $`2-g\gg \lambda ^2`$. The main effect of contributions from the other energy levels in Eq. (18) is a reduction of $`g_3`$ below $`2`$, and a separation of $`g_1`$ and $`g_2`$. This is illustrated in Fig. 1. The three regimes of weak, intermediate, and strong spin-orbit scattering are compared in Fig. 3, using a numerical evaluation of the distributions of the three principal $`g`$-values.

We gratefully acknowledge discussions with T. A. Arias, D. Davidovic, K. M. Frahm, Y. Oreg, D. C. Ralph, and M. Tinkham. Upon completion of this project, we learned of Ref. , which contains some overlap with our work. This work was supported in part by the NSF through the Harvard MRSEC (grant DMR 98-09363), and by grant DMR 99-81283.
# Patterns of Super Star Cluster Formation in ‘Clumpy’ Starburst Galaxies

## 1. Introduction

The optical morphologies of active star-forming galaxies are often dominated by kpc-scale star-forming regions, which appear as luminous ‘clumps’ or ‘islands’. These clumps can have very high optical and UV surface brightnesses, making them visible signatures of intense star formation over cosmological distances. High angular resolution imaging with the Hubble Space Telescope (HST) and ground-based telescopes reveals that clumps consist of associations of dense star clusters. These are usually superimposed on a more diffuse background, probably made up of massive stars. The presence of numerous star clusters influences the evolution of clumps through their interactions with the surrounding ISM, which must respond to photoionization, as well as mechanical energy and momentum inputs from the evolving star clusters. These processes can trigger star formation and simultaneously remove ISM from the clump. Since star clusters can be age-dated from their spectra, their presence also offers the possibility of measuring the history of star-forming activity in starburst galaxies. Starbursts are well-known producers of dense, massive star clusters and thus are an excellent place to study these interactions. In this paper we explore the spatial and temporal patterns of massive star cluster formation in a small sample of relatively nearby starburst galaxies. The primary objects in this study are NGC 2415 (Haro 1), NGC 3310 and NGC 7673. The Haro 1 and NGC 7673 data are HST WFPC2 ultraviolet (UV) and optical images, while we use ground-based optical images from the WIYN 3.5-m telescope for NGC 3310. Details are presented in Gallagher et al. (2000).

## 2. Star Clusters in Starbursts

We begin with NGC 3310, the least intense example in our sample (Figure 1). WIYN optical images reveal compact star clusters in the ring and along the arms of this galaxy, with a wider distribution of fainter cluster candidates outside of these regions. While clusters are numerous, their spatial distribution in this non-clumpy starburst resembles that of a normal spiral galaxy.

Haro 1 is an intense starburst with high optical brightness, producing a luminous galaxy with a diameter of only about 13 kpc. The spectrum shows strong Balmer absorption lines, indicating that this is a relatively evolved starburst. High-quality ground-based images with the WIYN Telescope suggest a transition to a clumpy structure. Our WFPC2 image in the emission line-free F547M filter displays a moderate degree of clustering of dense, luminous star clusters, and high surface brightness regions usually containing more than one compact star cluster.

NGC 7673 is a starbursting clumpy irregular galaxy. Colors and magnitudes derived for star cluster candidates in the F255W, F555W, and F814W WFPC2 bands indicate that young clusters are mixed through the optically prominent clumps; their colors appear to be driven at least as much by locally variable extinction as by ages. A low level of organization may be present, with clusters near the galaxy’s center possibly being made in a linear structure by bar-induced gas flows. Some clumps are roughly circular, and so propagating star formation could be present. Figure 2 shows the spectral energy distribution (SED) of two clumps in NGC 7673, while Figure 3 shows model SEDs for star clusters at various ages.
Probably more than one mechanism structures the distribution of star clusters within clumps, but in most cases populations of star clusters are formed over relatively short time scales of $`<`$ 100 Myr.

## 3. Discussion

Clumps seem to be features of active starbursts with high intensities. This fits with theoretical models (Elmegreen & Efremov 1997; Noguchi 1999) where stellar clumps form in gas-rich galactic disks with higher than average internal velocity dispersions, leading to large Jeans masses and lengths. Such conditions could be natural consequences of collisional perturbations of gassy disk galaxies. They would be more common in less evolved galaxies with higher gas contents and lower degrees of organization, providing a possible explanation for the clumpy appearances of high redshift galaxies (Noguchi 1997). Although super star clusters can form in a range of conditions, clumps may be particularly important in unevolved galaxies; observations of nearby clumps then may provide insight into the formation of globular clusters.

Once star formation begins in a clump, the subsequent evolution must be complex. Energy and momentum inputs from massive stars and star clusters produce obvious observational signatures in emission line profiles (Homeier & Gallagher 1999), disturb the ISM, and likely lead to compressed regions which are excellent sites for further cluster formation (e.g., Scalo & Chappell 1999). The close spacing between clusters and high densities of gas clouds may also lead to cluster-cluster mergers or cluster rejuvenations due to gas cloud captures. These possibilities reinforce the capabilities of massive star-forming clumps to be prime producers of massive super star clusters. Once made, these clusters will be the most likely to survive and be observed at later times.

We thank the WFPC2 Investigation Definition Team, and the Space Telescope Science Institute Archival Observer program for providing support through the National Aeronautics and Space Administration for this work.

## References

Elmegreen, B.G. & Efremov, Y.N., 1997, ApJ, 480, 235

Gallagher, J.S., Homeier, N.L., Conselice, C.J., et al., 2000, in preparation

Homeier, N.L. & Gallagher, J.S., 1999, ApJ, 522, 199

Noguchi, M., 1999, ApJ, 514, 77

Noguchi, M., 1997, Nature, 392, 253

Scalo, J. & Chappell, D., 1999, ApJ, 510, 258
# Predicting the properties of binary stellar systems: the evolution of accreting protobinary systems

## 1 Introduction

The favoured mechanism for producing most binary stellar systems is the fragmentation of a molecular cloud core during its gravitational collapse. Fragmentation can be divided into two main classes: direct fragmentation (e.g. Boss & Bodenheimer 1979; Boss 1986; Bonnell et al. 1991, 1992; Bonnell & Bastien 1992; Nelson & Papaloizou 1993; Burkert & Bodenheimer 1993; Bate & Burkert 1997), and rotational fragmentation (e.g. Norman & Wilson 1978; Bonnell 1994; Bonnell & Bate 1994a, 1994b; Burkert & Bodenheimer 1996; Burkert, Bate & Bodenheimer 1997). Direct fragmentation depends critically on the initial density structure within the molecular cloud core (e.g. non-spherical shape or density perturbations), whereas rotational fragmentation is relatively independent of the initial density structure of the cloud because the fragmentation occurs due to nonaxisymmetric instabilities in a massive rotationally-supported disc or ring. The main conclusion, from $`\sim 20`$ years of fragmentation studies, is that it appears to be possible to form binaries with similar properties to those that are observed. However, it has not been possible to use these calculations to predict the fundamental properties of stellar systems such as the fraction of stellar systems which are binary or the properties of binary systems (e.g. the distributions of mass ratios, separations, and eccentricities and the properties of discs in pre-main-sequence systems).

There are two primary reasons for this lack of predictive power. First, the results of fragmentation calculations depend sensitively on the initial conditions, which are poorly constrained. The second problem is that of accretion. In fragmentation calculations, the binary or multiple protostellar systems that are formed initially contain only a small fraction of the total mass of the original cloud (e.g. Boss 1986; Bonnell & Bate 1994b), with the magnitude of this fraction decreasing with the binary’s initial separation (see Section 6.1). To obtain the final parameters of a stellar system, a calculation must be followed until all of the original cloud material has been accumulated by one of the protostars or their discs. Unfortunately, due to the enormous range in densities and dynamical time-scales in such a calculation, this is very difficult. Thus far, only one calculation has followed the three-dimensional collapse of a molecular cloud core until $`>90`$% of the initial cloud was contained in the protostars or circumstellar/circumbinary discs (Bate, Bonnell & Price 1995). Because such calculations are so difficult to perform, it is impossible to perform the number of calculations that would be required to predict the statistical properties of binary stellar systems – even if we knew the distribution of initial conditions. On the other hand, if we can overcome this second difficulty, we can use observations of binary systems to better constrain the initial conditions for star formation.

Bate & Bonnell \[Bate & Bonnell 1997\] quantified how the properties of a binary system are affected by the accretion of a small amount of gas from an infalling gaseous envelope. They found that the effects depend primarily on the specific angular momentum of the gas and the binary’s mass ratio (see also Artymowicz 1983; Bate 1997).
Generally, accretion of gas with low specific angular momentum decreases the mass ratio and separation of the binary, while accretion of gas with high specific angular momentum increases the separation, drives the mass ratio toward unity, and can form a circumbinary disc. From these results, they predicted that closer binaries should have mass ratios that are biased toward equal masses compared to wider systems.

In this paper, we use the results of Bate & Bonnell \[Bate & Bonnell 1997\] to develop a protobinary evolution code that enables us to follow the evolution of a protobinary system as it accretes from its initial to its final mass, but does so in far less time than would be required for a full hydrodynamic calculation. Using this code, we consider the following model for the formation of binary stellar systems. We assume that a ‘seed’ binary system is formed at the centre of a collapsing molecular cloud core, presumably via some sort of fragmentation. The protobinary system initially consists of only a small fraction of the total mass of the core. Subsequently, it accretes the remainder of the initial cloud (which is falling on to the binary) and its properties evolve due to the accretion. We consider the formation process to be complete when all of the original cloud’s material is contained either in one of the two stars or their surrounding discs. Our goal is to obtain predictions about the properties of binary stars that can be tested observationally, and to determine how these properties depend on the initial conditions (e.g. the density and angular momentum profiles) in the progenitor molecular cloud cores so that the initial conditions can be better constrained.

In Section 2, we describe the methods used to follow the evolution of accreting protobinary systems, and we present the results of various test calculations in Section 3. In Sections 4 and 5, we present results from calculations with a range of initial conditions which follow the evolution of accreting protobinary systems. From these results, in Section 6, we make predictions regarding the properties of binary systems and compare them with the latest observations. These predictions are briefly summarised in Section 7. Those readers more interested in our predictions of the properties of binary stars, rather than the method by which these predictions have been obtained, may care to move directly to Section 4.

## 2 Computational methods

### 2.1 Protobinary evolution code

Bate & Bonnell \[Bate & Bonnell 1997\] considered the effects that accretion of a small amount of gas from an infalling gaseous envelope has on the properties of a protobinary system. They quantified these effects as functions of the mass ratio of the protobinary and the specific angular momentum of the infalling gas. Therefore, if a ‘seed’ binary system is formed at the centre of a collapsing gas cloud, and we know its initial mass ratio and separation, the mass infall rate on to the binary, and the distribution of the specific angular momentum of the gas, then we can determine how the binary will evolve as it accretes infalling gas. Essentially, we can integrate the binary from its initial mass to its final mass by accreting the gas in a series of small steps and altering the masses of the binary’s components and their orbit by the amounts determined by Bate & Bonnell \[Bate & Bonnell 1997\] after each step. We now describe the implementation of the protobinary evolution (PBE) code.
We begin with the properties of the molecular cloud core, before it begins to collapse dynamically, from which the binary system will form. For simplicity, the progenitor cloud (see Figure 1) is assumed to be spherical with mass $`M_\mathrm{c}`$, radius $`R_\mathrm{c}`$, density distribution $$\rho =\rho _0\left(r/R_\mathrm{c}\right)^\lambda ,$$ (1) and angular velocity $$\mathrm{\Omega }_\mathrm{c}=\mathrm{\Omega }_0\left(r/R_\mathrm{c}\right)^\beta ,$$ (2) where $`\lambda `$ controls the initial central condensation of the cloud, and $`\beta `$ gives the amount of differential rotation of the cloud. Note that $`\mathrm{\Omega }_\mathrm{c}`$ is constant on spheres ($`r`$), not cylinders ($`r_{\mathrm{xy}}`$). Note also that if molecular cloud cores spend a large fraction of their lifetimes as magnetically-supported, quasistatic structures before undergoing dynamic collapse, they are likely to be in solid-body rotation and significantly centrally-condensed before the dynamic collapse begins. We assume that a ‘seed’ binary is formed at the centre of this molecular cloud core after it begins to collapse. The ‘seed’ binary contains only a small fraction of the total mass of the cloud. The masses of the primary and secondary are $`M_1`$ and $`M_2`$, respectively. The binary has a total mass $`M_\mathrm{b}`$, mass ratio $`q=M_2/M_1`$, separation $`a`$ and is in a circular orbit. We assume the binary has a circular orbit and that its axis of rotation is aligned with that of the cloud, since the PBE code makes use of the results of Bate & Bonnell \[Bate & Bonnell 1997\] and they only studied such systems. The ‘seed’ binary is assumed to have formed from the gas that was in a spherical region at the centre of the progenitor cloud of radius $`R_\mathrm{b}`$ (Figure 1). Unless otherwise stated, we assume that this spherical region initially had the same mass and angular momentum as the ‘seed’ binary. Thus, the initial angular momentum of the binary is given by $$L_\mathrm{b}=\sqrt{GM_\mathrm{b}^3a}\frac{q}{(1+q)^2}=\mathrm{\Lambda }M_\mathrm{b}R_\mathrm{b}^2\mathrm{\Omega }_{\mathrm{Rb}}=L_{\mathrm{cen}}$$ (3) where $`\mathrm{\Omega }_{\mathrm{Rb}}=\mathrm{\Omega }_0(R_\mathrm{b}/R_\mathrm{c})^\beta `$ and, for various values of $`\beta `$ and $`\lambda `$, $$\mathrm{\Lambda }=\frac{2\left(3+\lambda \right)}{3\left(5+\lambda +\beta \right)}.$$ (4) For example, for an initially uniform-density cloud in solid-body rotation, $`\mathrm{\Lambda }`$ = 2/5. As in Bate & Bonnell \[Bate & Bonnell 1997\], we use natural units of $`M_\mathrm{b}=1`$ and $`a=1`$ for the ‘seed’ binary, with $`G=1`$. In general, to evolve the binary under the accretion of gas from the remainder of the collapsing cloud, we would need to know the infall rate and the specific angular momentum distribution of the gas as functions of time (i.e. we would need to calculate how the cloud evolves as it collapses on to the binary). However, if we make two simple assumptions, we can calculate the evolution of the binary knowing only the density and angular momentum distributions of the cloud before it began to collapse. Furthermore, we can consider the evolution of the binary as a function of the mass that has been accreted from the envelope $`M_{\mathrm{acc}}`$, and do not need to keep track of time explicitly. First, we assume that the specific angular momentum of each element of gas in the cloud is conserved during its fall until it gets very close to the binary. 
Second, we assume that the time it takes for gas to fall on to the binary from a radius $`r`$ from the centre of the cloud is independent of direction (i.e. the gas falls on to the binary in spherical shells). These are reasonable assumptions if the molecular cloud core undergoes a dynamic collapse (i.e. the magnitude of its gravitational energy dominates the thermal, rotational and magnetic energies of the cloud). In principle, angular momentum transport between elements of gas could occur during collapse if the cloud has density inhomogeneities (via gravitational torques) or is threaded by magnetic fields (via magnetic torques). However, if the collapse is dynamic, these effects are unlikely to have enough time to transport a significant amount of angular momentum, and even cores which are initially magnetically sub-critical typically evolve into configurations which undergo a dynamic collapse (e.g. Basu 1997).

The integration of the binary from its initial mass, $`M_{\mathrm{acc}}=M_\mathrm{b}=1`$, to its final mass, $`M_{\mathrm{acc}}=M_\mathrm{c}`$ (which may exceed the final binary mass $`M_\mathrm{b}`$, since some of the gas could be contained in a circumbinary disc), under the accretion of the gas with $`r>R_\mathrm{b}`$ proceeds as follows (Figure 1). A spherical shell of gas with inner radius $`r=R_\mathrm{b}`$ and thickness $`\mathrm{\Delta }r`$ is divided into many elements such that the gas in each element has the same specific angular momentum (i.e. each ‘element’ consists of two rings of gas, one above and one below the orbital plane). The effect of each element of gas on the binary, when it is accreted, is determined using the results of Bate & Bonnell \[Bate & Bonnell 1997\] from its specific angular momentum, its mass, and $`q`$ (see below). The effects on the binary from all the gas elements making up a shell are added together, and then the binary’s properties ($`M_\mathrm{b}`$, $`q`$, $`a`$) are updated. The mass of material that is not captured by one of the two protostars but that remains in a circumbinary disc (if any) is also determined. The process is repeated for the shell of material at radius $`r=R_\mathrm{b}+\mathrm{\Delta }r`$ until the entire cloud of mass $`M_\mathrm{c}`$ has been exhausted.

The results from Bate & Bonnell \[Bate & Bonnell 1997\] are used to determine the effects on the binary of the accretion of an element of gas. The specific angular momentum of an element of gas relative to the specific angular momentum required for gas to form a circular orbit at a radius equal to the binary’s separation is given by

$$j_{\mathrm{rel}}=\frac{r_{\mathrm{xy}}^2\mathrm{\Omega }_\mathrm{c}}{\sqrt{GM_\mathrm{b}a}}$$ (5)

where $`r_{\mathrm{xy}}`$ is the cylindrical radius of the gas element from the axis of rotation in the progenitor cloud. To evolve the binary, we need to know the values of $`\dot{M}_1/\dot{M}_{\mathrm{acc}}`$, $`\dot{M}_2/\dot{M}_{\mathrm{acc}}`$ and $`(\dot{a}/a)/(\dot{M}_{\mathrm{acc}}/M_\mathrm{b})`$ as functions of $`j_{\mathrm{rel}}`$ and $`q`$, where $`\dot{M}_{\mathrm{acc}}`$ is the rate of accretion from the infalling envelope on to the binary. Note that we consider gas to be ‘accreted’ by one of the protostars if it is actually accreted onto the protostar itself or if it is captured in the protostar’s circumstellar disc (see also Section 3.2). These quantities were determined by Bate & Bonnell \[Bate & Bonnell 1997\] only at set intervals in $`q`$-$`j_{\mathrm{rel}}`$-space.
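In outline, the integration loop can be sketched as follows. This is our schematic reconstruction of the scheme just described, not the actual PBE code: the accretion-response functions must be supplied from the tabulated results of Bate & Bonnell (1997), and, for brevity, each shell is assigned its mean specific angular momentum from equation (7) below rather than being subdivided into rings.

```python
import numpy as np

def evolve_pbe(q0, Mc_over_Mb, lam, beta, tables, nshell=2000):
    """Shell-by-shell integration of an accreting 'seed' binary in natural
    units (M_b = a = G = 1). `tables` = (f1, f2, adot) supplies the responses
    Mdot_1/Mdot_acc, Mdot_2/Mdot_acc and (da/a)/(dM_acc/M_b) as functions of
    (q, j_rel); these must come from Bate & Bonnell (1997)."""
    f1, f2, adot = tables
    Lam = 2.0 * (3.0 + lam) / (3.0 * (5.0 + lam + beta))   # equation (4)
    x = 2.0 / (3.0 * Lam)
    Jb0 = q0 / (1.0 + q0) ** 2         # L_b/M_b of the seed binary, equation (3)
    q, a, Mb, Mcb = q0, 1.0, 1.0, 0.0
    M1, M2 = 1.0 / (1.0 + q), q / (1.0 + q)
    grid = np.linspace(1.0, Mc_over_Mb, nshell + 1)   # enclosed mass, M(r)/M_b
    for m_lo, m_hi in zip(grid[:-1], grid[1:]):
        dM = m_hi - m_lo               # envelope mass falling in from this shell
        j_rel = Jb0 * x * m_lo ** (x - 1.0) / np.sqrt(Mb * a)  # eqs (5) and (7)
        dM1, dM2 = f1(q, j_rel) * dM, f2(q, j_rel) * dM
        a *= 1.0 + adot(q, j_rel) * dM / Mb
        Mcb += dM - dM1 - dM2          # uncaptured gas joins a circumbinary disc
        M1, M2 = M1 + dM1, M2 + dM2
        Mb, q = M1 + M2, M2 / M1
    return q, a, Mb, Mcb

# Arbitrary smooth placeholder responses (interface only -- NOT the real
# tabulated values): low-j gas is fully captured, mostly by the primary;
# high-j gas favours the secondary and the circumbinary disc.
cap = lambda j: np.exp(-(j / 2.0) ** 2)        # fraction of dM captured at all
sec = lambda q, j: q * j / (1.0 + q * j)       # secondary's share of that gas
toy = (lambda q, j: cap(j) * (1.0 - sec(q, j)),
       lambda q, j: cap(j) * sec(q, j),
       lambda q, j: 2.0 * (j - 0.7))           # toy (da/a)/(dM_acc/M_b)
print(evolve_pbe(q0=0.6, Mc_over_Mb=10.0, lam=0.0, beta=0.0, tables=toy))
```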
Interpolation between these points is performed to determine the quantities for any given $`q`$ and $`j_{\mathrm{rel}}`$. Boundary conditions are used for $`q=1`$ and $`q=0`$. For $`q=1`$, we set $`\dot{M}_1/\dot{M}_{\mathrm{acc}}=\dot{M}_2/\dot{M}_{\mathrm{acc}}`$ explicitly, with $`(\dot{a}/a)/(\dot{M}_{\mathrm{acc}}/M_\mathrm{b})`$ being determined from Bate & Bonnell \[Bate & Bonnell 1997\]. For $`q=0`$, we set $`\dot{M}_2/\dot{M}_{\mathrm{acc}}=0`$ and

$$\dot{M}_1/\dot{M}_{\mathrm{acc}}=\dot{M}_\mathrm{b}/\dot{M}_{\mathrm{acc}}=\{\begin{array}{cc}1\hfill & \text{ if }j_{\mathrm{rel}}\le 1\text{,}\hfill \\ 0\hfill & \text{ otherwise}\hfill \end{array}$$ (6)

and $`(\dot{a}/a)/(\dot{M}_{\mathrm{acc}}/M_\mathrm{b})`$ is set equal to the values for $`q=0.1`$. The evolution of a binary with an initial mass ratio $`q\gtrsim 0.05`$ is insensitive to the assumptions for $`q=0`$. This was tested by setting the primary and secondary accretion rates for $`q=0`$ equal to those for $`q=0.1`$ and repeating the calculations. Note that we use the values of $`(\dot{a}/a)/(\dot{M}_{\mathrm{acc}}/M_\mathrm{b})`$ that were derived by Bate & Bonnell \[Bate & Bonnell 1997\] for the effects of accretion only. We do not include any effect on the binary’s separation due to the loss of orbital angular momentum to the gas in a circumbinary disc (if one exists).

From equation 3, we note that specifying the values of $`\mathrm{\Lambda }`$ and $`q`$ fixes the relationship between the mean specific angular momentum of a shell of gas, $`J(r)`$, at radius $`r`$, and the enclosed mass, $`M(r)`$, in the progenitor cloud. For all $`q`$,

$$\frac{J(r)}{J_\mathrm{b}}=\frac{2}{3\mathrm{\Lambda }}\left(\frac{M(r)}{M_\mathrm{b}}\right)^{\left(\frac{2}{3\mathrm{\Lambda }}-1\right)}$$ (7)

where $`J_\mathrm{b}=L_\mathrm{b}/M_\mathrm{b}`$ is the mean specific angular momentum of the ‘seed’ binary, which depends on $`q`$. Equation 7 is plotted for various types of molecular cloud core in Figure 2. With progenitor cores that are more centrally-condensed, the specific angular momentum of the first gas to fall on to the ‘seed’ binary from the envelope is greater and increases more rapidly as mass is accreted than in less centrally-condensed cores. For cores which have a greater amount of differential rotation ($`\beta <0`$), the specific angular momentum of the gas is lower to begin with and increases more slowly as mass is accreted.

Since $`J_\mathrm{b}`$ in equation 7 depends on the mass ratio, $`q`$, and separation, $`a`$, of the ‘seed’ binary, the angular momentum of the cloud, $`J(r)`$, is linked to the properties of the ‘seed’ binary (due to our use of equation 3). This can be viewed in two ways. First, if we consider a series of clouds with the same density and rotation profiles but that form ‘seed’ binaries with different mass ratios, then the separations of the binaries will be somewhat larger for those with smaller mass ratios and $`J_\mathrm{b}`$ is the same for all mass ratios. Alternatively, if we choose ‘seed’ binaries with the same separations, their progenitor clouds must be rotating more slowly for those binaries with lower mass ratios. We chose to use equation 3 because, for a ‘seed’ binary with a given mass ratio and separation, this gives the slowest possible rotation rate of the progenitor cloud. The progenitor cloud could be rotating more rapidly than this if some of the angular momentum of the gas from which the ‘seed’ binary formed is contained in circumstellar discs, but it could not be rotating more slowly.
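A few lines suffice to evaluate equation (7) for cloud types like those plotted in Figure 2 (a convenience snippet of ours; $`\lambda =-1`$ corresponds to the $`1/r`$-density cloud of test case 2 below, and $`\beta =-1`$ illustrates differential rotation):

```python
import numpy as np

def J_over_Jb(m, lam=0.0, beta=0.0):
    """Equation (7): mean specific angular momentum of the shell with
    enclosed mass m = M(r)/M_b, in units of the seed binary's J_b."""
    Lam = 2.0 * (3.0 + lam) / (3.0 * (5.0 + lam + beta))   # equation (4)
    x = 2.0 / (3.0 * Lam)
    return x * m ** (x - 1.0)

m = np.array([1.0, 2.0, 5.0, 10.0])
print(J_over_Jb(m))                # uniform, solid-body: J grows as m^(2/3)
print(J_over_Jb(m, lam=-1.0))      # 1/r-density cloud: steeper, J grows as m
print(J_over_Jb(m, beta=-1.0))     # differential rotation: shallower growth
```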
Finally, since choosing $`\mathrm{\Lambda }`$ and $`q`$ fixes the relationship between angular momentum and mass in the progenitor cloud (independent of $`M_\mathrm{c}/M_\mathrm{b}`$, $`R_\mathrm{c}`$ and $`\mathrm{\Omega }_0`$), it follows that the evolution proceeds in the same way for any value of $`M_\mathrm{c}/M_\mathrm{b}`$. Thus, a graph which shows the evolution of a particular protobinary system as it accretes gas from its initial to its final mass can also be viewed as giving the final states of binaries that have accreted a certain amount of mass relative to their initial mass (e.g. Figures 9 to 14). This is only the case because we have chosen clouds that have scale-free density and angular momentum profiles, and does not apply, for example, to clouds with Gaussian density profiles that have a fixed inner to outer density contrast (e.g. 20:1).

### 2.2 Smoothed particle hydrodynamics code

To test the PBE code described above, we compare its results with those obtained from full hydrodynamic calculations using a three-dimensional, smoothed particle hydrodynamics (SPH) code. The SPH code is based on a version originally developed by Benz (Benz 1990; Benz et al. 1990). The smoothing lengths of particles are variable in time and space, subject to the constraint that the number of neighbours for each particle must remain approximately constant at $`N_{\mathrm{neigh}}=50`$. The SPH equations are integrated using a second-order Runge-Kutta-Fehlberg integrator with individual time steps for each particle (Bate et al. 1995). Gravitational forces between particles and a particle’s nearest neighbours are found using either a binary tree, as in the original code, or the special-purpose GRAvity-piPE (GRAPE) hardware. The implementation of SPH using the GRAPE closely follows that described by Steinmetz \[Steinmetz 1996\]. Using the GRAPE attached to a Sun workstation typically results in a factor of 5 improvement in speed over the workstation alone.

Calculations were performed using three different forms of artificial viscosity. The scatter in the results from calculations with different viscosities is used to give an indication of the uncertainty in the SPH results. The viscosity, $`\mathrm{\Pi }_{\mathrm{ij}}`$, between particles $`i`$ and $`j`$ enters the momentum equation as

$$\frac{\mathrm{d}𝐯_\mathrm{i}}{\mathrm{d}t}=-\sum _\mathrm{j}m_\mathrm{j}\left(\frac{P_\mathrm{i}}{\rho _\mathrm{i}^2}+\frac{P_\mathrm{j}}{\rho _\mathrm{j}^2}+\mathrm{\Pi }_{\mathrm{ij}}\right)\nabla _\mathrm{i}W(r_{\mathrm{ij}},h_{\mathrm{ij}}),$$ (8)

where $`𝐯`$ is the velocity, $`t`$ is the time, $`m`$ is the particle’s mass, $`P`$ is the pressure, $`\rho `$ is the density, $`W`$ is the SPH smoothing kernel, $`r_{\mathrm{ij}}`$ is the distance between particles $`i`$ and $`j`$, and $`h_{\mathrm{ij}}`$ is the mean of the smoothing lengths of particles $`i`$ and $`j`$. All formulations of the viscosity have linear and quadratic terms which are parameterised by $`\alpha _\mathrm{v}`$ and $`\beta _\mathrm{v}`$, respectively.
The first form of viscosity is the ‘standard’ form, originally given by Monaghan & Gingold \[Monaghan & Gingold 1983\] (see also Monaghan 1992),

$$\mathrm{\Pi }_{\mathrm{ij}}=\{\begin{array}{cc}(-\alpha _\mathrm{v}c_{\mathrm{ij}}\mu _{\mathrm{ij}}+\beta _\mathrm{v}\mu _{\mathrm{ij}}^2)/\rho _{\mathrm{ij}}\hfill & 𝐯_{\mathrm{ij}}\cdot 𝐫_{\mathrm{ij}}\le 0\hfill \\ 0\hfill & 𝐯_{\mathrm{ij}}\cdot 𝐫_{\mathrm{ij}}>0\hfill \end{array}$$ (9)

where $`\rho _{\mathrm{ij}}=(\rho _\mathrm{i}+\rho _\mathrm{j})/2`$, $`c_{\mathrm{ij}}=(c_\mathrm{i}+c_\mathrm{j})/2`$ is the mean sound speed, $`𝐯_{\mathrm{ij}}=𝐯_\mathrm{i}-𝐯_\mathrm{j}`$, and

$$\mu _{\mathrm{ij}}=\frac{h_{\mathrm{ij}}𝐯_{\mathrm{ij}}\cdot 𝐫_{\mathrm{ij}}}{𝐫_{\mathrm{ij}}^2+0.01h_{\mathrm{ij}}^2}.$$ (10)

This form is known to have a large shear viscosity. The second form (Monaghan & Gingold 1983; Hernquist & Katz 1989) has much less shear viscosity but does not reproduce shocks quite as well as the ‘standard’ formalism. It is given by

$$\mathrm{\Pi }_{\mathrm{ij}}=\frac{q_\mathrm{i}}{\rho _\mathrm{i}^2}+\frac{q_\mathrm{j}}{\rho _\mathrm{j}^2}$$ (11)

where

$$q_\mathrm{i}=\{\begin{array}{cc}\alpha _\mathrm{v}h_\mathrm{i}\rho _\mathrm{i}c_\mathrm{i}|\nabla \cdot 𝐯|_\mathrm{i}+\beta _\mathrm{v}h_\mathrm{i}^2\rho _\mathrm{i}|\nabla \cdot 𝐯|_\mathrm{i}^2\hfill & (\nabla \cdot 𝐯)_\mathrm{i}<0\hfill \\ 0\hfill & (\nabla \cdot 𝐯)_\mathrm{i}\ge 0\hfill \end{array}$$ (12)

The third form is that proposed by Balsara \[Balsara 1989\] (see also Benz 1990), which is identical to the ‘standard’ form, except that $`\mu _{\mathrm{ij}}`$ is replaced by $`\mu _{\mathrm{ij}}(f_\mathrm{i}+f_\mathrm{j})/2`$ where

$$f_\mathrm{i}=\frac{|\nabla \cdot 𝐯|_\mathrm{i}}{|\nabla \cdot 𝐯|_\mathrm{i}+|\nabla \times 𝐯|_\mathrm{i}+\eta c_\mathrm{i}/h_\mathrm{i}}$$ (13)

and $`\eta =1.0\times 10^{-4}`$. In purely compressional flows this form gives the same results as the ‘standard’ viscosity, while in shearing flows the magnitude of the viscosity is reduced. The three forms of viscosity will be referred to as the Standard, $`\nabla \cdot 𝐯`$, and Balsara forms, respectively.
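For reference, the ‘standard’ pairwise term of Eqs. (9)–(10) and the Balsara switch of Eq. (13) translate directly into code. This is a single-pair sketch of ours, not an excerpt from the Benz code:

```python
import numpy as np

def pi_standard(r_ij, v_ij, h_ij, rho_i, rho_j, c_i, c_j,
                alpha_v=1.0, beta_v=2.0):
    """'Standard' artificial viscosity Pi_ij of Eqs. (9)-(10); r_ij and v_ij
    are the relative position and velocity vectors of one particle pair."""
    vdotr = np.dot(v_ij, r_ij)
    if vdotr > 0.0:                                   # receding pair: no viscosity
        return 0.0
    mu = h_ij * vdotr / (np.dot(r_ij, r_ij) + 0.01 * h_ij**2)      # Eq. (10)
    return (-alpha_v * 0.5 * (c_i + c_j) * mu + beta_v * mu**2) \
        / (0.5 * (rho_i + rho_j))                                  # Eq. (9)

def balsara_f(divv_i, curlv_i, c_i, h_i, eta=1.0e-4):
    """Balsara (1989) switch f_i of Eq. (13); the pair factor (f_i+f_j)/2
    multiplies mu_ij to suppress the viscosity in shear-dominated flows."""
    return abs(divv_i) / (abs(divv_i) + abs(curlv_i) + eta * c_i / h_i)
```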
Modelling of non-gaseous bodies (in this paper, the protostars), and the accretion of gas on to them, is achieved by the inclusion of sink particles (Bate et al. 1995; Bate 1995). These were also used by Bate & Bonnell \[Bate & Bonnell 1997\]. A sink particle is a non-gaseous particle with a mass many times larger than that of an SPH gas particle. Any SPH gas particle that passes within a specified radius of the sink particle, the accretion radius $`r_{\mathrm{acc}}`$, is accreted, with its mass, linear momentum, and spin angular momentum being added to those of the sink particle. Sink particles interact with SPH gas particles only via gravitational forces. Boundary conditions can also be included for particles near the accretion radius (Bate et al. 1995), but they are not used for the calculations in this paper. Although boundary conditions are typically required to stop the erosion of discs near the accretion radius (Bate et al. 1995), this is not necessary here because once a disc forms it is replenished by the infalling gas more rapidly than it is eroded.

## 3 Testing the Protobinary Evolution Code

To test how accurately the protobinary evolution (PBE) code (Section 2.1) describes the evolution of a ‘seed’ binary as it accretes from its initial to its final mass, we performed several full SPH calculations for comparison. Two test cases were performed. The first follows the formation of a binary system from the collapse of an initially uniform-density, spherical molecular cloud core in solid-body rotation. The ‘seed’ binary is assumed to have a mass ratio of $`q=0.6`$ and a mass of $`M_\mathrm{b}=M_\mathrm{c}/10=1`$. The second test case is similar, except that the progenitor cloud is centrally-condensed with a 1/r-density distribution and there is less mass in the envelope relative to the initial binary’s mass: $`M_\mathrm{b}=M_\mathrm{c}/5=1`$.

### 3.1 SPH initial conditions

#### 3.1.1 Test Case 1

The evolutions given by the PBE code are scale-free. However, for the SPH calculations, we choose to specify physical parameters. Test case 1 consists of the collapse of a uniform-density molecular cloud core in solid-body rotation with

$$\begin{array}{cc}M_\mathrm{c}=1.0\mathrm{M}_{\odot },\hfill & R_\mathrm{c}=1.0\times 10^{17}\mathrm{cm},\hfill \\ T=10\mathrm{K},\hfill & \mathrm{\Omega }_\mathrm{c}=3.13\times 10^{-14}\mathrm{rad}\mathrm{s}^{-1},\hfill \end{array}$$ (14)

where $`T`$ is the temperature of the cloud, and it is assumed to consist of molecular hydrogen. The ‘seed’ binary is assumed to form from the mass initially contained within the sphere of radius $`R_\mathrm{b}=4.64\times 10^{16}`$ cm, which has 1/10 of the cloud’s total mass. The initial conditions are evolved with $`2.0\times 10^5`$ SPH gas particles until the particles that were initially within $`R_\mathrm{b}`$ have collapsed to within $`r=2\times 10^{15}`$ cm of the centre. At this point, the calculation is stopped and the particles that were contained within $`R_\mathrm{b}`$ are removed and replaced by a ‘seed’ binary system with the same mass as those particles that were removed ($`M_\mathrm{b}=0.1\mathrm{M}_{\odot }`$). The binary has mass ratio $`q=0.6`$, separation $`a=1.0\times 10^{15}`$ cm and is in a circular orbit. Note that the rotation rate of the cloud, $`\mathrm{\Omega }_\mathrm{c}`$, is chosen so that the total angular momentum of the gas initially within $`R_\mathrm{b}`$ is equal to the orbital angular momentum of the ‘seed’ binary (the default used by the PBE code). The calculation is then restarted and the remaining gas begins to fall on to the binary. As the binary increases its mass by 5%, its mass ratio and separation are forced to evolve as predicted by the PBE code (to allow time for the gas to establish its correct flow pattern on to the binary). After this, however, the binary is free to evolve as the accretion of gas dictates.

#### 3.1.2 Test Case 2

The second test case consists of a 1/r-density molecular cloud core in solid-body rotation with

$$\begin{array}{cc}M_\mathrm{c}=1.0\mathrm{M}_{\odot },\hfill & R_\mathrm{c}=1.0\times 10^{17}\mathrm{cm},\hfill \\ T=10\mathrm{K},\hfill & \mathrm{\Omega }_\mathrm{c}=5.74\times 10^{-14}\mathrm{rad}\mathrm{s}^{-1}.\hfill \end{array}$$ (15)

The ‘seed’ binary is assumed to form from the mass initially contained within the sphere of radius $`R_\mathrm{b}=4.47\times 10^{16}`$ cm, which has 1/5 of the cloud’s total mass. The initial conditions are evolved with $`1.0\times 10^5`$ SPH gas particles (half the number that were used for test case 1 due to the smaller mass of the cloud relative to the ‘seed’ binary) in an identical manner to test case 1.

### 3.2 Method

While the gas is modelled using SPH particles in the usual manner, the binary’s components (the protostars) are modelled with sink particles (Section 2.2).
Their accretion radii change in size as the mass ratio and separation of the binary changes and are always equal to $`r_{\mathrm{acc}}=0.1R_\mathrm{i}`$ where $`R_\mathrm{i}`$ are the sizes of the mean Roche lobes of the two protostars \[Frank, King & Raine 1985\]

$$R_1/a=0.38-0.20\mathrm{log}q,\qquad 0.05<q\le 1,$$ (16)

for the primary and

$$R_2/a=\{\begin{array}{cc}0.38+0.20\mathrm{log}q,\hfill & 0.5\le q<1,\hfill \\ 0.462(q/(1+q))^{1/3},\hfill & \mathrm{\hspace{0.17em}0}<q<0.5,\hfill \end{array}$$ (17)

for the secondary. This enables circumstellar discs of size $`\gtrsim 0.2R_\mathrm{i}`$ to be resolved by the SPH calculations. Smaller accretion radii would allow smaller discs to be resolved, but would also require more computational time due to the shorter dynamical time-scales. As in Bate & Bonnell \[Bate & Bonnell 1997\], we define the mass of a protostar ($`M_1`$ or $`M_2`$) to be the mass of a sink particle and its circumstellar disc (if any). Hence, when stating the masses of the two components or mass ratio of the binary we are assuming that, in reality, all of the material in a circumstellar disc will eventually be accreted by its central star. Therefore, the binary’s mass $`M_\mathrm{b}`$ is equal to the masses of the sink particles and their circumstellar discs. The parameters of the binary’s orbit are calculated by considering these masses. A gas particle belongs to a circumstellar disc if its orbit, calculated considering only one sink particle at a time, has eccentricity $`<0.5`$, and semi-major axis $`<D/2`$. This simple criterion gives excellent extraction of the circumstellar discs. The mass of material in a circumbinary disc $`M_{\mathrm{cb}}`$ (if any) is defined as being the gas which has an outwards (positive) radial velocity or which is falling on to the binary more slowly than 1/3 of the local free-fall velocity (i.e. $`v_\mathrm{r}>-\frac{1}{3}\sqrt{2GM_\mathrm{b}/r}`$) and which is not in one of the circumstellar discs as defined above or within distance $`a`$ of the binary’s centre of mass. The amount of gas that has fallen on to the binary from the envelope $`M_{\mathrm{acc}}`$ is defined as $`M_\mathrm{b}+M_{\mathrm{cb}}`$ plus the remaining gas within distance $`a`$ of the binary’s centre of mass.

The equation of state of the gas is given by

$$P=K\rho ^\gamma ,$$ (18)

where $`P`$ is the pressure, $`\rho `$ is the density, and $`K`$ is a constant that depends on the entropy of the gas. The ratio of specific heats $`\gamma `$ varies with density as

$$\gamma =\{\begin{array}{cc}1.0,\hfill & \rho \le 10^{-\eta }\mathrm{g}\mathrm{cm}^{-3},\hfill \\ 1.4,\hfill & \rho >10^{-\eta }\mathrm{g}\mathrm{cm}^{-3}.\hfill \end{array}$$ (19)

The value of $`K`$ is defined such that when the gas is isothermal $`K=c_\mathrm{s}^2`$, with $`c_\mathrm{s}=2.0\times 10^4`$ cm s<sup>-1</sup> for molecular hydrogen at 10 K. We choose $`\eta =12`$ for test case 1 and $`\eta =13`$ for test case 2. The heating of the gas at high densities inhibits fragmentation of the discs (e.g. Bonnell 1994; Bonnell & Bate 1994a; Burkert & Bodenheimer 1996; Burkert et al. 1997). In reality, this heating occurs when the gas becomes optically thick to infrared radiation. The results are independent of the exact density at which heating starts, so long as the gas which falls on to the binary from the envelope is ‘cold’ (i.e. pressure forces are not dynamically important so that the gas falls freely on to the binary), the discs are relatively thin, and the discs do not fragment.
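The accretion radii of Eqs. (16)–(17) are simple to implement (a convenience sketch of ours):

```python
import numpy as np

def accretion_radii(q, a):
    """Sink-particle accretion radii r_acc = 0.1 R_i, with the mean Roche-lobe
    sizes R_1, R_2 from Eqs. (16)-(17) (Frank, King & Raine 1985)."""
    R1 = a * (0.38 - 0.20 * np.log10(q))           # Eq. (16), valid 0.05 < q <= 1
    if q >= 0.5:
        R2 = a * (0.38 + 0.20 * np.log10(q))       # Eq. (17), 0.5 <= q < 1
    else:
        R2 = a * 0.462 * (q / (1.0 + q)) ** (1.0 / 3.0)   # Eq. (17), q < 0.5
    return 0.1 * R1, 0.1 * R2

# Seed binary of test case 1: q = 0.6, a = 1.0e15 cm
print(accretion_radii(0.6, 1.0e15))   # the two accretion radii in cm
```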
### 3.3 Test Case 1

The evolutions of test case 1 as given by the PBE and SPH codes are presented in Figure 3. We give the binary’s mass ratio $`q`$, separation $`a`$, the ratio of the mass of the circumbinary disc (if one exists) to the mass of the binary $`M_{\mathrm{cb}}/M_\mathrm{b}`$, and the binary’s eccentricity $`e`$, as functions of the mass which has been accreted from the cloud $`M_{\mathrm{acc}}`$ (up to 10 times the binary’s initial mass). In addition, for two of the SPH calculations, we show snapshots at various times during the calculations (Figures 4 and 5).

#### 3.3.1 PBE calculation

The PBE code predicts that after the binary has accreted all the gas in the cloud, its mass ratio has increased from $`q=0.6`$ to $`q\approx 0.84`$ while its separation, after decreasing initially, has returned to approximately the initial value. The binary’s final mass is 8.8 times its initial mass, with the remaining 12% of the cloud located in a circumbinary disc. With the PBE code, the binary is assumed to remain in a circular orbit throughout the evolution.

#### 3.3.2 SPH calculations

Three SPH calculations were performed using different formulations of the artificial viscosity: S1: Standard, with $`\alpha _\mathrm{v}=1`$ and $`\beta _\mathrm{v}=2`$; S2: Standard, with $`\alpha _\mathrm{v}=0.5`$ and $`\beta _\mathrm{v}=2`$; and B1: Balsara, with $`\alpha _\mathrm{v}=0.25`$ and $`\beta _\mathrm{v}=1`$. S1 and S2 have the same formulation, but S2 has approximately half the shear viscosity that is present in S1 (the linear $`\alpha _\mathrm{v}`$-viscosity dominates the shear viscosity in the code; the $`\beta _\mathrm{v}`$-viscosity is only important in shocks). B1 has the lowest shear viscosity of the three, but also has a different formulation.

Unfortunately, the SPH calculations cannot be evolved until all of the original cloud has been accreted. The reason is that as the evolution proceeds, the rate at which mass falls on to the binary decreases and, thus, more orbits of the binary must be calculated for the same amount of mass to fall on to the binary. For example, the increase from $`M_\mathrm{b}=1`$ to $`M_\mathrm{b}=2`$ takes $`\sim 5`$ orbits, while the increase from $`M_\mathrm{b}=4`$ to $`M_\mathrm{b}=5`$ takes $`\sim 14`$ orbits. The CPU time required to evolve the binary until the entire cloud falls in is prohibitive, which, after all, is the reason that we developed the PBE code in the first place. It takes $`\sim 60`$ orbits for the binary to increase its mass by a factor of 6 (i.e. $`\sim 60`$% of the total cloud was accreted). Each of the three SPH calculations took 4–5 months on a 170 MHz Sun Ultra workstation with a GRAPE board. The evolution with the PBE code takes a few seconds!

#### 3.3.3 Evolution of the mass ratio and separation

Although the SPH calculations do not run to completion, we can compare the evolution as the binary’s mass increases by a factor of 5–6. Overall, there is excellent agreement between the PBE and SPH codes for the evolution of the mass ratio and separation. The mass ratio is predicted to within 5% over the entire evolution and the separation to within 20%. Indeed, there is as much scatter between the different SPH results as there is between the PBE results and the SPH results.
Thus, for the evolution of the mass ratio and separation, we conclude that the PBE code is at least as accurate as a full SPH calculation of this resolution. Notice also that the PBE results (which assumed $`e=0`$) and SPH results are in good agreement even though the eccentricity varies between $`0<e\lesssim 0.2`$ during the SPH calculations. This implies that the evolution given by the PBE code is satisfactory, not just for circular binaries, but for binaries with $`e\lesssim 0.2`$.

#### 3.3.4 Evolution of the circumbinary disc

The good agreement for the mass ratio and separation does not hold for the evolution of the circumbinary disc. The PBE code predicts that more gas will settle into a circumbinary disc than is found in the SPH calculations. Cases S1 and S2 do not form circumbinary discs at all for $`M_{\mathrm{acc}}<6`$, while B1 only begins to form a circumbinary disc when $`M_\mathrm{b}\gtrsim 4.3`$ (Figures 3c and 5). The difference between S1/S2 and B1 can be attributed to the much larger shear viscosity which is present in S1 and S2 compared to B1. Greater shear viscosity inhibits the formation of a circumbinary disc by removing angular momentum from the gas which would otherwise settle into a circumbinary disc. Another problem is that, in all of the SPH calculations, the binary has a larger separation than is predicted by the PBE code when $`M_{\mathrm{acc}}\gtrsim 2.3`$, which also inhibits the formation of a circumbinary disc. These problems of circumbinary disc formation are the main reason that we performed test case 2, and we defer further discussion to Section 3.4.

#### 3.3.5 Dependence of the results on the circumstellar discs

Returning to the evolution of the mass ratio and separation, although the overall agreement is good, there are two differences which need to be commented on briefly. These differences are related to the way in which the PBE and SPH codes handle the circumstellar discs.

First, S1 and S2 both give a slower rate of increase of the mass ratio initially than is predicted by the PBE code (Figure 3a; $`1\lesssim M_{\mathrm{acc}}\lesssim 2`$). This reflects a problem with the SPH calculations rather than with the PBE code: namely, the size of the accretion radius around the secondary is initially of a similar size to that of the circumstellar disc which should form. This stops a circumsecondary disc from forming, which reduces the secondary’s cross-section for capturing infalling gas and, so, reduces the amount of gas captured by the secondary. Only when $`M_{\mathrm{acc}}\gtrsim 2.0`$ is the specific angular momentum of the gas being captured by the secondary large enough for it to form a circumsecondary disc outside the accretion radius (Figure 4). From this point on, the mass ratio increases at the rate predicted by the PBE code. B1 has a similar problem, but for a slightly different reason. B1 has much less shear and bulk viscosity in shearing flows. In order for infalling gas to form a resolved disc around the secondary, it must, first, dissipate most of its kinetic energy so that it is captured by the secondary and, second, dissipate enough kinetic energy that the gas particles have roughly circular orbits. If the gas does not dissipate enough kinetic energy to be captured by the secondary on its first pass, it is likely to be captured by the primary instead, which is deeper in the gravitational potential well.
If a gas particle is captured by the secondary but still has a very elliptical orbit, it will pass inside the secondary’s accretion radius and be removed. Thus, the low bulk viscosity of B1 means that the formation of a resolved circumstellar disc around the secondary is delayed until $`M_{\mathrm{acc}}\gtrsim 3.9`$ (Figure 5), and the cross-section of the secondary is underestimated until this point. As with S1 and S2, however, as soon as the circumsecondary disc is resolved ($`M_{\mathrm{acc}}\gtrsim 3.9`$), the mass ratio begins to increase at the rate predicted by the PBE code (Figure 3a).

Second, the PBE code does not include viscous evolution of the circumstellar discs. In reality, and in the SPH calculations, such viscous evolution results in the transfer of angular momentum from the discs to the orbit of the binary, increasing the separation. In order for us to be able to neglect viscous evolution of the circumstellar discs, this transfer of angular momentum must be negligible over the time for most of the envelope to be accreted. Viscous evolution of the circumstellar discs on a longer time-scale may increase the final separation of the binary by a small factor, but the masses of the binary’s components will have been determined during the accretion phase.

For wide binaries, viscous evolution is unlikely to be significant. For example, the free-fall time for a dense molecular cloud core ($`10^{-18}`$ g cm<sup>-3</sup>) is $`\approx 10^5`$ years. The envelope will fall on to the binary on this time-scale. The viscous time for one of the circumstellar discs is of order
$$t_\mathrm{v}\approx \frac{P}{2\pi \alpha _{\mathrm{SS}}}\left(\frac{R}{H}\right)^2,$$ (20)
where $`P`$ is the period of the binary, $`\alpha _{\mathrm{SS}}`$ measures the strength of the shear viscosity using the standard Shakura–Sunyaev prescription, and $`H/R`$ is the ratio of the thickness of the disc to its radius (typically $`\approx 0.1`$; Burrows et al. 1996; Stapelfeldt et al. 1998). Observationally, it is thought that the shear viscosity acting in protostellar accretion discs is of order $`\alpha _{\mathrm{SS}}\approx 0.01`$ \[Hartmann et al. 1998\]. Thus, taking the median binary separation of 30 AU \[Duquennoy & Mayor 1991\] and a typical protobinary mass of $`0.1\mathrm{M}_{\odot }`$, the viscous time in one of the circumstellar discs is $`\approx 10^6`$ years, an order of magnitude longer than the free-fall time.
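As a quick sanity check on these order-of-magnitude estimates (a sketch only; the free-fall expression $`t_{\mathrm{ff}}=\sqrt{3\pi /32G\rho }`$ and Kepler’s third law are the standard ones, not taken from the PBE code):

```python
import numpy as np

G = 6.674e-8                          # cgs
Msun, AU, yr = 1.989e33, 1.496e13, 3.156e7

# Free-fall time, t_ff = sqrt(3 pi / (32 G rho)), for rho = 1e-18 g cm^-3
rho = 1e-18
t_ff = np.sqrt(3 * np.pi / (32 * G * rho)) / yr
print(f"free-fall time ~ {t_ff:.1e} yr")      # ~7e4 yr, i.e. ~1e5 yr

# Binary period from Kepler's third law for a = 30 AU, M = 0.1 Msun
a, M = 30 * AU, 0.1 * Msun
P = 2 * np.pi * np.sqrt(a**3 / (G * M)) / yr  # ~5e2 yr

# Viscous time, equation (20), with alpha_SS = 0.01 and H/R = 0.1
alpha_SS, H_over_R = 0.01, 0.1
t_v = P / (2 * np.pi * alpha_SS) / H_over_R**2
print(f"viscous time   ~ {t_v:.1e} yr")       # ~8e5 yr, i.e. ~1e6 yr

# The same formula with P ~ 600 yr and the effective SPH viscosity
# alpha_SS ~ 0.2 estimated below gives ~5e4 yr, as quoted there.
print(f"SPH-case t_v   ~ {600 / (2 * np.pi * 0.2) / 0.1**2:.1e} yr")
```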
For close binaries (separations $`\lesssim 5`$ AU) viscous evolution may be expected to have some effect during the accretion phase. However, in most cases, a close binary is expected to have a circumbinary disc (see Section 6.5) and interactions between it and the binary will dominate the interactions between the circumstellar discs and the binary (Section 3.4.4).

In SPH calculations S1 and S2, viscous evolution of the circumstellar discs does affect the binary’s separation during the calculation. For example, in calculation S1, when $`M_{\mathrm{acc}}\gtrsim 3`$ the separation increases more rapidly than predicted by the PBE code and, when $`M_{\mathrm{acc}}=6`$, after $`\approx 60`$ orbits ($`\approx 4\times 10^4`$ years), the separation is 20% larger than predicted. However, we estimate (Pongracic 1988; Meglicki, Wickramasinghe & Bicknell 1993; Artymowicz & Lubow 1994) the effective viscosity in the circumstellar discs to be $`\alpha _{\mathrm{SS}}\approx 0.2`$. With an orbital period of $`\approx 600`$ years over most of the evolution, this gives a viscous time of $`\approx 5\times 10^4`$ years, which is similar to the time over which the calculation was followed. Therefore, it is not surprising that the separation was affected. We note, however, that this viscosity is $`\approx 20`$ times stronger than is expected in real protostellar discs and, thus, the effect on a real protobinary system over this period would have been negligible, as is assumed by the PBE code. Calculation S2 has approximately half the shear viscosity of S1 and its separation is correspondingly closer to the PBE code prediction. B1 has much less shear viscosity than S1 or S2. Consequently, the rate of change of separation when $`M_{\mathrm{acc}}\gtrsim 3`$ is very close to that predicted by the PBE code. Instead, with B1, the differences in the evolution of the separation occurred earlier in the evolution ($`1.5\lesssim M_{\mathrm{acc}}\lesssim 2.5`$).

We conclude that it is reasonable for the PBE code to neglect the effect on the separation of viscous evolution of the circumstellar discs, since this is only likely to be significant for close binaries, and in these cases the effect is likely to be overwhelmed by the interaction between the binary and a circumbinary disc. We also find that the small differences between the PBE and SPH results for the evolution of the mass ratio and separation reflect the unphysical treatment of the circumstellar discs by the SPH code rather than a problem with the PBE code.

#### 3.3.6 Eccentricity evolution

Finally, we comment on the eccentricity of the binary as given by the SPH code. We see that initially, independent of the viscosity, the binary becomes eccentric due to the accretion of gas, with a peak eccentricity of $`e\approx 0.14`$. The eccentricity then displays a secular decrease, as the accretion rate on to the binary also decreases, while oscillating on an orbital time-scale. Such periodic changes of the orbital elements are known from analytic solutions of binary motion under a perturbing force (Kopal 1959), in this case the accreting gas. Later in the evolution, the eccentricity displays a secular increase which depends on the shear viscosity in the calculations: the greater the shear viscosity, the earlier the increase begins. As with the overestimate of the rate of change of orbital separation that was given by S1 and S2 when $`M_{\mathrm{acc}}\gtrsim 3`$, the eccentricity increases due to the unphysically rapid transfer of angular momentum and energy from the circumstellar discs into the binary’s orbit.

### 3.4 Test case 2

The second test case was chosen primarily to study the differences that appeared in test case 1 between the PBE and SPH codes regarding the formation of a circumbinary disc. A more centrally-condensed initial cloud results in the gas which first falls on to the binary having more specific angular momentum than with a uniform-density cloud, and in a more rapid increase of the specific angular momentum of the gas as the accretion proceeds (see Section 4.2). Therefore, a circumbinary disc should form earlier than with test case 1 and provide us with a better test for the evolution of the circumbinary disc. The evolutions given by the PBE code and two different SPH calculations are shown in Figure 6, with snapshots from the SPH calculations in Figures 7 and 8.
#### 3.4.1 PBE calculation

The PBE code predicts that at the end of the evolution the mass ratio should have increased from $`q=0.6`$ to $`q\approx 0.88`$, while the separation, after decreasing slightly at the beginning, finally ends up at $`\approx 2.3`$ times the initial separation. The mass of the circumbinary disc becomes significant ($`M_{\mathrm{cb}}/M_\mathrm{b}\gtrsim 1/20`$) at $`M_{\mathrm{acc}}\approx 1.5`$, and at the end of the calculation the circumbinary disc contains $`\approx 13`$% of the total mass.

#### 3.4.2 SPH calculations

The SPH calculations had two different forms of artificial viscosity: B1: Balsara, with $`\alpha _\mathrm{v}=0.25`$ and $`\beta _\mathrm{v}=1`$; D1: $`\nabla \cdot 𝐯`$, also with $`\alpha _\mathrm{v}=0.25`$ and $`\beta _\mathrm{v}=1`$. Both formulations have low shear viscosity. The bulk viscosities are similar in non-shearing flows, but in shearing flows B1 has less bulk viscosity. We do not use the Standard viscosity formulation (S1 or S2 in test case 1) because of our finding that its large shear viscosity inhibits the formation of a circumbinary disc and leads to rapid evolution of the circumstellar discs.

As with test case 1, due to the computational cost, the SPH calculations are stopped before all of the gas has fallen on to the binary. Each calculation took $`\approx 4`$ months on a 300 MHz Sun Ultra workstation (using the binary tree, not a GRAPE board). During the calculations, the binaries performed $`\approx 60`$ orbits and $`\approx 70`$% of the total mass was accreted by the binary or settled into a circumbinary disc.

#### 3.4.3 Evolution of the mass ratio

The agreement for the evolution of the mass ratio is even better than it was with test case 1, with differences of $`\lesssim 3`$%. This is because, in this test case, the specific angular momentum of the gas being captured by the secondary is large enough for it to form a circumsecondary disc outside the accretion radius as soon as the SPH calculations begin, regardless of the viscosity (Figures 7 and 8). The SPH calculations generally give a slightly higher mass ratio than is predicted by the PBE code, which is exactly what is expected because the PBE code ignores the separation-decreasing effect of the interaction between the binary and the circumbinary disc (a smaller binary separation means that the specific angular momentum of the gas is higher relative to that of the binary, resulting in more gas being captured by the secondary).

#### 3.4.4 Circumbinary disc evolution and interactions

For test case 2, the differences between the PBE and SPH results involve the formation of the circumbinary disc and its interaction with the binary, the latter of which is not accounted for in the PBE code. Generally, a binary will interact with a circumbinary disc so as to transfer angular momentum from its orbit into the gas of the circumbinary disc. This tends to decrease the separation of the binary and increase its eccentricity (Artymowicz et al. 1991; Lubow & Artymowicz 1996). Both these effects can be seen in the SPH calculations. The separation follows the prediction of the PBE code to better than $`\approx 3`$% until the circumbinary disc begins to form, after which it is always smaller than predicted by the PBE code. As in test case 1, the eccentricity grows at the start of the calculation and then displays a secular decrease with time.
However, when the mass of the circumbinary disc exceeds $`0.05M_\mathrm{b}`$, the transfer of angular momentum from the binary to the circumbinary disc overwhelms the eccentricity-decreasing effect of the infalling gas and the eccentricity increases.

As with test case 1, different formulations of the SPH viscosity give slightly different evolutions. D1 results in earlier formation of a circumbinary disc than B1, although at $`M_{\mathrm{acc}}=2`$ and $`M_{\mathrm{acc}}=3.5`$ the circumbinary disc masses only differ by $`\approx 30`$%. More importantly, we find that B1 gives very poor resolution of the spiral shocks in the circumbinary disc (created by gravitational torques from the binary) due to its lower bulk viscosity in shearing flows compared to D1 (cf. Figures 7 and 8). The better shock resolution of D1 results in more efficient angular momentum transport from the binary’s orbit to the circumbinary disc, which leads to a smaller binary separation and, consequently, more of the infalling gas settling into the circumbinary disc than with B1. This can be seen in the rapid decrease in separation of D1 compared to B1 when $`2.0\lesssim M_{\mathrm{acc}}\lesssim 2.6`$ (Figure 6b) and, simultaneously, the more rapid increase in the mass of the circumbinary disc for D1 (Figure 6c). From the point of view of the ability to resolve shocks in the circumbinary disc, D1 is more realistic than B1, and we emphasise that extreme care should be used when employing the Balsara formulation of viscosity in shearing flows. Given that D1 appears to give more realistic results than B1, it is encouraging to note that over the entire evolution the PBE code predicts the circumbinary disc mass to within a factor of 2 of that given by the D1 SPH calculation (Figure 6).

### 3.5 Conclusions and limitations of the PBE code

In summary, in the absence of a massive circumbinary disc, the PBE code gives results which are in good qualitative and quantitative agreement with the full hydrodynamic calculations. In all cases, the binary’s mass ratio is predicted to within 5% of the SPH results over an increase in the binary’s total mass by a factor of up to 6! In test case 1, the binary’s separation is predicted to within 20% by the PBE code. Furthermore, in test case 1, even these small differences can be attributed to the SPH code’s inability to resolve the circumsecondary disc during parts of the calculations, or to unphysically rapid viscous evolution of the circumstellar discs, rather than to a problem with the PBE code. Thus, the results from the PBE code are at least as accurate as those given by a full SPH calculation, but the PBE code is $`10^6`$ times faster. This makes it possible to investigate the statistical properties of binary stars (see the next section).

In cases where a circumbinary disc forms around the binary, the PBE code generally predicts that the disc forms earlier than in the full SPH calculations, especially in the more borderline case of a progenitor molecular cloud core with uniform density. However, in the best case for studying the formation of a circumbinary disc (test case 2, D1) the time of formation of the circumbinary disc was in good agreement and its mass was predicted to within a factor of 2 throughout the evolution. The main omission in the PBE code is that, if a massive circumbinary disc is formed ($`M_{\mathrm{cb}}/M_\mathrm{b}\gtrsim 1/20`$), the code ignores the interaction between it and the binary.
Omitting this interaction leads to the separation of the binary being overestimated by the PBE code. From the SPH results, we find that if a massive circumbinary disc is formed, the binary’s separation is likely to remain approximately constant as it accretes from a gaseous envelope. The overestimate of the separation by the PBE code means that the mass of the circumbinary disc is likely to be underestimated and, for the same increase in the binary’s mass, the binary’s mass ratio will be underestimated. We note, however, that this omission serves only to strengthen the predictions of the properties of binary stars that we obtain in Section 6.

## 4 Protobinary evolution

We now use the PBE code to study the evolution and final states of binaries that are formed from the collapse of six types of molecular cloud core. We study binaries that are formed from clouds with three different initial density profiles: uniform density, and power-law profiles of $`\rho \propto 1/r`$ and $`\rho \propto 1/r^2`$ (i.e. $`\lambda `$ = 0, −1, and −2). For each of these three density profiles, we study two different initial rotation profiles: solid-body rotation, and $`\mathrm{\Omega }_\mathrm{c}\propto 1/r`$ (i.e. $`\beta `$ = 0 and −1). These initial conditions are highly idealised. However, we use them to illustrate how the properties of a binary depend on the degree of central condensation and the amount of differential rotation in its progenitor molecular cloud core. In fact, in several cases the initial conditions represent extremes; the initial conditions before dynamic collapse occurs are almost certainly somewhat centrally condensed, but are almost certainly less centrally condensed than $`\rho \propto 1/r^2`$. In Section 6, we use the conclusions reached from these calculations to predict the properties of binaries and to constrain the properties of the molecular cloud cores from which they formed.

For each type of cloud (Figures 9 to 13), we consider the evolution of ‘seed’ binaries that form with initial mass ratios in the range $`0.1\le q\le 1.0`$. We give the mass ratio $`M_2/M_1`$, separation $`a`$, ratio of the mass of the circumbinary disc compared to the binary’s mass $`M_{\mathrm{cb}}/M_\mathrm{b}`$, and relative accretion rate $`\dot{M}_2/\dot{M}_1`$, as functions of the total mass that has fallen on to the binary system from the envelope, $`M_{\mathrm{acc}}`$. As mentioned in Section 2.1, the figures that are produced can be viewed in two ways: either as giving the evolution of individual protobinary systems as they evolve from their initial mass ($`M_{\mathrm{acc}}=M_\mathrm{b}=1`$) to a higher total mass (up to $`M_{\mathrm{acc}}=100`$), or, for the first three panels in each figure, as giving the final states of binaries that initially began with masses ranging from 1% ($`M_{\mathrm{acc}}=100`$ when the accretion stops) to 100% ($`M_{\mathrm{acc}}=1`$ if there is no accretion on to the protobinary) of the cloud’s total mass $`M_\mathrm{c}`$.
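Before turning to the individual cases, it is useful to make explicit how the specific angular momentum of the infalling gas scales with enclosed mass for these idealised clouds (this is the content of Figure 2). For $`\rho \propto r^\lambda `$ the mass enclosed within radius $`r`$ scales as $`M(r)\propto r^{3+\lambda }`$, while material initially at radius $`r`$ carries $`j\propto \mathrm{\Omega }r^2\propto r^{2+\beta }`$, so that, ignoring order-unity geometric factors, $`j\propto M^{(2+\beta )/(3+\lambda )}`$. The short sketch below (illustrative only; it is not part of the PBE code) tabulates this exponent for the six cloud types:

```python
# Exponent alpha in j ∝ M^alpha for a cloud with rho ∝ r^lambda and
# Omega ∝ r^beta: M(r) ∝ r^(3+lambda) and j(r) ∝ Omega r^2 ∝ r^(2+beta),
# so alpha = (2 + beta) / (3 + lambda).  Geometric factors are ignored.
for lam in (0, -1, -2):
    for beta in (0, -1):
        alpha = (2 + beta) / (3 + lam)
        print(f"rho ∝ r^{lam:>2}, Omega ∝ r^{beta:>2}:  j ∝ M^{alpha:.2f}")
```

The larger the exponent, the sooner the binary enters the high-angular-momentum regime: solid-body uniform-density clouds give $`j\propto M^{2/3}`$, the $`1/r^2`$ solid-body case gives the extreme $`j\propto M^2`$, and differential rotation lowers every exponent. Note also that the $`(\lambda ,\beta )=(2,1)`$ and $`(1,0)`$ cases share the same exponent ($`j\propto M`$), which is why their evolutions turn out to be identical (Section 4.6).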
### 4.1 Solid-body rotation, uniform-density profile

Figure 9 gives the results for binaries formed from initially uniform-density clouds in solid-body rotation. The long-term evolution of the mass ratios is that they increase toward unity (equal-mass components). The separation initially decreases, due to the accretion of gas with low mean specific angular momentum, but increases in the long term. Both of these long-term effects are a consequence of the fact that the specific angular momentum of the infalling gas increases quickly as mass is accreted (Figure 2), since the accretion of material with high specific angular momentum increases the mass ratio and separation, while the accretion of material with low specific angular momentum decreases both the mass ratio and separation (Artymowicz 1983; Bate 1997; Bate & Bonnell 1997).

The different evolutions for the different ‘seed’ mass ratios are in large part due to the way we have chosen the initial conditions (Section 2.1). We assumed that the central region of the progenitor cloud from which the ‘seed’ binary formed had the same angular momentum as that contained in the orbit of the ‘seed’ binary (equation 3). For a ‘seed’ binary with a given mass ratio and separation these initial conditions give the slowest possible rotation rate of the progenitor core and, therefore, the slowest possible evolution toward equal mass ratios and the formation of circumbinary discs. However, they also mean that the gas which is accreted by a ‘seed’ binary with a low mass ratio has less specific angular momentum than for a binary with a high mass ratio. Hence, the mass ratio of a ‘seed’ binary with a small mass ratio tends to decrease initially, while that of a binary with a large mass ratio tends to increase.

This dependence of the rotation rate of the progenitor cloud on the mass ratio of the ‘seed’ binary also largely explains why the evolution of the separation (Figure 9b) and circumbinary disc (Figure 9c) differ for different initial mass ratios. In all cases, the separation decreases initially, but it does so more quickly and for longer in systems with smaller initial mass ratios because the specific angular momentum of the infalling gas is lower. Likewise, the formation of a circumbinary disc requires more accretion for systems with lower initial mass ratios. However, for the formation of a circumbinary disc, there is an additional effect. For a system with a low mass ratio, the secondary is at a larger radius from the binary’s centre of mass than in a system with a high mass ratio. Therefore, the infalling gas must have more specific angular momentum before it can form a circumbinary disc rather than be captured by the secondary.

In all cases, the long-term evolution is that the binary reaches a steady state in which a constant fraction of the infalling gas is captured by the binary, with the rest going into the circumbinary disc and $`\dot{M}_{\mathrm{cb}}/\dot{M}_\mathrm{b}\approx 0.17`$. Since the specific angular momentum of the gas that falls on to the binary is always increasing (Figure 2), this indicates that a steady state is established in which the binary continually adjusts to the accretion so that the ratio of the specific angular momentum of the infalling gas to $`\sqrt{GM_\mathrm{b}a}`$ (which is related to the specific angular momentum of the binary) is constant (see equation 5).
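A rough corollary of this steady state is worth spelling out (this is our own back-of-the-envelope extrapolation, not a quoted PBE result). If the binary adjusts so that $`j_{\mathrm{infall}}/\sqrt{GM_\mathrm{b}a}`$ remains constant, then $`a\propto j_{\mathrm{infall}}^2/(GM_\mathrm{b})`$; with $`j\propto M^{2/3}`$ for this cloud (see the scaling sketch above) and $`M_\mathrm{b}`$ roughly proportional to $`M_{\mathrm{acc}}`$ (ignoring the small fraction diverted to the disc),

$$a\propto \frac{j_{\mathrm{infall}}^2}{GM_\mathrm{b}}\propto \frac{M_\mathrm{b}^{4/3}}{M_\mathrm{b}}=M_\mathrm{b}^{1/3},$$

consistent with the slow long-term growth of the separation described above. Repeating the argument with $`j\propto M`$ for the $`1/r`$-density cloud gives $`a\propto M_\mathrm{b}`$, consistent with the earlier and stronger increase of the separation noted in Section 4.2.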
Of course, as shown by the test calculations (Section 3), when the circumbinary disc becomes massive ($`M_{\mathrm{cb}}/M_\mathrm{b}\gtrsim 1/20`$; dotted lines in Figures 9 to 14) the interaction of the binary with the disc will not allow this same equilibrium to be established. Instead, the separations will be lower and more of the infalling gas will settle into the circumbinary disc than is predicted here. In addition, for the same increase in the mass of the binary, the mass ratio will tend to increase even more rapidly toward equal-mass protostars because the smaller separation of the binary means that the specific angular momentum of the gas compared to that of the binary will be somewhat larger. Thus, as with the assumption that the cloud rotates at the slowest possible rate, ignoring the interaction of the binary with the circumbinary disc gives the slowest possible evolution toward equal masses and the formation of a massive circumbinary disc.

### 4.2 Solid-body rotation, 1/r-density profile

Figure 10 gives the results for binaries formed from progenitor clouds with $`1/r`$-density profiles in solid-body rotation. The evolutions are similar to the uniform-density calculations, but everything evolves toward high-angular-momentum behaviour after the accretion of less gas. The mass ratios evolve toward unity and a massive circumbinary disc is formed after less mass has been accreted. The separations begin increasing earlier, and increase by a greater amount for the same amount of accretion. The more rapid evolution toward high-angular-momentum behaviour than with the uniform-density cloud occurs because the infalling gas has more specific angular momentum initially, and its specific angular momentum increases more rapidly with mass (Figure 2). This also means that, to reach a steady state, the binary must increase $`\sqrt{GM_\mathrm{b}a}`$ more rapidly. Hence, a steady state is attained when the infalling gas has more specific angular momentum relative to the binary than in the uniform-density case, and thus $`\dot{M}_{\mathrm{cb}}/\dot{M}_\mathrm{b}`$ is also higher, at $`\approx 0.19`$. Again, however, the interaction of the binary with the circumbinary disc, which is not taken into account by the PBE code, is likely to lead to: significantly smaller separations; more mass in the circumbinary disc; and, for a given increase in the binary’s total mass, a mass ratio that is even closer to unity.

### 4.3 Solid-body rotation, 1/r<sup>2</sup>-density profile

Figure 11 gives the results for binaries formed from progenitor clouds with $`1/r^2`$-density profiles in solid-body rotation. The rate of increase of the specific angular momentum of the gas as mass is accreted is now so rapid (Figure 2) that the binary cannot evolve fast enough to keep up with it. This results in the rapid formation and runaway growth of a massive circumbinary disc (Figure 11c), because the infalling gas has too much angular momentum to be captured by the individual components of the binary. The binary only accretes a maximum of $`\approx 1`$–3 times its initial mass, even if the cloud mass is 100 times the initial binary mass – the rest of the gas goes into the circumbinary disc. Of course, in reality, if the circumbinary disc becomes comparable in mass to the binary we would expect it to react in ways that are not possible to follow with the simple PBE calculations. Interaction between the binary and circumbinary disc could lead to two possibilities. First, the disc may fragment to form one or more additional protostellar objects (e.g. Bonnell & Bate 1994a; Burkert & Bodenheimer 1996; Burkert et al. 1997). In this case, the model has completely broken down, since in this paper we assume that a binary is formed by an initial binary fragmentation and subsequent accretion. There is no allowance made for additional fragmentation.
The second possibility is that gas is accreted from the circumbinary disc by the circumstellar discs of the two protostars (Artymowicz & Lubow 1996; Lubow & Artymowicz 1996). If such accretion occurs, the binary’s mass ratio will inevitably evolve rapidly toward unity because this material has high specific angular momentum and will preferentially be captured by the secondary. The mass ratios would continue to follow the rapid increases seen early in Figure 11a (when $`M_{\mathrm{acc}}\lesssim 2`$), giving even faster evolution toward equal masses than was seen for cores with $`1/r`$-density profiles.

### 4.4 Differential rotation, uniform-density profile

Figure 12 gives the results for initially uniform-density clouds in differential rotation with $`\mathrm{\Omega }_\mathrm{c}\propto 1/r`$. The specific angular momentum of the infalling gas is low to begin with and increases only slowly as mass is accreted (Figure 2). Thus, a binary’s mass ratio takes longer to evolve toward unity, the separation continually decreases, and a circumbinary disc takes longer to form than in the solid-body case (Figure 9). Even after a binary has accreted 100 times its initial mass, its mass ratio depends primarily on its initial value (Figure 12a). The ratio of the mass in the circumbinary disc to the binary’s mass is always less than 0.1, and in most cases less than 0.05, even after the infall of 100 times the binary’s initial mass.

### 4.5 Differential rotation, 1/r-density profile

Figure 13 gives the results for clouds with $`1/r`$-density profiles that are in differential rotation with $`\mathrm{\Omega }_\mathrm{c}\propto 1/r`$. Again, with differential rotation, the infall of gas with low specific angular momentum is maintained for longer than with solid-body rotation. The mass ratios do all increase toward unity after more than 10 times the binary’s initial mass has fallen in; however, after 100 times the initial binary’s mass has fallen in, a full range of mass ratios is still possible. The formation of a circumbinary disc is again delayed with differential rotation, although in most cases a steady state is eventually reached with $`\dot{M}_{\mathrm{cb}}/\dot{M}_\mathrm{b}\approx 0.15`$.

### 4.6 Differential rotation, 1/r<sup>2</sup>-density profile

Finally, we consider the evolution of binaries formed from progenitor clouds with $`1/r^2`$-density profiles that are in differential rotation with $`\mathrm{\Omega }_\mathrm{c}\propto 1/r`$. From Figure 2, the relationship between angular momentum and mass in the progenitor cloud is identical to that of a cloud with a $`1/r`$-density profile in solid-body rotation and, therefore, the evolution is identical to that in Figure 10. Compared to the $`1/r^2`$-density case with solid-body rotation, the effect of differential rotation is, once again, to decrease the rates of increase of the mass ratio and separation (at least while the binary is still accreting gas in Figure 11), and to delay the formation of a circumbinary disc. The reduced rate of increase of the specific angular momentum of the infalling gas means that, unlike in the case with solid-body rotation, the binary can reach an equilibrium with the cloud; the runaway growth of the mass in the circumbinary disc is avoided. However, even such strong differential rotation is not enough to stop the mass ratios being driven toward unity with an initial density profile of $`\rho \propto 1/r^2`$.
## 5 Relaxing some of the assumptions

### 5.1 The rotation rate of the progenitor cloud

In the previous section, we assumed that the orbital angular momentum of the ‘seed’ binary system, $`L_\mathrm{b}`$, was equal to the angular momentum of the spherical central region of the cloud, $`L_{\mathrm{cen}}`$, from which the ‘seed’ binary formed (equation 3). This assumption gives a lower limit to the rate of rotation of the progenitor cloud, since it does not allow for the possibility of circumstellar discs around the ‘seed’ binary which would contain additional angular momentum. Furthermore, if the ‘seed’ binary was formed via fragmentation of a circumstellar disc surrounding a single protostar, then the disc must have had a radius at least as large as the separation of the ‘seed’ binary which forms (i.e. at least some gas in the disc must have had a specific angular momentum of $`j=\sqrt{GM_\mathrm{b}a}`$). Thus, some of the gas which falls on to the system from the envelope immediately after the ‘seed’ binary forms is expected to have at least this much specific angular momentum. As a comparison, the first gas to be accreted by the binaries in Figure 9 had at most 0.21–0.63 of this value, depending on the initial mass ratios. Therefore, it is conceivable that the progenitor clouds may be rotating faster than we assumed in the previous section.

To demonstrate the effect of the progenitor cloud having a greater rate of rotation, we performed calculations which evolved binaries whose progenitor clouds were rotating rapidly enough that some of the first gas to fall on to the binary (that part of the envelope which is in the plane of the binary) had a specific angular momentum of $`j=\sqrt{GM_\mathrm{b}a}`$. Thus, rather than setting $`R_\mathrm{b}^2\mathrm{\Omega }_{\mathrm{Rb}}`$ using equation 3, we set
$$R_\mathrm{b}^2\mathrm{\Omega }_{\mathrm{Rb}}=\sqrt{GM_\mathrm{b}a},$$ (21)
regardless of the initial mass ratio of the binary. Thus, all the progenitor clouds have the same initial density profile and rotation rate and all the ‘seed’ binaries have the same separation, regardless of the mass ratio of the ‘seed’ binary. This also allows us to examine the degree to which the evolution of a binary depends on its mass ratio, and how much it depends on the properties of the gas which it accretes from the infalling envelope. These ‘seed’ binaries could have formed via the fragmentation of a disc, where we assume that the angular momentum in excess of the ‘seed’ binary’s orbital angular momentum, $`L_\mathrm{b}`$, is contained in circumstellar discs.

The results for an initially uniform-density cloud in solid-body rotation are given in Figure 14. Due to the greater rotation rate of the progenitor cloud and, hence, the higher specific angular momentum of the gas which falls on to the binary, the mass ratios are driven toward unity after much less mass has been accreted than when the slowest possible rotation rate was used (Figure 9). Thus, the rates of increase toward mass ratios of unity that are given in Section 4 are lower limits. In contrast to the previous section, the evolutions of the separation and of the mass in a circumbinary disc are quite independent of the mass ratio of the binary, demonstrating that the evolution of a binary’s separation and the formation of a circumbinary disc depend almost entirely on the properties of the infalling gas; the mass ratio of the binary has almost no effect.
The very small dependence of the mass of the circumbinary disc on the binary’s mass ratio arises because the secondary is farther from the centre of mass in a binary with a lower mass ratio.

### 5.2 Different density profiles

The evolutions presented in Section 4 are for idealised progenitor cloud cores with power-law density and rotation profiles. These clearly illustrate the main dependencies of the evolution of a protobinary system as it accretes to its final mass. However, the PBE code can easily be applied to other types of progenitor cloud core. For example, we have performed evolutions beginning with Gaussian centrally-condensed cores with inner-to-outer density contrasts of 20:1. These have density profiles that are similar to observed pre-stellar molecular cloud cores in that they are steep on the outside and flatter near the centre (Ward-Thompson et al. 1994; André, Ward-Thompson & Motte 1996; Ward-Thompson, Motte & André 1999). As one might expect, the evolutions lie between those for uniform-density clouds and those with $`\rho \propto 1/r`$, although they are closest to the uniform-density case.

### 5.3 Eccentric binary systems

In this paper, strictly speaking, we only consider the evolution of protobinary systems with circular orbits. However, in the test cases of Section 3, where we compared the evolution given by the PBE code with those from full SPH calculations, we obtained good agreement even though the orbit of the binary in the SPH calculations had eccentricities up to $`e\approx 0.2`$. Therefore, we are confident that the results in this paper are valid for binaries with eccentricities $`e\lesssim 0.2`$. For larger eccentricities, we expect that the evolution of the binaries should be qualitatively similar to what we have found for circular binaries. Hence, we still expect that the long-term effect of accretion will be to drive the mass ratios toward unity and form circumbinary discs, and that this evolution will be enhanced with more centrally-condensed initial conditions and diminished by differential rotation. However, there will be quantitative differences. Most importantly, the formation of a circumbinary disc will be delayed in an eccentric system for two reasons. First, for the same semi-major axis, an eccentric binary has less angular momentum than a circular one and, thus, the gas from which it first formed (and hence the molecular cloud core as a whole) may have been rotating more slowly. This would also mean that more gas is able to be accreted before the binary’s mass ratio is driven toward unity. Second, for binaries with the same semi-major axis, a circumbinary disc must form at a larger radius from the centre of mass for an eccentric binary than for a circular binary in order to avoid disruption (e.g. Artymowicz & Lubow 1994). Thus, eccentric systems are expected to be able to accrete significantly more material than circular binaries before forming a circumbinary disc. Other effects may include collisions between the protostars and their circumstellar discs (e.g. Hall, Clarke & Pringle 1996) and, of course, the eccentricity itself is expected to evolve due to accretion and the interactions between the protostars and the discs. These effects are beyond the scope of this paper, but would certainly be well worth studying.

## 6 The Properties of Binary Systems

The aim of this paper is to determine, for a particular model of binary star formation, the properties of binaries and how they depend on the initial conditions in their progenitor molecular cloud cores.
By comparing these predictions with observations, we hope to determine whether this is a viable model for binary star formation and, if so, to constrain the initial conditions for binary star formation. In order to do so, however, we must first discuss the relationship that is expected between the mass of a ‘seed’ binary system and its separation.

### 6.1 Dependence of the mass of a ‘seed’ protobinary on its separation

In this paper, we consider the evolution of a ‘seed’ binary system as it accretes gas from an infalling envelope, but we do not specify exactly how the protobinary is formed. Presumably, it is formed via some sort of fragmentation process (see Section 1). In order for binary fragmentation to occur, the Jeans radius at the time of fragmentation must be less than or approximately equal to the separation of the protobinary that is formed. Thus, $`a\gtrsim 2R_\mathrm{J}`$, where $`R_\mathrm{J}`$ is the Jeans radius
$$R_\mathrm{J}\approx \frac{2GM_{\mathrm{frag}}\mu }{5R_\mathrm{g}T},$$ (22)
$`\mu `$ is the mean molecular weight, and $`R_\mathrm{g}`$ is the gas constant. The ‘seed’ binary’s initial mass is approximately twice the mass of each fragment, $`M_{\mathrm{frag}}`$. Therefore, for a constant temperature, $`T`$, we expect a roughly linear relationship between the mass of a ‘seed’ binary and its separation: $`M_\mathrm{b}\approx 2M_{\mathrm{frag}}\propto a`$.

Figure 15 gives the initial mass of a binary versus its separation from a series of fragmentation calculations performed by Boss \[Boss 1986\]. Indeed, there is a strong, approximately linear, relationship between the binary’s mass and its separation. Using equation 22, we plot (solid line) the relationship if $`a=2R_\mathrm{J}`$. We use $`\mu =2.46`$ with $`T=10`$ K for densities below $`10^{-13}`$ g cm<sup>-3</sup>, and $`T\propto \rho ^{\gamma -1}`$ with $`\gamma =1.4`$ for $`10^{-13}\le \rho <6\times 10^{-8}`$ g cm<sup>-3</sup> and $`\gamma =1.1`$ for $`\rho \ge 6\times 10^{-8}`$ g cm<sup>-3</sup> (e.g. Tohline 1982). The numerical results generally lie above the solid line because the properties of the binaries were calculated somewhat after they first began to form, by which time they had already accreted some gas. For example, if we assume that the fragments are embedded in a sphere of gas with radius $`2R_\mathrm{J}`$ and a mean density of half the mean density of the fragments, which is rapidly accreted by the fragments, then we obtain the dotted line for the masses of the ‘seed’ binaries. Furthermore, the fragments typically form on elliptical orbits and are initially falling toward each other (which moves the numerical results to smaller separations).

Generally, for separations $`\gtrsim 10`$ AU, wider ‘seed’ binaries have larger initial masses, while for separations $`\lesssim 10`$ AU the initial masses are $`\approx 0.01\mathrm{M}_{\odot }`$. Therefore, for example, in order to form binary systems with a solar-mass primary, close systems ($`\lesssim 10`$ AU) may have to accrete $`\approx 100`$ times their initial mass. Systems with separations $`10\lesssim a\lesssim 100`$ AU need to accrete between $`\approx 100`$ and $`\approx 10`$ times their initial mass, and the widest systems need only accrete a few times their initial mass. This leads us to our first prediction: for binary systems with the same total mass, the properties of close systems will be more heavily determined by accretion than those of wide systems.
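To make the linear relation implied by equation (22) concrete, the following sketch evaluates the isothermal ($`T=10`$ K) branch directly. It is a rough illustration only: Figure 15 itself uses the full barotropic $`T(\rho )`$ relation quoted above, and the Boss (1986) points lie a factor of a few above this line, so exact values differ.

```python
# Seed binary mass implied by a = 2 R_J on the isothermal (T = 10 K) branch:
# from equation (22), M_frag = 5 R_g T R_J / (2 G mu), with R_J = a/2 and
# M_b ~ 2 M_frag, so M_b is strictly proportional to a at fixed T.
G, Rg = 6.674e-8, 8.314e7            # cgs
Msun, AU = 1.989e33, 1.496e13
mu, T = 2.46, 10.0

def seed_mass(a_AU):
    """Seed binary mass (Msun) for separation a (AU), assuming a = 2 R_J."""
    RJ = a_AU * AU / 2.0
    M_frag = 5.0 * Rg * T * RJ / (2.0 * G * mu)
    return 2.0 * M_frag / Msun

for a in (10, 100, 1000):
    print(f"a = {a:>4} AU  ->  M_b ~ {seed_mass(a):.3f} Msun")
```

On this branch $`M_\mathrm{b}\propto a`$ exactly; the $`\approx 0.01\mathrm{M}_{\odot }`$ floor quoted above for $`a\lesssim 10`$ AU arises because fragmentation on such small scales occurs at densities where the gas has heated above 10 K, which the isothermal line does not capture.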
In turn, this means that the mass ratios of wide binary systems are expected to be determined primarily by the initial density structure in the molecular cloud cores, and so they may enable us to better understand the initial conditions for star formation by giving us a way to measure the density structure. Therefore, in order to study the initial conditions for star formation we should consider the properties of wide binaries, while to learn more about the evolution due to accretion it is best to consider close binaries. A further prediction is: if high and low mass stars form by the same general process, and the initial mass of a ‘seed’ binary is independent of the mass of the progenitor core, then the properties of massive binary systems should be more heavily influenced by accretion than those of lower mass systems with the same separation.

### 6.2 Binary mass ratios

In Sections 4 and 5, we found that the long-term evolution of an accreting binary is for its mass ratio to approach unity.

#### 6.2.1 Dependence of the mass ratio on separation

As argued in Section 6.1, for molecular cloud cores of a given mass, the amount of mass accreted by a binary, relative to its initial fragmentation mass, will be greater for close binaries than for wide systems. Thus, closer binary systems are more likely to have mass ratios near unity than wider systems with the same total mass. Duquennoy & Mayor \[Duquennoy & Mayor 1991\] surveyed main-sequence G-dwarf stellar systems. They found that the mass-ratio distribution, averaged over binaries with all separations, increases toward small mass ratios. However, there is mounting evidence that the mass-ratio distributions differ between short and long-period systems, with the distribution for close binaries ($`P<3000`$ days; $`a\lesssim 5`$ AU) consistent with a uniform distribution (Mazeh et al. 1992; Halbwachs, Mayor & Udry 1998). Thus, relative to wide systems, the close systems are biased toward mass ratios of unity, in agreement with the above prediction.

The fraction by which the mass of a ‘seed’ binary must be increased in order for its mass ratio to approach unity depends on the conditions in the progenitor cloud core. In general, the less centrally-condensed a core is and/or the more differential rotation it has ($`\beta <0`$), the easier it is to form binary systems with low mass ratios for a given increase in the binary’s initial mass. By inquiring how easy it is to reproduce the observed mass-ratio distributions with different types of progenitor molecular cloud core, we can use these results to constrain the initial conditions for binary star formation.

Duquennoy & Mayor \[Duquennoy & Mayor 1991\] found that binaries containing G-dwarfs with separations $`\gtrsim 30`$ AU generally have unequal mass ratios (typically $`q\approx 0.3`$). From Figure 15, these binaries may typically be expected to accrete $`\approx 10`$ times their ‘seed’ mass. For uniform-density progenitor clouds in solid-body rotation (Figure 9a), this observed mass-ratio distribution could be produced if the fragmentation process typically produces ‘seed’ binaries with low mass ratios ($`q\approx 0.2`$). However, with more centrally-condensed cores it becomes progressively more difficult to produce the observed mass-ratio distribution because the accretion drives the initial mass ratios toward unity (Figures 10a and 11a).
With $`\rho \propto 1/r`$ cores, the ‘seed’ binaries must typically have mass ratios of $`q\approx 0.1`$ in order to produce G-dwarf binaries that have typical mass ratios of $`q\approx 0.3`$. With $`\rho \propto 1/r^2`$ (assuming the ‘seed’ binaries manage to accrete from their circumbinary discs) it is difficult to see how the observed mass-ratio distribution could be produced, because of the rapid evolution toward equal-mass protostars. If the progenitor cores have significant differential rotation, uniform-density and $`\rho \propto 1/r`$ cores can easily produce the observed mass ratios, but it is still difficult to produce G-dwarf binaries with low mass ratios from cores with $`\rho \propto 1/r^2`$.

For close G-dwarf binary systems ($`a\lesssim 5`$ AU), the constraints are even more pronounced. Typically, we expect such binaries to accrete $`\approx 100`$ times their ‘seed’ mass, yet the observed mass-ratio distribution is approximately flat (i.e. approximately half the binaries have $`q<0.5`$). It is effectively impossible for cores in solid-body rotation to produce such a mass-ratio distribution if they are significantly centrally condensed. Even for uniform-density cores, approximately half of the ‘seed’ binaries would need to have $`q<0.1`$, although taking into account the effect of eccentricity (Section 5.3) it is probable that cores with nearly uniform density profiles can reproduce the observations. If the cores have significant differential rotation, the observed mass ratios can be produced with cores that are as centrally condensed as $`\rho \propto 1/r`$, but cores with $`\rho \propto 1/r^2`$ are still excluded.

We note that, as discussed in Section 5, the results in Section 4 give the slowest possible evolution toward mass ratios of unity (unless the binary has a significant eccentricity). Thus, it may be even more difficult to form unequal-mass binaries than suggested by the above discussion. Even taking the above numbers, with solid-body rotation the mass ratios of the ‘seed’ binaries must typically be $`q\approx 0.1`$–0.2. To obtain such mass ratios via direct fragmentation requires that the progenitor molecular cloud cores have significant asymmetries (e.g. Bonnell & Bastien 1992), which implies that they are formed and/or triggered to collapse by dynamical processes. Low mass ratios can also be obtained via rotational fragmentation (Section 1), but then the binary’s mass ratio will be driven toward unity even more rapidly. Thus, in reality, molecular cloud cores may have a degree of differential rotation.

In summary, the observed mass-ratio distributions are most easily explained by cores that have density profiles less centrally condensed than $`\rho \propto 1/r`$ (e.g. Gaussian), possibly with a small amount of differential rotation. This is in good agreement with the observed density profiles of pre-stellar cores (Ward-Thompson et al. 1994; André, Ward-Thompson & Motte 1996; Ward-Thompson, Motte & André 1999). If the progenitor cores rotate as solid bodies it is essentially impossible to produce the observed mass-ratio distribution of close binaries if the cores are much more centrally condensed than a Gaussian density profile. If differential rotation is allowed, cores with density profiles as steep as $`\rho \propto 1/r`$ are feasible (with strong differential rotation), but cores with $`\rho \propto 1/r^2`$ still cannot reproduce the observations.

#### 6.2.2 Dependence of the mass ratio on the binary’s total mass

If all binary systems form by the same general process, regardless of a system’s final total mass (i.e.
from the collapse of molecular cloud cores with the same properties, only more massive), and if the initial mass of a ‘seed’ binary (Section 6.1) is independent of the mass of the core (depending only on its initial separation), then massive binaries should have more-equal mass ratios than low-mass binaries of the same separation. This is because systems with a larger final mass would have had to accrete more, relative to their initial mass, than systems with a lower final mass. Unfortunately, unbiased surveys of stars more massive than G-dwarfs are only slowly becoming available. Preliminary results for O-star binaries (e.g. Mason et al. 1998) show a difference in the mass-ratio distribution between close ($`P\lesssim 40`$ days; $`a\lesssim 1/2`$ AU) and wide ($`P\gtrsim 10^4`$ years; $`a\gtrsim 1000`$ AU) systems, with close systems biased toward mass ratios of unity, but comparison between the mass-ratio distributions of high and low-mass binaries is not yet possible. In addition, Bonnell, Bate & Zinnecker (1998) recently proposed that O-stars form in a different way to lower mass stars, from the collision of less massive stars in very dense star-forming regions. This theory predicts a large frequency of close binaries for O-stars (due to tidal capture) with a mass-ratio distribution that is not determined by accretion from an infalling envelope. Thus, we strongly encourage surveys that will determine the mass-ratio distributions of B-, A-, and F-stars.

#### 6.2.3 Formation of brown dwarf companions to solar-type stars

Given that it becomes more difficult to form binaries with low mass ratios as more gas is accreted by a binary, it is of interest to ask how easy it is to form stars with brown-dwarf companions (see also Bate 1998). Since wider binaries should have to accrete less gas, relative to their initial mass, to attain the same final total mass, the frequency of brown dwarf companions to solar-type stars should increase with separation. Likewise, we expect that systems with lower final masses also accrete less relative to their initial masses, so that for the same range of separations, brown dwarf companions should be more frequent in systems with a lower total mass than in higher-mass systems.

Taking the types of cores that most easily reproduce the mass-ratio distributions of low-mass stars (i.e. near-uniform-density cores in solid-body rotation), we find that a brown dwarf companion to a solar-type star could be formed if an extreme mass ratio ($`q\approx 0.1`$) was produced at fragmentation and the system subsequently increased its mass by a factor of $`\approx 20`$ or less (Figure 9a). This implies that it may be quite easy to form brown dwarf companions to stars of around a solar mass or less with separations $`\gtrsim 10`$ AU. Indeed, two wide systems have been found: GL 229B is a brown dwarf companion (0.02–0.06 $`\mathrm{M}_{\odot }`$) to a $`0.6\mathrm{M}_{\odot }`$ M-dwarf with a separation of $`\approx 50`$ AU (Nakajima et al. 1995); G 196-3B is a brown dwarf companion (0.02–0.04 $`\mathrm{M}_{\odot }`$) to a $`0.4\mathrm{M}_{\odot }`$ M-dwarf with a separation of $`\approx 300`$ AU (Rebolo et al. 1998). However, close systems ($`\lesssim 10`$ AU) need to accrete $`\approx 100`$ times their initial mass in order to obtain a solar-mass primary and, due to the evolution of the mass ratio toward unity, it would be very difficult for the secondary to have the final mass of a brown dwarf after this amount of accretion (Figure 9a).
Therefore, brown-dwarf companions to stars with masses $`\approx 1\mathrm{M}_{\odot }`$ and separations $`\lesssim 10`$ AU are likely to be extremely rare or perhaps even nonexistent. This prediction is supported by the recent radial-velocity searches for giant planets (see Marcy, Cochran & Mayor 1999 and references within). Although many planetary candidates ($`M\mathrm{sin}i\lesssim 0.013\mathrm{M}_{\odot }`$) have been found with separations less than a few AU, there are currently only 4 brown dwarf ($`0.013\mathrm{M}_{\odot }\lesssim M\mathrm{sin}i\lesssim 0.075\mathrm{M}_{\odot }`$) candidates from $`\approx 600`$ target stars, and even these could be stellar companions with orbits nearly perpendicular to the line of sight (Marcy, Cochran & Mayor 1999). Within this model, the easiest way to obtain such close systems with extreme mass ratios would be for them to have formed from cores with significant differential rotation, or for the companion to have formed significantly after the primary. In the latter case, the primary would already have a significant fraction of its final mass and, hence, the protobinary would not have to increase its total mass by such a large factor. However, if this was achieved via the fragmentation of a circumstellar disc, then the fragmentation would have to occur after the primary and its disc had accreted a large fraction of the envelope; otherwise the secondary would still end up with a stellar mass due to subsequent accretion (Section 5.1 and Figure 14a).

### 6.3 Binary separations

In Section 4, we found that a binary’s separation can evolve significantly due to accretion, increasing or decreasing by up to two orders of magnitude (Figures 9b to 14b). However, in most cases where the binary’s separation is increasing, the PBE code also predicts that a massive circumbinary disc will be present. In test case 2 (Section 3.4; Figure 6), we found that if a massive circumbinary disc is present, the interaction of the binary with the circumbinary disc is likely to negate the separation-increasing effect of the accretion and the separation will remain approximately constant. Therefore, we conclude that a binary’s separation is likely to decrease or remain of the same order as its initial value during the accretion of the gaseous envelope. After the accretion phase, if a binary has a circumbinary disc its separation is expected to decrease. Without a circumbinary disc, its separation is likely to increase as the circumstellar discs evolve viscously and transfer their angular momentum to the orbit of the binary. However, the angular momentum contained in the circumstellar discs is likely to be small compared to the orbital angular momentum of the binary and, therefore, the binary’s separation is expected to increase by less than a factor of two.

### 6.4 Circumstellar discs

Bate & Bonnell \[Bate & Bonnell 1997\] studied the disc formation process in accreting protobinary systems and established criteria for the formation of circumstellar and circumbinary discs. They found that if a protobinary system only accretes gas with low specific angular momentum after its formation, the primary will have a circumstellar disc but the secondary may not. The reverse is not true; if a circumstellar disc is formed around the secondary, the primary will also have a disc. These conclusions can also be seen in the snapshots from the SPH test cases in Figures 4, 5, 7, and 8.
With the PBE code, we do not differentiate between gas that is directly accreted by a protostar and gas that is captured in its circumstellar disc (presumably to be accreted by the protostar at a later time). However, we do determine the relative accretion rate on to the secondary and its circumstellar disc compared to the primary and its circumstellar disc, $`\dot{M}_2/\dot{M}_1`$, during the formation of a binary (Figures 9d to 14d). We find that the relative accretion rate is $`\dot{M}_2/\dot{M}_1\lesssim 1.3`$ and in the majority of cases is less than unity. Therefore, we expect that in most cases the circumsecondary disc will have a mass that is less than or similar to that of the circumprimary disc. This conclusion is valid unless the circumprimary disc accretes on a shorter time-scale than the circumsecondary disc. In fact, Armitage, Clarke & Tout (1999) showed that the circumsecondary disc is expected to accrete more rapidly than the circumprimary disc and, therefore, this effect should only be enhanced.

This conclusion is in excellent agreement with the latest observations of circumstellar material around young binary systems. Ghez, White & Simon \[Ghez, White & Simon 1997\] considered UV and NIR excess emission from the components of young binary systems and found that the excesses are either comparable or dominated by the primary, suggesting that the gas in circumstellar discs is either distributed similarly or preferentially around the primary. The only cases where a circumsecondary disc may become significantly more massive than the circumprimary disc are those where the gaseous envelope has effectively been exhausted and accretion on to the circumstellar discs comes primarily from a circumbinary disc. In these cases, because the gas has very high angular momentum with respect to the binary, it will preferentially be captured by the secondary (Artymowicz & Lubow 1996; Bate & Bonnell 1997).

### 6.5 Circumbinary discs

From the results in Section 4, just as the mass ratio of an accreting protobinary evolves toward unity in the long term, the more material that is accreted by a protobinary, the more likely it is that a circumbinary disc is formed. Therefore, following the same argument that we made for the mass ratios of binary stars, we predict that closer binary systems are more likely to have circumbinary discs than wider systems with the same total mass. Furthermore, if massive and low-mass binary systems form via the same process, and if the initial mass of a ‘seed’ binary is independent of the mass of the core, then massive binaries are more likely to have circumbinary discs than low-mass binaries of the same separation.

The first of these predictions is in good agreement with recent observations. Jensen, Mathieu & Fuller \[Jensen, Mathieu & Fuller 1996\] surveyed 85 pre-main-sequence binary systems and found that, while emission presumably associated with circumbinary discs could be found around many close binaries (separations less than a few AU), circumbinary emission around binaries with separations of a few AU to $`\approx 100`$ AU is almost entirely absent. The only exception was GG Tau, with a separation of $`\approx 40`$ AU \[Dutrey, Guilloteau & Simon 1994\]. Furthermore, Dutrey et al. (1996) performed an imaging survey of 18 multiple systems that could resolve circumbinary discs with radii $`\gtrsim 100`$ AU and found only one circumbinary disc, around UY Aur, which has a separation of $`\approx 120`$ AU.
As with the mass-ratio evolution, the quantitative predictions depend on the properties of the progenitor cloud cores: the more centrally-condensed a core is and/or the less differential rotation it has ($`\beta \to 0`$), the lower the fraction by which a binary has to increase its mass before a circumbinary disc is formed. For cores in solid-body rotation, a uniform-density cloud leads to a significant circumbinary disc ($`M_{\mathrm{cb}}/M_\mathrm{b}>0.05`$) after a circular binary accretes $`\approx 2`$–40 times its initial mass (depending on the initial mass ratio; Figure 9c). Using Figure 15, we would expect most circular binaries with separations $`a\lesssim 100`$ AU to have circumbinary discs, while many with separations $`a\gtrsim 100`$ AU should not have circumbinary discs. However, most wide binaries have large eccentricities (e.g. Duquennoy & Mayor 1991), and more material must be accreted by an eccentric binary before a circumbinary disc is formed (Section 5.3). Taking this into account, uniform-density cores in solid-body rotation are likely to produce circumbinary discs around most binaries with $`a\lesssim 10`$ AU and some binaries with intermediate separations. However, very few should exist around binaries with separations $`a\gtrsim 100`$ AU, in reasonable agreement with the observations.

For a core with $`\rho \propto 1/r`$ (Figure 10), a binary can only increase its mass by a factor of $`\approx 1.5`$–6 before a circumbinary disc is formed. In this case, most binaries with separations $`a\lesssim 100`$ AU would be expected to have circumbinary discs, even taking eccentricity into account. With a $`1/r^2`$-density profile a binary cannot even double its mass before a circumbinary disc is formed (Figure 11), so that almost all binaries would be expected to have circumbinary discs, in strong disagreement with observations.

Differential rotation allows a binary to accrete more mass before a circumbinary disc is formed. In most cases, binaries formed from uniform-density cores (Figure 12) can accrete up to 100 times their initial mass without forming a massive circumbinary disc, meaning that even the closest binaries would not have circumbinary discs. Cores with $`1/r`$-density profiles (Figure 13) allow from $`\approx 5`$–100 times a binary’s initial mass to be accreted, so that most binaries with separations $`\lesssim 10`$ AU and many with separations $`\lesssim 100`$ AU would have circumbinary discs. Cores with $`1/r^2`$-density profiles would still produce discs around most binaries with separations $`a\lesssim 100`$ AU.

Therefore, as with our predictions concerning binary mass-ratio distributions, the circumbinary disc observations can be reasonably well explained if most binaries form from progenitor cores which are less centrally condensed than $`\rho \propto 1/r`$ (e.g. Gaussian), possibly with a small amount of differential rotation. Cores with $`\rho \propto 1/r`$ are possible if there is significant differential rotation, but the singular isothermal sphere ($`\rho \propto 1/r^2`$) cannot reproduce the observations even with extreme differential rotation.

## 7 Conclusions

We have considered a model for the formation of binary stellar systems which has been inspired by the results obtained from $`\approx 20`$ years of study of the fragmentation of collapsing molecular cloud cores.
In the model, a ‘seed’ protobinary system forms, presumably via fragmentation, within a collapsing molecular cloud core and evolves to its final mass by accreting material from an infalling gaseous envelope. We developed and tested a method which can rapidly follow the evolution of the mass ratios, separations and circumbinary disc properties of such binaries as they accrete to their final masses. Using this protobinary evolution code, we predict the properties of binary stars and how they depend on the pre-collapse conditions in their progenitor molecular cloud cores. These predictions and their comparison with current observations are discussed in detail in Section 6. Briefly, we conclude that, if most binary stars form via the above model, binary systems with smaller separations or greater total masses should have mass ratios which are biased toward equal masses when compared to binaries with wider orbits or lower total masses. Similarly, the frequency of circumbinary discs should be greater for pre-main-sequence binaries with closer orbits or greater total masses. These conclusions can be understood because: binaries which are closer or have a greater final mass should accrete more gas relative to their initial masses than wider or lower-mass binaries; the specific angular momentum of the infalling gas relative to that of the binary is expected to increase as the accretion proceeds; and the accretion of gas with high specific angular momentum tends to equalise the mass ratio and forms a circumbinary disc. We also demonstrate that in a young binary which is accreting from an infalling gaseous envelope, the primary will generally have a circumstellar disc which is more massive than, or similar in mass to, that of the secondary. All of these conclusions are in good agreement with the latest observations. By making rough quantitative predictions of the properties of binary stars, we find that the observed properties of binary stars are most easily reproduced if the pre-collapse molecular cloud cores from which binaries form have radial density profiles between uniform and $`\rho \propto 1/r`$ (e.g. Gaussian) with near-uniform rotation. This is in excellent agreement with the observed properties of pre-stellar cores (Ward-Thompson et al. 1994; André, Ward-Thompson & Motte 1996; Ward-Thompson, Motte & André 1999). Conversely, the observed properties of binaries cannot be reproduced if the cores are in solid-body rotation and have initial density profiles which are strongly centrally condensed (between $`1/r`$ and $`1/r^2`$), and the singular isothermal sphere ($`\rho \propto 1/r^2`$) cannot fit the observations even with strong differential rotation.

## Acknowledgments

I am grateful to Ian Bonnell, Cathie Clarke and Jim Pringle for many helpful discussions and their critical reading of the manuscript.
# Low-dimensional Bose liquids: beyond the Gross-Pitaevskii approximation

## Abstract

The Gross-Pitaevskii approximation is a long-wavelength theory widely used to describe a variety of properties of dilute Bose condensates, in particular trapped alkali gases. We point out that for short-ranged repulsive interactions this theory fails in dimensions $`d\le 2`$, and we propose the appropriate low-dimensional modifications. For $`d=1`$ we analyze density profiles in confining potentials, superfluid properties, solitons, and self-similar solutions.

Experimental observation of Bose-Einstein condensation (BEC) in trapped alkali vapors has ushered in a new era of ultralow-temperature physics. The Gross-Pitaevskii (GP) mean-field theory has proven to be an indispensable tool both in analyzing and predicting the outcome of experiments. With rapid progress in the experimental exploration of BEC systems it is reasonable to anticipate that effectively one- and two-dimensional systems are realistic prospects in the near future. For example, high-aspect-ratio cigar-shaped traps approximating quasi-one-dimensional BEC systems are already available experimentally. From the theoretical viewpoint, low-dimensional systems are rarely well represented by mean-field theories, which leads one to question the validity of the GP theory in one and two dimensions. In this paper we shall show that, indeed, the physics of dilute Bose systems requires a fundamental modification of the GP theory in low dimensions $`d\le 2`$.

The GP theory is a quasi-classical (or mean-field) approximation which replaces the bosonic field operator $`\psi `$ by a classical order parameter field $`\mathrm{\Phi }(𝐫,t)`$. For short-ranged interactions the interparticle potential $`U(𝐫)`$ is replaced by $`g\delta ^d(𝐫)`$, where $`g`$ is the pseudopotential. Then for a system of bosons in an external potential $`V`$, the energy functional and the equation of motion for $`\mathrm{\Phi }`$ take the following form:

$$F_{GP}=\int d^dr\left[\frac{\hbar ^2}{2m}|\nabla \mathrm{\Phi }|^2+V(𝐫)|\mathrm{\Phi }|^2+\frac{g}{2}|\mathrm{\Phi }|^4\right],$$ (1)

and

$$i\hbar \partial _t\mathrm{\Phi }=\frac{\delta F_{GP}}{\delta \mathrm{\Phi }^{*}}=\left[-\frac{\hbar ^2}{2m}\nabla ^2+V(𝐫)+g|\mathrm{\Phi }|^2\right]\mathrm{\Phi }.$$ (2)

The GP equations (1) and (2) are widely used to compute a variety of properties of Bose systems. The GP approximation is a long-wavelength theory relying on the concept of the pseudopotential to account for interparticle interactions. However, for repulsive bosons the canonical pseudopotential vanishes in two dimensions, implying that an essential modification of the GP theory is necessary for $`d\le 2`$. To see how to modify the theory, it is useful to rewrite the last integrand of (1) in terms of the particle density $`n=|\mathrm{\Phi }|^2`$, so that $`(g/2)|\mathrm{\Phi }|^4=(g/2)n^2`$. This can be recognized as the lowest-order term of a dilute expansion of the ground-state energy density for $`d>2`$. Thus the correct low-dimensional local density theory will instead have the ground-state energy density of the $`d\le 2`$ dilute Bose system, which is not proportional to $`n^2`$. Let us write the interparticle interaction in the form $`U(𝐫)=u_0\delta _a^d(𝐫)`$, where $`u_0`$ is the amplitude of the interparticle repulsion, and the notation $`\delta _a^d(𝐫)`$ denotes any well-localized function that transforms into the mathematical Dirac $`\delta `$ function when the range of interaction $`a\to 0`$.
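To make the variational structure concrete, a minimal numerical sketch (ours, not from the paper) evaluates the discretised 1D version of the functional (1) for a trial Gaussian in a harmonic trap, with $`\hbar =m=\omega =1`$ and an assumed coupling $`g`$:

```python
import numpy as np

# Discretised 1D version of the GP functional (1) for a trial Gaussian
# in a harmonic trap V = x^2/2, with hbar = m = 1 and an assumed g.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
g, N = 0.5, 1.0
phi = np.exp(-x**2 / 2)
phi *= np.sqrt(N / (np.sum(np.abs(phi)**2) * dx))  # normalise to N particles
grad = np.gradient(phi, dx)
E = np.sum(0.5 * np.abs(grad)**2 + 0.5 * x**2 * np.abs(phi)**2
           + 0.5 * g * np.abs(phi)**4) * dx
print(f"E[phi] = {E:.4f}")   # kinetic + trap + interaction energy
```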
The renormalization group analysis of this problem reveals that two dimensionless combinations, $`na^d`$ and $`\hbar ^2n^{(2-d)/d}/mu_0`$, play an important role in defining the conditions of the dilute limit for $`d\le 2`$. In the dilute limit $`na\ll 1`$ and $`\hbar ^2n/mu_0\ll 1`$, any one-dimensional Bose system with short-ranged repulsive interactions becomes equivalent to a gas of free fermions (or equivalently point hard-core bosons), with energy density $`\pi ^2\hbar ^2n^3/6m`$. This can be generalized for $`d<2`$ to the statement that for $`na^d\ll 1`$ and $`\hbar ^2n^{(2-d)/d}/mu_0\ll 1`$, the lowest-order term of the ground-state energy density expansion is universal and given by $`(\hbar ^2C_d/2m)n^{(2+d)/d}`$, where $`C_d`$ is a $`d`$-dependent constant. This implies that the quartic nonlinearity $`|\mathrm{\Phi }|^4`$ in Eq. (1) should be replaced by $`|\mathrm{\Phi }|^{2(2+d)/d}`$. In particular, for the practically important case of one dimension, the system of equations (1) and (2) is modified to

$$F=\frac{\hbar ^2}{2m}\int dx\left[\left|\frac{d\mathrm{\Phi }}{dx}\right|^2+\frac{2m}{\hbar ^2}V(x)|\mathrm{\Phi }|^2+\frac{\pi ^2}{3}|\mathrm{\Phi }|^6\right],$$ (3)

and

$$i\hbar \partial _t\mathrm{\Phi }=\frac{\hbar ^2}{2m}\left[-\partial _x^2+\frac{2m}{\hbar ^2}V(x)+\pi ^2|\mathrm{\Phi }|^4\right]\mathrm{\Phi }.$$ (4)

Similar considerations applied to the marginal two-dimensional case lead to the conclusion that in the dilute limit a theory replacing the GP approximation starts from the energy functional

$$F=\frac{\hbar ^2}{2m}\int d^2r\left[|\nabla \mathrm{\Phi }|^2+\frac{2m}{\hbar ^2}V(𝐫)|\mathrm{\Phi }|^2+\frac{4\pi ^2}{|\mathrm{ln}(|\mathrm{\Phi }|^2a^2)|}|\mathrm{\Phi }|^4\right].$$ (5)

Ignoring the logarithmic factor will perhaps suffice for many practical purposes; then (5) is precisely of the GP form. Finding an experimental manifestation of the logarithmic correction is likely to be very challenging. A detailed analytical and numerical study of the one- and two-dimensional cases will be given in a longer publication; hereafter we restrict ourselves to only the salient features of one dimension, where the deviations from the GP theory are largest.

Density profiles in external potentials: The stationary solution to Eq. (4), defined via $`\mathrm{\Phi }(x,t)=\varphi (x)e^{-i\mu t/\hbar }`$, can be found by solving

$$\frac{d^2\varphi }{dx^2}+\frac{2m}{\hbar ^2}(\mu -V(x))\varphi -\pi ^2\varphi ^5=0,$$ (6)

subject to the condition of fixed total particle number $`N=\int dx\varphi ^2`$, which determines the chemical potential $`\mu `$. For an external potential that varies slowly on the scale of the interparticle spacing, the derivative term in (6) can be ignored: this gives the density profile in the Thomas-Fermi (TF) approximation:

$$n_{TF}(x)=\varphi _{TF}^2=\left[2m(\mu -V(x))\right]^{1/2}/\pi \hbar ,$$ (7)

with the density being zero in the classically forbidden region $`\mu <V(x)`$. For the practically important case of a harmonic trap, $`V=m\omega ^2x^2/2`$, the density profile is elliptical:

$$n_{TF}(x)=\left[2m\mu -m^2\omega ^2x^2\right]^{1/2}/\pi \hbar .$$ (8)

The chemical potential is given by $`\mu _{TF}=\hbar \omega N`$, the density in the center of the trap is $`n_{TF}(0)=(2m\hbar \omega N)^{1/2}/\pi \hbar `$, and the size of the trapped condensate is $`2(2\hbar N/m\omega )^{1/2}`$.
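Anticipating the comparison made below, these trap formulas are easy to check numerically. The following sketch (ours, with $`\hbar =m=\omega =1`$) integrates the TF profile (8) to recover $`N`$ and compares it with the exact density of $`N`$ noninteracting fermions in the trap, to which the dilute 1D Bose gas is equivalent; the stable orbital recurrence is an implementation choice, not taken from the paper.

```python
import numpy as np

def fermion_density(x, N):
    """Exact density of N free fermions in a harmonic trap
    (hbar = m = omega = 1), summed over the N lowest orbitals via the
    stable three-term recurrence for the normalised eigenfunctions."""
    psi_prev = np.pi**-0.25 * np.exp(-x**2 / 2)      # ground state
    n = psi_prev**2
    psi = np.sqrt(2.0) * x * psi_prev                # first excited state
    for k in range(1, N):
        n += psi**2
        psi, psi_prev = (np.sqrt(2.0 / (k + 1)) * x * psi
                         - np.sqrt(k / (k + 1.0)) * psi_prev), psi
    return n

N = 100
x = np.linspace(-16, 16, 4001)
n_tf = np.sqrt(np.maximum(2.0 * N - x**2, 0.0)) / np.pi  # profile (8), mu = N
print("N from the TF profile:", np.trapz(n_tf, x))       # ~ 100
print("max |n_exact - n_TF| :", np.abs(fermion_density(x, N) - n_tf).max())
```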
The accuracy of these predictions can be tested against the exact solution of a dilute system of bosons with repulsive interactions, an ideal candidate being a system of point impenetrable bosons. The boson-fermion equivalence implies that in the many-body system, the single-particle energy levels $`E_n=\hbar \omega (n+1/2)`$ of the harmonic oscillator are occupied in a fermionic fashion, i.e. with no more than one particle per state. The chemical potential is then given by $`\mu =\hbar \omega (N+1/2)`$, which for large $`N`$ approaches our TF result $`\mu _{TF}=\hbar \omega N`$. Similarly the density profile can be computed as a sum of squares of the single-particle wave functions:

$$n(x)=\frac{1}{\pi ^{1/2}l}\underset{k=0}{\overset{N-1}{}}\frac{1}{2^kk!}H_k^2(x/l)\mathrm{exp}(-x^2/l^2),$$ (9)

where $`H_k`$ are Hermite polynomials and $`l=(\hbar /m\omega )^{1/2}`$. The density distribution (9) is plotted in Fig. 1, where it is compared with (a) the numerical solution of (6) with $`V=m\omega ^2x^2/2`$, and (b) the TF result (8), for different numbers of particles. The main flaw of the theory based on (6) is that it does not reproduce the density oscillations due to algebraic ordering of the particles. This is not surprising, as (akin to the GP approximation) the discreteness of the particles, which is responsible for the density oscillations, is ignored. Otherwise, the agreement between the approximate and the exact profiles is very good; in the limit of large particle number the differences become imperceptible. These results can be directly tested experimentally; as a comparison we note that the one-dimensional GP theory in the TF approximation predicts $`n_{TF}\propto \mu -V(x)`$, which is quite distinct from (7) and agrees very poorly with the exact result.

Solitons: Gray solitons have recently been created, and their dynamics observed, in cigar-shaped condensates of $`{}^{87}\mathrm{Rb}`$ vapors, which makes it important to understand solitonic properties of the system (3) and (4). Let us look for solutions to (4) (with $`V=0`$) of the form $`\mathrm{\Phi }(x,t)=\varphi (x,t)e^{-i\mu t/\hbar }`$. The function $`\varphi `$ then obeys the equation

$$i\hbar \partial _t\varphi =\frac{\hbar ^2}{2m}\left[-\partial _x^2\varphi +\pi ^2(|\varphi |^4-\varphi _0^4)\varphi \right],$$ (10)

where the chemical potential $`\mu =\pi ^2\hbar ^2\varphi _0^4/2m`$ is selected so that the particle density $`n_0=\varphi _0^2`$ is constant at infinity. In dimensionless variables $`f=\varphi /\varphi _0`$, $`y=\pi n_0x`$, $`\tau =\pi ^2n_0^2\hbar t/m`$, Eq. (10) simplifies to

$$2i\partial _\tau f=-\partial _y^2f+(|f|^4-1)f.$$ (11)

We will be looking for a localized traveling wave solution to (11) of the form $`f(y,\tau )=f(y-\beta \tau )`$, where the dimensionless velocity $`\beta `$ is measured in units of the sound velocity $`c=\pi \hbar n_0/m`$. This problem can be solved exactly. The results are conveniently described in terms of the amplitude $`A`$ and phase $`\theta `$ of the dimensionless order parameter $`f=Ae^{i\theta }`$:

$`A^2`$ $`=`$ $`1-{\displaystyle \frac{3(1-\beta ^2)}{2+(1+3\beta ^2)^{1/2}\mathrm{cosh}[2(1-\beta ^2)^{1/2}(y-\beta \tau )]}}`$ (12)

$`2\theta `$ $`=`$ $`\mathrm{cos}^{-1}\left[{\displaystyle \frac{(3\beta ^2/A^2)-1}{(1+3\beta ^2)^{1/2}}}\right].`$ (13)

The spatial behavior given by (12) is shown in Fig. 2. The amplitude in (12) describes a moving depression (particle deficit) with the minimal value at the soliton center given by $`A^2(0)=(1+3\beta ^2)^{1/2}-1`$. The soliton exists only for $`\beta <1`$ (i.e.
the soliton velocity cannot exceed the speed of sound); for $`\beta =1`$, (12) gives the uniform result $`A^2=1`$. On the other hand, for $`\beta =0`$ (i.e. a vortex, or dark soliton) the minimal value of the amplitude at the soliton center drops to zero. The phase expressed in (13) varies rapidly in the vicinity of the amplitude dip, staying approximately constant far away from it. The total phase shift across the soliton can be found as $`\mathrm{\Delta }\theta =\mathrm{cos}^{-1}[(3\beta ^2-1)/(1+3\beta ^2)^{1/2}]`$. It is a continuous function of the soliton velocity, varying between $`\pi `$ (when $`\beta =0`$) and zero (when $`\beta =1`$). Antisolitons may be defined as having opposite signs of $`d\theta /dy`$, and there are no constraints on $`\mathrm{\Delta }\theta `$ for the open line or ring geometries (provided the number of solitons matches the number of antisolitons). However, if there is an imbalance of solitons and antisolitons in the ring geometry, then the uniqueness of the order parameter $`f(y,\tau )`$ implies that $`\mathrm{\Delta }\theta `$ is a fraction of $`2\pi `$ for any excess soliton; this will in turn mean that the excess soliton velocity is quantized. The solution (12) bears some similarity with the one-dimensional soliton of the GP theory; the main qualitative difference (seen after recovering the original units) is that in the dilute limit the soliton size is of order $`1/[(1-\beta ^2)^{1/2}n_0]`$, independent of the amplitude of the interparticle repulsion. General methods can be used to compute the soliton energy $`E`$ and momentum $`P`$. For their dimensionless counterparts $`ϵ=2mE/\pi ^2\hbar ^2n_0^2`$ and $`𝒫=P/\pi \hbar n_0`$ we find

$`ϵ`$ $`=`$ $`{\displaystyle \frac{\sqrt{3}}{\pi }}(1-\beta ^2)\mathrm{ln}\left\{{\displaystyle \frac{2+[3(1-\beta ^2)]^{1/2}}{(1+3\beta ^2)^{1/2}}}\right\}`$ (14)

$`𝒫`$ $`=`$ $`-{\displaystyle \frac{\beta }{1-\beta ^2}}ϵ+{\displaystyle \frac{1}{\pi }}\mathrm{cos}^{-1}\left[{\displaystyle \frac{3\beta ^2-1}{(1+3\beta ^2)^{1/2}}}\right].`$ (15)

The dependencies $`ϵ(\beta )`$ and $`𝒫(\beta )`$ parametrically define the soliton dispersion law $`ϵ(𝒫)`$, which should be identified with the “hole” branch of the elementary excitation spectrum. To assess the accuracy of the $`ϵ(𝒫)`$ given by (14) and (15), we compare it with the exact result of Lieb for the system of $`\delta `$-interacting bosons in the dilute limit $`\hbar ^2n/mu_0\ll 1`$: $`ϵ_{\mathrm{exact}}(𝒫)=2|𝒫|-𝒫^2`$, for $`|𝒫|\le 1`$. Since the velocity $`\beta `$ in (14) and (15) varies between zero and unity, the momentum (which we choose to be positive) computed from (15) varies between unity and zero, in correspondence with the exact result. It is straightforward to show that for $`𝒫\ll 1`$, the elimination of $`\beta `$ from (14) and (15) leads to $`ϵ=2𝒫`$, which is again in agreement with the exact result. The behavior $`ϵ(𝒫)`$ implied by (14) and (15) in the vicinity of the end-point of the spectrum $`𝒫=1`$ is qualitatively similar, and quantitatively close, to the exact dependence. To illustrate these statements we have plotted the dispersion law in Fig. 3 against the exact result.

Superflow: The dimensionless current density is given by $`j=A^2\partial _y\theta `$, and below we look for solutions with fixed given current $`j`$ (i.e. the steady state) and $`\partial _\tau A=\partial _\tau \theta =0`$.
Substituting $`f=Ae^{i\theta }`$ and $`j=A^2d\theta /dy`$ into (11), and imposing fixed chemical potential, we find:

$$\frac{d^2A}{dy^2}=\frac{j^2}{A^3}+A^5-A.$$ (16)

In the spatially uniform state $`\frac{d^2A}{dy^2}=0`$ and one finds the dimensionless amplitude $`A_{\infty }^4=[1+(1-4j^2)^{1/2}]/2`$, which implies that superflow reduces the amplitude of the order parameter. The uniform solution, and thus superfluidity, cease to exist above the critical flow $`j_c=1/2`$, when the amplitude drops down to its minimal value $`A_{\infty }^c=2^{-1/4}`$. These results imply that the critical velocity for superfluidity in the original units is $`c/\sqrt{2}`$. Equation (16) also has an immobile, well-localized solution in the form of a dip of the order parameter; far away from the dip the amplitude recovers to its uniform value. The dip solution is closely related to the soliton previously discussed. Indeed, in the reference frame moving with the flow, the dip solution is moving and thus is identical to a soliton. The functional form of the dip can be deduced from (12) by replacing $`\beta `$ by $`j/A_{\infty }^4`$, $`A`$ by $`A/A_{\infty }`$, and $`(y-\beta \tau )`$ by $`yA_{\infty }^2`$. The dip solution disappears altogether for $`j>j_c`$.

Self-similar solutions: The results derived so far have their counterparts in the context of the one-dimensional GP approximation. However, the theory based on Eqs. (3) and (4) allows self-similar solutions which do not exist in the one-dimensional GP theory. Below we only look at the cases consistent with the condition of conservation of total particle number. Consider the system of bosons placed in a harmonic trap. In dimensionless variables it is described by

$$2i\partial _\tau f=-\partial _y^2f+[|f|^4+w^2y^2]f,$$ (17)

where $`w=m\omega /\pi ^2\hbar n_0^2`$ is the dimensionless oscillator frequency ($`n_0`$ now has the meaning of a density introduced to make $`f`$ dimensionless; it should be determined from the complete solution of the problem). In contrast to (11) (and without loss of generality) we have shifted the origin of the chemical potential. The self-similar solution $`f=Ae^{i\theta }`$ derived from (17) has the form

$$A=\rho (\tau )^{-1/2}h(y/\rho (\tau )),\theta =\theta _0(\tau )+\frac{1}{2}\frac{d\mathrm{ln}\rho }{d\tau }y^2,$$ (18)

where the functions $`\rho (\tau )`$ and $`h(v)`$ obey the equations

$`{\displaystyle \frac{d^2\rho }{d\tau ^2}}`$ $`=`$ $`-w^2\rho +{\displaystyle \frac{\gamma }{\rho ^3}}`$ (19)

$`{\displaystyle \frac{d^2h}{dv^2}}`$ $`=`$ $`-\delta h+h^5+\gamma v^2h`$ (20)

where $`\gamma `$ and $`\delta `$ are arbitrary constants. Eq. (20) for the scaling function $`h(v)`$ has localized solutions only for $`\delta >0`$ and $`\gamma \ge 0`$: for $`\gamma =0`$ an explicit analytic solution to (20) can be written down, while for $`\gamma >0`$, (20) has the same functional form as the equation we encountered in determining the density profile in the harmonic trap (cf. (6) for $`V=m\omega ^2x^2/2`$). The dynamics of the length scale $`\rho (\tau )`$ can be understood by viewing (19) as a fictitious classical mechanics problem in the potential $`U=w^2\rho ^2/2+\gamma /2\rho ^2`$. This analogy implies that an initially localized cloud of bosons in free space ($`w=0`$) will expand asymptotically in a ballistic fashion: $`\rho (\tau )\propto \tau `$.
In the presence of the confining potential ($`w\ne 0`$) the scale $`\rho (\tau )`$ oscillates between maximum and minimum values; for $`\gamma =0`$ the dynamics of $`\rho `$ is the same as for a harmonic oscillator of frequency $`w`$. We have also performed direct numerical integration of the non-linear equation (4), and have confirmed the existence of both the similarity solutions and moving trains of solitons with quantized velocity, with amplitude and phase as given by (12) and (13). More details will be given in a future publication. In conclusion, we have presented a new continuum description of dilute Bose liquids appropriate for low-dimensional systems. For the case of one dimension, we have derived stationary properties, along with solitonic and similarity solutions. In particular, the latter have no analog in the GP theory. It is our hope that these results will be testable in BEC experiments in the near future.
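As a closing numerical illustration (ours, using the dimensionless units and the sign conventions of Eqs. (14) and (15) as printed above), one can check the limiting statements made for the soliton dispersion law:

```python
import numpy as np

def soliton(beta):
    """Dimensionless soliton energy and momentum, Eqs. (14) and (15)."""
    s = np.sqrt(1 + 3 * beta**2)
    eps = (np.sqrt(3) / np.pi) * (1 - beta**2) * np.log(
        (2 + np.sqrt(3 * (1 - beta**2))) / s)
    P = -beta / (1 - beta**2) * eps + np.arccos((3 * beta**2 - 1) / s) / np.pi
    return eps, P

print(soliton(0.0))      # dark soliton: momentum P = 1
eps, P = soliton(0.999)  # nearly at the speed of sound
print(eps / P)           # -> approximately 2: the phonon limit eps = 2P
```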
## 1 Introduction

Many types of discrete symmetries appear in supersymmetric models. They are usually broken at intermediate scales and cause the cosmological domain wall problem if the walls remain stable. The usual assumption is that non-renormalizable terms induced by gravitational interactions may explicitly break these symmetries and make such walls collapse on a cosmologically safe timescale. On the other hand, many models of supersymmetry breaking involve particles with masses of order the gravitino mass $`m_{3/2}`$ and Planck-mass-suppressed couplings. Coherent production of such particles in the early universe destroys the successful predictions of nucleosynthesis. This problem may be solved by a brief period of weak scale inflation. In this paper we examine whether the usual criterion for the safe decay of unstable domain walls can be applied when weak scale inflation takes place. In sections 2 and 3 we review the usual condition for unstable domain walls and the basic idea of thermal inflation. In section 4 we give illustrative examples of discrete symmetries which appear in many supersymmetric models. In section 5 we examine whether the usual criterion for the safe decay of cosmological domain walls is also applicable to supersymmetric models. We show that the decay process of these unstable domain walls should be changed significantly if thermal inflation occurs. As a result, the scenario for the safe decay of cosmological domain walls must be changed, depending on their scales and interactions. We also make a brief comment on the cosmological structure formation which may be induced by soft domain walls. If such walls are expanded during weak scale inflation, the conditions for structure formation will be changed.

## 2 Collision of the Cosmological Domain Walls

In this section we briefly review how to estimate the value of the pressure (i.e., explicit breaking) needed to safely remove the walls. The crudest estimate we can make is to insist that the walls are removed before they dominate over the radiation energy density in the universe. When the discrete symmetry is broken by gravitational interactions, the symmetry is an approximate discrete symmetry. The degeneracy is broken and an energy difference $`ϵ\ne 0`$ appears. Regions of higher-density vacuum tend to shrink; the corresponding force per unit area of the wall is $`ϵ`$. The energy difference $`ϵ`$ becomes dynamically important when this force becomes comparable to the force of the tension, $`f\sim \sigma /R`$, where $`\sigma `$ is the surface energy density of the wall. For walls to disappear, this has to happen before the walls dominate the universe. On the other hand, the domain wall network is not a static system. In general, the initial shape of the walls right after the phase transition is determined by the random variation of the scalar VEV. One expects the walls to be very irregular, random surfaces with a typical curvature radius which is determined by the correlation length of the scalar field. To characterize the system of domain walls, one can use a simulation. The system will be dominated by one large (infinite size) wall network and some finite closed walls (cells) when they form. The isolated closed walls smaller than the horizon will shrink and disappear soon after the phase transition.
The walls smaller than the horizon size will efficiently disappear, so that only walls at the horizon size will remain; their typical curvature scale will therefore be the horizon size, $`R\sim t\sim M_p/g_*^{1/2}T^2`$. Since the energy density of the wall $`\rho _w`$ is about

$$\rho _w\sim \frac{\sigma }{R},$$ (2.1)

and the radiation energy density $`\rho _r`$ is

$$\rho _r\sim g_*T^4,$$ (2.2)

one sees that the wall dominates the evolution below a temperature $`T_w`$:

$$T_w\sim \left(\frac{\sigma }{g_*^{1/2}M_p}\right)^{\frac{1}{2}}.$$ (2.3)

To prevent wall domination, one requires the pressure to have become dominant before this epoch,

$$ϵ>\frac{\sigma }{R_w}\sim \frac{\sigma ^2}{M_p^2}.$$ (2.4)

Here $`R_w`$ denotes the horizon size at wall domination. A pressure of this magnitude would be produced by higher-dimensional operators which explicitly break the discrete symmetry. The criterion (2.4) seems appropriate if the scale of the wall is higher than $`(10^5\mathrm{GeV})^3`$. For walls below this scale ($`\sigma \lesssim (10^5\mathrm{GeV})^3`$), there should be further constraints coming from primordial nucleosynthesis. Since the time associated with the collapsing temperature $`T_w`$ is $`t_w\sim M_p^2/g_*^{1/2}\sigma \sim 10^8\left(\frac{(10^2\mathrm{GeV})^3}{\sigma }\right)`$ sec, walls with $`\sigma \lesssim (10^5\mathrm{GeV})^3`$ will decay after nucleosynthesis. If the walls are not hidden and can decay into standard model particles, the entropy produced when the walls collide will violate the phenomenological bounds from nucleosynthesis. On the other hand, this simple bound ($`\sigma \gtrsim (10^5\mathrm{GeV})^3`$) is not effective for walls which cannot decay into standard model particles. For walls such as soft domain walls, the succeeding story should strongly depend on the details of the hidden components and their interactions. Such walls can decay late and contribute to large scale structure formation.

## 3 Weak scale inflation

Supersymmetry is probably one of the most attractive extensions of the standard model. By virtue of supersymmetry, the hierarchy can be stabilized against radiative corrections. However, surveying the cosmology of supersymmetric models, one faces various difficulties. One of the most obvious and famous is the gravitino problem. This problem persists even if the universe experiences a primordial inflation, since the gravitino is reproduced during the reheating process. The mass of the gravitino depends on the mechanism of supersymmetry breaking. In the supergravity mediated models of supersymmetry breaking, the gravitino has a mass of the electroweak scale ($`m_{3/2}\sim 10^2`$–$`10^3`$ GeV), and it decays soon after big bang nucleosynthesis. High energy photons produced by the gravitino decay may destroy the usual assumptions of big bang nucleosynthesis. Another example is the gauge mediated supersymmetry breaking models, in which the predicted gravitino mass is much lighter than in the supergravity mediated models. The gravitino mass is expected to be $`m_{3/2}\sim 10`$ eV–$`1`$ GeV, and the gravitino is cosmologically stable. If the gravitino mass is larger than $`1`$ keV, the universe will be overclosed unless the gravitino is diluted at an earlier epoch. The gravitino problem is a common feature of superstring inspired models, because the moduli fields play the same role as the gravitino. However, the cosmological moduli problem has a different feature. Because a modulus is a scalar field, it has a potential which is inevitably flat but lifted by the moduli mass $`m_\varphi \sim m_{3/2}`$.
In the supergravity mediated models of supersymmetry breaking, the moduli fields decay soon after big bang nucleosynthesis, like the gravitino, causing the same cosmological problem. In the gauge mediated models, which predict a lighter mass for the moduli, the energy of the oscillation persists and overcloses the universe for $`m_{3/2}<100`$ MeV, or the decay of the moduli gives too large a contribution to the x($`\gamma `$)-ray background spectrum for $`m_{3/2}\sim 10^{-1}`$–$`10^4`$ MeV. The most promising way to evade these difficulties is to dilute these unwanted relics after the primordial inflation. One such dilution mechanism is the thermal inflation model proposed by Lyth and Stewart. Thermal inflation occurs before the electroweak phase transition and produces entropy before big bang nucleosynthesis, which dilutes the moduli density. During thermal inflation the flaton field (i.e., the inflaton field for thermal inflation) is held at the origin by finite temperature effects. The potential energy during thermal inflation is the value $`V_0`$ of the flaton potential at the origin, which is of order $`m^2M^2`$. With $`M\sim 10^{12}`$ GeV and $`m\sim 10^2`$ GeV, this gives $`V_0^{1/4}\sim 10^7`$ GeV, which satisfies the condition to avoid the excessive regeneration of light stable fields. Thermal inflation starts when the thermal energy density falls below $`V_0`$, which corresponds to a temperature roughly of order $`V_0^{1/4}`$, and it ends when the finite temperature effects become ineffective at a temperature of order $`m`$. The number of e-folds is $`N_e^{th}=\frac{1}{2}\mathrm{ln}(M/m)\sim 10`$, which is much smaller than for the primordial inflation. There is also the intriguing possibility that two or more bouts of thermal inflation occur in succession, allowing a more efficient solution of the moduli problem. In such cases the number of e-folds will be about $`N_e=10`$–$`25`$.

## 4 Unstable Domain Walls

Discrete symmetries appear in many supersymmetric models. The origins of such symmetries can be traced back to the $`U(1)_R`$ anomaly in the dynamical sector, or to the discrete symmetries in superstring models which appear as the consequence of possible compactification schemes. In this section we give a collective review of such discrete symmetries. Here we consider a domain wall with energy scale $`\sigma \sim \mathrm{\Lambda }_w^3`$.

R-charged: In the supergravity mediated models for dynamical supersymmetry breaking, gaugino condensation in the hidden sector of order $`10^{10}`$–$`10^{12}`$ GeV is mediated by gravity, bulk fields or other interactions such as an anomalous $`U(1)_X`$ field, and induces soft supersymmetry breaking terms in the observable sector. In the simplest hidden sector model (supersymmetric Yang-Mills), the $`U(1)_R`$ symmetry in the hidden sector is broken by the anomaly and a discrete R symmetry remains. The discrete R symmetry is then spontaneously broken when the gauginos condense. This is a well-known example of the BPS domain wall in global supersymmetric gauge theory. One may expect that the domain wall structure in the hidden dynamical sector is a common feature of such models for dynamical supersymmetry breaking. On the other hand, in the gauge mediated models for dynamical supersymmetry breaking, the $`U(1)_R`$ symmetry is strongly connected to the supersymmetry breaking. The presence of a $`U(1)_R`$ symmetry is a necessary condition for supersymmetry breaking, and a spontaneously broken $`U(1)_R`$ symmetry is a sufficient condition provided two conditions are satisfied. These conditions are genericity and calculability.
This means that the domain wall structure in the dynamical sector is not a common feature of the gauge mediated supersymmetry breaking models. However, there are many models which do not satisfy the genericity condition, where the $`U(1)_R`$ symmetry will be anomalous. In these models a discrete R symmetry is implemented and is spontaneously broken at a relatively low energy scale of order $`10^5`$–$`10^9`$ GeV. In these models for dynamical supersymmetry breaking, the dynamical sector may have a domain wall configuration at the intermediate scale $`\mathrm{\Lambda }_w\sim 10^5`$–$`10^9`$ GeV. The R-charged domain wall configuration may also appear as the consequence of the spontaneous breaking of an explicit $`Z_n^R`$ symmetry, which is sometimes imposed by hand in order to solve phenomenological difficulties such as the $`\mu `$-problem or the cosmological moduli problem. In these models the scales of the domain walls are determined by phenomenological requirements, typically $`\mathrm{\Lambda }_w\sim 10^5`$–$`10^{10}`$ GeV. When the universe undergoes a phase transition associated with the spontaneous breaking of such discrete symmetries, domain walls will inevitably form. These domain walls are generally not favorable if they are stable. In ref. we have shown that a constant term in the superpotential always breaks the degeneracy of vacua when supergravity is turned on, and that the pressure induced by the constant term satisfies the usual condition for the safe decay of unwanted domain walls. In general, the constant term is required to make the cosmological constant very small, which is an inevitable feature of any phenomenological model of supergravity. The magnitude of the energy difference $`ϵ`$ induced by the constant term is $`ϵ\sim \sigma ^2/M_p^2`$, where $`\sigma `$ is the surface energy density of the wall. This satisfies the usual condition for the safe decay of the cosmological domain wall. Although the model is technically non-generic, because it includes a single term which explicitly breaks the $`Z_n^R`$ symmetry, it is still a reasonable model for a vanishing cosmological constant. In this sense, the basic idea is similar to the well-known mechanism for the mass generation of the R-axion.

Not R-charged: On the other hand, for walls which do not carry R charge, the explicit symmetry breaking term must be added by hand. Since gravitational interactions do not respect global symmetries such as the discrete symmetries we are concerned with, the explicit breaking terms may appear as higher order terms suppressed by the Planck mass. In this case, however, there is no reason to expect that the magnitude of the energy difference appears at its lowest bound $`ϵ\sim \sigma ^2/M_p^2`$.

## 5 Unstable Domain Walls and Thermal Inflation

In this section we discuss the formation, evolution and collapse of cosmological domain walls, paying attention to the changes that should be induced by weak scale inflation. In general, the initial shape of the walls right after the phase transition is determined by the random variation of the scalar VEV. One expects the walls to be very irregular, random surfaces with a typical curvature radius which is determined by the correlation length of the scalar field. To characterize the system of domain walls, one can use a simulation. The system will be dominated by one large (infinite size) wall network and some finite closed walls (cells) when they are produced. The isolated closed walls smaller than the horizon will shrink and disappear soon after the phase transition.
As a result, only a domain wall stretching across the horizon will remain. The initial distribution of the cosmological domain walls after the primordial inflation is not determined solely by the thermal effects of reheating. In some cases the non-linear dynamics of the fields (parametric resonance, for example) will be important. We do not discuss these topics further here, and simply assume that the walls are produced just after the end of the primordial inflation. For walls with $`\mathrm{\Lambda }_w<10^{11}`$ GeV and $`ϵ\sim \sigma ^2/M_p^2`$, thermal inflation occurs before they collapse. Such walls may experience a large, though not huge, number of e-folds. Extended structures arising from such weak scale inflations are not necessarily inflated away. Since the cells of the false vacuum cannot decay soon if they are much larger than the horizon scale (here the bubble nucleation rate is extremely small; we do not consider false vacuum annihilation induced by bubble nucleation, because such a scenario is not realistic in our model), we expect that additional constraints should be required for such walls to decay. When thermal inflation starts, the initial scale of the domain wall network is the same as the particle horizon. It is of order $`H_{th}^{-1}\sim (V_0^{1/2}/M_p)^{-1}`$, where $`H_{th}`$ and $`V_0`$ denote the Hubble constant and the vacuum energy during thermal inflation. The cells inflate during thermal inflation, and then reach a scale of order $`l_e=H_{th}^{-1}e^{N_{th}}`$. Here $`N_{th}`$ denotes the number of e-folds of thermal inflation. Since weak scale inflation may occur in succession, the number of e-folds $`N_{th}`$ will be the sum over these succeeding weak scale inflations; $`N_{th}\sim 10`$–$`25`$. At the end of weak scale inflation (at the time $`t=t_0`$),

$$\left(\frac{l}{d_H}\right)_{t=t_0}\sim e^{N_{th}},$$ (5.1)

where $`d_H`$ denotes the particle horizon. After thermal inflation, the coherent oscillation of the flaton field $`\varphi `$ starts. The expansion during this epoch is estimated as

$$l=l_{t=t_0}\left(\frac{\rho _\varphi }{V_0}\right)^{-\frac{1}{3}}.$$ (5.2)

The horizon size during preheating is estimated as

$$d_H\sim \left(\frac{\rho _\varphi ^{1/2}}{M_p}\right)^{-1}.$$ (5.3)

Thus the ratio becomes

$$\left(\frac{l}{d_H}\right)_{t_{rad}>t>t_0}\sim e^{N_{th}}\left(\frac{\rho _\varphi }{V_0}\right)^{1/6}.$$ (5.4)

Here $`t_{rad}`$ denotes the time when radiation domination starts. Thereafter $`l`$ grows like $`T^{-1}`$, where $`T`$ denotes the temperature in the radiation dominated era. Assuming radiation-dominated expansion, the ratio will be

$$\left(\frac{l}{d_H}\right)_{t>t_{rad}}\sim \left(\frac{l}{d_H}\right)_{t=t_{rad}}\left(\frac{T(t)}{T_D}\right).$$ (5.5)

Here $`T_D`$ denotes the reheating temperature after thermal inflation. The domain wall network enters the particle horizon at the time $`t_s`$ when the ratio becomes unity. Assuming that this occurs before radiation energy dominates the universe, $`\rho _\varphi `$ at the time $`t_s`$ is

$$\rho _\varphi |_{t=t_s}\sim e^{-6N_{th}}V_0.$$ (5.6)

In the case that the expansion during thermal inflation is about $`10^5`$ and $`V_0^{1/4}`$ is about $`10^7`$ GeV, $`\rho _\varphi |_{t=t_s}`$ is estimated as $`\rho _\varphi |_{t=t_s}\sim 10^{-2}(\mathrm{GeV})^4`$. Here we assumed that the reheating temperature of the thermal inflation is very low ($`T_D<1`$ GeV) in order to ensure sufficient entropy production. As we have discussed in Sec. 2, the walls that do not decay until they dominate the universe must be excluded.
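The chain of estimates (5.1)–(5.6) is easy to check numerically; the short Python sketch below (ours) reproduces the $`10^{-2}(\mathrm{GeV})^4`$ figure and the wall-domination scale quoted next. The reduced Planck mass value is an assumed convention.

```python
import numpy as np

expansion = 1e5                  # e^{N_th}, the quoted expansion factor
V0 = (1e7)**4                    # V_0 in GeV^4, from V_0^{1/4} ~ 1e7 GeV
rho_entry = V0 / expansion**6    # Eq. (5.6): rho_phi = e^{-6 N_th} V_0
print(f"rho_phi at horizon entry ~ {rho_entry:.0e} GeV^4")   # ~ 1e-2

# Walls dominate at entry if sigma/d_H > rho_phi, with d_H ~ M_p/rho^{1/2};
# the reduced Planck mass below is an assumed convention.
M_p = 2.4e18                     # GeV
sigma_crit = np.sqrt(rho_entry) * M_p
print(f"critical wall scale ~ {sigma_crit**(1/3):.0e} GeV")  # ~ 1e6 GeV
```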
In this case, the walls that dominate the universe when they enter the horizon (i.e., $`\mathrm{\Lambda }_w\gtrsim 10^6`$ GeV) are ruled out. This bound seems very severe, since the walls below this scale should be further constrained by primordial nucleosynthesis. On the other hand, one may note that walls of scale $`\mathrm{\Lambda }_w<10^7`$ GeV would be produced during thermal inflation and suffer less expansion. Including this effect, the above constraint can be relaxed to allow walls of scale $`\mathrm{\Lambda }_w\sim 10^5`$–$`10^6`$ GeV. As a result, in many types of supersymmetric theories, walls of the intermediate scale produced just after primordial inflation cannot decay before they dominate the energy density of the universe, even if they satisfy the usual condition. It must also be noted that this result does not depend on the magnitude of the explicit breaking terms. If the magnitude of the explicit breaking terms exceeds Vilenkin’s lowest bound, the domination by the false vacuum energy begins at an earlier epoch. As a result, the situation becomes worse for larger magnitudes of the explicit breaking terms. Of course, there may be an exception if the explicit breaking terms are so large that the walls decay before weak scale inflation. This may happen for walls with $`\mathrm{\Lambda }_w>V_0^{1/4}`$, although the requirement becomes very severe. For example, when $`\mathrm{\Lambda }_w=10^9`$ GeV the required value of the energy difference is about $`ϵ>10^5\sigma ^2/M_p^2`$, which is more than $`10^5`$ times larger than the usual bound. Another way to avoid the domain wall problem is to gauge the discrete symmetry so that there is really one vacuum. However, nontrivial anomaly cancellation conditions must then be satisfied, which sometimes places a fatal constraint on the components of the model.

## 6 Conclusions and Discussions

In this paper we have studied the decay processes of unstable domain walls and shown that these processes should be changed significantly if weak scale inflation takes place. As a result, the usual condition for the safe decay of cosmological domain walls must be changed. For walls which can decay into particles of the standard model, the energy scales of the walls are strongly restricted. Although the above constraints look severe, there are other possibilities related to structure formation: cosmology with ultra-light pseudo-Nambu-Goldstone bosons. Contrary to the ordinary types of cosmological domain walls which we have discussed above, expansion during thermal inflation is good news for such scenarios. Late decay of the soft domain walls can be realized as a natural consequence of thermal inflation, and can contribute to large scale structure formation. We will study this topic in the next paper.

## 7 Acknowledgment

We wish to thank N.Sakai, N.Maru, Y.Sakamura and Y.Chikira for many helpful discussions.
# New way to achieve synchronization in spatially extended systems ## Abstract We study the spatio-temporal behavior of simple coupled map lattices with periodic boundary conditions. The local dynamics is governed by two maps, namely, the sine circle map and the logistic map respectively. It is found that even though the spatial behavior is irregular for the regularly coupled (nearest neighbor coupling) system, the spatially synchronized (sometime chaotically synchronized) as well as periodic solution may be obtained by the introduction of three long range couplings at the cost of three nearest neighbor couplings. Recently there have been lot of interests in the small world network systems . The small world network systems lie in between the regular and the random networks where only a small fraction of long range links are introduced at the cost of the equal number of regular short range links keeping the total number of links conserved. The properties of such systems have been thoroughly studied very recently because of the fact that many biological, technological or social networks fall in this category. Intuitively it is expected that the information or source in the small world network will spread quickly through out the system because of the presence of some long range links. On the other hand a number of phase models have been proposed over the recent years to describe the dynamical behavior of large population of nonlinear oscillators subject to a variety of coupling mechanisms . A major phenomenon that can be observed is the possibility of self synchronization among the members of the population. These can represent the fire flies, heart pace maker cells, pancreatic beta cells, neurons etc. as well as the circuit arrays . Different cells (units) in different systems may be locally governed by some specific rule. For example it may be governed by the sine circle map, logistic map or some differential equations depending on the physical or biological system. Furthermore different cells may be coupled to each other in various possible ways. Even if the local dynamics is regular, the spatial as well as the temporal behavior of the regularly coupled system may be irregular. It might happen other way also, i.e., the dynamics of regularly coupled chaotic oscillators may turn out to be regular. We are interested to confine ourselves in the parameter regime where the spatial behavior of a regularly coupled lattice becomes irregular. Then the question we ask is whether it is possible to control the spatial irregularity by introducing some small fraction of long range couplings at the cost of the equal number of nearest neighbor couplings keeping the total number of couplings conserved. If so, then that would be a new way to control spatial irregularity in a spatially extended systems. Furthermore, if the solution turns out to be synchronized spatially but chaotic temporally, it will find good application to the secret communication network. We would like to investigate these aspects in one dimensional lattice where the local dynamics of individual lattice site is governed either by the sine circle map or the logistic map . The coupling we consider here is the simple unidirectional nearest neighbor coupling. First we discuss the unidirectionally coupled map lattice with the local dynamics defined by the sine circle map and then by the logistic map. Here we consider a one dimensional periodic lattice of size $`N`$. 
The local dynamics at any site is governed by the following rule:

$$x\left(t+1\right)=f\left(x\left(t\right)\right)=x\left(t\right)+\mathrm{\Omega }-\frac{K}{2\pi }\mathrm{sin}\left(2\pi x\left(t\right)\right)\mathrm{mod}1$$ (1)

where $`x`$ is the dynamical variable, $`t`$ is the time, $`\mathrm{\Omega }`$ is the frequency and $`K`$ is the nonlinearity parameter. The map given by Eq. (1) is known as the sine circle map and is very similar to the dynamical equation in circuits of Josephson junction arrays. If all the sites in the system were independent of each other, the dynamics of the individual sites would be governed by Eq. (1). However, we are interested in the dynamics of the system when all the sites are directly or indirectly coupled to each other. The lattice sites may interact with each other in various possible ways, but here we consider the situation where the dynamical variable at the $`i^{th}`$ site is governed by the following rule:

$`x_i(t+1)=(1-ϵ)f(x_i(t))+ϵf(x_{i+1}(t));1\le i<N,`$ (2)

$`\mathrm{and}`$ (3)

$`x_N(t+1)=(1-ϵ)f(x_N(t))+ϵf(x_1(t))`$ (4)

where $`ϵ`$ is the coupling parameter. The spatio-temporal behavior of such systems has been studied extensively. Even though the local dynamics is chaotic, the global dynamics of the system may turn out to be chaotic or regular depending on the parameters, the initial condition, and the nature of the coupling. Our intention is to start with a random initial condition and fix the parameters such that the dynamics of the system becomes chaotic both spatially and temporally with the regular nearest neighbor coupling described by Eq. (2). For example, we choose the parameter values $`ϵ=0.5`$, $`\mathrm{\Omega }=0.3`$ and $`K=\sqrt{2}`$. We consider the lattice size $`N=90`$. We allow the system to evolve for 5000 steps before determining its state. The spatial behavior of the system described by Eq. (2) is shown in Fig. (1), where the $`x_i`$’s are plotted as a function of $`i`$ for $`t=5001`$. We see that there is no regular spatial behavior. The temporal evolution at every site is also found to be irregular; the temporal evolution at the first site is shown in Fig. (2) as an example. We now introduce the concept of small-world networks into spatially extended systems, namely coupled map lattices. Suppose that we have a periodic one-dimensional lattice of $`N`$ sites. If the lattice sites are coupled to their nearest neighbor sites unidirectionally, there will be $`N`$ couplings or bonds. We will use the word bond in place of coupling for convenience. A fraction of these $`N`$ bonds are replaced by long range couplings. The replacement of nearest neighbor bonds (couplings) by long range bonds is made randomly. In the process of replacement, we make sure that duplicates are not allowed and no part of the system becomes isolated. Under such rearrangements, the total number of bonds remains conserved. Since the replacement of bonds is done randomly, there will be many possible configurations for a fixed number of replaced bonds. Therefore, it is necessary to check whether any of those configurations produces synchronized solutions. So, we consider the system where a fraction $`p`$ of the nearest neighbor bonds is replaced by randomly chosen long range bonds. The parameter values are chosen to be the same as in Fig. (1). Initially all the sites are populated randomly, and then they evolve according to Eq. (2) with the replacement of the fraction $`p`$ of bonds.
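This numerical experiment can be sketched in a few lines of Python (our illustration; the safeguards against duplicate bonds and disconnected lattices mentioned above are omitted here for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps, Omega, K = 90, 0.5, 0.3, np.sqrt(2)

def f(x):
    """Sine circle map, Eq. (1)."""
    return (x + Omega - K / (2 * np.pi) * np.sin(2 * np.pi * x)) % 1.0

regular = np.roll(np.arange(N), -1)     # neighbour[i] = i+1 (periodic)
rewired = regular.copy()
p = 0.1                                 # fraction of bonds re-wired
for i in rng.choice(N, int(p * N), replace=False):
    rewired[i] = rng.integers(N)        # random long-range bond

x = rng.random(N)
for _ in range(5000):
    fx = f(x)
    x = (1 - eps) * fx + eps * fx[rewired]    # Eq. (2)
print("spatial spread:", x.max() - x.min())   # ~0 would signal synchrony
```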
The solution appears to be stable after 5000 iterations. Since there is a large number of possible configurations, the numerical experiment is carried out for 1000 configurations at each fixed value of $`p`$. The simulation is also carried out for different values of $`p`$ between 0 and 1. It is found that only two to ten per cent of the configurations lead to synchronized solutions for each value of $`p`$. For $`p=0.5`$ a larger number of configurations produce synchronized solutions. We thus find that there exist some configurations which produce synchronized solutions. Therefore, control of the spatially chaotic behavior is possible by the above mentioned mechanism, but the percentage of configurations producing synchronized solutions is very low. One has to go through various sets of re-wirings to achieve synchronization, which is therefore difficult to implement in physical systems. One needs a re-wiring procedure in which the least number of bonds is disturbed or replaced. Furthermore, one should know a systematic way of replacing bonds to achieve synchronization. Intuitively we suspect that the replacement of the nearest neighbor bonds may be made in a systematic manner: the bonds should be broken at regular intervals in the lattice and the disturbed vertices should be connected in a regular fashion. More explicitly, let us divide the lattice into $`n`$ segments, each of length $`L`$ equal to $`\frac{N}{n}`$. The bond between sites $`i`$ and $`i+1`$ should be broken and the $`i^{th}`$ site connected to the $`(i+L)^{th}`$ site. Next, the bond between sites $`(i+L)`$ and $`(i+L+1)`$ should be broken and the $`(i+L)^{th}`$ site connected to the $`(i+2L)^{th}`$ site, and so on. This procedure should be continued until the full lattice is covered. We note that if $`n=N`$ or $`L=1`$, the modified lattice remains the same as the regular one. Therefore, we expect that $`n`$ should lie between 1 and $`N-1`$. We therefore perform the numerical experiment for $`n`$ = 1, 2, 3, 4 and so on. In fact we find that the $`n`$ = 1 and $`n`$ = 3 cases work very well. Although, ideally, $`n`$ = 1 would be the most suitable solution, we also study $`n`$ = 3, since it is the most ideal case for studying multiple rewirings. We find that $`n`$ = 2 and $`n`$ = 4 have a lower probability of synchronizing than $`n`$ = 3, over wide regions of the $`\mathrm{\Omega }`$–$`ϵ`$ phase space. We will discuss these concepts in detail in a forthcoming article. For $`n`$ = 3, the re-wiring rule is given explicitly for a lattice of size 90: the bonds between sites 1 and 2, 30 and 31, and 60 and 61 are broken, and the links between sites 1 and 30, 30 and 60, and 60 and 90 are established. Furthermore, any arbitrary site can be chosen as site 1. The spatial behavior for this configuration is shown in Fig. (3). The parameter values are the same as those used for Fig. (1). We notice that total synchronization is achieved. Synchronization can be obtained for a lattice of any size with the above mentioned rule; the only difference is that for a larger lattice it takes a longer time to achieve synchronization. This is in fact not surprising. However, the temporal evolution is chaotic, so it may be called chaotic synchronization. We further note that even if site 1 is coupled to any site within a small window around the 30th site, synchronization is achieved. Thus a small defect in the re-wiring does not prevent synchronization.
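A minimal implementation of this systematic re-wiring is sketched below (ours). It follows the general $`i\to i+L`$ prescription; the explicit site list quoted above for $`N=90`$ differs from it by one site which, as just noted, does not matter.

```python
import numpy as np

def rewire(N=90, n=3):
    """Systematic rule: cut the bond leaving every L-th site (L = N/n)
    and connect that site to the one L sites ahead."""
    nb = np.roll(np.arange(N), -1)
    L = N // n
    for i in range(0, N, L):
        nb[i] = (i + L) % N
    return nb

rng = np.random.default_rng(1)
N, eps, Omega, K = 90, 0.5, 0.3, np.sqrt(2)
f = lambda x: (x + Omega - K / (2 * np.pi) * np.sin(2 * np.pi * x)) % 1.0
nb = rewire(N, 3)
x = rng.random(N)
for _ in range(5000):
    fx = f(x)
    x = (1 - eps) * fx + eps * fx[nb]
print("spread after re-wiring:", x.max() - x.min())  # ~0 for synchronization
```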
We clearly note that by disturbing only three regular nearest neighbor bonds and establishing three long range bonds in a regular way, one can achieve synchronization over a wide range of parameters. The phase diagram in the $`ϵ`$–$`\mathrm{\Omega }`$ plane for $`K=\sqrt{2}`$ is shown in Fig. 4. The shaded region (in Fig. 4) in the $`ϵ`$–$`\mathrm{\Omega }`$ plane is the allowed parameter regime where one achieves the synchronized solution starting from random initial conditions. We see that there is a large region in the $`ϵ`$–$`\mathrm{\Omega }`$ plane where synchronization is obtained through the re-wiring mechanism. The synchronized region for only one broken bond is larger than that of the three-bond case by about 20–30%. However, for any other number of rewirings, the allowed part of the phase space is much smaller. Next we consider the same system where the local dynamics is governed by the logistic map. The logistic map has been used extensively to model a number of physical as well as biological phenomena. The map is given by

$$x(t+1)=f\left(x(t)\right)=\mu x(t)\left(1-x(t)\right)$$ (5)

where $`\mu `$ is the parameter controlling the local dynamics. The dynamics of the variable $`x_i`$ at the $`i^{th}`$ site is governed by Eq. (2) with $`f\left(x(t)\right)`$ as given in Eq. (5), where $`t`$ represents the time and $`ϵ`$ is the coupling strength. In this system too, we find that spatially periodic, and sometimes synchronized, solutions may be obtained through the re-wiring rule stated earlier. In this system, we find that there is a critical value of the coupling strength $`ϵ`$ for each value of $`\mu `$. For example, all the $`x_i`$’s are plotted as a function of $`ϵ`$ for $`\mu =4`$ in Figs. (5) and (6). Fig. 5 is for the system where all the sites are regularly coupled to their nearest neighbors, while Fig. 6 is for the rewired lattice. We clearly see from Fig. 5 that there is no spatial regularity. On the other hand, for the rewired lattice (Fig. 6) there is a critical value of $`ϵ`$, say $`ϵ_{cr}=0.35`$, above which the solution is spatially periodic with period four. The spatial nature of the solution for the rewired lattice is verified in the entire $`ϵ`$–$`\mu `$ plane, as shown in Fig. 7. There are various regions corresponding to various kinds of solutions: the solution is spatially periodic with period one or four in region I, temporally as well as spatially irregular in region II, spatially periodic with period four in region III, and chaotically synchronized in region IV. Thus we see that the re-wiring mechanism helps us to control the spatial irregularity in a large region of the parameter space. Furthermore, we note that the parameter region for solutions that are both spatially and temporally periodic is also large. Therefore, this mechanism with the logistic map works well to control spatial and temporal irregularity in a large region of parameter space.
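The same experiment with logistic local dynamics can be sketched as follows (our illustration); rather than the spread, it tests for the spatial period-four pattern reported above:

```python
import numpy as np

N, mu = 90, 4.0
g = lambda x: mu * x * (1.0 - x)          # logistic map, Eq. (5)
nb = np.roll(np.arange(N), -1)
for i in (0, 30, 60):                     # the n = 3 re-wiring
    nb[i] = (i + 30) % N

rng = np.random.default_rng(2)
for eps in (0.2, 0.5):                    # below and above eps_cr ~ 0.35
    x = rng.random(N)
    for _ in range(5000):
        gx = g(x)
        x = (1 - eps) * gx + eps * gx[nb]
    dev4 = np.abs(x - np.roll(x, 4)).max()    # 0 for spatial period four
    print(f"eps = {eps}: deviation from a period-4 pattern = {dev4:.3f}")
```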
We notice that the region in the $`\mathrm{\Omega }`$–$`ϵ`$ space where synchronization occurs depends on how many of these rewirings are done. For one and three rewirings, synchronization occurs over large regions of the $`\mathrm{\Omega }`$–$`ϵ`$ space. We will establish the reason for this in a forthcoming article. Chaotic synchronization is achieved for the sine circle map with the re-wiring mechanism, while a spatially as well as temporally periodic solution is achieved for the logistic map. We thus find that the re-wiring mechanism helps us to achieve chaotic synchronization for some local dynamics. Chaotic synchronization is an important solution in Josephson junction arrays; furthermore, it is very useful as far as secret communication networks are concerned. This mechanism may be tested with other local dynamics existing in physical or biological systems.
# On 4-dimensional cosmological models locally embedded in a 5-dimensional Ricci-flat space

## Abstract

We employ a theorem due to Campbell to build some simple 4-dimensional cosmological models which originate from solutions describing waves propagating along the extra dimension of a 5-dimensional Ricci-flat space. The dimensional reduction is performed in the Jordan frame according to the induced-matter theory of Wesson.

*Dipartimento di Fisica dell’Università di Genova*
*Istituto Nazionale di Fisica Nucleare, Sezione di Genova*
*Via Dodecaneso 33, 16146 Genova, Italy*

PACS numbers: 04.50.+h, 11.10.Kk

An interesting version of 5-dimensional General Relativity has been developed in recent years by Wesson. The central thesis of his induced-matter theory is that the 4D Einstein field equations with matter,

$$G_{\alpha \beta }=8\pi T_{\alpha \beta },$$ (1)

are a subset of the 5D vacuum field equations in terms of the Ricci tensor,

$$R_{AB}=0.$$ (2)

The theory also allows one to obtain flat cosmological solutions containing the usual 4D perfect fluid energy-momentum tensor. Due to its primary significance in the present context, it is worth quoting a theorem due to Campbell, which states as follows:

Theorem: *Any analytic Riemannian space $`V_n(s,t)`$ can be locally embedded in a Ricci-flat Riemannian space $`V_{n+1}(s+1,t)`$ or $`V_{n+1}(s,t+1)`$.*

This theorem has been recently brought to light by Romero et al. and employed both in applying Wesson’s method and in investigating the embedding of lower-dimensional spacetimes. In this Brief Note we consider the cosmological solution describing waves propagating in the extra dimension of a (4+1)-dimensional Ricci-flat space and, after selecting particular modes, we obtain the corresponding (3+1)-dimensional cosmologies. The final result will be written in the Jordan frame, according to the dimensional reduction prescribed by the “induced-matter theory of Wesson”. We start from the 5D line element

$$ds_5^2=e^\omega (dr^2+r^2d\mathrm{\Omega }^2)-e^\nu dt^2+e^\mu dl^2$$ (3)

where $`d\mathrm{\Omega }^2=d\vartheta ^2+\mathrm{sin}^2\vartheta d\phi ^2`$ and $`l`$ is the extra coordinate. The metric coefficients $`\omega `$, $`\nu `$ and $`\mu `$ will depend in general on both $`t`$ and $`l`$, the dependence on $`r`$ being ruled out in the absence of sources. The case when sources are present and the metric coefficients depend only on $`r`$ has been treated in a more general context in ref. . Moreover we can put, by symmetry considerations, $`\nu =\mu `$. The relevant Ricci equations are:

$`3\nu ^{\prime }\omega ^{\prime }-3\omega ^{\prime 2}-2\nu ^{\prime \prime }-6\omega ^{\prime \prime }+3\dot{\nu }\dot{\omega }+2\ddot{\nu }=0`$ (4a)

$`3\nu ^{\prime }\omega ^{\prime }+2\nu ^{\prime \prime }+3\dot{\nu }\dot{\omega }-3\dot{\omega }^2-2\ddot{\nu }-6\ddot{\omega }=0`$ (4b)

$`\omega ^{\prime }\dot{\nu }+\nu ^{\prime }\dot{\omega }-\omega ^{\prime }\dot{\omega }-2\dot{\omega }^{\prime }=0`$ (4c)

$`3\omega ^{\prime 2}+2\omega ^{\prime \prime }-3\dot{\omega }^2-2\ddot{\omega }=0`$ (4d)

Here partial derivatives with respect to $`t`$ and $`l`$ are denoted by an overdot and a prime, respectively. One can immediately see that equation (4d) admits 3-brane wave solutions, propagating along the fifth dimension of a 5-dimensional bulk, of the form $`\omega _\pm =\omega (t\pm l)`$ and $`\nu _\pm =\nu (t\pm l)`$.
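This reduction can also be verified symbolically; the following minimal sketch (ours, using sympy) confirms that the ansatz $`\omega =\omega (t-l)`$ satisfies (4d) identically:

```python
import sympy as sp

t, l = sp.symbols('t l')
omega = sp.Function('omega')(t - l)       # travelling-wave ansatz
eq4d = (3 * sp.diff(omega, l)**2 + 2 * sp.diff(omega, l, 2)
        - 3 * sp.diff(omega, t)**2 - 2 * sp.diff(omega, t, 2))
print(sp.simplify(eq4d))                  # -> 0, identically
```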
Denoting by an asterisk derivatives with respect to $`t\pm l`$, equations (4a), (4b) and (4c) all become, after substitution: $$2\stackrel{*}{\nu }_\pm \stackrel{*}{\omega }_\pm -(\stackrel{*}{\omega }_\pm )^2-2\stackrel{**}{\omega }_\pm =0$$ (5) Therefore, selecting a particular form of $`\omega _\pm `$, the other metric coefficients are given by $$e^{\nu _\pm }=e^{\mu _\pm }=L\stackrel{*}{\omega }_\pm e^{\frac{\omega _\pm }{2}}$$ (6) where $`L`$ is a suitable constant of integration. Starting from the above solutions, which describe waves propagating along the extra dimension of a 5-dimensional Ricci-flat spacetime with line element $$ds_5^2=e^{\omega _\pm }(dr^2+r^2d\mathrm{\Omega }^2)+L\stackrel{*}{\omega }_\pm e^{\frac{\omega _\pm }{2}}(-dt^2+dl^2)$$ (7) we build some simple cosmological models in a 4-dimensional spacetime with line element $$ds_4^2=e^{\omega (t)}(dr^2+r^2d\mathrm{\Omega }^2)-L\dot{\omega }(t)e^{\frac{\omega (t)}{2}}dt^2$$ (8) obtained from (7) by a section with a hypersurface at constant $`l`$ chosen, without loss of generality, as $`l=0`$. Components of the Einstein tensor in mixed form, derived from the above metric (8), are $$\begin{array}{cc}& G_r^r=G_\vartheta ^\vartheta =G_\phi ^\phi =-\frac{e^{-\frac{\omega }{2}}(\dot{\omega }^2+\ddot{\omega })}{2L\dot{\omega }}\hfill \\ & G_t^t=-\frac{3e^{-\frac{\omega }{2}}\dot{\omega }}{4L}\hfill \end{array}$$ (9) We wish to match the terms in (9) with the components of the usual 4D perfect fluid energy-momentum tensor $`T_{\alpha \beta }=(p+\rho )u_\alpha u_\beta +pg_{\alpha \beta }`$. In our case the pressure and density are given by $`T_r^r=p`$ and $`T_t^t=-\rho `$, and therefore we can simply identify $`G_r^r`$ with $`8\pi p`$ and $`G_t^t`$ with $`-8\pi \rho `$. Of course the choice of the function $`\omega (t)`$ is to a large extent arbitrary so we suggest, to make some physically meaningful examples, the following one: $$\omega (t)=\alpha \mathrm{ln}\left(1+\frac{t}{\alpha L}\right)$$ (10) where $`\alpha `$ is an assignable constant. As a consequence, the line element (8) becomes $$ds_4^2=\left(1+\frac{t}{\alpha L}\right)^\alpha (dr^2+r^2d\mathrm{\Omega }^2)-\left(1+\frac{t}{\alpha L}\right)^{\frac{\alpha }{2}-1}dt^2$$ (11) and clearly characterizes a conformally flat spacetime when $`\alpha +2=0`$. To go further, it is useful to make in (11) the change of variable $$\tau =\{\begin{array}{cc}& \frac{4\alpha L}{\alpha +2}\left[\left(1+\frac{t}{\alpha L}\right)^{\frac{\alpha +2}{4}}-1\right]\text{if }\alpha +2\ne 0\hfill \\ & \\ & -2L\mathrm{ln}\left(1-\frac{t}{2L}\right)\text{if }\alpha +2=0\hfill \end{array}$$ (12) thus obtaining $$ds_4^2=\left(1+\frac{(\alpha +2)\tau }{4\alpha L}\right)^{\frac{4\alpha }{\alpha +2}}(dr^2+r^2d\mathrm{\Omega }^2)-d\tau ^2\text{if }\alpha +2\ne 0$$ (13) and $$ds_4^2=e^{\frac{\tau }{L}}(dr^2+r^2d\mathrm{\Omega }^2)-d\tau ^2\text{if }\alpha +2=0$$ (14) where in both cases $`1/(2L)`$ represents the Hubble constant $`H_0`$. Accordingly, pressure and density of the perfect fluid can be rewritten respectively as $$8\pi p=\frac{{\displaystyle \frac{2(1-\alpha )H_0^2}{\alpha }}}{\left(1+{\displaystyle \frac{(\alpha +2)H_0\tau }{2\alpha }}\right)^2}\qquad 8\pi \rho =\frac{3H_0^2}{\left(1+{\displaystyle \frac{(\alpha +2)H_0\tau }{2\alpha }}\right)^2}$$ (15) and $$8\pi p=-3H_0^2\qquad 8\pi \rho =3H_0^2$$ (16) It is apparent that the case $`\alpha +2=0`$ describes a de-Sitter Universe with cosmological constant $`\mathrm{\Lambda }=3H_0^2`$.
On the other hand, the case $`\alpha +2\ne 0`$ provides the equation of state of radiation, $`3p=\rho `$, when $`\alpha =2/3`$, and the equation of state of matter, $`p=0`$, when $`\alpha =1`$.
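These equations of state can be checked directly from Eq. (15). A short numerical sketch (ours; units with $`8\pi G=c=1`$ and an arbitrary $`H_0`$ are assumed):

```python
import numpy as np

def p_rho(alpha, tau, H0=1.0):
    """Pressure and density from Eq. (15), valid for alpha + 2 != 0."""
    s = (1.0 + (alpha + 2.0) * H0 * tau / (2.0 * alpha)) ** 2
    p = (2.0 * (1.0 - alpha) * H0**2 / alpha) / s / (8.0 * np.pi)
    rho = 3.0 * H0**2 / s / (8.0 * np.pi)
    return p, rho

tau = np.linspace(0.0, 5.0, 6)
for alpha, label in [(2.0 / 3.0, "alpha=2/3 (radiation, expect p/rho = 1/3)"),
                     (1.0, "alpha=1 (matter, expect p/rho = 0)")]:
    p, rho = p_rho(alpha, tau)
    print(label, "->", (p / rho)[0])
```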
no-problem/0002/hep-ph0002113.html
ar5iv
text
# 1 Introduction The EMC measurement of the integrated helicity parton distributions (for recent results at SLAC see ) triggered the interest in a deeper understanding of how the total angular momentum of the nucleon is shared among its constituents. It was found that the fraction of the nucleon spin carried by the quarks was rather small, at variance with the most naive quark model expectation, where the proton spin is (almost) entirely built from the spin of the quarks. Among the different explanations of these discrepancies, it was proposed that the (overlooked until then) polarization of the gluons might also contribute to the singlet axial charge through the axial anomaly. In that case, experimental data would be compatible with a rather large fraction of the spin carried by the quarks ($`\mathrm{\Delta }\mathrm{\Sigma }=0.45\pm 0.09`$ in a recent world data analysis ), though still far away from the non-relativistic quark model predictions. One of the most important issues raised by the ’spin crisis’ was the need to consider all the possible sources of angular momentum in the nucleon. Therefore, the spin sum rule should read : $$\frac{1}{2}\mathrm{\Delta }\mathrm{\Sigma }(Q^2)+\mathrm{\Delta }g(Q^2)+L_q(Q^2)+L_g(Q^2)=\frac{1}{2},$$ (1) where $`\frac{1}{2}\mathrm{\Delta }\mathrm{\Sigma }(Q^2)`$ ($`\mathrm{\Delta }g(Q^2)`$) is the spin carried by the quarks and antiquarks (gluons) and $`L_q(Q^2)`$ ($`L_g(Q^2)`$) is the orbital angular momentum (OAM) contribution of the quarks (gluons) . The significant role of OAM was pointed out several years ago , but the problem was rigorously formulated only recently, when a gauge invariant definition of the quark and gluon (twist-two) operators was proposed . In addition, considerable effort has been devoted to deriving evolution equations (at the one-loop level) for OAM observables . At present there is only one gauge invariant definition of quark OAM whose $`Q^2`$ evolution is known and which is experimentally accessible (for a discussion cf. Ref. ). Such a definition holds for reference frames with definite nucleon polarization, and the OAM distribution could be measured through the forward limit of skewed parton distributions. One peculiar feature, already expected on general grounds , is explicitly realized in the evolution equations, namely that the OAM distribution is coupled to the helicity parton distributions. As a consequence, OAM contributions can be generated, through evolution at higher scales, even in the case of vanishing OAM components at the low initial hadronic scale. This is indeed the case for most hadronic models where the quarks are arranged, in the ground state, in an $`l=0`$ S-wave configuration, stressing the crucial role of $`Q^2`$ evolution for the evaluation of spin observables or, in other words, the crudeness of identifying constituent quarks with partons at all energy scales. From a quantitative point of view, some studies are currently available . In particular, in ref. , the OAM distributions have been calculated for a number of hadronic models. As a first step these quantities are evaluated at the hadronic (low energy) scale and then evolved to the experimental $`Q^2\gg \mu _0^2`$ scale by using the Leading Order (LO) evolution procedure recently established . One central conclusion of that work is that sizeable initial OAM distributions can deeply influence the final high-energy results.
As a consequence, a clear difference arises between non-relativistic and relativistic models: while the former usually give a tiny OAM contribution at $`\mu _0^2`$, the latter may give rise to sizeable effects at high $`x`$ that persist after evolution. In this respect, OAM distributions are useful quantities to assess the relevance of relativistic effects in hadronic models of the nucleon. In a recent study we investigated the consequences of a light-front treatment of relativistic spin effects on the helicity distributions, and in the present paper we extend our analysis to OAM, investigating in detail the predictions of a light-front covariant quark model. As a matter of fact, the spin dynamics can be discussed within the light-front approach in a way which respects covariance requirements and is particularly suitable for discussing deep inelastic polarized processes, both at the hadronic and at the high-energy (partonic) scale . We will show that light-front covariant quark models (LFCQM) predict a non-vanishing OAM distribution whose main features survive after evolution. We will also see that these predictions hold for a variety of mass operators, indicating that the relevant ingredient is the relativistic treatment of the spin wave functions, absent in many traditional formulations of the quark model. The comparison with other relativistic models (MIT bag model) and the analysis of the moments that enter the spin sum rule will allow us to assess the reliability of LFCQM. ## 2 OAM at the hadronic scale In recent years a quark model-based approach has been developed for computing the non-perturbative inputs in the evolution equations and describing polarized and unpolarized parton distributions. Schematically, the central assumption is that at some low-energy scale ($`\mu _0^2`$) the nucleon is made up of valence quarks that can be identified with the constituents of the quark model (or the bag model in alternative treatments). Therefore, the non-perturbative boundary conditions can be evaluated by using low-energy models of the nucleon. Subsequent refinements led to the inclusion of non-perturbative gluons and sea at the hadronic scale $`\mu _0^2`$ , as well as the explicit partonic content of the constituent quarks . By following such a procedure we will assume that at the hadronic scale only valence quarks are resolved, so that the quark helicity distribution $`g_1^a(x,\mu _0^2)`$ for a given flavour $`a`$ is given in terms of the momentum density of the valence quarks: $$g_1^a(x,\mu _0^2)=\frac{1}{(1-x)^2}\int d\vec{k}\,\left(n_a^{\uparrow }(\vec{k})-n_a^{\downarrow }(\vec{k})\right)\,\delta \left(\frac{x}{1-x}-\frac{k^+}{M_N}\right),$$ (2) where $`x`$ is the Bjorken variable, $`M_N`$ the mass of the nucleon and $`k^+`$ is defined as a function of the parton momentum as $`k^+=\sqrt{\vec{k}^{\,2}+m^2}+k_z`$. The polarized momentum densities are defined as: $$n_a^{\uparrow (\downarrow )}(\vec{k})=\langle N,J_z=1/2|\underset{i=1}{\overset{3}{\sum }}𝒫_a\frac{1\pm \sigma _z^{(i)}}{2}\,\delta (\vec{k}_i-\vec{k})|N,J_z=1/2\rangle ,$$ (3) where $`𝒫_a`$ is the flavour projector.
An analogous definition can be worked out for the OAM distribution : $$L_z(x,\mu _0^2)=\frac{1}{(1-x)^2}\int d\vec{k}\,L_z(\vec{k})\,\delta \left(\frac{x}{1-x}-\frac{k^+}{M_N}\right),$$ (4) where the density of the angular momentum is defined in the usual way: $$L_z(\vec{k})=\langle N,J_z=1/2|\underset{i=1}{\overset{3}{\sum }}\left[-i(\vec{k}_i\times \vec{\nabla }_{\vec{k}_i})_z\right]\delta (\vec{k}_i-\vec{k})|N,J_z=1/2\rangle .$$ (5) From the previous definitions one can recover the result $`L_z(x,\mu _0^2)=0`$ obtained by assuming S-wave quarks only and the non-relativistic approximation (i.e. $`L_z(\vec{k})=0`$). A more complicated example within non-relativistic dynamics is given by models whose nucleon wave function is a superposition of various $`SU(6)`$ components, such as the Isgur-Karl model . The non-vanishing contribution to $`L_z(\vec{k})`$ in these cases is due to the D-state (or higher-wave) admixture. For example, in the Isgur-Karl model, the OAM distribution of Eq. (4) turns out to be proportional to the D-state probability $`a_D^2`$ and its contribution is very small ($`a_D=0.067`$). This situation is radically changed in a light-front covariant quark model. In light-front dynamics (LFD) , the specific partition of the Poincaré algebra into kinetic and Hamiltonian generators leads to several simplifications of the relativistic many-body problem such as, for example, the clean separation of the center of mass motion. The price to pay is that the description of angular momentum is rather complicated. Not all the generators of rotations, in fact, belong to the kinetic subgroup, and hence the angular momentum operator is, in general, interaction dependent. For this reason, in the phenomenological applications of LFD to the quark model, it is customary to work in the Bakamjian-Thomas construction , that is, adding a phenomenological interaction to the free mass operator only. However, the resulting total angular momentum operator, although interaction free, does not satisfy ordinary composition rules. In order to restore them, a unitary transformation of the Hilbert space, known as the Melosh Rotation (MR), has to be performed. In particular, if the nucleon is in an S-wave state, such a rotation acts only on the spin part of the wave function. The $`D^{1/2}`$ representation of the MR is given by: $$D^{1/2}[R_M(\vec{k})]=\frac{(m+\omega +k_z)-i\vec{\sigma }\cdot (\widehat{z}\times \vec{k}_{\perp })}{\left((m+\omega +k_z)^2+\vec{k}_{\perp }^{\,2}\right)^{1/2}}.$$ (6) As a result, motion and spin are now intimately correlated, as required by a relativistic theory. The MR can be interpreted as the boost transformation required to move from the rest frame of each subsystem (quark) to the rest frame of the total system (nucleon). In the present study we will not investigate $`SU(6)`$ breaking effects in spin-isospin space: the nucleon wave function will correspond to an S-wave. This simplifying assumption ensures that the non-vanishing OAM contribution originates from purely relativistic effects due to the treatment of the spin in light-front dynamics.
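For concreteness, the matrix in Eq. (6) can be built and tested numerically. The sketch below (ours) constructs the $`2\times 2`$ Melosh rotation for an on-shell constituent quark, with $`\omega =\sqrt{\vec{k}^{\,2}+m^2}`$, and verifies that it is unitary; the mass and momentum values are purely illustrative.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def melosh(kx, ky, kz, m):
    """2x2 Melosh rotation of Eq. (6); omega is the on-shell quark energy."""
    omega = np.sqrt(kx**2 + ky**2 + kz**2 + m**2)
    a = m + omega + kz
    # z-hat x k_perp = (-ky, kx, 0), so sigma.(z-hat x k_perp) = -ky*sx + kx*sy
    num = a * np.eye(2) - 1j * (-ky * sx + kx * sy)
    return num / np.sqrt(a**2 + kx**2 + ky**2)

D = melosh(kx=0.2, ky=-0.1, kz=0.3, m=0.33)    # GeV-scale values, illustrative
print(np.allclose(D @ D.conj().T, np.eye(2)))  # True: the rotation is unitary
```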
Indeed the MR gives rise to a non-vanishing angular momentum density even if the spatial wave function corresponds to an S-wave, and the angular momentum density can be written, for a $`SU(6)`$ symmetric spin-isospin wave function, as $$L_z(\vec{k})=\frac{1}{3}\frac{\vec{k}_{\perp }^{\,2}}{(m+\omega +k_z)^2+\vec{k}_{\perp }^{\,2}}\,n(\vec{k}),$$ (7) where $`n(\vec{k})`$ is the total momentum density, defined in the usual way : $$n(\vec{k})=\langle \mathrm{\Psi }_N|\underset{i=1}{\overset{3}{\sum }}\delta (\vec{k}_i-\vec{k})|\mathrm{\Psi }_N\rangle ,$$ (8) and normalized to the number of particles ($`\int n(\vec{k})\,d\vec{k}=3`$). Recalling the expressions for the polarized densities that enter the helicity distributions : $`n_u^{\uparrow }(\vec{k})-n_u^{\downarrow }(\vec{k})`$ $`=`$ $`-4\left(n_d^{\uparrow }(\vec{k})-n_d^{\downarrow }(\vec{k})\right)`$ (9) $`=`$ $`{\displaystyle \frac{4}{9}}{\displaystyle \frac{(m+\omega +k_z)^2-\vec{k}_{\perp }^{\,2}}{(m+\omega +k_z)^2+\vec{k}_{\perp }^{\,2}}}\,n(\vec{k}),`$ one can check that the total angular momentum sum rule is automatically fulfilled at the hadronic scale: $$\frac{1}{2}\int \left(g_1^u(x,\mu _0^2)+g_1^d(x,\mu _0^2)\right)dx+\int dx\,L_z(x,\mu _0^2)=\frac{1}{2}.$$ (10) Another interesting relationship connects $`L_z`$ to the longitudinal ($`g_1`$) and transversity ($`h_1`$) parton distributions, namely : $$g_1^a(x,\mu _0^2)+L_z^a(x,\mu _0^2)=h_1^a(x,\mu _0^2),$$ (11) and is naturally fulfilled in our approach. This relationship also holds for other relativistic models of the nucleon, such as the bag model. Let us stress that Eq. (11) is valid at the hadronic scale $`\mu _0^2`$ only, and one should be careful when using it to extract information about $`L_z^a`$ because it is broken by evolution, even at small values of $`Q^2`$. This can be easily demonstrated by considering the singlet combination corresponding to Eq. (11), i.e. $$\mathrm{\Sigma }(x,\mu _0^2)+L_z(x,\mu _0^2)-H(x,\mu _0^2)=0$$ (12) where $`\mathrm{\Sigma }(x,\mu _0^2)=\mathrm{\Sigma }_a(g_1^a(x,\mu _0^2)+g_1^{\overline{a}}(x,\mu _0^2))`$ and $`H(x,\mu _0^2)=\mathrm{\Sigma }_a(h_1^a(x,\mu _0^2)-h_1^{\overline{a}}(x,\mu _0^2))`$<sup>1</sup><sup>1</sup>1The minus sign in front of $`h_1^{\overline{a}}`$ comes from the properties of the operator that defines the transversity under charge conjugation, and therefore the analogue of Eq. (11) for antiquarks should read $`g_1^{\overline{a}}(x,\mu _0^2)+L_z^{\overline{a}}(x,\mu _0^2)=-h_1^{\overline{a}}(x,\mu _0^2)`$. Though our model does not contain antiquarks at the scale $`\mu _0^2`$, this relationship can be easily checked in the bag model.. In order to check the validity of Eq. (11) at $`Q^2>\mu _0^2`$ let us evolve (at LO) the first moments of the left-hand side of Eq. (12): $`\langle \mathrm{\Sigma }(x,Q^2)\rangle _1+\langle L_z(x,Q^2)\rangle _1-\langle H(x,Q^2)\rangle _1`$ $`=`$ $`{\displaystyle \frac{1}{2}}(1-b^{-50/81})\langle \mathrm{\Sigma }(x,\mu _0^2)\rangle _1`$ (13) $`+`$ $`(b^{-50/81}-b^{-4/27})\langle H(x,\mu _0^2)\rangle _1`$ $`+`$ $`{\displaystyle \frac{9}{50}}(1-b^{-50/81})`$ where $`b=\frac{\mathrm{ln}(Q^2/\mathrm{\Lambda }^2)}{\mathrm{ln}(\mu _0^2/\mathrm{\Lambda }^2)}`$. Clearly, the right-hand side of the equation above vanishes only if $`Q^2=\mu _0^2`$. Furthermore, due to the form of the $`b`$-dependent coefficients, it quickly deviates from 0 at the initial stages of evolution, pointing out the limits of the attempt, carried out in , of extracting information on $`L_z`$ from Eq. (11).
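Since Eqs. (7) and (9) hold pointwise in $`\vec{k}`$, the sum rule (10) can be verified numerically for any momentum density. A minimal Monte Carlo sketch (ours), with a Gaussian $`n(\vec{k})`$ assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
m, width = 0.33, 0.25                          # GeV; mass and Gaussian width (assumed)
k = rng.normal(0.0, width, size=(200000, 3))   # samples distributed as n(k)/3

omega = np.sqrt((k**2).sum(axis=1) + m**2)
kperp2 = k[:, 0]**2 + k[:, 1]**2
a = m + omega + k[:, 2]

Lz = (kperp2 / (a**2 + kperp2)) / 3.0          # Eq. (7) per unit density
M = (a**2 - kperp2) / (a**2 + kperp2)          # Melosh factor of Eq. (9)
half_spin = 0.5 * M / 3.0                      # (Delta u + Delta d)/2 = (4/9 - 1/9)*M/2

print("Lq(mu0^2)     =", 3.0 * Lz.mean())                 # model-dependent value
print("sum rule (10) =", 3.0 * (half_spin + Lz).mean())   # exactly 0.5, pointwise
```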
Coming back to our evaluation of OAM, Eqs. (7) - (9) show that the exact ratio between the amount of OAM and spin will depend on the specific form of $`n(\vec{k})`$ or, equivalently, on the spatial nucleon wave function. Let us note however that the momentum density averages over many details of the spatial wave function and, in this respect, the sensitivity of the final results to the fine details of the spatial wave function is reduced. In the following we will discuss predictions obtained by solving explicitly the mass equation: $$\widehat{M}\mathrm{\Psi }=\left(\underset{i=1}{\overset{3}{\sum }}\sqrt{\vec{k}_i^{\,2}+m^2}+V\right)\mathrm{\Psi }=E\mathrm{\Psi }$$ (14) with a hypercentral phenomenological potential: $$V=-\frac{\tau }{\xi }+\kappa _l\xi +\mathrm{\Delta },$$ (15) where $`\xi `$ is the hyperradius defined in the usual way and $`\tau `$, $`\kappa _l`$ and $`\mathrm{\Delta }`$ are free parameters that are fixed by spectroscopy requirements . It is worthwhile mentioning that the MR has no effect on the energy levels of the confining mass operator (14) - (15), which explains to some extent the success of non-relativistic (or relativized) approaches in reproducing the baryonic spectrum. On the other hand, another remarkable effect of the relativistic mass equation is the enhancement of the high-momentum components in the nucleon wave function. Since the MR factor involves momentum-dependent terms, the final results will be affected by the presence of these high-momentum components. In order to test the sensitivity to the details of the momentum density we will consider an additional scenario where the MR factors are combined with a wave function obtained from the non-relativistic Schrödinger reduction of Eq. (14) with the same form of potential (15). This new spatial wave function, hereafter indicated by $`\mathrm{\Psi }^{\prime }`$, will contain far fewer high-momentum components. In fact, one of the risks of guessing the wave function instead of solving the mass equation (14) explicitly is to underestimate the contribution coming from the high-momentum components of the correct solution, mostly carried over by the relativistic kinetic energy operator in the mass equation. Although the use of the MR is not fully consistent when $`\mathrm{\Psi }^{\prime }`$ is considered, since the latter is derived from a non-relativistic mass equation, we will discuss it as a ’pedagogical’ example that represents an extreme scenario where high-momentum components have been strongly suppressed. The comparison of results obtained with $`\mathrm{\Psi }`$ and $`\mathrm{\Psi }^{\prime }`$ will serve to establish bounds on the effects of the MR. ## 3 Results and discussion The OAM distribution obtained at the hadronic scale $`\mu _0^2`$, Eq. (4), for the wave function $`\mathrm{\Psi }`$, the solution of Eq. (14), is shown in Fig. 1.a. The outcome for the modified scenario (corresponding to the wave function $`\mathrm{\Psi }^{\prime }`$) is shown in Fig. 1.b, in order to appreciate the effect of the lack of high-momentum components. Furthermore, the comparison with the bag model results is also provided (Fig. 1.c). It is clear that the LFCQM, regardless of the details of the spatial wave function, provides OAM distributions which are comparable to (or even bigger, by a factor of 2, than) those of the bag model. From the comparison between Figs. 1.a and 1.b one can see that the MR (and not the specific shape of the spatial wave function) is responsible for this sizeable OAM.
In non-covariant quark models such as the Isgur-Karl model, where the MR is omitted, the OAM distribution is almost flat . Even when considering a D-model where the probability of the D-wave component is raised up to 20%, the resulting OAM distributions, though comparable in size to those obtained here, are peaked at lower $`x`$. Nonetheless the large deformation of the nucleon in the D-model should not be taken as realistic. In order to bring the OAM distributions to the high-energy experimental scale, we use the recently obtained evolution equations at LO . In the process the OAM distributions for the gluons will be generated. The initial scale $`\mu _0^2`$ is determined following the criteria exposed in , and at LO turns out to be $`\mu _0^2=0.079`$ GeV<sup>2</sup>. In Figs. 1.a and 1.b we also present the evolved OAM distributions up to $`Q^2=10`$ GeV<sup>2</sup> (short-dashed line) and $`Q^2=1000`$ GeV<sup>2</sup> (long-dashed line). By comparing again the LFCQM with the bag model (Fig. 1.c) it is clear that a non-vanishing OAM persists in the large $`x`$ region, and this is a distinctive feature of relativistic treatments of the nucleon. Indeed, in I-K models the OAM is entirely concentrated at low $`x`$. This may constitute a clear signature of relativity in the low-energy models of the nucleon if $`L_z(x,Q^2)`$ is measured. In our approach all the gluon OAM is generated through evolution. In Fig. 2 we present the resulting $`L_g(x,Q^2=10\text{ GeV}^2)`$ for $`\mathrm{\Psi }`$, $`\mathrm{\Psi }^{\prime }`$ and the bag model. There is an inverse correlation between the amount of high-momentum components in the wave function and the value of $`L_g`$ at small $`x`$. The OAM gluon distribution for the bag model falls between those obtained with $`\mathrm{\Psi }`$ and $`\mathrm{\Psi }^{\prime }`$. Concerning the first moments of the distributions, our model gives a value for $`L_q(\mu _0^2)=\int L_z(x,\mu _0^2)\,dx`$ that ranges from 0.272 to 0.126 ($`\mathrm{\Psi }`$ and $`\mathrm{\Psi }^{\prime }`$ models, respectively). It should be stressed that the corresponding values for $`\mathrm{\Delta }\mathrm{\Sigma }`$ (0.456 and 0.748 respectively) are per se a clearcut signature in favor of light-front quark models, when compared to recent analyses of data ($`\mathrm{\Delta }\mathrm{\Sigma }=0.45\pm 0.09`$) . Furthermore, these numbers are quite close to the angular momentum share-out given by the bag model. The first moments that make up the spin sum rule also evolve with $`Q^2`$ according to : $`{\displaystyle \frac{1}{2}}\mathrm{\Delta }\mathrm{\Sigma }(Q^2)`$ $`=`$ $`{\displaystyle \frac{1}{2}}\mathrm{\Delta }\mathrm{\Sigma }(\mu _0^2)`$ (16) $`L_q(Q^2)`$ $`=`$ $`(b^{-\frac{50}{81}}-1){\displaystyle \frac{1}{2}}\mathrm{\Delta }\mathrm{\Sigma }(\mu _0^2)+b^{-\frac{50}{81}}L_q(\mu _0^2)-{\displaystyle \frac{9}{50}}(b^{-\frac{50}{81}}-1)`$ (17) $`J_g(Q^2)`$ $`=`$ $`b^{-\frac{50}{81}}J_g(\mu _0^2)-{\displaystyle \frac{8}{25}}(b^{-\frac{50}{81}}-1)`$ (18) In Fig. 3 we show the evolution of these quantities with $`Q^2`$ for $`\mathrm{\Psi }`$ (Fig. 3.a) and $`\mathrm{\Psi }^{\prime }`$ (Fig. 3.b). It is worthwhile mentioning that, even if we do not have gluons at the hadronic scale, they quite rapidly develop a sizeable angular momentum content. Furthermore the gluon angular momentum evolves decoupled from the quark sector, and if we start with a vanishing $`J_g(\mu _0^2)`$ then $`J_g(Q^2)`$ is completely determined by the QCD anomalous dimensions.
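Equations (16)-(18) are straightforward to implement. The sketch below (ours) evolves the three first moments; $`\mu _0^2=0.079`$ GeV<sup>2</sup> is taken from the text, while the value of $`\mathrm{\Lambda }`$ is an assumption of ours, so the numbers are only indicative.

```python
import numpy as np

def evolve(Q2, dSigma0, Lq0, Jg0=0.0, mu02=0.079, Lam2=0.232**2):
    """LO evolution of Eqs. (16)-(18); Q2, mu02, Lam2 in GeV^2 (Lambda assumed)."""
    b = np.log(Q2 / Lam2) / np.log(mu02 / Lam2)
    r = b ** (-50.0 / 81.0)
    half_sigma = 0.5 * dSigma0                                         # Eq. (16)
    Lq = (r - 1.0) * half_sigma + r * Lq0 - (9.0 / 50.0) * (r - 1.0)   # Eq. (17)
    Jg = r * Jg0 - (8.0 / 25.0) * (r - 1.0)                            # Eq. (18)
    return half_sigma, Lq, Jg

# initial values of the Psi model quoted above: DeltaSigma = 0.456, Lq = 0.272
for Q2 in (1.0, 10.0, 1000.0):
    hs, Lq, Jg = evolve(Q2, dSigma0=0.456, Lq0=0.272)
    print(f"Q^2 = {Q2:7.1f} GeV^2:  Lq = {Lq:+.3f}, Jg = {Jg:.3f}, sum = {hs + Lq + Jg:.3f}")
```

The sum printed in the last column stays at 1/2 for every $`Q^2`$, as the spin sum rule requires.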
The values for $`J_g(Q^2)`$ in the region between 1 and 10 GeV<sup>2</sup> that we find ($`J_g\simeq 0.20`$–0.25) are compatible with those found by using QCD sum rules ($`J_g\simeq 0.25`$) and in a recent lattice calculation ($`J_g=0.20\pm 0.07`$). They also agree with another model calculation based on the one-gluon exchange interaction between quarks ($`J_g\simeq 0.24`$). The consideration of a non-vanishing $`J_g(\mu _0^2)`$ would not change much our results since it would also raise the scale $`\mu _0^2`$ (due to the fact that at that scale the gluons would carry some momentum) and hence $`b`$ would be larger for a given $`Q^2`$. Though the large error bars in the first direct measurement of the ratio $`\frac{\mathrm{\Delta }g}{g}`$ , $`\frac{\mathrm{\Delta }g}{g}=0.41\pm 0.18(stat)\pm 0.03(syst)`$, and the values for $`\mathrm{\Delta }g`$ obtained in recent data analyses , $`\mathrm{\Delta }g(Q^2=1\text{ GeV}^2)=1.6\pm 0.9`$, do not allow one to discriminate between models, our results fall within the range of the latter. As a matter of fact the rather moderate values for $`J_g`$ result from a strong cancellation between $`L_g(Q^2)`$ and $`\mathrm{\Delta }g(Q^2)`$; in particular we have $`\mathrm{\Delta }g(Q^2=1\text{ GeV}^2)=`$ 1.36 and 2.22 for $`\mathrm{\Psi }`$ and $`\mathrm{\Psi }^{\prime }`$ respectively. In a non-relativistic quark model one would expect $`L_q(Q^2)\simeq -J_g(Q^2)\simeq -0.25`$ in the range $`Q^2\simeq 1`$–10 GeV<sup>2</sup>, since $`\mathrm{\Delta }\mathrm{\Sigma }`$ is constant with $`Q^2`$. When relativistic spin effects are taken into account, as Fig. 3 shows, one expects $`L_q(Q^2)`$ to be much smaller ($`L_q(Q^2)\simeq 0.12`$ at most) or close to zero. ## 4 Concluding remarks In summary we have shown that covariant light-front based quark models give rise to non-trivial predictions for the OAM distributions at both low and high momentum scales. This departure from traditional treatments of the angular momentum structure of the nucleon is more manifest in the high-$`x`$ region of the quark sector. We have seen that the performance of the LFCQM is quite similar to that of other relativistic models of the nucleon such as the bag model. This comparison holds for a quite flexible choice of the mass operator. We have studied the predictions for other potentials that interpolate between the two somewhat extreme situations presented here, and the conclusions are not changed. In fact, there is a clear correlation between the amount of high-momentum components in the momentum density $`n(\vec{k})`$ and the size of the OAM distribution. A more realistic interaction would give results closer to those of $`\mathrm{\Psi }`$ than to those obtained with $`\mathrm{\Psi }^{\prime }`$, because a relativistic treatment of the kinetic energy operator inevitably emphasizes the high-momentum tail. One should keep in mind however that the origin of the relativistic aspects is not the same in the bag model and in the LFCQM presented here. While in the former the non-vanishing OAM comes from the small Dirac components, in the latter these are absent and relativity enters through the momentum dependence of the Pauli spinors. Certainly other more sophisticated spin-flavour bases can be constructed, such as the Dirac-Melosh bases , where covariance is manifest. Despite the fact that the basis used here contains only kinematic and not dynamical (higher Fock states) effects, it represents a minimal framework that combines in an elegant way simplicity and a proper treatment of boosts.
Results obtained with this basis and with the bag model are of similar quality pointing out that it allows an easy implementation of relativistic effects in the spin structure of the nucleon. We gratefully acknowledge Vicente Vento for useful comments and a careful reading of the manuscript. Figure captions Figure 1 Quark orbital angular momentum distributions calculated in light-front dynamics with the wave function $`\mathrm{\Psi }`$ (a), with the modified wave function $`\mathrm{\Psi }^{}`$ (see text) (b) and in the bag model (c). Solid lines correspond to the initial hadronic scale $`\mu _0^2`$, short-dashed lines to $`Q^2=10`$ GeV<sup>2</sup> and long-dashed ones to $`Q^2=1000`$ GeV<sup>2</sup>. Figure 2. Gluon orbital angular momentum distributions calculated at $`Q^2=10`$ GeV<sup>2</sup> with the wave function $`\mathrm{\Psi }`$ (solid line), $`\mathrm{\Psi }^{}`$ (long-dashed line) and the bag model (short-dashed line). Figure 3. The contributions to the proton spin sum rule according to the model with $`\mathrm{\Psi }`$ (a) and $`\mathrm{\Psi }^{}`$ (b). The dashed curve shows $`\frac{1}{2}\mathrm{\Delta }\mathrm{\Sigma }(Q^2)`$, the long-dashed one is $`L_q(Q^2)`$ and the dot-dashed curve represents $`J_g(Q^2)`$. Figure 1 Figure 2 Figure 3
no-problem/0002/astro-ph0002264.html
ar5iv
text
# Optical Observations of Type II Supernovae ## INTRODUCTION Supernovae (SNe) occur in several spectroscopically distinct varieties; see reference avf97 , for example. Type I SNe are defined by the absence of obvious hydrogen in their optical spectra, except for possible contamination from superposed H II regions. SNe II all prominently exhibit hydrogen in their spectra, yet the strength and profile of the H$`\alpha `$ line vary widely among these objects. The early-time ($`t1`$ week past maximum brightness) spectra of SNe are illustrated in Figure 1. \[Unless otherwise noted, the optical spectra illustrated here were obtained by my group, primarily with the 3-m Shane reflector at Lick Observatory. When referring to phase of evolution, the variables $`t`$ and $`\tau `$ denote time since maximum brightness (usually in the $`B`$ passband) and time since explosion, respectively.\] The lines are broad due to the high velocities of the ejecta, and most of them have P-Cygni profiles formed by resonant scattering above the photosphere. SNe Ia are characterized by a deep absorption trough around 6150 Å produced by blueshifted Si II $`\lambda `$6355. Members of the Ib and Ic subclasses do not show this line. The presence of moderately strong optical He I lines, especially He I $`\lambda `$5876, distinguishes SNe Ib from SNe Ic. Figure 1: Early-time spectra of SNe, showing the main subtypes. The late-time ($`t4`$ months) optical spectra of SNe provide additional constraints on the classification scheme (Figure 2). SNe Ia show blends of dozens of Fe emission lines, mixed with some Co lines. SNe Ib and Ic, on the other hand, have relatively unblended emission lines of intermediate-mass elements such as O and Ca. At this phase, SNe II are dominated by the strong H$`\alpha `$ emission line; in other respects, most of them spectroscopically resemble SNe Ib and Ic, but with narrower emission lines. The late-time spectra of SNe II show substantial heterogeneity, as do the early-time spectra. To a first approximation, the light curves of SNe I are all broadly similar lei91a , while those of SNe II exhibit much dispersion pat93 . It is useful to subdivide the majority of early-time light curves of SNe II into two relatively distinct subclasses bar79 ; dog85 . The light curves of SNe II-L (“linear”) generally resemble those of SNe I, with a steep decline after maximum brightness followed by a slower exponential tail. In contrast, SNe II-P (“plateau”) remain within $`1`$ mag of maximum brightness for an extended period. The peak absolute magnitudes of SNe II-P show a very wide dispersion you89 , almost certainly due to differences in the radii of the progenitor stars. The light curve of SN 1987A, albeit unusual, was generically related to those of SNe II-P; the initial peak was very low because the progenitor was a blue supergiant, much smaller than a red supergiant arn89 . The remainder of this review concentrates on SNe II. Figure 2: Late-time spectra of SNe. At even later phases, SN 1987A was dominated by strong emission lines of H$`\alpha `$, \[O I\], \[Ca II\], and the Ca II near-infrared triplet. ## SUBCLASSES OF TYPE II SUPERNOVAE Most SNe II-P seem to have a relatively well-defined spectral development, as shown in Figure 3 for SN 1992H (see also reference clo96 ). At early times the spectrum is nearly featureless and very blue, indicating a high color temperature ($``$ 10,000 K). He I $`\lambda `$5876 with a P-Cygni profile is sometimes visible. 
The temperature rapidly decreases with time, reaching $`5000`$ K after a few weeks, as expected from the adiabatic expansion and associated cooling of the ejecta. It remains roughly constant at this value during the plateau (the photospheric phase), while the hydrogen recombination wave moves through the massive ($`10M_{}`$) hydrogen ejecta and releases the energy deposited by the shock. At this stage strong Balmer lines and Ca II H&K with well-developed P-Cygni profiles appear, as do weaker lines of Fe II, Sc II, and other iron-group elements. The spectrum gradually takes on a nebular appearance as the light curve drops to the late-time tail; the continuum fades, but H$`\alpha `$ becomes very strong, and prominent emission lines of \[O I\], \[Ca II\], and Ca II also appear. Figure 3: Montage of spectra of SN 1992H in NGC 5377. Epochs (days) are given relative to the estimated time of explosion, February 8, 1992. Few SNe II-L have been observed in as much detail as SNe II-P. Figure 4 shows the spectral development of SN 1979C bra81 , an unusually luminous member of this subclass. Near maximum brightness the spectrum is very blue and almost featureless, with a slight hint of H$`\alpha `$ emission. A week later, H$`\alpha `$ emission is more easily discernible, and low-contrast P-Cygni profiles of Na I, H$`\beta `$, and Fe II have appeared. By $`t1`$ month, the H$`\alpha `$ emission line is very strong but still devoid of an absorption component, while the other features clearly have P-Cygni profiles. Strong, broad H$`\alpha `$ emission dominates the spectrum at $`t7`$ months, and \[O I\] $`\lambda \lambda `$6300, 6364 emission is also present. Several authors whe90 ; avf91a ; sch96 have speculated that the absence of H$`\alpha `$ absorption spectroscopically differentiates SNe II-L from SNe II-P, but the small size of the sample of well-observed objects precluded definitive conclusions. Figure 4: Montage of spectra of SN 1979C in NGC 4321, from reference bra81 ; reproduced with permission. Epochs (days) are given relative to the date of maximum brightness, April 15, 1979. The progenitors of SNe II-L are generally believed to have relatively low-mass hydrogen envelopes (a few $`M_{}`$); otherwise, they would exhibit distinct plateaus, as do SNe II-P. On the other hand, they may have more circumstellar gas than do SNe II-P, and this could give rise to the emission-line dominated spectra. They are often radio sources sra90 ; moreover, the ultraviolet excess (at $`\lambda 1600`$ Å) seen in SNe 1979C and 1980K may be produced by inverse Compton scattering of photospheric radiation by high-speed electrons in shock-heated ($`T10^9`$ K) circumstellar material fra82 ; fra84 . Finally, the light curves of some SNe II-L reveal an extra source of energy: after declining exponentially for several years, the H$`\alpha `$ flux of SN 1980K reached a steady level, showing little if any decline thereafter uom86 ; lei91b . The excess almost certainly comes from the kinetic energy of the ejecta being thermalized and radiated due to an interaction with circumstellar matter che90 ; lei94 . The very late-time optical recovery of SNe 1979C and 1980K lei91b ; fes95 ; fes99 and other SNe II-L supports the idea of ejecta interacting with circumstellar material. The spectra consist of a few strong, broad emission lines such as H$`\alpha `$, \[O I\] $`\lambda \lambda `$6300, 6364, and \[O III\] $`\lambda \lambda `$4959, 5007. 
A Hubble Space Telescope (HST) ultraviolet spectrum of SN 1979C reveals some prominent, double-peaked emission lines with the blue peak substantially stronger than the red, suggesting dust extinction within the expanding ejecta fes99 . The data show general agreement with the emission lines expected from circumstellar interaction che94 , but the specific models that are available show several differences from the observations. For example, we find higher electron densities ($`10^5`$ to $`10^7`$ cm<sup>-3</sup>), resulting in stronger collisional de-excitation than assumed in the models. These differences can be used to further constrain the nature of the progenitor star. Note that based on photometry of the stellar populations in the environment of SN 1979C (from HST images), the progenitor of the SN was at most 10 million years old, so its initial mass was probably 17–18 $`M_{\odot }`$ van99a . During the past decade, there has been the gradual emergence of a new, distinct subclass of SNe II avf91a ; avf91b ; sch90 ; lei94 whose ejecta are believed to be strongly interacting with dense circumstellar gas, even at early times (unlike SNe II-L). The derived mass-loss rates for the progenitors can exceed $`10^{-4}M_{\odot }`$ yr<sup>-1</sup> chu94 . In these objects, the broad absorption components of all lines are weak or absent throughout their evolution. Instead, their spectra are dominated by strong emission lines, most notably H$`\alpha `$, having a complex but relatively narrow profile. Although the details differ among objects, H$`\alpha `$ typically exhibits a very narrow component (FWHM $`\lesssim 200`$ km s<sup>-1</sup>) superposed on a base of intermediate width (FWHM $`\approx `$ 1000–2000 km s<sup>-1</sup>); sometimes a very broad component (FWHM $`\approx `$ 5000–10,000 km s<sup>-1</sup>) is also present. This subclass was christened “Type IIn” sch90 , the “n” denoting “narrow” to emphasize the presence of the intermediate-width or very narrow emission components. Representative spectra of five SNe IIn are shown in Figure 5, with two epochs for SN 1994Y. The early-time continua of SNe IIn tend to be bluer than normal. Occasionally He I emission lines are present in the first few spectra (e.g., SN 1994Y in Figure 5). Very narrow Balmer absorption lines are visible in the early-time spectra of some of these objects, often with corresponding Fe II, Ca II, O I, or Na I absorption as well (e.g., SNe 1994W and 1994ak in Figure 5). Some of them are unusually luminous at maximum brightness, and they generally fade quite slowly, at least at early times. The equivalent width of the intermediate H$`\alpha `$ component can grow to astoundingly high values at late times. The great diversity in the observed characteristics of SNe IIn provides clues to the various degrees and forms of mass loss late in the lives of massive stars. Figure 5: Montage of spectra of SNe IIn. Epochs are given relative to the estimated dates of explosion. ## TYPE II SUPERNOVA IMPOSTORS? The peculiar SN IIn 1961V (“Type V” according to Zwicky zwi65 ) had probably the most bizarre light curve ever recorded. (SN 1954J, also known as “Variable 12” in NGC 2403, was similar hum94 .) Its progenitor was a very luminous star, visible in many photographs of the host galaxy (NGC 1058) prior to the explosion. Perhaps SN 1961V was not a genuine supernova (defined to be the violent destruction of a star at the end of its life), but rather the super-outburst of a luminous blue variable such as $`\eta `$ Carinae goo89 ; avf95 .
A related object may have been SN IIn 1997bs, the first SN discovered in the Lick Observatory Supernova Search (LOSS) that we are conducting with the 0.75-m Katzman Automatic Imaging Telescope (KAIT) at Lick Observatory wli00 . Its spectrum was peculiar (Figure 6), consisting of narrow Balmer and Fe II emission lines superposed on a featureless continuum. Its progenitor was discovered in an HST archival image of the host galaxy van99b . It is a very luminous star ($`M_V\approx -7.4`$ mag), and it did not brighten as much as expected for a SN explosion ($`M_V\approx -13`$ at maximum). These data suggest that SN 1997bs may have been like SN 1961V — that is, a supernova impostor. The real test will be whether the star is still visible in future HST images obtained years after the outburst. Figure 6: Spectrum of SN 1997bs, obtained on April 16, 1997 UT. ## LINKS BETWEEN TYPE II AND TYPE Ib/Ic SUPERNOVAE Filippenko avf88 discussed the case of SN 1987K, which appeared to be a link between SNe II and SNe Ib. Near maximum brightness, it was undoubtedly a SN II, but with rather weak photospheric Balmer and Ca II lines. Many months after maximum brightness, its spectrum was essentially that of a SN Ib. The simplest interpretation is that SN 1987K had a meager hydrogen atmosphere at the time it exploded; it would naturally masquerade as a SN II for a while, and as the expanding ejecta thinned out the spectrum would become dominated by emission from deeper and denser layers. The progenitor was probably a star that, prior to exploding via iron core collapse, lost almost all of its hydrogen envelope either through mass transfer onto a companion or as a result of stellar winds. Such SNe were dubbed “SNe IIb” by Woosley et al. woo87 , who had proposed a similar preliminary model for SN 1987A before it was known to have a massive hydrogen envelope. Figure 7: Early-time spectral evolution of SN 1993J. A comparison with the Type Ib SN 1984L is shown at bottom, demonstrating the presence of He I lines in SN 1993J. The explosion date was March 27.5, 1993. The data for SN 1987K (especially its light curve) were rather sparse, making it difficult to model in detail. Fortunately, the Type II SN 1993J in NGC 3031 (M81) came to the rescue, and was studied in greater detail than any supernova since SN 1987A whe96 . Its light curves ric96 and spectra avf93 ; avf94 ; mat00 amply supported the hypothesis that the progenitor of SN 1993J probably had a low-mass (0.1–0.6 $`M_{\odot }`$) hydrogen envelope above a $`\sim 4M_{\odot }`$ He core nom93 ; pod93 ; woo94 . Figure 7 shows several early-time spectra of SN 1993J, showing the emergence of He I features typical of SNe Ib. Considerably later (Figure 8), the H$`\alpha `$ emission nearly disappeared, and the spectral resemblance to SNe Ib was strong. The general consensus is that its initial mass was $`\sim 15M_{\odot }`$. A star of such low mass cannot shed nearly its entire hydrogen envelope without the assistance of a companion star. Thus, the progenitor of SN 1993J probably lost most of its hydrogen through mass transfer to a bound companion 3–20 AU away. In addition, part of the gas may have been lost from the system. Had the progenitor lost essentially all of its hydrogen prior to exploding, it would have had the optical characteristics of SNe Ib. There is now little doubt that most SNe Ib, and probably SNe Ic as well, result from core collapse in stripped, massive stars, rather than from the thermonuclear runaway of white dwarfs. SN 1993J held several more surprises.
Observations at radio van94 and X-ray suz95 wavelengths revealed that the ejecta are interacting with relatively dense circumstellar material fra96 , probably ejected from the system during the course of its pre-SN evolution. Optical evidence for this interaction also began emerging at $`\tau \approx 10`$ months: the H$`\alpha `$ emission line grew in relative prominence, and by $`\tau \approx 14`$ months it had become the dominant line in the spectrum avf94 ; pat95 ; fin95 , consistent with models che94 . Its profile was very broad (FWHM $`\approx `$ 17,000 km s<sup>-1</sup>; Figure 8) and had a relatively flat top, but with prominent peaks and valleys whose likely origin is Rayleigh-Taylor instabilities in the cool, dense shell of gas behind the reverse shock che92 . Radio VLBI measurements show that the ejecta are circularly symmetric, but with significant emission asymmetries mar95 , possibly consistent with the asymmetric H$`\alpha `$ profile seen in some of the spectra avf94 . Figure 8: In the top spectrum, which shows SN 1993J about 7 months after the explosion, H$`\alpha `$ emission is very weak; the resemblance to spectra of SNe Ib is striking. A year later (bottom), however, H$`\alpha `$ was once again the dominant feature in the spectrum (which was scaled for display purposes). ## SPECTROPOLARIMETRY OF TYPE II SUPERNOVAE Spectropolarimetry of SNe can be used to probe their geometry leo00a . The basic question is whether SNe are round. Such work is important for a full understanding of the physics of SN explosions and can provide information on the circumstellar environment of SNe. We have obtained spectropolarimetry of one object from each of the major SN types and subtypes, generally with the Keck-II 10-m telescope. Figure 9: Polarization data for SN 1998S, obtained with Keck-II on March 7, 1998. (a) Total flux, in units of $`10^{-15}`$ ergs s<sup>-1</sup> cm<sup>-2</sup> Å<sup>-1</sup>. (b) Observed degree of polarization. (c,d) The normalized $`q`$ and $`u`$ Stokes parameters, with prominent narrow-line features indicated. (e) Average of the (nearly identical) $`1\sigma `$ statistical uncertainties in the Stokes $`q`$ and $`u`$ parameters. See reference leo00b for details. We have completed our analysis of the peculiar Type IIn SN 1998S leo00b . The data consist of one epoch of spectropolarimetry (5 days after discovery) and total flux spectra spanning the first 494 days after discovery. The SN is found to exhibit a high degree of linear polarization (Figure 9), implying significant asphericity for its continuum-scattering environment. Prior to removal of the interstellar polarization, the polarization spectrum is characterized by a flat continuum (at $`p\approx 2\%`$) with distinct changes in polarization associated with both the broad (symmetric, half width near-zero intensity $`\approx 10,000`$ km s<sup>-1</sup>) and narrow (unresolved, FWHM $`<300`$ km s<sup>-1</sup>) line emission seen in the total flux spectrum. When analyzed in terms of a polarized continuum with unpolarized broad-line recombination emission, however, an intrinsic continuum polarization of $`p\approx 3\%`$ results, suggesting a global asphericity of $`\approx 45\%`$ from the oblate, electron-scattering dominated models of Höflich hof91 . The smooth, blue continuum evident at early times is inconsistent with a reddened, single-temperature blackbody, instead having a color temperature that increases with decreasing wavelength.
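The quantities shown in Figure 9 are related by the standard formulas $`p=(q^2+u^2)^{1/2}`$ for the degree of linear polarization and $`\theta =\frac{1}{2}\mathrm{arctan}(u/q)`$ for the position angle. A minimal sketch (ours, with made-up numbers rather than the SN 1998S measurements), including the usual debiasing step since noise biases $`p`$ upward:

```python
import numpy as np

def polarization(q, u, sigma=0.0):
    """Degree and position angle of linear polarization from Stokes q, u.

    sigma is the 1-sigma statistical uncertainty per bin; p = sqrt(q^2 + u^2)
    is positively biased by noise, so a simple quadratic debiasing is applied.
    """
    p = np.hypot(q, u)
    p_debiased = np.sqrt(np.clip(p**2 - sigma**2, 0.0, None))
    theta = 0.5 * np.degrees(np.arctan2(u, q))
    return p_debiased, theta

p, th = polarization(q=0.018, u=-0.009, sigma=0.002)   # illustrative values
print(f"p = {100 * p:.2f}%, position angle = {th:.1f} deg")
```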
Broad emission-line profiles with distinct blue and red peaks are seen in the total flux spectra at later times, suggesting a disk-like or ring-like morphology for the dense ($`n_e10^7\mathrm{cm}^3`$) circumstellar medium, generically similar to what is seen directly in SN 1987A, although much denser and closer to the progenitor in SN 1998S. The Type IIn SN 1997eg also exhibits considerable polarization leo00a ; there are sharp polarization changes across its strong, multi-component emission lines, suggesting distinct scattering origins for the different components. Based on our rather small sample, it appears as though SNe II-P are considerably less polarized than SNe IIn, at least within the first month or two after the explosion. Leonard et al. leo00a show some spectropolarimetric evidence of asphericity in the ejecta of SN II-P 1997ds, but it does not match the degree of polarization of SNe IIn 1998S and 1997eg. Moreover, SN II-P 1999em does not reveal significant polarization variation across the strong Balmer lines shortly after its explosion leo99 . ## SUPERNOVAE ASSOCIATED WITH GAMMA-RAY BURSTS? At least a small fraction of gamma-ray bursts (GRBs) may be associated with nearby SNe. Probably the most compelling example thus far is that of SN 1998bw and GRB 980425 gal98 ; iwa98 ; woo99 , which were temporally and spatially coincident. SN 1998bw was, in many ways, an extraordinary SN; it was very luminous at optical and radio wavelengths, and it showed evidence for relativistic outflow. Its bizarre optical spectrum is often classified as that of a SN Ic, but the object should be called a “peculiar SN Ic” if not a subclass of its own; the spectrum was distinctly different from that of a normal SN Ic. As discussed by several speakers at this meeting, models suggest that SNe associated with GRBs are highly asymmetric. Thus, spectropolarimetry should provide some useful tests. In particular, perhaps objects such as SN 1998S, discussed above, would have been seen as GRBs had their rotation axis been pointed in our direction. That of SN 1998S was almost certainly not aligned with us leo00b ; both the spectropolarimetry and the appearance of double-peaked H$`\alpha `$ emission suggest an inclined view, rather than pole-on. Figure 10: Spectral evolution of SN 1999E, which may have been associated with GRB 980910. The case of GRB 970514 and the very luminous SN IIn 1997cy is also interesting ger00 ; tur00 ; there is a reasonable possibility that the two objects were associated. The optical spectrum of SN 1997cy was highly unusual, and bore some resemblance to that of SN 1998bw, though there were some differences as well. SN 1999E, which might be linked with GRB 980910 but with large uncertainties tho99 , also had an optical spectrum similar to that of SN 1997cy avf99 ; cap99 ; see Figure 10. The undulations are very broad, indicating high ejection velocities. Besides H$`\alpha `$, secure line identifications are difficult, though some of the emission features seem to be associated with oxygen and calcium. Perhaps SN 1999E was produced by the highly asymmetric collapse of a carbon-oxygen core. Shortly before this meeting, SN IIn 1999eb was discovered with KAIT mod99 , and Terlevich et al. ter99 pointed out that it might be associated with GRB 991002. However, KAIT data show that the optical SN was visible at least 10 days before the GRB occurred, making it very unlikely that the two were linked. 
If SN 1999eb ends up showing double-peaked H$`\alpha `$ emission at late times, as did SN 1998S, it will be another argument against the SN/GRB association in this particular case, since our view will not have been pole-on. ## THE EXPANDING PHOTOSPHERE METHOD Despite not being anything like “standard candles,” SNe II-P (and some SNe II-L) are very good distance indicators. They are, in fact, “custom yardsticks” when calibrated with the “Expanding Photosphere Method” (EPM); see eas96 . A variant of the famous Baade-Wesselink method for determining the distances of pulsating variable stars, this technique relies on an accurate measurement of the photosphere’s effective temperature and velocity during the plateau phase of SNe II-P. Briefly, here is how EPM works. The radius ($`R`$) of the photosphere can be determined from its velocity ($`v`$) and time since explosion ($`t-t_0`$) if the ejecta are freely expanding: $`R=v(t-t_0)+R_0\approx v(t-t_0)`$, where we have assumed that the initial radius of the star ($`R_0`$ at $`t=t_0`$) is negligible relative to $`R`$ after a few days. The velocity of the photosphere is determined from measurements of the wavelengths of the absorption minima in P-Cygni profiles of weak lines such as those of Fe II or, better yet, Sc II. (The absorption minima of strong lines like H$`\alpha `$ form far above the photosphere.) The angular size ($`\theta `$) of the photosphere, on the other hand, is found from the measured, dereddened flux density ($`f_\nu `$) at a given frequency. We have $`4\pi D^2f_\nu =4\pi R^2\zeta ^2\pi B_\nu (T)`$, so $$\theta =\frac{R}{D}=\left[\frac{f_\nu }{\zeta ^2\pi B_\nu (T)}\right]^{1/2},$$ where $`D`$ is the distance to the supernova, $`B_\nu (T)`$ is the value of the Planck function at color temperature $`T`$ (derived from broadband measurements of the supernova’s brightness in at least two passbands), and $`\zeta ^2`$ is the flux dilution correction factor (basically a measure of how much the spectrum deviates from that of a blackbody, due primarily to the electron-scattering opacity). The above two equations imply that $`t=D(\theta /v)+t_0.`$ Thus, for a series of measurements of $`\theta `$ and $`v`$ at various times $`t`$, a plot of $`\theta /v`$ versus $`t`$ should yield a straight line of slope $`D`$ and intercept $`t_0`$. This determination of the distance is independent of the various uncertain rungs in the cosmological distance ladder; it does not even depend on the calibration of the Cepheids. It is equally valid for nearby and distant SNe II-P. An important check of EPM is that the derived distance be constant while the SN is on the plateau (before it has started to enter the nebular phase). This has been verified with SN 1987A eas89 and a number of other SNe II-P schmidt92 ; schmidt94a . Moreover, the EPM distance to SN 1987A agrees with that determined geometrically through measurements of the brightening and fading of emission lines from the inner circumstellar ring pan91 . It is also noteworthy that EPM is relatively insensitive to reddening: an underestimate of the reddening leads to an underestimate of the color temperature $`T`$ \[and hence of $`B_\nu (T)`$ as well\], but this is compensated by an underestimate of $`f_\nu `$, yielding a nearly unchanged value of $`\theta `$. Indeed, for errors in $`A_V`$ \[the visual extinction, or 3.1$`E(B-V)`$\] of 0–1 mag, one incurs an error in $`D`$ of only $`0`$–20% schmidt92 . Of course, EPM has some caveats or potential limitations.
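The linear fit at the heart of EPM is simple to demonstrate. The sketch below (ours) recovers the distance from synthetic, noise-free $`(\theta ,v,t)`$ measurements; in a real application $`\theta `$ would come from dereddened fluxes via the $`\zeta ^2\pi B_\nu (T)`$ relation above, and all the numbers here are purely illustrative.

```python
import numpy as np

KM_PER_MPC = 3.0857e19
D_true = 10.0 * KM_PER_MPC                                # assume a 10 Mpc supernova
t = np.array([20.0, 35.0, 50.0, 65.0, 80.0]) * 86400.0    # plateau epochs [s]
v = np.array([9000.0, 7500.0, 6500.0, 5800.0, 5200.0])    # weak-line velocities [km/s]
theta = v * t / D_true                                    # theta = v (t - t0) / D, t0 = 0

# t = D * (theta / v) + t0 is linear in theta/v: slope = distance, intercept = t0
D_fit, t0_fit = np.polyfit(theta / v, t, 1)
print(f"D = {D_fit / KM_PER_MPC:.2f} Mpc, t0 = {t0_fit / 86400.0:.2f} days")
```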
A critical assumption is spherical symmetry for the expanding ejecta, yet polarimetry shows that SN 1987A was not spherical jef91 , as do direct HST measurements of the shape of the ejecta. As discussed above, the few other SNe II-P that have been studied don’t show very much polarization, though it is possible that deviations from spherical symmetry could be a severe problem for some SNe II-P. On the other hand, the average distance derived with EPM for many SNe II-P might be almost unaffected, given random orientations to the line of sight. (Sometimes the cross-sectional area will be too large, and other times too small, relative to spherical ejecta.) Indeed, comparison of EPM and Cepheid distances to the same galaxies shows agreement to within the expected uncertainties for a sample of 6 objects ($`D_{\mathrm{Cepheids}}/D_{\mathrm{EPM}}=0.98\pm 0.08`$; eas96 ). Another limitation of EPM is that one needs a well-observed SN II-P in a given galaxy in order to measure its distance; thus, the technique is most useful for aggregate studies of galaxies, rather than for distances of specific galaxies in a random sample. Finally, knowledge of the flux dilution correction factor, $`\zeta ^2`$, is critical to the success of EPM. Fortunately, an extensive grid of models eas96 shows that the value of $`\zeta ^2`$ is mainly a function of $`T`$ during the plateau phase of SNe II-P; it is relatively insensitive to other variables such as helium abundance, metallicity, density structure, and expansion rate. Also, it does not differ too greatly from unity during the plateau. There are, however, some differences of opinion regarding the treatment of radiative transfer and thermalization in expanding supernova atmospheres bar96 . The calculations are difficult and various assumptions are made, possibly leading to significant systematic errors. The most distant SN II-P to which EPM has successfully been applied is SN 1992am at $`z=0.0487`$ schmidt94b . The derived distance is $`D=180_{25}^{+30}`$ Mpc. This object, together with 15 other SNe II-P at smaller redshifts, yields a best fit value of $`H_0=73\pm 7`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, where the quoted uncertainty is purely statistical schmidt94a ; eas96 . A systematic uncertainty of $`\pm 6`$ km s<sup>-1</sup> Mpc<sup>-1</sup> should also be associated with the above result. The main source of statistical uncertainty is the relatively small number of SNe II-P in the EPM sample, and the low redshift of most of the objects (whose radial velocities are substantially affected by peculiar motions). My group is currently trying to remedy the situation with EPM measurements of additional nearby SNe II-P (at Lick Observatory), as well as with Keck spectra of SNe II-P in the redshift range 0.1–0.3. ## ACKNOWLEDGMENTS My recent research on SNe has been financed by NSF grant AST–9417213, as well as by NASA grants GO-6584, GO-7434, AR-6371, and AR-8006 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA Contract NAS5-26555. The Committee on Research (U.C. Berkeley) provided partial travel support to attend this meeting. I am grateful to the students and postdocs who have worked with me on SNe over the past 14 years for their assistance and discussions. Tom Matheson and Doug Leonard were especially helpful with the figures for this review.
no-problem/0002/cond-mat0002438.html
ar5iv
text
# Symmetry alteration of ensemble return distribution in crash and rally days of financial markets ## Abstract We select the $`n`$ stocks traded in the New York Stock Exchange and we form a statistical ensemble of daily stock returns for each of the $`k`$ trading days of our database from the stock price time series. We study the ensemble return distribution for each trading day and we find that the symmetry properties of the ensemble return distribution drastically change in crash and rally days of the market. We compare these empirical results with numerical simulations based on the single-index model and we conclude that this model is unable to explain the behavior of the market in extreme days. PACS numbers: 89.90.+n In the last few years physicists interested in financial analysis have performed several empirical studies investigating the statistical properties of stock price and volatility time series of a single asset (or of an index) at fixed or at different time horizons . Other studies have focused on the cross-correlation properties of simultaneously traded stocks . Another key aspect of financial dynamics concerns the behavior of the market in days of extreme gain or loss. Statistical properties at, before and immediately after extreme days have recently been investigated by considering the behavior of market indices . In this letter we investigate extreme market days by following a different approach. Specifically, we investigate the return distribution of an ensemble of $`n`$ selected stocks simultaneously traded in a financial market in market days of extreme crash or rally in the period of our database (from January 1987 to December 1998). The investigation of the return distribution of an ensemble of simultaneously traded stocks was introduced in . The customary statistical properties of the price return distribution of an ensemble of stocks are discussed elsewhere ; here we pose and answer the following question: Are crash and rally days significantly different from typical market days with respect to the statistical properties of the return distribution of an ensemble of stocks? The investigated market is the New York Stock Exchange (NYSE) during the 12-year period from January 1987 to December 1998, which corresponds to 3032 trading days. The total number of assets $`n`$ traded in the NYSE is rapidly increasing and ranges from $`1128`$ in 1987 to $`2788`$ in 1998. The total number of data records exceeds $`6`$ million. The variable investigated in our analysis is the daily price return, which is defined as $$R_i(t)\equiv \frac{Y_i(t+1)-Y_i(t)}{Y_i(t)},$$ (1) where $`Y_i(t)`$ is the closing price of the $`i`$th asset at day $`t`$ ($`t=1,2,\mathrm{}`$). In our study we consider only the trading days and we remove the weekends and the holidays from the data set. Moreover we do not consider price returns which are greater than $`50\%`$ in absolute value, because some of these returns might be attributed to errors in the database and may affect the statistical analyses in a considerable way. We extract the $`n`$ returns of the $`n`$ stocks for each trading day and we consider the normalized probability density function (PDF) of price returns. The distribution of these returns gives an idea about the general activity of the market on the selected trading day. In the absence of extreme events, the central part of the distribution is conserved for long time periods. In these periods the shape of the distribution is systematically non-Gaussian and approximately symmetrical .
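The construction of the daily ensembles is simple to reproduce. A sketch (ours) computing the return matrix of Eq. (1) and one day's ensemble PDF; the toy random-walk prices merely stand in for the NYSE panel:

```python
import numpy as np

def ensemble_returns(Y):
    """Daily returns of Eq. (1) for an (n_stocks x n_days) price array Y."""
    R = (Y[:, 1:] - Y[:, :-1]) / Y[:, :-1]
    R[np.abs(R) > 0.5] = np.nan        # discard |R| > 50% as in the text
    return R

rng = np.random.default_rng(0)
Y = 50.0 * np.exp(np.cumsum(rng.normal(0.0, 0.02, size=(1000, 252)), axis=1))
R = ensemble_returns(Y)

day = 100                              # ensemble PDF of one trading day
r = R[:, day]
pdf, edges = np.histogram(r[~np.isnan(r)], bins=51, density=True)
print("mean ensemble return on that day:", np.nanmean(r))
```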
We attribute the non-Gaussian profile of the central part of the PDF to the presence of correlations among the stocks. Sometimes the PDF abruptly changes its shape, either towards positive returns or towards negative returns. A systematic study of these days shows that they correspond to extreme events in the market, i.e. to crash days and to rally days. In other words, the periods in which the shape of the PDF changes correspond to periods of financial turmoil in the market. The most prominent example is the dramatic change of shape and of scale of the PDF observed during and after the 19 October 1987 crash. Other dramatic changes are observed at the beginning of 1991 and at the end of 1998. To illustrate this behavior in detail we consider the financial crisis of October 1987. Figure 1 shows the surface and contour plot of the ensemble return PDFs determined in a $`200`$ trading days time interval centered at 19 October 1987 (which corresponds on the abscissa to the arbitrary value 0). The $`z`$-axis is logarithmic in Figure 1. The central part of the ensemble return distribution shows a triangular-like shape which is approximately conserved far from the crisis. At the crisis, the ensemble distribution moves towards negative returns and then begins to oscillate between positive and negative returns. These oscillations are clearly evident for an interval of $`70`$ trading days after the 1987 crash. In that case, this is the time interval the market needed to come back to a 'typical' state. This phenomenon is partly reflected in the oscillatory behavior of the Standard and Poor's 500 index (S&P500) observed after the 1987 crash. It is worth investigating the changes of the ensemble return PDF not only in the tails of the distribution but also in its central part. In particular, it is important to understand whether in extreme days only the mean value and the scale of the PDF change, or whether the shape of the PDF is also modified. To this end, we select the $`9`$ trading days of our database in which the S&P500 has the most extreme negative returns. We also consider the opposite case of the $`9`$ trading days in which the S&P500 has the greatest positive returns. These days are listed in Table I with the corresponding S&P500 return value. In Figures 2 and 3 we show the return distributions observed in the New York Stock Exchange in the days of extreme absolute returns. Specifically, Figure 2 shows the return distribution in crash days whereas Figure 3 shows the return distribution in rally days. Figure 2 shows that in crash days the PDF has a peak at a negative value of return. Moreover, the distribution is asymmetric and the positive tail is steeper than the negative one. Therefore in crash days not only the scale but also the shape and symmetry properties of the distribution change. A mirror-image behavior is observed during the days of large market gains. Figure 3 shows that in these trading days the negative tail of the distribution is steeper than the positive one and the distribution has a peak at a positive return. These findings can be quantified by noting that the distribution is negatively skewed in crash days, whereas it is positively skewed in days of large gains. A quantitative estimate of the asymmetry of the PDF is difficult in finite statistical sets because the skewness parameter depends on the third moment of the distribution. Moments higher than the second are essentially affected by rare events rather than by the central part of the distribution.
Due to the finite number of stocks in our statistical ensemble, a measure of the asymmetry of the distribution based on its skewness is not statistically robust. We overcome this problem by considering a different measure of the asymmetry of the distribution. Specifically, we extract the median and the mean of the distribution for all trading days. When a probability distribution is symmetric, the median coincides with the mean. Therefore the difference between the mean and the median is a measure of the degree of asymmetry of the distribution. For a positively (negatively) skewed distribution the median is smaller (greater) than the mean. The median depends weakly on the rare events of the random variable and is therefore much less affected than the skewness by the finiteness of the number of records in the ensemble. In order to estimate the median value we construct a histogram of the returns and we evaluate the median as the value for which the areas of the histogram below and above it are equal. Figure 4 shows the difference between the mean and the median as a function of the mean for each trading day of the investigated period. In the figure each circle refers to a different trading day. The circles cluster in an asymmetrical pattern which resembles a sigmoid shape. In days in which the mean is positive (negative) the difference between mean and median is positive (negative). In extreme days (for example those listed in Table I) the corresponding circles are characterized by a large absolute value of the mean and a large difference between mean and median. Another result summarized in Figure 4 is that this effect is not exclusive to the days of extreme crash and rally but is also evident for trading days of intermediate absolute average return. The change of the shape and of the symmetry properties during the days of large absolute returns suggests that in extreme days the behavior of the market cannot be statistically described in the same way as in 'normal' periods. Moreover, Figure 4 indicates that the departure from normal behavior increases gradually with the absolute value of the average return. Among the extreme days, one circle does not cluster around the sigmoid shape and shows a different behavior, having a negative mean but a positive difference between mean and median. This circle (indicated by an arrow in Figure 4) corresponds to 20 October 1987, the day after Black Monday. The ensemble return distribution for this day is shown in panel b of Figure 3. This day is quite anomalous because the S&P500 had a $`5.24\%`$ positive return, but the mean return of all the assets traded in the NYSE was $`-5.28\%`$. In other words, on this day the returns of companies were strongly correlated with their capitalization. In summary, with just one exception, our results provide empirical evidence that the ensemble statistical properties of a set of stocks traded simultaneously in a financial market change in a systematic way when the market moves far from the typical day characterized by a small average return. We now compare the results of our empirical analysis with the results obtained by modeling the stock price dynamics with a simple model: the single-index model. The single-index model assumes that the returns of all assets are controlled by one factor, usually called the market.
For any asset $`i`$, we have

$$R_i(t)=\alpha _i+\beta _iR_M(t)+ϵ_i(t),$$ (2)

where $`R_i(t)`$ and $`R_M(t)`$ are the return of the asset $`i`$ and of the market at day $`t`$, respectively, $`\alpha _i`$ and $`\beta _i`$ are two real parameters and $`ϵ_i(t)`$ is a zero-mean noise term characterized by a variance equal to $`\sigma _{ϵ_i}^2`$. The noise terms of different assets are uncorrelated, $`<ϵ_i(t)ϵ_j(t)>=0`$ for $`i\ne j`$. Moreover, the covariance between $`R_M(t)`$ and $`ϵ_i(t)`$ is set to zero for any $`i`$. Each asset is correlated with the market and the presence of such a correlation induces a correlation between any pair of assets. It is customary to adopt a broad-based stock index for the market $`R_M(t)`$. Our choice for the market is the Standard and Poor's 500 index. The best estimation of the model parameters $`\alpha _i`$, $`\beta _i`$ and $`\sigma _{ϵ_i}^2`$ is usually done with the ordinary least squares method. To compare our empirical results with the predictions of the single-index model we build an artificial stock market following Eq. (2). This is done by first evaluating the model parameters for all the assets traded in the NYSE and then generating a set of $`n`$ surrogate time series according to Eq. (2). The ensemble return distribution computed in the artificial stock market is symmetrical in 'typical' trading days. In crash and rally days, it is still approximately symmetrical around the mean value, which can be positive (rallies) or negative (crashes). By contrast, as noted above, the return distribution in the real ensemble is asymmetric in extreme days. We can again quantify the asymmetry of the distribution by evaluating the difference between mean and median. The inset of Figure 4 shows the difference between the mean and the median as a function of the mean for the artificial data for each trading day of the investigated period. The differences between the real and artificial sets of circles are evident. The circles representing the synthetic data are distributed roughly symmetrically around the origin of the plane. Moreover, the values of the difference between mean and median observed for the single-index model are not very large compared with the ones observed in the real set, confirming that the ensemble return distribution of the artificial data is approximately symmetrical in extreme days too. This difference suggests that the effective correlation among the assets can be described by the single-index model only as a first approximation. The degree of approximation of the single-index model progressively becomes worse for market days of increasing absolute average return, and the model fails to properly describe the market behavior in extreme days. The main subject of this letter is the study of the return distribution of an ensemble of stocks in trading days with extreme absolute average return. We show that the ensemble return distribution changes its shape and symmetry properties in crash and rally days. We compare our empirical results with the expected behavior of the single-index model and we observe that this simple model fails to describe the market in extreme days. In particular, the main discrepancy concerns the asymmetry of the ensemble return PDF, which for the model is different from the one observed in empirical data in extreme days. Changes in the shape and symmetry of the PDF may be associated with changes in the correlation properties.
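The comparison described above can be sketched in a few lines; the following is a hedged illustration, assuming clean (NaN-free) return matrices, of the OLS estimation of Eq. (2), the surrogate-market generator, and the mean-minus-median asymmetry measure. All function and variable names are our own.

```python
import numpy as np

def fit_single_index(returns, r_market):
    """OLS estimate of alpha_i, beta_i and residual std for each stock, Eq. (2)."""
    var_m = r_market.var()
    beta = np.array([np.cov(r_i, r_market, bias=True)[0, 1] / var_m
                     for r_i in returns])
    alpha = returns.mean(axis=1) - beta * r_market.mean()
    resid = returns - alpha[:, None] - beta[:, None] * r_market[None, :]
    return alpha, beta, resid.std(axis=1)

def surrogate_market(alpha, beta, sigma_eps, r_market, rng):
    """Artificial market: Eq. (2) with mutually uncorrelated Gaussian noise terms."""
    eps = rng.standard_normal((alpha.size, r_market.size)) * sigma_eps[:, None]
    return alpha[:, None] + beta[:, None] * r_market[None, :] + eps

def mean_minus_median(day_returns):
    """Asymmetry measure of the text: > 0 (< 0) for positively (negatively) skewed PDFs."""
    return day_returns.mean() - np.median(day_returns)
```

Applying `mean_minus_median` day by day to the real and to the surrogate ensembles reproduces the two sets of circles compared in Figure 4 and its inset.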
It is commonly accepted that the return time series of different stocks synchronously traded are correlated, and several studies have been performed in order to extract information from the correlation properties. Our study suggests that the correlation properties between stocks may change during market days characterized by extreme absolute returns. A precise characterization of the correlation properties and of their modification is of key importance for the modeling of market dynamics in normal and in extreme market days. The authors thank INFM and MURST for financial support. This work is part of the FRA-INFM project 'Volatility in financial markets'. F. Lillo acknowledges an FSE-INFM fellowship. We wish to thank Giovanni Bonanno for help in numerical calculations.
# THE $`\varphi \to \pi ^+\pi ^{-}`$ AND $`\varphi `$ RADIATIVE DECAYS WITHIN A CHIRAL UNITARY APPROACH

## 1 The $`\varphi \to \pi ^+\pi ^{-}`$ decay

The $`\varphi `$ decay into $`\pi ^+\pi ^{-}`$ is an example of isospin violation. The $`\varphi `$ has isospin $`I=0`$ and spin $`J=1`$, so the rule $`I+J=\text{even}`$ for a two-pion state forbids its coupling to the $`\pi ^+\pi ^{-}`$ system in the isospin limit. The experimental situation on this decay is rather confusing. There are two older results whose central values are very different but whose quoted errors are so big that both were still compatible: the first gives $`BR=(1.94^{+1.03}_{-0.81})\times 10^{-4}`$, while the second provides $`BR=(0.63^{+0.37}_{-0.28})\times 10^{-4}`$. Very recently two new, more precise, but conflicting results have been reported from the two experiments at the VEPP-2M in Novosibirsk: the CMD-2 Collaboration reports a value $`BR=(2.20\pm 0.25\pm 0.20)\times 10^{-4}`$ whereas the SND Collaboration obtains $`BR=(0.71\pm 0.11\pm 0.09)\times 10^{-4}`$. Isospin violation has become a fashionable topic in Chiral Perturbation Theory ($`\chi PT`$), but the $`\varphi \to \pi \pi `$ decay is unreachable with plain $`\chi PT`$, since it involves the propagation of the pair of pions around 1 GeV, far away from the $`\chi PT`$ applicability range. Nevertheless, new nonperturbative schemes imposing unitarity and still using the chiral Lagrangians have emerged, extending the range of convergence of the chiral expansion. The inverse amplitude method (IAM) has been used in one channel with good results for the $`\sigma `$, $`\rho `$ and $`K^{}`$ regions, amongst others, in $`\pi \pi `$ and $`\pi K`$ scattering. The method has been generalized to include coupled channels, and one is then able to describe very well the meson-meson scattering and all the associated resonances up to about 1.2 GeV. A more general approach uses the N/D method, in order to include explicitly the exchange of some preexisting resonances, which are then responsible for the values of the parameters of the fourth-order chiral Lagrangian. Here we shall follow the coupled-channel IAM work, since it provides the most complete study of the different meson-meson scattering channels, including the mesonic resonances and their properties up to 1.2 GeV. In particular, this method yields a resonance in the $`I=0,J=1`$ channel, the $`\omega _8`$ resonance, related to the $`\varphi `$, and this allows us to obtain the strong contribution to the $`\varphi \to \pi \pi `$ decay. We also consider electromagnetic contributions at tree level, which turn out to be dominant and have been considered before. In order to calculate the contribution of an intermediate photon to the $`\varphi \to \pi \pi `$ decay, let us consider the effective Lagrangian for vector mesons, written in terms of the SU(3) pseudoscalar meson matrix $`\mathrm{\Phi }`$ and the antisymmetric vector tensor field $`V_{\mu \nu }`$,

$$\mathcal{L}_2[V(1^{-})]=\frac{F_V}{2\sqrt{2}}\langle V_{\mu \nu }f_+^{\mu \nu }\rangle +\frac{iG_V}{\sqrt{2}}\langle V_{\mu \nu }u^\mu u^\nu \rangle ,$$ (1)

where $`\langle \mathrm{}\rangle `$ indicates the SU(3) trace. In order to introduce the physical states $`\varphi `$ and $`\omega `$, we assume ideal mixing between the $`\omega _1`$ and $`\omega _8`$ vector resonances; hence, taking into account that the $`\omega _1`$ does not couple to pairs of mesons at the order of Eq. (1),
the coupling of the $`\varphi `$ is easily deduced from that of the $`\omega _8`$ by simply multiplying the results for the $`\omega _8`$ by the factor $`\frac{2}{\sqrt{6}}`$. With these ingredients and the standard $`\gamma \pi \pi `$ coupling we can write the contribution of a Feynman diagram with the $`\varphi `$ going to a photon which then couples to a pair of pions, which is given by

$$it_{\varphi \to \pi ^+\pi ^{-}}=ie^2\frac{\sqrt{2}F_V}{3M_\varphi }ϵ^\mu (\varphi )(p_+-p_{-})_\mu F(M_\varphi ^2),$$ (2)

where $`p_+`$ and $`p_{-}`$ are, respectively, the momenta of the positive and negative pions and $`F(q^2)`$ is the pion electromagnetic form factor, which at the $`\varphi `$ mass is given by $`F(M_\varphi ^2)=1.56+i\mathrm{\hspace{0.17em}0.66}`$. This can be compared with the coupling of the $`\varphi `$ to $`K^+K^{-}`$, or $`K^0\overline{K^0}`$, which can be obtained from the $`G_V`$ term in Eq. (1) and reads

$$it_{\varphi \to K^+K^{-}}=ig_{\varphi K^+K^{-}}ϵ^\mu (\varphi )(p_+-p_{-})_\mu ,\qquad g_{\varphi K^+K^{-}}=\frac{M_\varphi G_V}{\sqrt{2}f^2},$$ (3)

which reproduces the right $`\varphi `$ decay width with a value of $`G_V=54.3`$ MeV. By analogy with Eq. (3), Eq. (2) gives a $`\varphi `$ coupling to $`\pi ^+\pi ^{-}`$

$$g_{\varphi \pi ^+\pi ^{-}}^{(\gamma )}=\frac{\sqrt{2}}{3}e^2\frac{F_V}{M_\varphi }F(M_\varphi ^2),$$ (4)

which provides the $`\varphi \to \pi ^+\pi ^{-}`$ decay width through the tree-level photon mechanism. With a value of $`F_V=154`$ MeV from the $`\rho \to e^+e^{-}`$ decay and using the coupling of Eq. (4) one obtains a branching ratio to the total $`\varphi `$ width of $`1.7\times 10^{-4}`$. In order to evaluate the strong contribution to the process we consider the $`K\overline{K}\to \pi ^+\pi ^{-}`$ amplitude corrected for isospin-violation effects due to quark mass differences. The method used is based on the chiral unitary approach to the meson-meson interaction. The technique starts from the $`O(p^2)`$ and $`O(p^4)`$ $`\chi PT`$ Lagrangians and uses the IAM in coupled channels, generalizing the one-channel version of the IAM. Within the coupled channel formalism, the partial wave amplitude is given in the IAM by the matrix equation

$$T=T_2[T_2-T_4]^{-1}T_2,$$ (5)

where $`T_2`$ and $`T_4`$ are $`O(p^2)`$ and $`O(p^4)`$ $`\chi PT`$ partial waves, respectively. In principle $`T_4`$ would require a full one-loop calculation, but it was shown that it can be very well approximated by

$$\text{Re}T_4\simeq T_4^P+T_2\text{Re}GT_2$$ (6)

where $`T_4^P`$ is the tree-level polynomial contribution coming from the $`\mathcal{L}_4`$ chiral Lagrangian and $`G`$ is a diagonal matrix for the loop function of the intermediate two-meson propagators, which are regularized by means of a momentum cut-off. In the present case, in which isospin is broken explicitly and $`J=1`$, we are dealing with three two-meson states: $`K^+K^{-}`$, $`K^0\overline{K}^0`$ and $`\pi ^+\pi ^{-}`$, which we will call 1, 2 and 3, respectively. The amplitude is a $`3\times 3`$ matrix whose elements are denoted as $`T_{ij}`$. The $`T_2`$ and $`T_4^P`$ amplitudes used in the present work, calculated in the isospin-breaking case, are collected in the appendix of the original work. The fit of the phase shifts and inelasticities is carried out here in the isospin limit, and there are several sets of $`L_i`$ coefficients which give rise to equally acceptable fits. We list in Table 1 the values of the coefficients of the different sets of chiral parameters.
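As an aside, the tree-level photon estimate quoted after Eq. (4) can be checked with a few lines of arithmetic. The sketch below assumes the standard two-body width formula $`\mathrm{\Gamma }=g^2k^3/(6\pi M_\varphi ^2)`$ for a coupling of the form of Eqs. (2)-(3), with $`k`$ the pion momentum in the $`\varphi `$ rest frame; this formula is our assumption, not spelled out in the text.

```python
import math

M_phi, m_pi = 1019.4, 139.6      # masses in MeV
Gamma_phi = 4.26                 # total phi width in MeV
F_V = 154.0                      # from rho -> e+ e-, MeV
alpha_em = 1.0 / 137.036
F_pi = complex(1.56, 0.66)       # pion form factor at the phi mass

e2 = 4.0 * math.pi * alpha_em
g_gamma = (math.sqrt(2.0) / 3.0) * e2 * (F_V / M_phi) * abs(F_pi)   # Eq. (4)
k = math.sqrt(M_phi**2 / 4.0 - m_pi**2)                  # pion CM momentum
Gamma = g_gamma**2 * k**3 / (6.0 * math.pi * M_phi**2)   # assumed width formula

print(f"BR(phi -> pi+ pi-)_gamma ~ {Gamma / Gamma_phi:.1e}")   # ~ 1.7e-4
```

With these inputs the script reproduces the $`1.7\times 10^{-4}`$ quoted above; the same formula with $`g_{\varphi K^+K^{-}}`$ from Eq. (3) and the kaon momentum recovers the measured $`\varphi \to K^+K^{-}`$ width, which is what fixes $`G_V=54.3`$ MeV.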
The corresponding results for the phase shifts and inelasticities show that the small differences between the sets appear basically only in the $`a_0(980)`$ and $`\kappa (900)`$ resonance regions, where the data also have larger errors or are very scarce. In order to evaluate the contribution to the $`\varphi \to \pi ^+\pi ^{-}`$ coupling from the strongly interacting sector we evaluate the $`K^+K^{-}\to K^+K^{-}`$ amplitude ($`T_{11}`$) and the $`K^+K^{-}\to \pi ^+\pi ^{-}`$ amplitude ($`T_{13}`$) near the pole of the $`\omega _8`$ resonance, which in our case appears around $`M_{\omega _8}=920`$ MeV. Close to the $`\omega _8`$ pole the amplitudes obtained numerically are then driven by the exchange of an $`\omega _8`$. By assuming a coupling of the type of Eq. (3) for the $`\omega _8`$ to $`K^+K^{-}`$ and $`\pi ^+\pi ^{-}`$, these two amplitudes, close to the $`\omega _8`$ pole, are given by

$$T_{11}=g_{\omega _8K^+K^{-}}^2\frac{1}{P^2-M_{\omega _8}^2}\mathrm{\hspace{0.17em}}4\vec{p}_K\cdot \vec{p}_K^{},\qquad T_{13}=g_{\omega _8K^+K^{-}}g_{\omega _8\pi ^+\pi ^{-}}\frac{1}{P^2-M_{\omega _8}^2}\mathrm{\hspace{0.17em}}4\vec{p}_K\cdot \vec{p}_\pi ,$$ (7)

where $`\vec{p}_i`$ is the three-momentum of particle $`i`$ in the CM frame. By looking at the residues of the amplitudes $`T_{11}`$, $`T_{13}`$ at the $`\omega _8`$ pole we can get the products $`g_{\varphi K^+K^{-}}g_{\varphi K^+K^{-}}`$ and $`g_{\varphi K^+K^{-}}g_{\varphi \pi ^+\pi ^{-}}`$. Thus, defining

$$Q_{ij}=\underset{P^2\to M_{\omega _8}^2}{\mathrm{lim}}(P^2-M_{\omega _8}^2)\frac{T_{ij}}{4\vec{p}_i\cdot \vec{p}_j},$$ (8)

we obtain the ratio of $`g_{\varphi \pi ^+\pi ^{-}}`$ to $`g_{\varphi K^+K^{-}}`$ by means of the ratio of $`Q_{13}`$ to $`Q_{11}`$, and hence, taking $`g_{\varphi K^+K^{-}}`$ from Eq. (3), we get the value for $`g_{\varphi \pi ^+\pi ^{-}}^{(s)}`$. Then, by adding the above contribution to that of Eq. (4) we can obtain the $`\varphi \to \pi ^+\pi ^{-}`$ decay width. We have taken $`F_VG_V>0`$, as demanded by vector meson dominance. Each set of chiral parameters has then been used in the isospin-breaking amplitudes, obtaining the values of $`BR(\varphi \to \pi \pi )`$ given in Table 1. The dispersion of the results provides an estimate of the systematic theoretical uncertainties. From Table 1, we obtain, after taking into account the strong contributions,

$$BR(\varphi \to \pi \pi )_{\text{tree+strong}}\simeq (1.2\pm 0.2)\times 10^{-4}$$ (9)

On the other hand, explicit calculations of the absorptive part of the $`\eta \gamma `$ intermediate channel give a contribution of about 1/4 of that of the kaon loops. In order to estimate the uncertainties from neglecting the photonic loops we take a conservative estimate and consider them of the same magnitude as the strong-interaction correction, and hence add an extra $`\pm 0.5\times 10^{-4}`$ uncertainty. Adding in quadrature the errors from the different sources, our final result is the band of values

$$BR(\varphi \to \pi \pi )\simeq (0.7\text{ to }1.7)\times 10^{-4},$$ (10)

which is compatible with the present PDG average within errors and lies just between the results of the two recent experiments, which are much more precise but mutually incompatible.

## 2 The $`\varphi `$ radiative decay into $`\pi ^0\pi ^0\gamma `$ and $`\pi ^0\eta \gamma `$

The $`\varphi `$ meson cannot decay into two pions or $`\pi ^0\eta `$ in the isospin limit.
The decay into two neutral pions is more strictly forbidden, by symmetry and the identity of the two pions. As a consequence, the decay of the $`\varphi `$ into $`\pi ^0\pi ^0\gamma `$ and $`\pi ^0\eta \gamma `$ is forbidden at tree level. However, the $`\varphi `$ decays into two kaons, and the processes described can proceed via the loop diagrams depicted in Fig. 1, where the intermediate states in the loops stand for $`K\overline{K}`$. The terms with $`G_V`$ of Eq. (1) contribute to all the diagrams in the figure. However, the $`F_V`$ term of Eq. (1) only contributes to the diagrams containing the contact vertex $`\varphi \gamma K\overline{K}`$, like diagrams (a), (e). The idea follows closely earlier work on these decays, but for the treatment of the final-state interaction of the mesons one uses here the nonperturbative chiral techniques. In this case for L=0, which is the only partial wave needed, one can use the result that the Bethe-Salpeter equation in connection with the lowest order chiral Lagrangian and a suitable cut-off in the loops gives a good description of the meson-meson scalar sector. Furthermore, it was proved that the meson-meson amplitude in those diagrams factorizes on shell. The loops of type (a), (b) and (c) can be summed up using arguments of gauge invariance and lead to a finite amplitude. On the other hand, the terms involving $`F_V`$ and a remnant momentum-dependent term from the $`G_V`$ Lagrangian in Eq. (1) only appear in the contact vertex $`\varphi \gamma K\overline{K}`$, and the diagrams of type (b), (c) are then not present. Hence, in this case the only loop function involved is the one of two mesons, which is regularized as in the meson-meson scattering problem. The average over the polarizations of the $`\varphi `$ of the modulus squared of the $`t`$ matrix is then easily written, and for the case of the $`\pi ^0\pi ^0\gamma `$ decay one finds

$$\overline{\mathrm{\Sigma }}|t|^2=\frac{2}{3}e^2\left|\frac{M_\varphi G_V}{f^2\sqrt{3}}\stackrel{~}{G}_{K^+K^{-}}t_{K\overline{K},\pi \pi }^{I=0}+\frac{K}{f^2\sqrt{3}}\left(\frac{F_V}{2}-G_V\right)G_{K^+K^{-}}t_{K\overline{K},\pi \pi }^{I=0}\right|^2$$

For the $`\varphi \to \pi ^0\eta \gamma `$ case we have

$$\overline{\mathrm{\Sigma }}|t|^2=\frac{4}{3}e^2\left|\frac{M_\varphi G_V}{f^2\sqrt{2}}\stackrel{~}{G}_{K^+K^{-}}t_{K\overline{K},\pi \eta }^{I=1}+\frac{K}{f^2\sqrt{2}}\left(\frac{F_V}{2}-G_V\right)G_{K^+K^{-}}t_{K\overline{K},\pi \eta }^{I=1}\right|^2$$

where $`\stackrel{~}{G}_{K^+K^{-}}`$ and $`G_{K^+K^{-}}`$ are the loop functions mentioned above. We have evaluated the invariant mass distribution for these decay channels and in Fig. 2 we plot the distribution $`dB/dM_I`$ for $`\varphi \to \pi ^0\pi ^0\gamma `$, which allows us to see the $`\varphi \to f_0\gamma `$ contribution, since the $`f_0`$ is the important scalar resonance appearing in the $`K^+K^{-}\to \pi ^0\pi ^0`$ amplitude. The results are obtained using $`G_V`$=55 MeV and $`F_V`$=165 MeV, which are suited to describe the $`K\overline{K}`$ and $`e^+e^{-}`$ decays of the $`\varphi `$. The solid curve shows our prediction, with $`F_VG_V>0`$, the sign predicted by vector meson dominance, as quoted above. The dashed curve is obtained considering $`F_VG_V<0`$.
In addition we also show an intermediate dot-dashed curve, which corresponds to taking for $`G_V`$ and $`F_V`$ the parameters of the $`\rho `$ decay, $`G_V`$=69 MeV and $`F_V`$=154 MeV. We compare our results with the recent ones of the Novosibirsk experiment. We can see that the shape of the spectrum is relatively well reproduced considering statistical and systematic errors (the latter not shown in the figure). The results considering $`F_VG_V<0`$ are in complete disagreement with the data. The finite total branching ratio which we find for the $`\varphi \to \pi ^0\pi ^0\gamma `$ decay is $`0.8\times 10^{-4}`$, which is slightly smaller than the measured result $`(1.14\pm 0.10\pm 0.12)\times 10^{-4}`$, where the first error is statistical and the second one systematic. Another reported result is $`(1.08\pm 0.17\pm 0.09)\times 10^{-4}`$, compatible with our prediction. Should we use the values of $`F_V`$ and $`G_V`$ from the $`\rho `$ decay we would obtain $`1.7\times 10^{-4}`$. The branching ratio obtained for the case $`\varphi \to \pi ^0\eta \gamma `$ is $`0.87\times 10^{-4}`$. The results obtained at Novosibirsk are $`(0.83\pm 0.23)\times 10^{-4}`$ and $`(0.90\pm 0.24\pm 0.10)\times 10^{-4}`$. Should we use the values of $`F_V`$ and $`G_V`$ from the $`\rho `$ decay we would obtain $`1.6\times 10^{-4}`$. The spectrum, not shown, is dominated by the $`a_0`$ contribution. The results reported here are two examples of the successful application of the chiral unitary techniques, whose multiple applications have recently been reviewed.

## 3 Acknowledgments

We acknowledge financial support from the DGICYT under contracts PB96-0753 and AEN97-1693 and from the EU TMR network Eurodaphne, contract no. ERBFMRX-CT98-0169.
# Observation of Partially Suppressed Shot Noise in Hopping Conduction

## Abstract

We have observed shot noise in the hopping conduction of two-dimensional carriers confined in a p-type SiGe quantum well at a temperature of 4K. Moreover, shot noise is suppressed relative to its "classical" value 2eI by an amount that depends on the length of the sample and on the carrier density, which was controlled by a gate voltage. We have found a suppression factor relative to the classical value of about one half for a 2 $`\mu `$m long sample, and of one fifth for a 5 $`\mu `$m sample. In each case, the factor decreased slightly as the density increased toward the insulator-metal transition. We explain these results in terms of the characteristic length (about 1 $`\mu `$m in our case) of the inherent inhomogeneity of hopping transport.

Shot noise, which is a manifestation of the particle nature of the electric current, has lately received much attention because it can yield information complementary to that obtained from conductance measurements. It is most pronounced when the current is formed by statistically independent charges tunneling through a single potential barrier of low transparency, in which case the noise power spectral density, $`S`$, is equal to the Schottky, or classical, value of $`2qI`$, where $`q`$ is the value of the charge and $`I`$ the average current. This proportionality has been employed, for instance, to determine the effective charge in superconducting transport and in the fractional quantum Hall effect. In more general cases, when the motions of the charges are not independent of each other, the value of $`S`$ carries an additional factor $`F`$, the so-called Fano factor. Except when negative differential conductance occurs, the Fano factor is in the range $`0<F<1`$, meaning that shot noise is then partially suppressed. Knowledge of the degree of suppression sheds light at the microscopic level on the conduction mechanisms of a specific system. By now the noise characteristics of ballistic, diffusive, and chaotic transport have been established, as well as those of resonant tunneling and single-electron tunneling. For example, in diffusive conductors whose size along the current direction is smaller than the inelastic scattering length, $`F=1/3`$. In the opposite limit, that is, when the scattering length is much shorter than the length of the sample, $`F`$ approaches zero, and in macroscopic metallic conductors shot noise is completely suppressed. Although the 1/f-noise properties of hopping conduction have also been elucidated, surprisingly little is known about shot noise for such a well-studied transport mechanism, which has regained interest in connection with the metal-insulator transition recently observed in Si MOSFETs and other two-dimensional (2D) systems, including SiGe quantum wells. In this Letter we report the observation in a 2D hopping conductor of shot noise that is only partially suppressed, and we introduce a model that, in spite of its simplicity, can explain our experimental results. If in hopping conduction, where electrons tunnel assisted by phonons between localized states created by the random impurity potential, the determining factor were the inelastic scattering length, then, given the smallness of this length, shot noise should be zero. On the other hand, since the process involves tunneling through potential barriers, which ensures the discrete nature of the current, one could naively assume that shot noise should have the full $`2qI`$ value.
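To fix the orders of magnitude involved in this discussion, a few illustrative numbers (the current value is hypothetical, chosen only to match the scale of the experiments described below):

```python
e = 1.602e-19   # elementary charge, C
I = 30e-9       # hypothetical current on the scale of these experiments, A

S_full = 2.0 * e * I   # classical Schottky value, A^2/Hz
for label, F in [("full shot noise", 1.0),
                 ("diffusive, F = 1/3", 1.0 / 3.0),
                 ("5 identical barriers, F = 1/5", 0.2)]:
    print(f"{label:>30}: S = {F * S_full:.2e} A^2/Hz")   # ~1e-26 and below
```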
A closer look reveals a more complex situation. In a simple one-dimensional system in which electrons tunnel through N identical barriers, the Fano factor is $`F=1/N`$. When, as in hopping, tunneling occurs between single-electron states, shot-noise suppression can, depending on their occupancy, be a different function of $`N`$. Since the equivalent resistances of the various hops are exponentially different from each other and only the most resistive hop ("bottleneck") determines the current, it could be argued that effectively $`N=1`$. However, in real quasi-one-dimensional hopping, in which there is a maximum obtainable resistance (hard hop), the effective N should be the number of hard hops along the sample length, as in the case of identical barriers. In a 2D system, hopping conduction can be seen as occurring through a network of one-dimensional chains connected to each other at certain nodes, as is normally done in percolation theory, where the network is modeled by resistors of exponentially different values, out of which the most conductive subnetwork (critical percolation subnetwork) is selected. The characteristic size of this subnetwork is the length beyond which the sample is homogeneous, and its nodes are such that each chain contains only one resistor with the largest resistance (hard hop). Even this simpler subnetwork is still complicated enough to make it difficult to guess, let alone to calculate, what the effective $`F`$ will be. To answer this question experimentally we chose a 2D hole system confined in a modulation-doped SiGe well. The heterostructure, grown by molecular beam epitaxy on an n-Si substrate, consists (from the substrate up) of 4300Å of Si boron-doped at $`1\times 10^{18}`$ $`cm^{-3}`$, 225Å of undoped Si, 500Å of Si<sub>0.8</sub>Ge<sub>0.2</sub> (quantum well), 275Å of undoped Si, and finally 725Å of Si boron-doped at $`1\times 10^{18}`$ $`cm^{-3}`$. The 2D hole density and the in-plane resistivity were controlled by a voltage $`Vg`$ applied to an Al Schottky gate deposited on the top layer. The heterostructure was processed into samples with a gate width of 50$`\mu `$m and a length (along the current direction) of either 2$`\mu `$m or 5$`\mu `$m. The noise measurements were done with the samples immersed in liquid He. At $`T=4K`$, even for the smallest $`Vg`$ used, the samples had a resistance per square much larger than the quantum resistance $`h/2e^2`$, so that they were always in the insulating regime. Current through the sample was produced by applying a $`dc`$ bias and a small $`ac`$ signal, $`Vin`$, to a $`1M\mathrm{\Omega }`$ load resistor. The voltage drop across the sample, $`Vsd`$, was measured simultaneously with the $`ac`$ signal and the noise, using a lock-in technique and a spectrum analyzer. The $`1/f`$-noise contribution was reduced by doing the measurements at high frequencies, up to 100kHz, which demanded minimizing lead capacitance and required placing a preamplifier inside the cryostat, very close to the sample. The preamplifier, with an output resistance of about 100$`\mathrm{\Omega }`$, was a commercial low-power switching MOSFET connected in source-follower configuration. To avoid heating, the $`dc`$ current through the preamplifier was kept at 1mA, enough to get an amplification coefficient of $`0.9`$.
The input impedance of the preamplifier (defined by the resistance of the sample in parallel with the load resistor and parasitic capacitances) was proportional to the preamplifier's transfer function, $`TF`$ ($`Vout`$/$`Vin`$), whose real and imaginary components were determined with the $`ac`$ signal applied to the load. Figure 1(a) shows the measured output voltage noise for a 2$`\mu `$m long sample as a function of gate voltage, in the absence of any in-plane current (zero bias) and for three different frequencies (20 kHz, 50 kHz, and 80 kHz). The origin of this voltage is thermal noise. As expected, the voltage noise spectral density follows the real part of the impedance and of the transfer function, shown in Fig. 1(b). The background preamplifier noise, seen at small $`Vg`$ (small sample resistance) in Fig. 1(a), has a 1/f dependence. Taking into account this noise, about 6$`nV/\sqrt{Hz}`$ at $`f_0=80kHz`$, we confirmed that the measured noise is indeed thermal noise at $`T=4K`$ (inset in Fig. 1(b)). When the sample resistance is much larger than the load resistance (at high gate voltage) the TF saturates. From the saturation value we determined a capacitance in parallel with the sample of about 2$`pF`$, which is mainly the gate-drain capacitance of the preamplifier. The transfer function exhibits small but noticeable oscillations at $`Vg\approx 0.5V`$, as illustrated in Fig. 1(b). Their presence suggests that the sample is close to the mesoscopic size regime, where conductance is not self-averaged but depends on a particular spatial configuration of the fluctuation potential. In the language of percolation theory, we can say that those oscillations reveal that the length of the sample and the size of the critical subnetwork are comparable. For a fixed gate voltage, the current noise was obtained by measuring the voltage noise spectral density as a function of the current. The transfer function, measured simultaneously, was then used to calculate the current noise spectral density at the preamplifier input. The dependence of the current noise on current at $`Vg=0.5V`$ is shown in Fig. 2, for $`f=20`$, $`50`$ and $`80kHz`$. The insignificance of the thermal noise (the residual noise density at $`Vsd=0`$) is a consequence of the fact that the sample's resistance is larger than the load resistance. Indeed, the theoretical thermal noise is $`4k_BT/(1M\mathrm{\Omega })\approx 0.2\times 10^{-27}A^2/Hz`$, a number that is consistent with the $`I=0`$ limit of Fig. 2. The signatures of shot noise, namely linear dependence on current and independence of frequency, are evident in the figure. In view of the strongly non-linear dependence of the current on Vsd (inset of Fig. 2), it could seem surprising that the proportionality between noise and current is maintained over a large current range. This proportionality suggests (as validated below) that even at the highest voltage the in-plane electric field is still weak enough not to modify the critical percolation network, and indicates that the measured shot noise is not sensitive to possible field-induced variations of the hopping percolation paths. From that proportionality, a value $`F=0.59`$ is obtained for the Fano factor. It is noticeable in Fig. 2 that at $`I=10nA`$ there is a hump in the noise spectral density, which is larger at lower frequency. This is a signature of random telegraph noise in mesoscopic structures, also seen before in hopping transport. The Fano factor does not depend very strongly on gate voltage, as shown in Fig. 3 (top curves).
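The extraction of the Fano factor from data such as those of Fig. 2 amounts to checking the thermal floor and fitting the slope of S versus I; a minimal sketch, with synthetic data standing in for the measured spectra, is:

```python
import numpy as np

k_B, e = 1.381e-23, 1.602e-19
T, R_load = 4.0, 1.0e6

S_thermal = 4.0 * k_B * T / R_load          # Johnson noise of the load resistor
print(f"thermal floor: {S_thermal:.1e} A^2/Hz")   # ~0.2e-27, as quoted above

# Synthetic S(I) with F = 0.59, standing in for the measured spectra of Fig. 2
I = np.linspace(0.0, 50e-9, 26)
S = S_thermal + 0.59 * 2.0 * e * I

slope, _ = np.polyfit(I, S, 1)
print(f"Fano factor F = slope/(2e) = {slope / (2.0 * e):.2f}")
```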
When $`Vg`$ decreases from $`Vg=0.55V`$ to $`0.2V`$, $`F`$ drops from $`0.61`$ to $`0.43`$, which is still roughly one half of the classical value over this range of $`Vg`$. Since the smaller the gate voltage the larger the in-plane conductance, for $`Vg=0.2V`$ thermal noise dominates at low current; hence the curvature in the noise characteristic observable in Fig. 3. The transition from thermal to shot noise occurs above $`2k_BT/e`$. Figure 3 also illustrates the dependence of shot noise on the length of the current path. Similar measurements to those on the $`2\mu m`$ sample are shown for a $`5\mu m`$ sample. In this case, the variation of shot noise with Vg has the same trend as before, but the change is smaller. Most significant, however, is that shot noise is much more suppressed in the longer sample, in which the measured Fano factor is F = $`0.2`$; that is, shot noise is $`1/5`$ of its classical value. We can explain our results if we assume that there is a characteristic scaling length $`L_0\approx 1\mu m`$ in 2D hopping for both samples, such that the Fano factor is just the ratio of that length to the length of the sample, $`F=L_0/L`$. It is also reasonable to assume that this scale, which is the distance between hard hops of the critical percolation subnetwork, characterizes the homogeneity of the sample. Then the trend of the noise suppression with $`Vg`$ reflects the fact that hopping becomes more uniform as the sample is driven towards the insulator-metal transition. To justify these assumptions we can take into account the fact that when an electric field is applied, only hard hops along the field direction are modified (e.g., decrease their resistances) and thus the network is separated into a set of equivalent parallel chains. In this case, the total shot noise would be that of a single chain, which, as we have seen, will have a Fano factor inversely proportional to the number of hard hops in the chain. The distance between hard hops in the percolation subnetwork can be obtained using

$$L_0=l(T)\left(\frac{T_0}{T}\right)^{\frac{\nu }{d+1}},$$ (1)

where $`l(T)`$ is the characteristic hopping length in the zero-field limit, $`T_0`$ is a characteristic temperature inversely proportional to the density of states at the Fermi level and to the localization radius $`a`$, $`d`$ is the effective dimensionality of the system ($`d=1`$ for hopping with a Coulomb gap and $`d=2`$ for 2D hopping), and $`\nu `$ is the critical index of the correlation radius, which is about 1.3 for a 2D system. In turn, the hopping length can be estimated within the percolation model, in which the non-linear conductance G(E,T) is written as

$$G(E,T)=\frac{I}{Vsd}=G(0,T)\mathrm{exp}\left(\frac{eEl(T)}{k_BT}\right),$$ (2)

where $`E`$ is the electric field. (The other symbols have their usual meaning.) This expression is only valid in the low-field regime, that is, when $`eEa<k_BT`$. The hopping length depends on temperature as

$$l(T)=a\left(\frac{T_0}{T}\right)^{\frac{1}{1+d}}.$$ (3)

When Eq. (2) was used in combination with the experimental $`I`$-$`Vsd`$ characteristics (Fig. 2, inset), a value of $`l\approx 0.08\mu m`$ was obtained for the hopping length of the $`2\mu m`$ sample at T = 4K. The localization radius was estimated to be 100-130Å from Eq. (3) and the experimental temperature dependence of the zero-field conductance in the range $`2K<T<30K`$. The 30Å variance of $`a`$ reflects the difficulty in discerning experimentally between $`1/2`$ and $`1/3`$ for the exponent in Eq. (3).
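Equations (1) and (3) can be combined by eliminating $`T_0/T`$, which gives $`L_0=l(l/a)^\nu `$ independently of $`d`$; the $`d`$-dependence of the final numbers then enters only through the estimate of $`a`$. A short numerical rendering of this estimate (our own reorganization of the formulas above):

```python
l, nu = 0.08, 1.3               # hopping length (um) and correlation-radius index
L_samples = (2.0, 5.0)          # sample lengths, um

for a_angstrom in (100.0, 130.0):       # localization radius estimates
    a = a_angstrom * 1e-4               # Angstrom -> um
    L0 = l * (l / a) ** nu              # L0 = l (l/a)^nu, from Eqs. (1) and (3)
    print(f"a = {a_angstrom:.0f} A: L0 = {L0:.2f} um, "
          f"F(2um) = {L0 / L_samples[0]:.2f}, F(5um) = {L0 / L_samples[1]:.2f}")
```

Running this gives $`L_0`$ between about 0.85 and 1.2 $`\mu `$m and Fano factors in the ranges 0.4-0.6 and 0.17-0.24 for the two samples, consistent with the measured values quoted above.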
This estimate of the localization radius is consistent with a fluctuation potential created by interface impurities separated by about 200Å from each other (surface density of $`2\times 10^{11}cm^{-2}`$) and validates our low-field assumption, since for $`Vsd=30mV`$ (inset, Fig. 2) we have $`eEa\approx 0.15meV<k_BT\approx 0.36meV`$. Finally, from Eq. (1) we get either $`L_0\approx 0.8\mu m`$ or $`1.2\mu m`$, depending, again, on whether we use $`d=1`$ or $`2`$. A similar analysis for the data on the $`5\mu m`$ sample yielded a value of 1$`\mu m`$ for the inter-node distance. This result tells us that if two nodes are separated by 1$`\mu m`$, then there would be two hard hops in a $`2\mu m`$ sample and five such hops in a $`5\mu m`$ sample. Consequently, according to the above discussion, the corresponding Fano factors in the shot noise formula should be $`0.5`$ and $`0.2`$, respectively. These values are in excellent agreement with the experimental results. Although these results have been obtained for a 2D hole gas, they should be general to any other system in the hopping regime. It also follows from our results that by decreasing the sample length even further one could obtain full shot noise, corresponding to tunneling through only one hard hop along the current direction. Interestingly, a further decrease in length and in temperature should cause a transition to resonant tunneling transport, such as resonant tunneling through impurities. The only calculation available for the shot noise in such a regime, done recently for tunneling through one impurity, gives $`F=3/4`$. We hope that the experiments presented here will stimulate calculations in the entire hopping conduction regime. We are thankful to Alexander Korotkov for numerous discussions. This work has been sponsored by the Department of Energy (DOE Grant No. DE-FG02-95ER14575).