# ISOGAL Survey of Baade's Windows in the Mid-infrared

## 1 Introduction

The ISOGAL project (Pérault et al., 1996; Ojha, Omont & Simon, 1997; Omont et al., 1999a,b,c) has surveyed a number of fields at low galactic latitude in the mid-infrared using the ISOCAM instrument (Cesarsky et al., 1996) of the ISO satellite. (This is paper no. 3 in a refereed journal based on data from the ISOGAL project. ISO is an ESA project with instruments funded by ESA member states, especially the PI countries: France, Germany, the Netherlands and the United Kingdom, and with the participation of ISAS and NASA.) Its aim is the better understanding of the stellar content of the inner Galaxy, mostly in very obscured regions. Normally, the filter bands chosen for the observations were LW2 (5.5–8 µm) and LW3 (12–18 µm). The pixel size was 6 × 6 arcsec².

The dusty shells surrounding mass-losing late-type stars are expected to be particularly prevalent amongst the objects detected. These stars will also, in general, be seen by the DENIS and 2MASS surveys at $IJK_S$ and $JHK_S$ respectively. However, in the more obscured ISOGAL fields there is little information available concerning individual stars, except for those few that are among the most luminous OH/IR sources. For this reason, it was decided to include in the programme some of the relatively unobscured fields known as "Baade's Windows", located in the inner Bulge. These should serve as relatively well surveyed comparison areas. $A_V$ is thought to average 1.78 ± 0.10 mag in these fields as a whole (see Glass et al., 1995). (An extinction map by Stanek (1996) indicates, however, that it is particularly low, about $A_V \sim 1.45$, in the part of the NGC 6522 window covered by our observations. The $K_0$ magnitudes that we quote later in this work remain as published by the original authors, who used $A_K \sim 0.14$ mag.) In both cases the extinction will be almost negligible in the LW2 and LW3 bands discussed here, and the ISOGAL magnitudes have not been corrected for interstellar extinction.

In particular, we have examined the field located around the globular cluster NGC 6522 and that known as Sgr I. The late-type stellar content (M stars) of the first of these fields has been surveyed and classified by Blanco, McCarthy and Blanco (1984) (BMB) by objective prism, and a smaller portion has been examined similarly by Blanco (1986) to fainter magnitudes. $I$-band photometry is given for the BMB survey, together with an indication of variability, whereas $V$-band photometry is provided by Blanco (1986). The long-period variable star content of both fields has been surveyed by Lloyd Evans (1976; TLE), using $I$-band plates, and he summarizes previous work at visible wavelengths. Infrared ($JHKL$) studies of Lloyd Evans' variable stars in NGC 6522 and Sgr I were carried out by Glass and Feast (1982) in order to make use of their period–luminosity relation for determining the distance to the Galactic Centre. It was later pointed out by Feast (1985) that many of the IRAS sources in the Sgr I and NGC 6522 windows could be identified with known variables. These were listed by Glass (1986), who showed that the remaining (unidentified) IRAS sources in Sgr I were very red at $JHKL$ and were also likely to be long-period variables.
Following this work, the known long-period variables and IRAS sources in the Sgr I field were monitored by Glass et al. (1995) and periods were confirmed or determined for all sources except one that was non-variable. The $K_0$, $(H-K)_0$ colour–magnitude diagram of the Sgr I field shows that the long-period variables are among the reddest and most luminous objects (Glass, 1993) at these wavelengths. The dispersion of the $K_0$, $\log P$ relation in Sgr I is quite small ($\sim$0.35 mag; Glass et al., 1995). This implies that most of the AGB stars in the field are at a nearly uniform distance, allowing a simple connection to be made between apparent and absolute magnitudes.

Photometry of some of the BMB (1984) stars has been obtained by Frogel and Whitford (1987). Deep near-infrared photometry of the NGC 6522 window is presented by Tiede, Frogel and Terndrup (1995). The DENIS results will form the subject of a separate paper.

## 2 ISOGAL Observations

The ISOGAL observations were made in the two filters mentioned, as rasters covering squares of 15 × 15 arcmin² orientated in $\ell$, $b$. They were centered at $\ell$ = +1.03, $b$ = −3.83, which includes the globular cluster NGC 6522 itself, and at $\ell$ = +1.37, $b$ = −2.63 in Sgr I. Each position on the sky was observed for a total of 22 s on average. Two rasters of each field were made with the LW2 filter and one with LW3 (Table 1). The first three digits of each identification indicate the ISO revolution number ($\approx$ day of flight). The second LW2 observation in each case is almost simultaneous with that in LW3 (within $\sim$30 min). The images are shown in Figs 1–4. The 6″ pixel field of view was utilised in all cases.

Reduction of the science processed data (SPD) from version 6.32 of the OLP (Off-line Processing) pipeline was carried out with the CIA package (CIA version 3.0). (The ISOCAM data presented in this paper were analysed using 'CIA', a joint development by the ESA Astrophysics Division and the ISOCAM Consortium. The ISOCAM Consortium is led by the ISOCAM PI, C. Cesarsky, Direction des Sciences de la Matière, C.E.A., France.) The data were first corrected for dark current, using the default method of subtracting a 'model' dark frame. Following this, the cosmic-ray hits were removed from the data cube using the multi-resolution median method. At this stage, two copies of the data cube were made and the individual copies were treated by two different methods for simulating the time behaviour of the pixels of the ISOCAM detectors: the 'vision' method and the 'IAS model transient correction', also known as the 'inversion' method (Abergel et al., 1998). The vision method does not correct much for the transient behaviour but is useful in removing the memory remnants (electronic ghosts) of previously observed strong sources. From this stage onwards in the reduction procedure we thus had two sets of data, corresponding to the two methods of stabilization. For each data set the images at each raster pointing were averaged. The data were then flat-fielded with the flats generated from the raster observations themselves. Subsequent to the flat-fielding, the individual images were mosaicked together after correcting for the field-of-view distortion using the 'projection' method. The two individual rasters thus obtained were then in units of ADU/gain/sec. They were converted into mJy/pixel units within CIA.
The conversion factors were
$$F(\mathrm{mJy}) = (\mathrm{ADU/gain/sec})/2.33$$
for LW2 and
$$F(\mathrm{mJy}) = (\mathrm{ADU/gain/sec})/1.97$$
for LW3 (Blommaert, 1998). These are correct for an $F_\lambda \propto \lambda^{-1}$ power-law spectrum at wavelengths 6.7 µm and 14.3 µm respectively.

Source extraction was performed on each pair of reduced images using a point spread function fitting routine (Alard et al., in preparation). The vision-treated point sources were cross-identified with the inversion-treated ones and the final catalogue of point sources was built with the inversion photometry for those sources found in both vision- and inversion-treated images. This procedure ensured that most false sources were dropped while the better photometry of the inversion-treated images was retained. Conversion to magnitudes was then carried out using the formulae
$$[7] = 12.38 - 2.5\log F_{\mathrm{LW2}}(\mathrm{mJy})$$
and
$$[15] = 10.79 - 2.5\log F_{\mathrm{LW3}}(\mathrm{mJy}),$$
where the zero points have been chosen to give zero magnitude for a Vega model flux at the respective wavelengths mentioned earlier. We have limited the extracted catalogue to sources with fluxes greater than 5 mJy in both LW2 and LW3. This corresponds to [7] = 10.64 and [15] = 8.99.
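The unit and magnitude conversions above are simple enough to check directly. A minimal sketch (the function names are our own, not part of any CIA or ISOGAL software):

```python
import math

def adu_to_mjy(adu_per_gain_sec, band):
    """ISOCAM rate to flux density; divisors from Blommaert (1998)."""
    return adu_per_gain_sec / {"LW2": 2.33, "LW3": 1.97}[band]

def flux_to_mag(f_mjy, band):
    """ISOGAL magnitudes; zero points set m = 0 for a Vega model flux."""
    return {"LW2": 12.38, "LW3": 10.79}[band] - 2.5 * math.log10(f_mjy)

# The 5 mJy catalogue cut quoted in the text:
print(f"[7]  limit: {flux_to_mag(5.0, 'LW2'):.2f}")  # 10.63, i.e. [7] ~ 10.64
print(f"[15] limit: {flux_to_mag(5.0, 'LW3'):.2f}")  # 9.04, close to the
                                                     # quoted [15] = 8.99
```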
The reliability of the ISOCAM data is affected at the faint end by crowding and noise. These effects can be investigated by constructing histograms of source counts vs magnitude. Figure 5 is a histogram of the LW3 observations in the NGC 6522 field. Also included is the expected distribution, based on averaged magnitudes and a relation $[15]_{\mathrm{expected}} = 1.56 \times [7] - 5.0$. (This approximation is a line passing through the $[15]$, $[7]-[15]$ values of (9.0, 0.0) and (3.96, 1.8) in the colour–magnitude diagrams, Figures 8 and 9.) It appears that the NGC 6522 detections at 15 µm are reliable to a depth of $\sim$9.0 mag. Figure 6 is a similar diagram for the Sgr I field.

From analysis of repeated observations, the rms dispersion of the ISOGAL photometry is estimated to be generally less than 0.2 magnitudes, except towards the fainter magnitude ranges, where it rises to $\sim$0.4 mag (Ganesh et al., in preparation). See also section 3.3.3 of this paper. A small uncertainty remains in the absolute photometry due to the crowded nature of the ISOGAL fields. It is felt that the behaviour of the inversion method of transient correction is not yet completely understood and it is hoped that the uncertainty can be reduced in the near future. It may be noted that, with $\sim$45 and $\sim$35 pixels per LW2 source respectively, the density of sources in our two fields is close to the confusion limit in both.

The astrometry of the extracted sources was improved by using the newly available DENIS $K$-band observations of these fields. A systematic shift of 0.8″ (with an rms dispersion of 0.6″) in right ascension and 5.1″ (0.7″) in declination was found for the NGC 6522 observation. The corresponding numbers for the Sgr I observation were 2.0″ (0.9″) and 6.1″ (0.8″) respectively. The positions given in Tables 2 and 3 include the corrections.

### 2.1 NGC 6522

Sources were extracted from the LW2 and LW3 images and cross-correlated to form a working table. The total number detected was 497. Of these, 182 or 37% were detected at 15 µm. The remainder were detected only in LW2, usually because they were too faint to be seen at 15 µm. Sources were detected in both LW2 exposures in 363 cases. Non-detections in LW2 are often clearly due to blending with strong sources in their neighbourhoods. In some cases the reliability of the detection is questionable and needs to be confirmed when deeper data become available. Fifty-four sources were detected only in the first LW2 observation and 66 only in the second. These are for the most part close to the detection limit and their existence requires confirmation. Some 14 sources were detected at 15 µm only. Each was investigated visually in the three images and comments on individual sources were included in the working table. Because of the small uncertainty in the zero-point of the photometry that has already been mentioned, we have decided to publish only those sources brighter than magnitude 7.5 for the time being (see Table 2). The full table will be published when it is felt that the performance of the camera in crowded fields is better understood. The lack of a strong density enhancement in the neighbourhood of NGC 6522 indicates that few sources are likely to be members of the cluster itself. Circles with radius 0.02 degree about the cluster centre are shown in Figs 1 and 2.

### 2.2 Sgr I

A total of 696 sources were detected. Of these, 287 or 41% were detected at 15 µm. Only 20 sources were detected solely at 15 µm. At 7 µm, 517 sources were detected twice. Eighty-eight sources were only seen in the first LW2 exposure and 71 were only seen in the second. As in the case of the NGC 6522 field, only those sources brighter than magnitude 7.5 are given in Table 3.

## 3 Correlations with other Catalogues

The fields observed by ISOGAL overlap completely or in part with areas surveyed in other ways.

### 3.1 Spectroscopic information (NGC 6522 only)

The spectroscopic survey of BMB (1984) covers M6 and later M-type giants. Spatially, more than 80% of our field is included and almost all their stars in the overlap area ($\sim$112) were detected. Cross-correlation of source positions was performed by means of transparent overlays. The sources BMB 20 and 21 may both correspond to our no. 118. The following BMB sources that we do not detect are of spectral type M6: BMB 41, 56, 62, 82, 100 and 118. We also did not see BMB 13 (M5). These are all at the early end of the range.

More interesting is the deeper survey of Blanco (1986), which covers a much smaller portion of the ISOGAL field but includes earlier spectral types, from M1 onwards. Table 4 summarizes our detections as a function of spectral type for the part of the field coincident with the deep spectroscopic survey. It is clear that our survey cuts off between spectral classes M3 and M4 (III), where the changeover from majority non-detections to majority detections occurs. It must also be expected that some brighter objects from the foreground will have been included. In fact, there are eight objects in the overlap region that were detected in the ISOGAL programme but were not classified as having spectra in the range M1–M9 by Blanco (1986). Their $I$-band counterparts appear quite bright. They were examined on $I$ and $V$ plates (Lloyd Evans, 1976) and were found to be probably non-variable and to have colours corresponding to K or early M types. Only one was detected at 15 µm. The others are almost certainly foreground stars.
### 3.2 Photometric information

#### 3.2.1 NGC 6522

Only one object in the IRAS Point Source Catalog falls within our survey area. This is IRAS 17598−2957, which coincides with our no. 180 and has magnitude 5.11 (IRAS 12 µm flux = 0.87 Jy). Star 224 has magnitude 4.82 and star 12 has 3.64, so it is surprising that these objects were not also detected by IRAS. The cause may be variability or crowding of sources.

A number (56) of our NGC 6522 stars have been observed by Frogel and Whitford (1987) or Tiede, Frogel and Terndrup (1995) on the CTIO/CIT near-infrared system. Some of the variables also have photometry by Glass and Feast (1982). Fig. 7 shows the $(J-K)_0$, $K_0$ diagram of the ISOGAL objects that were measured on the CIT/CTIO system.

$I$-band photographic photometry was included in the BMB (1984) M-star survey. Sharples, Walker and Cropper (1990) have pointed out that the stars with $I$ mag < 11.8 have a significantly lower velocity dispersion than fainter ones, indicating that they probably belong to the foreground disc. A similar effect is seen amongst K giants in the same field (Sadler, Terndrup and Rich, 1996). There are 9 BMB stars with $I$ < 11.8 amongst the ISO detections, including two Miras (TLE 238 and TLE 136) and a possible semiregular variable (BMB 18).

#### 3.2.2 Sgr I

There are two IRAS sources within the ISOGAL field, viz. no. 363 = IRAS 17559−2901 = TLE 53 (IRAS 12 µm flux = 1.7 Jy) and no. 247 = IRAS 17558−2858 = TLE 79 (IRAS 12 µm flux = 1.19 Jy).

### 3.3 Mira-type Variability

The spectra of Mira variables change by several sub-types around the cycle, so that a direct correspondence between, for example, period and spectral sub-type is not to be expected. As will be seen, several stars with positions around those of the less-luminous Miras in the colour–magnitude diagrams have been examined for large-amplitude variability with negative results. It therefore appears that our inventory of the Mira variables in these two fields is complete.

#### 3.3.1 NGC 6522

Six stars from the ISOGAL field were detected by Lloyd Evans (1976) as Mira variables. Four of these are amongst the most luminous at 15 µm, while one, of relatively short period for a "long-period variable", is near the limit of detection. The IRAS source corresponds to TLE 228. Table 5 shows the cross-identifications. The relatively short-period star (115 d) TLE 395 is only just detectable at 15 µm. The two ISOGAL 7 µm exposures were compared by magnitude range (in the first measurement) and the average differences and their rms values were found for stars that appear in both. The result is shown in Table 6.

#### 3.3.2 Sgr I

Nine stars within the ISOGAL field are known long-period variables (see Table 7). These have been followed at $JHKL$ by Glass et al. (1995), who list their mean magnitudes and periods. The two ISOGAL 7 µm exposures were again compared by magnitude range. The result is shown in Table 8.

#### 3.3.3 Photometric consistency

If the known variables are omitted, the average and rms differences are somewhat reduced. In the case of NGC 6522, the known variables show only small differences and do not materially affect the rms values. The error in the repeatability of a single 7 µm observation in either field is thus about 0.14 mag, where we have divided the rms difference columns in Tables 6 and 8 by a factor of $\sqrt{2}$.
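The $\sqrt{2}$ step is simply error propagation for the difference of two equally noisy measurements. A minimal sketch, with an illustrative rms value rather than the actual per-range numbers from Tables 6 and 8:

```python
import math

rms_difference = 0.20   # illustrative rms of the two-epoch 7 um differences
                        # (the measured values are tabulated per magnitude range)
sigma_single = rms_difference / math.sqrt(2)   # error of one observation
print(f"single-epoch error ~ {sigma_single:.2f} mag")   # ~0.14 mag
```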
### 3.4 OH/IR Catalogues

No known OH/IR sources fall within our fields (of 0.063 square degrees each). The density of known OH/IR sources in the part of the sky occupied by our two fields is only 1–2 per square degree (Sevenster et al., 1997), though it is higher at the Galactic Centre ($\sim$10 per square degree for a survey of similar depth). The deepest surveys of the Central region indicate that the density of OH/IR stars reaches several hundred per square degree, or about 1/3 the number of known large-amplitude variables (Glass et al., 1999). From their $H-K$ colours, the Central region sources do not necessarily possess optically thick dust shells at $K$. However, it is not clear whether a deeper OH survey of the NGC 6522 and Sgr I fields would yield detections from the objects with moderate dust shells reported here.

## 4 The ISOGAL $[15]$, $[7]-[15]$ colour–magnitude diagrams

Stars detected at both wavelengths are shown in the $[15]$, $[7]-[15]$ colour–magnitude diagrams (Figs 8 and 9). The fact that these and similar diagrams discussed below show well-defined sequences implies that they are not significantly contaminated by foreground stars. It is instructive to see how the $[7]-[15]$ colours change with spectral type, bearing in mind that the few Miras may vary between late types. This is shown for the NGC 6522 field in Fig. 10, which includes all our sources that have been classified by BMB (1984) or Blanco (1986) and have been detected in both bands. Note that the fraction of stars satisfying both these criteria is almost zero for the early M-types and almost one for the later. Except for M sub-types 6, 6.5 and 7, the numbers are small. This selection effect arises from the fact that many stars fall below our limit in the 15 µm band.

### 4.1 Mira $[7]-[15]$ colours

The $[7]-[15]$ colours of the Mira variables are noticeably displaced from the extrapolation of the sequence formed by the remainder of the late M stars in the ISOCAM colour–magnitude diagrams (Figs 8 and 9). For Miras, the relative flux in the 7 µm band must be greater, or that in the 15 µm band must be weaker, or both. O-rich Mira spectra have strong peaks due to the 9.7 µm and 18 µm silicate dust emission features. These are seen clearly in ISO-SWS spectra (Onaka et al., 1998). The ISOCAM LW3 band (12–18 µm) is situated longward of the 9.7 µm silicate band but is strongly influenced for about half of its width by the 18 µm feature. So far as is known, the LW3 band is not affected significantly by (gaseous) molecular absorption, the most conspicuous feature being the CO₂ band at 15.0 µm, which appears in some O-rich Miras (Onaka et al., 1997) with small equivalent width. In many sources an extension of the 10 µm peak to 13 µm, attributed to Al₂O₃, may also contribute to the ISOCAM 15 µm band.

## 5 $K-[15]$ colours

The $K-[15]$ colour is primarily a measure of the infrared excess emitted by circumstellar dust shells. The $K$-band flux originates in the stellar photosphere and is only moderately affected ($\sim$0.1–0.2 mag overall) by CO and H₂O absorption bands. As mentioned above, the ISOCAM LW3 band (12–18 µm) is strongly influenced for about half of its width by the 18 µm silicate feature. Here we consider those sources detected at 15 µm which also have ground-based near-infrared observations available from Frogel and Whitford (1987). Spectral classifications by BMB (1984) or Blanco (1986) are also available for these objects. Figure 11 shows their 15 µm mag vs $K-[15]$ colour and Fig. 12 shows the 15 µm vs M-subtype diagram.
There is a well-defined sequence of increasing 15 µm luminosity with $K_0-[15]$ colour. Figs 11 and 12, as well as the other C-M diagrams, show that there is a steady increase of mass-loss from at least the mid-M giants to the Miras. It is also clear, for example by adding up the fluxes from each 15 µm magnitude band, that the flux from non-Miras exceeds that from the Miras. It is therefore also likely that at least half of the mass being returned from the stars to the interstellar medium comes from the non-Miras. As pointed out by Omont et al. (1999c, in preparation), $K_0 \sim 8.2$ corresponds to the tip of the red giant branch (RGB) (Tiede, Frogel and Terndrup, 1995). This point corresponds (Fig. 11) to $K_0-[15] \sim 0$, or $[15] \sim 8.2$. Thus stars brighter than $[15] \sim 8$ are on the AGB.

## 6 High mass-loss objects with little or no variation

Five of the known long-period variables in the NGC 6522 field are near the top of the ISOCAM colour–magnitude diagram (Fig. 8), but are interspersed with other stars that appear similarly luminous and somewhat redder. Ten of the latter stars, brighter than magnitude 5.7, were selected for detailed investigation. They are denoted by letters in Table 2 and are visible as red stars on $V$ and $I$ plates (Lloyd Evans, 1976). They have been re-examined for variability. Although BMB (1984) assigned very late spectral types and suggested variability, none of the ten stars is, in fact, a large-amplitude variable. However, variability with small amplitude cannot be excluded in about half of them. They therefore represent a type of late M star with high mass loss but small or zero amplitude of variation. The colours of four of these stars (A, C, D and F) are shown in the $(J-K)_0$, $K_0$ diagram, Fig. 7, where they appear to be similar to 200–300 day Miras. One of the sample (star B) may be mainly at maximum, with occasional faint episodes, while stars A and G could be variables with occasional bright episodes superimposed on a constant background. Such variability in these stars is consistent with differences at 7 µm between the two observations, especially for star G (where the change amounts to $\sim$0.53 mag).

For Sgr I, as in the case of the NGC 6522 field, 12 objects with 15 µm mags similar to the known Miras and somewhat redder colours were checked for variability by Lloyd Evans on the plate material. In all cases, these objects could be identified with very red stars. Only two were found to be fairly certain variables of low amplitude (F and H). Both these stars spend most of their time at maximum, with two fainter episodes separated by about 250 days for F and a single one for H. The positions of these objects in the $[15]$, $[7]-[15]$ diagram (Figs 8, 9) suggest that they form a continuation of the general sequence of late M stars and are not similar to the Mira variables. Unfortunately, only four of them, BMB 28, BMB 46, BMB 86 and BMB 179, have been measured in the near-infrared. Their $K_0-[7]$ indices are near zero (Fig. 14). Two of these non-Mira high mass-loss objects have been classified spectroscopically by BMB (1984) as M9, a spectral type that is usually associated with large-amplitude variability. It is known that the onset of Mira-type behaviour occurs at later types with increasing metallicity. The existence of this class of stars may therefore be a consequence of super metal richness. Alternatively, variability may have ceased temporarily while the dust shells have not yet dissipated.
Sloan and Price (1995) and Sloan, LeVan and Little-Marenin (1996) have shown that certain irregular and semi-regular AGB variables have dust excesses in IRAS spectra, associated with the appearance of the 13 µm dust feature. By obtaining mid-infrared spectra of our objects, or more precise variability information, it will be possible to decide whether they belong to this category. They may also be similar to the 'red' O-rich SRb variables of Kerschbaum and Hron (1996).

## 7 The $K_0-[7]$ colours

The 56 objects of NGC 6522 with $JHK$ photometry and spectral classifications are plotted in Figure 13, which shows their $K_0-[7]$ colours and spectral types. These colours are unexpectedly negative in most cases, especially for the earlier M sub-types. While part of this could be due to the small uncertainty in the absolute 7 µm photometry in crowded fields, ISO SWS spectra of late-type O-rich M stars frequently show a broad absorption shortward of the strong silicate dust peak at 10 µm. This region is known to be affected by the SiO fundamental at $\sim$7.9 µm (Cohen et al., 1995). Carefully calibrated spectrophotometry of β Peg (M2.5 II–III) shows absorption at around 6 µm attributed to the $\nu_2$ band of water vapour centred at 6.25 µm. Spectra of O-rich M-type giants that also demonstrate these features are presented by Tsuji et al. (1998), who envisage that they arise in a warm absorbing layer somewhat above the photosphere. Approximate calculations based on the Tsuji et al. spectra yielded $K_S-[7] \sim -0.1$ for β Peg but slightly positive $K_S-[7]$ for SW Vir (M7 III). (Spectrophotometry shows that $K - K_S(\mathrm{DENIS})$ should have values of 0.02 to 0.07 for Miras because the $K_S$ band does not include the first overtone band of CO.) The presence of significant amounts of dust could be an influence in the latter case. The depth of the CO fundamental band in late M-type giants is known to have a similar effect on the $L-M$ colours, which tend to have low or negative values.

Figure 13 shows that in the latest M-type giants the $K_0-[7]$ colour approaches zero or may even become positive. It is known that the overtone SiO absorption band at 3.95–4.1 µm is strong in semiregular variables, but in Miras its strength varies with time and it can become weak (Aringer et al., 1995; Rinsland and Wing, 1982). The optical depth of the circumstellar dust in the stars that we are discussing is very low in the earlier spectral types, rising somewhat towards the Miras but never becoming high. The emission at 9.7 and 18 µm is probably dominated by particles at a fairly uniform temperature around 1000 K, where condensation of silicate dust grains first becomes possible in the stellar wind. Thus the 15 µm fluxes are largely a measure of the mass of grains present and ultimately of the mass-loss rate from the star. The near-infrared $K$ flux will not be affected by emission from the dust but will be reduced slightly by the absorption it causes at shorter wavelengths. The opacity of silicate dust at 7 µm is very low (see e.g. Schutte and Tielens, 1989), so the trend of $K-[7]$ colour with spectral type that we observe could partly be caused by increasing extinction at $K$ and decreasing SiO and possibly H₂O molecular band strengths at 5.5–8 µm.
However, dust emission at 7 µm is certainly responsible for the largest $K_0-[7]$ values observed in LPVs, as shown by Groenewegen (private communication) for various dust models.

### 7.1 The $K_0-[7]$, $[7]$ colour–magnitude diagram

In the $K_0-[7]$, $[7]$ colour–magnitude diagram (Fig. 14) we include the Miras from both fields. It is seen that almost all of the Miras are brighter and redder at 7 µm than the other red giants. A diagram involving the $K_0-[7]$ colour will therefore be the best criterion for detection of large-amplitude LPVs.

## 8 Absence of very luminous sources

Let us recall that, for the class of sources that we detect, the $K$ bolometric magnitude correction is practically constant ($\sim$3.0; Groenewegen, 1997). With a distance to the Galactic Centre of 8.5 kpc, $M_{\mathrm{Bol}} \approx K_0 - 11.65$. The brightest stars in our fields therefore have bolometric magnitudes > −5.9. Neither field includes any luminous dust-enshrouded AGB sources of the type found in the Large Magellanic Cloud by Wood et al. (1992), which reach luminosities of $M_{\mathrm{Bol}} \approx -7.5$.

## 9 Mid-infrared period–colour relation for Miras

The Mira variables show a clear period–colour relation with very moderate scatter in the mid-infrared (Fig. 15). The $[7]$ and $[15]$ photometry is almost simultaneous, so that such scatter as exists is not attributable to variation between measurements. Following the discussion in section 7, this relation confirms that the mass-loss rate in Miras is directly related to their periods (see e.g. Whitelock et al., 1994). The colour–magnitude diagrams for each of our two fields do not appear to contain any stars brighter than the known Miras. It is therefore likely that the census of stars at the long-period end of the range is complete and that there is no evidence for long-period variables in our fields with periods longer than those already known, i.e. $\sim$700 d (see Glass et al., 1995).

## 10 Star Counts

Determination of the spatial density distributions of the various stellar populations in the central Galaxy, and especially the bulge, is one of the key scientific goals of the ISOGAL project. A detailed analysis, covering several fields, and with careful consideration of completeness, extinction, etc., is in preparation. For the present we consider simply the differential number counts between the two fields at magnitudes above the completeness limit. At $[15]$ this covers the AGB above the RGB tip, while at $[7]$ the limit extends approximately one magnitude below the RGB tip, to $[7] \sim 9.5$, using $K-[15]$ and $K-[7]$ colours from Figs 11 and 14 respectively. The ratio of the surface density of sources is, within the sampling errors, identical for both RGB and AGB stars, indicating that there are no steep population gradients apparent. It is also identical at both 7 and 15 µm. The surface density ratio is
$$\rho(\mathrm{Sgr\ I})/\rho(\mathrm{NGC\ 6522}) = 1.85.$$
This count ratio corresponds to an exponential scale height of 2.0°, or $\sim$280 pc. That is, the inner bulge minor-axis scale height is the same as that of the Galactic disk.
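The conversion from a surface-density ratio to an exponential scale height is a one-line calculation: for fields at latitudes $b_1$ and $b_2$, $\rho_1/\rho_2 = \exp(|b_2-b_1|/h)$ gives $h = \Delta b/\ln(\mathrm{ratio})$. A quick check with the field centres from Section 2:

```python
import math

b_sgr1, b_ngc6522 = -2.63, -3.83   # field latitudes (deg), from Section 2
ratio = 1.85                        # rho(Sgr I) / rho(NGC 6522)
d_pc = 8.5e3                        # assumed distance of the bulge fields (pc)

h_deg = abs(b_ngc6522 - b_sgr1) / math.log(ratio)   # exponential scale height
h_pc = d_pc * math.radians(h_deg)
print(f"h ~ {h_deg:.1f} deg ~ {h_pc:.0f} pc")       # ~2.0 deg, ~290 pc
# The text quotes ~280 pc; the small difference reflects rounding and the
# adopted distance.
```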
A more interesting comparison is with the scale height derived from analysis of the COBE/DIRBE surface brightness observations of the inner Galaxy. These have been analysed most completely by Binney, Gerhard and Spergel (1997), whose best-fit model has a bulge density distribution which is a truncated power law:
$$f_b = f_0\,\frac{\mathrm{e}^{-a^2/a_m^2}}{(1+a/a_0)^{1.8}}$$
with $a_m$ = 1.9 kpc, $a_0$ = 100 pc, $f_0$ a normalisation constant, and $a = z/\xi$ for ($x = y = 0$), where $z$ is the minor-axis distance and $\xi$ an axial ratio, with $\xi = 0.6$ best fitting the data. This density profile, which interpolates between the central luminosity spike, which follows an $R^{-1.8}$ luminosity profile, and the outer bulge, which follows an $R^{-3.7}$ profile, is also used in the models of Kent, Dame and Fazio (1991) and the kinematic analysis of Ibata and Gilmore (1995). It is known to provide an acceptable description of the bulge from latitudes of 4° to at least 12°. It remains to be tested at intermediate latitudes.

Direct comparison of the ISOGAL data with this function is of specific interest, since the fields surveyed here are included in the COBE/DIRBE analysis and are of low reddening. The COBE/DIRBE data are, however, integrated light, and so must have a statistical correction for the foreground disk. This can be a complex function of wavelength and spatial resolution, especially at low latitudes (see Unavane et al., 1998, and Unavane and Gilmore, 1998, for a more complete description). Our ISOGAL source counts are, however, strongly biased against the foreground disk, being dominated by sources in the central Galaxy. Thus the ISOGAL observations test directly the analysis of the integrated light. Our present analysis, however, does not consider line-of-sight depth effects in the bulge itself.

With the adopted vertical axis ratio $\xi = 0.6$, the predicted count ratio in the ISOGAL data is $\rho(\mathrm{Sgr\ I})/\rho(\mathrm{NGC\ 6522}) = 2.04$, somewhat larger than the observed value. Interestingly, near-exact agreement with the model above follows with an axis ratio $\xi = 1.0$, which predicts $\rho(\mathrm{Sgr\ I})/\rho(\mathrm{NGC\ 6522}) = 1.81$. This systematic discrepancy between the model and our data is in agreement with the small systematic residuals emphasised by Binney et al. (1997) and shown in their figure 2.
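The predicted count ratios can be reproduced to good accuracy without the full line-of-sight integration, simply by evaluating the truncated power law at each field's mean minor-axis height; the sketch below is that crude single-point approximation, not the calculation actually performed in the paper:

```python
import math

def f_bulge(a_pc, a_m=1900.0, a_0=100.0):
    """Binney, Gerhard & Spergel (1997) profile, normalisation omitted."""
    return math.exp(-(a_pc / a_m) ** 2) / (1.0 + a_pc / a_0) ** 1.8

def predicted_ratio(xi, d_pc=8.5e3):
    """Single-point estimate: evaluate the profile at each field's mean
    minor-axis height z = d * tan|b|, with a = z / xi (on the minor axis)."""
    z1 = d_pc * math.tan(math.radians(2.63))   # Sgr I
    z2 = d_pc * math.tan(math.radians(3.83))   # NGC 6522
    return f_bulge(z1 / xi) / f_bulge(z2 / xi)

print(f"{predicted_ratio(0.6):.2f}")  # ~2.1, cf. the quoted 2.04
print(f"{predicted_ratio(1.0):.2f}")  # ~1.8, cf. the quoted 1.81
```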
## 11 Conclusions

Most of the detected objects are late-type M stars, with a cut-off for those earlier than about M3–M4. There is a continuous sequence of increasingly mass-losing objects from the mid-M-type giants to the long-period Miras, which are the most luminous stars in the field. There appears to be no component of dust-enshrouded very long-period OH/IR stars or similar objects faint even at $K$ in these fields. The upper limit of luminosities remains at $M_{\mathrm{Bol}} \approx -5.7$ and the upper limit of periods remains at about 700 days, as determined from near-infrared studies (Glass et al., 1995). There is a group of late-type M stars on the AGB which are not large-amplitude variables but may be irregular or of small amplitude; they have luminosities similar to Mira variables in the 200–300 day period range and show redder $[7]-[15]$ but bluer $K_0-[7]$ colours. The ISOCAM 7 µm band is almost certainly affected by molecular absorption in ordinary M-giant stars. However, Mira variables are brighter than other stars possibly in part because of reduced SiO and H₂O absorption. The results from these fields should form a template for analyses of more heavily obscured regions about which little is known from visible-light studies.

## 12 Acknowledgments

We would like to acknowledge the Les Houches 1998 summer school for access to the JUN98 version of CIA. We acknowledge useful discussions with K.S. Krishnaswamy, TIFR. Eric Copet is thanked for his help with the Unix scripts and Martin Groenewegen for his comments on dust at 7 µm. ISG wishes to acknowledge the hospitality of IAP during part of this work. SG and MS acknowledge receipt of fellowships from the Ministère des Affaires Etrangères, France, and ESA respectively.
# THE SIZES OF 1720 MHZ OH MASERS: VLBA AND MERLIN OBSERVATIONS OF THE SUPERNOVA REMNANTS W 44 AND W 28

## 1 Introduction

The 1720.53 MHz line from the hydroxyl radical (OH) was conclusively shown to be associated with the supernova remnant (SNR) W 28 by Frail, Goss & Slysh (1994). Since then several surveys have been made toward other Galactic SNRs (Frail et al. 1996, Yusef-Zadeh et al. 1996, Green et al. 1997, Koralesky et al. 1998), clearly establishing that the OH(1720 MHz) masers toward SNRs are a new class of OH maser, distinct from those in star-forming regions and evolved stars. Follow-up work (Claussen et al. 1997, Frail & Mitchell 1998, Wardle, Yusef-Zadeh & Geballe 1998) supports the hypothesis that the OH(1720 MHz) masers originate in C-type shocks, transverse to the line of sight, being driven into adjacent molecular clouds by the expanding SNR. The measured densities, temperatures and magnetic fields from these studies are consistent with collisional excitation of the OH by the H₂ molecules in the post-shock gas (Elitzur 1976, Pavlakis & Kylafis 1996, Lockett, Gauthier & Elitzur 1999).

Claussen et al. (1997) imaged the masers toward the SNRs W 28 and W 44 and reported finding numerous maser features distributed across these SNRs. At their arcsecond resolution, some features were unresolved ("spots") while other features appeared to be resolved, with measured angular sizes ranging from 0.25″ to 2.5″. The resolved features could, of course, be groups of spots that appeared spatially blended owing to insufficient angular resolution. Alternatively, they could be spots whose apparent sizes reflect scattering by interstellar turbulence along the line of sight (angular broadening). Many masers are known whose apparent sizes are dominated by angular broadening (e.g., Diamond et al. 1998; Frail et al. 1994); most such masers are seen with similar sizes and elongations, as noted by Claussen et al. (1997). The current study was undertaken primarily to address the question of whether the measured size of the masers is due to scattering or to multiple maser components. In order to reach a definite conclusion on this question, we must investigate both the intrinsic size of the masers and the possible effects of interstellar scattering.

The OH(1720 MHz) masers are quite rare in the interstellar medium as compared to main-line masers, and even more so toward late-type stars, where the other satellite line at 1612 MHz is the dominant maser transition. Published sizes of OH(1720 MHz) masers, which necessitate the use of VLBI techniques, are therefore also very rare. Forster et al. (1982) found upper limits of 20 mas to the sizes of OH(1720 MHz) masers toward the H II region NGC 7538, while Masheder et al. (1994) reported that the W3(OH) 1720 MHz masers are unresolved, with sizes < 1.2 mas. These measurements may not even be applicable in the case of supernova remnants, where the physical conditions and pumping mechanisms could be very different from those in star-forming regions. A recent study of the pumping of the 1720 MHz masers toward supernova remnants (Lockett et al. 1999) finds tight constraints on the physical conditions needed for their production (temperature in the range 50–125 K, molecular hydrogen density $\sim 10^5$ cm⁻³, and OH column densities of order $10^{16}$ cm⁻²). An upper limit for the size of the maser spots is the thickness of the shocked region over which such conditions exist. This thickness is estimated to be about $3\times 10^{15}$ cm. At the distance (3 kpc) of both W 44 and W 28, this corresponds to an angular size of about 60 mas.
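Both steps of this estimate, the layer thickness $L = N_{\mathrm{OH}}/(n_{\mathrm{H_2}}\,x_{\mathrm{OH}})$ and the small-angle conversion at 3 kpc, are quick to verify; the abundance value below is the one discussed later in Section 4.2:

```python
import math

KPC_CM = 3.0857e21
MAS_PER_RAD = 180.0 / math.pi * 3600e3

# Thickness of the OH-emitting layer: L = N_OH / (n_H2 * x_OH).
N_OH, n_H2 = 1e16, 1e5   # cm^-2, cm^-3 (Lockett et al. 1999)
x_OH = 2e-5              # highest C-shock OH abundance (see Section 4.2)
L = N_OH / (n_H2 * x_OH) # 5e15 cm; the text here adopts ~3e15 cm

# Small-angle conversion at the 3 kpc distance of W 28 and W 44.
for thickness in (L, 3e15):
    theta_mas = thickness / (3.0 * KPC_CM) * MAS_PER_RAD
    print(f"{thickness:.0e} cm -> {theta_mas:.0f} mas")
# 5e15 cm -> ~111 mas; 3e15 cm -> ~67 mas, i.e. the ~60 mas quoted.
```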
The effects of interstellar scattering on the sizes of masers in W 28 and W 44 can be constrained by observations of nearby (in projection) pulsars and extragalactic sources. Interstellar scattering of an extragalactic source results in angular broadening, while interstellar scattering of pulsars results in pulse broadening, an increase in the apparent width of a pulsar's average pulse profile beyond its intrinsic width. The degree of angular broadening for the masers and the extragalactic source and the degree of pulse broadening for the pulsar all depend in different ways upon the relative geometry of the observer, scattering material, and sources. Measurements of these three scattering effects can constrain the distribution of the scattering material and can be used to estimate, for example, the unscattered sizes of the masers.

In this paper, we present VLBI observations of several bright OH(1720 MHz) masers in W 28 and W 44 using the VLBA of the NRAO. The large extent of the SNRs and associated maser emission (tens of arcminutes) precluded observations of all the OH masers for these remnants. In addition, we have used the MERLIN telescope of the Nuffield Radio Astronomy Laboratory at Jodrell Bank to observe the OH masers in W 44 with angular resolution intermediate between that of the VLBA and the VLA. We present the results of these observations and discuss their impact on the question of the masers' intrinsic size vs. broadening due to interstellar scattering.

## 2 Observations and Data Reduction

### 2.1 VLBA Observations

The 1720 MHz transition of the ground-state OH molecule was observed with the VLBA (Napier et al. 1994) on 09 May 1997 toward one $\sim$25′ field in each of the two supernova remnants W 28 and W 44. Table 1 lists the position of the pointing center for both sources. These positions were chosen to encompass both the "OH E" and "OH F" 1720 MHz masers in W 28 and the "OH E" masers in W 44, following the nomenclature of Claussen et al. (1997). A single antenna of the VLA was also used in conjunction with the antennas of the VLBA in order to provide short projected baselines ($\sim$60 km). The data were recorded with a 62.5 kHz bandwidth centered at velocities of 44.0 km s⁻¹ and 10.0 km s⁻¹ (LSR) for W 44 and W 28, respectively. Both right and left circular polarizations were recorded with 2-bit sampling. The data were correlated with the NRAO VLBA correlator to provide 128 spectral channels for each polarization, averaged every 4.1 seconds. All four polarization correlations were performed. This correlator mode provided a channel spacing of 0.09 km s⁻¹ per spectral channel. The velocity resolution is slightly larger than this ($\sim$0.11 km s⁻¹) due to the spectral weighting function applied. In order to provide manageable dataset sizes, the correlator averaging time was the limiting factor for the field of view. For averaging times of 4.1 seconds, the fringe-rate window for the longest baselines of the VLBA provides a field of view of approximately 24″. Thus for both SNRs we made two correlation passes near positions of strong OH(1720 MHz) masers in the primary field of view. Table 2 provides the positions of the two correlation positions for both remnants, and the peak flux density in the VLA A configuration observations (Claussen et al. 1997).
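The quoted 24″ field of view follows from the usual time-average smearing argument: averaging for a time Δt washes out the response at field radius θ once the fringe rate θBω_E/λ approaches 1/Δt. A rough sketch; the coefficient depends on the smearing criterion adopted, so this recovers only the order of magnitude, not the exact 24″ figure:

```python
import math

LAM = 2.99792458e8 / 1720.53e6   # observing wavelength, ~0.174 m
OMEGA_E = 7.2921e-5              # Earth rotation rate, rad/s
B_MAX = 8.6e6                    # longest VLBA baseline, ~8600 km, in m
DT = 4.1                         # correlator averaging time, s

# Radius at which a source's fringe rate reaches ~1/DT (crude criterion):
theta_rad = LAM / (B_MAX * OMEGA_E * DT)
print(f"~{math.degrees(theta_rad) * 3600:.0f} arcsec")
# ~14 arcsec: consistent to a factor of order unity with the ~24 arcsec
# quoted in the text.
```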
Data reduction was performed using standard software contained in the NRAO AIPS package. Delays were measured via fringe fitting on observations of nearby continuum sources (1748−253 for W 28, and 1904+013 for W 44). Bandpass calibration was determined using total-power observations of the strong sources 1921−293 and 1611+343. Residual fringe rates were determined by fringe fitting on a single strong spectral channel. For W 28, amplitude calibration was accomplished by fitting the bandpass-corrected, total-power, on-source spectra of each antenna in the array to a template total-power, on-source spectrum observed with a sensitive antenna (at the Mauna Kea, HI station) at a high elevation angle. The absolute amplitude calibration determined by this method is accurate to about 10%. For W 44, the amplitude calibration was determined by using known antenna gains (provided by NRAO staff) and system temperatures measured during the observations (so-called a priori amplitude calibration). This procedure was carried out for the W 44 data because the signal-to-noise ratio in the total-power spectra of the OH maser line was not high enough to attempt the template-fitting method. The absolute amplitude calibration determined by the a priori method is accurate to about 15%.

Even on the strongest spectral channels, no correlated signal was detected on the longer baselines (typically from one of the southwest locations to Saint Croix, VI; Hancock, NH; and Mauna Kea, HI), and so data to these stations were automatically deleted. The spectral channel with the greatest flux density was then used in an iterative self-calibration mapping procedure. These self-calibration solutions were then applied to all spectral channels. The rms noise ($\sim$100 mJy beam⁻¹) in these images is close to the expected theoretical noise limit. Images were then made of all spectral channels with maser emission. Naturally weighted maps for each velocity channel were produced using the AIPS task IMAGR. The resultant synthesized beam was about 40 × 15 mas at a position angle of 0° for W 28, and about 40 × 30 mas at a position angle of −10° for W 44.

### 2.2 MERLIN Observations

The MERLIN radio telescope was used on 12 April 1998 to observe the position listed in Table 1 for W 44 in the 1720 MHz transition of OH. Seven telescopes were used, including the 76-m Lovell telescope, the Mark II, the 32-m telescope at Cambridge, and the 25-m telescopes at Tabley, Darnhall, Knockin, and Defford. Both right and left circular polarizations were observed, and the correlator produced 512 spectral channels over a bandwidth of 500 kHz, for a channel spacing of 0.18 km s⁻¹. The field of view for the MERLIN observations included both the W 44 "E" and "F" sources (Claussen et al. 1997). The synthesized beam from the MERLIN observations was 290 × 165 mas at a position angle of 25°.

Initial phase and amplitude calibration was performed by MERLIN staff at the University of Manchester. Bandpass calibration was determined from observations of 3C 84, and phase calibration was determined from interleaved observations of 1904+013. The absolute amplitude calibration was determined by observations of 3C 286 on the shortest baselines. As in the VLBA reduction, AIPS was then used to apply self-calibration solutions, based on iterative self-calibration imaging carried out on the strongest spectral channel.
The rms noise obtained after applying the self-calibration to a single channel was about 30 mJy beam⁻¹, also close to the theoretical noise limit.

## 3 Results

### 3.1 W 28 Masers

Figure 1 shows a contour image of the OH(1720 MHz) emission at 11.3 km s⁻¹, and Stokes I spectra at two emission peaks. This emission corresponds to feature F 39 in the nomenclature of Claussen et al. (1997) (see their Table 2). The peak flux density in the image is 6.1 Jy beam⁻¹ with a total flux of $\sim$70 Jy, in close agreement with the peak flux density (73 Jy beam⁻¹) at this position in the VLA A configuration maps. The OH(1720 MHz) emission is clearly resolved at this resolution. Several emission peaks can be seen. The size of the feature marked B in Figure 1, determined by Gaussian fitting, is 75 × 34 mas at a position angle of 9°. Thus the peak brightness temperature is $\sim 2\times 10^9$ K. Other emission peaks in the contour image of Figure 1 have similar spectral profiles, differing primarily in their peak flux density. In addition to the masers shown in Figure 1, there is an additional maser emission peak $\sim$500 mas to the northeast, at a velocity of 9.6 km s⁻¹. Figure 2 shows a contour image of the peak velocity channel in this region and Stokes I spectra of two emission peaks. Again, the emission is quite extended (size $\sim$60 mas) compared with the beam. The two peaks C and D in the emission correspond to brightness temperatures of $1\times 10^9$ and $8\times 10^8$ K, respectively.

For the positions marked A and B in Figure 1, we have determined the Stokes parameters I and V. Assuming the V profile is due to the Zeeman effect, and that the splitting is small compared to the intrinsic (Doppler) line width, the Stokes V profile is proportional to the frequency derivative of the Stokes I profile. A fit of the V profile to the derivative of the I profile yields a measurement of the line-of-sight magnetic field (see Claussen et al. 1997 for further discussion). Figure 3 shows the result of this fit for the two positions marked in Figure 1. The estimate of the line-of-sight magnetic field ($\sim$2 milliGauss) is larger by a factor of $\sim$10 than estimates based on the VLA observations. However, toward this specific position, Claussen et al. (1997) were unable to estimate the line-of-sight magnetic field because the spectra were quite complex and did not show a clear signature (the classical S-shape in Stokes V) of Zeeman splitting. The relation used to derive the line-of-sight magnetic field is strictly valid only for thermal absorption and emission lines. Nedoluha & Watson (1992) conclude that the standard thermal relationship used here is a valid approximation of the line-of-sight field strength for observations of water masers, if they are not strongly saturated, despite the complications of the maser radiative transfer. Elitzur (1996, 1998) has derived a general polarization solution for maser emission and arbitrary Zeeman splitting. According to this solution, Elitzur (1998) concludes that masers require smaller fields to produce the same amount of circular polarization as thermal emission. Thus the estimate of the magnetic field given above may be an overestimate by as much as a factor of 2–4.
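The fit described here is a linear least-squares fit of V against dI/dν. The sketch below builds a synthetic line, applies the small-splitting relation V = (zB/2) dI/dν, and recovers the injected field; the Zeeman splitting coefficient z is an assumed, illustrative number, not a value taken from this paper:

```python
import numpy as np

def fit_blos(freq_hz, stokes_i, stokes_v, z_hz_per_mg):
    """Least-squares fit of V = (z * B_los / 2) * dI/dnu, valid when the
    Zeeman splitting is small compared with the Doppler line width."""
    didnu = np.gradient(stokes_i, freq_hz)
    slope = np.dot(didnu, stokes_v) / np.dot(didnu, didnu)  # no intercept term
    return 2.0 * slope / z_hz_per_mg

# Synthetic demonstration (all numbers illustrative).
z = 650.0                                   # assumed Hz per mG for OH(1720 MHz)
nu = np.linspace(-5e4, 5e4, 512)            # Hz, relative to line centre
i_prof = 60.0 * np.exp(-(nu / 1.2e4) ** 2)  # Jy, Gaussian line profile
v_prof = 0.5 * z * 2.0 * np.gradient(i_prof, nu)  # B_los = 2 mG injected

print(f"B_los ~ {fit_blos(nu, i_prof, v_prof, z):.2f} mG")  # recovers 2.00 mG
```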
At the other correlated position in W 28 (W 28 E), we did not detect OH emission. Based on the VLA observations, we expected that there should be several features detectable by the VLBA. Some of the flux densities measured in the VLA observations are 24 Jy beam⁻¹ (E 30), 12.5 Jy beam⁻¹ (E 31), and 10 Jy beam⁻¹ (E 24). E 24 was unresolved with the VLA. If these non-detections are due to emission that is smooth, we can calculate a lower limit to the size of the emission features, based on the rms noise that we measured on the shortest projected baseline in the VLBA observations. Assuming a source size that is a circular Gaussian function and a maximum visibility of 0.1 (for E 24, for example) on the shortest baseline ($\sim$60 km), the lower limit must be about 400 mas. For E 30, the visibility must be correspondingly lower and thus the size must be larger than about 500 mas.

### 3.2 W 44 Masers

The OH(1720 MHz) masers observed with MERLIN are shown in Figures 4 and 5. Figure 4 is a contour plot of the peak maser emission from the E 11 source along with the OH spectrum of the peak emission. The peak occurs at a velocity of 44.2 km s⁻¹. The stronger of the two features in the contour image has a peak flux density of 3.3 Jy beam⁻¹ and is slightly resolved, with a fitted Gaussian size of 165 × 57 mas at a position angle of 147°. The total flux density over the emission region is about 6.1 Jy, which is comparable to the peak flux density measured with the VLA. The brightness temperature for this feature is $1.3\times 10^8$ K. We convolved the MERLIN map of the E 11 source with the VLA A configuration beam, and then made a Gaussian fit to the resulting image. The peak of the convolved image was 5.0 Jy beam⁻¹ with a fitted size of 715 × 250 mas at a position angle of 166°. This is comparable with the VLA observations, which obtained a peak flux density of 6.6 Jy beam⁻¹ and a fitted size of 890 × 180 mas at a position angle of 137°. Figure 5 shows the OH(1720 MHz) maser emission from the W 44 F 24 source, which peaks at a velocity of 46.9 km s⁻¹ with a peak flux density of 1.5 Jy beam⁻¹. The OH spectrum at the emission peak is also shown. The total flux density in the emission region is about 4.2 Jy, only about half of the VLA observed peak flux density.

Figure 6 shows a contour plot of the VLBA image of the E 11 source. This maser source is unresolved at the resolution of the VLBA (40 × 30 mas). The peak flux density is 1.2 Jy beam⁻¹, and thus a lower limit to the brightness temperature is $8\times 10^8$ K. This source is the core of the brightest MERLIN source shown in Figure 4. If a single Gaussian component with a large size were responsible for the difference in flux density between the VLBA and MERLIN measurements then, based on the shortest projected spacing of the VLBA and the measured noise, the size of such a feature would have to be larger than 270 mas. This is inconsistent with the MERLIN measurement. Thus we conclude that the 2.1 Jy of missing flux density must be in a few components whose peaks are each weaker than about 0.4 Jy beam⁻¹.
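Two pieces of interferometry arithmetic recur throughout this section: converting a flux density and fitted size into a brightness temperature, and converting a non-detection on the shortest baseline into a size lower limit for a smooth circular Gaussian component. A sketch of both, using standard formulae; the exact published values also depend on choices (peak vs. integrated flux, deconvolved sizes, exact baseline) not reproduced here:

```python
import math

C, K_B = 2.99792458e8, 1.380649e-23
LAM = C / 1720.53e6                 # observing wavelength, ~0.174 m
MAS = math.pi / 180.0 / 3600e3      # 1 mas in radians

def t_brightness(s_jy, fwhm_maj_mas, fwhm_min_mas):
    """Rayleigh-Jeans brightness temperature of a Gaussian component."""
    omega = math.pi * (fwhm_maj_mas * MAS) * (fwhm_min_mas * MAS) / (4 * math.log(2))
    return s_jy * 1e-26 * LAM ** 2 / (2 * K_B * omega)

def size_for_visibility(v, baseline_m):
    """FWHM (mas) of a circular Gaussian with visibility v on this baseline."""
    u = baseline_m / LAM            # baseline length in wavelengths
    return math.sqrt(-4 * math.log(2) * math.log(v)) / (math.pi * u) / MAS

print(f"{t_brightness(6.1, 75, 34):.1e} K")    # ~1e9 K, the same order as the
                                               # ~2e9 K quoted in Section 3.1
print(f"{size_for_visibility(0.1, 60e3):.0f} mas")  # ~480 mas, comparable to
                                                    # the ~400 mas limit above
```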
## 4 The OH Maser Sizes: Scattering Disks or Physical Size?

Table 3 summarizes the measured components and the estimated brightness temperatures of the OH(1720 MHz) masers in both W 28 and W 44. The VLBA and MERLIN observations clearly resolve the masers seen by Claussen et al. (1997) into multiple components. Typical angular sizes in both SNRs are 50 to 100 mas, with aspect ratios of order 2.5:1. While these measurements have shown that the Claussen et al. (1997) maser sizes were an artifact of the low resolution used, a major question remains: are these compact and resolved features the true size of the masers, or are they due to interstellar scattering? The observations and results of the present study cannot distinguish between these two options, mainly because of our ignorance of what the intrinsic sizes might be. In what follows we interpret the results of these observations as they apply both to intrinsic structure and to scattering, and suggest further observational tests that should help to distinguish between these two possibilities.

### 4.1 Scattering Interpretation

If the maser sizes in W 44 and W 28 are dominated by scattering, other nearby (in projection) objects such as pulsars and extragalactic sources can also be affected. Pulse broadening and angular broadening depend in different ways upon the distribution of scattering material along the line of sight; additionally, angular broadening of an extragalactic source seen through the turbulence in our Galaxy's ISM is sensitive only to the strength of the turbulence, not to its distribution along the line of sight. The relative distances of the masers and nearby pulsars, along with the distribution of scattering material, can all be constrained using measurements of angular and pulse broadening. Adopting a distance of 3 kpc for both W 28 and W 44, the models of Cordes et al. (1991) and Taylor & Cordes (1993) predict angularly broadened sizes for the OH masers of order 1–3 mas. These angular broadening estimates are well below the sizes given in Table 3, but it should be noted that the angular broadening is underestimated for lines of sight subject to enhanced scattering. Examples of lines of sight with enhanced scattering include the Galactic Center region (van Langevelde et al. 1992), the Cygnus region (Fey, Spangler & Cordes 1991), and that towards 1849+005 (Fey, Spangler & Cordes 1991). We discuss separately the implications of our maser size measurements for the scattering towards W 28 and W 44.

#### 4.1.1 Scattering in the Direction of W 28

The W 28 SNR and its associated masers lie at a distance of approximately 3 kpc (e.g., Kaspi et al. 1993; Frail, Kulkarni & Vasisht 1993) in the direction $(l,b) = (6.8°, -0.06°)$. The 60,000 year old pulsar PSR B1758−23 also lies in the same direction but is located outside the SNR, a few arcminutes to the north of its bright continuum edge. An extragalactic source, 1758−231, lies within two arcminutes of this pulsar. Frail et al. (1993), using observations of the pulsar and the neighboring extragalactic source, argued in favor of the association of the pulsar and the SNR. Kaspi et al. (1993) disagreed, suggesting that the pulsar was much further away. The discussion of Frail et al. was based upon a pulse broadening measurement of 70 milliseconds at 1 GHz for PSR B1758−23 (Kaspi et al. 1993) and upon an upper limit of 1 arcsecond to the size of the extragalactic source 1758−231. Here we review the implications of our measurements of the $\sim$60 mas maser size as they pertain to this disagreement. Under the usual assumption that the masers and the extragalactic source are scattered by turbulence confined to a single thin screen, angular broadening measurements completely constrain the location of the screen.
The two angular broadening sizes are related to the location of the screen by
$$d_s/d_m = 1 - \theta_m/\theta_e,$$
where $\theta_m$ and $\theta_e$ are the angular broadening sizes of the masers and the extragalactic source and $d_m$ and $d_s$ are the observer distances to the masers and the screen, respectively. If the extragalactic source and the masers have sizes of $\theta_e = 1''$ and $\theta_m = 0.06''$, respectively, then for a maser distance of $d_m = 3$ kpc the scattering screen lies at $d_s = 2.8$ kpc. Note that the screen distance decreases if $\theta_m/\theta_e$ increases.

The pulse broadening of PSR B1758−23 and the angular broadening of 1758−231 provide a general constraint on the location of the pulsar. Frail et al. (1993) assumed a distance of 3 kpc for PSR B1758−23 and combined its pulse broadening with the angular broadening of 1758−231 to constrain $f_p$, the ratio of the observer–screen distance to the screen–pulsar distance, to values between 0.3 and 3.4. A more general constraint on the location of the pulsar can be obtained if the equations presented by Frail et al. are combined without assuming a distance to the pulsar, to produce
$$\theta_e = \frac{0.42}{\sqrt{d_p/3}}\left(\sqrt{f_p} + \frac{1}{\sqrt{f_p}}\right),$$
where $\theta_e$ is the angular size of the extragalactic source in arcseconds and $d_p$ is the distance from the observer to the pulsar in kpc. It is easy to show that this implies that the distance to the pulsar is given by
$$d_p = 2.12/\theta_e^2$$
if the screen is halfway to the pulsar, and is further otherwise. In particular, if the measured extragalactic source size is smaller than 0.84 arcseconds, the pulsar cannot be associated with the W 28 SNR.
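The thin-screen relations above are easy to evaluate numerically; this sketch reproduces the screen distance implied by the measured sizes and the minimum pulsar distance as a function of θ_e:

```python
def screen_distance(d_m_kpc, theta_m_as, theta_e_as):
    """Thin-screen relation d_s/d_m = 1 - theta_m/theta_e (sizes in arcsec)."""
    return d_m_kpc * (1.0 - theta_m_as / theta_e_as)

def min_pulsar_distance(theta_e_as):
    """d_p = 2.12/theta_e**2 (kpc): the minimum pulsar distance, reached
    when the screen lies halfway to the pulsar (Frail et al. 1993)."""
    return 2.12 / theta_e_as ** 2

print(screen_distance(3.0, 0.06, 1.0))  # 2.82 kpc: the screen location
print(min_pulsar_distance(0.84))        # ~3.0 kpc: the association threshold
```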
Because the OH masers in W 44 and PSR B1853$`+`$01 both lie at $`3`$ kpc, the distance $`d_s`$ (in kpc) to an assumed scattering screen can be derived from the pulsar size formula presented by Frail et al.:

$$d_s=\frac{3}{1+\theta _m^2/2.52\tau },$$

where $`\theta _m`$ is measured in arcseconds and $`\tau `$, the pulse broadening, is measured in seconds. If the angular broadening size for OH masers in W 44 is assumed to be $`80`$ mas and the pulse broadening of order $`0.5`$ milliseconds, a clump of enhanced scattering would need to be placed only $`0.5`$ kpc away. At this distance, a $`40`$ pc clump would subtend an angle of about $`4`$ degrees and could also intercept the line of sight towards 1849+005. Since the line-of-sight towards 1849+005 is the second most heavily angularly broadened line of sight known, it is reasonable to suggest that other, neighboring, lines-of-sight should also be heavily scattered, as is the case for the Galactic Center direction. The observed sizes of OH masers in W 44, if dominated by angular broadening, are consistent with the scattering of 1849+005. This hypothesis predicts that other lines-of-sight close to 1849+005 should also be heavily scattered. In addition, improved measurements of the pulse broadening towards PSR B1853$`+`$01 could help to prove or disprove this suggestion.

### 4.2 Intrinsic Structure of the Masers

If the structure that is observed in the OH(1720 MHz) masers toward W 44 and especially W 28 is due to variations in emission intrinsic to the masers, then these data are the first demonstration of structure in 1720 MHz OH maser emission. The pumping requirements of these OH(1720 MHz) masers, modeled by Lockett et al. (1999), strongly suggest an OH column density of 10<sup>16</sup> cm<sup>-2</sup> with a molecular hydrogen density of order 10<sup>5</sup> cm<sup>-3</sup>. This requires a linear dimension of $`\frac{10^{11}}{x_{OH}}`$ cm, where $`x_{OH}`$ is the OH abundance. According to Lockett et al. (1999), the highest OH abundance expected in C-shocks is $`2\times 10^{-5}`$, so the expected thickness of the OH emitting region is about 5$`\times 10^{15}`$ cm, similar to the maser sizes we observe. We could conclude that these masers appear to be similar to the stellar OH(1612 MHz) masers. The OH(1612 MHz) masers in circumstellar shells show a large range of size scales, 40-1000 mas, as demonstrated by Bowers et al. (1990). Both main-line and 1720 MHz OH masers found in the interstellar medium have been shown to be very compact and with little structure (e.g., Reid et al. 1980; Forster et al. 1982; Masheder et al. 1994). Bowers et al. (1990) suggest that the OH(1612 MHz) maser structure is determined by a combination of density and velocity effects; our 1720 MHz observations could be indicative of a similar situation.

A good test of whether or not the OH(1720 MHz) emission is really due to intrinsic structure would be a high-resolution observation of the OH(1720 MHz) emission toward a nearby SNR in the anti-center direction (to minimize possible scattering effects). A good candidate for this test observation would be the SNR IC 443, which lies at a Galactic longitude of about 189° and is only 1.5 kpc distant. If a measurement of the size of the OH(1720 MHz) masers in IC 443 showed the masers to be smaller than those in W 28 or W 44, then a good argument for scattering of the masers in W 28 and W 44 could be made. If the sizes of the masers in IC 443 were similar to or larger than those in W 28 or W 44, then the sizes of the masers in all three SNRs would likely be intrinsic.
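The thin-screen relations used in §4.1 can be collected into a short numerical sketch. The fragment below is illustrative only: the function names are ours, and the inputs are the values quoted in the text.

```python
# Thin-screen scattering relations from Sections 4.1.1 and 4.1.2.
# Angles in arcseconds, distances in kpc, pulse broadening in seconds.

def screen_distance_from_sizes(d_m, theta_m, theta_e):
    """Screen distance from d_s/d_m = 1 - theta_m/theta_e (W 28 case)."""
    return d_m * (1.0 - theta_m / theta_e)

def min_pulsar_distance(theta_e):
    """Minimum pulsar distance, d_p = 2.12/theta_e**2, obtained when the
    screen is halfway to the pulsar (f_p = 1 minimizes sqrt(f)+1/sqrt(f))."""
    return 2.12 / theta_e**2

def screen_distance_from_pulse(theta_m, tau, d=3.0):
    """Screen distance from d_s = d/(1 + theta_m**2/(2.52*tau)), the Frail
    et al. form, for masers and pulsar both at distance d (W 44 case)."""
    return d / (1.0 + theta_m**2 / (2.52 * tau))

print(screen_distance_from_sizes(3.0, 0.06, 1.0))  # W 28: ~2.8 kpc
print(min_pulsar_distance(0.84))                   # ~3.0 kpc threshold
print(screen_distance_from_pulse(0.08, 5.0e-4))    # W 44: ~0.5 kpc
```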
## 5 The OH Maser Polarization in W 28: Magnetic Fields

The line-of-sight magnetic field of about 2 milligauss that we measure toward the region of strong maser emission in W 28 is stronger by about a factor of 10 than the more widespread measurements reported by Claussen et al. (1997) toward the SNR. This measurement is closer to those reported by Yusef-Zadeh et al. (1996, 1998) for magnetic fields in the Galactic Center. It is interesting to note that the strongest field strength we measure is also in the region of strongest maser emission. This may be a selection effect, since our observations were limited to only a few maser regions. Since both the current observations and the VLA observations of V profiles show the classical S shapes of the Zeeman effect, we are confident that both sets of measurements are good estimates of the line-of-sight magnetic field.

In this small region, the magnetic pressure must be 100 times the pressure estimated by Claussen et al. (1997), or about 2$`\times `$10<sup>-7</sup> dyn cm<sup>-2</sup>. Thus the magnetic pressure is very much larger than the thermal gas pressure of 6$`\times `$10<sup>-10</sup> dyn cm<sup>-2</sup> estimated from hot X-ray gas in the interior of the remnant (Rho et al. 1996), and so the magnetic field is likely the dominant factor in the structure of the shock. As discussed by Lockett et al. (1999) and Draine, Roberge, and Dalgarno (1983), the larger magnetic field estimated here is further strong evidence for a C-type shock in the OH maser region.

## 6 Conclusions

We have used the VLBA and MERLIN to observe some of the OH(1720 MHz) masers toward the two supernova remnants W 44 and W 28 at resolutions of 40 mas. We have resolved the masers in both SNRs. The range of observed sizes is 50-180 mas, and the derived apparent brightness temperatures are in the range 0.3-20$`\times 10^8`$ K. Based on the present data, it is unclear whether the observed structure of the masers is due to interstellar scattering or to intrinsic structure. If the OH(1720 MHz) structure is due to intrinsic maser emission, then we suggest that the OH(1720 MHz) masers in SNRs may be similar to the OH(1612 MHz) maser emission from circumstellar shells. A possible test of whether the sizes and structures observed are intrinsic or due to scattering would be a high angular resolution observation of the remnant IC 443.

If the sizes measured are considered to be dominated by the angular broadening effects of interstellar scattering, conclusions can be drawn about the location of the scattering material. In the case of W 28, the scattering material along the line of sight is most likely situated within $`100`$ pc of the masers. Also, we would conclude that pulsar PSR B1758-23 is definitely not associated with the SNR. In the case of W 44, we suggest that a 40 pc clump of enhanced scattering material located 500 pc from the Sun could explain the observed maser sizes as well as the scattering of 1849$`+`$005. This suggestion could be tested by searching for other heavily scattered sources within $`4`$ degrees of 1849+005 and by improved pulse broadening measurements of the pulsar PSR B1853$`+`$01. Finally, we measure a magnetic field in a small region of the SNR W 28 (∼2$`\times `$10<sup>15</sup> cm) which is a factor of about 10 higher than that measured using the VLA.
The magnetic field clearly dominates the shock structure, and is further evidence for a C-type shock in the OH maser region.

The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc. MERLIN is a UK national facility operated by the University of Manchester on behalf of PPARC. We thank Peter Wilkinson for granting us time on MERLIN at the Director’s discretion, Peter Thomasson for his invaluable assistance in scheduling MERLIN, and Anita Richards for performing the initial phase and amplitude calibration for the MERLIN data. Finally, we thank the referee, Moshe Elitzur, for useful comments and discussions which have improved the paper.
# A Control and Management Network for Wireless ATM Systems

## 1: Introduction

Research involving mobile wireless ATM is advancing rapidly. One of the earliest proposals for a wireless ATM architecture is described in . In this paper, various alternatives for a wireless Media Access Channel (MAC) are discussed and a MAC frame is proposed. The MAC contains sequence numbers, service type, and a Time of Expiry (TOE) scheduling policy as a means for improving real-time data traffic handling. A related work which considers changes to Q.2931 to support mobility is proposed in . A MAC protocol for wireless ATM is examined in with a focus on Code Division Multiple Access (CDMA), in which ATM cells are not preserved, allowing a more efficient form of packetization over the wireless network links. The ATM cells are reconstructed from the wireless packetization method after being received by the destination. The Rapidly Deployable Radio Network Project (RDRN) architecture described in this paper maintains standard ATM cells through the wireless links.

Research work on wireless ATM LANs has been described in and . The mobile wireless ATM RDRN differs from these LANs because the RDRN uses point-to-point radio communication over much longer distances. The system described in and consists of Portable Base Stations (PBS) and mobile users. PBSs are base stations which perform ATM cell switching and are connected via Virtual Path Trees, which are preconfigured ATM Virtual Paths (VP). These trees can change based on the topology as described in the Virtual Trees Routing Protocol . However, ATM cells are forwarded along the Virtual Path Tree rather than switched, which differs from the ATM standard. An alternative mobile wireless ATM system is presented in this paper which consists of a mobile PNNI architecture based on a general purpose predictive mechanism known as Virtual Network Configuration that allows seamless rapid handoff.

The objective of the Rapidly Deployable Radio Network (RDRN) effort is to create an ATM-based wireless communication system that will be adaptive at both the link and network levels to allow for rapid deployment and response to a changing environment. The objective of the architecture is to use an adaptive point-to-point topology to gain the advantages of ATM for wireless networks. A prototype of this system has been implemented and will be demonstrated over a wide area network. The system adapts to its environment and can automatically arrange itself into a high capacity, fault tolerant, and reliable network. The RDRN architecture is composed of two overlaid networks:

* a low bandwidth, low power omni-directional network for location dissemination, switch coordination, and management, which is the orderwire network described in this paper, and
* a “cellular-like” system for multiple end-user access to the switch using directional antennas for spatial reuse, and a high capacity, highly directional, multiple beam network for switch-to-switch communication.

The network currently consists of two types of nodes, Edge Nodes (EN) and Remote Nodes (RN), as shown in Figure 1. ENs were designed to reside on the edge of a wired network and provide access to the wireless network; however, ENs also have wireless links. The EN components include Edge Switches (ES) and optionally an ATM switch, a radio handling the ATM-based communications, a packet radio for the low speed orderwire running a protocol based on X.25 (AX.25), a GPS receiver, and a processor.
Host nodes or remote nodes (RN) consist of the above, but do not contain an ATM switch. The ENs and RNs also include a phased array steerable antenna. The RDRN uses position information from the GPS for steering antenna beams toward nearby nodes and nulls toward interferers, thus establishing the high capacity links as illustrated in Figure 2. Figure 2 highlights an ES (center of figure) with its omni-directional transmit and receive orderwire antenna and its omni-directional receive, directional transmit ATM-based links. Note that two RNs share the same $`45^o`$ beam from the ES and that four distinct frequencies are in use to avoid interference. The decision involving which beams to establish and which frequencies to use is made by the topology algorithm, which is discussed in a later section. The ES has the capability of switching ATM cells among connected RNs or passing the cells on to an ATM switch for delivery to wire-based nodes. Note that the differences between an ES and an RN are that the ES performs switching and has the capability of higher speed radio links with other Edge Switches as well as connections to wired ATM networks.

The orderwire network uses a low power, omni-directional channel, operating at 19200 bps, for signaling and communicating node locations to other network elements. The orderwire aids link establishment between the ESs and between the RNs and ESs, tracking remote nodes, and determining link quality. The orderwire operates over packet radios and is part of the Network Control Protocol (NCP)<sup>1</sup><sup>1</sup>1The Simple Network Management Protocol (SNMP) Management Information Base (MIB) for the NCP operation as well as live data from the running prototype RDRN system can be retrieved from http://www.ittc.ukans.edu/~sbush/rdrn/ncp.html.. An example of the user data and orderwire network topology is shown in Figure 3. In this figure, an ES serves as a link between a wired and wireless network, while the remaining ESs act as wireless switches. The protocol stack for this network is shown in Figure 4.

The focus of this paper is on the NCP and in particular on the orderwire network and protocols. This includes protocol layer configuration, link quality, hand-off, and host/switch assignment, along with information provided by the GPS system such as position and time. The details of the user data network will be covered in this paper only in terms of services required from, and interactions with, the NCP.

Section 2: provides a more detailed description of the RDRN system, with a focus on the requirements and interaction of each protocol layer with the NCP. Operation of the NCP is described in Section 3:. A new concept known as Virtual Network Configuration (VNC) is explained in Section 4: along with an example application of a Mobile Private Network-Network Interface (PNNI) enhanced with VNC. The development and implementation of the NCP is described along with initial timing results in Section 5:. In Section 6:, an analysis of NCP indicates the performance of NCP as the system is scaled up. Finally, emulation results are presented in Section 7:.

## 2: Wireless ATM-Based Network Configuration Requirements

This section provides a brief overview of the high speed protocol architecture for the RDRN wireless ATM network . The purpose is to introduce the RDRN network and, more importantly, to identify the requirements that each layer will have for the network configuration protocol.
### 2.1: Physical Layer

The physical layer includes all hardware components and the wireless connections. This includes the high speed radios, orderwire packet radios, ATM switch, antennas, and additional processor for configuration and setup. This layer provides a raw pipe for the data link layer described in the next section. Directional beams from a single antenna are used to obtain spatial reuse, and Time Division Multiple Access (TDMA) is used to provide access to multiple RNs within a beam. The physical layer details can be found in . The NCP sets up the physical layer wireless connections.

### 2.2: Link Layer

In this architecture ATM will be carried end-to-end. However, at the edge between the wired (high-speed) network and wireless links, multiple ATM cells will be combined into an HDLC-like frame. These frames comprise the Adaptive HDLC (AHDLC) protocol. The wireless data link layer is adaptive to provide an appropriate trade-off between data rate and reliability in order to support the various services. For example, we may want to drop voice packets, which are time sensitive, but retry data packets. The edge interface unit makes this decision based on knowledge of the requirements of each traffic stream, possibly based on virtual circuit number. For some types of traffic, error correction may be achieved using retransmission. Here, delay is increased for this class of traffic to prevent cell losses. It is well known that even a few cell losses can have a significant impact on the performance of TCP/IP (Transmission Control Protocol/Internet Protocol), while TCP/IP can cope with variable delays . The Adaptive High Level Data Link Control (AHDLC) protocol can change in response to traffic requirements. ATM end-to-end provides the following benefits:

1. Moderate cut-through, e.g. an IP segment may contain 8192 bytes or about 170 cells, while one ATM HDLC-like frame will contain on the order of 3-20 cells.
2. ATM is a standard protocol.
3. ATM can incorporate standardized Quality of Service (QoS) parameters which could be based on the virtual circuit identifier.

The link layer must also maintain cell order; this will be critical during hand-off of an RN from one ES to another. Details of the Adaptive HDLC protocol and frame structures can be found in and .

### 2.3: ATM Layer

The protocol on the Edge Switch (ES) will remove ATM cells from the AHDLC frames and switch them to the proper port. It will also pack ATM cells into an Adaptive HDLC frame to send to the radio. The ATM Device Driver API and Adaptive Driver are detailed in . Note that standard ATM call setup signaling is used and no AAL is precluded from use.

### 2.4: Network Layer

This section of the architecture is concerned with the Internet Protocol and how it relates to ATM and mobility. This layer provides a well known and widely used network layer, whose primary purpose is to provide routing between subnetworks and service for the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) transport layers. The relation between IP and ATM is still an open issue. Classical IP and ARP over ATM (CLIP) is an initial standard solution. However, it has several weak points, such as requiring a router to connect Logical IP Subnetworks (LIS) even when they are directly connected at the ATM level, and requiring an ATM Address Resolution Protocol (ARP) server to provide address resolution for a single LIS.
The Non-Broadcast Multiple Access (NBMA) Next Hop Resolution Protocol (NHRP) provides a better solution, but it is still in draft form. The RDRN architecture has implemented CLIP and supports both PVCs and SVCs via ATMARP.

## 3: Network Control Protocol Overview

An initial implementation of the RDRN Network Control Protocol (NCP) for the prototype system is presented next. The physical layer of the high speed radio connection has a corresponding layer in the NCP, as shown in Table 1. The following is a description and ordering of events for the establishment of the wireless connections.

### 3.1: Physical Layer of the Network Control Protocol

At the physical level, the orderwire is used to exchange position, time, and link quality information and to set up the wireless connections. The process of setting up the wireless connections involves setting up links between ESs and between ESs and RNs. The network will have one master ES, which will run the topology configuration algorithm and distribute the resulting topology information to all the connected ESs over point-to-point orderwire packet radio links. In the current prototype the point-to-point link layer for the orderwire uses AX.25 . The master ES is initially the first active ES, and any ES has the capability of playing the role of the master.

The first ES to become active initially broadcasts its callsign and start-up time in a MYCALL packet, and listens for responses from any other ESs. In this prototype system, the packet radio callsign is assigned by the FCC and identifies the radio operator. Since it is the first active ES, there would be no responses in a given time period, say T. At the end of T seconds, the ES rebroadcasts its MYCALL packet and waits another T seconds. At the end of 2T seconds, if there are still no responses from other ESs, the ES assumes that it is the first ES active and takes on the role of the master. If the first two or more ESs start up within T seconds of each other, at the end of the interval T the ESs compare the start-up times in all the received MYCALL packets, and the ES with the oldest start-up time becomes the master. In this system, accurate time stamps are provided by the GPS.

Each successive ES that becomes active initially broadcasts its callsign in a MYCALL packet. The master, on receipt of a MYCALL packet, extracts the callsign of the source, establishes a point-to-point link to the new ES, and sends it a NEWSWITCH packet. The new ES, on receipt of the NEWSWITCH packet over a point-to-point orderwire link, obtains its position from its GPS receiver and sends its position to the master as a SWITCHPOS packet over the point-to-point orderwire link. On receipt of a SWITCHPOS packet, the master records the position of the new ES in its switch position table, which is a table of ES positions, and runs the topology configuration algorithm to determine the best possible interconnection of all the ESs. The master then distributes the resulting information to all the ESs in the form of a TOPOLOGY packet over the point-to-point orderwire links. Each ES then uses this information to set up the inter-ES links as specified by the topology algorithm. The master also distributes a copy of its switch position table to all the ESs over the point-to-point orderwire links, which they can use in configuring RNs as discussed below. This sequence of operations is illustrated in Figure 5 and Figure 6.
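The master-election rule just described can be summarized in a short sketch. This is a minimal illustration, not the actual NCP implementation; all names are ours.

```python
# A minimal sketch of master election: each ES broadcasts a MYCALL packet
# containing its callsign and GPS-timestamped start-up time; after the
# MYCALL timer expires, the ES with the oldest start-up time is the master.

from dataclasses import dataclass

@dataclass
class MyCall:
    callsign: str        # FCC-assigned packet radio callsign
    startup_time: float  # GPS time stamp (seconds)

def elect_master(own: MyCall, heard: list) -> str:
    """Return the callsign of the master after the MYCALL timer expires."""
    candidates = heard + [own]
    # Oldest start-up time wins; the callsign breaks exact ties.
    return min(candidates, key=lambda m: (m.startup_time, m.callsign)).callsign

# An ES that hears no other MYCALL packets within 2T seconds elects itself:
assert elect_master(MyCall("ES1", 100.0), []) == "ES1"
```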
Each ES also uses the callsign information in the switch position table to set up any additional point-to-point orderwire packet radio links corresponding to the inter-ES links required to exchange any link quality information. Thus this scheme results in a point-to-point star network of orderwire links with the master at the center of the star, along with point-to-point orderwire links between those ESs that have a corresponding inter-ES link, as shown in Figure 3. In the event of failure of the master node, which can be detected by listening for the AX.25 messages generated on node failure, the remaining ESs exchange MYCALL packets, elect a new master node, and the network of ESs is reconfigured using the topology configuration algorithm .

Each RN that becomes active obtains its position from its GPS receiver and broadcasts its position as a USER\_POS packet over the orderwire network. This packet is received by all the nearby ESs. Each candidate ES then computes the distance between the RN and all the candidate ESs, which is possible since each ES has the positions of all the other ESs from the switch position table. An initial guess at the best ES to handle the RN is the closest ES. This ES then feeds the new RN’s position information, along with the positions of all its other connected RNs, to a beamforming algorithm that returns the steering angles for each of the beams on the ES so that all the RNs can be configured. If the beamforming algorithm determines that a beam and TDMA time slot are available to support the new RN, the ES steers its beams so that all its connected RNs and the new RN are configured. It also records the new RN’s position in its user position table, which contains the positions of connected RNs, establishes a point-to-point orderwire link to the new RN, and sends it a HANDOFF packet with link setup information indicating that the RN is connected to it. If the new RN cannot be accommodated, the ES sends it a HANDOFF packet with the callsign of the next closest ES, to which the RN sends another USER\_POS packet over a point-to-point orderwire link. This ES then uses the beamform algorithm to determine if it can handle the RN; a sketch of this admission logic appears below. Figure 7 shows the states of operation and transitions between the states for an RN.

This scheme uses feedback from the beamforming algorithm together with the distance information to configure the RN. It should be noted that the underlying AX.25 protocol provides error free transmissions over point-to-point orderwire links. Also, the point-to-point orderwire link can be established from either end, and the handshake mechanism for setting up such a link is handled by AX.25. If the RN does not receive a HANDOFF packet within a given time, it uses a retry mechanism to ensure successful broadcast of its USER\_POS packet. A point-to-point orderwire link is retained as long as an RN is connected to a particular ES and a corresponding high-speed link exists between them to enable exchange of link quality information. The link can be torn down when the mobile RN migrates to another ES in case of a hand-off.

Thus, at the end of this network configuration process, three overlaid networks are set up, namely, an orderwire network, an RN to ES network, and an inter-ES network. The orderwire network has links between the master ES and every other active ES in a star configuration, links between ESs connected by inter-ES links, as well as links between RNs and the ESs to which they are connected, as shown in Figure 3. Raw pipes for the user data links between RNs and appropriate ESs, as well as for the user data links between ESs, are also set up.
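The admission logic referred to above can be sketched as follows. The distance metric and the beamforming feasibility check are stand-ins; in the prototype the latter role is played by the beamforming algorithm itself.

```python
# A sketch of RN admission: candidate ESs are tried in order of increasing
# distance; the first ES whose beamforming check finds a free (beam, slot)
# answers with a HANDOFF packet, otherwise the RN is referred onward.

import math

def choose_es(rn_pos, switch_table, can_accommodate):
    """switch_table: (callsign, (x, y)) pairs from the switch position table.
    can_accommodate: callable standing in for the beamforming algorithm."""
    by_distance = sorted(switch_table, key=lambda s: math.dist(rn_pos, s[1]))
    for callsign, es_pos in by_distance:
        if can_accommodate(callsign, rn_pos):
            return callsign   # this ES sends HANDOFF: "connected"
    return None               # no ES can currently serve the RN
```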
### 3.2: ATM Network Configuration Layer

This section briefly describes how ATM VCs are set up by the NCP. While the orderwire network determines the topology of all nodes in the wireless segment (e.g., RNs, ESs) in our architecture and establishes link connectivity among adjacent nodes, setup is still required of the actual ATM circuits on which wireless ATM traffic is carried on the user data overlay network. This is accomplished by providing standard ATM signaling capabilities to RNs and ESs and using Classical IP over ATM to associate ATM VCs to IP addresses. The Classical IP over ATM implementation provided works for PVCs and SVCs (using ATMARP). Since an ES may connect to multiple RNs (wireless connections) or ATM switches (wired connections), it can be thought of as a software-based ATM switch. In this sense, an ES features ATM PNNI signaling while an RN features ATM UNI signaling.

By default, an RN creates one wireless-ATM protocol stack and establishes an ATM VC signaling channel on such a stack; however, the stack is initially in an inactive state (i.e., non-operational mode) since there is no link connectivity to another node established yet. Likewise, an ES creates a predefined number of wireless-ATM protocol stacks – acting like ports in an ATM switch – and establishes ATM VC signaling channels on all configured stacks, which are also initialized as inactive. Wireless-ATM protocol stacks are controlled by a daemon, called the adaptation manager, which acts on behalf of the orderwire network. The adaptation manager daemon not only controls the stacks by setting their state to either active or inactive (default), but also may modify configuration parameters of the stacks to provide dynamic adaptation to link conditions.

Two possible scenarios illustrate the interactions between the orderwire network and the wireless-ATM network. In the first scenario the orderwire detects link connectivity between an adjacent pair of nodes (e.g., RN-ES or ES-ES). In this case, the orderwire network requests an inactive stack from the adaptation manager daemon at each end and associates them with a designated address. Upon establishment of link connectivity, a requested wireless stack has its state set to active and is ready to operate. Note that since the signaling channels are preconfigured on the stacks in question, users on the wireless side establish end-to-end connections exactly as if they were connected in a wired ATM network. The other scenario occurs when the orderwire network detects a broken connection, at the link level, between two connected nodes. This case is typical of an RN moving away from the connectivity range of an ES. The orderwire network thus contacts the adaptation manager daemon at each end to set the wireless stacks in question to inactive. Since a wireless stack is never destroyed, it can be reused in a future request from the orderwire to establish connectivity to another pair of nodes.
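A sketch of the stack handling performed by the adaptation manager follows; the class and method names are illustrative, not those of the RDRN daemon.

```python
# Wireless-ATM stacks are created once (inactive, with signaling VCs
# preconfigured), toggled active/inactive as the orderwire reports links
# coming up or going down, and reused rather than destroyed.

class WirelessAtmStack:
    def __init__(self, port):
        self.port = port
        self.active = False   # default state: created inactive
        self.peer = None

class AdaptationManager:
    def __init__(self, n_ports):
        self.stacks = [WirelessAtmStack(p) for p in range(n_ports)]

    def link_up(self, peer_addr):
        stack = next(s for s in self.stacks if not s.active)
        stack.peer, stack.active = peer_addr, True
        return stack.port

    def link_down(self, peer_addr):
        for s in self.stacks:
            if s.active and s.peer == peer_addr:
                s.peer, s.active = None, False   # stack kept for reuse
```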
## 4: Virtual Network Configuration for a Rapidly Deployable Network

In order to make RDRN truly rapidly deployable, configuration at all layers has to be a dynamic and continuous process. Configuration can be a function of such factors as load, distance, capacity, and permissible topology, all of which are constantly changing in a mobile environment. A Time Warp based algorithm is used to anticipate configuration changes and speed the reconfiguration process.

### 4.1: Virtual Network Configuration Algorithm

The Virtual Network Configuration (VNC) algorithm is an application of a more general mechanism called Time Warp Emulation (TWE). Time Warp Emulation is a modification of Time Warp . The motivation behind TWE is to allow the actual components of a real-time system to work ahead in time in order to predict future behavior and adjust themselves when that behavior does not match reality. This is accomplished by realizing that there are now two types of false messages: those which arrive in the past relative to the process’s Local Virtual Time (LVT), and those which are time-stamped with the current real time but whose values exceed some tolerance from the component’s current value. The basic Time Warp mechanism is modified by adding a verification query phase. This phase occurs when real time matches the receive time of a message in the output queue of a process. In this phase, the physical device being emulated in time is queried and the results compared with the value of the message. A value exceeding a prespecified tolerance will cause a rollback of the process.

### 4.2: Virtual Network Configuration Overview

The Virtual Network Configuration (VNC) algorithm can be explained by an example. A remote node’s direction, velocity, bandwidth used, number of connections, past history, and other factors can be used to approximate a new configuration sometime into the future. All actual configuration processes can begin to work ahead in time to where the remote node is expected to be at some point in the future. If the prediction is incorrect, but not far off, only some processing will have to be rolled back in time. For example, the beamsteering process results may have to be adjusted, but the topology and many higher level requirements will still be correct. Working ahead and rolling back to adjust for error with reality is an on-going process, which depends on the tradeoff between allowable risk and the amount of processing time allowed into the future. As a specific example, consider the effects of hand-off on TCP performance as described in . In this work, throughputs were measured for hand-off under various conditions and determined to degrade badly.

### 4.3: Virtual Network Configuration Implementation

The effort required to enhance the network configuration algorithm to include Virtual Network Configuration is minimal. Three new fields are added to each existing message in Table 1: antimessage toggle, send time, and receive time. Physical processes include beamforming, topology acquisition, table updates, and all processing required for configuration. Each physical process is assigned a tolerance. When the value of a real message exceeds the tolerance of a predicted message stored in the send queue, the process is rolled back. Also, an additional packet type was created for updating an approximation of the Global Virtual Time (GVT). Because the system is composed of asynchronously executing logical processes, each working ahead as quickly as possible with its own local notion of time, it is necessary to calculate the time of the system as a whole. This system-wide time is the GVT. The difference between GVT and current time is the amount of lookahead, $`\mathrm{\Lambda }`$. Although GVT runs ahead of real time $`t`$, $`\mathrm{\Lambda }`$ is required because it is used to control the efficiency and accuracy of the system. Since the network configuration system uses a master node as described in the physical layer setup, this is a natural location for a centralized GVT update method: RNs transmit their LVT to the master, and the master calculates an approximate GVT and returns the result.
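The verification-query step and the centralized GVT update described above might be sketched as follows; this is a simplification of the Time Warp machinery, with illustrative names and no antimessage bookkeeping shown.

```python
# When real time reaches the receive time of a predicted (virtual) message,
# the physical process is queried; if the predicted value differs from the
# real one by more than the process tolerance, the process rolls back.

def verify_or_rollback(process, virtual_msg, real_value):
    if abs(virtual_msg.value - real_value) > process.tolerance:
        process.rollback_to(virtual_msg.send_time)  # undo speculative work
        return False
    return True   # the prediction stands; keep working ahead

# Master's approximation of Global Virtual Time from the reported LVTs;
# the lookahead Lambda is the gap between GVT and real time.
def approximate_gvt(reported_lvts):
    return min(reported_lvts)

def lookahead(gvt, real_time):
    return gvt - real_time
```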
An estimate of the additional load on the orderwire packet radios due to VNC is shown in Figure 8. It is assumed that virtual messages are 65 bits longer than real messages and that there is one virtual message for each real message. The figure shows the prototype 19,200 bps orderwire link capacity as a function of the number of RNs, the position update rate of each RN, and the hand-off rate. The capability of the orderwire to support these rates without VNC is discussed later in detail and is shown in Figure 13. Comparing Figures 8 and 13, it is apparent that VNC slightly more than doubles the orderwire load. However, enough capacity remains to support users with a reasonable position update rate and handoff rate with this relatively low 19,200 bps orderwire bandwidth.

### 4.4: Seamless Mobile ATM Routing

This section discusses an incorporation of Virtual Network Configuration (VNC) and the Network Control Protocol (NCP) as described in the previous sections into the Private Network-Network Interface (PNNI) to facilitate seamless ATM hand-off. An attempt is made to minimize the changes to the evolving PNNI standard. Figure 9 shows a high level view of the PNNI architecture; the terminology used is that of the PNNI Specification. In this version of mobile PNNI, the standard PNNI route determination, topology database, and topology exchange would reside within the NCP. The NCP stack with VNC is shown in Figure 10.

The first enabling mechanism is the fact that VNC will cause the NCP to create, at a time prior to the hand-off, the topology which will exist after the hand-off occurs. This will cause PNNI to perform its standard action of updating its topology information immediately before the hand-off occurs. Note that this is localized within a single Peer Group (PG). The second enabling mechanism is a change to the PNNI signaling protocol. In mobile PNNI, standard PNNI signaling is allowed to dynamically modify logical links when triggered by a topology change. This is similar to a CALL ABORT message except that the ensuing RELEASE messages will be contained within the scope of the Peer Group (PG). This new message will be called a SCOPED CALL ABORT message.

When the topology changes due to an end system hand-off, a check is made to determine which end system (RN) has changed logical nodes (LN). An attempt is made to establish the same incoming VCs at the new LN as were at the original LN, and connections are established from the new LN to the original border LNs of the Peer Group. This allows the RN to continue transmitting with the same VCI as the hand-off occurs. The connections from the original LNs to the border LNs are released after the hand-off occurs. If the new LN is already using a VCI that was used at the original LN, the HANDOFF packet will contain the replacement VCIs to be used by the end system (RN). There are now two branches of a logical link tree established with the border LN as the root. After the hand-off takes place, the old branch is removed by the new SCOPED CALL ABORT message. Note that link changes are localized to a single Peer Group. The fact that changes can be localized to a Peer Group greatly reduces the impact on the network and implies that the mobile network should have many levels in its PNNI hierarchy.
In order to maintain cell order, the new path within the Peer Group is chosen so as to be equal to or longer than the original path, based on implementation dependent metrics. Consider the network shown in Figure 11. Peer Groups are enclosed in circles and the blackened nodes represent the lowest level Peer Group Leader for each Peer Group. End system A.1.2.X is about to hand off from A.1.1 to A.2.2. The smallest scope which encompasses the old and new LN is LN A. A.3.1 is the outgoing border node for LN A. A CALL SETUP uses normal PNNI operations to set up a logical link from A.3.1 to A.2.2. After A.1.2.X hands off, a SCOPED CALL ABORT message releases the logical link from A.3.1 to A.1.1.

## 5: Development and Implementation

The initial physical layer network control protocol design was done using Maisie , a C-based parallel programming language. It facilitates creation of entities which execute in parallel and the ability to easily send and receive messages between entities. A Maisie emulation of the entire network was developed which uses the actual NCP code. This helped build confidence that the design of the Network Control Protocol was correct. The network control protocol code was initially tested with only the two packet radios available. Since at least three packet radios are necessary for a complete RN-ES-RN orderwire connection, the next step involved emulating the packet radios via TCP/IP over Ethernet, and completing the development of the code. The packet radio emulation also allowed testing of various configurations that helped determine if the network control protocol was scalable. The physical layer of the Network Control Protocol is a single-unit consumable resource system; there can be no deadlock since there are no cycles. All message interactions take place with a master switch, except for the initial MYCALL packet broadcast. The GPS system was also emulated to provide the appearance of mobility so that hand-offs of a host from one ES to another could be tested. The GPS emulation is also an important component of the Virtual Network Configuration algorithm. The actual orderwire code is used in these emulations.

### 5.1: Timing Results

This section summarizes the results of initial timing experiments that were undertaken to examine the performance of the orderwire system. The experiments involved determining the time required to transmit and process each of the packet types listed in Table 1 using the real packet radios. These times represent the time to packetize, transmit, receive, and depacketize each packet at the Network Control Protocol process. Figure 12 illustrates the physical configuration used for the experiments involving the real packet radios. The results are presented in Table 2. Most of the overhead occurs during the initial system configuration, which occurs only once as long as ESs remain stationary. With regard to a handoff, the 473 millisecond time to transmit and process the handoff packet is on the same order as the time required to compute the beam angles and steer the beams. The following sections provide an analysis and discuss the impact of scaling up the system on the configuration time.

### 5.2: Bandwidth required for the Orderwire Network

The traffic over the orderwire was analyzed to determine a relation between the maximum update rate and the number of RNs. The protocol used for contention resolution on the broadcast channel is the Aloha protocol, which is known to have a maximum efficiency close to 18%. Given the bandwidth of the orderwire channel, the size of an orderwire packet, and this value for the efficiency, we compute and plot the value for the maximum update rate (in packets per minute) for a given number of RNs; a sketch of this calculation follows. The plot of Figure 13 shows the variation in update rate for between 5 and 30 RNs. This study gives us an upper limit on the number of RNs that can be supported over the orderwire given a minimum required update rate and handoff rate.
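The calculation is straightforward; the sketch below reproduces it with a placeholder packet size, since the exact orderwire packet length is not quoted here.

```python
# Maximum sustainable per-RN update rate on the shared orderwire channel:
# usable throughput = peak Aloha efficiency (~18%) times the channel rate,
# divided among the RNs. PACKET_BITS is an assumed value, for illustration.

CHANNEL_BPS = 19200
ALOHA_EFFICIENCY = 0.18
PACKET_BITS = 1024            # assumed orderwire packet size (bits)

def max_updates_per_minute(n_rns):
    packets_per_second = ALOHA_EFFICIENCY * CHANNEL_BPS / PACKET_BITS
    return 60.0 * packets_per_second / n_rns

for n in (5, 10, 20, 30):
    print(n, round(max_updates_per_minute(n), 1))   # updates/min per RN
```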
## 6: NCP Performance Analysis

The analysis of the RDRN network configuration time using the protocols proposed earlier will be divided into three phases. Phase I is the ES-ES configuration, Phase II is the RN configuration, and Phase III is handoff configuration. The specific numerical values used in this section were obtained from Table 2.

### 6.1: Phase I

In Phase I the ES nodes act in a distributed manner to determine which ES will become the master ES. The master ES collects position information, determines the optimum ES interconnections, and distributes the results back to the ES nodes. The ES nodes determine the master ES by broadcasting MYCALL packets and collecting MYCALL packets until the MYCALL Timer, set to a prespecified time $`T`$, expires. The MYCALL packets contain the callsign and boot-time of each ES. The ES with the oldest boot-time is designated as the master. $`T`$ should be chosen as the smallest value which allows enough time for all MYCALL packets to be received. This would be approximately $`0.492(N-1)`$ seconds, where $`N`$ is the total number of ES nodes. NEWSWITCH packets take on the order of 0.439 seconds to transfer, and therefore it will take $`0.439(N-1)`$ seconds to send these packets. The ES nodes will respond with SWITCHPOS packets, which will take another $`0.679(N-1)`$ seconds. These events occur after each MYCALL packet has been received, and can occur before the MYCALL Timer has expired.

The next step in Phase I is to run the topology algorithm, which is based on a consistent labeling algorithm . This algorithm generates all fully connected topologies given ES node locations and constraints on the antenna beams such that beams do not interfere with one another. The information required by the topology module is the GPS location of all ES nodes, transmit and receive beam widths, transmit radius, the number of non-interfering frequency pairs, and an interference multiplier. An interference multiplier of 1.0 assumes adaptive power control, in which case it is assumed that beam power will be adjusted to exactly match the link distance. The interference multiplier multiplied by a link’s actual length determines the range of interference created by the link. The topology computation takes on the order of $`K_{top}\left[N^2+(L+1)^R\right]`$ seconds, where $`L`$ is the number of available frequency pair combinations, with the addition of $`1`$ for no link; $`R`$ is the number of constrained links and $`K_{top}`$ is a constant. The constraints are based on maximum beam length, beam widths, and the number of frequencies which can be supported. The final step in Phase I is to distribute the topology information to all ES nodes in TOPOLOGY packets. This takes approximately $`0.664+0.1(N-1)`$ seconds. The time for Phase I to complete as a function of $`N`$ is shown in Equation 6.1:

$$P1(N)=\mathrm{max}\left[T,0.439(N-1)+0.492(N-1)\right]+K_{top}\left[N^2+(L+1)^R\right]+0.664+0.1(N-1)$$
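A sketch evaluating Equation 6.1 follows. $`K_{top}`$ has no quoted numerical value, so the default below is a placeholder chosen only to make the example run.

```python
# Phase I configuration time, Equation 6.1. All times in seconds.

def phase1_seconds(N, T, L, R, K_top=1.0e-4):
    wait       = max(T, 0.439 * (N - 1) + 0.492 * (N - 1))
    topology   = K_top * (N**2 + (L + 1)**R)
    distribute = 0.664 + 0.1 * (N - 1)
    return wait + topology + distribute

# Example: 6 ESs, a 5 s MYCALL timer, 3 frequency pairs, 8 constrained links.
print(round(phase1_seconds(N=6, T=5.0, L=3, R=8), 2))
```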
### 6.2: Phase II

Phase II is the RN configuration phase. Let $`U`$ be the number of RNs associated with a given ES. The first step is for the ES to receive USER\_POS packets from each RN. This takes $`0.677U`$ seconds. The next step is to determine the optimum direction of the beams in order to form a connection with the RNs. This algorithm's execution time is a linear function of the number of RNs, taking $`K_{bf}U`$ seconds, where $`K_{bf}`$ is a constant. The algorithm is currently implemented in MatLab and takes approximately 7.5 seconds to obtain reasonable convergence of the beam direction to connect with four RNs. The final step in Phase II is to generate a table of complex weights for antenna beamforming and download this table to the hardware. This is a function of the number of elements in the antenna array, $`K_{el}`$, the number of beams, $`B`$, and the number of bits per symbol, $`M`$. $`K_{el}`$ tables are created with $`2^{MB}`$ entries per table. This takes on the order of 2 seconds with 4 beams and 8 elements for QPSK modulation on an OSF1 V4.0 386 DEC 3000/400 Alpha workstation. The entire beamform and table generation module must be repeated for every combination of transmitting RNs, since a different table is used depending on which RNs are currently transmitting data. The complete time for Phase II is shown in Equation 2.

$$P2(U)=0.677U+\sum _{r=1}^{U}\binom{U}{r}\left(K_{bf}r+K_{el}2^{MB}\right)$$ (2)

### 6.3: Phase III

Phase III, shown in Equation 3, is the time required for the orderwire to perform a hand-off. The current network control code determines RN to ES associations based on distance. When the distance between an RN and an ES other than its currently associated ES becomes smaller than the distance between the RN and its currently associated ES, the current ES initiates a hand-off by sending a HANDOFF packet. This takes $`0.473`$ seconds. The RN will then initiate a point-to-point orderwire connection with the new ES. Finally, Phase II must be run again at the new ES, which is the reason for including the function $`P2`$.

$$P3_{RN}=0.473+P2(U+1)$$ (3)
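Equations 2 and 3 can be evaluated with the short sketch below. $`K_{bf}`$ and $`K_{el}`$ are not quoted directly in the text; the defaults used here are placeholders loosely anchored to the ~7.5 s beamform convergence and ~2 s table-generation figures above.

```python
# Phase II and Phase III times, Equations 2 and 3. Times in seconds.

from math import comb

def phase2_seconds(U, M=2, B=4, K_bf=1.9, K_el=0.008):
    total = 0.677 * U                  # USER_POS packet transfers
    for r in range(1, U + 1):          # every combination of transmitting RNs
        total += comb(U, r) * (K_bf * r + K_el * 2**(M * B))
    return total

def phase3_seconds(U, **kw):
    """Hand-off: HANDOFF packet plus re-running Phase II at the new ES."""
    return 0.473 + phase2_seconds(U + 1, **kw)
```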
### 6.4: Orderwire Performance Emulation

The emulation of the orderwire system satisfies several goals. It allows tests of configurations that are beyond the scope of the prototype RDRN hardware. Specifically, it verifies the correct operation of the RDRN Network Control Protocol in a wide variety of situations. The emulation also helps to verify the correctness of the analytical results obtained above. As an additional benefit, much of the actual orderwire code was used by the Maisie emulation, allowing further validation of that code. The Edge Switch (ES) and Remote Node (RN) are modeled as a collection of Maisie entities. This is an emulation rather than a simulation because the Maisie code is linked with the working orderwire code and also with the topology algorithm. There is an entity for each major component of the RDRN system, including the GPS receiver, packet radio, inter-ES links, RN to ES links, and the Master, ES, and RN network configuration processors, as well as other miscellaneous entities. The input parameters to the emulation are shown in Tables 3, 4, 5. The RN VC setup process for connections over the inter-ES antenna beams is assumed to be Poisson; this represents ATM VC usage over the physical link. The RN will maintain a constant speed and direction until a hand-off occurs, then a new speed and direction are generated from a uniform distribution. This simplifies the analytical computation. Note that the NCP packet transfer times as measured in Table 2 are used here.

### 6.5: Orderwire Maisie Emulation Design

The architecture for the RDRN link management and control is shown in Figure 14. The topology modules are used only on ES nodes capable of becoming a master ES. The remaining modules are used on all ES nodes and RNs. The beamform module determines an optimal steering angle for the given number of beams which connects all RNs to be associated with this ES. It computes an estimated signal-to-interference ratio (SIR) and generates a table of complex weights which, once loaded, will control the beam formation. Note that this table is not loaded until the table fill trigger is activated. The connection table is used by the Adaptive HDLC and ATM protocol stacks for configuration via the adaptation manager.

The emulation uses as much of the actual network control code as possible. The packet radio driver, GPS driver, and Network Control Protocol state machine are implemented in Maisie; tables, data structures, and decision functions from the working NCP code are used. Figure 15 shows the structure of the Maisie entities. The entity names are shown in the boxes and the message types are shown along the lines. Direct communication between entities is represented as a solid line. The dashed lines indicate from where entities are spawned. The RN entity which performs ATM VC setup (HSLRN entity in Figure 15) generates calls as a Poisson process which the ES node (HSLES entity in Figure 15) will attempt to accept. If the RN moves out of range or the ES has no beam or slot available, the setup will be aborted. As the RN moves, the ES will hand off the connection to the proper ES based on the closest distance between RN and ES.

## 7: Emulation Results

This section discusses the current results from the emulation. Some of these results revealed problems which are not immediately apparent from the state diagrams in Figures 5, 6, 7. The emulation produces Network Control Protocol Finite State Machine (NCP FSM) output which shows the transitions based on the state diagrams in Figures 5, 6, 7. The FSM output provides an easy comparison with the diagrams to ensure correct operation of the protocol.

### 7.1: Effect of Scale on NCP

The emulation was run to determine the effect on the NCP as the number of ES and RN nodes increased. The dominant component of the configuration time is the topology calculation run by the ES which is designated as the master. Topology calculation involves searching through the problem space of constraints on the directional beams for all feasible topologies and choosing an optimal topology from that set, as described in . The units on all values should be consistent with the GPS coordinate units, and all angles are assumed to be degrees. The beam constraint values are: maximum link distance 1000.0, maximum frequencies 3, interference multiplier 1.0, transmit beam width 10.0, receive beam width 10.0. The topology calculation is performed in MatLab and uses the MatLab provided external C interface. Passing information through this interface is clearly slow; therefore, these results do not represent the exact execution times of the prototype system. However, they do provide a worst case test for the protocol. A possible speedup may arise through the use of Virtual Network Configuration, which will provide a mechanism for predicting values in advance and also allows processing to be distributed.
Another improvement which may be considered is to implement a hierarchical configuration. The network is partitioned into a small number of clusters of nodes in such a way that nodes in each group are as close together as possible. The topology code is run as though these were individual nodes located at the center of each group. These inter-group connections are then added as constraints to the topology computation for the intra-group connections. In this way the topology program only needs to calculate the topology for small numbers of nodes at a time, which it does relatively quickly.

### 7.2: MYCALL Timer

The MYCALL Timer, set to a value of $`T`$ in the analysis section, controls how long the system will wait to discover new ES nodes before completing the configuration. If this value is set too low, new MYCALL packets will arrive after the topology calculation has begun, causing the system to needlessly reconfigure. If the MYCALL Timer value is too long, time will be wasted, which will have a large impact on a mobile ES system. Table 6 shows the input parameters and Figure 16 shows the time required for all MYCALL packets to be received as a function of the number of ES nodes. These times are the optimal values of the MYCALL Timer as a function of the number of ES nodes, because they are exactly the amount of time required for all ESs to respond. In order to prevent the possibility of an infinite loop of reconfigurations from occurring, an exponential back-off on the length of the MYCALL Timer value is introduced. As MYCALL packets arrive after T has expired, the next configuration occurs with an increased value of T.

### 7.3: Link Usage Probability

Multiple RNs may share a single beam using Time Division Multiple Access (TDMA) within a beam. The time slices are divided into slots; thus a $`(beam,slot)`$ tuple defines a physical link. The emulation was run to determine the probability distribution of links used as a function of the number of RNs. The parameters used in the emulation are shown in Table 7, the results of which indicate the number of links and thus the number of distinct $`(beam,slot)`$ tuples required. Figure 17 shows the link usage cumulative distribution function for 4 and 7 RNs.

### 7.4: ES Mobility

ES mobility is a more difficult problem and will be examined in more detail as the research proceeds. The parameters used in an emulation with mobile ES nodes are shown in Table 8. As mentioned in the section on the MYCALL Timer, if a MYCALL packet arrives after this timer has expired, a reconfiguration occurs. This could happen due to a new ES powering up or an ES which has changed position. Figure 18 shows the times at which reconfigurations occurred in a situation in which ES nodes were mobile. Based on the state transitions generated from the emulation, it is apparent that the system is in a constant state of reconfiguration; no reconfiguration has time to complete before a new one begins. As ES nodes move, the NCP must notify the RNs associated with an ES of the new position of the ES, as well as reconfigure the ES nodes. To solve this problem, a tolerance, which may be associated with the link quality, will be introduced which indicates how far nodes can move within a beam before the beam angle must be recalculated; this will allow more time between reconfigurations, as sketched below. It is expected that this tolerance in addition to Virtual Network Configuration will provide a solution to this problem.
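A movement-tolerance test of the kind proposed might look like the following sketch; the tolerance value and the names are illustrative.

```python
# Recompute beam angles (and possibly the topology) only when a node has
# drifted from its last configured position by more than a tolerance that
# could be tied to beam width and link quality.

import math

def needs_reconfiguration(configured_pos, current_pos, tolerance=500.0):
    """Positions in projected GPS coordinates; tolerance in the same units."""
    return math.dist(configured_pos, current_pos) > tolerance
```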
### 7.5: Effect of Communication Failures

The emulation was run with a given probability of failure on each packet type of the Network Control Protocol. The following results are based on the finite state machine (FSM) transition output of the emulation, and an explanation is given for each case.

A dropped MYCALL packet has no effect as long as at least one of the MYCALL packets from each ES is received at the master ES. This is the only use of the AX.25 broadcast mode in the ES configuration. The broadcast AX.25 mode is a one-time, best effort delivery; therefore, MYCALL packets are repeatedly broadcast at the NCP layer. The Maisie emulation demonstrated that a dropped NEWSWITCH packet caused the protocol to fail. This is because the master ES will wait until it receives all SWITCHPOS packets from all ES nodes for which it had received MYCALL packets. The NEWSWITCH packet is sent over AX.25 in connection-oriented mode, i.e., a mode in which corrupted frames are retransmitted; the probability of losing a packet in this mode is very low. A dropped SWITCHPOS packet has the same effect as a dropped NEWSWITCH packet. In order to avoid this situation, the NCP will re-send the NEWSWITCH if no response is received.

Finally, the Maisie emulation showed that a lost TOPOLOGY packet results in a partitioned network. The ES which fails to receive the TOPOLOGY packet is not joined with the remaining ES nodes; however, this ES node continues to receive and process USER\_POS packets from all RNs. It therefore attempts to form an initial connection with all RNs. The solution for this condition is not to allow RN associations with an ES node until the TOPOLOGY packet is received. Because MYCALL packets are transmitted via broadcast AX.25, each ES node can simply count the number of MYCALL packets and estimate the time for the master ES node to calculate the topology, using the number of MYCALL packets as an estimate for the size of the network. If no TOPOLOGY packet is received within this time period, the ES node retransmits its SWITCHPOS packet to the master ES node in order to get a TOPOLOGY packet as a reply.

## 8: Summary

This paper described the design of a control and management network for a mobile wireless ATM network. The orderwire system consists of a packet radio network which overlays the mobile wireless ATM network and receives GPS information. This information is used to control the beamforming antenna subsystem, which provides for spatial reuse. This paper also proposed the design of the VNC algorithm, a novel concept for predictive configuration. A mobile ATM PNNI based on VNC was also discussed. As a prelude to the system implementation, results of a Maisie emulation of the orderwire system were presented. Finally, the Network Control Protocol was tested, initial problems were corrected, and initial performance results were obtained and presented in this paper.
# A Radio Galaxy at $`z=5.19`$Based on observations at the W.M. Keck Observatory, which is operated as a scientific partnership among the University of California, the California Institute of Technology, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.

## 1 Introduction

How did the first objects form after the Big Bang? In hierarchical cosmogonies (e.g., Turner 1998), the first gravitationally bound systems may have been stars and small star–forming systems which merge to form galaxies in large dark matter halos. Arising from the end products of stellar evolution and mergers, central black holes could grow to become extremely massive. However, it is not clear how this process would work at very high redshifts, where little time is available. It has been suggested that primordial black holes may form well before their host galaxies (Loeb 1993). In any case, accretion events fueling massive black holes are thought to manifest themselves as active galactic nuclei (AGN; e.g., Rees 1984). Due to their extreme luminosity, AGN are convenient beacons for exploring these formative, ‘Dark Ages’ of our Universe.

Extragalactic radio sources have played an important role in identifying active galaxies at high redshifts. The most distant known galaxies have consistently been radio–selected until only very recently. In this Letter we report the discovery of a radio galaxy at $`z=5.19`$. At this redshift it is the most distant known AGN, surpassing even quasars for the first time in 36 years. Throughout this paper we use $`H_0=65h_{65}\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, $`\mathrm{\Omega }_M=0.3`$, and $`\mathrm{\Lambda }=0`$. For these parameters, 1″ subtends 7.0 $`h_{65}^{-1}`$ kpc at $`z=5.19`$ and the Universe is only 1.08 Gyr old, corresponding to a lookback time of 91.1% of the age of the Universe.

## 2 Source Selection

The most efficient method to find high–redshift radio galaxies (HzRGs) is to combine two well–known techniques. The first is to select radio sources with ultra–steep spectra (USS) at radio wavelengths, i.e. very red radio colors (e.g., Chambers, Miley, & van Breugel 1990). Most powerful radio galaxies have radio spectral energy distributions which steepen with frequency. Therefore, at fixed observing frequencies more distant sources exhibit steeper spectra (e.g., van Breugel et al. 1999). A second selection criterion relies upon the magnitude–redshift relationship at infrared wavelengths, or $`K`$–$`z`$ Hubble diagram, for powerful radio galaxies (Figure 1). At low redshifts ($`z<1`$), powerful radio galaxies are uniquely associated with massive galaxies. The well–behaved $`K`$–$`z`$ diagram suggests that such galaxies can be found through near–IR identification. This has been confirmed by the discovery of many $`3<z<4.4`$ radio galaxies which approximately follow the $`K`$–$`z`$ relationship, even to the highest redshifts and despite significant morphological evolution (van Breugel et al. 1998).

Using several new, large radio surveys we constructed a USS sample ($`S_\nu \propto \nu ^\alpha `$; $`\alpha _{365\mathrm{MHz}}^{1.4\mathrm{GHz}}<-1.30`$; De Breuck et al. 1999 \[DB99\]) which is much larger, more accurate, and reaches fainter flux density limits than previous such samples. TN J0924-2201, with $`\alpha _{365\mathrm{MHz}}^{1.4\mathrm{GHz}}=-1.63\pm 0.08`$, is among the steepest sources of our sample.
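For reference, the two-point spectral index used in the USS selection is computed as in the sketch below (our own illustration; the flux densities are invented to reproduce $`\alpha `$ ≈ −1.63).

```python
# Two-point radio spectral index alpha, with S_nu proportional to nu**alpha.

import math

def spectral_index(s_low, s_high, nu_low=365e6, nu_high=1.4e9):
    return math.log(s_high / s_low) / math.log(nu_high / nu_low)

# A source fading by a factor of ~9 between 365 MHz and 1.4 GHz is
# ultra-steep and enters the USS sample (alpha < -1.30):
print(round(spectral_index(9.0, 1.0), 2))   # -> -1.63
```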
VLA observations at 4.85 GHz show the source is a slightly resolved $`1\stackrel{}{\mathrm{.}}2`$ double, with $`S_{4.85GHz}=8.6\pm 0.5`$ mJy, centered at $`\alpha _{\mathrm{J2000}}=09^h24^m19\stackrel{\mathrm{s}}{\mathrm{.}}92`$, $`\delta _{\mathrm{J2000}}=-22\mathrm{°}01\mathrm{}41\stackrel{}{\mathrm{.}}5`$ (Figure 2). ## 3 Observations We obtained $`K_s`$ images of TN J0924$`-`$2201 using NIRC (Matthews & Soifer 1994) at the Keck I telescope. We integrated for 32 minutes on UT 1998 April 18 in photometric conditions with $`0\stackrel{}{\mathrm{.}}5`$ seeing, and again for 32 minutes on UT 1998 April 19 through light cirrus with $`0\stackrel{}{\mathrm{.}}6`$ seeing. The observing procedures, calibration, and data reduction techniques were similar to those described in van Breugel et al. (1998). The final image comprising 3840 s of on–source integration is shown in Figure 2. Using circular apertures of $`2\stackrel{}{\mathrm{.}}1`$ diameter, encompassing the entire object, we measure $`K=21.15`$ for night 1, and 21.45 for night 2. We estimate that $`K=21.3\pm 0.3`$. If TN J0924$`-`$2201 is at $`z=5.19`$ (§4), then redshifted \[O II\] $`\lambda `$3727 at $`\lambda =2.307\mu `$m would be included in the $`K_s`$ passband and some of the $`K`$-band flux might be due to line emission. We obtained spectra of TN J0924$`-`$2201 through a 1$`\stackrel{}{\mathrm{.}}`$5 wide, 3′ long slit using LRIS (Oke et al. 1995) at the Keck II telescope. The integration times were 5400 s on UT 1998 December 19 (position angle 0°) and 4400 s on UT 1998 December 20 (position angle 180°); both nights were photometric with 0$`\stackrel{}{\mathrm{.}}`$6 seeing. The observations used the 150 lines mm<sup>-1</sup> grating ($`\lambda _{\mathrm{blaze}}\approx 7500`$ Å; $`\mathrm{\Delta }\lambda _{\mathrm{FWHM}}\approx 17`$ Å), sampling the wavelength range 4000 Å to 1$`\mu `$m. Between each 1800 s exposure, we reacquired offset star A (see Fig. 2), performed 20″ spatial shifts to facilitate removal of fringing in the reddest regions of the spectra, and blind offset the telescope to return TN J0924$`-`$2201 within the slit. We calculated the dispersion using a NeAr lamp spectrum taken immediately subsequent to the observations (RMS variations of 0.50 Å), and adjusted the zero point according to telluric emission lines. Final wavelength calibration is accurate to 1 Å. The spectra were flux calibrated using observations of Feige 67 and Feige 110 obtained on each night and were corrected for foreground Galactic extinction using a reddening of $`E_{B-V}=0.0168`$ determined from the dust maps of Schlegel, Finkbeiner, & Davis (1998). We find a strong, single emission line at $`\lambda \approx 7530`$ Å which shifts by $`\sim 16`$ Å between the two nights (Figure 3; Table 1). The cause of the line offset is unclear, though it may be related to problems LRIS was experiencing with slippage in the movable guider at the time of the observations. The relative brightnesses of other sources on the slit vary between each 1800 s observation, indicating that despite our precautions of reacquiring the target after each exposure, guider slippage must have caused some variations in telescope offsetting. These slight pointing changes may have caused the slit to sample different regions of spatially–extended, line–emitting gas. Indeed, TN J0924$`-`$2201 shows two separate components at $`K`$ (Figure 2), and emission–line regions of HzRGs are known to be kinematically complex (Chambers, Miley & van Breugel 1990; van Ojik et al. 1997).
Line parameters are measured with a Gaussian fit to the emission line and a flat (in $`F_\lambda `$) fit to the continuum (Table 1). Equivalent width values were derived from a Monte Carlo analysis using the measured line flux and continuum values with errors, subject to the constraint that both are positive. For UT 1998 Dec. 19, when no continuum was reliably detected, we quote the 90% confidence limit, $`W^{\mathrm{obs}}>2760`$ Å. For UT 1998 Dec. 20, when continuum was marginally detected, we quote the 90% confidence interval, $`W^{\mathrm{obs}}=710`$–$`1550`$ Å. ## 4 Redshift Determination As discussed by Dey et al. (1998) and Weymann et al. (1998) for two $`z>5`$ Ly$`\alpha `$-emitting field galaxies, a solitary, faint emission line at red wavelengths is most likely to be either low-redshift \[O II\] $`\lambda `$3727 or high-redshift Ly$`\alpha `$. Similar arguments are even more persuasive for HzRGs because of their strong, rich emission line spectra. For example, if the line at $`\sim `$7530 Å were \[O II\] at $`z=1.020`$ then composite radio galaxy spectra (McCarthy 1993; Stern et al. 1999a) indicate that the TN J0924$`-`$2201 spectrum should have shown C II\] $`\lambda `$2326 at 4699 Å with 40–70% the strength of \[O II\], and Mg II $`\lambda `$2800 at 5653 Å with 20–60% the strength of \[O II\]. Similar arguments rule out identifying the emission line with H$`\alpha `$ at $`z=0.147`$ or \[O III\] $`\lambda `$5007 at $`z=0.504`$, since in these cases even stronger confirming lines should have been seen. The large equivalent widths also argue against identifying the emission line with \[O II\] at $`z=1.020`$, implying $`W_{[\mathrm{OII}]}^{\mathrm{rest}}>1370`$ Å (night 1) and $`350<W_{[\mathrm{OII}]}^{\mathrm{rest}}<770`$ Å (night 2). Radio galaxy composites typically have rest–frame \[O II\] equivalent widths of $`\sim 130`$ Å (McCarthy 1993; Stern et al. 1999a), though active galaxies with extreme $`W_{[\mathrm{OII}]}^{\mathrm{rest}}`$ are occasionally observed ($`W_{[\mathrm{OII}]}^{\mathrm{rest}}\sim 750`$ Å; Stern et al. 1999b). The equivalent width of TN J0924$`-`$2201 is more typical of high-redshift Ly$`\alpha `$ which is often observed with rest frame values of several $`\times `$ 100 Å in HzRGs (Table 2). We also note that the observations from the second night show that Ly$`\alpha `$ is attenuated on the blue side, presumably due to associated and intervening hydrogen gas, as is commonly observed in HzRGs (e.g., van Ojik et al. 1997; Dey 1997) and normal star-forming galaxies at $`z>5`$ (e.g., Dey et al. 1998). Finally, the faint $`K`$-band magnitude of TN J0924$`-`$2201 conforms to the extrapolation of the $`K`$–$`z`$ relation to $`z>5`$ (Figure 1). Identifying the emission line with \[O II\] would imply a severely underluminous HzRG (by 3–4 mag). Therefore, the most plausible identification of the emission line in TN J0924$`-`$2201 is with Ly$`\alpha `$ at a (mean) observed wavelength of 7530 Å and $`z=5.19`$. Table 1 gives the dereddened emission–line fluxes.
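To make the equivalent-width argument above concrete, here is a small illustrative calculation (ours, not the authors'): the observed equivalent width scales as $`W^{\mathrm{obs}}=(1+z)W^{\mathrm{rest}}`$, so the two candidate identifications imply very different rest-frame values.

```python
# Rest-frame equivalent widths under the two candidate line identifications.
lam_obs = 7530.0    # observed wavelength of the emission line, Angstroms
w_obs_min = 2760.0  # night-1 90% confidence lower limit on the observed equivalent width

for name, lam_rest in [("[O II] 3727", 3727.0), ("Ly-alpha 1216", 1215.67)]:
    z = lam_obs / lam_rest - 1.0
    w_rest = w_obs_min / (1.0 + z)  # W_rest = W_obs / (1 + z)
    print(f"{name}: z = {z:.3f}, W_rest > {w_rest:.0f} A")

# [O II]:   z = 1.020, W_rest > ~1370 A -- far above the ~130 A typical of radio galaxies
# Ly-alpha: z = 5.19,  W_rest > ~450 A  -- ordinary for Ly-alpha in HzRGs
```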
## 5 Discussion Among all known $`z\gtrsim 3.8`$ HzRGs, TN J0924$`-`$2201 is fairly typical in radio luminosity, equivalent width, and velocity width (Table 2). But this source has the steepest radio spectrum, consistent with the $`\alpha `$–$`z`$ relationship for radio galaxies (e.g., Röttgering et al. 1997). TN J0924$`-`$2201 also has the smallest linear size, perhaps indicating that the source is relatively young and/or embedded in a denser environment compared to the other HzRGs, commensurate with its large velocity width (van Ojik et al. 1997) and very high redshift. Together with 8C 1435$`+`$63, TN J0924$`-`$2201 appears underluminous in Ly$`\alpha `$, which might be caused by absorption in a relatively dense cold and dusty medium. Evidence for cold gas and dust in some of the most distant HzRGs has been found from sub–mm continuum and CO–line observations of 8C 1435$`+`$63 and 4C 41.17 (e.g., Ivison et al. 1998). Our observations of TN J0924$`-`$2201 extend the Hubble $`K`$–$`z`$ diagram for powerful radio galaxies to $`z=5.19`$. Simple stellar evolution models are shown in Figure 1 for comparison with the HzRG. Despite the enormous $`k`$–correction effect (from $`U_{\mathrm{rest}}`$ at $`z=5.19`$ to $`K_{\mathrm{rest}}`$ at $`z=0`$) and strong morphological evolution (from radio–aligned to elliptical structures), the $`K`$–$`z`$ diagram remains a powerful phenomenological tool for finding radio galaxies at extremely high redshifts. Deviations from the $`K`$–$`z`$ relationship may exist (Eales et al. 1997; but see McCarthy 1998), and scatter in the $`K`$–$`z`$ values appears to increase with redshift. The clumpy, radio–aligned $`U_{\mathrm{rest}}`$ morphology resembles that of other HzRGs (van Breugel et al. 1998; Pentericci et al. 1998). If the continuum is dominated by star light, as appears to be the case in the radio–aligned HzRG 4C 41.17 at $`z=3.798`$ (Dey et al. 1997), then $`M(U)=-24.4`$ for TN J0924$`-`$2201. Then we can derive a SFR of $`\sim `$200 $`M_{\odot }`$ yr<sup>-1</sup>, assuming a Bruzual & Charlot (1999) GISSEL stellar evolution model with metallicity $`Z=0.008`$, no extinction, and a Salpeter IMF. This SFR value is highly uncertain due to the unknown, but competing, effects of extinction and \[O II\] emission–line contamination, but is not unreasonable. It is 2.5 times less than in 4C 41.17, which has $`M(U)=-25.2`$ using the same aperture (Chambers et al. 1990). TN J0924$`-`$2201 may be a massive, active galaxy in its formative stage, in which the SFR is boosted by induced star formation (e.g., Dey et al. 1997). For comparison, other ‘normal’ star forming galaxies at $`z>5`$ have 10–30 times lower SFR ($`6`$–$`20M_{\odot }`$ yr<sup>-1</sup>; Dey et al. 1998; Weymann et al. 1998; Spinrad et al. 1998). Recent $`z\sim 3`$ and $`z\sim 4`$ Lyman–break galaxy observations have suggested a possible divergence of star formation and AGN activity at high redshift (Steidel et al. 1999), contrary to what was previously thought (e.g., Haehnelt, Natarajan & Rees 1998). However, if starbursts and AGN are closely coupled, as suggested to explain the ultraluminous infrared galaxies (Sanders & Mirabel 1996), then young AGN may inhabit especially dusty, obscured galaxy systems. To obtain a proper census of the AGN population at the very highest redshifts therefore requires samples which avoid optical photometric selection and extinction bias, such as our cm–wavelength/$`K`$-band radio galaxy sample. As emphasized by Loeb (1993), if massive black holes form in a hierarchical fashion together with their host galaxies, this process must be quick and efficient, as available timescales are short: at $`z=5.19`$ the Universe is only 1 Gyr old. It is unclear how this could be done, so other models, where primordial massive black holes form soon after the Big Bang and prior to the beginning of galaxy formation, may require additional investigation. We thank G.
Puniwai, W. Wack, R. Goodrich and R. Campbell for their expert assistance during our observing runs at the W.M. Keck Observatory, and A. Dey, J.R. Graham and H. Spinrad for useful discussions. The work by W.v.B., C.D.B. and S.A.S. at IGPP/LLNL was performed under the auspices of the US Department of Energy under contract W-7405-ENG-48. W.v.B. also acknowledges support from NASA grant GO 5940, and D.S. from IGPP/LLNL grant 98–AP017.
no-problem/9904/physics9904059.html
ar5iv
text
# Magnetism of Neutron Stars and Planets ## 1 Introduction It is known that Neutron stars or Pulsars have strong magnetic fields of $`\sim 10^8`$ Tesla in their vicinity, while certain White Dwarfs have magnetic fields $`\sim 10^2`$ Tesla. If we were to use conventional arguments that when a sun type star with a magnetic field $`\sim 10^{-4}`$ Tesla contracts, there is conservation of magnetic flux, then we are led to magnetic fields for Pulsars and White Dwarfs which are a few orders of magnitude less than the required values. We will now argue that, in the light of recent results that below the Fermi temperature the degenerate electron gas obeys a semionic statistics, that is, a statistics in between the Fermi-Dirac and Bose-Einstein, it is possible to deduce the correct magnetic fields for Neutron stars and White Dwarfs. Moreover this will also enable us to deduce the correct magnetic field of a planet like the earth. ## 2 Anomalous Behaviour Below the Fermi Temperature In recent years, it has been realized that under specific conditions, for example low dimensionality or sub-Fermi temperatures, Fermions exhibit an anomalous character - they obey statistics in between the Fermi-Dirac and Bose-Einstein statistics. Let us specifically consider the case of sub-Fermi temperatures (cf. ref.). To notice the anomalous behaviour in a simple way we observe that in this case, as is known, the assembly fills up each and every single particle energy level below the Fermi energy, with the Fermionic occupation number $`1`$. The density of states in momentum space is given by $`d^3p`$, exactly as in the case of Bosons. Whence we obtain the well known result $$ϵ_F=\frac{\hbar ^2}{2m}\left(\frac{6\pi ^2}{v}\right)^{2/3}$$ (1) where $`ϵ_F`$ is the Fermi energy. The result for Phonons which obey Bose-Einstein statistics is identical to equation (1) (cf. ref.). The anomalous behaviour can also be seen as follows: We have for the energy density $`e,`$ in this case $$e\propto \int _0^{p_F}\frac{p^2}{2m}d^3p\propto T_F^{2.5}$$ (2) where $`p_F`$ is the Fermi momentum and $`T_F`$ is the Fermi temperature. On the other hand, it is known that in $`n`$ dimensions we have, $$e\propto T_F^{n+1}$$ (3) (For the case $`n=3,`$ (3) is identical to the Stefan-Boltzmann law). Comparison of (3) and (2) shows that the assembly behaves with the fractal dimensionality $`1.5`$. Let us now consider an assembly of $`N`$ electrons. As is known, if $`N_+`$ is the average number of particles with spin up, the magnetisation per unit volume is given by $$M=\frac{\mu (2N_+-N)}{V}$$ (4) where $`\mu `$ is the electron magnetic moment. At low temperatures, in the usual theory, $`N_+\approx \frac{N}{2}`$, so that the magnetisation given in (4) is very small. On the other hand, for Bose-Einstein statistics we would have, $`N_+\approx N`$. With the above semionic statistics we have, $$N_+=\beta N,\frac{1}{2}<\beta <1,$$ (5) If $`N`$ is very large, this makes an enormous difference in (4). Let us use (4) and (5) for the case of Neutron stars. ## 3 Magnetism of Neutron Stars and White Dwarfs In this case, as is well known, we have an assembly of degenerate electrons at temperatures $`\sim 10^7K`$, (cf. for example ). So the considerations of Section 2 apply. In the case of a Neutron star we know that the number density of the degenerate electrons, $`n\sim 10^{31}`$ per c.c.. So using (4) and (5) and remembering that $`\mu \sim 10^{-20}G,`$ the magnetic field near the Pulsar is $`\sim 10^{11}G\sim 10^8`$ Tesla, as required.
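Before turning to white dwarfs, the pulsar estimate above is easy to check numerically; the sketch below (our illustration, with the value of β a free assumption of the semionic picture) simply evaluates Eq. (4) with $`N_+=\beta N`$.

```python
# Order-of-magnitude check of the pulsar field estimate, in Gaussian (cgs) units.
mu = 9.27e-21  # electron magnetic moment (Bohr magneton), erg/G
n = 1e31       # number density of degenerate electrons, cm^-3

beta = 1.0     # semionic occupation fraction, 1/2 < beta < 1; beta = 1 is the extreme case

# Eq. (4) with N_+ = beta * N gives the magnetization M = mu * (2*beta - 1) * n.
M = mu * (2.0 * beta - 1.0) * n
print(f"Magnetization ~ {M:.1e} G")  # ~1e11 G, the field scale quoted above

# For beta -> 1/2 (ordinary Fermi-Dirac statistics) the net moment, and the field, vanish.
```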
As mentioned earlier, some White Dwarfs also have magnetic fields. If the White Dwarf has an interior of the dimensions of a Neutron star, with a similar magnetic field, then remembering that the radius of a White Dwarf is about $`10^3`$ times that of a Neutron star, its magnetic field would be $`\sim 10^{-6}`$ times that of the neutron star, which is known to be the case. ## 4 Discussion It is quite remarkable that the above mechanism can also explain the magnetism of the earth. As is known, the earth has a solid core of radius of about 1200 kilometers and temperature about 6000 K. This core is made up almost entirely of Iron $`(90\%)`$ and Nickel $`(10\%)`$. It can easily be calculated that the number of particles $`N\sim 10^{48}`$, and that the Fermi temperature is $`\sim 10^5K`$. In this case we can easily verify using (4) and (5) that the magnetic field near the earth’s surface is $`\sim 1G`$, which is indeed the case. It may be mentioned that the anomalous Bosonic behaviour given in (5) would imply a sensitivity to external magnetic influences which could lead to effects like magnetic flips or reversals. To see this, we observe that the number of electrons, with spin aligned along a magnetic field $`B`$ which is introduced, where, $$B\ll ϵ_F/2\mu ,$$ is given by (cf. ref.), using Fermi-Dirac statistics, $$N_+\approx \frac{N}{2}\left(1+\frac{3\mu B}{2ϵ_F}\right)$$ That is, $`\beta `$ in (5) is given by $$\beta \approx \frac{1}{2},$$ and the introduction of the field $`B`$ does not lead to a significant magnetic field in (4). But if, as in Section 2, $`\beta \ne \frac{1}{2}`$, but rather $`\beta >\frac{1}{2}`$, then in view of the fact that $`N`$ is very large, the contribution from (4) could be significant. Indeed, in the case of the earth, magnetic reversals do take place from time to time and are as yet not satisfactorily explained.
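As a closing numerical check (our illustration, not the author's), the quoted core parameters follow from the stated radius and composition; the sketch uses the paper's Eq. (1) with $`v=1/n`$.

```python
# Check of the quoted Earth-core parameters: N ~ 1e48 electrons and T_F ~ 1e5 K.
import numpy as np

hbar = 1.0546e-27  # erg s
m_e = 9.109e-28    # electron mass, g
k_B = 1.381e-16    # erg/K

R = 1.2e8                     # core radius, cm (about 1200 km)
V = 4.0 / 3.0 * np.pi * R**3  # core volume, cm^3
N = 1e48                      # number of conduction electrons, as quoted in the text
n = N / V                     # number density, cm^-3

# Eq. (1) with v = 1/n: eps_F = (hbar^2 / 2m) * (6 pi^2 n)^(2/3)
eps_F = hbar**2 / (2.0 * m_e) * (6.0 * np.pi**2 * n) ** (2.0 / 3.0)
T_F = eps_F / k_B
print(f"n = {n:.1e} cm^-3, T_F = {T_F:.1e} K")  # T_F of order 1e5 K, well above 6000 K
```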
no-problem/9904/astro-ph9904271.html
ar5iv
text
# A Black Hole in the X-Ray Nova Velorum 1993 ## 1 INTRODUCTION An important class of binary systems has been identified in which a low-mass secondary (companion) star orbits a probable black hole; see White, Nagase, & Parmar (1995), van Paradijs & McClintock (1995), and Tanaka & Lewin (1995) for extensive reviews. In all cases they were first observed in outburst as “X-ray novae” (or “soft X-ray transients”). Their X-ray spectra are generally characterized by a prominent, “soft” thermal component ($`kT\sim 1`$ keV) as well as a “hard” power-law tail extending to very high energies (0.1–1 MeV). During outburst, the radiation from these and other low-mass X-ray binaries (LMXBs; the “low mass” refers to the secondary star) is emitted predominantly by the accretion disk surrounding the primary star. A lower limit to the mass of the primary ($`M_1`$) in a given X-ray transient can be measured when it returns to quiescence. At that time, light from the secondary star contributes significantly to (or even dominates) the visible spectrum, and the secondary’s radial-velocity curve can be determined with a series of time-resolved spectra (see Cowley 1992 for a review). The orbital period ($`P`$) and semiamplitude ($`K_2`$) of the secondary yield the mass function of the primary, $`f(M_1)=PK_2^3/2\pi G=M_1^3\mathrm{sin}^3i/(M_1+M_2)^2`$, where $`i`$ is the inclination of the orbital plane to our line of sight. Clearly, $`f(M_1)`$ provides an absolute lower limit to the mass of the primary; only if $`M_2=0`$ and $`i=90^{\circ }`$ is $`M_1=f(M_1)`$. It is generally acknowledged that if the primary is dark and has $`f(M_1)\gtrsim 3.2M_{\odot }`$, it is probably a black hole, since the theoretical upper limit to the mass of a normal neutron star is $`\sim `$ 3.0–3.2 $`M_{\odot }`$ (Friedman, Ipser, & Parker 1986; but see Friedman & Ipser 1987, as well as Bahcall, Lynn, & Selipsky 1990, for conditions that might allow neutron stars to exceed this nominal limit). The six best examples of LMXBs whose mass function exceeds (or is close to) $`3M_{\odot }`$, along with the derived values of $`f(M_1)`$ (in units of $`M_{\odot }`$) and references, are as follows in chronological order of the mass-function measurement: A0620–00 = V616 Mon ($`3.18\pm 0.16`$, McClintock & Remillard 1986; $`2.72\pm 0.06`$, Marsh, Robinson, & Wood 1994; $`2.91\pm 0.08`$, Orosz et al. 1994), GS 1124–68 = Nova Mus 1991 ($`3.1\pm 0.4`$, Remillard, McClintock, & Bailyn 1992; $`3.01\pm 0.15`$, Orosz et al. 1996), GS 2023+338 = V404 Cyg ($`6.26\pm 0.31`$, Casares, Charles, & Naylor 1992; $`6.08\pm 0.06`$, Casares & Charles 1994), GRO J1655–40 = Nova Sco 1994 ($`3.16\pm 0.15`$, Bailyn et al. 1995; $`3.24\pm 0.09`$, Orosz & Bailyn 1997), GS 2000+25 = Nova Vul 1988 ($`5.02\pm 0.47`$, Casares, Charles, & Marsh 1995; $`4.97\pm 0.10`$, Filippenko, Matheson, & Barth 1995a, slightly revised to $`5.01\pm 0.15`$ \[not $`\pm 0.12`$\] by Harlaftis, Horne, & Filippenko 1996), and Nova Oph 1977 ($`4.0\pm 0.8`$, Remillard et al. 1996; $`4.86\pm 0.13`$, Filippenko et al. 1997, slightly revised to $`4.65\pm 0.21`$ by Harlaftis et al. 1997). Here we add a seventh object to this list: Nova Vel 1993 = GRS 1009–45, with a derived mass function of $`3.17\pm 0.12M_{\odot }`$. Nova Vel 1993 was discovered on 12 September 1993 with the WATCH all-sky monitor aboard Granat (Lapshov, Sazanov, & Sunyaev 1993; Lapshov et al. 1994) and with BATSE on the Compton Gamma-Ray Observatory (Harmon et al. 1993).
Its spectrum exhibited an “ultrasoft” hump at low energies ($`\sim 1`$ keV) and a power-law tail out to at least 100 keV (Kaniovsky, Borozdin, & Sunyaev 1993), typical of X-ray binaries in which the compact object is a black hole. Two months later (17 November), Della Valle & Benetti (1993) discovered a blue optical counterpart at $`V\approx 14.6`$ mag, but reasonable estimates suggest that the magnitude at the time of outburst was $`V=13.8\pm 0.3`$ (Della Valle et al. 1997). Optical photometry conducted by Bailyn & Orosz (1995) about half a year after the primary outburst showed the presence of a secondary outburst and several mini-outbursts, again reminiscent of black-hole X-ray novae. Della Valle et al. (1997) suggested an orbital period of about 4 hours, but a more reliable period of $`6.86\pm 0.12`$ hours was obtained by Shahbaz et al. (1996). The spectral type of the secondary star in the binary system was estimated to be late-G/early-K by Shahbaz et al. (1996), and later than G5–K0 by Della Valle et al. (1997). On 1998 January 25 (UT dates are used throughout this paper), we obtained several $`R`$-band images of the field of Nova Vel 1993 with the Low Resolution Imaging Spectrometer (LRIS; Oke et al. 1995) at the Cassegrain focus of the Keck-II telescope. As can be seen in Figure 1, which shows a subset of one image (seeing $`0.65^{\prime \prime }`$), the nova was in quiescence by this time, with $`R=21.2\pm 0.2`$ mag.<sup>2</sup><sup>2</sup>2We adopt the magnitudes of comparison stars quoted in Table 1 of Della Valle et al. (1997). There appears to be a numbering mismatch, or errors in the photometry, of some stars in Table 1 of Shahbaz et al. (1996): for example, their Stars 2, 5, and 7 should have comparable magnitudes, yet they are listed as being very different. To determine whether Nova Vel 1993 should be considered a good dynamical black-hole candidate, we decided to obtain a radial-velocity curve with LRIS. Our group had already obtained excellent results in this way for the black-hole candidates GRO J0422+32 (Filippenko, Matheson, & Ho 1995b; Harlaftis et al. 1999), GS 2000+25 (Filippenko et al. 1995a; Harlaftis et al. 1996), and Nova Oph 1977 (Filippenko et al. 1997; Harlaftis et al. 1997), all of which are comparably faint. ## 2 OBSERVATIONS AND REDUCTIONS Nova Vel 1993 was observed with LRIS in 1998 during the nights of January 25, February 1, March 5–6, and May 2, as well as in 1999 during the night of January 21. A journal of useful observations is given in Table 1. (The spectra obtained on 1998 May 2, and a few spectra on other nights, were of marginal quality and are not considered here.) Given the object’s far southerly declination ($`-45^{\circ }`$), and the restrictive southwest azimuth limit on Keck-II ($`185^{\circ }`$), observing was restricted to $`\lesssim 1.5`$ hours per night; hence, a number of different observing runs separated by a range of intervals was needed to avoid serious aliasing of the orbital period. Conditions were always clear, and the seeing was about $`1.2^{\prime \prime }`$, reasonably good considering the high airmass ($`\sim 2.4`$). Typical exposure times were 1100 s. The long slit of width $`1^{\prime \prime }`$ was oriented at a position angle (PA) of $`160^{\circ }`$ for all observations except that of 1998 January 25. This was close to the parallactic angle at the time of observation, thereby reducing differences in the relative amount of light lost at different wavelengths (Filippenko 1982).
At such a PA, the slit went directly through Star A, which is much brighter than the quiescent nova and only $`1.6^{\prime \prime }`$ SE of it. (Della Valle et al. 1997 incorrectly state that Star A is SW of the nova.) However, since the direction of atmospheric dispersion (i.e., the parallactic angle) at the time of observation coincided with the nova/Star-A orientation, the degree of contamination from Star A was nearly independent of wavelength. The slit also partially intersected Stars B and C (Fig. 1), but their light did not contaminate that of the nova; see Figure 2, which plots the intensity of light along the slit at the wavelength of H$`\alpha `$ (thereby accentuating the nova’s contribution). We used a Tektronix $`2048\times 2048`$ pixel CCD with a scale of $`0.215^{\prime \prime }`$ pixel<sup>-1</sup> ($`0.43^{\prime \prime }`$ per binned pixel in the spatial direction). The 1200 grooves mm<sup>-1</sup> grating, blazed at 7500 Å, resulted in a wavelength range of $`\sim `$ 5650–6950 Å, and a full-width at half-maximum (FWHM) spectral resolution of $`\approx 2.5`$ Å ($`\approx 120`$ km s<sup>-1</sup>) with the $`1^{\prime \prime }`$ slit, essentially identical to what we used in our previous studies of black-hole X-ray novae. Thirteen velocity standards with spectral types in the range G5 V–M2 V were observed with the same setup, given the estimated classification of the secondary star in Nova Vel 1993 (Shahbaz et al. 1996; Della Valle et al. 1997). Also, spectra of some sdF stars (Oke & Gunn 1983) were obtained for flux calibration and removal of telluric absorption lines. Cosmic rays were eliminated from the two-dimensional spectra through comparison of pairs of consecutive exposures. The two-dimensional spectra were bias-subtracted and flattened in the usual manner. The wavelength scale was determined from polynomial fits to the positions of emission lines in spectra of Hg-Ne lamps obtained with the telescope at (or near) the position of each object. To ensure accurate wavelength calibration, final corrections ($`0.0`$–0.3 Å) to the wavelength solution were obtained from night-sky emission lines in the spectra of the nova. We used the APALL task in IRAF<sup>3</sup><sup>3</sup>3IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. to optimally extract (Horne 1986) one-dimensional, sky-subtracted spectra of Nova Vel 1993. However, to minimize contamination from Star A, we extracted only a few rows of the CCD centered on the position of the nova; as shown in Figure 2, these were rows 3–5 from the center of Star A, in the left wing of its spatial profile, corresponding to a displacement of 1.3–2.1<sup>′′</sup>. To further remove contamination, we also extracted the same rows (3–5) in the right wing of Star A’s spatial profile, and subtracted this spectrum from that of the nova.<sup>4</sup><sup>4</sup>4This extraction determined the flux scale of the subtracted spectrum, but its signal-to-noise (S/N) ratio was improved by using an extraction of the entire right half (rows 0–5) of Star A’s spatial profile and scaling to the spectrum of rows 3–5. We found that Star A typically contributed 40% (65% on January 21) of the flux in the original extraction of Nova Vel 1993.
In general (but, surprisingly, not on January 21), the subsequent analysis worked best on these fully decontaminated spectra of the nova, whose typical S/N ratio per final 0.75 Å bin is $`3.3\pm 0.4`$ in the continuum. ## 3 RESULTS ### 3.1 Unphased Average Spectra Two unphased average spectra of Nova Vel 1993 are shown in Figure 3, with that of 6 March 1998 offset by 2.5 units. In the region of overlap, these resemble the spectra shown by Shahbaz et al. (1996). The strongest emission line is H$`\alpha `$, and there may be weak He I $`\lambda `$5876 on 6 March, though the latter line partially coincides with the Na I D absorption. Despite being in quiescence, Nova Vel 1993 still exhibits variability; the equivalent width (EW) of the H$`\alpha `$ emission line, which is unaffected by errors in flux calibration, was somewhat larger on March 6 (76 Å) than on January 21 (60 Å). Note that this emission is considerably weaker than in GRO J0422+32 (EW $`\approx 250`$ Å; Filippenko et al. 1995b), but comparable to that in GS 2000+25 (EW $`\approx 40`$ Å; Filippenko et al. 1995a) and Nova Oph 1977 (EW = 25–85 Å; Filippenko et al. 1997). As in the previous objects we have studied, the H$`\alpha `$ line has two peaks ($`\mathrm{\Delta }v\approx 1200`$ km s<sup>-1</sup>), more obvious in March than in January. ### 3.2 Cross-Correlations Following the procedure discussed in Filippenko et al. (1995b), we employed the FXCOR package (“Release 9/13/93”) in IRAF to cross-correlate the spectra of Nova Vel 1993 with the 13 velocity standards (shown in Fig. 4, along with two others from our previous studies). The correlation was done over the ranges 5980–6270 Å and 6320–6500 Å to avoid the H$`\alpha `$ and He I $`\lambda `$5876 emission lines, Na I D absorption, the 6270 Å interstellar line, and poorly subtracted \[O I\] $`\lambda `$6300 night-sky emission. In almost all cases a definitive correlation peak was obvious. Typical values of the Tonry & Davis (1979) significance threshold were quite high, $`R\sim 3`$, with a few as high as $`\sim 6`$. As in our previous studies, we adopted the FXCOR $`1\sigma `$ uncertainties reduced by a factor of 2.77; the Fourier transform properties of Gaussians (the functions used in fitting the cross-correlation peaks) were used to determine that FXCOR overestimates the Tonry & Davis uncertainties by this amount. The strongest formal correlation was obtained with BD+00 3090, which is officially listed as an M0 V star (Upgren et al. 1972), but over our spectral range it also looks very similar to K7 V and K8 V stars; see Figure 4. Indeed, the correlation was insignificantly lower with K7 V and K8 V stars, but far inferior with K5 V and M1 V stars. (We did not observe K6 V and K9 V stars.) Thus, we conclude that the secondary star lies somewhere in the range K7 V through M0 V, and possibly as early as K6 V. This is a slightly later spectral type than preferred by Shahbaz et al. (1996; late-G/early-K) and Della Valle et al. (1997; later than G5–K0), though the former authors note that their derived mean density for the secondary (2.4 g cm<sup>-3</sup>) suggests a K5 V star. Our phased-average spectrum of the secondary star (see below) supports the late-K classification. Using the radial velocities evaluated from the correlations with BD+00 3090 (corrected for the radial velocity of BD+00 3090 itself; 46.6 km s<sup>-1</sup>, Evans 1967), we conducted a non-linear least-squares fit (i.e., a $`\chi ^2`$ fit; Press et al. 1986, p. 521) to obtain the best cosine curve to match the data (Fig. 5).
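A minimal sketch of such a four-parameter cosine fit (our illustration, assuming `numpy` and `scipy`; the file `rv_data.txt` is a hypothetical stand-in for the measurements of Table 1) might look like:

```python
# Four-parameter cosine fit to a radial-velocity curve:
# v(t) = gamma + K * cos(2*pi*(t - T0)/P), with T0 the epoch of maximum redshifted velocity.
import numpy as np
from scipy.optimize import curve_fit

def rv_model(t, gamma, K, P, T0):
    return gamma + K * np.cos(2.0 * np.pi * (t - T0) / P)

# Columns: heliocentric Julian date, velocity (km/s), 1-sigma uncertainty (km/s).
times, velocities, errors = np.loadtxt("rv_data.txt", unpack=True)

p0 = [0.0, 450.0, 0.285, times[0]]  # initial guesses for gamma, K, P (days), T0
popt, pcov = curve_fit(rv_model, times, velocities, p0=p0, sigma=errors, absolute_sigma=True)

gamma, K, P, T0 = popt
perr = np.sqrt(np.diag(pcov))  # formal 1-sigma uncertainties from the covariance matrix
print(f"gamma = {gamma:.1f} km/s, K2 = {K:.1f} km/s, P = {P:.6f} d, T0 = HJD {T0:.4f}")
```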
The four-parameter fit (zero point, semiamplitude, period, and phase) yielded a systemic velocity of $`\gamma _2=40.7\pm 4.0`$ km s<sup>-1</sup>, a semiamplitude of $`K_2=475.4\pm 5.9`$ km s<sup>-1</sup>, a period of $`P=0.285206\pm 0.0000014`$ d (6.84 hr), and a starting time (heliocentric Julian day) for the phase of $`T_0=`$ HJD 2,450,835.0661 $`\pm `$ 0.0007, where $`T_0`$ is defined as the point of maximum redshifted velocity. All the uncertainties are the formal $`1\sigma `$ values derived from the $`\chi ^2`$ fit, but that of $`T_0`$ may be an underestimate because the choice of the range in which to search for $`T_0`$ is unconstrained by the data. A better measurement of the systemic velocity is $`\gamma _2=30.1\pm 5.0`$ km s<sup>-1</sup>, the weighted average of values obtained with the nine best standard stars. Note that when Star A (Fig. 1) was cross-correlated with the velocity standards, its derived velocities were constant to within $`\pm 7`$ km s<sup>-1</sup> (full range); thus, our results for Nova Vel 1993 are not an artifact of telescope position or instrument orientation, and our removal of the contamination by Star A (spectral type $`\sim `$K1) was effective. It is interesting to examine the distribution of possible periods resulting from our series of observations. In Figure 6, which shows $`\chi ^2`$ versus trial period, there are several major groups of possible periods; the structure within each group results from ambiguities in the counting of cycles between widely separated epochs of observation (for example, from January through March 1998). Until we obtained the observations of 1999 January 21, the periods near 0.22 d and 0.285 d were equally probable, with those near 0.18 d and 0.4 d distinctly inferior. With the additional data, however, we are now reasonably confident that 0.285 d (6.84 hr) is correct. Based on observations of photometric modulation observed during the decline from outburst, Bailyn & Orosz (1995) had speculated that the orbital period might be 1.6 hr or $`\sim 3`$ d, but these are clearly excluded by our radial velocity measurements. The possible presence of “superhumps” in the optical light curve obtained 4 months after the primary outburst led Della Valle et al. (1997) to deduce an orbital period of 4 hr, closer yet still incorrect. Our period is essentially identical to that found by Shahbaz et al. (1996; $`6.86\pm 0.12`$ hr) from Gunn $`R`$-band ellipsoidal modulations in quiescence. Note that the secondary in Nova Vel 1993 must be a dwarf; with an orbital period of only 6.84 hr, the compact primary would be inside a giant or subgiant. The formal reduced $`\chi ^2`$ ($`\chi _\nu ^2`$) for the best fit is 1.55 (17 points, 13 degrees of freedom). Given that the velocity uncertainties given by the Tonry & Davis (1979) method do not reflect external errors such as miscentering of the object in the slit, they could easily be too small. In our case, increasing them by only 25% yields a reduced $`\chi _\nu ^2`$ of 1.0. Moreover, had we assigned uncertainties to the adopted times (i.e., phases) of the observations, and performed the fit to simultaneously minimize residuals in both velocity and phase, we would have obtained a smaller value of $`\chi _\nu ^2`$. (The effective times of the observations can differ from the calculated midpoints due to variations in observing conditions — seeing, transparency, etc.)
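Looking ahead to § 4, these orbital parameters already fix the mass function; the quick check below (our illustration) also applies the mass ratio and inclination derived in §§ 3.4 and 4 to recover the primary mass.

```python
# Mass function f(M1) = P * K2^3 / (2*pi*G) from the fitted orbital parameters.
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30         # solar mass, kg

P = 0.285206 * 86400.0   # orbital period: days -> seconds
K2 = 475.4e3             # velocity semiamplitude: km/s -> m/s

f_M1 = P * K2**3 / (2.0 * math.pi * G)
print(f"f(M1) = {f_M1 / M_sun:.2f} solar masses")  # ~3.17, as derived in Section 4

# With mass ratio q = M2/M1 and inclination i: M1 = f(M1) * (1 + q)^2 / sin(i)^3.
q, i_deg = 0.137, 78.0   # values adopted in Sections 3.4 and 4
M1 = f_M1 * (1.0 + q)**2 / math.sin(math.radians(i_deg))**3
print(f"M1 = {M1 / M_sun:.1f} solar masses")       # ~4.4 for q = 0.137, i = 78 deg
```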
### 3.3 The Phased Average Spectrum Having determined the orbital parameters of the secondary star, we obtained its master “rest-frame” spectrum by averaging the 17 spectra after Doppler shifting each one to zero velocity. This represents a total integration time of $`\sim `$5 hr, but of course the S/N ratio is lower than for a single 5-hr exposure (ignoring cosmic rays) due to the increase in readout noise. Note that the H$`\alpha `$ and He I emission lines are smeared out by this process, since they are produced in the accretion disk around the compact primary star. Similarly, interstellar absorption lines become less distinct. Figure 7 shows the phased average spectrum of Nova Vel 1993, in comparison with its typical spectrum (obtained at 11:14 UT on 1998 Feb. 1), the unphased average around H$`\alpha `$, and the spectrum of the M0 V velocity standard star. Scrutiny of the phased average spectrum reveals stellar absorption lines that are also present in the M0 V star (e.g., the Ca + Fe blend at 6498 Å), but those of Nova Vel 1993 are weak. While this could indicate that the spectral type of the secondary star is substantially earlier than M0, tests show that several other factors are likely to be more important: (1) rotational broadening, perhaps 50–100 km s<sup>-1</sup> (e.g., Wade & Horne 1988; Harlaftis et al. 1996); (2) orbital broadening, typically $`4K_2T/P=80`$–90 km s<sup>-1</sup> (Filippenko et al. 1995b); and, most importantly, (3) contamination by the featureless continuum of an accretion disk. We attempted to quantify the contribution of the accretion disk in Nova Vel 1993 by comparing its phased average spectrum with that of the M0 V star, broadening and diluting the latter by various amounts. A good match to the depths of the narrow absorption lines was found with accretion disk contamination fractions of 60%–70% of the total flux density at $`\sim 6300`$ Å. Because of the uncertainties in the adopted parameters (broadening, spectral type) and the relatively low S/N ratio of the Nova Vel 1993 spectrum, we cannot confidently exclude contributions somewhat outside this range, but it is clear that the accretion disk dominates the spectrum even in quiescence. No absorption line of Li I $`\lambda `$6708 is visible in our phased-average spectrum, to a $`3\sigma `$ upper limit of $`\sim 0.1`$ Å. An unexpectedly strong Li I line (EW = 0.25–0.48 Å) is seen in the spectra of several X-ray novae (Martín et al. 1994, and references therein; Filippenko et al. 1995a), but not in others (such as Nova Oph 1977; Filippenko et al. 1997; Harlaftis et al. 1997). ### 3.4 H$`\alpha `$ Measurements To investigate the motion of the compact primary in Nova Vel 1993, we fit a Gaussian to the high-velocity wings ($`v\approx `$ 800–2000 km s<sup>-1</sup>) of the H$`\alpha `$ line in each individual spectrum using the IRAF task SPECFIT. The regions of the fit (including the continuum) were 6400–6545 Å and 6580–6710 Å; the double-horned core of the line (Fig. 3) was excluded. The velocity of the Gaussian peak and its formal $`1\sigma `$ uncertainty were adopted for each spectrum (Table 1). As shown in Figure 8, the derived velocity tends to be low in the first half of the orbit and high in the second half, perhaps suggesting periodic behavior. We used a least-squares fit to determine the H$`\alpha `$ radial-velocity cosine curve, forcing the data to have the period found for the secondary star (0.285206 d) but allowing all other parameters to vary. The formal results (Fig.
8; $`\chi _\nu ^2=1.34`$) are as follows: $`\gamma _1=4.6\pm 6.2`$ km s<sup>-1</sup>, $`K_1=65.3\pm 7.0`$ km s<sup>-1</sup>, and a zero point in the phase of HJD $`2,450,834.9686\pm 0.0093`$ d. The value of $`\gamma _1`$ is inconsistent with that of $`\gamma _2`$, but this is often the case in X-ray novae (e.g., Nova Oph 1977, Filippenko et al. 1997). Once again, the zero point in the phase is the maximum redshifted velocity; it implies that the compact object is 237° out of phase with the companion star, rather than the expected 180°. This is comparable to the offsets in GRO J0422+32 ($`253^{\circ }`$; Filippenko et al. 1995b) and GS 2000+25 ($`260^{\circ }`$; Filippenko et al. 1995a), as well as in A0620–00 and Nova Mus 1991 (Orosz et al. 1994). To date, there is no satisfactory quantitative explanation for these distortions, but they suggest that the accretion disk often has a nonaxisymmetric distribution of surface brightness, noncircular velocities, or a warp. They also cast some doubt on the use of H$`\alpha `$ radial-velocity curves to determine the motion and mass of the primary star. On the other hand, the mass ratios determined in this manner are frequently quite consistent with those obtained with independent techniques (e.g., A0620–00, Orosz et al. 1994; GS 2000+25, Harlaftis et al. 1996; Nova Oph 1977, Harlaftis et al. 1997; GRO J0422+32, Harlaftis et al. 1999). Thus, here we will cautiously adopt the ratio of semiamplitudes as an estimate of the mass ratio: $`q=M_2/M_1=K_1/K_2=0.137\pm 0.015`$. ## 4 THE MASS OF THE COMPACT PRIMARY From the semiamplitude ($`K_2=475.4\pm 5.9`$ km s<sup>-1</sup>) and period ($`P=0.285206\pm 0.00000138`$ d) of the radial velocity curve of the secondary star, we find a mass function $`f(M_1)=PK_2^3/2\pi G=3.17\pm 0.12M_{\odot }`$. This corresponds to the absolute minimum mass of the compact primary, and it is close to the maximum gravitational mass of a slowly rotating neutron star (3.0–3.2 $`M_{\odot }`$; see Chitre & Hartle 1976, and the discussion in Filippenko et al. 1995b). No evidence for eclipses is seen in the data of Bailyn & Orosz (1995), Shahbaz et al. (1996), or Della Valle et al. (1997); similarly, we do not see significant variations in the apparent brightness of the secondary star over the orbital period. Hence, it is likely that the orbital inclination $`i\lesssim 80^{\circ }`$, and the relation $`f(M_1)=M_1^3\mathrm{sin}^3i/(M_1+M_2)^2`$ then implies that $`M_1\approx `$ 4.2–4.4 $`M_{\odot }`$ for nominal secondary-star masses of 0.5–0.65 $`M_{\odot }`$ (M0–K6 V; Allen 1976). Even if the secondary is quite undermassive (e.g., $`M_2=0.3M_{\odot }`$), as in some X-ray binaries (e.g., van den Heuvel 1983), we derive $`M_1\gtrsim 3.9M_{\odot }`$. The primary star is therefore almost certainly a black hole rather than a neutron star. Adopting the mass ratio derived from the measured semiamplitude of the radial velocity curve of the primary ($`q=M_2/M_1=K_1/K_2=0.137\pm 0.015`$), we find $`M_1=`$ 3.64–4.74 $`M_{\odot }`$ if $`M_2=`$ 0.5–0.65 $`M_{\odot }`$. The constraints from $`q`$ and the mass function yield $`M_1\approx 4.4M_{\odot }`$ and $`i\approx 78^{\circ }`$ if we use a normal K7–K8 secondary ($`M_2\approx 0.6M_{\odot }`$). Indeed, consistency cannot be achieved for $`M_2\lesssim 0.59M_{\odot }`$ if the maximum inclination estimate ($`80^{\circ }`$) is correct. We conclude that the secondary star cannot be substantially undermassive, and that the orbital inclination is probably rather large, almost making Nova Vel 1993 an eclipsing binary. Our suggested inclination is significantly higher than the nominal value derived by Shahbaz et al.
(1996) from their observed $`R`$-band ellipsoidal modulations ($`i=44^{\circ }\pm 7^{\circ }`$). However, these authors admit that when contamination by light from the accretion disk is included, their allowed range for the inclination is $`37^{\circ }`$–$`80^{\circ }`$. Of course, in view of the unexplained phase offset between the expected and observed H$`\alpha `$ radial velocity curves (§ 3.4), it is also possible that our derived value of $`q`$ is erroneous, thereby affecting our estimate of $`i`$. Further studies are needed to accurately determine the inclination. Recently, Bailyn et al. (1998) found that the distribution of masses of putative black holes in LMXBs is very strongly peaked at $`\sim 7M_{\odot }`$, only V404 Cyg being a clear high-mass deviant. The number of objects in the sample is still quite small, but if our mass estimate for Nova Vel 1993 is correct, then it appears to be a low-mass counterexample ($`M_1\approx 4.4M_{\odot }`$). Another possible exception is GRO J0422+32 ($`M_1\sim 5M_{\odot }`$; Harlaftis et al. 1999). An independent measure of the mass ratio of Nova Vel 1993 from the rotational broadening of the secondary star’s absorption lines (e.g., Nova Oph 1977, Harlaftis et al. 1997; GRO J0422+32, Harlaftis et al. 1999), together with better constraints on the inclination derived from near-infrared ellipsoidal modulations (e.g., GS 2000+25, Callanan et al. 1996), would provide a very useful check on our estimated mass for the primary star. When we calculate the effective Roche lobe radius ($`R_L`$) of the companion star from the relation of Paczyński (1971; see also Eggleton 1983) and Kepler’s third law, we find that $`R_L=0.7R_{\odot }`$ if $`M_1\approx 4.4M_{\odot }`$ and $`M_2\approx 0.6M_{\odot }`$. This is only slightly larger than the expected radius of a typical K8 dwarf ($`R=0.67R_{\odot }`$; Allen 1976). Thus, the secondary star may be just starting its evolution off the main sequence. ## 5 OBSERVATIONS OF MXB 1659–29 IN QUIESCENCE As part of our effort to determine the mass functions of X-ray binaries, we observed the field of MXB 1659–29. This burst source was discovered in 1976 October with SAS 3 (Lewin et al. 1976). It was initially considered unusual because of its very stable burst intervals (Lewin 1977) and apparent absence of constant emission, but such emission was found a year later (Lewin et al. 1978; Share et al. 1978). An optical counterpart (now known as V2134 Oph) was discovered by Doxsey et al. (1979) at $`V=18.3`$ mag; it was quite blue, and possibly exhibited emission lines of He II $`\lambda `$4686 and C III/N III $`\lambda \lambda `$4640–4650. Figure 2 of Doxsey et al. (in which N is up and E to the left, although this is not stated) shows $`U`$-band and $`B`$-band finder charts for the object, from images taken on 1978 May 30 and June 1 with the CTIO 4-m telescope. We obtained three dithered $`R`$-band images (exposure times of 60, 60 and 30 s) of the MXB 1659–29 field on 1999 February 9 with LRIS/Keck-II, at airmass 1.8–1.9. These were bias-subtracted, flattened, registered, and combined in the usual manner. A small subset of the resulting crowded image (Galactic latitude 7.3°) is shown in Figure 9a; the FWHM of stars is measured to be $`0.9^{\prime \prime }`$. Star A is close to the apparent position of V2134 Oph indicated in the relatively shallow charts published by Doxsey et al. (1979). However, seven LRIS/Keck-II long-slit spectra (PA = 160°) of this star, each with a typical exposure time of 900–1000 s, do not reveal any H$`\alpha `$ emission characteristic of accretion disks.
Moreover, cross-correlation of the individual spectra with the 13 velocity standards (G5 providing the best match) reveals no clear variability beyond the $`\pm 15`$ km s<sup>-1</sup> level, and no systematic trend among consecutive exposures, casting further doubt on Star A as the secondary. H$`\alpha `$ emission is also weak or absent in our noisy spectrum of Star B (Fig. 9a), and Star E has a spectral type of M. Shortly before the completion of this paper, MXB 1659–29 went into outburst again, after a hiatus of 21 years. During the interval 1999 April 2.06–3.47, the Wide Field Camera on BeppoSAX detected a transient X-ray source coincident with the position of MXB 1659–29 (in ’t Zand et al. 1999). This object was confirmed on April 5.83–6.05 with RXTE (Markwardt et al. 1999). Optical observations (Augusteijn, Freyhammer, & in ’t Zand 1999) on April 3.41 revealed a bright new source ($`V=18.3\pm 0.1`$) at that location, with a spectrum typical of LMXBs in an X-ray bright phase (emission lines of H I, He II, C III, and N III). An image (exposure time 30 s) obtained on April 18 with LRIS/Keck-II is shown in Figure 9b (stellar FWHM = $`0.65^{\prime \prime }`$); the optical counterpart ($`R=18.2\pm 0.05`$) is marked “F.” This star is also visible at the center of the circle in Figure 9a, barely at the detection limit. We measured the $`R`$ magnitude of V2134 Oph in quiescence (Fig. 9a) with the technique of point-spread-function (PSF) fitting, where the PSF was determined iteratively by subtracting faint stars near the ones chosen for the PSF. The zero point was obtained from twilight-sky observations of PG1525–071A,C (Landolt 1992). Star F, which we identify with V2134 Oph in quiescence, has $`R=23.6\pm 0.4`$ mag. If Star F is just a chance superposition, then the true optical counterpart is even fainter. Thus, any future attempts to obtain the mass function of MXB 1659–29 (V2134 Oph) will be extremely difficult to perform! As an aid for future photometry of this object, we note that the final magnitudes for Stars A, B, C, D, and E are 19.6, 22.5, 23.1, 23.3, and 21.0, respectively. The $`1\sigma `$ uncertainty is about 0.05 mag at the bright end (primarily due to the dearth of photometric standards) and perhaps 0.2 mag for stars at $`R\approx 23`$. ## 6 CONCLUSIONS Our observations of Nova Vel 1993 provide a definitive mass function of $`3.17\pm 0.12M_{\odot }`$, and a likely mass of around $`4.4M_{\odot }`$ for the primary star. Thus, Nova Vel 1993 joins the small but growing list of secure Galactic black holes first identified as X-ray novae. However, its mass seems to be lower than that of other objects in its class, bridging the apparent gap between $`\sim 3M_{\odot }`$ (the theoretical maximum mass of a neutron star, though observed masses almost always yield $`M=1.0`$–1.8 $`M_{\odot }`$; Thorsett et al. 1993) and $`\sim 7M_{\odot }`$ (the mass of most Galactic black holes in binary systems with well-determined parameters). It will be important to measure the mass ratio and orbital inclination of the system with techniques independent of the indirect ones used here, to confirm our estimates ($`q=0.137\pm 0.015`$; $`i\approx 78^{\circ }`$) and our derived mass. A similar study of MXB 1659–29 (V2134 Oph) eliminates several candidate stars as the optical counterpart. The recent new outburst, which occurred just prior to the submission of this paper, allows us to identify the quiescent nova at $`R=23.6\pm 0.4`$ mag, unless this is an unrelated star superposed along the line of sight. Data presented herein were obtained at the W. M.
Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. We thank T. Bida, R. Campbell, R. Goodrich, D. Lynn, G. Puniwai, H. Rodriguez, C. Sorensen, W. Wack, T. Williams, and other members of the Keck staff for their able assistance. We are also grateful to N. Vogt, A. C. Phillips, and D. C. Koo for obtaining a Keck image of MXB 1659–29 on April 18, after its recent outburst. This work was supported by the NSF through Grant AST–9417213 to A.V.F.
no-problem/9904/hep-th9904120.html
ar5iv
text
# Cosmology vs. Holography ## I Introduction Recently a new set of ideas was put forward, which was called “the holographic principle” . According to this set of ideas, under certain conditions all the information about a physical system is coded on its boundary, implying that the entropy of a system cannot exceed its boundary area in Planck units. This principle was motivated by the well-known result in black hole theory: the total entropy $`S_m`$ of matter inside of a black hole cannot be greater than the Bekenstein-Hawking entropy, which is equal to a quarter of the area of the event horizon in the Planck units, $`S_m\le S_{BH}=\frac{A}{4}`$ . One can interpret this result as a statement that all the information about the interior of a black hole is stored on its horizon. The main aim of the holographic principle is to extend this statement to a broader class of situations. This principle, in its most radical form, would imply that our world is two-dimensional in a certain sense, because all the information about physical processes in the world is stored at its surface. This conjecture is very interesting, and physical implications of its most radical version could be quite significant. There has been a lot of activity related to the use of the holographic principle in quantum gravity, string theory and M-theory. For example, there is a conjecture that the knowledge of a supersymmetric Yang-Mills theory at the boundary of an Anti-de-Sitter space may be sufficient to restore the information about supergravity/string theory in the bulk . However, if one tries to apply the holographic principle to cosmology, one immediately recognizes several problems. For example, a closed universe has finite size, but it does not have any boundary. What is the meaning of the holographic principle in such a case? If the universe is infinite (open or flat), then it does not have boundaries either. In these cases, one may try to compare the entropy inside of a box of size $`R`$ with its area, and then take the limit as $`R\to \mathrm{\infty }`$. But in this limit the entropy is always larger than the area . Another possibility is to compare the area of a domain of the size of the particle horizon (the causally connected part of the universe) with the entropy of matter inside this domain. But this is also problematic. The entropy produced during reheating after inflation is proportional to the total volume of the inflationary universe. During inflation, the volume inside the particle horizon grows as $`e^{3Ht}`$, whereas the area of the horizon grows as $`e^{2Ht}`$. Clearly, the entropy becomes much greater than the area of the horizon if the duration of inflation is sufficiently large. This means that an inflationary universe is not two-dimensional; information stored at its “surface” is not rich enough to describe physical processes in its interior. In fact, one of the main advantages of inflation is the possibility to study each domain of size $`H^{-1}`$ as an independent part of the universe, due to the no-hair theorem for de Sitter space. This makes the events at the boundaries of an inflationary universe irrelevant for the description of local physics . Thus, the most radical version of the holographic principle seems to be at odds with inflationary cosmology. One may try to formulate a weaker form of this principle, which may still be quite useful. For example, Fischler and Susskind proposed to put constraints only on the part of the entropy which passed through the backward light cone .
This formulation does not confront inflationary cosmology because it eliminates from the consideration most of the entropy produced inside the light cone during the post-inflationary reheating of the universe. They further concentrated on investigation of those situations where cosmological evolution is adiabatic. From the point of view of inflationary cosmology, this means that they considered the evolution of the universe after reheating. The largest domain in which all of the entropy crossed the boundary when the evolution is adiabatic is bounded by the light cone emitted after inflation and reheating. In what follows we will loosely call this light cone of size $`O(H^{-1})`$ “particle horizon,” even though the true particle horizon, describing the light cone emitted at the beginning of inflation, is exponentially large. Fischler and Susskind argued that in the case of adiabatic evolution the total entropy of matter within the particle horizon must be smaller than the area of the horizon, $`S\le A`$ . This conjecture is rather nontrivial. Indeed, the origin of the Bekenstein-Hawking constraint on the entropy of a black hole is the existence of the event horizon, which serves as a natural boundary for all processes inside a black hole. But there is no event horizon in a non-inflationary universe, and the idea to replace it by the particle horizon requires some justification. Also, the Bekenstein-Hawking constraint on the entropy is valid even if the processes inside a black hole are non-adiabatic. Thus it would be desirable to investigate this proposal and find a way to apply it to the situations when the processes can be non-adiabatic. Remarkably, Fischler and Susskind have shown that their conjecture is valid for a flat universe with all possible equations of state satisfying the condition $`0\le p\le \rho `$. This result suggests that there may be some deep reasons for the validity of holography. However, they also noticed that their version of the holographic principle is violated in a closed universe. One may consider this observation either as an indication that closed universes are impossible or as a warning, showing that the holographic principle may require additional justification and/or reformulation. Indeed, this principle is not a rigid scheme but a theory in the making. It may be quite successful in many respects, but one should not be surprised to see some parts of its formulation change. For example, Bak and Rey suggested replacing the particle horizon by an apparent horizon in the formulation of the holographic principle, claiming that their proposal does not suffer from any problems in the closed universe case . There were many attempts to apply various formulations of the holographic principle to various cosmological models, but the existing literature on cosmic holography is somewhat controversial. The entropy of the observed component of matter (such as photons) is well below $`10^{90}`$ . Meanwhile the constraint $`S\le A`$ applied to our part of the universe implies that $`S<10^{120}`$ , which does not look particularly restrictive. Holography could be quite important if it were able to rule out some types of cosmological models, but this possibility depends on the formulation and the range of validity of the holographic principle. One may try to use holography to solve the cosmological constant problem , but the progress in this direction was very limited. Recently it was claimed that holography puts strong constraints on inflationary theory , but the authors of Ref.
argued that this is not the case. Holographic considerations were used in investigation of the pre-big bang theory , and on the basis of this investigation it was claimed that this solves the entropy problem of the pre-big bang theory, which is at odds with the results of . The main goal of this paper is to examine the basic assumptions of cosmic holography and check which of them may require modifications. We will try to find out whether holography indeed puts constraints on various cosmological models. We will show, in particular, that the original formulation of the holographic principle should be reconsidered more generally, and not only when applied to closed universes. The holographic entropy bound proposed in , as well as the formulation proposed in , is violated at late stages of evolution of open, flat and closed universes containing usual matter and a small amount of negative vacuum energy density. At the beginning of their evolution, such universes cannot be distinguished from the universe with a positive or vanishing vacuum energy density. Thus there is no obvious reason to consider such universes unphysical and rule them out. However, when the density of matter becomes diluted by expansion, a universe with a negative vacuum energy collapses, and the condition $`S\le A`$ becomes violated long before the universe reaches the Planck density. The investigation of universes with a negative cosmological constant gives an additional reason to look for a reformulation of the cosmological holographic principle. Our approach will be most closely related to the approach outlined by Easther and Lowe , and by Veneziano . They argued that the entropy of the interior of a domain of size $`H^{-1}`$ cannot be greater than the entropy of a black hole of a similar radius. We will extend their discussion and propose a justification for the entropy bound obtained in Ref. for the case of an expanding noninflationary (or post-inflationary) universe. We will argue, in agreement with , that in those cases where the holographic bound of Ref. is valid, it is equivalent to the Bekenstein-Hawking bound, which does not require any assumptions about adiabatic evolution. This bound alone cannot resolve the entropy problem for the pre-big bang cosmology and does not lead to any constraints on inflation. ## II Cosmology and holography ### A Flat universe with $`p=\gamma \rho `$ Let us begin with a brief review of . We will restrict our attention to the case when gravitational dynamics is given by Einstein’s equations, and the evolution is adiabatic. First we will consider flat homogeneous and isotropic FRW universes, whose metric is $$ds^2=-dt^2+a^2(t)\left(dr^2+r^2d\mathrm{\Omega }\right).$$ (1) We will use the units $`8\pi G_N=1`$. For simplicity we will consider matter with the energy-momentum tensor $`T_{\mu \nu }`$ = diag$`(\rho ,p,p,p)`$. The independent equations of motion are $$H^2=\rho /3,\qquad \dot{\rho }+3H(\rho +p)=0,$$ (2) where $`H=\dot{a}/a`$ is the Hubble parameter, $`\rho `$ and $`p`$ are the energy density and pressure, and the overdot denotes the time derivative. We will assume that $`\rho >0`$, $`p=\gamma \rho `$, and that the energy-momentum tensor satisfies the dominant energy condition $`|\gamma |\le 1`$. This will generalize the results of obtained for $`0\le \gamma \le 1`$, and is in fact the correct sufficient condition for the validity of the holographic bounds in flat and open FRW universes.
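Eqs. (2) are also easy to integrate numerically; the sketch below (our illustration, assuming `numpy` and `scipy`) reproduces the closed-form power-law solution quoted at the start of the next paragraph.

```python
# Numerical integration of the FRW equations (2) with equation of state p = gamma * rho.
import numpy as np
from scipy.integrate import solve_ivp

gamma = 1.0 / 3.0                      # e.g., radiation domination
rho0 = 4.0 / (3.0 * (gamma + 1.0)**2)  # density at the Planck time t = 1, so that a(1) = 1

def frw(t, y):
    a, rho = y
    H = np.sqrt(rho / 3.0)             # H^2 = rho / 3
    return [a * H, -3.0 * H * (1.0 + gamma) * rho]

sol = solve_ivp(frw, [1.0, 1e6], [1.0, rho0], rtol=1e-10, atol=1e-12, dense_output=True)

t_test = 1e5
a_num = sol.sol(t_test)[0]
a_exact = t_test ** (2.0 / (3.0 * (gamma + 1.0)))  # the analytic power-law solution
print(f"a(numerical) = {a_num:.4e}, a(power law) = {a_exact:.4e}")
```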
The solutions of (2) for $`\gamma >-1`$ can be written as $$a(t)=t^{\frac{2}{3(\gamma +1)}}.$$ (3) Here we took by definition $`a=1`$ at the Planck time $`t=1`$. The density decreases as $`\rho =\frac{\rho _0}{a^{3(\gamma +1)}}`$, where $`\rho _0=\frac{4}{3(\gamma +1)^2}`$ is the density at $`t=1`$. (For $`\gamma =-1`$ one has the usual de Sitter solution.) The particle horizon is defined by the distance covered by the light cone emitted at the singularity $`t=0`$: $$L_H(t)=a(t)\int_0^t\frac{dt^{\prime }}{a(t^{\prime })}=a(t)r_H(t),$$ (4) where $`r_H`$ is the comoving size of the horizon, defined by the condition $`\frac{dt}{a}=dr_H`$. Suppose first that $`\gamma >-1/3`$. Then the comoving horizon is $$r_H=L_H/a=\frac{3(\gamma +1)}{3\gamma +1}t^{\frac{3\gamma +1}{3(\gamma +1)}},$$ (5) and $$L_H=\frac{3(\gamma +1)}{3\gamma +1}t=\frac{2}{3\gamma +1}H^{-1}.$$ (6) At the Planck time $`t=1`$ one has $`L_H=\frac{3(\gamma +1)}{3\gamma +1}`$, which generically is $`O(1)`$. The volume of space within the distance $`L_H`$ from any point was also $`O(1)`$. The entropy density at that time could not be greater than $`O(1)`$, so one may say that initially $`\left(\frac{S}{A}\right)_0=\sigma \lesssim 1`$. Later the total entropy inside the horizon grows as $`\sigma L_H^3/a^3`$, whereas the total area $`A`$ of the particle horizon grows as $`L_H^2`$. Therefore $$\frac{S}{A}\sim \sigma \frac{L_H}{a^3}=\sigma \frac{r_H}{a^2}.$$ (7) This yields $$\frac{S}{A}\sim \sigma t^{\frac{\gamma -1}{\gamma +1}}.$$ (8) Thus the ratio $`\frac{S}{A}`$ does not increase in time for $`1\ge \gamma >-1/3`$, so if the holographic constraint $`\frac{S}{A}\lesssim 1`$ was satisfied at the Planck time, later on it will be satisfied even better . A similar result can be obtained for $`-1\le \gamma \le -1/3`$. However, the investigation of this case involves several subtle points. First of all, in this case the integral in Eq. (4) diverges at small $`t`$. This is not a real problem though. It is resolved if one defines the particle horizon as an integral not from $`t=0`$, but from the Planck time $`t=1`$. A more serious issue is the assumption of adiabatic expansion of the universe. If one makes this assumption, then one can show that the holographic bound is satisfied for all $`\gamma `$ in the interval $`-1\le \gamma \le 1`$, which generalizes the result obtained in . However, the universe with $`1+\gamma \le 2/3`$ (i.e. with $`\gamma \le -1/3`$) is inflationary. The density of matter after inflation becomes negligibly small, so it must be created again in the process of reheating of the universe. This process is strongly nonadiabatic. As we already mentioned in the Introduction, in inflationary cosmology the bounds of Ref. refer to the post-inflationary particle horizon, which means that the integration in Eq. (4) should begin not at $`t=0`$ or at $`t=1`$, but after the reheating of the universe. One can easily verify that the bounds obtained in are valid in this case as well. ### B Closed universe The metric of a closed FRW universe is $$ds^2=-dt^2+a^2(t)(d\chi ^2+\sin ^2\chi \,d\Omega ),$$ (9) where the spatial part represents a $`3`$-sphere, with $`\chi `$ being the azimuthal angle and $`d\Omega `$ the line element on the polar $`2`$-spheres. The light cones are still bounded by the particle horizon. However, due to the curvature of the $`3`$-sphere, the light rays must now travel along the azimuthal direction in order to maximize the sphere of causal contact.
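Before working out the closed-universe horizon, the flat-universe scaling (8) is easy to check numerically. The following is a minimal sketch, with illustrative choices of $`\gamma `$ and of the time interval; it compares the measured power of $`t`$ in $`S/A`$ with the exponent $`\frac{\gamma -1}{\gamma +1}`$:

```python
# A quick numerical check of Eq. (8); gamma values and times are illustrative.
import numpy as np

def S_over_A(t, gamma, sigma=1.0):
    a = t**(2.0/(3.0*(gamma + 1.0)))                                   # Eq. (3)
    r_H = 3.0*(gamma + 1.0)/(3.0*gamma + 1.0) \
          * t**((3.0*gamma + 1.0)/(3.0*(gamma + 1.0)))                 # Eq. (5)
    return sigma*r_H/a**2                                              # Eq. (7)

for gamma in (1.0, 1.0/3.0, 0.0):
    t1, t2 = 10.0, 1000.0
    slope = np.log(S_over_A(t2, gamma)/S_over_A(t1, gamma))/np.log(t2/t1)
    print(gamma, slope, (gamma - 1.0)/(gamma + 1.0))   # measured vs. Eq. (8)
```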
The comoving horizon is the extent of the azimuthal angle traveled by light between times $`0`$ and $`t`$: $$\chi _H=\frac{L_H}{a}=\int_0^t\frac{dt^{\prime }}{a(t^{\prime })}.$$ (10) The boundary area of the causal sphere is then given by $$A=4\pi a^2(t)\sin ^2\chi _H.$$ (11) The volume inside of this sphere is $$V=\int_0^{\chi _H}d\chi \,\sin ^2\chi \int d\Omega =\pi (2\chi _H-\sin 2\chi _H).$$ (12) Assuming a constant comoving entropy density $`\sigma `$, we find $$\frac{S}{A}=\sigma \frac{2\chi _H-\sin 2\chi _H}{4a^2(t)\sin ^2\chi _H}.$$ (13) Here we have explicitly retained the contribution from the comoving entropy density $`\sigma `$, which was ignored in . Consider for simplicity a cold dark matter dominated universe, with $`p\ll \rho `$. In this case $`a=a_{\mathrm{max}}\sin ^2(\chi _H/2)`$. The moment $`\chi _H=\pi `$ corresponds to the maximal expansion, $`a=a_{\mathrm{max}}`$. But at that time the light cone emitted from the "North pole" of the universe converges at the "South pole," the area of the horizon (11) vanishes, and the holographic bound on the ratio $`S/A`$ becomes violated . Note that in all other respects the point $`\chi _H=\pi `$ is regular, so one cannot argue, for example, that the violation of the holographic bound is a result of violent quantum fluctuations of the light cone. ### C Open, closed and flat universes with $`\mathrm{\Lambda }<0`$ Let us return to the discussion of the flat universe case and look at Eq. (7) again. The size of the comoving horizon $`r_H`$ can only grow. Despite this growth, the holographic bound is satisfied for $`\rho >0`$, $`p>-\rho `$, because the value of $`a^2`$ grows faster than $`r_H`$ in this regime. But this bound can be violated if $`a^2`$ grows more slowly than $`r_H`$, and it will definitely be violated in all cases where a flat space can collapse. Usually, cosmologists believe that closed universes collapse, whereas open or flat universes expand forever. But the situation is not quite so simple. If there is a sufficiently large positive cosmological constant, then even a closed universe will never collapse. On the other hand, if the cosmological constant is negative, then, even if it is extremely small, it eventually becomes dominant, and the universe collapses, independently of whether it is closed, open or flat. In all of these cases the holographic principle, as formulated in , will be violated. For simplicity, we will consider a flat universe ($`k=0`$) with a negative vacuum energy density $`-\lambda <0`$, so that $`\rho =p/\gamma -\lambda `$. We will assume that $`\lambda \ll 1`$ in Planckian units. For example, in our universe $`\lambda `$ cannot be greater than $`10^{-122}`$. In an expanding universe $`\rho =\frac{\rho _0}{a^{3(\gamma +1)}}-\lambda `$, and the Friedmann equation $$3H^2=\frac{\rho _0}{a^{3(\gamma +1)}}-\lambda $$ (14) can be rewritten as $$\dot{a}=\pm \frac{1}{\sqrt{3}}\sqrt{\frac{\rho _0}{a^{3\gamma +1}}-\lambda a^2}.$$ (15) Because of the presence of the cosmological term, in general we cannot write the integrals in a simple form. However, the exact form of the solutions is not necessary for our purposes here. First of all, we see that $`\dot{a}`$ vanishes at $`\lambda a^{3(\gamma +1)}=\rho _0`$, after which $`\dot{a}`$ becomes negative and the universe collapses. This happens within a finite time after the beginning of the expansion.
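Returning for a moment to the closed-universe bound, Eq. (13) can be evaluated explicitly for the matter-dominated case. A minimal sketch (the values of $`\sigma `$, $`a_{\mathrm{max}}`$ and the sampled angles are illustrative):

```python
# Direct evaluation of Eq. (13) with a = a_max*sin^2(chi_H/2) (dust).
import numpy as np

def S_over_A(chi_H, a_max=1.0, sigma=1.0):
    a = a_max*np.sin(chi_H/2.0)**2
    return sigma*(2.0*chi_H - np.sin(2.0*chi_H))/(4.0*a**2*np.sin(chi_H)**2)

for chi_H in (0.5*np.pi, 0.9*np.pi, 0.99*np.pi):
    print(chi_H/np.pi, S_over_A(chi_H))   # grows without bound as chi_H -> pi
```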
From the definition of the particle horizon and (15), one can find the value of $`L_H`$ at the turning point: $$L_H(\mathrm{turning})=\frac{\sqrt{3}\,B\left(\frac{3\gamma +1}{6(\gamma +1)},\frac{1}{2}\right)}{3(\gamma +1)\sqrt{\lambda }},$$ (16) where $`B(p,q)`$ is the Euler beta function. Putting these formulas together, we see that at the turning point $$\frac{S}{A}\sim \sigma \lambda ^{\frac{1-\gamma }{2(1+\gamma )}}$$ (17) up to factors of order unity. For $`1>\gamma >-1`$, the power of $`\lambda `$ is positive and so the ratio $`S/A`$ is very small at the turning point. Now we can consider what happens near the final stages of collapse, when the energy density reaches Planckian scales. By symmetry, $`L_H\simeq 2\frac{a_0}{a(\mathrm{turning})}L_H(\mathrm{turning})\sim \lambda ^{-(3\gamma +1)/[6(\gamma +1)]}`$ at this time, whereas $`\sigma /a^3\sim 1`$. Hence, Eq. (7) yields $`S/A\sim \lambda ^{-(3\gamma +1)/[6(\gamma +1)]}\gg 1`$ when $`\gamma >-1/3`$. Therefore, we see that the ratio $`S/A`$ reaches unity at some time after the turning point, and that the holographic bound becomes violated thereafter, yet still well within the classical phase, when the universe is still very large. Indeed, we can estimate the density of matter at that time to be $`\rho \sim \lambda ^{\frac{\gamma +1}{2}}\ll 1`$. A universe where the only energy density is in the form of a negative cosmological constant is called anti de Sitter space (AdS). In string theory, AdS spaces typically emerge after compactifying string or M theory on an internal, compact, Einstein space of positive constant curvature. Many interesting applications of the holographic principle have been elaborated for pure AdS space. It is therefore quite interesting that in the cosmological context an AdS background containing matter describes a collapsing Friedmann universe with a negative vacuum energy, in which the cosmological holographic principle is violated. ### D AdS spaces with matter and an alternative formulation of cosmic holography In order to cure the problems of the original formulation of the cosmological holographic principle, Bak and Rey proposed a different formulation . They suggested considering the so-called apparent horizon instead of the particle horizon, and claimed that in this case the holographic bound holds even in a closed universe. We will not present here a detailed discussion of their proposal. Instead we will consider their holographic bound in the three-dimensional spatially flat universe ($`d=3`$), see Eq. (16) of : $$\frac{4\sigma }{3a^2(t)\dot{a}(t)}\le 1.$$ (18) This condition is violated when the universe approaches the turning point at $`\lambda a^{3(\gamma +1)}=\rho _0`$, where one has $`\dot{a}=0`$. This violation occurs even earlier than in the original formulation of the cosmological holographic principle of Ref. . One can propose two possible interpretations of these results. First of all, one may argue that closed universes are impossible, and that universes with a negative cosmological constant are also impossible. We do not see how one could justify such a statement. After all, the main reason why the holographic constraint was violated in both cases studied above was related to the possibility of gravitational collapse. It would be very odd to expect that the holographic principle, which was motivated by the study of black holes, should imply that gravitational collapse cannot occur. Another possibility is that the formulations of cosmic holography proposed in should be somewhat modified in those cases where the universe may experience collapse.
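The turning-point horizon (16) can be checked by integrating Eq. (15) directly. The following sketch does this for dust ($`\gamma =0`$), with $`\rho _0=1`$ and illustrative values of $`\lambda `$; it also exhibits the $`\lambda ^{-1/2}`$ scaling used above:

```python
# Numerical check of Eq. (16) for gamma = 0; rho_0 and lam are illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

gamma, rho0 = 0.0, 1.0
for lam in (1e-4, 1e-6):
    a_turn = (rho0/lam)**(1.0/(3.0*(gamma + 1.0)))        # adot = 0 in Eq. (15)
    integrand = lambda a: np.sqrt(3.0)/(a**2*np.sqrt(rho0*a**(-3.0*(gamma + 1.0)) - lam))
    r_H, _ = quad(integrand, 0.0, a_turn, limit=200)      # comoving horizon, Eq. (4)
    L_H = a_turn*r_H
    L_H_pred = (np.sqrt(3.0)*beta((3.0*gamma + 1.0)/(6.0*(gamma + 1.0)), 0.5)
                /(3.0*(gamma + 1.0)*np.sqrt(lam)))        # Eq. (16)
    print(lam, L_H, L_H_pred)            # agree; both scale as 1/sqrt(lam)
```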
It would also be interesting to understand the reasons why the holographic inequalities were correct in the flat universe case. We will discuss this issue in the next section. ## III Black holes as big as a universe The simplest way to understand the holographic bound on the entropy of the observable part of the universe is related to the theory of black holes. In what follows we will develop further an argument given by Easther and Lowe , and by Veneziano . The simplest cosmological models are based on the assumption that our universe is homogeneous. But how do we know that it is indeed homogeneous, if the only part of the universe that we can see has size $`H^{-1}`$? (If one takes into account inflation, then the particle horizon is exponentially large. Still, we can see, by means of electromagnetic radiation, only a small part of the universe, of size $`H^{-1}\sim t`$. It is important that this scale, rather than the particle horizon, determines the largest size of a black hole which can be formed in an expanding universe.) We cannot exclude the possibility that if we wait for another 10 billion years, we will see that we live near the center of an expanding but isolated gravitational system of size $`O(H^{-1})`$ in an asymptotically flat space. Then we can apply the Bekenstein bound to the entropy of this system, $`S\lesssim ER`$, where $`E\sim \rho R^3`$ is the total energy and $`R\sim H^{-1}`$ is the size of this system, with $`H^2\sim \rho `$, in Planck units. This gives $`S\lesssim H^{-2}`$, which coincides with the holographic bound. Of course, the idea that our part of the universe is a small isolated island of size $`H^{-1}`$ is weird, but we do not really advocate this view here. Rather, we simply say that since we cannot tell whether the universe is homogeneous or whether it is an island of size somewhat greater than $`H^{-1}`$, the bound $`S\lesssim H^{-2}`$ must hold for a usual homogeneous universe as well. One can look at this constraint from a different perspective. It is well known that if our universe is locally overdense on the scale of the horizon, with $`\frac{\delta \rho }{\rho }=O(1)`$, the overdense part will collapse and form a black hole of size $`H^{-1}`$ . Then the entropy of this part of the universe will satisfy the black hole bound $`S\lesssim H^{-2}`$. Again, there is no indication that $`\frac{\delta \rho }{\rho }=O(1)`$ on the horizon scale, but since we cannot exclude this possibility on a scale somewhat greater than the present value of $`H^{-1}`$, the bound should apply to the homogeneous universe as well. Instead of debating the homogeneity of our universe, one can imagine adding a sufficient amount of cold dark matter to a part of our universe of size $`R`$. This would not change its entropy, but it would lead to black hole formation. Then one can find an upper bound on the entropy of a black hole of size $`R`$: $`S\lesssim R^2`$. If one takes $`R\sim H^{-1}`$, one again finds that $`S\lesssim H^{-2}`$. The bound $`S\lesssim R^2`$ implies that the density of entropy satisfies the constraint $`s=S/R^3<1/R`$. Thus one could expect that it is possible to get a more stringent constraint on the density of entropy by considering black holes of size greater than $`H^{-1}`$. However, according to Carr and Hawking , black holes formed in a flat universe cannot have size greater than $`O(H^{-1})`$. This constraint has a dynamical origin, and is not related to the size of the particle horizon.
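For reference, the chain of estimates just made can be collected in a single line (all quantities in Planck units): $$S\lesssim ER\sim \rho R^4\sim H^2H^{-4}=H^{-2}\sim A.$$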
Usually the difference between $`H^{-1}`$ and the particle horizon is not very large, but during inflation this difference is very significant: $`H^{-1}`$ remains nearly constant, whereas the particle horizon grows exponentially. If an inflationary domain is homogeneous on a scale $`O(H^{-1})`$, then it is going to expand exponentially, independently of any inhomogeneities on larger scales. Such a domain is not going to collapse and form a black hole until inflation ends and we wait long enough to see the boundaries of the domain. But this will not happen for an exponentially long time. Nevertheless the holographic constraints on the entropy can be derived for the processes after inflation, just as in the cases considered above. These constraints will be related to the size of the largest black hole which can be formed during the expansion of the post-inflationary universe, $`R\sim H^{-1}`$, rather than to the exponentially large size of the particle horizon in an inflationary universe. As a result, the holographic bounds do not lead to constraints on the duration of inflation, inflationary density perturbations, or the other parameters of inflationary theory discussed in . If the universe is non-inflationary and closed, or if it has a negative cosmological constant, then, prior to the point of maximal expansion, the holographic constraints on the entropy within regions of size $`H^{-1}\sim t`$ coincide with the constraints for the flat universe case. Once the universe begins to collapse, the constraints cannot be further improved, because the typical time of formation of a black hole of size $`O(t)`$ at that stage will be of the same order of magnitude as the lifetime of the universe. But this fact does not imply the impossibility of collapsing universes. Note that in our considerations we did not make any assumptions about the adiabatic evolution of the universe. Thus, the cosmological holographic constraints on entropy are as general as their black hole counterparts. In fact, we believe that these two constraints have the same origin. ## IV Holography vs. Inflation As we already mentioned, all holographic constraints discussed in this paper apply only to the post-inflationary universe. Inflationary cosmology is in spirit somewhat opposite to holography. The possibility of solving the horizon, homogeneity, isotropy, and flatness problems is related to the superluminal stretching of the universe, which erases all memory of the boundary conditions. The speed of rolling of the inflaton scalar field approaches an asymptotic value which does not depend on its initial speed. The gradients of the fields and the density of particles which existed prior to inflation (if there were any) become exponentially small. All particles (and all entropy) which exist now in the universe were created after inflation in the process of reheating. This process occurs locally, so the properties of the particles, as well as their entropy, do not depend on the initial conditions in the universe. In order to investigate this issue in a more detailed way, let us consider the simplest version of inflationary cosmology, where the universe during inflation expands only $`10^{30}`$ times (the minimal amount necessary for inflation to work). We will also assume for simplicity that inflation occurs at the GUT scale, so that $`H\sim 10^{-6}`$ and the temperature after reheating is $`T\sim 10^{-3}`$ in Planck units.
In such a case the size of the particle horizon after inflation will be $`L_H\sim H^{-1}\times 10^{30}\sim 10^{36}`$, the area $`A\sim L_H^2\sim 10^{72}`$, and the entropy $`S\sim T^3L_H^3\sim 10^{99}`$, which clearly violates the bound $`S<A`$. This means that the information stored at the surface of an inflationary domain cannot describe the dynamics in its interior. In practice, it is extremely difficult to invent inflationary theories where the universe grows only by a factor of $`10^{30}`$, because typically in such models $`\frac{\delta \rho }{\rho }=O(1)`$ at the scale of the horizon. In the simplest versions of chaotic inflation the universe grows more than $`10^{1000000}`$ times during inflation. The situation becomes especially dramatic in those versions of inflationary cosmology which lead to the process of eternal self-reproduction of inflationary domains. In such models the universe is not an expanding ball of huge size, but a growing fractal consisting of many exponentially large balls. In the process of eternal self-reproduction of the universe, all memory of the boundary conditions and initial conditions becomes completely erased . Of course, one can use the version of the holographic principle describing the post-inflationary evolution of the universe, as discussed in the previous sections. However, in realistic inflationary models the energy density at the end of inflation falls more than 15 orders of magnitude below the Planck density, and the most interesting part of the dynamics of the universe, where quantum gravity could play a significant role, is already over. There is another interesting aspect of the relation between inflation and holography. The holographic bound on the present entropy of the universe is $`S\lesssim H^{-2}`$. One has $`H^{-1}\sim 10^{60}`$ in Planck units. This gives the constraint $$S\lesssim H^{-2}\sim 10^{120}.$$ (19) Meanwhile, the entropy of matter in the observable part of the universe is smaller than $`10^{90}`$. If one thinks about cosmology in terms of the information which can be stored on the horizon (or, to be more accurate, on the surface of a sphere of size $`H^{-1}`$), one can be encouraged by the fact that the holographic bound is satisfied with a wide safety margin, $`S/A\sim 10^{-30}`$. On the other hand, if, as we have argued, the information stored on the sphere of size $`H^{-1}`$ is not related to the initial conditions at the beginning of inflation, then its importance is somewhat limited. In such a case the only information about the universe that we have gained is the bound $`S\lesssim 10^{120}`$, which is 30 orders of magnitude less precise than the observational constraint on the entropy. But what is the origin of these 30 orders of magnitude? Let us look back in time and assume that there was no inflation and the evolution of the universe was adiabatic. Our part of the universe today has size $`10^{28}`$ cm. At the Planck time its size $`l`$ would have been $`10^{28}`$ cm multiplied by $`\frac{T_0}{T_p}`$, where $`T_0`$ is the present value of the temperature of the universe, and $`T_p\sim 1`$ is the Planck temperature. (Note that the scale of the universe is inversely proportional to $`T`$ during adiabatic expansion.) One therefore finds $`l\sim 10^{-3}`$ cm, which is $`10^{30}`$ times greater than the Planck length. That is exactly the reason why we need the universe to inflate by a factor of $`10^{30}`$. (The true number depends on the value of the reheating temperature after inflation.) If the universe did not inflate at all, it would be very holographic.
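The order-of-magnitude bookkeeping of this section is compact enough to restate in a few lines. Everything below is in Planck units, and the values of $`H`$, $`T`$ and the expansion factor are the illustrative assumptions stated above:

```python
H, T, growth = 1e-6, 1e-3, 1e30    # GUT-scale inflation, minimal expansion

L_H = (1.0/H)*growth               # post-inflationary particle horizon ~ 1e36
A_hor = L_H**2                     # its area ~ 1e72
S_rad = T**3*L_H**3                # radiation entropy ~ 1e99
print(A_hor, S_rad, S_rad > A_hor) # the bound S < A fails after inflation

S_bound = (1e60)**2                # today: S < H^{-2} ~ 1e120, Eq. (19)
S_obs = 1e90                       # observed entropy
print(S_obs/S_bound)               # ~1e-30: the safety margin discussed above
```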
A typical homogeneous part of the universe soon after the big bang would have Planck size, it would contain just one or two particles, and the constraint $`S<A`$ would be saturated. But we would be unable to live there. Let us assume, for the sake of argument, that inflation starts and ends at the Planck density, and that the universe has the Planck temperature after reheating. If the universe during this period inflated by more than $`10^{30}`$ times, then our part of the universe after inflation would have size $`10^{-3}`$ cm, i.e. $`10^{30}`$ in Planck units, just as we estimated above. Its entropy would be $`10^{90}`$. Then the universe expands by $`\frac{T_p}{T_0}\sim 10^{30}`$ times, and the area of our domain becomes $`10^{120}`$. This makes it clear that the factor of $`10^{30}`$ which characterizes the discrepancy between the holographically natural value of the entropy, $`10^{120}`$, and the observed value, $`10^{90}`$, is the same factor which appears in the formulations of the entropy problem and the flatness problem . Thus, in the final analysis, the reason why one has $`S\sim 10^{-30}A`$ in our universe is related to inflation. Without inflation one would have $`S\sim A`$, and a typical locally homogeneous patch of the universe would collapse within the Planck time. The safety margin of 30 orders of magnitude created by inflation makes the universe very large and long-lived, but simultaneously prevents the holographic constraint on entropy from being very informative. A nontrivial relation between the holographic constraint and inflation does not mean that one can identify the entropy problem (the existence of a huge entropy $`S\sim 10^{90}`$ in our part of the universe) with the holography problem (the discrepancy between the holographic bound $`10^{120}`$ and the true value of the entropy $`10^{90}`$). For example, in one of the recent versions of the pre-big bang scenario, the stage of pre-big bang inflation begins from a state which can be identified with a black hole with a large horizon area . In this case, the initial entropy of the gravitational configuration by definition satisfies the Bekenstein-Hawking bound, which coincides with the holographic bound. If one assumes that the entropy of matter inside the black hole saturates the Bekenstein-Hawking bound (this is just an assumption, which does not follow from black hole theory), then the holography problem will be resolved . However, one should still determine the origin of the enormously large black hole entropy in this scenario, which constitutes the entropy problem . ## V Conclusions The idea that all information about physical processes in the world can be stored on its surface is very powerful. It has many interesting implications for the investigation of the nonperturbative properties of M-theory. However, it is rather difficult to merge this idea with cosmology. The universe may not have any boundary at all, or it may expand so fast that boundary effects become irrelevant for the description of the local dynamics.
These constraints are rather nontrivial, but when applied to our part of the universe they are much weaker than the observational constraints, as well as the constraints which follow from the theory of the creation of matter after inflation. We believe that these constraints do not permit one to rule out universes which may experience gravitational collapse, and they do not impose any additional constraints on inflationary cosmology. The constraints on entropy represent only one aspect of the holographic principle. A stronger form, which has been advocated , requires the existence of a theory living on the boundary surface which would describe physical processes in the enclosed volume. The validity of this conjecture in the cosmological context has not been demonstrated, and in fact one may argue that there exists a general obstacle on the way towards the realization of this idea. In the theory of black holes, the role of the holographic surface is played by the black hole horizon. Its area, and correspondingly the number of degrees of freedom living on the horizon, remains constant if one neglects quantum gravity effects. Thus it is not unreasonable to assume that there exists a unitary quantum theory associated with the black hole horizon. However, in an expanding universe the number of degrees of freedom associated with the cosmological horizon, or with the apparent horizon, or with the horizon of a would-be black hole which provides holographic constraints on entropy, rapidly changes in time. For example, in a closed universe the initial area of the horizon is vanishingly small; it then grows until it reaches a maximum, and subsequently it disappears. Thus the number of degrees of freedom associated with such a surface strongly depends on time, even when the evolution of the universe is adiabatic and the total number of degrees of freedom in the bulk is conserved . Therefore one may wonder whether the holographic theory existing on such a surface will violate unitarity. In addition, the disappearance of degrees of freedom after the moment of maximal expansion implies that the entropy measured at the holographic surface will increase during the expansion of the universe, but will then decrease during its contraction, and eventually vanish. This means that the second law of thermodynamics may be violated in the holographic theory. The situation with causality in such a theory is unclear as well. Indeed, information about the new degrees of freedom which are going to appear or disappear on the holographic surface is stored not on this surface but in the bulk. This information does not propagate along the surface; rather, it crosses the surface when new particles enter the apparent horizon. But this suggests that the creation of new degrees of freedom in the holographic theory will not look like an effect caused by pre-existing conditions at the surface. It remains to be seen whether one can overcome all of these problems and make the holographic principle a useful part of modern cosmological theory, including inflationary theory. We should note, however, that quantum cosmology is extremely complicated and counterintuitive in many respects. It is still a challenging task to unify M-theory and inflationary cosmology. Any progress in this direction would be very important.
One may expect that the ideas borne out by the investigation of quantum dynamics of black holes and enriched by the study of supergravity and string theory will play the key role in the development of a nonperturbative approach to quantum cosmology. We wish to thank R. Bousso, W. Fischler and L. Susskind for valuable discussions. This work has been supported in part by NSF grant PHY-9870115.
# An unusual space-time evolution for heavy ion collisions at high energies due to the QCD phase transition ## Abstract The space-time evolution of high energy non-central heavy ion collisions is studied with relativistic hydrodynamics. The results are very sensitive to the Equation of State (EoS). For an EoS with the QCD phase transition, an unusual matter distribution develops. Before freeze-out, two "shells" are formed which physically separate and leave a maximum in the center. We make specific predictions for the azimuthal dependence of the flow and for two-pion interferometry, contrasting our results with a resonance gas EoS. 1. One of the principal goals of the heavy ion collision program is to find and to quantify the QCD phase transition from hadronic matter to a new phase, the quark-gluon plasma (QGP) . Experiments at the Brookhaven AGS (lab energy 11 A GeV) and at the CERN SPS (lab energy 200 A GeV) are expected to produce a QGP/mixed phase during the initial stages of the collision, although currently there is only indirect evidence for this state (see the recent reviews ). With the completion of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven and its much higher collision energy (100+100 A GeV in the center-of-mass frame), the experiments are expected to produce the QGP well above the transition temperature. In this work, we study how the strong QGP pressure can be observed at RHIC. The position-momentum correlations of the produced hadrons, colloquially known as collective "flow," directly reflect the EoS of the excited matter. Multiple studies using cascade event generators and hydrodynamics (see e.g. ) have successfully reproduced the AGS/SPS hadronic spectra. A radial flow velocity of about (0.5-0.6)c is found in central PbPb collisions , but the flow develops principally during the late hadronic stages of the collision and has little to do with the QGP. An EoS extracted from these model studies shows "softness" during the early stages of the collision, either due to the proximity of the QCD phase transition , or due to non-equilibrium phenomena such as the formation and fragmentation of strings . 2. Additional information about the EoS may be extracted from the azimuthal dependence of flow in non-central collisions, which depends non-trivially on the impact parameter and the collision energy. The ellipticity of the flow has been studied theoretically and experimentally . Because elliptic flow develops earlier than radial flow, its systematic measurement at the SPS may settle the mixed phase/pre-equilibrium controversy mentioned above. The original purpose of the present study was to further quantify elliptic flow within a hydrodynamic framework. Instead, we found that non-central collisions at RHIC/LHC energies have an unusual expansion pattern, which cannot be described as simply elliptic and which is qualitatively different from that at AGS/SPS energies. 3. Let us begin with a description of the transverse acceleration. In model calculations, the radial acceleration history changes from SPS to RHIC due to the QCD phase transition. The ratio of pressure to energy density, $`p/\epsilon `$, has a deep minimum at the end of the mixed phase, known as the "softest point" of the equation of state . For AGS/SPS collision energies the matter is produced close to the softest point and the resulting transverse acceleration is small. Therefore, in non-central collisions the matter retains its initial elliptic shape and burns slowly inward.
For RHIC/LHC collision energies, the early pressure starts an outward expansion. This outward expansion and the inward deflagration can cancel each other, making a stationary front, called the "burning log" in . Summarizing, at the AGS/SPS there is first softness and then a hadronic push, while at RHIC/LHC there is first a quark-gluon push, then softness, and then a hadronic push. In spite of this change, the final radial flow velocities at RHIC and at the SPS are expected to be similar . 4. The early push redistributes the matter, however. The early velocity has a long time, $`\sim 10`$ fm/c, to influence the matter distribution before freeze-out. The stiff QGP in the center, with $`T\gg T_c`$, pushes against the soft matter on the exterior, with $`T\sim T_c`$, producing a shell-like structure. Since the final distorted distribution rather resembles a nut and its shell, we call this picturesque configuration the "nutshell." For high energy non-central collisions, the matter expands preferentially in the impact parameter direction (the x axis), and the expanding shells leave a rarefaction behind. Furthermore, since the acceleration started rather early, the two half-shells partially separate, and by freeze-out three distinct fireballs are actually produced. We have called this consequence of early pressure the "nutcracker" scenario. 5. Following , we assume a rapidity-independent longitudinal expansion. We then solve the 2+1 dimensional relativistic Euler equations in the transverse plane, with coordinates x, y and proper time $`\tau =(t^2-z^2)^{1/2}`$, using the HLLE Godunov method . As in previous calculations , we have used a simple bag model equation of state with $`T_c=160`$ MeV and a 1 GeV latent heat. We have modeled hadronic matter with a simple resonance gas EoS, $`p=0.2\epsilon `$ . The pressure was taken to be independent of baryon number, which is a good approximation at high energies. The initial entropy distribution in the transverse plane was assumed to be proportional to the distribution of participating nucleons, as in . We parameterize the initial energy density by the total pion multiplicity, $`dN_\pi /dy`$. (How the particle multiplicity maps onto the collision energy depends on the entropy production mechanism. This mapping will soon be determined experimentally at RHIC.) For definiteness, we consider PbPb collisions at an impact parameter of $`b=8`$ fm and freeze-out at a fixed temperature, $`T_f=140`$ MeV. At SPS energies the flow develops late, and the matter retains its initial almond shape until the late hadronic stage. However, at RHIC the flow develops early and redistributes the matter by the late hadronic stage. Two sample matter distributions, in the transverse plane and at fixed proper time, are shown in Fig. 1. A resonance gas EoS, $`p=0.2\epsilon `$, produces little structure and simple elliptic flow (Fig. 1a). An ideal gas EoS, $`p=\epsilon /3`$, also produces simple elliptic flow and even shorter lifetimes (not shown). Finally, our bag model EoS produces two "nut-shells" of matter which expand outward (Fig. 1b). Note that in Fig. 1b the matter is pushed into two shells moving in the x direction, while at the north and south poles two holes develop. A maximum, the "nut," remains in the center. The matter distribution becomes increasingly "nutty" with larger impact parameters and higher collision energies. The evolution is clarified by plotting the emission points of nucleons, integrated over periods of proper time.
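As an aside before turning to the emission points in Fig. 2, the following is a minimal sketch of the kind of two-branch EoS just described. The hadronic matching density $`\epsilon _H=0.5`$ $`\mathrm{GeV/fm^3}`$ and the treatment of the latent heat as 1 $`\mathrm{GeV/fm^3}`$ are illustrative assumptions, not the parameters of the actual calculation:

```python
# Sketch of a bag-model-like EoS with a first-order transition.
def pressure(eps, eps_H=0.5, latent=1.0):
    eps_Q = eps_H + latent         # top of the mixed phase (GeV/fm^3)
    p_c = 0.2*eps_H                # pressure is flat across the mixed phase
    bag4 = eps_Q - 3.0*p_c         # 4B, fixed by matching p = (eps - 4B)/3 at eps_Q
    if eps < eps_H:
        return 0.2*eps             # resonance-gas branch
    if eps < eps_Q:
        return p_c                 # mixed phase: the "soft" region
    return (eps - bag4)/3.0        # ideal quark-gluon plasma branch

for eps in (0.25, 1.5, 4.0):
    print(eps, pressure(eps), pressure(eps)/eps)  # p/eps dips at the softest point
```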
In Figs. 2(a) and 2(b) we show the x coordinates of the emitted nucleons for a resonance gas EoS and for a bag model EoS, respectively. For both EoS, during the early stages, particles are slowly emitted from a stationary freeze-out surface, making a peak at $`x=4`$-$`5`$ fm . For both EoS, however, 75% of the nucleons freeze out during a short proper time interval of 2.5 fm/c. For a resonance gas EoS (Fig. 2a) the freeze-out positions are uniform and become increasingly centralized with time. In contrast, for a bag model EoS (Fig. 2b) the distribution has three distinct and comparable sources for the final 1.5 fm/c. There is a central "nut" and two extremal "shells" with a hole in between. 6. We turn now to the experimental consequences of this flow pattern. First we examine the $`\varphi `$ distribution of the produced particles at mid-rapidity (y=0). These distributions are expanded in harmonics and are sometimes weighted by the transverse momentum squared. $$\frac{dN}{d\varphi \,dy}\bigg|_{y=0}=\frac{v_0}{2\pi }\left(1+\sum _{n\ge 1}2v_{2n}\cos (2n\varphi )\right)$$ (1) $$\int p_t^2\frac{dN}{dp_t\,d\varphi \,dy}\bigg|_{y=0}dp_t=\frac{\alpha _0}{2\pi }\left(1+\sum _{n\ge 1}2\alpha _{2n}\cos (2n\varphi )\right)$$ (2) We have calculated the single particle distributions for various secondaries, using the standard Cooper-Frye formula . The elliptic components, $`v_2`$ and $`\alpha _2`$, depend only weakly on the collision energy, as found in previous studies . For nucleons, for example, we found $`v_2\approx 7\%`$ and $`\alpha _2\approx 13\%`$ from the highest SPS energies to LHC energies. Higher harmonics, in contrast, grow from SPS to RHIC. To summarize the effects of the higher harmonics in the distributions, we have plotted in Fig. 3 the weighted net nucleon $`\varphi `$ distribution (the l.h.s. of equation (2)) for SPS and RHIC, normalized to the first two terms in the Fourier expansions shown above. In the dashed curve corresponding to RHIC, the early pressure forces a 3-4% additional asymmetry in the final net nucleon distributions beyond the elliptic component. At LHC energies the additional asymmetry is even more pronounced. The marked minimum at 45° is due to the square shape of the matter distribution, which somewhat reduces the flow along the diagonal. Note also a prominent positive correction to the elliptic flow at 90°. Both of these effects are observable, given the expected statistics at RHIC. The distribution of deuterons and other heavy fragments should express the underlying flow more clearly. Because the emission points of the nucleons are bunched along the ridges of the nutshell, larger fragments are generally emitted from the shells. This inhomogeneity enhances the production probability of fragments and peaks their final flow in the x direction. Multiply strange baryons such as $`\mathrm{\Omega }^{-}`$ are also of interest. Since they do not re-scatter in the hadron phase, they reflect the early flow . Indeed, any azimuthal dependence of the flow of multiply strange baryons would be fairly convincing evidence of collective motion in the quark phase. 7. The spatial asymmetry of the matter distribution at freeze-out is probed by Hanbury Brown-Twiss (HBT) two-particle interferometry. Strong flow strongly modifies the source function. Each correlator with given pair momenta is generated by its own "patch" , or "homogeneity region" . Taking these patches together gives a complete picture of the source.
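As a side note on the harmonic analysis above, the coefficients in Eqs. (1) and (2) are simple Fourier moments of the azimuthal distribution; a short sketch of their extraction, with an invented test distribution:

```python
import numpy as np

phi = np.linspace(0.0, 2.0*np.pi, 4096, endpoint=False)
dN = 1.0 + 2.0*0.07*np.cos(2.0*phi) + 2.0*0.02*np.cos(4.0*phi)  # assumed v2, v4

def v_n(dN, phi, n):
    return np.trapz(dN*np.cos(n*phi), phi)/np.trapz(dN, phi)

print(v_n(dN, phi, 2), v_n(dN, phi, 4))   # recovers ~0.07 and ~0.02
```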
We will discuss this complicated issue elsewhere, and here show only two selected correlators which emphasize the qualitatively different predictions of the different EoS. The correlators are found by taking the appropriate Fourier transform of the source function over the freeze-out surface . Below we display the HBT radii $`R_{xx}`$ and $`R_{yy}`$ and employ the notation $`(p_1+p_2)_\mu =2K_\mu `$ and $`(p_1-p_2)_\mu =q_\mu `$. For $`R_{xx}`$ we select $`\vec{q}`$ in the x direction, the direction we want to probe. $`\vec{K}`$ is chosen in the orthogonal y direction, with magnitude 0.5 GeV. For $`R_{yy}`$ the axes of $`\vec{K}`$ and $`\vec{q}`$ are simply reversed. The correlators may then be fit to the functional form $`C=1+\mathrm{exp}(-q_i^2R_{ii}^2)`$, where $`R_{ii}^2`$ has been interpreted as the source size at zero velocity . More specifically, $$R_{xx}^2=\langle x^2\rangle -\langle x\rangle ^2$$ (3) $$R_{yy}^2=\langle y^2\rangle -\langle y\rangle ^2$$ (4) These radii are shown in Fig. 4 for PbPb collisions at b=8 fm, as a function of the pion multiplicity scaled by the number of participants to that of central collisions (not b=8 fm). $`R_{xx}`$ and $`R_{yy}`$ are shown for a bag model EoS and for a simple resonance gas EoS, $`p=0.2\epsilon `$. For low energies, near the left hand side of the plot, the two EoS show approximately the same radii, roughly corresponding to the initial elliptic shape of the matter distribution. For a simple resonance gas EoS the HBT radii show little energy dependence, while for an EoS with the phase transition the homogeneity regions increase steadily with beam energy. The rapid increase of $`R_{xx}`$ can be understood qualitatively. The contributing pions move in the y direction with rather high momenta, 0.5 GeV. The pair therefore originates not from the ridges of the nutshell, but from the region in between the shells. The rapid increase of $`R_{xx}`$ reflects the increased separation of the nutshells at higher collision energies. The increase in $`R_{yy}`$ reflects the flattening of the shells themselves. For flat, square-shaped shells the homogeneity regions are larger than for curved elliptic shells. 8. In conclusion, for non-central heavy ion collisions we predict an unusual space-time evolution, which results from the interplay of the hard and soft EoS typical of the QCD phase transition. We have called this the "nutcracker" flow, since two shells are produced and then separate. The azimuthal momentum asymmetry can be seen in the higher harmonics and in the flow of heavy secondaries, while the spatial asymmetry can be seen in HBT interferometry. We end with the experimental strategy. As the "nutcracker" flow persists for all sufficiently non-central events, and because the principal RHIC detectors can determine the impact parameter plane, absolutely any observable, from strangeness, to flow, to $`J/\psi `$ suppression, should display a marked azimuthal dependence, which reflects the fireball in its different stages. We urge our experimental colleagues to look for this dependence from the first day of operation at RHIC. Acknowledgments. We thank J. Pons for essential numerical advice during the initial stages of this work and H. Sorge for many interesting discussions. This work is partially supported by the US DOE, grant No. DE-FG02-88ER40388.
# Elongation of confined ferrofluid droplets under applied fields ## I introduction Ferrofluids are oil- or water-based colloidal suspensions of permanently magnetized particles. In an applied magnetic field the particles align, creating a strong paramagnetic response in the ferrofluid. Because they are fluids, these suspensions can flow in response to forces. For example, ferrofluid droplets elongate parallel to applied fields and undergo tip-sharpening transitions . When a ferrofluid droplet is confined between two plates in a "thin film" geometry, surrounded by an immiscible fluid, and a field is applied perpendicular to the plates, it undergoes field-induced bifurcations leading to intricate labyrinthine patterns . Ferrofluid emulsions undergo structural transitions under an applied field, from a randomly dispersed arrangement of the emulsion droplets to droplet chains, columns and worm-like structures, depending on the volume fraction, the sample geometry and the rate of field application . A droplet of ferrofluid elongates under an applied field because of the demagnetizing fields of the magnetic poles on the surface of the droplet. Surface poles arise wherever the droplet magnetization has a component perpendicular to the surface. The demagnetizing field that they create opposes the magnetization, creating a demagnetizing energy that depends on the shape of the droplet. The droplet elongates to reduce its demagnetizing field and energy. Because elongation increases the surface energy of the system, an equilibrium shape is reached when the magnetic forces balance against the surface tension forces. The elongation of freely suspended, 3-dimensional droplets has been well studied . Such droplets can be assumed to be ellipsoids for small elongation. The demagnetizing field is thus uniform, and the elongation (major axis minus minor axis, divided by the minor axis) is found to be proportional to the undeformed droplet radius. The case of droplets confined in a "thin film" geometry, however, involves two length scales: the droplet thickness and its undeformed diameter. In the limit of small aspect ratio (droplet thickness divided by undeformed diameter), the demagnetizing fields are stronger near the edges of the droplet than at its center. We find that the elongation divided by the droplet thickness in this geometry is proportional to the logarithm of the aspect ratio. Prior experiments have proposed droplet elongation as a tool for measuring the surface tension between the ferrofluid and the surrounding immiscible fluid . We improve on the existing theory by incorporating the spatial variation of the demagnetizing field inside the droplet, and we perform an experiment supporting the predicted logarithmic behavior. Section II of this paper presents our theoretical study of the elongation of a ferrofluid droplet confined within a thin film. Our principal result is a predicted logarithmic dependence of the elongation on the droplet aspect ratio. We contrast this result with the corresponding elongation of unconfined droplets. Section III describes an experiment done with ferrofluid emulsions that tests our theory. The experiment is in qualitative agreement with our theoretical prediction, but differs quantitatively in at least one respect. In section IV we discuss a possible explanation of the discrepancy, based upon the droplet contact angle with the confining plates.
## II theory Consider a paramagnetic liquid droplet confined in a thin film between two parallel plates with a gap, $`\Delta `$, in the $`\widehat{z}`$ direction (see figure 1). An immiscible liquid surrounds the droplet. Let the thickness, $`\Delta `$, be much smaller than the radius of the undeformed droplet, $`r_0`$. This small aspect ratio $$p=\frac{\Delta }{2r_0}$$ (1) provides the pseudo-two-dimensional character of the problem. If a uniform, weak field $`\mathbf{H}_0`$ is applied parallel to the plates, the droplet magnetizes. The magnetization creates an opposing demagnetizing field whose strength depends on the droplet shape. The droplet elongates to decrease its magnetic energy, reaching equilibrium when the magnetic forces balance against the restoring forces due to surface tension. In this section we define the elongation of the droplet and calculate the surface energy, $`E_S`$, and the magnetic energy, $`E_M`$, of the droplet as functions of its elongation. By minimizing the total energy with respect to the elongation we obtain the elongation as a function of $`\mathbf{H}_0`$, $`r_0`$, and $`\Delta `$. For simplicity, assume the elongated droplet has a uniform cross section, $`\mathcal{C}`$, independent of $`z`$. This corresponds to a contact angle of $`90^{\circ }`$ between the paramagnetic liquid, the surrounding fluid and the glass plates, and a plate spacing much less than the capillary length of the two liquids. Thus the droplet has straight edges when viewed from the side (see figure 1). The role of the contact angle will be discussed later, in section IV. We write the equation for $`\mathcal{C}`$ in polar coordinates as a generic smooth perturbation to a circle, $$r=\alpha _1+\alpha _2\cos 2\theta .$$ (2) We include only a single harmonic, since we expect the coefficients of the higher harmonics to be much smaller than $`\alpha _2`$ for small perturbations. The cross section $`\mathcal{C}`$ has semi-major axis $`a`$ and semi-minor axis $`b`$ (see figure 1b), with $`\alpha _1=(a+b)/2`$ and $`\alpha _2=(a-b)/2`$. We define the elongation of the droplet $$\epsilon \equiv \frac{a}{b}-1.$$ (3) We assume that the elongation, $`\epsilon `$, is much less than $`1`$. Imposing the constraint that the volume of the droplet ($`\Delta `$ times the cross-sectional area) remains constant, we calculate $$\alpha _1=\frac{r_0}{(1+k^2/2)^{1/2}},\qquad \alpha _2=\frac{r_0k}{(1+k^2/2)^{1/2}},$$ (4) where $`k=\epsilon /(2+\epsilon )`$. The surface energy is the sum of the interfacial areas times the surface tensions between all pairs of the three phases (solid glass, ferrofluid droplet and immiscible fluid). For uniform cross-section ($`90^{\circ }`$ contact angle) droplets, the glass-ferrofluid and glass-immiscible fluid interfacial areas are independent of the shape of $`\mathcal{C}`$, due to the fixed volume constraint. Hence we concern ourselves with the droplet-surfactant solution interface, whose area is $`\Delta `$ times the perimeter. The perimeter of the cross section $`\mathcal{C}`$ can be calculated as a power series in $`\epsilon `$, $$S=2\pi r_0\left(1+\frac{3}{16}\epsilon ^2+O(\epsilon ^3)\right).$$ (5) As expected, the leading correction to $`S`$ is second order in $`\epsilon `$, since the perimeter should increase regardless of the sign of $`\epsilon `$. The relevant surface energy of the droplet is $$E_S=\sigma _{FI}S\Delta $$ (6) where $`\sigma _{FI}`$ is the surface tension of the ferrofluid-immiscible fluid interface.
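The geometry behind Eqs. (4) and (5) can be verified symbolically; a minimal sketch, assuming only the shape (2) and the fixed-area constraint:

```python
import sympy as sp

eps, theta, r0 = sp.symbols('epsilon theta r_0', positive=True)
k = eps/(2 + eps)
a1 = r0/sp.sqrt(1 + k**2/2)              # Eq. (4)
a2 = r0*k/sp.sqrt(1 + k**2/2)
r = a1 + a2*sp.cos(2*theta)              # Eq. (2)

area = sp.integrate(r**2/2, (theta, 0, 2*sp.pi))
print(sp.simplify(area))                 # pi*r_0**2, independent of epsilon

arc = sp.sqrt(r**2 + sp.diff(r, theta)**2)
S = sp.integrate(sp.series(arc, eps, 0, 3).removeO(), (theta, 0, 2*sp.pi))
print(sp.simplify(sp.expand(S)))         # 2*pi*r_0 + (3*pi/8)*r_0*epsilon**2, Eq. (5)
```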
The total magnetic energy of any paramagnetic body under an applied field is $$E_M=-\frac{1}{2}\int_Vd^3\mathbf{r}\,\mathbf{H}_0\cdot \mathbf{M}(\mathbf{r}).$$ (7) The magnetization $`\mathbf{M}(\mathbf{r})`$ is determined by the self-consistent equation $$\mathbf{M}(\mathbf{r})=\chi \left(\mathbf{H}_0+\mathbf{H}_D(\mathbf{r})\right)$$ (8) for linear susceptibility $`\chi `$, where $$\mathbf{H}_D(\mathbf{r})=\int_Sd^2\mathbf{r}^{\prime }\,(\mathbf{M}(\mathbf{r}^{\prime })\cdot \widehat{\mathbf{n}}(\mathbf{r}^{\prime }))\frac{\mathbf{r}-\mathbf{r}^{\prime }}{|\mathbf{r}-\mathbf{r}^{\prime }|^3}-\int_Vd^3\mathbf{r}^{\prime }\,(\mathbf{\nabla }\cdot \mathbf{M}(\mathbf{r}^{\prime }))\frac{\mathbf{r}-\mathbf{r}^{\prime }}{|\mathbf{r}-\mathbf{r}^{\prime }|^3}$$ (9) is the demagnetizing field due to the magnetization $`\mathbf{M}(\mathbf{r})`$, with $`\widehat{\mathbf{n}}(\mathbf{r}^{\prime })`$ being the outward normal at any point on the surface. The surface integral gives the demagnetizing field due to the surface poles, which appear wherever the magnetization has a component normal to the surface. The volume integral gives the contribution to the demagnetizing field due to volume charges, which appear at points where the magnetization has non-zero divergence. To calculate the magnetic energy we expand $`\mathbf{M}`$ and $`\mathbf{H}_D`$ in power series in the susceptibility $`\chi `$, $$\mathbf{M}(\mathbf{r})=\mathbf{M}^{(1)}(\mathbf{r})+\mathbf{M}^{(2)}(\mathbf{r})+\mathbf{M}^{(3)}(\mathbf{r})+\mathrm{\cdots }$$ (10) $$\mathbf{H}_D(\mathbf{r})=\mathbf{H}_D^{(1)}(\mathbf{r})+\mathbf{H}_D^{(2)}(\mathbf{r})+\mathbf{H}_D^{(3)}(\mathbf{r})+\mathrm{\cdots },$$ (11) where $`\mathbf{M}^{(n)}(\mathbf{r})`$ and $`\mathbf{H}_D^{(n)}(\mathbf{r})`$ are proportional to $`\chi ^n`$. Equating terms in (8) of equal order in $`\chi `$ we get $$\mathbf{M}^{(1)}(\mathbf{r})=\chi \mathbf{H}_0$$ (12) and $$\mathbf{M}^{(n+1)}(\mathbf{r})=\chi \mathbf{H}_D^{(n)}(\mathbf{r}).$$ (13) Note that $`\mathbf{M}^{(1)}(\mathbf{r})`$ is independent of $`\mathbf{r}`$ because the applied field is uniform, whereas $`\mathbf{M}^{(n)}(\mathbf{r})`$ may depend on $`\mathbf{r}`$ for $`n>1`$ because $`\mathbf{H}_D(\mathbf{r})`$ may be non-uniform. To second order in $`\chi `$ we write the magnetic energy of the droplet in (7) as $$E_M=-\frac{1}{2}\chi H_0^2V-\frac{1}{2}\int_Vd^3\mathbf{r}\,\mathbf{M}^{(1)}\cdot \mathbf{H}_D^{(1)}.$$ (14) The first term in equation (14) for the magnetic energy is independent of the shape of the droplet and hence unimportant for our consideration. The second term in the energy is the demagnetizing energy $`E_D`$ due to the uniform magnetization $`\mathbf{M}^{(1)}=\chi \mathbf{H}_0`$. Because $`\mathbf{M}^{(1)}`$ is uniform there are no volume charges, and the surface poles appear only along the droplet-immiscible fluid interface, to first order in $`\chi `$. Rewrite the second term in (14) as an energy due to the induced surface charges along the curved surface of the droplet: $$E_D=\frac{1}{2}\chi ^2\int_0^{\Delta }dz\int_0^{\Delta }dz^{\prime }\oint ds\oint ds^{\prime }\,\frac{(\widehat{\mathbf{n}}\cdot \mathbf{H}_0)(\widehat{\mathbf{n}}^{\prime }\cdot \mathbf{H}_0)}{|\mathbf{r}-\mathbf{r}^{\prime }|}.$$ (15) Here $`ds`$ and $`ds^{\prime }`$ are infinitesimal arc-lengths along the contour of the droplet $`\mathcal{C}`$, and $`\widehat{\mathbf{n}}`$ and $`\widehat{\mathbf{n}}^{\prime }`$ are the outward normals to the curved surface of the droplet at the points $`(s,z)`$ and $`(s^{\prime },z^{\prime })`$, respectively. Write $`|\mathbf{r}-\mathbf{r}^{\prime }|=\sqrt{R^2+(z-z^{\prime })^2}`$, where $`R`$ is the in-plane distance between the points at positions $`s`$ and $`s^{\prime }`$ on $`\mathcal{C}`$. Integrating over $`z`$ and $`z^{\prime }`$ in (15) yields $$E_D=\chi ^2\Delta \oint ds\oint ds^{\prime }\,(\widehat{\mathbf{n}}\cdot \mathbf{H}_0)(\widehat{\mathbf{n}}^{\prime }\cdot \mathbf{H}_0)\,\mathrm{\Phi }(R/\Delta )$$ (16) where $$\mathrm{\Phi }(R/\Delta )=R/\Delta -\sqrt{1+(R/\Delta )^2}+\mathrm{ln}\left[(R/\Delta )\Big/\left(\sqrt{1+(R/\Delta )^2}-1\right)\right].$$ (17) Using equation (2) for $`\mathcal{C}`$, we calculate the demagnetizing energy in (16) as a series expansion in $`\epsilon `$ and the aspect ratio $`p=\Delta /2r_0`$: $$E_D=\chi ^2H_0^2V\left\{2p\mathrm{ln}\frac{B}{p}-3\epsilon p\mathrm{ln}\frac{C}{p}+\mathrm{\cdots }\right\}$$ (18) where $`V=\pi r_0^2\Delta `$ is the volume of the droplet, and $`B=4e^{-1/2}`$ and $`C=4e^{-5/6}`$ are geometrical constants.
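The leading term of Eq. (18) can be checked numerically in the circular-droplet limit. For a circle, $`\widehat{\mathbf{n}}\cdot \mathbf{H}_0=H_0\cos \theta `$ and $`R=2r_0\sin (\psi /2)`$ with $`\psi =\theta -\theta ^{\prime }`$, and the identity $`\int_0^{2\pi }\cos \theta \cos (\theta -\psi )\,d\theta =\pi \cos \psi `$ reduces $`E_D/(\chi ^2H_0^2V)`$ to a single integral over $`\psi `$. A sketch (the values of $`p`$ are illustrative):

```python
import numpy as np
from scipy.integrate import quad

def Phi(x):                                   # Eq. (17)
    s = np.sqrt(1.0 + x*x)
    return x - s + np.log(x/(s - 1.0))

for p in (1e-2, 1e-3):
    I, _ = quad(lambda psi: np.cos(psi)*Phi(np.sin(psi/2.0)/p),
                0.0, 2.0*np.pi, limit=400)    # E_D/(chi^2 H0^2 V) for a circle
    print(p, I, 2.0*p*np.log(4.0*np.exp(-0.5)/p))   # matches 2p*ln(B/p) as p -> 0
```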
The term in the brackets can be identified as $`2\pi `$ times the demagnetizing factor of the droplet along the direction of the applied field. The additional terms in the series in equation (18) are of higher order in $`\epsilon `$ or in $`p`$. For small elongation and small aspect ratio we may neglect these higher order terms. Minimizing the total energy $`E=E_S+E_M`$ with respect to $`\epsilon `$ gives $$\epsilon =\frac{\chi ^2H_0^2\Delta }{\sigma _{FI}}\mathrm{ln}\frac{C}{p}.$$ (19) Corrections to this result are of higher order in the aspect ratio $`p`$ or in $`\epsilon `$ itself. Interestingly, the elongation depends only logarithmically on the undeformed radius $`r_0`$, and has a much stronger dependence on the thickness, $`\Delta `$, of the droplet. This result differs from an earlier theory , which omits the logarithm because it assumes that the demagnetizing field is uniform inside the droplet. In the case of unconfined, nearly ellipsoidal droplets , the demagnetizing field is quite uniform inside the droplet. The demagnetizing energy is therefore proportional to the volume ($`(4/3)\pi r_0^3`$) of the droplet, according to equation (14). The surface energy is proportional to the area ($`4\pi r_0^2`$), and the elongation is thus proportional to $`r_0`$. In the case of the thin film geometry, however, the demagnetizing field is very non-uniform. For distances much less than $`\Delta `$ from the droplet edge, the demagnetizing field is of order $`M`$, since the edge acts like an infinite sheet of charge in the first approximation. For distances much greater than $`\Delta `$, the demagnetizing field is of order $`M\Delta /r`$, since the edge acts as a line charge in this case. The contribution to the integral for the demagnetizing energy in equation (14) comes mainly from the bulk of the droplet and goes like $`r_0\Delta ^2\mathrm{ln}(r_0/\Delta )`$. The surface energy is proportional to $`2\pi r_0\Delta `$, and the elongation is therefore proportional to $`\Delta \mathrm{ln}(r_0/\Delta )`$. The logarithmic variation of the elongation with the aspect ratio is thus a signature of the non-uniform nature of the demagnetizing field inside the droplet. ## III experiment ### A Setup #### a Sample Preparation and Structure Our sample consisted of a ferrofluid/aqueous solution emulsion confined between two glass plates. The oil-based ferrofluid used was EMG 905, made by Ferrofluidics. To reduce the surface tension between the ferrofluid and the immiscible aqueous external phase, we incorporated surfactants in the aqueous phase. A solution of a commercial detergent made the best emulsions, while solutions with other pure anionic surfactants either showed hardly any elongation of the ferrofluid droplets under applied field or produced droplets without sharp boundaries with the aqueous phase. In contrast, our stable, well behaved emulsions allowed us to probe and confirm the fundamental aspects of our model. To prepare the emulsions, a single drop of ferrofluid ($`0.1`$ ml) was added to 10 ml of surfactant solution, which was a 12-times dilution of the commercial detergent. The liquid was shaken (by hand) to prepare the emulsion, creating ferrofluid droplets with diameters varying from 5 to 200 $`\mu `$m. A small amount of this emulsion was then put between two glass plates, which were circular, about 2 cm in diameter and 4 mm in thickness. These plates were cleaned using soap and alcohol and then rinsed with ROPure water.
We also tried acid cleaning of the glass plates; however, it did not result in any noticeable change in the quality of the sample. We used a rectangular spacer made of mylar foil to separate the plates and prevent the emulsion from leaking out from the edges of the plates. The mylar foil extended to the edges of the glass plates and had a rectangular hole in the center into which the emulsion was inserted. The thickness of a single mylar spacer was measured to be $`6.54\pm 0.06`$ $`\mu `$m. The experiment was performed with one and with two spacers, to ensure a small aspect ratio. For the cell assembly, the mylar spacers were placed on the first plate and a drop of the emulsion was put in the center of the plate. The second plate was placed on top and the two plates were clamped together using a pair of brass rings. The rings were tightened by a set of 4 equally spaced screws. We measured the thickness variation across the sample by making a "dry" sample (without the emulsion) and counting the resulting white light interference fringes. Although the thickness of the mylar spacers was measured to an accuracy of 1 percent, the thickness variation across the sample was found to be 10%, resulting from the stresses due to clamping and the possible entrapment of dust in the cell. #### b Apparatus A schematic diagram of the experimental setup is shown in figure 2. We put the sample at the center of a pair of Helmholtz coils to ensure a homogeneous magnetic field. The field, measured close to the sample using a Hall probe, showed a variation of less than 4% across the sample. The sample was set up horizontally to prevent gravitational settling of the ferrofluid droplets. Horizontal alignment was achieved using a bubble level. The sample was illuminated from below using a diffused light source and observed from above using a tele-microscope. The tele-microscope was connected to a CCD camera, and the image from it was fed into a video recorder and recorded on video tape. Images from the recording were later processed using NIH Image. We calibrated the optical system using a measuring reticule aligned along the two orthogonal directions of the CCD array. Figure 3 shows a low magnification view of a typical sample. The ferrofluid droplets appear much darker in the image than the surfactant solution around them. #### c Experimental Procedure and Image Analysis During the experiment the applied field was incremented every few seconds. We found the response of the droplets to the field to be nearly instantaneous, and the shape of the droplets remained constant at constant field. Experiments with decreasing field strength showed no hysteresis in the droplet shape. While the droplet elongations were observed to be small, we incremented the field in steps of about 1 Gauss, and increased the increments up to about 5 Gauss as the elongation increased. Droplet elongations appeared to vary smoothly with the applied field over the entire range from 0 to 50 Gauss. During each experiment the droplets were observed on a video monitor and recorded on tape. After grabbing images of distorted droplets, we used a cut-off in pixel gray scale level to identify the droplet edge. The semi-major axis $`(a)`$ and the semi-minor axis $`(b)`$ were read directly off the image using NIH Image. At zero field the measured elongations were small (RMS magnitude around 0.003) and in random directions.
These minor perturbations from a circular shape were likely due to microscopic distortion of the contact line pinned on weak surface heterogeneities. The “observed radius” $`r_0`$ was calculated as the average of the two semi-axes at zero field, and the elongation at each field value was calculated using data analysis software. #### d Results For each of the $`48`$ droplets studied we plotted the elongation $`ϵ`$ versus the square of the applied field, $`H_0^2`$. Figure $`4`$ shows typical plots. The elongation is proportional to the square of the applied field for small applied fields, as predicted. Saturation effects, although small, can be seen at higher values of the field. The elongation data for each droplet were fitted to $$\frac{ϵ}{\mathrm{\Delta }}=k_0+k_1H_0^2+k_2H_0^4.$$ (20) We included terms only up to order $`H_0^4`$ because the saturation effects were observed to be small. We include $`k_0`$ to allow for the observed small elongations at zero field. The coefficients $`k_1`$ of each droplet were then plotted versus the inverse of the aspect ratio $`1/p=2r_0/\mathrm{\Delta }`$ on a semi-log plot (see figure 5). The theory predicts a slope of $`\chi ^2/\sigma _{FI}`$ and an intercept of $`1/C`$ on the horizontal axis, with $`C=1.74`$. The data points in figure 5(a) fall on a straight line as predicted by the theory. Also, as predicted by the theory, the data points for two different droplet thicknesses overlay each other. There is substantial scatter in the data, but the deviations from a straight line are random and consistent with the error bars. The chief source of uncertainty was the $`10\%`$ uncertainty in thickness due to the variation observed across the sample. Figure 5(b) displays the deviation of $`k_1`$ from the best fit normalized by the uncertainty. The uncertainties in measuring $`ϵ,r_0`$ and $`𝐇_0`$ were found to be negligible in comparison. Dividing the squared susceptibility $`\chi ^2=3.6`$ (with $`\chi =1.9`$ for the ferrofluid used) by the slope of $`0.119\pm 0.004`$ cm/dyne obtained from the fitted line, we get $`\sigma _{FI}=30.4\pm 1.1`$ dynes/cm, typical of oil-water surface tensions. From the fitted line we also get $`C=0.35\pm 0.08`$, differing substantially from our theoretically predicted value of $`1.74`$. It may be possible to explain this discrepancy by considering the effect of the contact angle of the ferrofluid-immiscible fluid interface with the glass plates. In the discussion section below we explore the qualitative effect of the contact angle. In figure 6(a) we plot $`ϵ/\mathrm{\Delta }`$ versus $`2r_0/\mathrm{\Delta }`$ on a linear scale. If the demagnetizing field inside the droplet were uniform, as in the case of unconfined droplets, the plot would be a straight line. However, the plot is clearly not a straight line, and the deviations from the best-fit straight line are systematic (see figure 6(b)). This further supports our theoretical result that the demagnetizing field inside a confined droplet is non-uniform and that the elongation divided by thickness is proportional to the logarithm of the aspect ratio. ## IV Discussion The results discussed in section III agree with our theoretical prediction (19) of logarithmic variation of $`ϵ/\mathrm{\Delta }`$ with a proportionality constant of $`\chi ^2/\sigma _{FI}`$. However, our theoretical value for $`C`$ is $`4e^{-5/6}=1.74`$ whereas the experimentally measured value for $`C`$ is $`0.35\pm 0.08`$.
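As a quick cross-check of the numbers just quoted, the following Python sketch (ours, not the authors' analysis code) reproduces the surface-tension extraction from the fitted slope and then evaluates the predicted elongation of equation (19); the droplet parameters in the second half are assumed purely for illustration.

```python
import math

# Surface tension from the semi-log slope: theory gives slope = chi^2/sigma_FI.
chi = 1.9                  # ferrofluid susceptibility (quoted above)
slope = 0.119              # cm/dyne, from the fitted line
sigma_FI = chi**2 / slope  # dynes/cm
print(f"sigma_FI = {sigma_FI:.1f} dynes/cm")  # ~30.3, matching 30.4 +/- 1.1

# Predicted elongation, eq. (19), for an illustrative (assumed) droplet.
C = 1.74                   # theoretical value, 4*exp(-5/6)
Delta = 13.1e-4            # cm, roughly two mylar spacers
r0 = 50e-4                 # cm, assumed undeformed radius
p = Delta / (2.0 * r0)     # aspect ratio
H0 = 10.0                  # Gauss (Gaussian units)
eps = (chi**2 * H0**2 * Delta / sigma_FI) * math.log(C / p)
print(f"p = {p:.3f}, predicted elongation = {eps:.3f}")  # small, as assumed
```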
One possible explanation for the discrepancy in the value of $`C`$ is that the ferrofluid/glass contact angle is not $`90^{\circ }`$ and consequently the cross-section of the droplet is not uniform. Our calculations are for uniform droplet cross-section, which corresponds to a contact angle $`\beta =90^{\circ }`$ between the glass plate and the liquid droplet. The experiment, however, was performed with an oil-based ferrofluid in a surfactant solution, for which the oil-glass contact angle $`\beta <90^{\circ }`$ (see figure 7). A contact angle other than $`90^{\circ }`$ will affect the elongation in two ways: by changing interfacial areas to alter the functional form of $`E_S`$ and by redistributing the magnetic surface poles to alter the functional form of $`E_M`$. We consider these two effects in turn. First, however, we must address an ambiguity in the definition of aspect ratio and elongation which results from the non-uniformity of the droplet cross-section. Our experiment observes the profile of the largest cross-section of the droplet. For a circular droplet with $`\beta <90^{\circ }`$ this is the radius $`r_1`$, defined as the radius at mid-gap as shown in figure 7. For an elongated droplet we measure the semi-major and -minor axes $`a_1`$ and $`b_1`$ and, through equation (3), the elongation $`ϵ_1`$. We also define $`r_2`$, $`a_2`$, $`b_2`$ and $`ϵ_2`$ associated with the ferrofluid-immiscible fluid-glass plate contact line (see figure 7). Since $`\mathrm{\Delta }`$ is much less than the capillary length of the ferrofluid/immiscible fluid, to a good approximation the profile of the droplet will be an arc of a circle, so the difference between $`r_1`$ and $`r_2`$ is of order $`\mathrm{\Delta }`$, and likewise for the semi-major and -minor axes. The difference $`ϵ_1-ϵ_2`$ is of order $`\mathrm{\Delta }/r_1`$ relative to the elongation. Recall that our result (19) for the elongation is only the lowest-order term in a series expansion in the aspect ratio. Thus the distinction between $`r_1`$ and $`r_2`$, and between $`ϵ_1`$ and $`ϵ_2`$, does not alter our result at the lowest order in aspect ratio. When the contact angle differs from $`90^{\circ }`$, the cross-section of the droplet depends on $`z`$. Consequently, the contact areas of the glass plates with the droplet and with the surfactant solution may vary as the droplet elongates. All three interfacial areas must be taken into account to calculate the surface energy. The total surface energy is $$E_S=\sigma _{FI}A_𝒞+2\sigma _{FG}A_G+2\sigma _{IG}(A-A_G),$$ (21) where the three surface tensions between ferrofluid-immiscible fluid, ferrofluid-glass, and surfactant solution-glass are denoted by $`\sigma _{FI},\sigma _{FG}`$, and $`\sigma _{IG}`$ respectively; $`A_𝒞`$ and $`A_G`$ are defined below, and the total area of the sample is denoted by $`A`$. The factors of $`2`$ in the second and third terms of the surface energy account for the two glass surfaces. The area of the droplet-surfactant solution interface $`A_𝒞`$ is given approximately by the circumference of $`𝒞`$ multiplied by the arc length of the bulge $$A_𝒞=2\pi r_1(1+\frac{3}{16}ϵ^2)\mathrm{\Delta }\frac{(\pi /2-\beta )}{\mathrm{cos}\beta }.$$ (22) We use $`r_1`$ here to calculate the circumference of the droplet because it is the radius observed during the experiment. To first order in the aspect ratio, using $`r_1`$ or $`r_2`$ in equation (22) yields the same result. The droplet’s contact area with the glass plates must be adjusted to maintain a constant total volume of ferrofluid as the droplet elongates.
We approximate the volume of the bulging region by the circumference of $`𝒞`$ multiplied by the projected area of the bulge. The contact area $`A_G`$ must be adjusted so that $`A_G\mathrm{\Delta }`$ changes by the negative of the change in volume of the bulge. Thus we write $$A_G=2\pi r_2^2\left[1-\frac{3}{32}\left\{\frac{(\pi /2-\beta )}{\mathrm{cos}^2\beta }-\mathrm{tan}\beta \right\}\frac{\mathrm{\Delta }}{r_2}ϵ^2\right].$$ (23) Using $`r_2`$ instead of $`r_1`$ makes the above result exact for zero elongation. The area of ferrofluid in contact with the glass plates decreases with elongation for an acute contact angle because the volume of the fluid contained in the outward bulge of the droplet increases and therefore the fluid contained in the bulk of the droplet decreases. For obtuse contact angles exactly the opposite happens, for similar reasons. To understand how the contact angle affects the magnetic energy, consider the work done by the magnetic field as we change the contact angle from $`90^{\circ }`$ to $`\beta `$ while keeping the volume of the droplet constant. This work, divided by the circumference, must be independent of $`r_1`$ in the limit of $`r_1`$ going to infinity, since the magnetic field near the surface of the droplet will not depend on $`r_1`$ in the large-$`r_1`$ limit. The work done by the magnetic field is the difference in energy between the straight-edged droplet with a contact angle of $`90^{\circ }`$ and the bulging droplet with a contact angle of $`\beta `$. The demagnetizing energy of the bulging droplet must therefore have the same dependence on $`\mathrm{ln}(r_1/\mathrm{\Delta })`$ as the straight-edged droplet; otherwise the difference in the demagnetizing energies divided by the circumference will be proportional to $`\mathrm{ln}(r_1/\mathrm{\Delta })`$ and will blow up in the large-$`r_1`$ limit. Hence, the demagnetizing energy for the bulging droplet must be identical to equation (18) but with different values $`\stackrel{~}{B}`$ and $`\stackrel{~}{C}`$ replacing the constants $`B`$ and $`C`$. As the droplet bulges inward or outward, the charges on the surface get distributed over a larger area, decreasing the demagnetizing energy. The constant $`\stackrel{~}{B}`$ therefore has a smaller value for a bulging (inward or outward) droplet than $`B`$, the value for a straight-edged droplet. However, since the demagnetizing energy is always positive, smaller demagnetizing energy ($`\stackrel{~}{B}<B`$) implies a weaker dependence of the demagnetizing energy on elongation. Thus, we expect the value of $`\stackrel{~}{C}`$ to be smaller for a bulging droplet than the value $`C`$ for a straight-edged droplet. Finally, consider how the contact-angle dependence of the surface and magnetic energies affects the elongation calculated in equation (19) for the case $`\beta =90^{\circ }`$. The $`ϵ`$ dependence of the surface energy remains quadratic, but the coefficient now depends upon a linear combination of the three surface tensions $`\sigma _{FI}`$, $`\sigma _{FG}`$ and $`\sigma _{IG}`$. This combination will replace $`\sigma _{FI}`$ in equation (19). The functional form of the magnetic energy remains unchanged, but the values of $`B`$ and $`C`$ depend on the contact angle. Thus the smaller value $`\stackrel{~}{C}`$ replaces $`C`$ in equation (19). For $`\beta \ne 90^{\circ }`$ the experiment cannot be used to determine $`\sigma _{FI}`$ unless $`\sigma _{FG}`$ and $`\sigma _{IG}`$ are known.
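To get a feel for the size of these geometric corrections, the short Python sketch below (illustrative only; the contact angles are assumed, not measured) evaluates the $`\beta `$-dependent factors appearing in equations (22) and (23):

```python
import math

def arc_factor(beta):
    """Bulge arc-length factor (pi/2 - beta)/cos(beta) from eq. (22)."""
    return (math.pi / 2 - beta) / math.cos(beta)

def contact_factor(beta):
    """Bracketed factor (pi/2 - beta)/cos^2(beta) - tan(beta) from eq. (23)."""
    return (math.pi / 2 - beta) / math.cos(beta) ** 2 - math.tan(beta)

for deg in (60.0, 75.0, 89.0):   # assumed contact angles
    b = math.radians(deg)
    print(f"beta = {deg:4.0f} deg: arc factor = {arc_factor(b):.3f}, "
          f"contact factor = {contact_factor(b):.3f}")
# As beta -> 90 deg the arc factor -> 1 and the contact factor -> 0,
# recovering the straight-edged droplet treated in the theory section.
```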
Since $`\beta `$ in general is not $`90^{\circ }`$, it is only possible to measure the effective surface tension during elongation, and not $`\sigma _{FI}`$ itself. ## V Conclusions We study the elongation of ferrofluid droplets, confined in a thin-film geometry, under weak applied fields. Our theoretical calculations predict that the elongation of a droplet depends logarithmically on the aspect ratio. This behavior contrasts with the case of unconfined 3-dimensional droplets, where the elongation is directly proportional to the undeformed droplet radius. We measured the elongation of ferrofluid droplets in a ferrofluid/water/surfactant emulsion. The results of our experiment agree with the functional form of our theoretical prediction; however, the experimentally measured value of $`C`$ differs from the predicted value. We suggest the droplet contact angle with the confining plates as a source of this discrepancy. ## ACKNOWLEDGMENTS We acknowledge partial support for this research under NSF grant DMR-9732567.
# Nucleosynthesis in Supernovae ## Abstract Core collapse supernovae are dominated by energy transport from neutrinos. Therefore, some supernova properties could depend on symmetries and features of the standard model weak interactions. The cross section for neutrino capture is larger than that for antineutrino capture by one term of order the neutrino energy over the nucleon mass. This reduces the ratio of neutrons to protons in the $`\nu `$-driven wind above a protoneutron star by approximately 20 % and may significantly hinder r-process nucleosynthesis. Core collapse supernovae are perhaps the only present-day large systems dominated by the weak interaction. They are so dense that photons and charged particles diffuse very slowly. Therefore energy transport is by neutrinos (and convection). We believe it may be useful to try to relate some supernova properties to the symmetries and features of the standard model weak interaction. Parity violation in a strong magnetic field could lead to an asymmetry of the explosion. Indeed, supernovae must explode with a dipole asymmetry of order one percent in order to produce the very high ‘recoil’ velocities observed for neutron stars. However, calculating the expected asymmetry from P violation has proved complicated. Although explicit calculations have yielded somewhat small asymmetries, it is still possible that more efficient mechanisms will be found. In this letter we calculate some effects from the difference between neutrino and antineutrino interactions. In Quantum Electrodynamics the cross section for $`e^-p`$ scattering is equal to that for $`e^+p`$ scattering (to lowest order in $`\alpha `$). In contrast, the standard model has $`\overline{\nu }`$-nucleon cross sections systematically smaller than $`\nu `$-nucleon cross sections. However, at the low $`\nu `$ energies in supernovae, time-reversal symmetry limits the difference between $`\nu `$ and $`\overline{\nu }`$ cross sections. Time reversal can relate $`\nu N`$ elastic scattering and $`\overline{\nu }N`$ scattering where the nucleon scatters from final momentum $`p_f`$ to initial momentum $`p_i`$. If the nucleon does not recoil then the $`\nu `$ and $`\overline{\nu }`$ cross sections are equal. Thus the difference between $`\nu `$ and $`\overline{\nu }`$ cross sections is expected to be of recoil order $`E/M`$, where $`E`$ is the neutrino energy and $`M`$ the nucleon mass. We expect the difference for charged-current interactions to be of the same order if one can neglect the neutron-proton mass difference. This ratio is relatively small in supernovae. However, the coefficient multiplying $`E/M`$ involves the large weak magnetic moment of the nucleon (see below). The standard model has larger $`\nu `$ cross sections than those for $`\overline{\nu }`$. For neutral currents, this leads to a longer mean free path for $`\overline{\nu }_x`$ compared to $`\nu _x`$ (with x=$`\mu `$ or $`\tau `$). Thus even though $`\nu _x`$ and $`\overline{\nu }_x`$ are produced in pairs, the antineutrinos escape faster, leaving the star neutrino rich. The muon and tau number for the protoneutron star in a supernova could be of order $`10^{54}`$. Supernovae may be the only known systems with large $`\mu `$ and/or $`\tau `$ number. For charged currents, the interaction difference can change the equilibrium ratio of neutrons to protons and may have important implications for nucleosynthesis. We discuss this below.
To our knowledge, all previous work on nucleosynthesis in supernovae assumed equal $`\nu `$ and $`\overline{\nu }`$ interactions (aside from the n-p mass difference). The neutrino-driven wind outside of a protoneutron star is an attractive site for r-process nucleosynthesis. Here nuclei rapidly capture neutrons from a low density medium to produce heavy elements. This requires, as a bare minimum, that the initial material have more neutrons than protons. The ratio of neutrons to protons n/p in the wind depends on the rates for the two reactions: $$\nu _e+n\rightarrow p+e^-,$$ $`(1a)`$ $$\overline{\nu }_e+p\rightarrow n+e^+.$$ $`(1b)`$ The standard model cross sections for Eqs. (1a,1b) to order $`E/M`$ are, $$\sigma =\frac{G^2\mathrm{cos}^2\theta _c}{\pi }(1+3g_a^2)E_e^2[1-\gamma \frac{E}{M}\pm \delta \frac{E}{M}],$$ $`(2)`$ with $`G`$ the Fermi constant (and $`\theta _c`$ the Cabibbo angle), $`E_e=E\pm \mathrm{\Delta }`$ the energy of the charged lepton and $`\mathrm{\Delta }=1.293`$ MeV the neutron-proton mass difference. The plus sign is for Eq. (1a) and the minus sign for Eq. (1b). We use $`g_a\approx 1.26`$. (In principle, there is another correction to Eq. (2) from the thermal motion of the nucleons. This is of order $`T/M`$ and increases both the $`\nu `$ and $`\overline{\nu }`$ cross sections. However, we assume the temperature in the wind $`T`$ is much less than the neutrino-sphere temperature and neglect this term.) Equation (2) neglects small corrections involving the electron mass and Coulomb effects, while the finite nucleon size only enters at order $`(E/M)^2`$. We refer to the $`\gamma `$ term as a recoil correction. It is the same for $`\nu `$ and $`\overline{\nu }`$. $$\gamma =(2+10g_a^2)/(1+3g_a^2)\approx 3.10$$ $`(3)`$ Finally, the $`\delta `$ term involves the interference of the vector (1+2$`F_2`$) and axial ($`g_a`$) currents. This violates P, which by CP invariance also violates C. This increases the $`\nu `$ and decreases the $`\overline{\nu }`$ cross section. $$\delta =4g_a(1+2F_2)/(1+3g_a^2)\approx 4.12$$ $`(4)`$ Here $`F_2`$ is the isovector anomalous moment of the nucleon. (This is the weak magnetism contribution.) We average Eq. (2) over the $`\nu _e`$ spectrum to get, $$<\sigma >_\nu =\frac{G^2\mathrm{cos}^2\theta _c}{\pi }(1+3g_a^2)<E>ϵ[1+2\frac{\mathrm{\Delta }}{ϵ}+a_0\frac{\mathrm{\Delta }^2}{ϵ^2}][1+(\delta -\gamma )a_2\frac{ϵ}{M}],$$ $`(5)`$ for Eq. (1a). Here the mean energy $`ϵ`$ is defined as, $$ϵ=<E^2>/<E>,$$ $`(6)`$ and $`a_2`$ is a shape factor $`a_2=<E^3><E>/<E^2>^2`$. Finally $`a_0=<E^2>/<E>^2`$ and $`<E^i>`$ are the $`i`$th energy moments of the $`\nu _e`$ spectrum. Note, $`ϵ\approx 1.2<E>`$. Likewise, averaging over the $`\overline{\nu }_e`$ spectrum for Eq. (1b) gives, $$<\sigma >_{\overline{\nu }}=\frac{G^2\mathrm{cos}^2\theta _c}{\pi }(1+3g_a^2)<\overline{E}>\overline{ϵ}[1-2\frac{\mathrm{\Delta }}{\overline{ϵ}}+a_0\frac{\mathrm{\Delta }^2}{\overline{ϵ}^2}][1-(\delta +\gamma )a_2\frac{\overline{ϵ}}{M}],$$ $`(7)`$ with the mean antineutrino energy $`\overline{ϵ}=<\overline{E}^2>/<\overline{E}>`$ and $`<\overline{E}^i>`$ the $`i`$th moment of the $`\overline{\nu }_e`$ spectrum. We assume similar shape factors $`a_2`$ and $`a_0`$ for $`\overline{\nu }_e`$ and $`\nu _e`$. The shape factor $`a_2=1.23`$ (1.15) for a Fermi-Dirac distribution with chemical potential $`\mu =\eta T_\nu `$ and temperature $`T_\nu `$ for $`\eta =0`$ (3.5).
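The coefficients in Eqs. (3) and (4) are easy to verify numerically. In the Python sketch below, the value of $`F_2`$ is our assumption — half the isovector anomalous moment $`\kappa _p-\kappa _n=3.706`$ — chosen because it reproduces the quoted $`\delta `$; the illustrative neutrino energy is likewise assumed.

```python
g_a = 1.26
F2 = 3.706 / 2.0   # assumed: (kappa_p - kappa_n)/2

gamma = (2 + 10 * g_a**2) / (1 + 3 * g_a**2)       # Eq. (3)
delta = 4 * g_a * (1 + 2 * F2) / (1 + 3 * g_a**2)  # Eq. (4)
print(f"gamma = {gamma:.2f}, delta = {delta:.2f}")  # ~3.10 and ~4.12

# Fractional nu vs nubar cross-section difference from the +/- delta term:
M, E = 939.0, 15.0   # MeV; E is an assumed typical neutrino energy
print(f"2*delta*E/M = {2 * delta * E / M:.2f}")     # ~13% asymmetry
```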
For simplicity we adopt $`a_2=a_0=1.2`$. (The coefficient $`a_0`$ only makes a very small contribution and our results are insensitive to its value.) The equilibrium electron fraction per baryon $`Y_e`$ (which is equal to the proton fraction assuming charge neutrality) is simply related to the rate $`\overline{\lambda }`$ for Eq. (1b) divided by the rate $`\lambda `$ for Eq. (1a). $$Y_e=(1+\frac{\overline{\lambda }}{\lambda })^{-1}$$ $`(8a)`$ This assumes the neutrino capture rates dominate those for other reactions. Earlier work contains some discussion of the small corrections from $`e^\pm `$ capture. The ratio n/p is, $$\frac{n}{p}=\frac{1}{Y_e}-1.$$ $`(8b)`$ Taking the ratio of Eq. (7) to Eq. (5) gives, $$Y_e=\left(1+\frac{L_{\overline{\nu }_e}\overline{ϵ}}{L_{\nu _e}ϵ}QC\right)^{-1}.$$ $`(9)`$ Here $`L_{\nu _e}`$ ($`L_{\overline{\nu }_e}`$) is the $`\nu _e`$ ($`\overline{\nu }_e`$) luminosity, $`Q`$ is the correction from the reaction Q value, $$Q=\frac{1-2\frac{\mathrm{\Delta }}{\overline{ϵ}}+a_0\frac{\mathrm{\Delta }^2}{\overline{ϵ}^2}}{1+2\frac{\mathrm{\Delta }}{ϵ}+a_0\frac{\mathrm{\Delta }^2}{ϵ^2}},$$ $`(10)`$ and the C-violating term, Eq. (4), contributes the factor $`C`$, $$C=\frac{1-(\delta +\gamma )a_2\frac{\overline{ϵ}}{M}}{1+(\delta -\gamma )a_2\frac{ϵ}{M}}.$$ $`(11)`$ Note, the recoil term $`\gamma `$ makes a small but nonzero contribution to Eq. (11) because the $`\nu `$ and $`\overline{\nu }`$ energies are different. Simply evaluating Eq. (11) for typical parameters yields $`C\approx 0.8`$. Thus, the difference between $`\nu `$ and $`\overline{\nu }`$ interactions reduces the equilibrium n/p ratio by approximately 20 %. This is a major result of the present paper and will be discussed below. Figure 1 shows the values of $`ϵ`$ and $`\overline{ϵ}`$ necessary for $`Y_e=0.5`$. We assume equal luminosities $`L_{\nu _e}=L_{\overline{\nu }_e}`$. The region to the upper left is neutron rich and to the lower right proton rich. The conditions for $`Y_e=0.5`$, assuming $`C=1`$ in Eq. (9), are indicated by the dotted line. Including $`C`$ shifts the conditions for $`Y_e=0.5`$ to the solid line. Thus the difference in $`\nu `$ and $`\overline{\nu }`$ interactions converts the region between the solid and dotted lines from neutron rich to proton rich. We also show in Fig. 1 the values of $`ϵ`$ and $`\overline{ϵ}`$ from a supernova simulation by J.R. Wilson as reported previously. The symbols show how the mean energies evolve with time. As the protoneutron star becomes more neutron rich, the opacity for $`\overline{\nu }_e`$ decreases because there are fewer protons. This allows the $`\overline{\nu }_e`$ to escape from deeper inside the hot protoneutron star. Therefore $`\overline{ϵ}`$ increases with time. Without $`C`$ the wind starts out with $`Y_e\approx 0.5`$ and then becomes neutron rich. With $`C`$ the wind starts out proton rich and ends up with $`Y_e\approx 0.5`$. If $`L_{\overline{\nu }_e}\approx L_{\nu _e}`$ the wind is never significantly neutron rich. If $`L_{\overline{\nu }_e}\gtrsim 1.1L_{\nu _e}`$ the wind will end slightly neutron rich. However, n/p is still 20 % lower with $`C`$ than without. For example, if $`Y_e`$ drops as low as 0.42 in a model without $`C`$ it will only drop to approximately 0.48 when the difference between $`\nu `$ and $`\overline{\nu }`$ interactions is included. With this increase in $`Y_e`$, it is very unlikely that successful r-process nucleosynthesis can take place in the wind of this or similar models. Note, we are being slightly inconsistent to include the $`E/M`$ term in Eqs.
(2,4) for the neutrino absorption while it is not included in the simulation used for $`ϵ`$ and $`\overline{ϵ}`$. Indeed this term could change the location of the neutrino spheres and slightly increase $`\overline{ϵ}`$ and decrease $`ϵ`$. This could cancel a small part of the effect on the n/p ratio. However, our preliminary estimates suggest this change in the spectrum is very small. Including the term in a full simulation would be useful. For completeness we give a C-violating term for neutrino-electron scattering (NES) which may be useful for calculating differences between the $`\nu _x`$ and $`\overline{\nu }_x`$ spectra. The total cross section $`\sigma _e`$ for NES is expanded in powers of $`E/E_F`$ where $`E_F`$ is the electron Fermi energy. To order $`(E/E_F)^2`$, $$\sigma _e\approx \frac{G^2E^2}{\pi }(c_v^2+c_a^2)\frac{E}{5E_F}\left(1\pm \delta _e\frac{E}{E_F}\right),$$ $`(12)`$ with $`\delta _e=4c_vc_a/3(c_v^2+c_a^2)`$, where the plus sign is for $`\nu `$ and the minus sign for $`\overline{\nu }`$. The couplings are $`c_v=2\mathrm{sin}^2\theta _W\pm 1/2`$ and $`c_a=\pm 1/2`$. Here the plus sign is for $`\nu _e`$ and the minus sign for $`\nu _x`$. The C-violating coefficient is $`\delta _e\approx 0.55`$ for $`\nu _e`$ and $`\approx 0.1`$ for $`\nu _x`$. Although this term is nominally of larger order ($`E/E_F`$ for NES than $`E/M`$ for nucleon scattering), the coefficient is smaller, $`\delta _e\ll \delta `$. Therefore we do not expect large differences from NES (except perhaps at low densities). With the approximately 20 % reduction in n/p from the difference between $`\nu `$ and $`\overline{\nu }`$ interactions, there appear to be very serious problems with r-process nucleosynthesis in the wind of present supernova models. In addition to the initial lack of neutrons, one has to overcome the effects of neutrino interactions during the assembly of $`\alpha `$ particles and during the r-process itself. These further limit the available neutrons per seed nucleus. Thus, it is unlikely that present wind models will produce a successful r-process. Of course, the wind in supernovae may not be the r-process site, although this may be unappealing. If the wind is not the site, one must look for alternative environments. However, the effects of neutrino interactions may be very general. The only requirement is that energy transport from neutrinos plays some role in helping material out of a deep gravitational well. Given this, it is quite likely that the n/p ratio will be determined by the relative rates of Eqs. (1a,1b). Therefore differences in $`\nu `$ and $`\overline{\nu }`$ interactions may be important for just about any nucleosynthesis site that involves neutrinos. Indeed, Haxton et al. claim the abundance of isotopes produced by neutrino spallation implies significant neutrino fluences during the r-process. If the $`\nu `$-driven wind is the r-process site, it is very likely that present models of the neutrino radiation in supernovae are incomplete. The high values of $`Y_e`$ make it almost impossible to have a successful r-process by only changing matter properties, such as the entropy. The neutrino fluxes will (almost assuredly) need to be changed. Changes in the astrophysics used in the simulations or new neutrino physics such as neutrino oscillations could change $`\overline{ϵ}`$, $`ϵ`$ and/or the luminosities and lead to a more neutron-rich wind. The oscillations of more energetic $`\overline{\nu }_x`$ with $`\overline{\nu }_e`$ could increase $`\overline{ϵ}`$.
However, we have some information on the $`\overline{\nu }_e`$ spectrum from SN1987a. Thus one cannot increase $`\overline{ϵ}`$ without limit. Indeed, if anything, the Kamiokande data suggest a lower $`\overline{ϵ}`$. Any model which tries to solve r-process nucleosynthesis problems by increasing $`\overline{ϵ}`$ should first check consistency with SN1987a observations. Alternative modifications could include oscillations of $`\nu _e`$ to a sterile neutrino or a lowering of $`ϵ`$. (However, we know of no model which lowers $`ϵ`$.) Whatever the modification of the neutrino fluxes, one will still need to include the differences between $`\nu `$ and $`\overline{\nu }`$ interactions in order to accurately calculate n/p. In conclusion, supernovae are one of the few large systems dominated by energy transport from weakly interacting neutrinos. Therefore, some supernova properties may depend on symmetries and features of the standard model weak interactions. The cross section for neutrino capture is larger than that for antineutrino capture by a term of order the neutrino energy over the nucleon mass. This difference between neutrino and antineutrino interactions reduces the ratio of neutrons to protons in the $`\nu `$-driven wind above a protoneutron star by approximately 20 % and may significantly hinder r-process nucleosynthesis. This work was supported in part by DOE grant DE-FG02-87ER40365. FIG. 1. Mean antineutrino energy $`\overline{ϵ}`$ vs mean neutrino energy $`ϵ`$; see Eq. (6). The solid line indicates an equilibrium electron fraction $`Y_e=0.5`$ including the difference between $`\nu `$ and $`\overline{\nu }`$ interactions (the $`C`$ term in Eqs. (9,11)), while the dotted line shows $`Y_e=0.5`$ without this term. The symbols are the mean energies of a simulation by J.R. Wilson, as reported previously, for the indicated times in seconds after collapse. The $`\nu `$-driven wind is neutron rich in the upper left of the figure and proton rich in the lower right. The region between the dotted and solid lines is converted from neutron rich to proton rich by the $`C`$ term.
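For readers who want to reproduce the character of Fig. 1, the following Python sketch (ours, not the authors' code) solves Eqs. (9)–(11) for the $`\overline{ϵ}`$ that gives $`Y_e=0.5`$ at a few illustrative $`ϵ`$ values, with equal luminosities and the parameter values quoted in the text:

```python
gamma, delta = 3.10, 4.12
a0 = a2 = 1.2
D, M = 1.293, 939.0  # MeV

def Ye(eps, ebar, with_C=True):
    Q = (1 - 2*D/ebar + a0*(D/ebar)**2) / (1 + 2*D/eps + a0*(D/eps)**2)
    C = ((1 - (delta + gamma)*a2*ebar/M) /
         (1 + (delta - gamma)*a2*eps/M)) if with_C else 1.0
    return 1.0 / (1.0 + (ebar/eps)*Q*C)   # Eq. (9) with L_nubar = L_nu

def ebar_for_half(eps, with_C, hi=60.0):
    lo = eps                              # bisection: Y_e falls with ebar
    for _ in range(60):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if Ye(eps, mid, with_C) > 0.5 else (lo, mid)
    return 0.5*(lo + hi)

for eps in (10.0, 13.0, 16.0):            # illustrative nu_e mean energies
    e0 = ebar_for_half(eps, with_C=False)
    e1 = ebar_for_half(eps, with_C=True)
    print(f"eps = {eps:4.1f} MeV: Y_e=0.5 at ebar = {e0:5.1f} (C=1), "
          f"{e1:5.1f} MeV (with C)")
# The C term pushes the Y_e = 0.5 boundary to larger ebar, shrinking the
# neutron-rich region exactly as the figure caption describes.
```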
# Stresses in silos: Comparison between theoretical models and new experiments ## Abstract We present precise and reproducible mean pressure measurements at the bottom of a cylindrical granular column. If a constant overload is added, the pressure is linear in overload and nonmonotonic in the column height. The results are quantitatively consistent with a local, linear relation between stress components, as was recently proposed by some of us. They contradict the simplest classical (Janssen) approximation, and may pose a rather severe test of competing models. PACS numbers: 46.10.+z, 83.70.Fn The prediction of static stresses in dry, cohesionless granular matter has become the focus of renewed attention. Surprisingly, there is no consensus on what is the basic physics involved. Some argue that the behavior is essentially elastic (ultimately justified by the slight elastic deformation of individual grains); others that it is dominated by the extremely nonlinear constraint that tensile intergranular forces are absent. Indeed, some of us have argued that the statics of granular materials can be described, without considering elastic displacements, by assuming a local, history-dependent relation between stress tensor components. This gives hyperbolic equations for the stress field, in contrast to the elliptic (or elliptic-hyperbolic) equations of conventional elastic (or elastoplastic) models. Our approach provides a simple continuum model of ‘force chains’; (physical) force chains become (mathematical) characteristics of the hyperbolic equations. In the simplest case, these form a regular array; stresses propagate through space via a wave equation. According to the model, the medium is ‘fragile’ in a precise sense: it responds linearly to a specific class of ‘compatible’ loads; all others cause plastic reorganization. This approach accounts well for the pressure ‘dip’ below the apex of a conical sandpile poured from a point source. (It also predicts that the dip is absent for a pile made of successive horizontal layers, as recently confirmed by experiment.) However, it has excited strong criticism in some quarters, and certainly demands further experimental test. For example, such models predict that if a small localized overload is placed on top of a granular layer, the excess weight at the bottom is maximal, not directly beneath the weight, but on a ring. To test this directly is difficult, because of strong nonlinearity and (especially) noise effects which hinder the interpretation of data. A more robust and practical situation is the cylindrical granular column, or bin. Here also noise effects come into play; but ways around these (by careful ensemble averaging of experimental data) have been pioneered in earlier work. Below we report precise measurements (beyond those of previous studies) of the effective mass $`M_e`$, supported by the bottom plate, as a function of the total mass $`M_t`$ poured into a (small) bin, with and without an added overload. With no overload, as expected, $`M_e(M_t)`$ first rises linearly, then saturates at a column height comparable to its width; for high bins, most of the mass is ‘screened’ by frictional transfer to the walls. A simple hyperbolic model (called osl, for ‘oriented stress linearity’) gives bin results close to, but different from, the classical Janssen approximation.
In contrast to traditional methodologies, our new ensemble-averaged experiments can distinguish these predictions; we find that osl, which has an extra fitting parameter, is discernibly better. Another classical model (ife, see below) gives wholly inadequate answers unless unphysical values of the wall and bulk friction constants are used. There then follow, from the osl model, two important new predictions for the effect of a uniform overload of mass $`Q`$ at the top of the granular column. First, $`M_e`$ should be linear in $`Q`$; second, for large $`Q`$, $`M_e`$ should be nonmonotonic in $`M_t`$. We find that, with no further fitting, our overload experiments quantitatively confirm the osl predictions, strongly supporting the hyperbolic picture. At the end of this Letter, we comment on the challenge these new results pose to other modelling strategies. First we recall our own approach. By stress continuity, $$\partial _i\sigma _{ij}=\rho g_j$$ (1) where $`\sigma _{ij}`$ is the (symmetric) stress tensor, $`\rho `$ is the density of the material, and $`g_j`$ is the gravitational acceleration. In general one needs extra physical assumptions to close Eq.1. For an elastic body, one assumes a (single-valued) displacement field, and a linear relation between stresses and strains (Hooke’s law). For poured cohesionless grains, the definition of a macroscopic displacement is problematic. Instead we assume that the arrangement of granular contacts gives, on continuum length scales, a definite relation between components of the stress tensor. One such relation, often used in the literature, is the ife (‘incipient failure everywhere’) assumption: that the material is everywhere on the verge of Coulombic failure. Then there exists a (locally varying) set of axes $`𝐧𝐦`$ such that $`\sigma _{nm}=\sigma _{nn}\mathrm{tan}\varphi `$ where $`\varphi `$ is the Coulomb angle. Our modelling strategy instead gives a fundamental rôle to the network of force chains which, if grains are undeformable, must carry forces longitudinally. One interpretation of our equations is that the friction between parallel force chains is fully mobilized; a Coulomb-like condition, $`\sigma _{nm}=\sigma _{nn}\mathrm{tan}\psi `$, then holds (with $`\psi \le \varphi `$ an ‘effective’ friction angle) but the orientation $`𝐦`$, which is directed along the force chains, is now fixed by the construction history and not (as in ife) by the load. (This assumes the load is a compatible one.) For simple construction histories, like piles and bins, we assume that $`𝐦`$ is the same everywhere, up to an inversion through the central symmetry axis; $`𝐦`$ must then have a fixed angle $`\tau `$ to the vertical. In cylindrical polars ($`z,r,\theta `$) with $`z`$ downwards, we recover the osl model: $$\sigma _{rr}=\eta _1\sigma _{zz}+\eta _2\sigma _{rz}$$ (2) with $`\eta _1=\mathrm{tan}\tau \mathrm{cot}(\tau -\psi )`$ and $`\eta _2=\mathrm{tan}\tau -\mathrm{cot}(\tau -\psi )`$. Eq.2 closes the problem in two dimensions ($`d=2`$): inserting it into Eq.1 gives an anisotropic wave equation, with one characteristic along $`𝐦`$, and another along a direction $`𝐦^{}`$ at angle $`\tau -\psi -\pi /2`$. (These can be interchanged without affecting Eq.2; so $`𝐦^{}`$ describes a second family of force chains.) For $`d=3`$, a further closure equation is needed. Our choice here is $`\sigma _{rr}=\sigma _{\theta \theta }`$; but from work on conical piles, we expect insensitivity to this choice.
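For concreteness, the following Python fragment (our sketch; the angles $`\tau `$ and $`\psi `$ are assumed illustrative values, since the experiment fits $`\eta _1`$ and $`\eta _2`$ directly) evaluates the osl coefficients and verifies that the two characteristic slopes, $`\mathrm{tan}\tau `$ and $`-\mathrm{cot}(\tau -\psi )`$, are the roots of $`s^2-\eta _2s-\eta _1=0`$:

```python
import math

tau, psi = math.radians(70.0), math.radians(30.0)  # assumed chain angles

eta1 = math.tan(tau) / math.tan(tau - psi)         # tan(tau)*cot(tau - psi)
eta2 = math.tan(tau) - 1.0 / math.tan(tau - psi)
print(f"eta1 = {eta1:.3f}, eta2 = {eta2:.3f}")     # eta2 > 0 here

for s in (math.tan(tau), -1.0 / math.tan(tau - psi)):
    residual = s**2 - eta2 * s - eta1
    print(f"characteristic slope {s:+.3f}: residual = {residual:+.1e}")
```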
In the bin geometry, the osl model can then be solved exactly ($`d=2`$) or numerically ($`d=3`$). Note that ife, like osl, gives propagative (hyperbolic) equations; but these are nonlinear, unlike our wave equation. For nonzero $`\eta _2`$, the force chain network distinguishes between inward and outward radial directions. This does not contradict the axial symmetry present. But if as well the medium is locally symmetric, then $`\eta _2=0`$; in Eq.2, this recovers a model proposed earlier. The latter can be viewed as a local version of the classical Janssen hypothesis. Janssen proposed a constant ratio between horizontal and vertical stresses, $`\sigma _{rr}=K\sigma _{zz}`$, but neglected altogether their dependence on $`r`$. Assuming also that friction at the wall is fully mobilized, with a friction coefficient $`\mathrm{tan}\varphi _w`$, he found the equation: $$M_e=M_{\infty }\left(1-\mathrm{exp}\left[-M_t/M_{\infty }\right]\right)$$ (3) with $`M_{\infty }=\rho D^2/2K\mathrm{tan}\varphi _w`$ for $`d=2`$, and $`M_{\infty }=\rho \pi D^3/16K\mathrm{tan}\varphi _w`$ for $`d=3`$; $`D`$ is the bin diameter. We turn now to the experimental procedure, described in detail elsewhere. The bin is a tube of diameter $`D=3.8`$ cm, filled with beads of glass (density $`\rho _b=2.6`$ g/cm<sup>3</sup>, diameter $`2`$ mm). The bottom comprises a very stiff scale plate ($`2\times 10^4`$ N/m). Initially, the tube is filled with a low packing density; this is increased by giving it small taps. The bottom plate is then lowered (by a few tens of microns) and the effective mass decreases monotonically to an asymptotic value; $`M_e`$ and the mean density $`\rho `$ are measured. The density is again increased by tapping, the plate lowered and further measurements taken. This entire procedure is done about $`30`$ times – each run giving results for the whole range of densities. The measured results for $`M_e`$ show a certain ($`M_t`$-dependent) ‘error bar’: not a measurement error of the mass, but arising from intrinsic fluctuations in the packing. This protocol is a major advance because (a) an ensemble average value for $`M_e`$ is found, improving accuracy; (b) the downward motion of the base ensures that wall friction is fully mobilized, which might not otherwise be the case. The wall friction angle is measured separately as $`\varphi _w=22^o\pm 2^o`$, thus eliminating one fit parameter. The experimental results, for a packing density $`\rho =1.53`$ g/cm<sup>3</sup>, are compared in Fig.1 with three models: ife (which has no adjustable parameter once the internal friction angle $`\varphi =25^o\pm 2^o`$ is known); Janssen’s equation (one adjustable parameter); and the osl model (two adjustable parameters). Each plotted datapoint is itself a mean value, with an error bar $`\mathrm{\Delta }`$ shown in inset (a). (This is small at small $`M_e`$ but then grows rapidly.) To find the best fits, we have minimized the following: $$E^2=N^{-1}\underset{i}{\sum }(\delta M_e^i/\mathrm{\Delta }^i)^2$$ (4) where $`\delta M_e^i`$ is the difference between the $`i`$th experimental datapoint and the theoretical $`M_e`$ value, $`\mathrm{\Delta }^i`$ the observed error bar, and $`N`$ the number of datapoints. For our data, the (active) ife approach, using the measured friction values $`\varphi `$ and $`\varphi _w`$, is plainly inadequate. Better agreement with ife is found by taking $`\varphi `$ and/or $`\varphi _w`$ as fit parameters. Even then, the fit remains poor (e.g.
$`E=4.43`$ for $`\rho =1.53`$ g/cm<sup>3</sup>); and the fitted values, $`\varphi =\varphi _w=30^o`$, are incompatible with those found by direct experiment. For given $`\varphi _w`$, ife systematically overpredicts the asymptotic stress; so the fitted $`\varphi _w`$ exceeds the real one. In systems where the wall friction is not fully mobilized, the error is harmlessly absorbed by the fit. In our system, the fitted value is higher than the fully mobilized $`\varphi _w`$ measured separately, which is unphysical. Unlike the ife model, Janssen’s model gives a fair approximation ($`E\approx 2`$; Table 1) but, as shown in inset (b), there is a clear systematic deviation: screening by the walls is in turn over- and underestimated for small and large $`M_t`$ values. (Note also that our $`K`$ parameters are higher than those usually reported: but as with ife, low fitted values might compensate for incompletely mobilized wall friction.) This has led two of us to propose elsewhere an empirical model (not shown) where an excess contribution from grains at the bottom of the pile is added to the Janssen result. As shown in Table 1, the best-fit osl model does as well as this empirical model, with an error $`E\approx 1`$: the systematic deviations are reduced, in particular in the first part of the curve. This can be understood by noting that within the osl model, the grains contained within a ‘light-cone’, resting on the bottom plate, cannot interact with the walls; the mass of these grains is completely unscreened. Note the values found for $`\eta _2`$. The minimum of $`E(\eta _2)`$ is not sharp, but positive $`\eta _2`$ is always preferred (as for other types of grains). A positive $`\eta _2`$ means that most of the weight follows the ‘inward’ characteristic, thus reducing the screening effect of the walls. Conversely, in sandpiles (created from a point source) $`\eta _2`$ is negative; this ‘outward’ transfer of weight is responsible for the pressure dip underneath the apex. Positive $`\eta _2`$ could be caused by slight inward avalanches of material as the base is lowered. Its decrease at higher densities might indicate a diminished susceptibility to this effect; alternatively the tapping procedure could progressively erase a local asymmetry induced by the initial fill.
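To make the Janssen baseline concrete before turning to the overload data, here is a minimal Python sketch of Eq. (3) for the geometry quoted above; since Table 1 is not reproduced here, the value of $`K`$ is an assumed placeholder, and the overload form anticipates Eq. (5) discussed next.

```python
import math

rho, D = 1.53, 3.8                 # g/cm^3, cm (values quoted above)
phi_w = math.radians(22.0)
K = 0.9                            # assumed Janssen ratio (fit parameter)

M_inf = rho * math.pi * D**3 / (16.0 * K * math.tan(phi_w))  # grams
print(f"M_inf = {M_inf:.1f} g")

def janssen_Me(Mt, Q=0.0):
    """Janssen forms: f0(x) = 1 - exp(-x), fQ(x) = exp(-x)."""
    x = Mt / M_inf
    return M_inf * (1.0 - math.exp(-x)) + Q * math.exp(-x)

for Mt in (0.25 * M_inf, M_inf, 4.0 * M_inf):
    print(f"Mt = {Mt:6.1f} g -> Me = {janssen_Me(Mt):5.1f} g; "
          f"with Q = M_inf: {janssen_Me(Mt, M_inf):5.1f} g")
# Within Janssen, Me is monotonic in Mt and exactly flat for Q = M_inf;
# the overshoot measured below is what singles out the osl model.
```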
As shown in the inset, these results do indeed obey the linear relation, Eq.5, to good accuracy. A clear overshoot effect is also seen, although any further ‘resonant’ oscillations are small (even theoretically). Note that the osl predictions in Fig.2 use the same parameters as determined previously for $`Q=0`$. Thus osl, with no further fitting, gives a good quantitative account of the data for all $`Q`$. We have shown that simple hyperbolic models , encoding the presence of linear force chains , can be used to reproduce quantitatively the observed stress response of cohesionless granular media, not only in piles , but in bins. The same is not true of the traditional Janssen analysis. Nor is it true of ife; this does predict resonant behaviour (at least in local stresses ), but our results, even without overload, rule it out entirely as a physical model. Any expectation of nonmonotonicity in Fig.2 based on ife would thus have been misplaced. What of other continuum modelling strategies? Much recent work on bins and silos has studied elastoplastic constitutive models (also widespread in soil mechanics), often by a finite-element method. There are many such models, and a recent comparative study found little consensus among them . But we wonder whether these approaches can, with reasonably few fit parameters, reproduce the results of Figs.1 and 2. For example, the observed linearity in $`Q`$ (seen even for $`Q/M_{\mathrm{}}1`$) may set a challenge, although one finds numerically that, after summing stresses over the base, the (non-linear) ife model obeys to a good precision the linear relation (5). Linearity is, of course, also recovered if the material is entirely Hookean. The challenge is then to explain within a purely elastic theory the nonmonotonic (if not oscillatory) curves of Fig.2. The investigation of these important questions is underway . Finally it is important to map out more clearly the domain of validity of the hyperbolic approach (see e.g. ). In particular, our granular columns are tiny: only twenty grains or so across. These data clearly do not rule out a crossover to more conventional elastic or elastoplastic behavior at larger scales (e.g. where the grains start to deform) , although the hyperbolic approach also works well in conical piles up to 1 metre wide . Careful overload experiments on larger bins could be very valuable, as well as local stress measurements, which would reveal more clearly the oscillatory nature of the response. We thank P.G. de Gennes, J. N. Roux and G. Combe for very useful discussions. E.C. and L.V. thank J. Lanuza for technical assistance.
# Discovery of Pulsed X-ray Emission from the SMC Transient RX J0117.6-7330 ## 1 Introduction The PSPC instrument onboard the ROSAT spacecraft made an 8.98 ksec observation of the Small Magellanic Cloud on 1992 Sep 30 - Oct 2, leading to the discovery (Clark, Remillard & Woo 1996) of RX J0117.6-7330, a bright X-ray source within 5 arc minutes of SMC X-1. The source was not detected in an observation of the SMC a year earlier and was found 246 days later to be dimmer by more than 2 orders of magnitude. Further analysis derived an X-ray luminosity of $`2.3\times 10^{37}`$ (D/60 kpc)<sup>2</sup> ergs s<sup>-1</sup> (0.2-2.5 keV), assuming a position in the SMC (Clark, Remillard & Woo 1997). Spectral analysis showed the source to be relatively soft, with a power-law index of around 2.7 (although a power-law is not the best fit model). A Fourier analysis did not reveal any significant periodicities, with the authors lamenting an increase in spectral noise at frequencies below 0.1 Hz. The companion star first suggested by Clark, Remillard & Woo (1996) was observed optically by Charles, Southwell & O’Donoghue (1996) in 1996 January. These authors determined that the B1-2 star of magnitude 14.2 proposed as the companion showed a strong IR excess and Balmer lines and a reddening typical of an OB star in the SMC, thus strengthening the association of the X-ray source with the SMC. They also argue that the luminosity and companion type indicate that the X-ray source is a neutron star (Coe et al. 1998). However, Clark et al. (1997) hypothesize that the system could harbor a black hole based on the e-folding X-ray decay time of 44 days, the rather soft spectrum, and the lack of any neutron star rotation period in the X-ray analysis. We have performed a reanalysis of the ROSAT/PSPC data and coupled it with hard X-ray observations by the CGRO/BATSE instrument. In Section 2, we present evidence for X-ray emission pulsed at a 22.067 second period which definitively establishes the X-ray source as a neutron star. We show the frequency history for RX J0117.6-7330 during this 100 day outburst which reveals an extremely large average frequency derivative of $`8.9\times 10^{-11}`$ Hz s<sup>-1</sup>, corresponding to a spin-up time scale of 16 years. The frequency derivative peaked at $`1.2\times 10^{-10}`$ Hz s<sup>-1</sup>, with the pulse frequency increasing by 1.8% during the BATSE observations. The broad-band X-ray pulse shapes and pulsed flux are calculated in Section 3. Section 4 summarizes our findings and discusses RX J0117.6-7330 in the context of the Be class of HMXB’s. ## 2 Periodicity Search RX J0117.6-7330 was 5 arcminutes from the center of the PSPC field-of-view during an 8985 second exposure taken from MJD 48895.7 - 48897.6 (MJD = JD - 2400000.5). We determine a total source count rate of $`4.43\pm 0.03`$ cts/sec from 0.1-2.4 keV. Photons in a circle of radius 1 arcminute surrounding the source position J2000 RA,Dec: (01 17 36, -73 30 00) were extracted and barycentered using standard FTOOLS software. A total of 33989 photons were available for timing analysis. These photons were collected into 5 msec bins over the full length of the observation, a time span of 162 ksec. An FFT of the resultant time series was then calculated, sensitive to periods in the range from 10 msec to 81000 seconds, with the power per channel normalized to unity using the average power for all frequencies above 0.01 Hz. No frequency derivatives were included at this point of the analysis.
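A schematic version of this search procedure — not the authors' actual pipeline, and with a fabricated event list standing in for the real barycentered photon times — might look like the following Python sketch:

```python
import numpy as np

def fft_power(times_s, bin_s):
    """Bin events, FFT, and normalize power to unit mean above 0.01 Hz."""
    t0, t1 = times_s.min(), times_s.max()
    nbins = int((t1 - t0) / bin_s)
    counts, _ = np.histogram(times_s, bins=nbins, range=(t0, t1))
    power = np.abs(np.fft.rfft(counts - counts.mean())) ** 2
    freqs = np.fft.rfftfreq(nbins, d=bin_s)
    return freqs, power / power[freqs > 0.01].mean()

# Fake data: events modulated at 0.0453 Hz over a shortened 20 ks span
# (the real analysis used 5 ms bins over the full 162 ks).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 2.0e4, 40000))
t = t[rng.random(t.size) < 0.5 * (1.0 + np.cos(2*np.pi*0.0453*t))]
f, p = fft_power(t, bin_s=0.01)
print(f"peak at {f[1:][np.argmax(p[1:])]:.4f} Hz, power = {p[1:].max():.0f}")
```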
A peak of power 30, normalized as above, at a frequency of 0.090825(2) Hz was evident in this analysis, which warranted further study despite increased noise due to the complicated ROSAT exposure-induced window function and spacecraft wobble. Verification of the pulsed signal comes from an archival search of data from the BATSE instrument on the Compton Gamma-ray Observatory. BATSE is capable of nearly continuous monitoring of hard X-ray sources using both the earth occultation technique (Harmon et al. 1993) and timing techniques (Bildsten et al. 1997). A search of archival BATSE FFT results identified a possibly related outburst some 60 days after the 1992 ROSAT observation at a frequency of 0.0914825(2) Hz, with a frequency derivative of $`9.7(2)\times 10^{-11}`$ Hz s<sup>-1</sup>, consistent with the direction of the SMC. With follow-up epoch-folding-based searches of the BATSE data during the ROSAT observation, it became apparent that the originally detected frequency was the second harmonic of the pulse frequency. From six days of BATSE data centered on the ROSAT observation, a barycentric pulse frequency of 0.045316682(55) Hz (MJD 48896.65) and frequency derivative of $`9.81(8)\times 10^{-11}`$ Hz s<sup>-1</sup> were determined. Folding the ROSAT data with this frequency derivative gives a peak power at 0.0453168(3) Hz, consistent with the BATSE pulse period. Figure 1 shows the resultant ROSAT power spectrum calculated using the $`Z_2^2`$ statistic (Buccheri et al. 1983) over a narrow frequency range utilizing the above-stated frequency derivative. Also plotted is the same statistic for the BATSE data at frequencies near the ROSAT signal. The results of searching the BATSE DISCLA channel 1 data (20-50 keV, 1.024s resolution) from 1992 August 16 (MJD 48850) to 1993 January 12 for pulsations from RX J0117.6-7330 are presented in Fig. 2. These searches were performed in six-day intervals, using an epoch-folding based search (see Bildsten et al. 1997) which used only the first and second harmonics of the pulse profile, and incorporated a search in both pulse frequency and frequency derivative. For intervals where pulsations were detected, the pulse frequency and frequency derivative are shown. The frequency derivative peaks at $`1.2\times 10^{-10}`$ Hz s<sup>-1</sup> 25 days before the 1992 ROSAT observations. The pulse frequency increases by 1.8% during the outburst. The signal is present for approximately 100 days, starting about 34 days before the ROSAT/PSPC observation. ## 3 Broad-band Pulse Profile and Flux Using the measured frequency and frequency derivative, we can construct the pulse profile for RX J0117.6-7330 for both soft and hard X-ray energies. Figure 3 shows both the ROSAT/PSPC pulse profile for 1992 Sep 30 - Oct 2 (MJD 48895-48897) and the CGRO/BATSE profile for MJD 48893.65 - 48899.65. Both datasets use the pulse phase model based on the BATSE data. The optimal ROSAT frequency and frequency derivatives are slightly different. While the very high frequency derivative, coupled with the long integration times (6 days for BATSE, 2 days for ROSAT), makes absolute timing comparisons slightly problematic, there is no evidence for a loss of coherence in the BATSE folding and the pulse phases from the two instruments should be directly comparable. Figure 3 shows the pulse shapes for both energy ranges using an epoch at phase zero of MJD 48896.65. An extra phase offset of 0.14 was added in order to make the BATSE minimum correspond to phase zero.
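The epoch-folding step with a frequency derivative can be sketched in a few lines of Python; the ephemeris values are those quoted above, while the event list here is a random placeholder for real barycentered photon times:

```python
import numpy as np

f0, fdot = 0.045316682, 9.81e-11   # Hz and Hz/s at epoch MJD 48896.65

def fold(times_s, nphase=20):
    """Phase-fold with phase = f0*t + 0.5*fdot*t**2 (t from the epoch)."""
    phase = (f0 * times_s + 0.5 * fdot * times_s**2) % 1.0
    profile, _ = np.histogram(phase, bins=nphase, range=(0.0, 1.0))
    return profile

def folding_chi2(profile):
    """Epoch-folding statistic: chi^2 of the profile against a flat mean."""
    m = profile.mean()
    return ((profile - m) ** 2 / m).sum()

rng = np.random.default_rng(1)
times = rng.uniform(-3 * 86400.0, 3 * 86400.0, 50000)  # six days of events
print(f"chi2 = {folding_chi2(fold(times)):.1f}")  # ~nphase-1 for pure noise
# Over six days the 0.5*fdot*t^2 term amounts to ~3 pulse cycles at the
# ends of the interval, so the frequency-derivative search is essential.
```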
The overall profile shapes are similar. The peaks and minima of the lightcurves are generally in phase, but the primary peak in the ROSAT energy range becomes the secondary peak in the BATSE range. Similarly, primary and secondary minima are interchanged. We have tested our timing analysis methods using contemporaneous ROSAT/PSPC and CGRO/BATSE observations of PSR 1509-58. In this case, we find pulse profiles with shapes and radio-phase offsets consistent with previously published results (Greiveldinger et al. 1995, Ulmer et al. 1993). The overall pulse shape for RX J0117.6-7330 in both energy ranges is similar, so it is not out of the question that the shape as a whole has simply shifted. Apparent phase shifts of simple profiles in different energy bands have been previously observed. For example, the 1.2-2.3 keV and 18.4-27.5 keV profiles of GS 0834-430 observed with Ginga by Aoki et al. (1992) show a complex evolution with energy. It would be somewhat coincidental, however, for the phase shift to be such that the minima and maxima still coincide. At this point, we consider the peaks to be aligned, with the relative strengths changing. From these pulse profiles one may determine the pulsed flux and pulsed fraction. Using XSPEC, we calculate a total flux of $`5.1\pm 0.3\times 10^{-11}`$ ergs cm<sup>-2</sup> s<sup>-1</sup>. Similar to Clark et al. (1997), we find the best fit to the ROSAT spectrum is a combination of power-law and bremsstrahlung or blackbody, although a straight power-law fit of index $`2.65\pm 0.07`$ is not much worse. Of the 33989 total counts extracted for the light curve, 3829 comprise the pulsed excess, giving a pulsed percentage of $`11.3\pm 2.3`$%. This corresponds to a total pulsed flux in the 0.2 to 2.5 keV band of $`5.6\pm 1.7\times 10^{-12}`$ ergs cm<sup>-2</sup> s<sup>-1</sup>. In the BATSE energy range, the phase-averaged pulsed fraction is more difficult to assess. An occultation analysis of RX J0117.6-7330 detects a clear signal over the same time frame as the epoch-folding analysis. However, source confusion could significantly contribute to the detected flux. For a 20 day period around the time of the ROSAT observation, the average total flux is $`0.012\pm 0.02`$ cm<sup>-2</sup> s<sup>-1</sup>. This corresponds to an energy flux of $`2.3\pm 0.4\times 10^{-10}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> (assuming a power law index 3.0, 20-100 keV). The BATSE pulsed spectrum is best fit by a thermal bremsstrahlung model with temperature $`18\pm 3`$ keV. The integrated 20-70 keV pulsed flux is $`1.8\pm 0.2\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup> ($`7.8\times 10^{37}`$ (D/60 kpc)<sup>2</sup> ergs s<sup>-1</sup>). These values provide us with a lower limit to the pulsed fraction in the 20-70 keV range of 78% (a lower limit since the measured occultation flux is considered to be an upper limit to the total emission due to possible source confusion, and since the occultation analysis went up to 100 keV). The directly measured flux in the 0.2-2.5 and 20-70 keV bands alone is then at least $`2.3\pm 0.2\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup> ($`1.0\pm 0.1\times 10^{38}`$ (D/60 kpc)<sup>2</sup> ergs s<sup>-1</sup>). ## 4 Discussion X-ray pulsations at a period of 22.07 seconds from the bright X-ray transient RX J0117.6-7330 have been detected in both the 0.1-2.4 and 20-70 keV energy bands. This confirms the identity of the X-ray source as a neutron star rather than a black hole.
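The luminosities just quoted follow from $`L=4\pi d^2F`$ at the fiducial distance; a short Python check of the arithmetic (ours, nothing more):

```python
import math

d_cm = 60.0 * 3.086e21   # 60 kpc in cm

def lum(flux_cgs):
    return 4.0 * math.pi * d_cm**2 * flux_cgs   # erg/s

print(f"{lum(1.8e-10):.2e}")  # 20-70 keV pulsed flux -> ~7.8e37 erg/s
print(f"{lum(2.3e-10):.2e}")  # combined 0.2-2.5 + 20-70 keV -> ~1.0e38 erg/s
```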
Transient X-ray pulsars are typically found in Be systems, which is consistent with, and supports the identification of, the proposed optical counterpart. The CGRO/BATSE detection allows long-term monitoring of the outburst from this source. The hard X-rays are detectable for over 100 days, starting 34 days before the ROSAT observation began. A large average spin-up is present over the duration of the outburst, resulting in a 1.8% change in frequency. The peak frequency changes appear 10-15 days before the ROSAT observation and approximately 20 days after the outburst began. Some of the frequency derivative may be caused by the binary orbit. Some of the variations seen in Figure 2 near the peak of the frequency derivative may be an orbital signature. Such variations are weak, however, compared to the overall accretion-induced changes in frequency. The high intrinsic spin-up rates imply that an accretion disk is present about the neutron star, as is generally seen in the ‘giant’ or type II outbursts of Be/X-ray pulsars (Bildsten et al. 1997). The measured luminosity is at least $`1.0\times 10^{38}`$ erg s<sup>-1</sup> during the ROSAT observation in the combined 0.2-2.5 and 20-70 keV bands. The peak frequency derivative was about 15% higher than during the ROSAT observation. However, we have no data in the energy range from 2.5 - 20 keV. With a complicated ROSAT X-ray spectrum and a changing pulse fraction, it is difficult to extrapolate our results to this energy range. However, it is likely that the luminosity in this range is comparable to that measured in the 0.2-2.5 and 20-70 keV ranges. Thus the peak luminosity, when adjusted for the higher frequency derivative and 2-20 keV emission, is above $`1\times 10^{38}`$ erg s<sup>-1</sup>, and was probably higher than the conventional Eddington limit. This is consistent with the trend for Magellanic Cloud binaries to be much brighter on average than their galactic counterparts, probably due to the absence of metals which supply accretion-inhibiting absorption (Clark et al. 1978; van Paradijs & McClintock 1995). From the peak spin-up rate and standard accretion theory (Bildsten et al. 1997 and references therein), we can obtain a lower limit on the peak luminosity of around $`2.5\times 10^{38}`$ erg s<sup>-1</sup>. This is consistent with the mean luminosities of X-ray binaries in the SMC (van Paradijs & McClintock 1995). It is inconsistent with the source being a galactic object. The pulse profiles in the two energy bands are similar in that they both have a double-peaked structure. However, the main and secondary peaks are interchanged. X-ray binaries typically have pulse profiles which are often strongly energy dependent (White, Swank & Holt 1983). In this case, the double-peaked light curves in both energy bands show the same peak-to-peak separation of 0.5. The pulsed fraction increases from 11% in soft X-rays to at least 78% in hard X-rays. If the overall morphology of the pulses is indeed the same, with the exception of the relative strengths of the two peaks, this may indicate a high magnetic field, since at energies above the cyclotron energy the pulse shape is expected to change significantly (e.g. see Sturner and Dermer 1994). Acknowledgements This project made use of software and data provided by the High-Energy Astrophysics Archival Research Center (HEASARC) located at Goddard Space Flight Center. This work was supported at Caltech in part by NASA NAG 5-3239. ## 5 References Aoki, T., et al.
1997, ApJS, 113, 367 Buccheri, R., et al. 1983, A&A, 128, 245 Charles, P.A., Southwell, K.A., & O'Donoghue, D. 1996, IAUC 6305 Clark, G., Doxsey, R., Li, F., Jernigan, J.G., & van Paradijs, J. 1978, ApJ, 221, L37 Clark, G., Remillard, R., & Woo, J. 1996, IAUC 6282 Clark, G., Remillard, R., & Woo, J. 1997, ApJ, 474, L111 Coe, M.J., Buckley, D.A.H., Charles, P.A., Southwell, K.A., & Stevens, J.B. 1998, MNRAS, 293, 43 Greiveldinger, C., Caucino, S., Massaglia, S., Ogelman, H., & Trussoni, E. 1995, ApJ, 454, 855 Harmon, A., et al. 1993, in “Compton Gamma-Ray Observatory”, AIP Conf. Proceedings 280 (New York: AIP), 314 Sturner, S.J., & Dermer, C.D. 1994, A&A, 284, 161 Ulmer, M.P., et al. 1993, ApJ, 417, 738 van Paradijs, J., & McClintock, J.E. 1995, in “X-ray Binaries”, ed. W.H.G. Lewin, J. van Paradijs, & E.P.J. van den Heuvel (Cambridge: Cambridge University Press), p. 113 White, N.E., Swank, J.H., & Holt, S.S. 1983, ApJ, 270, 711 White, N.E., Giommi, P., & Angelini, L. 1995, BAAS, 185 Figure 1: The ROSAT power distribution encompassing the first harmonic of the pulsed frequency. The BATSE distribution is for a narrow range around the ROSAT detection frequency. Figure 2: BATSE pulse timing analysis of RX J0117.6-7330. The top panel shows the frequency history and the bottom panel the frequency-rate history. Both plots use the best-fit values for 6-day intervals, with the ROSAT observation date shown by the dashed line. Figure 3: The folded light curves for the ROSAT 0.1-2.4 keV (top) and BATSE 20-70 keV (bottom) data. The BATSE profile is limited to six Fourier harmonics, resulting in the smooth shape. Flux errors are given for a set of approximately independent phases.
# A Possible Lateral Gamma-Ray Burst Jet from Supernova 1987A ## 1 Introduction SN1987A in the Large Magellanic Cloud was a rare and unique event thanks to its nearness to us. It has been observed with all available modern instruments since its explosion (e.g., Chevalier 1992) and is expected to put on another magnificent display in a few years, when the expanding ejecta hit the circumstellar ring (e.g., Borkowski, Blondin, & McCray 1997). Perhaps the greatest mystery about SN1987A is the bright companion spot that was observed by optical speckle interferometry (Nisenson et al. 1987, N87 hereafter; Meikle, Matcher, & Morgan 1987, M87 hereafter) about one month after the SN1987A explosion, with a projected displacement from SN1987A of about 17 light-days. Its close proximity to SN1987A, the fact that it was seen for only a few weeks, and its high brightness (about one-tenth of the brightness of SN1987A itself) make it certain that the spot was related to SN1987A itself. Several models were proposed soon after its discovery (Burrows & Subramanian 1987; Rees 1987; Piran & Nakamura 1987; Goldman 1987; Felten, Dwek, & Viegas-Aldrovandi 1989), but close examination showed that there are formidable difficulties with all these models (Phinney 1988). Recently, there was an interesting development in the observations of gamma-ray bursts: the supernova 1998bw was observed (Kulkarni et al. 1998b) to coincide spatially and temporally with the gamma-ray burst GRB980425. This has led to suggestions that gamma-ray bursts (GRBs) and supernovae (SNe) may be related (Wang & Wheeler 1998; Cen 1998). Energetics dictate that if SNe are responsible for producing GRBs, the GRBs have to be beamed; that is, GRBs are jets from SNe. Independently but consistently, the jets are also required to have a beaming angle of a few degrees in order to reconcile the high rate of SN events with the low rate of GRB events. The pressing question that then arises is how to test this scenario, in which the vast majority of SN jets would travel laterally and would not be seen as GRBs due to the small beaming angle. It is the goal of this Letter to examine the properties of such lateral jets, suggesting that the observed bright companion spot of SN1987A may be caused by such a jet from SN1987A. ## 2 A Possible GRB Jet from SN1987A The bright SN1987A companion spot was observed independently by two groups (N87; M87). It was observed at H<sub>α</sub> and several other optical wavelengths using speckle interferometry by the CfA group (N87) on days 30 and 38 after the SN1987A explosion, at a separation of $`0^{^{\prime \prime }}.059\pm 0^{^{\prime \prime }}.008`$ from SN1987A. Adopting a fiducial value of $`50`$ kpc for the distance to SN1987A (Panagia et al. 1991; Gould 1995; Sonneborn et al. 1997; Lundqvist 1999), one obtains a perpendicular separation of $`r_{\perp }=17`$ light-days. Assuming that the spot was due to an ultra-relativistic jet leaving SN1987A at the time of the explosion gives a travel time $`\mathrm{\Delta }t=34`$ days and yields an apparent perpendicular velocity of $`v_{\perp }=0.5c`$ ($`c`$ is the speed of light). Because $`v_{\perp }=c\mathrm{sin}\theta /(1+\mathrm{cos}\theta )`$ for a receding jet moving at essentially the speed of light, where $`\theta `$ is the angle between the jet direction and the observer-SN1987A vector, one finds $`\theta =53^{\circ }`$. Thus, if the spot was due to the working surface of a relativistic jet, the jet was a receding one!
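A quick numerical check of this geometry (ours, not from the Letter): for a jet speed of essentially c, the receding-jet relation above reduces to v<sub>⊥</sub> = c tan(θ/2), so v<sub>⊥</sub> = 0.5c immediately gives the quoted angle.

```python
import math

# Projected separation (17 light-days) over travel time (34 days) -> v_perp/c.
v_perp_over_c = 17.0 / 34.0
print(v_perp_over_c)                 # 0.5

# v_perp = c sin(theta)/(1 + cos(theta)) = c tan(theta/2) for beta ~ 1.
theta = 2.0 * math.atan(v_perp_over_c)
print(math.degrees(theta))           # ~53.1 degrees, as quoted
```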
The spot detected by M87 on day 50 at a separation of $`0^{^{\prime \prime }}.074\pm 0^{^{\prime \prime }}.008`$ is fully consistent with the observations of N87 for a jet traveling at near the speed of light. Interestingly, new image reconstructions from the CfA speckle data show possible indications of a second, weaker jet, with a larger separation, on the opposite side of SN1987A (Nisenson & Papaliolios 1999). Although working-surface models were disfavored earlier (Phinney 1988), in light of this new observation of a counter jet and the possible association of supernovae with GRBs (see §1), it seems worthwhile to re-examine this type of model in the context of GRB jets. Let us now examine the spectral properties of an ultra-relativistic GRB jet (Cen 1998). The jet can be characterized by its initial equivalent isotropic energy $`E_{iso}`$, initial coasting Lorentz factor $`\mathrm{\Gamma }_i`$ and opening solid angle $`\mathrm{\Omega }`$. For the current analysis only an external shock model (Rees & Mészáros 1992) is considered for the jet; the reverse shock is not considered. It is assumed that the externally shocked electrons have a power-law distribution function: $$N(\mathrm{\Gamma }_e)d\mathrm{\Gamma }_e=A(t)\mathrm{\Gamma }_e^{-p}d\mathrm{\Gamma }_e,$$ (1) where $`\mathrm{\Gamma }_e`$ is the Lorentz factor of the electrons in the jet comoving frame, and $`A(t)`$ is a coefficient (to be determined) that is assumed to be a function of time only. Time $`t`$ measured in the burster frame is used as the time variable to express various quantities in the derivations, but the final results are converted to observer's time. We will only consider synchrotron radiation from the shock-heated electrons. For the analysis below we will assume that $`p>1`$ (Tavani 1996), so that the integral of equation (1) is convergent at the high end. We set $$\int _{\mathrm{\Gamma }_e}^{\mathrm{\infty }}N(\mathrm{\Gamma }_e^{\prime })d\mathrm{\Gamma }_e^{\prime }=\mathrm{\Omega }r^2cnt_{cool}\mathrm{\Gamma }(r),$$ (2) where $`n`$ is the number density of the external medium into which the shock is propagating, $`r`$ is the distance of the shock from SN1987A ($`r`$ and $`t`$ are used interchangeably throughout the paper, assuming $`r=ct`$) and $`t_{cool}`$ is the electron cooling time (see equation (4)). Equation (2) is equivalent to stating that the number of electrons with $`\mathrm{\Gamma }>\mathrm{\Gamma }_e`$ at time $`t`$ is the number of electrons that have been shocked within the last $`t_{cool}`$ time interval; earlier-shocked electrons have cooled to lower energies. The last factor $`\mathrm{\Gamma }(r)`$ on the right-hand side of equation (2) accounts for the time boost of a moving object. Integrating equation (2) yields $$A(t)=(p-1)\mathrm{\Omega }r^2cnt_{cool}\mathrm{\Gamma }_e^{p-1}(r)\mathrm{\Gamma }(r).$$ (3) The synchrotron cooling time measured in the comoving frame for an electron with $`\mathrm{\Gamma }_e`$ is $$t_{cool}=\frac{\mathrm{\Gamma }_em_ec^2}{P_e}.$$ (4) The majority of the freshly shocked electrons (as we will adopt $`p\simeq 6`$) have a Lorentz factor $$\mathrm{\Gamma }_e(r)=\mathrm{\Gamma }(r)\frac{m_p}{m_e}\xi _e,$$ (5) where $`\mathrm{\Gamma }(r)`$ is the shock Lorentz factor, $`m_p`$ and $`m_e`$ are the proton and electron masses and $`\xi _e`$ is an equipartition parameter (Waxman 1997).
The synchrotron radiation power, $`P_e`$, for an average electron with $`\mathrm{\Gamma }_e`$ in a randomly directed magnetic field $`B`$ is (Blumenthal & Gould 1970): $$P_e=\frac{4}{3}\sigma _Tc\mathrm{\Gamma }_e^2\frac{B^2}{8\pi },$$ (6) where $`\sigma _T=6.6\times 10^{-25}`$ cm<sup>2</sup> is the Thomson cross section. $`B`$ (Waxman 1997) is linked to the energy density of the postshock external nucleons, $`4\mathrm{\Gamma }(r)^2nm_pc^2`$, by $$\frac{B^2}{8\pi }=4\mathrm{\Gamma }(r)^2nm_pc^2\xi _B,$$ (7) where $`\xi _B`$ is the equipartition parameter for the magnetic field. Now we may proceed to obtain the total emission. For the present purpose it is adequate to assume that the spectral emissivity of each electron is a delta function, $`P_\nu =P_e\delta (\nu -\nu _e)`$, where $`P_e`$ can be expressed by equation (6), and the characteristic synchrotron radiation frequency $`\nu _e`$ for electrons with $`\mathrm{\Gamma }_e`$ is (Rybicki & Lightman 1979) $$\nu _e=\mathrm{\Gamma }_e^2\frac{eB}{2\pi m_ec}.$$ (8) Multiplying equation (1) by $`P_\nu `$, integrating over $`\mathrm{\Gamma }_e`$, and using equations (3,4,6,7,8) gives the total emission in the comoving frame $$j(\nu ,t)=\frac{1}{2}(p-1)\mathrm{\Omega }r^2n(r)m_ec^3\mathrm{\Gamma }_e(r)\mathrm{\Gamma }(r)\nu _e^{-1}(r)\left(\frac{\nu }{\nu _e}\right)^{-\frac{p-1}{2}}.$$ (9) It is noted that the above expression for $`j(\nu ,t)`$ is valid only above a lower cutoff frequency, $`\nu _l`$, since the total energy has to be finite. We adopt the following simple ansatz to obtain $`\nu _l`$: the total radiation emitted during the time interval $`t_{cool}`$ (in the comoving frame) should not exceed the total energy input to the thermalized electrons during the same time interval, which translates to the following relation: $$\int _{\mathrm{\Gamma }_e}^{\mathrm{\infty }}\mathrm{\Gamma }_e^{\prime }m_ec^2N(\mathrm{\Gamma }_e^{\prime })d\mathrm{\Gamma }_e^{\prime }=t_{cool}\int _{\nu _l}^{\mathrm{\infty }}j(\nu ,t)d\nu .$$ (10) Integrating both sides of equation (10) and using equations (3,4,6,7,8) yields $$\nu _l(t)=\left(\frac{p-2}{p-3}\right)^{\frac{2}{p-3}}\nu _e(t),$$ (11) where $`\nu _e`$ is given by equation (8). Note that the derived $`\nu _l(t)`$ is slightly larger than $`\nu _e(t)`$. Below $`\nu _l`$, $`j(\nu ,t)`$ scales as $$j(\nu ,t)=j(\nu _l,t)\left(\frac{\nu }{\nu _l}\right)^{1/3}.$$ (12) Synchrotron self-absorption becomes important only at frequencies lower than those of interest here and is thus ignored in the present analysis. In order to compute $`j(\nu ,t)`$ as a function of time, one needs to specify the circumstellar-medium density distribution and the evolution of the bulk Lorentz factor of the shock. The standard steady-wind model for the distribution of the circumstellar medium of a red supergiant is adopted: $$\rho (r)=\frac{\dot{M}}{4\pi v_wr^2},$$ (13) where $`\dot{M}`$ is the mass-loss rate of the star and $`v_w`$ is the wind velocity. Using $`\dot{M}=4\times 10^{-5}\mathrm{M}_{\odot }`$ yr<sup>-1</sup> and $`v_w=10`$ km/s, as inferred from analysis of SN 1993J (Fransson, Lundqvist, & Chevalier 1996), yields $$n(r)=\left(\frac{r}{r_0}\right)^{-2}\text{atoms/cm}^3$$ (14) with $`r_0=1.1\times 10^{19}\text{cm}`$. This adopted density distribution is in fact quite consistent with the measured circumstellar density of SN1987A (e.g., Sonneborn et al. 1998). It is assumed that radiative losses are small, which is appropriate at the later times of the fireball evolution of interest here.
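For concreteness, here is a small numerical sketch (ours) of the comoving synchrotron frequencies implied by equations (5), (7), (8) and (11), using the wind profile of equation (14). The equipartition parameters and p take the values quoted below in the text (ξ<sub>e</sub> = 1/3, ξ<sub>B</sub> = 1/4, p = 6); the particular choices Γ = 10 and r = 10<sup>18</sup> cm are purely illustrative, made only to exercise the formulas.

```python
import math

m_e, m_p = 9.109e-28, 1.673e-24      # electron, proton masses (g)
c, e = 2.998e10, 4.803e-10           # speed of light (cm/s), electron charge (esu)

def nu_e_comoving(Gamma, r, xi_e=1/3, xi_B=1/4, r0=1.1e19):
    n = (r / r0) ** -2                                              # eq. (14)
    B = math.sqrt(32 * math.pi * Gamma**2 * n * m_p * c**2 * xi_B)  # eq. (7)
    Gamma_e = Gamma * (m_p / m_e) * xi_e                            # eq. (5)
    return Gamma_e**2 * e * B / (2 * math.pi * m_e * c)             # eq. (8)

def nu_l(nu_e_val, p=6.0):
    return ((p - 2) / (p - 3)) ** (2 / (p - 3)) * nu_e_val          # eq. (11)

nu_e_val = nu_e_comoving(Gamma=10.0, r=1e18)
print(nu_e_val, nu_l(nu_e_val))      # comoving nu_e and the cutoff nu_l (Hz)
```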
Then, for our adopted $`\rho (r)`$, we find the following scaling solution for $`\mathrm{\Gamma }(t)`$ (Blandford & McKee 1976): $$\mathrm{\Gamma }(t)=\mathrm{\Gamma }_i(t/t_{dec})^{-1/2}$$ (15) for $`t>t_{dec}`$. For $`t\le t_{dec}`$ we simply set $`\mathrm{\Gamma }(t)=\mathrm{\Gamma }_i`$. The transition time $`t_{dec}`$, measured in the burster frame, is set to be that when the mass of the swept-up circumstellar medium is equal to $`1/\mathrm{\Gamma }_i`$ of the initial fireball rest mass, yielding $$t_{dec}=\frac{E_{iso}}{4\pi m_p\mathrm{\Gamma }_i^2c^3r_0^2}.$$ (16) The flux density (in units of erg cm<sup>-2</sup> s<sup>-1</sup> Hz<sup>-1</sup>) of the jet at the observer, at observed frequency $`\nu _{obs}`$ and observer's time $`t_{obs}`$, is (Blandford & Konigl 1979) $$S_\nu (\nu _{obs},t_{obs})=\frac{1}{4\pi d_{SN}^2}j\left(\frac{\nu _{obs}}{D},\frac{t_{obs}}{1+\mathrm{cos}\theta }\right)D^3\left(\frac{t_{obs}}{1+\mathrm{cos}\theta }\right),$$ (17) where $`D(t)=(1+\beta \mathrm{cos}\theta )^{-1}\mathrm{\Gamma }^{-1}(t)`$ is the Doppler factor of the moving surface. The flux density is then converted to magnitude to compare with observations. Figure 1 shows the magnitudes of the jet at $`6560\AA `$ (solid curve) and $`4500\AA `$ (dashed curve), as a function of time measured in the observer's frame, $`t_{obs}`$ \[note $`t_{obs}=t(1+\mathrm{cos}\theta )`$\]. Note that the open circle at day 98 is from a recent re-analysis of the observational data (Nisenson 1999). The observed points have been dereddened for extinction using the observed color excess $`E(B-V)=0.19`$ for SN1987A (Fitzpatrick & Walborn 1990) and the extinction curve given by Seaton (1979). The following parameter values are used for the results shown in Figure 1: $`\xi _e=1/3`$, $`\xi _B=1/4`$, $`p=6.0`$, $`E_{iso}=2\times 10^{54}`$ erg, $`\mathrm{\Gamma }_i=300`$, $`\mathrm{\Omega }=1.5\times 10^{-3}`$ sr, $`\theta =53^{\circ }`$ and $`d_{SN}=50`$ kpc. All the parameters used are characteristic of the supernova GRB jet proposed by Cen (1998) and are consistent with known GRB observations. Note that $`E_{iso}=10^{54}`$ erg is capable of accounting for the most luminous GRBs observed (e.g., GRB971214; Kulkarni et al. 1998a). A detailed analysis of the jet in the context of a GRB and its afterglows will be given elsewhere. The GRB jet model fits the speckle observations of the spot at $`6560\AA `$ reasonably well over the entire period where observational data are available. However, the model appears to be too “blue”, i.e., it appears to be too bright at shorter wavelengths. For example, the computed spot at $`4500\AA `$ appears to be too bright by about two magnitudes compared to the observed spot. While the model is consistent (not shown in the figure) with infrared observations of SN 1987A (e.g., at $`4.6\mu `$m; Bouchet et al. 1987), it also appears to be too bright in the UV compared to the total flux of SN 1987A (e.g., at $`3100\AA `$; Kirshner 1987) by about a factor of ten, consistent with Phinney (1988). Clearly, more work is needed to improve upon this simple model. One way to avoid excess flux at short wavelengths is to introduce a large, intrinsic color excess, say, $`E(B-V)\sim 1.5`$. The sharp turn near days 30-40 is due to the sharp turn in the spectrum at $`\nu _l(t)`$.
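A quick numerical check (ours) of the deceleration time of equation (16) for the parameters adopted in the text, together with the broken power-law Γ(t) of equation (15):

```python
import math

m_p, c = 1.673e-24, 2.998e10               # proton mass (g), speed of light (cm/s)
E_iso, Gamma_i, r0 = 2e54, 300.0, 1.1e19   # parameters adopted in the text

t_dec = E_iso / (4 * math.pi * m_p * Gamma_i**2 * c**3 * r0**2)  # eq. (16)
print(t_dec)                  # ~3e2 s in the burster frame

def Gamma(t):
    """Bulk Lorentz factor, eq. (15): coasting for t <= t_dec, then t^-1/2."""
    return Gamma_i if t <= t_dec else Gamma_i * (t / t_dec) ** -0.5

print(Gamma(100 * t_dec))     # ~30 after a hundred deceleration times
```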
The peak of the evolution of the jet brightness at a given wavelength corresponds to the epoch when $`\nu =\nu _l(t)`$, and the sharp turn (to faint) of the brightness of the jet at earlier times is primarily due to the fact that the $`D^3`$ term in equation (17) goes roughly as $`t^{3/2}`$ while $`\nu _l`$ increases rapidly with decreasing time (roughly as $`t^{-5/2}`$), combined with the spectral form of $`\nu ^{1/3}`$ below $`\nu _l`$. The evolution of the brightness of the jet past the peak is primarily determined by the combined effect of the evolution of $`\nu _l`$ and $`p`$. The quantity $`p`$ is well constrained by the observed evolution of the optical spot. We find that $`p\simeq 6`$ is required in order to provide an acceptable fit to the observed optical spot. A larger $`p`$ ($`>7`$) would produce too steep a decline around $`t_{obs}\simeq 30`$ days. A smaller $`p`$ ($`<5`$) would produce a flat to rising temporal evolution and is inconsistent with the observation, i.e., the spot would have been visible longer. The “counterspot” on the opposite side (Nisenson & Papaliolios 1999) has an apparent separation of $`0^{^{\prime \prime }}.16`$ at the same time the first spot was seen, giving an apparent superluminal perpendicular velocity of $`v_{\perp ,second}=1.36c`$. If one assumes that this weaker jet was in the exact opposite direction from the first (i.e., $`\theta _{second}=180^{\circ }-\theta =127^{\circ }`$), it is required that $`v_{second}=0.84c`$. However, due to the uncertainties in $`d_{SN}`$, it is possible that $`v\simeq c`$ may be allowed for both jets. It is interesting and should be emphasized that the two jets have unequal strengths, a prediction of the model proposed by Cen (1998) to account for the asymmetrical natal kick of neutron stars (pulsars). The asymmetrical pair of jets would induce star recoil, with the induced bulk velocity of the star being $`650(m_{star}/10M_{\odot })^{-1}`$ km/s (Cen 1998), which moves about $`0^{^{\prime \prime }}.03(m_{star}/10M_{\odot })^{-1}`$ in ten years (using $`d_{SN}=50`$ kpc). This effect might be observable by detecting a shift of the position of the neutron star/pulsar or of the centroid of the debris (Garnavich 1999). Based on the available debris data (Haas et al. 1990; Spyromilio, Meikle, & Allen 1990; Jennings et al. 1993; Utrobin, Chugai, & Andronova 1995; Wang et al. 1996), it seems that the debris does not share the recoil movement of the star but shares the movement of the jet. ## 3 Conclusion It is shown here that the bright companion spot of SN1987A may be due to a receding ultra-relativistic jet traveling at $`53^{\circ }`$ to the observer-to-SN1987A vector, through a circumstellar medium with a stellar-wind-like density $`\rho (r)\propto r^{-2}`$. The model provides an adequate explanation for the evolution of the observed optical companion spot, at least energetically, although more modeling is required to produce a satisfactory color for the spot. The parameters for the jet are characteristic of, or required by, the observed GRBs (with $`E_{iso}=2\times 10^{54}`$ erg, $`\mathrm{\Gamma }_i=300`$) with an opening angle of a few degrees. If the jet had traveled towards us along the line of sight, a very bright GRB would have been seen, with an inferred isotropic energy of $`10^{54}`$ erg. If this model is correct, it implies that at least some GRBs would be seen as going through a medium with density $`\rho (r)\propto r^{-2}`$, rather than a uniform-density medium. It is urgent to systematically search for GRB-supernova associations or supernova-jet associations in order to test this hypothesis.
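An order-of-magnitude check (ours) of the quoted recoil proper motion: 650 km/s sustained for ten years, viewed from 50 kpc.

```python
import math

v = 650e5                      # recoil speed, cm/s (650 km/s)
t = 10 * 3.156e7               # ten years, in seconds
d = 50 * 3.086e21              # 50 kpc, in cm
angle_rad = v * t / d
print(math.degrees(angle_rad) * 3600)   # ~0.027 arcsec, i.e. about 0".03
```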
I thank Arlin Crotts, Dick McCray and Pete Nisenson for stimulating discussions, Jeremy Goodman for suggesting looking for evidence of lateral GRB jets, and Michael Strauss for carefully reading the manuscript. I also want to thank the second referee, Jim Felten, for working tirelessly to help improve the paper. This work is supported in part by grants NAG5-2759, AST93-18185 and ASC97-40300.
# INTENSITY CORRELATION BETWEEN OBSERVATIONS AT DIFFERENT WAVELENGTHS FOR Mkn 501 IN 1997 ## 1 Introduction During several months in 1997, the active galactic nucleus Mkn 501 (z=0.034) went into a very high state of activity. It is the second closest BL Lac object and one of the two extragalactic sources confirmed at very high energy (the other being Mkn 421). The source was extensively observed at wavelengths ranging from radio to VHE (Very High Energy) gamma-rays. Here we essentially use X-ray and gamma-ray information from ASM (All Sky Monitor) on board RXTE and from CAT (Cherenkov Array at Thémis), respectively. The spectral energy distribution of Mkn 501 exhibits two bumps. In agreement with the unified scheme of AGN, the first peak, with a maximum at 10-100 keV, is thought to be produced by synchrotron emission of particles in a jet pointing towards us; the second peak culminates around 1 TeV, making Mkn 501 the hardest BL Lac object ever observed. In Section 2, the data sample is described, while Section 3 provides information about theoretical models. Then, Sections 4 and 5 are devoted to the search for variability and micro-variability in the VHE band, respectively. Sections 6 and 7 present studies of the correlation between emissions in X-rays and gamma-rays and of the correlation between different bands inside the VHE domain. ## 2 Data sample The 17.8 m<sup>2</sup> CAT imaging telescope started operation on the site of the former solar plant Thémis in the French Pyrénées (southern France) in Autumn 1996. A very-high-definition camera of 600 phototubes (4.8° field of view) allows an analysis using the Cherenkov-light distribution inside the image of the cosmic-ray shower. The telescope, the camera and the analysis method have been described in detail elsewhere. After data cleaning, a total of 57.2 hours of observation of Mkn 501 and 22.5 hours on control regions is used to compute the light curve above 300 GeV. This BL Lac object is also monitored on a regular basis by the All Sky Monitor (ASM) on board the Rossi X-Ray Timing Explorer (RXTE), providing information about the X-ray activity in the energy region from 2 to 12 keV. The ASM count rates are determined from the “definitive” ASM data which have a dwell duration larger than 30 seconds and a flux fit with a reduced $`\chi ^2`$-value below 1.5. The light curve is extracted using the “ftools 4.0” package. Figure 1 presents the light curve obtained with the CAT and ASM data between March and September 1997. ## 3 Models In the case of leptonic models, the high-energy emission of blazars can be well explained by inverse-Compton emission of relativistic particles. As BL Lac objects are characterized by the weakness of their thermal component and of their emission lines, the principal source of photons for the Compton interaction should be the synchrotron emission radiated by the high-energy particles themselves (SSC model). In the case of hadronic models, the high-energy emission is produced by pair cascades resulting from initial photopion processes. In both cases, observation of photons with an energy of $`\gamma m_ec^2`$ requires particles with a Lorentz factor of at least $`\gamma `$. Particle acceleration is thus an important ingredient of any model with emission at high energy. The variability of this emission and the correlations between different wavelengths can place severe constraints on both types of models.
## 4 Search for short-term variability The search for short-term variability is of special importance, as it immediately provides an upper limit on the source size $`R`$ for a given Doppler factor $`\delta `$, with few assumptions needed. In order to confront a specific model or define a region in the ($`R`$,$`\delta `$) plane, a few hypotheses must be made. While variability on a daily scale is directly seen in Fig. 1, the search for intra-night variability must be studied in more detail. In this paper the aim is not a systematic study of all scales of variability but the identification of a rise time, i.e. the time necessary for doubling the flux with a significance of at least 3 $`\sigma `$. However, as the source is observed for only about one to two hours per night, we cannot check sub-day variations with durations much larger than about 30 minutes. At the other extreme, if we are interested in a significant rise time of 1-10 minutes, the flux has to be larger than 6 gamma per minute. Only three nights satisfied this requirement: MJD 50551.08 (8.7 $`\gamma `$/min), 50554.13 (14.3 $`\gamma `$/min) and 50606.96 (7.7 $`\gamma `$/min). The night of April 16, with the strongest flare (MJD 50554), appears to be flat when it is studied with time binnings of 1 to 10 minutes; this is also true for the night of April 13. Only June 7 exhibits a light curve with a regular increase of the flux at the beginning of the night (from 3.5 to 9 $`\gamma `$/min) in $`\mathrm{\Delta }t\simeq 30`$ minutes, as shown in Fig. 2. The $`\chi ^2`$ of a fit by a constant to the first 6 points is 13.7. Thus the size $`R`$ of the emission region must be less than $`c\delta \mathrm{\Delta }t`$, which leads to $`R<5.4\times 10^{13}\delta `$ cm. It is possible to combine this limit with two other constraints: * $`\gamma \gamma `$ opacity $`<`$ 1, to allow TeV photons to escape (upper limit); * compatibility with the observed ratio Lum<sub>sync</sub>/Lum<sub>IC</sub> (lower limit). Fig. 3 presents the results obtained assuming a homogeneous source and a particle distribution in $`\gamma ^{-2}\mathrm{exp}(-\gamma /\gamma _{max})`$, which leaves a very restricted domain with $`R<10^{12}`$ cm and $`\delta >`$ 100. These values are quite unrealistic, the Doppler factor being too high. So the hypothesis of homogeneity should certainly be revised for further studies. Moreover, in such a homogeneous model, we would expect a very strong short-term correlation between X-rays and gamma-rays, which is not observed (see Section 6). ## 5 Search for micro-variability As suggested by M. Urry (private communication), a flare could result from the superposition of many “micro-flares” with durations of a few seconds. We can study the arrival-time distribution of the photons for the 1.5 hours of observation taken on April 16. Because of the very high flux ($`\sim 8\times `$ the Crab flux), we can directly use the “ON” data, which contain about 90% of gammas. The result is presented in Fig. 4. No deviation from the expected exponential distribution is observed: no flares of a few seconds contribute significantly to the very high flux observed during this night. ## 6 Search for correlation X-gamma A correlation between the X-ray emission, due to synchrotron radiation, and the gamma-ray emission, assumed to be produced by inverse-Compton scattering, would reinforce the conclusion that the same population of particles is at the origin of both emissions. It does not presume the nature of these particles, leptonic or hadronic.
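The causality limit quoted above is a one-line check (our own arithmetic):

```python
c = 2.998e10          # speed of light, cm/s
dt = 30 * 60.0        # rise time Delta_t ~ 30 minutes, in seconds
print(c * dt)         # ~5.4e13 cm, so R < 5.4e13 * delta cm
```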
Fig. 5 presents the nightly correlation of the flux of Mkn 501 as measured by ASM and CAT. We restricted the sample to the months of April and June, when the flux was variable and enough data were available. The night of April 16 is not included in the sample because it strongly dominates the fluctuations in the TeV band and does not correspond to any flare in the 2-12 keV band. Unfortunately, it is not possible to study the correlation at shorter time scales because of the differences in time sampling: CAT observes the source consecutively for 1 to 3 hours per day, while ASM takes data for 90 seconds roughly every hour, so the statistics in the X-ray band are not significant for bins of only a few hours. The rather low correlation coefficient ($`\sim 0.35\pm 0.10`$) could be explained by these differences in time sampling. Although we cannot constrain delays between X-ray and gamma-ray fluctuations in this way, we do have evidence of simultaneous evolution in both energy ranges, supporting the hypothesis of a common origin for both emissions. The lack of an ASM flare corresponding to the April 16 gamma flare, despite an increase of the flux observed by BeppoSAX in the 1-200 keV band simultaneously with the CAT flare, is understandable if the spectral energy distribution follows the evolution schematized in Fig. 6. ## 7 Search for correlation gamma-gamma One can also search for a time delay between “hard” and “soft” CAT photons. In an inhomogeneous model with pair creation (as described elsewhere for an external-Compton model), the most energetic photons should lag the softer ones because they can only escape later. In order to test this hypothesis, the light curves in two energy bands were computed for two nights (see Fig. 7). Whether the total flux is constant or increasing, no experimental evidence for a time delay is seen. This chaotic behaviour calls for a detailed study of a time-dependent inhomogeneous model. Such models are in progress (for a model including only external Compton interaction, see Renaud & Henri, these proceedings). ## 8 Conclusion During its strong outburst of 1997, the BL Lac object Mkn 501 was extensively observed at many wavelengths, in particular by the All Sky Monitor in the X-ray band and by CAT at TeV energies. The search for variability and correlations can give clues to the understanding of this still mysterious class of AGN. With a rise time of 30 minutes and a nightly correlation between the X-ray and gamma-ray emissions, models in which a single population of particles, leptonic or hadronic, yields both synchrotron radiation and inverse-Compton emission in a compact zone are reinforced. More detailed studies indicate that a simple homogeneous model cannot account for the observations, and further refinements such as inhomogeneity seem to be necessary.
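For illustration only, a minimal sketch of the kind of nightly flux correlation described in Section 6; the arrays here are made-up placeholders, not the actual ASM/CAT measurements.

```python
import numpy as np

# One entry per night with simultaneous coverage (hypothetical values).
asm_flux = np.array([0.8, 1.1, 0.9, 1.4, 1.2])   # ASM count rates
cat_flux = np.array([2.1, 3.0, 2.4, 3.8, 2.9])   # CAT gamma rates

# Pearson correlation coefficient between the two nightly light curves.
print(np.corrcoef(asm_flux, cat_flux)[0, 1])
```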
Figure 1: A variant of Crookes' radiometer is an example of chiral interaction (CI). The asymmetry in the optical absorption coefficient between the black and the silver blades generates a temperature difference between them when light shines on the device. This expands the air close to the black blade which, in turn, pushes it around the axis AB in the preferred direction, towards the black vane. This is an example of physical rather than geometric chirality. MECHANICAL ASPECT OF CHIRALITY AND ITS BIOLOGICAL SIGNIFICANCE G. Gilat Department of Physics Technion, Haifa 32000, Israel Abstract Chirality is not just a structural artifact in biology; it may provide a genuine biological advantage. This is due to the phenomenon of chiral interaction (CI), which is described here for mechanical chiral devices. The main mechanical feature of chiral interaction is its mode of selecting one direction of rotation out of two possible and opposite ones. For example, a given chiral device, such as a rotating water sprinkler, rotates in one direction. What does rotate in the opposite direction is the mirror image of this given sprinkler. This mode of operation indicates space-time (PT) invariance, which also causes it to be time-irreversible. This also causes a chiral device to become non-ergodic on a microscopic level. This prevents certain chiral systems from readily reaching thermal equilibrium and causes the system to act non-ergodically, which is crucial for living systems as well as for molecular evolution. I. Introduction The phenomenon of structural chirality of crystals and molecules has been recognized since the early 19th century, when Arago<sup>1</sup> and Biot<sup>2</sup> demonstrated the effect of optical activity in quartz crystals. Louis Pasteur<sup>3</sup> was the first to observe chirality on a molecular level and described it as “dissymmetry”. The term “chirality” was first proposed by Kelvin<sup>4</sup>, who also defined this concept as a property of any object that cannot completely superimpose on, or overlap, its mirror image. It is well known nowadays that most biological molecules consist of only one out of two possible enantiomers, e.g., left-handed (L) amino acids or right-handed (D) sugars. This phenomenon leads to an interesting question concerning the origin of such a selection, and there exist several speculations that try to solve this enigma. A considerably more constructive question to ask is: “why are the molecules of life chiral?”, or “is there any biological advantage in their chiral nature when compared to achiral molecules?” And the answer to be given here is: “Yes”, regardless of their being L or D. The source of such an advantage comes from a specific type of interaction that exists between various mechanical devices and different media, such as a flow of air or water, and even light radiation. What is special about this interaction is the presence of chiral structure in these devices, which makes their mode of operation quite different from other interactions based on achiral objects such as the Newtonian mass point. Such an interaction is to be labeled “chiral interaction” (CI), and it has already been described and treated in several publications.<sup>5-7</sup> It is interesting to note that this phenomenon of chirality is largely overlooked in classical mechanics, and only a few physicists are aware of it.
II. Chiral Interaction in Mechanical Devices As mentioned above, chiral interaction is not limited to molecular structures only; there exist various mechanical chiral devices that function according to the same principle. The most spectacular example is the rotating windmill. When wind blows at the rotors of a mill, it “knows” immediately in which direction to rotate, clockwise or anti-clockwise. If the windmill, in particular its vanes, were symmetric with respect to the axis of rotation, the mill would not be able “to make up its mind” in which direction to rotate. The shape of the vanes that come in contact with the wind is designed to break the L-D symmetry in order to choose one specific sense of rotation out of two possible ones. In other words, the shape of the vanes, where they come in contact with the wind, is chiral. Another simple mechanical chiral device is the rotating water sprinkler, or the wind propeller. The next example, shown in Fig. 1, is somewhat more sophisticated, and it depends on a different mode of chirality. This device is a simple variant of Crookes' radiometer. The active medium in this case is light radiation, and the element of chirality consists of two different colors on the two sides of the rotating blades, black and silver, respectively. This is a special example of a physical rather than a geometric chirality. Physical chirality<sup>7,8</sup> is represented by a chiral distribution of a physical property rather than by a chiral geometric shape. Physical chirality differs from a geometric one in its capability of interacting with various media surrounding it. In the case of this special variant of the ordinary Crookes' radiometer, the physical distribution of the black and silver colors on the blades represents a large difference in the light absorption coefficient of the blades. The silver side reflects the light back, whereas the black side absorbs the light and therefore becomes warmer in comparison to the silver one. This causes the air on the black side to become heated; as a result it expands and pushes back the black blade, which ends up rotating the device in one preferred direction out of two possible ones, that is, in the direction of the black side of the blade. The selection of the sense of rotation of the blades is made by the variance of colors on the blades and their interaction with light. The physical chirality<sup>7-8</sup> here is represented by the distribution of the optical absorption coefficient on the blades and not by their geometric shape. So far, all the examples presented here are of a mechanical nature, i.e., the effect of chiral interaction (CI) results in a mechanical rotation in one preferred direction, out of two possible ones, around a given axis of the device. This is so because the source of the interaction, i.e. the medium, is usually external to the chiral device. In the case of an electric device which generates a static current flowing in one preferred direction out of two possible ones, the source of the interaction may be embedded within the device. This is the case, for instance, of an electric cell which consists of two different electrodes coming in contact with an electrolyte. It is obvious that in order to reverse the direction of the current it is necessary to interchange the two electrodes with one another, but this does not necessarily require any chiral operation.
This is so because the source of the current flow is internal, so that the structure of the device can be designed to be completely symmetric, as is the case for a cylindrical battery. In the case of an electric thermocouple, the operation can still be regarded as CI, since the source of the interaction, i.e. the temperature difference, is external to the device. To summarize the main features of CI in mechanical devices: in all the examples given here there exists a specific medium with which the chiral device interacts, and this always happens at an interface separating the device from the active medium. The physical chirality is built into this very interface. CI is a process by which energy is transferred from the active medium into the chiral device, causing a rotational motion that is usually of a mechanical nature. The most significant aspect of the chiral-interaction process is its mode of selecting only one direction of rotation out of two possible ones, which is to be attributed to the chiral nature of the device. The mirror image of the given chiral device, interacting with the same medium, produces the same rotational motion in the opposite direction. This is to be regarded as a main feature of chiral interaction. The effect of CI on a molecular level is less recognized in comparison to that of macro-chiral devices. The main reason for this is that CI occurs mainly within the chiral system, or molecule, in the form of a small perturbation which is not easy to detect experimentally. Much more recognizable are the physical effects associated with molecular chiral structure, such as optical activity and related effects. These are to be regarded as “chiral scattering” rather than CI, since the observable effect concerns the polarized light being scattered away from the chiral molecule rather than the effect on the molecule itself, which is the chiral interaction<sup>7</sup>. A physical model of chiral interaction (CI) in soluble proteins and amino acids has already been developed and described in detail in several publications<sup>5-7,9-10</sup>. The description here contains only a few main features of this model. The active medium in this model consists of the random motion of ions throughout the solvent, which is mostly ordinary water. The chiral element that interacts with these ions is an electric dipole moment that exists in the protein structure. This interaction causes the moving ion to be deflected away from its original track of motion, which creates a continuous perturbation along the $`\alpha `$-helix of which the proteins consist, and this perturbation moves along the helix in one preferred direction out of two possible ones. This is an abbreviated description of the model of the CI that occurs in soluble proteins; a more detailed description appears in earlier publications.<sup>5-7,9-11</sup> The perturbation resulting from this CI is of an electric nature rather than a mechanical one. Another interesting aspect of this CI is that it happens at an interface separating the interior of the protein molecule from the solvent, and this is due to the globular structure of the soluble protein. It is well known that all soluble proteins become globular before they can function as enzymes<sup>12</sup>. As mentioned above, chiral interaction is not easy to observe experimentally on a molecular level due to the smallness of the effect. Nevertheless, there exists strong supporting evidence from an experiment performed by Careri et al.<sup>13</sup>.
This experiment concerns the effect of dehydration on the protonic, or ionic, motion throughout the hydration layers surrounding soluble proteins. The amount of water around each protein is crucial for free protonic motion around the molecule. By dehydrating these water layers, a level is reached at which protonic motion becomes hindered and stops, and so, simultaneously, does the enzymatic activity of the protein molecule. On re-hydrating the molecule, protonic motion becomes possible again and this, in turn, also restores the enzymatic activity of the protein molecule. This experiment shows that free ionic motion around soluble protein molecules is crucial for their enzymatic activity. III. Physical Aspects and Biological Significance The main objective of the present article is to draw several physical conclusions from the phenomenon of chiral interaction in macro-chiral devices, conclusions which are quite different from the regular rules that exist in classical physics. The source of these differences is the presence of chirality as a major physical feature, instead of the Newtonian mass point that plays a basic role in classical mechanics and is of ideal spherical symmetry, that is, completely achiral. From these conclusions analogies can be drawn for the function of molecular chiral systems, which may well be of considerable significance in molecular biology. The first conclusion concerns the symmetry operation of time reversibility that exists in many examples of classical physics. In the case of chiral devices, time reversibility does not exist. The windmill, for example, rotates about its axis in a given direction due to its chiral design. Upon reversing time, the rotational velocity changes its direction, so that the mill rotates in the opposite direction. This cannot happen mechanically, since there is no mechanism in a windmill that can rotate it backward. What rotates in the opposite direction is the mirror image of the given windmill, but not the given windmill itself. The meaning of this mode of symmetry operation is that a windmill is time-irreversible, but it obeys space-time invariance. Let us now denote space and time inversion by $`P`$ and $`T`$, respectively: then a windmill obeys the PT-invariance transformation. The same is true for all the examples of macro-chiral devices given here. The same is also true for the protein-molecule example. This rule of PT invariance (or CPT invariance) is recognized in physics due to the presence of spin in quantum mechanics, but it is absent in classical physics, because the concept of structural chirality is largely ignored there. This concept appears much more in chemistry, due to the presence of many chiral molecules in organic compounds, but chirality is mostly regarded and treated in chemistry in terms of shape rather than in terms of its physical properties and content. For this reason the concept of CI has so far been largely overlooked in research concerning chirality. These space-time symmetry operations for chiral devices also contain a certain aspect of practicality. This is in contrast to their presence in the domain of elementary particles in physics. From this viewpoint, any time-reversible process is almost completely useless from any aspect of practicality. For instance, any machine operation that produces a certain function or object, or any information-transfer process, is completely time-irreversible.
These also include biomolecular functions such as enzymatic activity and other processes which are totally time-irreversible. For such reasons of practicality, the function of chiral devices or molecules is of special significance in comparison to the time-reversible phenomena that appear in many physical operations involving the Newtonian mass point. The next consideration involves the mode of selection whereby only one direction of rotation is excited by CI, whereas the opposite direction remains largely inactive. Judging from a thermodynamic aspect, what is happening here is that only one half of the energy that can be activated by the device is excited by CI, whereas the other half remains inactive. On a molecular level this means that only one half of the energy states of the system become populated by CI and active, whereas the other half remains empty. In other words, the system does not readily reach thermal equilibrium. This conclusion is of very substantial and significant meaning for living systems, because reaching thermal equilibrium means death. Another way to look at this effect is from the viewpoint of ergodicity. This concept was introduced by Boltzmann about a century ago, and it regards the mode of approaching thermal equilibrium of a single particle. This is done via a time average instead of an ensemble statistical average. In view of this, the average velocity of such a particle in any given direction approaches zero as a function of time. This is not the case if, for instance, the average angular velocity of a windmill is considered as a function of time. This is, actually, true for any effect of CI when averaged as a function of time. The selection of one direction of motion out of two possible ones, which is typical of CI, makes its mode of motion a non-ergodic one, which again causes it to avoid thermal equilibrium. This property of CI on a microscopic level is, apparently, one of the most crucial advantages that chirality, or CI, contributes to molecular biology. It postpones thermal equilibrium, or death, for a considerable length of time, so that the biological function of these molecules can go on without being affected by the approach to thermal equilibrium. In this context it is also interesting to mention Schrödinger, who became interested in the phenomenon of life and wrote the book “What is Life?” in 1944<sup>14</sup>. His main conclusion in this book was: “It feeds on negative entropy”, and this is exactly what CI performs in its mode of selecting only one direction of motion: it reduces the entropy of the system. In relation to the phenomenon of non-ergodicity it is also important to mention its relevance to the process of evolution, which is crucial in biology. It is reasonable to deduce that systems that readily reach thermal equilibrium never undergo the process of evolution and remain basically unchanged forever. Non-ergodic molecular systems have a better chance to undergo evolutionary changes. IV. Discussion and Conclusions Another aspect of CI regards the nature of this effect, as well as the amount of energy that is involved in such a process. In discussing this aspect it is not relevant to consider macro-chiral devices; our main concern is the CI of biomolecular systems.
Unfortunately, our present knowledge of this effect is very limited, mainly because of the small amount of energy involved, which is in fact subthermal in size<sup>5-6,10</sup> and hence quite difficult to observe experimentally. This may evoke criticism as to its possible significance. Such criticism is rather common among scientists, who tend to attribute significance to energy according to its size. What may be much more significant than the amount of energy involved in a process is its quality, or degree of sophistication. This is particularly so in complex systems such as certain biomolecules, proteins for example. The time-irreversibility of CI contributes such a degree of sophistication. In addition, there exist quite a few examples of highly sophisticated modes of energy which require rather minute quantities of energy. For instance, an information-transfer process requires a high degree of sophistication in wave modulation, while its energy content is relatively small. In comparison, boiling a kettle of water requires much more energy, but what is its degree of sophistication? Another example is the small amount of energy required to switch a much larger source of energy on and off. This example can be regarded as a control-mechanism mode of energy, which may also be the significance of CI in biology. Another, rather cruel, example concerns the magnitude of the energy change that occurs over the short time interval during which a creature ceases to live. The change in energy is quite small, but its significance is impressive. In these examples and many others, the amount of energy involved is of little interest; their main effect lies in their degree of sophistication. It is too early to attempt to specify any definite mode of sophisticated performance of CI on a biomolecular level. Such effects have to be studied further in order to become better understood. The experiment of Careri et al.<sup>13</sup> provides supporting evidence for the significance of CI in the enzymatic activity of proteins. It is quite reasonable to assume that in biology, or in any living substance, the existence of such highly sophisticated, low-energy signals may have an important function in life processes. In conclusion, let us mention again the significance and importance of the phenomenon of chirality in biology, in particular the features of chiral interaction (CI) that differ greatly from those of classical physics, which does not contain chiral structure in its interactions. These include the PT invariance of chiral interaction, which causes it to be time-irreversible. The selective nature of CI, preferring one mode of motion out of two possible ones, enables it to become non-ergodic, which is a crucial element in life processes and biological evolution. References 1. F. Arago, “Memoires de la Classe des Sciences Math. et Phys. de l'Institut Imperial de France”, Part 1, p. 93 (1811). 2. J.B. Biot, “Memoires de la Classe des Sciences Math. et Phys. de l'Institut Imperial de France”, Part 1, 1 (1812). 3. L. Pasteur, Ann. Chim. 24, 457 (1848). 4. W.T. Kelvin, “Baltimore Lectures”, C.J. Clay & Sons, London (1904). 5. G. Gilat, Chem. Phys. Lett. 121, 9 (1985). 6. G. Gilat, Mol. Eng. 1, 161 (1991). 7. G. Gilat, “The Concept of Structural Chirality”, in “Concepts in Chemistry”, Ed. D.H. Rouvray (Research Studies Press and Wiley & Sons, London, New York, 1996), p. 325. 8. G. Gilat, J. Phys. A 22, p.
L545 (1989); ibid., Found. Phys. Lett. 3, 189 (1990). 9. G. Gilat and L.S. Schulman, Chem. Phys. Lett. 121, 13 (1985). 10. G. Gilat, Chem. Phys. Lett. 125, 129 (1986). 11. G. Gilat, to appear in the Proceedings of a Conference on “Biological Homochirality”, Serramazzoni, Italy, September 1998. 12. H. Tschersche, in “Biophysics”, Eds. W. Hoppe, W. Lohmann, H. Markl & H. Ziegler (Springer Verlag, Berlin, 1983), p. 37. 13. G. Careri, A. Giansanti and J.A. Rupley, Phys. Rev. A 37, 2763 (1988). 14. E. Schrödinger, “What is Life?”, Cambridge University Press, Cambridge, 1944.
# Magnetotransport in manganites and the role of quantal phases I: Theory ## Abstract A microscopic picture of charge transport in manganites is developed, with particular attention being paid to the neighborhood of the ferromagnet-to-paramagnet phase transition. The basic transport mechanism invoked is inelastically assisted carrier hopping between states localized by magnetic disorder. In the context of the anomalous Hall effect, central roles are played by the Pancharatnam and spin-orbit quantal phases. PACS numbers: 75.30 Vn, 03.65 Bz, 71.23 An Introduction: The double-exchange interaction (DEI) has long been understood to play a major role in the ferromagnet-to-paramagnet transition (FPT) in the manganite systems La<sub>1-x</sub>A<sub>x</sub>MnO<sub>3</sub> (where A stands for Ca, Sr or Pb), the transition being accompanied by a metal-insulator transition (MIT). In this DEI picture, proposed by Zener and elaborated by Anderson and Hasegawa, intra-atomic Hund's Rule coupling leads to a modulation of the amplitude for the hopping of outer-shell carriers between neighboring Mn ions. It is now recognized, however, that the physics of the DEI is insufficient to fully explain the observed phenomenon of colossal magnetoresistance (CMR) (i.e., the strong magnetic-field-induced suppression of the resistivity, and the shift to higher temperatures of the peak in its temperature dependence). Moreover, interest in CMR has led to a re-examination of the nature of the FPT and the MIT in manganites and related compounds. Millis, Shraiman and co-workers have proposed that the DEI is accompanied by a large Jahn-Teller lattice distortion that would cause the polaronic collapse of any conduction band. Varma and Sheng et al. have argued, in contrast, that the MIT in manganites is an Anderson localization transition, resulting from magnetic and nonmagnetic disorder. The purpose of the present Letter is to address charge transport in manganites in the vicinity of the FPT and MIT from the vantage point afforded by the Hall effect. In a companion Letter, we present and analyze experimental data on the Hall effect and CMR in La<sub>2/3</sub>(Ca,Pb)<sub>1/3</sub>MnO<sub>3</sub>. We shall argue that, near the FPT, owing to charge-carrier localization, transport is via hopping between localized states. The central part of our analysis is the discussion of the microscopic mechanism of the Hall effect (HE) in manganites. In ferromagnetic metals, HEs include an ordinary Hall effect (an OHE, which arises from the Lorentz force acting on the current carriers) as well as an anomalous Hall effect (AHE), i.e., a Hall current proportional to the average magnetization and independent of demagnetization effects. For metallic states, microscopic mechanisms yielding the AHE have been discussed elsewhere, the essential ingredient being the spin-orbit interaction (SOI), which leads to an AH current in the presence of magnetization (of any origin). If charge transport near the FPT and MIT in manganites does indeed occur via hopping, then we are led to the general issue of the microscopic mechanism of the AHE in hopping conductors. This AHE cannot be captured by a picture based solely on the Anderson-Hasegawa analysis of the DEI within a pair of Mn ions.
Such a picture includes only the modulation of the magnitude of the hopping between the pair, determined by the relative alignment of the core spins on the ions \[via a factor $`\mathrm{cos}(\theta /2)`$, where $`\theta `$ is the angle between the (semiclassical) directions of the core spins\]. This insufficiency of a pair-based picture is an analog of Holstein's observation that capturing the OHE in hopping conductors requires the analysis of at least triads of atoms, and of the attendant Aharonov-Bohm (AB) fluxes through the polygons whose vertices are the atomic sites. Therefore, we shall examine a mechanism for the AHE involving hopping within triads of sites, in which fundamental roles are played by two quantal phases: (i) the SOI phase, acquired by electrons propagating in the presence of SOI; and (ii) the (quantal) Pancharatnam phase (an electronic analog of the (optical) Pancharatnam phase accrued by classical light under a sequence of polarization changes). In this electronic analog, outer-shell carriers, hopping from ion to ion, acquire a phase determined by the solid angle subtended by the spherical polygon whose vertices are the orientations of the core spins of the ions visited. Recently, Kim et al. revisited the theory of the AHE in the context of a model that includes DE, SOI, and gauge fluxes arising from interactions. In work done in parallel with the present work, Ye et al., focusing on the metallic regime, address the relationship between the AHE, Berry phases and the SOI, and, like the present work, incorporate the effect of topological spin excitations. Localization of carrier states in manganites: Several general ideas support the notion that the carrier states are localized at temperatures near the (zero-magnetic-field) FPT, as well as at higher temperatures. Approaching the FPT from the ferromagnetic side, there is a net magnetization of the core spins, but strong thermal fluctuations render typical instantaneous configurations of the spins rather inhomogeneous. Among these fluctuations there are “hedgehog” excitations which, owing to their topological stability, are long-lived and become more numerous as the FPT is approached. Due to the resulting inhomogeneity, the carrier-transfer matrix elements are reduced. In the quasi-static approach, the (fast) carrier motion takes place through a slowly (time-)varying background core-spin configuration. In generic instantaneous random backgrounds, the carriers are expected to be localized. Support for this notion comes from the close similarity between manganites and a system of randomly located identical impurities (i.e., off-diagonal disorder), for which localization has been established by Lifshitz. Although the spin-induced randomness in manganites \[arising from the random $`\mathrm{cos}(\theta /2)`$ factors\] is weaker than the randomness considered by Lifshitz, the two systems are expected to exhibit similar localization behavior. Furthermore, the condition for localization (viz., that the characteristic spatial scale of the outer-shell wavefunctions be much smaller than the distance between sites) is well obeyed in manganites. Therefore, provided that there is appreciable randomness in the core-spin orientations, the transport properties should be determined by the short-distance physics of clusters of ions and the magnetic correlations between such clusters. Moreover, nonmagnetic disorder and possible states bound to the A-ions are capable of amplifying the trend towards localization.
Thus the following picture of transport in manganites emerges. (i) In the paramagnetic insulating state, the percolative motion of strongly localized carriers is suppressed by magnetic randomness. (ii) With decreasing temperature, the carrier hopping (which is assisted by phonons) becomes less frequent, so that the resistivity grows, and (iii) reaches a maximum when the core spins become sufficiently correlated that a tenuous but infinite conducting network emerges. (iv) With further reduction in temperature, the resistivity decreases abruptly, in line with the traditional percolation picture , as more and more hopping paths become available to carriers, owing to further alignment of core spins. This abrupt decrease terminates when the newly available hopping paths are effectively shunted by the existing network. (v) Further decrease in temperature leads to further core-spin alignment and, ultimately, to the metallic state. Anomalous Hall effect and the Pancharatnam phase: In order to discuss the AHE in conditions of charge-carrier localization, we begin by considering a triad of magnetic sites formed by neighboring Mn ions, as shown in Fig. 1. Within such triads, there is an elementary AHE, which arises from interference between hopping processes connecting two sites: e.g., between the direct process (having complex amplitude $`𝒜`$) and the indirect process of hopping via the third site (with amplitude $`𝒜^{}`$). Ignoring any AB flux (as we shall not be concerned with the OHE), we observe that any phase difference between the amplitudes $`𝒜`$ and $`𝒜^{}`$ stems from spin quantal phases and transfer-assisting mechanisms (e.g., electron-phonon processes); we call the latter transfer phases $`\varphi _\mathrm{T}`$. To understand the nature of the spin quantal phases we examine the single-particle quantum mechanics of a carrier hole added to a triad of $`\mathrm{Mn}^{3+}`$ ions. We regard the spin-3/2 core spins of the Mn ions as large enough to be treated classically, so that one can assign a definite direction to each. Thus, a generic configuration is characterized by the unit vectors $`\{𝐧_1,𝐧_2,𝐧_3\}`$ located at the triad of sites $`\{𝐑_1,𝐑_2,𝐑_3\}`$ (see Fig. 1). Due to Hund’s Rules there is, at each site, a single state available to the added hole, its spin opposing the core-spin direction. We treat the remaining spin (and orbital) states as simply being inaccessible. Postponing to below the effects of SOI, we assume that the transfer of holes (being effected by either the kinetic energy or the electron-phonon interaction) has no effect on the spin of the carriers. However, such transfer in the presence of the constraints set by the core-spin orientations has a striking effect on the quantal dynamics of the carriers: in the quantal amplitude for a hole to move once around the triad, viz. $`𝒜^{}𝒜\propto \mathrm{Tr}P_3P_2P_1`$ \[where the operator $`P_j\equiv (1+𝝈\cdot 𝐧_j)/2`$ projects onto the spinor aligned with $`𝐧_j`$\], there arises a quantal phase, $`\mathrm{\Omega }/2=\mathrm{tan}^{-1}\left[𝐧_1\cdot (𝐧_2\times 𝐧_3)/(1+𝐧_1\cdot 𝐧_2+𝐧_2\cdot 𝐧_3+𝐧_3\cdot 𝐧_1)\right],`$ which modulates the interference between direct and indirect hopping between sites of a triad. $`\mathrm{\Omega }`$ is the (oriented) solid angle of the geodesic triangle on the unit sphere having vertices at $`\{𝐧_1,𝐧_2,𝐧_3\}`$. It is the quantal analogue of the classical optical phase discovered in the context of polarized light by Pancharatnam .
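The half-solid-angle formula for $`\mathrm{\Omega }/2`$ can be checked directly against the projector trace. The sketch below (numpy assumed; the cyclic ordering of the projectors is chosen to match the sign convention of the formula) does this for a random triad of core-spin directions:

```
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def P(n):
    """Projector (1 + sigma.n)/2 onto the spinor aligned with the unit vector n."""
    return 0.5 * (np.eye(2) + n[0] * sx + n[1] * sy + n[2] * sz)

rng = np.random.default_rng(0)
n1, n2, n3 = (v / np.linalg.norm(v) for v in rng.normal(size=(3, 3)))

# phase of the round-trip amplitude ...
phase = np.angle(np.trace(P(n1) @ P(n2) @ P(n3)))

# ... versus the half-solid-angle expression quoted in the text
half_omega = np.arctan2(n1 @ np.cross(n2, n3),
                        1 + n1 @ n2 + n2 @ n3 + n3 @ n1)
print(phase, half_omega)     # identical to machine precision
```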
What Pancharatnam showed is that a cyclic change of the polarization state of light is accompanied by a phase shift (i.e., a phase anholonomy) determined by the geometry of the cycle, as represented on the Poincaré sphere of light polarizations, via the area $`\mathrm{\Omega }`$ of the geodesic polygon whose vertices are these polarizations. In the DE electronic analog, the transporting of a d-shell carrier to an ion with a differently oriented core spin in a spin-independent process amounts to a connection, which determines the phase of the spin state in terms of the sequence of sites visited. A hole returning to a site returns to the same spin-state, except that its phase is augmented by a quantal Pancharatnam phase, determined by the geometry of the cycle, as represented on the sphere of core-spin orientations, via half the area of the geodesic polygon whose vertices are these orientations. In contrast to Berry’s adiabatic phase , the phenomenon described here is associated with sudden changes in the carrier-spin state, and need not be slow. In the hopping regime, this Pancharatnam phase leads to an AHE in an elementary triad in much the same way that an AB flux leads to the OHE in Holstein’s spinless model . In Holstein’s model, carrier hopping between sites of a triad occurs due to carrier-phonon interaction; the Hall current arises due to the interference of direct and indirect hopping. The transfer phase is nontrivial ($`\varphi _\mathrm{T}=\pi /2`$) when this interference involves processes assisted by two phonons . (For the longitudinal conductivity, a single phonon-assisted transfer is sufficient.) In a uniform magnetic field $`𝐁`$, processes associated with a nontrivial $`\varphi _\mathrm{T}`$ lead to an OH conductivity , $$\sigma _{\mathrm{OH}}=G\{ϵ_j\}\mathrm{sin}\varphi _\mathrm{T}\mathrm{sin}\left(𝐁\cdot 𝐐/\varphi _0\right),$$ (1) where $`\varphi _0`$ is the (electromagnetic) flux quantum, $`𝐐`$ is the (oriented, real-space) area of the triad, and $`\{ϵ_j\}_{j=1}^3`$ are the energies of the three single-particle eigenstates, which are invariant under reversal of the AB flux. The explicit expression for $`G`$ can be found in Ref. . ($`G`$ also depends on the populations of these states, which themselves may depend on particle-particle correlations.) We now turn from the OHE in a spinless triad to the elementary AHE in a triad of magnetic sites. Like the OHE, this AHE results from two-phonon processes, but is due to the Pancharatnam phase instead of the AB phase. (We have not yet included the effects of the SOI.) Mutatis mutandis, we arrive at the AH conductivity, $$\sigma _{\mathrm{AH}}=G\{\epsilon _j\}\mathrm{sin}\varphi _\mathrm{T}\mathrm{cos}\frac{\theta _{13}}{2}\mathrm{cos}\frac{\theta _{32}}{2}\mathrm{cos}\frac{\theta _{21}}{2}\mathrm{sin}\frac{\mathrm{\Omega }}{2},$$ (2) where $`\mathrm{cos}\theta _{jk}\equiv 𝐧_j\cdot 𝐧_k`$, the $`\mathrm{cos}(\theta _{jk}/2)`$ are Anderson-Hasegawa factors, and $`\{\epsilon _j\}`$ are the energies of the three single-particle eigenstates consistent with Hund’s Rules, these energies depending on $`𝐧_j\cdot 𝐧_k`$ and $`\mathrm{cos}(\mathrm{\Omega }/2)`$. Note that $`G`$ is invariant under Pancharatnam flux reversal $`\mathrm{\Omega }\to -\mathrm{\Omega }`$, whereas $`\sigma _{\mathrm{AH}}`$ is odd under it. We have shown that, for a triad with a given set of core-spin orientations, an AHE arises from the quantal Pancharatnam flux. However, there is a significant difference between this AHE and the OHE.
In the former (nonmagnetic) case, a uniform applied magnetic field leads to a net macroscopic OHE, even though contributions of triads may cancel one another . In the latter case (magnetic sites, Pancharatnam flux, and no SOI), even the presence of a macroscopic magnetization of the core spins is insufficient to cause a macroscopic Hall current. The reason for this is that in obtaining the macroscopic AH current from Eq. (2) we must average over the configurations of the core spins. In the absence of SOI, the distribution of these configurations, although favoring a preferred direction (i.e., the magnetization direction $`𝐦\equiv 𝐌/M`$), is invariant under a reflection of all core-spin vectors in any plane containing the magnetization. This fact, coupled with the fact that $`\{\epsilon _j\}`$ are also invariant under such reflections, guarantees that the macroscopic AH current will average to zero. (We do, however, expect significant AH current noise, in the FPT regime, owing to the fluctuations of the Pontryagin charge of the triads of core spins and, hence, of the elementary Pancharatnam fluxes.) In order to capture the AHE in materials such as manganites, we must consider not only the Pancharatnam phase but also some agent capable of lifting the reflection invariance of the energies $`\{\epsilon _j\}`$ and the distribution of core-spin configurations, and hence of inducing sensitivity to the sign of the Pancharatnam flux. Such an agent is provided by the SOI, $`H_{\mathrm{so}}=\alpha 𝐩\cdot \left(𝝈\times \nabla U\right),`$ where $`U`$ includes ionic and impurity potentials, $`\alpha `$ is the SOI constant, $`𝐩`$ is the electron momentum, and $`𝝈`$ are the Pauli operators. The SOI leads to an effective SU(2) gauge potential $`𝐀_{\mathrm{so}}=\alpha m(𝝈\times \nabla U)`$ , providing an additional source of quantal phase. For a given core-spin configuration, the SOI favors one sense of carrier circulation around the triad over the other, and thus favors one sign of the Pancharatnam phase over the other. There are two resulting contributions to the AHE. The first, $`I_{\mathrm{AH}}^{(1)}`$, arises from the SOI-generated dependence of $`\{\epsilon _j\}`$ on the three vector products $`𝐍_{jk}\equiv 𝐧_j\times 𝐧_k`$ which, together with the magnetization direction $`𝐦`$, yield a preferred value for the triad Pontryagin charge $`q_\mathrm{P}`$ \[$`\propto 𝐧_1\cdot (𝐧_2\times 𝐧_3)`$\] and, hence, a preferred Pancharatnam flux. To see the origin of this dependence on $`𝐍_{jk}`$, let us analyze the corrections, due to the SOI, to the hole eigenenergies. If the on-site energies of the holes are nondegenerate, it is straightforward to determine that phase sensitivity first enters at third order (in the transfer matrix elements): $`\delta \epsilon _j=\sum _{h,k(\neq j)}\mathrm{Tr}T_{jh}T_{hk}T_{kj}/(\epsilon _j-\epsilon _h)(\epsilon _j-\epsilon _k)`$, where $`T_{jk}\equiv P_jV_{jk}P_k`$ are the transfer amplitudes, $`V_{jk}`$ are the hopping matrix elements, and $`\mathrm{Tr}`$ denotes a trace in spin space. (For degenerate $`\epsilon `$’s one should obtain the splitting of the $`\epsilon `$’s due to transfer in the absence of SOI, and then include SOI at the final step, arriving at the result to be given below.) The hopping matrix elements are sensitive to the SOI quantal phase, and can be written in the form $`V_{jk}=V_{jk}^{\mathrm{orb}}L_{jk}`$, where $`L_{jk}\equiv (1+i𝝈\cdot 𝐠_{jk})`$, $`V_{jk}^{\mathrm{orb}}`$ is an orbital factor, and $`𝐠_{jk}`$ ($`\propto \alpha _{\mathrm{so}}`$) is an appropriate vector that describes the average SOI for the transition $`jk`$ .
Then, e.g., the first-order (in $`\alpha `$) shifts in the $`\epsilon `$’s are given by $`\delta \epsilon _j\propto \mathrm{Tr}T_{13}T_{32}T_{21}=4\mathrm{Re}\mathrm{Tr}P_1L_{13}P_3L_{32}P_2L_{21}`$ (3) $`=𝐍\cdot 𝐠+2\left(𝐍_{13}\cdot 𝐠_{13}+𝐍_{32}\cdot 𝐠_{32}+𝐍_{21}\cdot 𝐠_{21}\right),`$ (4) where $`𝐍\equiv 𝐍_{13}+𝐍_{32}+𝐍_{21}`$, and $`𝐠\equiv 𝐠_{13}+𝐠_{32}+𝐠_{21}`$. When $`U`$ in the SOI is a superposition of spherically-symmetric ionic potentials, the vectors $`𝐠_{jk}`$ have a transparent geometrical meaning, and are proportional to the triangle area $`Q`$. In this case, $`𝐠_{jk}=a_{jk}\left(𝐑_j-𝐑_h\right)\times \left(𝐑_k-𝐑_h\right)=a_{jk}𝐐`$. Then the SOI-generated shift in the carrier eigenenergies has the Dzyaloshinski-Moriya form . By incorporating the shifts (4), together with the Pancharatnam phase, we arrive at the elementary AH conductivity $$\sigma _{\mathrm{AH}}^{(1)}=𝐧_1\cdot (𝐧_2\times 𝐧_3)\sum _j\delta \epsilon _j\partial G/\partial \epsilon _j.$$ (5) As discussed above, Eq. (5) has a nonzero macroscopic average, owing to the presence of a characteristic Pontryagin charge constructible from the $`𝐍_{jk}`$ that feature in the energy shifts and from the magnetization direction. The SOI-generated carrier-energy shift (4) has a second consequence, which leads to the second contribution, $`\sigma _{\mathrm{AH}}^{(2)}`$. Due to the feedback of the (fast) carrier degrees of freedom, which provide an effective potential for the (slow) spin system, determined by Eq. (4), the equilibrium probabilities of spin configurations having opposing Pancharatnam fluxes will no longer be equal. (For this contribution, which is related not to $`\partial G/\partial \epsilon _j`$ but to $`G`$ itself, there is no need to account for SOI-induced carrier-energy shifts in the current now being averaged over a nonsymmetric spin-configuration distribution.) A contribution with this origin has also been considered in Ref. . $`\sigma _{\mathrm{AH}}^{(1)}`$ and $`\sigma _{\mathrm{AH}}^{(2)}`$ are of the same order of magnitude. We now consider the question of how the physics of elementary triads relates to the macroscopic properties of manganites. For hopping conductivity, the pathways taken by the current depend sensitively on the details of the core-spin configuration, and regions having certain local spin configurations will tend to be avoided by the current. This fact renders rather subtle the spin-configuration averaging procedure, which must also account for effects such as local spin correlations and excitations of various types. Let us try to identify which triads the AH current tends to favor. To favor their participation in the conducting network, the three core spins in the triad should at least have positive components along the magnetization direction. For magnetic compatibility with its neighbors, the net magnetization of the triad should be roughly that of the bulk. Furthermore, to contribute appreciably to the AH current, the triad should be as splayed as possible, given the above constraints. This favors symmetrical configurations of the triad spins; we call these triads optimal triads. As we shall see in the companion Letter , these observations allow us to explain the striking experimental finding that the Hall resistivity depends on the magnetic field and temperature only through the magnetization and, moreover, to predict the explicit form of this dependence. Spin-orbit quantal phase and AHE in nonmagnetic triads: We conclude with a remark concerning the hopping AHE in systems with nonmagnetic ions (in which case no Pancharatnam phases arise).
In this case, the SOI quantal phase itself leads to an AHE. In the presence of the SU(2) gauge potential $`𝐀_{\mathrm{so}}`$, electrons moving around a nonmagnetic triad acquire a full SU(2) phase, which is not projected by Hund’s Rules. Due to carrier-spin polarization, this phase leads to an AH current in the same way that the AB flux leads to the OHE . We emphasize that the OH and AH effects in such systems should be experimentally distinguishable from one another. For example, the AHE in the hopping regime should be observable in inversion layers of doped semiconductors in the absence of a magnetic field, when electron spin-polarization is induced by circularly polarized light. We thank I. L. Aleiner and V. L. Pokrovskii for helpful discussions, and the authors of Ref. for communicating preliminary results of their work. This work was supported by DOE Grant DEFG02-96ER45439.
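To close, a numerical illustration of the reflection-symmetry argument made earlier for magnetic triads: without SOI, the configuration-averaged AH response of Eq. (2) vanishes even at finite magnetization, while its fluctuations (the “AH current noise”) remain finite. This is a sketch only — it assumes numpy, absorbs $`G`$ and $`\mathrm{sin}\varphi _\mathrm{T}`$ into an overall constant, and weights the core-spin directions with an illustrative distribution $`\mathrm{exp}(\beta 𝐦\cdot 𝐧)`$ favoring the magnetization direction:

```
import numpy as np

rng = np.random.default_rng(1)
beta, N = 2.0, 200_000          # bias toward m = z-hat; number of sampled triads

def units(k):
    v = rng.normal(size=(k, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n1, n2, n3 = units(N), units(N), units(N)
w = np.exp(beta * (n1[:, 2] + n2[:, 2] + n3[:, 2]))   # reflection-symmetric weights

dot = lambda a, b: np.sum(a * b, axis=1)
ah_cos = lambda a, b: np.sqrt(0.5 * (1 + dot(a, b)))  # Anderson-Hasegawa cos(theta/2)

q = dot(n1, np.cross(n2, n3))                         # Pontryagin charge of the triad
s = 1 + dot(n1, n2) + dot(n2, n3) + dot(n3, n1)
sin_half_omega = q / np.hypot(q, s)                   # sin(Omega/2)

sigma = ah_cos(n1, n3) * ah_cos(n3, n2) * ah_cos(n2, n1) * sin_half_omega
mean = np.sum(w * sigma) / np.sum(w)
rms = np.sqrt(np.sum(w * sigma**2) / np.sum(w))
print(mean, rms)   # mean ~ 0 (within sampling error); rms stays finite
```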
no-problem/9904/cond-mat9904272.html
ar5iv
text
# Effect of Coulomb blockade on STM current through a granular film \[ ## Abstract The electron transport through an array of tunnel junctions consisting of an STM tip and a granular film is studied both theoretically and experimentally. When the tunnel resistance between the tip and a granule on the surface is much larger than those between granules, a bottleneck of the tunneling current is created in the array. It is shown that the period of the Coulomb staircase (CS) is given by the capacitance at the bottleneck. Our STM experiments on Co-Al-O granular films show the CS with a single period at room temperature. This provides a new possibility for single-electron-spin-electronic devices at room temperature. \] Charging effects on single electron tunneling such as Coulomb blockade and Coulomb oscillation have attracted much interest. Recent advances in nano-technology enable us to fabricate small tunnel junctions where charging effects play an essential role. The I-V characteristics for double tunnel junction systems have been extensively studied, where the step-like structure called the Coulomb staircase (CS) is observed when the resistances of the junctions are not equal. For these asymmetric double junction systems, the junction with a large resistance behaves like a bottleneck of the tunneling current and the central island is charged through the other junction up to the maximum charge. The tunneling current jumps when the maximum charge changes. Recently, the CS has been observed by using a nanometer-size scanning-tunneling-microscope (STM) tip in highly resistive granular films as well as in metal-droplet systems . For a granular film, which is considered to be an array of tunnel junctions, the observed CS implies that a bottleneck exists in the conducting paths. However, the physics behind the CS in a granular film is not clear because it contains many granules of different sizes and the conducting path may form a three-dimensional network in a thick granular film. Bar-Sadeh et al. have studied the STM current through a nonmagnetic granular film, Au-Al<sub>2</sub>O<sub>3</sub>, using a cryogenic STM. They observed the CS at temperatures $`T=4.2`$ and 78 K, and analyzed the experimental data by using a triple barrier model. They assumed that the rate for tunneling between two granules is small and the number of excess electrons in each granule is treated independently. Because of these assumptions, the CS was given by the superposition of two different periods in their model: one was determined by the tunnel process between the STM tip and a granule, the other between another granule and the base electrode. By contrast, as we will show later, the CS has a single period which is determined by the capacitance at the bottleneck. In this Letter, we study the electron transport through an array of tunnel junctions consisting of an STM tip and a granular film both theoretically and experimentally. In this system, we can vary the tunnel resistance between the tip and a granule on the surface by changing the distance between them. When the tunnel resistance between the tip and a granule on the surface is much larger than those between granules, a bottleneck of the tunneling current is created in the array. Theoretically, we find that the period of the CS is given by the capacitance of the bottleneck even in a thick film with many granules between the tip and the base electrode.
We present results from STM experiments on 10 nm- and 1$`\mu `$m-thick Co-Al-O granular films which show the CS with a single period at room temperature. We propose that tunnel magnetoresistance (TMR) oscillates with the same period as the CS for magnetic granular films. Our theoretical and experimental studies provide a new direction for single-electron-spin-electronic devices at room temperature. Our setup is schematically shown in Figs. 1(a) and 1(b). The current flows from the STM tip to the base electrode through a granular film. The system with a thin granular film in panel (a) is modeled by the one-dimensional array of tunnel junctions as in panel (c). We will show that our experimental results for the 10 nm-thick film are well explained by this model with $`N=3`$. On the other hand, such a one-dimensional array is not appropriate for a thick granular film, because, as illustrated in Fig. 1(b), the conducting paths spread and form a three-dimensional network as the distance from the tip increases. We model this system by a one-dimensional array of $`N`$ junctions connected to a Bethe-lattice network with 3 nearest neighbors, as shown in Fig. 1 (d). Each junction is characterized by a tunnel resistance $`R_j`$, a capacitance $`C_j`$, and a charge $`Q_j`$. The number of excess electrons in the $`k`$-th granule is represented by $`n_k`$. The free energy for the state characterized by the set of charges $`\{n_i\}\equiv (n_1,n_2,\mathrm{\ldots })`$ is given by $$F(\{n_i\})=\sum _i\frac{Q_i^2}{2C_i}-(Q_1-e\xi )V,$$ (1) where $`Q_1`$ represents the charge at the surface, $`\xi `$ is the number of electrons supplied by the voltage source and $`i`$ runs from 1 to $`N`$. When an electron tunnels through the $`k`$-th junction, the charge $`Q_i`$ deviates from its initial value by $`\delta Q_i^k`$. Let us consider the energy change due to the single electron tunneling, $`E_k^\pm `$, where the superscript $`+(-)`$ denotes the process in which an electron tunnels upward (downward) through the $`k`$-th junction in Figs. 1(c) and 1(d). From Eq. (1), we obtain $$E_k^\pm (\{n_i\})=\sum _i\left(\stackrel{~}{\sum _{j<i}}\frac{\delta Q_j^k}{C_j}\right)n_i+\frac{1}{2}\sum _i\frac{\left(\delta Q_i^k\right)^2}{C_i}-(\delta Q_1^k\pm e\delta _{1,k})V,$$ (3) where $`\stackrel{~}{\sum _i}`$ represents the summation along the conducting path, and $`\delta _{1,k}`$ is Kronecker’s delta function. The deviation $`\delta Q_i^k`$ is determined by Kirchhoff’s law and is independent of the number of excess electrons $`\{n_i\}`$. The tunneling rate is obtained by using the golden rule as $$\mathrm{\Gamma }_k^\pm (\{n_i\})=\frac{E_k^\pm (\{n_i\})}{e^2R_k[\mathrm{exp}(E_k^\pm (\{n_i\})/T)-1]}.$$ (4) By solving the master equation for the probability of states $`p(\{n_i\})`$ , the tunneling current through the $`k`$-th junction is obtained as $$I_k=e\sum _{\{n_i\}}p(\{n_i\})\left[\mathrm{\Gamma }_k^+(\{n_i\})-\mathrm{\Gamma }_k^{-}(\{n_i\})\right].$$ (5) For simplicity, we neglect the effects of residual fractional charge, cotunneling, spin accumulation and level quantization in the granules. We first look at the I-V characteristics for a one-dimensional array of tunnel junctions with a bottleneck of the tunneling current between the tip and a granule on the surface. In Figs. 2(a) and 2(b), we present the numerical results for tunnel junctions with $`N=25`$ at $`T=0`$, $`N`$ being the number of junctions.
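Before turning to the numerical results, the structure of the calculation can be made concrete with a minimal sketch for the simplest case, $`N=2`$ (tip–single granule–electrode). This is not the production code used for Figs. 2–4: the parameter values are hypothetical, units are chosen so that $`e=k_B=1`$, and the stationary distribution of the one-dimensional master equation is obtained by simple recursion:

```
import numpy as np

C1, C2 = 0.5, 1.0      # junction 1 = tip-granule bottleneck; expect CS period e/C1 = 2
R1, R2 = 100.0, 1.0    # R1 >> R2 makes junction 1 the current bottleneck
CS = C1 + C2           # total capacitance of the granule
T = 0.02               # temperature, small compared with the charging energy 1/(2*CS)

def rate(dE, R):
    """Eq. (4): Gamma = dE / (e^2 R [exp(dE/T) - 1]); positive for either sign of dE."""
    x = np.clip(np.asarray(dE, float) / T, -500.0, 500.0)   # clip only to avoid overflow
    with np.errstate(divide="ignore", invalid="ignore"):
        g = dE / (R * np.expm1(x))
    return np.where(np.abs(x) < 1e-9, T / R, g)

def current(V, nmax=12):
    n = np.arange(-nmax, nmax + 1)            # excess electrons on the granule
    dch = (2 * n + 1) / (2 * CS)              # E_ch(n+1) - E_ch(n), with E_ch = n^2/(2 CS)
    up1 = rate( dch - V * C2 / CS, R1)        # n -> n+1, electron in through junction 1
    up2 = rate( dch + V * C1 / CS, R2)        # n -> n+1, in through junction 2
    dn1 = rate(-dch + V * C2 / CS, R1)        # n+1 -> n, out through junction 1
    dn2 = rate(-dch - V * C1 / CS, R2)        # n+1 -> n, out through junction 2
    # stationary p(n) of the birth-death master equation, built up in log space
    logp = np.zeros(n.size)
    logp[1:] = np.cumsum(np.log(up1 + up2)[:-1] - np.log(dn1 + dn2)[:-1])
    p = np.exp(logp - logp.max())
    p /= p.sum()
    # Eq. (5): net current through the bottleneck junction 1
    return np.sum(p[:-1] * up1[:-1]) - np.sum(p[1:] * dn1[:-1])

V = np.linspace(0.0, 6.0, 601)
I = np.array([current(v) for v in V])
```

With these illustrative parameters the computed I-V curve rises in steps spaced by $`e/C_1=2`$, i.e., by the capacitance of the bottleneck junction, in line with the general result derived below.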
Due to the charging energy in each junction, single electron tunneling is blocked as long as the bias voltage $`V`$ is lower than the threshold value $`V_T`$. From Eq. (3), the bias voltage $`V_T^k`$, above which the initial state $`(0,\mathrm{\ldots },0)`$ is unstable against electron tunneling through the $`k`$-th junction, i.e., $`E_k^+(0,\mathrm{\ldots },0)<0`$, is given by $$V_T^k=\frac{\sum _i(\delta Q_i^k)^2/C_i}{2(\delta Q_1^k+e\delta _{1,k})}=\frac{e}{2}\left(\frac{1}{C_T}-\frac{1}{C_k}\right).$$ (6) The last expression in Eq. (6) is easily obtained by considering the equivalent network shown in the inset of Fig. 2(a), where $`1/C_L=\sum _{i<k}1/C_i`$ and $`1/C_R=\sum _{i>k}1/C_i`$. The threshold voltage is given by $`V_T=\mathrm{min}_kV_T^k`$. As pointed out by Melsen et al., the Coulomb blockade region increases as the number of junctions, $`N`$, increases, as shown in Figs. 2(a) and 2(b). For large $`N`$, the threshold voltage is expressed in terms of the average capacitance $`C`$ as $`V_T\approx (e/2C)(N-1)`$ and is proportional to the film thickness, because the total capacitance is inversely proportional to the layer thickness. For a thick film with many granules between the tip and the base electrode, the conducting paths form a three-dimensional network as shown in Figs. 1(b) and 1(d). The decrease of the total capacitance of a junction network with increasing film thickness is much weaker than that of a one-dimensional array. Therefore, the threshold voltage $`V_T`$ of a thick film remains of the same order of magnitude as that of a thin film. Later we will show that the experimental results for the 1 $`\mu `$m-thick film are well explained by considering the junction network as the Bethe lattice shown in Fig. 1(d). The CS is classified into two types, as shown in Figs. 2(a) and 2(b). The criterion is whether the capacitance $`C_1`$ of the bottleneck is the smallest of all of the junctions, i.e., whether $`V_T=V_T^1`$ or $`V_T=\mathrm{min}_{k\neq 1}V_T^k`$. A typical CS for $`V_T=\mathrm{min}_{k\neq 1}V_T^k`$ is given in Fig. 2(a), where electrons start to accumulate at the bottleneck once the bias voltage exceeds $`V_T`$. Until the accumulated electrons tunnel out through the bottleneck, the voltage drop caused by them forbids electrons to tunnel through the other junctions. Therefore, the stable state is given by $`(n_1,0,\mathrm{\ldots },0)`$ and the tunneling current jumps at the bias voltages where the number of accumulated electrons $`n_1`$ changes. This number $`n_1`$ is the minimum value satisfying the conditions $$E_1^+(n_1,0,\mathrm{\ldots },0)<0,E_{k\neq 1}^+(n_1,0,\mathrm{\ldots },0)\geq 0.$$ (7) The first condition represents the accumulated electrons tunneling out through the first junction. The second indicates that electrons cannot tunnel through the other junctions, and is rewritten as $`(e^2/C_1)n_1\geq eV-V_T^{k\neq 1}`$. Therefore, the tunneling current jumps at the bias voltage $`V`$ given by $$V=\underset{k\neq 1}{\mathrm{min}}V_T^k+(e^2/C_1)(n_1-1).$$ (8) From Eq. (8), one can easily see that the CS has a single period of $`e/C_1`$. The tunneling current for each plateau of the CS is approximately given by $`I\approx e\mathrm{\Gamma }_1^+(n_1,0,\mathrm{\ldots },0)`$. On the other hand, if $`C_1`$ is the smallest, the tunneling current does not jump at $`V_T`$, as shown in Fig. 2(b). For $`V_T<V<\mathrm{min}_{k\neq 1}V_T^k`$, the tunneling current is approximately given by $`I\approx e\mathrm{\Gamma }_1^+(0,\mathrm{\ldots },0)`$. Once $`V`$ exceeds $`\mathrm{min}_{k\neq 1}V_T^k`$, electrons start to accumulate at the bottleneck and the I-V curve shows a CS.
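Equation (6) is simple enough to tabulate directly; a small sketch (numpy assumed, $`e=1`$) that also reproduces the large-$`N`$ estimate for equal capacitances:

```
import numpy as np

def thresholds(C):
    """Eq. (6): V_T^k = (e/2)(1/C_T - 1/C_k) for each junction k of a 1D array (e = 1)."""
    C = np.asarray(C, dtype=float)
    inv_CT = np.sum(1.0 / C)          # series capacitance of the array: 1/C_T = sum_i 1/C_i
    return 0.5 * (inv_CT - 1.0 / C)

C = np.full(25, 1.0)                  # N = 25 equal junctions with C = 1
VT = thresholds(C)
print(VT.min(), 0.5 * (len(C) - 1))   # both 12.0, i.e. V_T ~ (e/2C)(N-1)
```

For unequal capacitances the same two lines give $`\mathrm{min}_kV_T^k`$ and the junction at which it occurs, which is precisely the classification criterion used above.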
The bias voltage $`V`$ at which the tunneling current jumps is given by Eq. (8). When the bottleneck is placed at another junction $`j(1<j<N)`$, the stable state is $`(0,\mathrm{\ldots },0,n_{j-1}=-n_j,n_j,0,\mathrm{\ldots },0)`$ and the period of the CS is determined by the capacitance $`C_j`$ at the bottleneck. The CS is classified into two types in the same way as those shown in Figs. 2(a) and 2(b). However, the criterion is whether the capacitance $`C_j`$ is the smallest or not. The period of the CS in a thick film, shown in Figs. 1(b) and 1(d), is also determined from Eqs. (3) and (7), as long as the tip is coupled to a single granule on the surface and the bottleneck is created between them. Therefore, the voltage where the tunneling current jumps is given by Eq. (8). The CS has a single period determined by the capacitance at the bottleneck, even for a thick film. We have performed STM experiments on Co-Al-O granular films. Samples with different thicknesses were prepared by oxygen-reactive sputtering with a Co-Al alloy target; 10 nm- and 1 $`\mu `$m-thick Co<sub>36</sub>Al<sub>22</sub>O<sub>42</sub> films consisting of Co granules embedded in an Al-oxide matrix were deposited on glass substrates. For the 10 nm-thick Co<sub>36</sub>Al<sub>22</sub>O<sub>42</sub> film, a 200 nm-thick Co-Al alloy layer was inserted between the Co-Al-O granular film and the glass substrate as a base electrode. A conventional STM system was used for I-V measurements under high vacuum. The I-V curves were obtained by using a platinum tip at room temperature, and by placing the tip on a Co granule. The tunnel resistance between granules for Co-Al-O granular films is estimated to be about $`10^5`$–$`10^6\mathrm{\Omega }`$ from the average diameter of the granules ($`\sim `$ 3 nm), the average intergranular distance ($`\sim `$ 1 nm), and the electrical resistivity . On the other hand, the tunnel resistance between the tip and a granule on the surface, $`R_1`$, is about $`10^8`$–$`10^9\mathrm{\Omega }`$. Therefore, $`R_1`$ is $`10^2`$–$`10^4`$ times larger than the other tunnel resistances. The experimental I-V curves for a 10 nm-thick Co<sub>36</sub>Al<sub>22</sub>O<sub>42</sub> film are plotted in Fig. 3 (a). There exist two or three granules between the tip and base electrode. Even at room temperature, the tunneling current shows a clear CS with a single period. The I-V curves for a 1 $`\mu `$m-thick Co<sub>36</sub>Al<sub>22</sub>O<sub>42</sub> film are plotted in Fig. 4(a). We also find the CS with a single period. Note that for this thick film, 200 to 300 Co granules exist in the direction perpendicular to the film plane between the tip and the substrate. Let us now analyze our experimental results by using the theory presented above. We first examine the I-V curves for the 10 nm-thick film. A triple tunnel junction model with a bottleneck between the tip and a granule on the surface is used for the calculation. The calculated I-V curves are shown in Fig. 3 (b). Parameter values were chosen in the ranges estimated from the experiments. We find that the theoretical curves explain the experimental ones very well. For the thick film, on the other hand, the conducting paths are considered to form a three-dimensional network inside the film; this is a more complicated system compared to the thin granular films for which the CS has been observed so far.
The value of the threshold voltage $`V_T\approx 0.5`$ V is not explained by the one-dimensional array model, because the threshold voltage $`V_T`$ is proportional to the layer thickness in this model and would be about 100 times larger than that for the 10 nm-thick film; this is in contrast with the experimental data. As mentioned before, this discrepancy can be resolved by considering a network of the conducting paths. We describe the thick film as a one-dimensional array connected to a Bethe-lattice network, as shown in Fig. 1(d). The bottleneck is created between the tip and a granule on the surface. Therefore, the stable states are given by $`(n_1,0,\mathrm{\ldots },0)`$ and the tunneling current jumps when the number of accumulated electrons $`n_1`$ changes. The voltage where the current jumps is given by Eq. (8) and the CS has a single period of $`e/C_1`$ even for a thick film. The equivalent network used to obtain the threshold voltage for the $`k`$-th junction, $`V_T^k`$, is shown in Figs. 4 (b) and 4(c), where we assume, for simplicity, the same capacitance $`C`$ for all junctions. The key point is that, for electron tunneling in the one-dimensional array, the Bethe-lattice network is replaced by its total capacitance, as shown in Fig. 4 (b). The total capacitance of the Bethe lattice is $`C/2`$ and the bias voltage $`V`$ at which the tunneling current jumps (see Eq. 8) is given by $`V=(e/2C)(N+1)+(e/C)(n_1-1)`$. The experimental results shown in Fig. 4(a) are consistent with our model with $`N\approx 2`$. The TMR in tunnel junctions with ferromagnetic electrodes and in magnetic granular films is another attractive topic. Recently, the TMR oscillations in asymmetric double tunnel junctions with ferromagnetic electrodes have been studied . The condition for TMR oscillations is that the tunneling current shows the CS and that the magnetic field dependence of the tunnel resistance is not the same for all tunnel junctions. The TMR oscillations with the same period as the CS may be observed for magnetic granular films. Our calculation shows that the amplitude of the TMR oscillation is about 5% at room temperature for curve A in Fig. 3(b). In summary, the electron transport through an array of tunnel junctions consisting of an STM tip and a granular film has been studied both theoretically and experimentally. When the tunnel resistance between the tip and a granule on the surface is much larger than those between granules, a bottleneck is created in the array and a CS with a single period is observed in the I-V curve. We predicted that the period of the CS is given by the capacitance at the bottleneck even in a thick film with many granules between the tip and the base electrode. Our STM experiments on 10 nm- and 1$`\mu `$m-thick Co-Al-O granular films confirmed the CS with a single period at room temperature. TMR oscillations for magnetic granular films are also predicted. Our theoretical and experimental studies provide a new direction for single-electron-spin-electronic devices at room temperature. We thank P. M. Levy for reading the manuscript. This work is supported by a Grant-in-Aid from the Scientific Research Priority Area program of the Ministry of Education, Science, Sports and Culture of Japan, a Grant from the Japan Society for the Promotion of Science, and NEDO Japan.
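A closing numerical aside: the statement above that the Bethe-lattice network presents a total capacitance $`C/2`$ follows from a simple self-consistency condition — one branch is a junction $`C`$ in series with two identical sub-branches in parallel. A minimal check (plain Python; $`e=1`$ throughout):

```
C = 1.0
CB = C                          # initial guess for the branch capacitance
for _ in range(60):             # C_B = C*(2*C_B)/(C + 2*C_B) has the fixed point C/2
    CB = C * (2 * CB) / (C + 2 * CB)
print(CB)                       # -> 0.5, i.e. C/2, as used in the text

N = 2                           # junctions in the one-dimensional part of the model
print((N + 1) / (2 * C))        # offset (e/2C)(N+1) of the jump voltages quoted above
```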
no-problem/9904/gr-qc9904043.html
ar5iv
text
# Impact of a Multi-TeraFlop Machine to Gravitational Physics ## I Introduction - Astronomy of the Next Century and Gravitational Physics Two major directions of astronomy in the next century are high energy ($`x`$-ray, $`\gamma `$-ray) astronomy and gravitational wave astronomy. The former is driven by observations from $`x`$- and $`\gamma `$-ray satellites, e.g., CGRO, AXAF, XTE, HETE II, GLAST , either current or planned for the next few years. High energy radiation is often emitted in regions of strong gravitational fields, near black holes (BHs) or neutron stars (NSs). One of the biggest mysteries of modern astronomy, $`\gamma `$-ray bursts, is likely to be generated by events involving NSs or BHs. For the full description of strong, dynamic gravitational fields, we need Einstein’s theory of general relativity. The second major direction, gravitational wave astronomy, involves directly the dynamical nature of spacetime in the Einstein theory of gravity. The tremendous recent interest in this frontier is driven by the gravitational wave observatories presently being built or planned in the US, Europe and outer space, e.g., LIGO, VIRGO, GEO600, LISA, LAGOS , and the Lunar Outpost Astrophysics Program . The American LIGO and its European counterparts VIRGO and GEO600 are scheduled to be on line in a few years , making gravitational wave astronomy a reality. These observatories provide a completely new window on the universe: existing observations are mainly provided by the electromagnetic spectrum, emitted by individual electrons, atoms or molecules, easily absorbed, scattered and dispersed. Gravitational waves are produced by coherent bulk motion of matter and travel nearly unscathed through space, coming to us carrying the information of the strong field regions where they were originally generated. This new window will provide very different information about our universe that is either difficult or impossible to obtain by traditional means. The numerical determination of the gravitational waveform is crucial for gravitational wave astronomy. Physical information in the data is to be extracted through template matching techniques , which presuppose that reliable example waveforms are known. Gravitational waveforms are important both as probes of the fundamental nature of gravity, and for the unique physical and astronomical information they carry. The information would be difficult to obtain otherwise, ranging from nuclear physics (e.g., the EOS of NSs ) to cosmology (e.g., direct determination of the Hubble constant ). In most situations, the gravitational waveforms cannot be calculated without full scale general relativistic numerical simulations. In short, both of these frontiers of astronomy call for numerical simulations based on the Einstein theory of gravity. If astrophysicists are to fully understand the non-linear and dynamical gravitational fields involved in these observational data, detailed modeling taking dynamic general relativity into full account must be carried out. ## II Challenges of Computational General Relativistic Astrophysics The application of the Einstein theory of gravity to realistic astrophysical systems needs computational power in the range of (at least) multi-TeraFlop/TeraByte, and corresponding capabilities in visualization, networking and storage.
• Computational challenges due to the complexity of the physics involved: The Einstein equations are probably the most complex partial differential equations in all of physics, forming a system of dozens of coupled, nonlinear equations, with thousands of terms, of mixed hyperbolic, elliptic, and even undefined types in a general coordinate system. The evolution has elliptic constraints that should be satisfied at all times. In simulations without symmetry, as would be the case for realistic processes, it involves hundreds of 3D arrays, and tens of thousands of operations per grid point per update. Moreover, for simulations of astrophysical processes, we need to integrate numerical relativity with traditional tools of computational astrophysics, including hydrodynamics, nuclear astrophysics, radiation transport and magneto-hydrodynamics, which govern the evolution of the source terms (i.e., the right hand side) of the Einstein equations. This complexity demands massively parallel computation. • The object under numerical construction being the spacetime itself presents unique challenges: According to the singularity theorems of general relativity, regions of strong gravity often generate spacetime singularities. Due to the need to avoid spacetime singularities , and to obtain long term stability in the numerical simulations, sophisticated control of the coordinate system is needed for the construction of a numerical spacetime. This dynamic interplay between the spacetime being constructed and the computational coordinate choice itself (“gauge choice”) is a unique feature of general relativity that makes the numerical simulations much more demanding. Besides extra computational power, advanced visualization tools, preferably real time interactive “window into the oven” visualization, are particularly useful in the numerical construction. • The multi-scale problem: Astrophysics of strongly gravitating systems inherently involves many length and time scales. The microphysics of the shortest scale (the nuclear force) controls macroscopic dynamics on the stellar scale, such as the formation and collapse of neutron stars (NSs). On the other hand, the stellar scale is at least 10 times less than the wavelength of the gravitational waves emitted, and many orders of magnitude less than the astronomical scales of their accretion disks and jets; these larger scales provide the directly observed signals. Numerical studies of these systems, aiming at direct comparison with observations, fundamentally require the capability of handling a wide range of dynamical time and length scales. While such multi-scale problems can be handled with advanced 3D AMR techniques, this leads to further requirements on computational power and (3D AMR) visualization. In short, in order to meet the challenges of Computational General Relativistic Astrophysics we need to push more than the frontier of raw computational power for number crunching. The visualization requires basically as much computer power as the computation that generates the data. The highly multi-disciplinary nature of the research demands collaborative code development. The large amount of data, visualization needs, and collaborative effort require high performance networking and meta-computing. In the following section we use a specific sample problem to illustrate the requirements on Flop rate, memory, disk and storage sizes, which in turn determine the baseline of visualization and networking requirements. Where we stand at present will also be discussed briefly.
## III Neutron Star Coalescence As An Example of Computational Requirements We use the problem of coalescing binary neutron stars to show the computational requirements in general relativistic astrophysics. The reason that the coalescence of neutron stars is a meaningful example is many-fold: It is a significant problem in astrophysics and astronomy; it involves many ingredients in general relativistic astrophysics; and it is a problem attracting much current research effort both nationally and internationally. • Coalescing neutron star binary systems are common in the Universe, with the well-known Hulse-Taylor binary pulsar PSR1913+16 being an example. The coalescence events are expected to be detectable by LIGO, with an observation rate of 29 yr<sup>-1</sup> for $`h=0.5`$ and 43 yr<sup>-1</sup> for $`h=0.8`$. • The physical information in LIGO data is to be extracted through the standard template matching technique . For this we need to determine the waveforms of the gravitational radiation generated by the coalescence events, which can only be obtained through large scale simulations. • A very enticing reason for studying the coalescence event lies in the fact that observations of such events by gravitational wave observatories may allow us to determine cosmological parameters like $`H_0`$ and $`q_0`$ without going through the cosmic distance ladder; the method is independent of the optical identification of the source and of the evolution of the source rate density with redshift. • Gravitational wave signals from coalescing binaries may reveal important information on the equation of state of dense nuclear matter, including the nuclear compression modulus, the hadronic effective masses, the relative hyperon-nucleon and nucleon-nucleon coupling constants, possible kaon condensation and a quark/hadron phase transition. • Study of coalescing neutron star binaries may also answer other long standing questions in nuclear astrophysics. NS-NS binary mergers may eject extremely neutron-rich matter which decompresses, beta-decays and captures neutrons, forming the classical r-process . Detailed numerical simulations of the shock heating and mass ejection process are needed. • Coalescing neutron star binaries are among the most popular candidate sources of gamma-ray bursts. In order to evaluate the feasibility of the model, detailed studies taking the full general relativistic effects into account are needed to determine the maximum possible energy released, heating and mass ejection in the coalescence process. ### A Minimum Configuration • Description of the Physical System: Two 1.4 solar mass neutron stars in head-on collision, falling in from infinity. The general relativistic simulation begins when the two stars are $`4R`$ apart, with $`R=`$ radius of star. The simulation covers $`10ms`$ in time for the dynamics of the merging and ringdown phases, and $`20R`$ in space for resolution and boundary considerations. • Purposes: Study the general relativistic dynamics of the merging and ringdown phases of the head-on collision.
• Grid Setup: Resolution = 25 gridpoints/$`R`$, Total Grid Size = $`500^3=10^8`$ • Memory Requirement: $`180GBytes`$ • Floating Point Operations: Flops/gridpt/time step = $`10^4`$ (With only weak coordinate control) Total number of time steps = $`10^4`$ Total flops = $`10^{16}`$ Run time = 3 hours (With 1 TeraFlops sustained) • Disk: Run time disk size = 800 GBytes (Output 10 functions with 1/100 sampling) Storage = 8 TeraBytes (with 10 runs for comparison studies) • Present Status: Code for carrying out this simulation is currently available. A code constructed for the NASA Neutron Star Grand Challenge Project, which is capable of solving the full Einstein equations coupled to general relativistic hydrodynamics, has recently been released. This code has been tested on a 1024 node T3E-1200 (provided for the neutron star project for performance tests, though not available for production runs), achieving 142 GFlops and linear scaling up to 1024 nodes. A summary of the test results is given below. (The NSF Black Hole Grand Challenge Project is also constructing massively parallel code for solving the Einstein equations, see for present status.) Code tested: NASA Neutron Star Grand Challenge GR3D Einstein Spacetime (ADM) coupled to MAHC HYPERBOLIC\_HYDRO (code tested with the released version, without special tuning for this 1024 node machine.) Date tested: May 10, 1998 ``` 
                           32 bit              64 bit
------------------------------------------------------------------
Grid Size per Processor    84x84x84            66x66x66
Processor topology         8 x8 x16            8 x8 x16
Total Grid Size            644 x 644 x 1284    500 x 500 x 996
Single Proc MFlop/sec      144.35              118.33
Aggregate GFlop/sec        142.2               115.8
Scaling efficiency         96.2%               95.6%
------------------------------------------------------------------
 ``` ### B Medium Configuration • Description of Physical System: Two 1.4 solar mass neutron stars in inspiral coalescence. The full general relativistic simulation begins when the stars enter the last 8 orbits. The simulation covers $`60ms`$ in time and $`22R`$ in space. • Purposes: Study the general relativistic inspiral dynamics beginning with the 3PN breakpoint. This enables reliable initial data to be set. Study the effects of the angular momentum and gravitational radiation backreaction on shock heating in the merger phase. • Grid Setup: Resolution = 50 gridpoints/$`R`$, Total Grid Size = $`10^9`$ • Memory Requirement: $`1.8TBytes`$ • Floating Point Operations: Flops/gridpt/time step = $`10^4`$ (With only weak coordinate control) Total number of time steps = $`10^5`$ Total flops = $`10^{18}`$ Run time = 300 hours (With 1 TeraFlops sustained) • Disk: Run time disk size = 20 TBytes (Output 10 functions with 1/400 sampling) Storage = 100 TeraBytes (with 5 runs for comparison studies) • Visualization: Need parallel visualization engine. • Present Status: Code basically ready for pilot studies. Tests of the effects of the implementation of weak coordinate control to be performed. ### C Preferred Configuration • Description of Physical System: Two 1.4 solar mass neutron stars in inspiral coalescence. The full general relativistic simulation begins when the stars enter the last 8 orbits. The simulation covers $`60ms`$ in time and $`40R`$ in space (one wavelength for a gravitational wave with period 1ms). • Purposes: Study the same system with strong coordinate control and more reliable waveform extraction.
• Grid Setup: Resolution = 50 gridpoints/$`R`$, Total Grid Size = $`10^{10}`$ • Memory Requirement: $`18TBytes`$ • Floating Point Operations: Flops/gridpt/time step = $`10^5`$ (With strong coordinate control) Total number of time steps = $`10^5`$ Total flops = $`10^{20}`$ Run time = 3,000 hours (With 10 TeraFlops sustained) • Disk: Run time disk size = 200 TBytes (Output 10 functions with 1/400 sampling) Storage = 1000 TeraBytes (with 5 runs for comparison studies, template preparation not included) • Need to push the frontiers on computation, storage, visualization, and networking. • Present Status: Code basically ready for pilot studies. Efficient control of the coordinate system to be investigated. (A back-of-the-envelope estimator reproducing the resource figures of the three configurations above is sketched after the acknowledgements.) ## IV Acknowledgements I thank S. Finn, K. Blackburn, M. Miller, L. Smarr, B. Sugar, M. Tobias, J. Towns, C. Will, and J. York for useful input in preparing this document. The general relativistic astrophysics code ”GR3D” discussed in Sec. 4 was developed by the NCSA-Potsdam-Wash U numerical relativity collaboration, with support from the NSF Gravitational Physics Program Grant No. Phy-96-00507, NASA HPCC/ESS Grand Challenge Applications Grant No. NCCS5-153, NSF NRAC Allocation Grant no. MCA93S025, and the Albert Einstein Institute.
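As promised above, a back-of-the-envelope estimator (plain Python) that reproduces the resource figures of Secs. III A–C. The count of roughly 200 grid functions is an assumption chosen to match the quoted memory (the text says only “hundreds of 3D arrays”), and 64-bit (8-byte) variables are assumed throughout:

```
def estimate(grid_pts, grid_functions, flops_per_pt_step, steps, sustained_flops):
    memory_GB = grid_pts * grid_functions * 8 / 1e9       # 8 bytes per 64-bit number
    total_flops = grid_pts * flops_per_pt_step * steps
    runtime_h = total_flops / sustained_flops / 3600.0
    return memory_GB, total_flops, runtime_h

# Minimum configuration: 500^3 grid, 1e4 flops/pt/step, 1e4 steps, 1 TFlops sustained
print(estimate(500**3, 180, 1e4, 1e4, 1e12))   # ~(180 GB, 1.2e16 flops, ~3.5 h)

# Medium configuration: 1e9 points, 1e5 steps -> ~1e18 flops, ~280 h at 1 TFlops
print(estimate(1e9, 225, 1e4, 1e5, 1e12))      # ~(1800 GB = 1.8 TB, 1e18, ~280 h)

# Preferred configuration: 1e10 points, 1e5 flops/pt/step, 10 TFlops sustained
print(estimate(1e10, 225, 1e5, 1e5, 1e13))     # ~(18000 GB = 18 TB, 1e20, ~2800 h)
```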
no-problem/9904/cond-mat9904168.html
ar5iv
text
# The Superconductor-Insulator Transition in 2D ## I INTRODUCTION After about two decades of research, the superconductor-insulator (SI) transition in disordered films of metals remains a controversial subject, mainly due to contradictory results in both theoretical and experimental studies. This work aims to improve the understanding of this phenomenon, which might also be relevant for high-T<sub>c</sub> superconductors and possibly connected to novel metal-insulator transitions in 2D electron systems. The superconductor-insulator transition in ultrathin films of metals is believed to occur at the absolute zero of temperature when the quantum ground state of the system is changed by tuning disorder, film thickness, carrier concentration or magnetic field. Unlike finite temperature phase transitions in which thermal fluctuations are crucial, $`T=0`$ phase transitions are driven purely by quantum fluctuations. At finite temperatures, an underlying quantum phase transition manifests itself in the scaling behavior of the resistance with the appropriate tuning parameter and the temperature, along with the coherence length and dynamical critical exponents, $`\nu `$ and $`z`$ respectively. Various models of the superconductor-insulator transition in disordered films can be roughly divided into two groups: those in which the superconductivity is destroyed by fluctuations of the amplitude of the order parameter, and those which focus only on the phase fluctuations. If superconductivity is destroyed only by phase fluctuations, then Cooper pairs persist on the insulating side of the transition and the transition may be described by a model of interacting bosons in the presence of disorder. Based on this assumption, Fisher and co-workers suggested a scaling theory and a phase diagram for a two-dimensional system as a function of temperature, disorder, and magnetic field. The superconducting phase is considered to be a condensate of Cooper pairs with localized vortices, and the insulating phase is a condensate of vortices with localized Cooper pairs. At the transition, both vortices and Cooper pairs are mobile as they exchange their roles, which leads to a finite resistance. Some important predictions of the model are the universal value of this critical resistance and specific values of the critical exponents $`\nu `$ and $`z`$. This so-called ”dirty boson” problem has been extensively studied using quantum Monte Carlo simulations , real-space renormalization group techniques , strong-coupling expansions , and in other ways . Finite temperature behavior in the vicinity of a quantum critical point was also studied analytically. A transition from a superfluid to a Mott insulator was found in the pure case, and to a Bose glass insulator in the presence of disorder, but there is still considerable disagreement as to the universality class of the transition, as well as the value of the critical resistance. An alternative picture of interacting electrons proposes a different mechanism: the density of states and the Cooper pairing are suppressed on the insulating side of the superconductor-insulator transition due to an enhanced Coulomb interaction. The SI transition occurs as a consequence of fluctuations in the amplitude, rather than the phase, of the order parameter. In other words, Cooper pairs break up into single electrons at the transition. Therefore the superconducting gap would also vanish at the transition. The model of interacting electrons has also been studied numerically.
Quantum Monte Carlo simulations of an attractive fermion Hubbard model with on-site interactions yielded a direct superconductor-to-insulator transition in two dimensions without an intervening metallic phase. The critical resistance was found to depend on the strength of the attractive interaction, as a function of which a crossover from a fermionic to a bosonic regime occurs. The results of this theory qualitatively resemble the experimental data. A recent calculation of the effect of disorder on the gap in the density of states, using a similar model , showed that the existence of a gap on the insulating side of the transition depends on the coupling strength, allowing for a Fermi insulator at weak and a Bose insulator at strong coupling. Experimentally, the destruction of superconductivity by disorder has been studied in films of MoGe , InO<sub>x</sub> and Bi, Pb, Ga, Al among others. Evidence was found of $`T_c`$ going to zero with increasing disorder, implying the destruction of Cooper pairs at the transition. Tunneling experiments also seem to support the fermionic picture. Valles et al. found that the superconducting gap and the mean field transition temperature are both suppressed as disorder is increased, and that the gap vanishes on the insulating side of the superconductor-insulator transition . Hsu et al. carried out tunneling studies of the superconductor-insulator transition in PbBi/Ge films, and found a large number of quasiparticle states near the Fermi energy . They estimated the average number of Cooper pairs in a coherence volume to be on the order of one at the superconductor-insulator transition. This result, in combination with the disappearance of the energy gap, was interpreted as evidence of the superconductor-insulator transition being driven by fluctuations in the amplitude of the order parameter. Alternatively, it is possible for the superconducting energy gap to be reduced or the tunneling density of states to be broadened as a consequence of phase fluctuations . Thus, the absence of the gap in these tunneling studies does not necessarily mean that Cooper pairs are absent on the insulating side of the superconductor-insulator transition, but it may imply that a full picture might have to include fermionic degrees of freedom. Evidence of the importance of the bosonic picture can be found in the work of Paalanen et al.. These workers studied the magnetoresistance and the Hall effect in amorphous InO<sub>x</sub> films and observed two distinct transitions: one at a critical field $`B_{xx}^c`$ where the longitudinal resistance diverges and the system presumably undergoes a transition from a superconducting phase to a Bose glass insulator with localized Cooper pairs, and the other at a higher field $`B_{xy}^c`$, where the transverse resistance diverges and the Cooper pairs of the Bose glass insulator presumably unbind. The transition in the transverse resistance occurred at the same magnetic field where the longitudinal resistance showed a maximum. Since a Bose insulator might be expected to have a higher resistance than an insulator with localized single electrons, and from the disorder dependence of $`B_{xx}^c/B_{xy}^c`$, this was interpreted as evidence of the bosonic nature of the insulating state close to the superconductor-insulator transition. Similar behavior was observed by other groups . Magnetoresistance studies of amorphous $`InO_x`$ films by Gantmakher et al. also seem to support the bosonic picture.
Furthermore, a linear component of the magnetoresistance observed in the insulating regime in amorphous Bi films can be interpreted as a signature of vortex motion . In the context of the scaling behavior, the thickness-tuned transition of ultrathin films of amorphous $`Bi`$ has been studied in zero magnetic field. A scaling analysis of the magnetic field-tuned SI transition has been carried out for thin films of $`InO_x`$ and $`MoGe`$. All of these investigations found $`\nu \approx 1.3`$ and $`z\approx 1`$, consistent with the theoretical predictions of the boson Hubbard model. Yet another interpretation of the experimental data has recently been proposed by Shimshoni et al. and expanded upon by Mason and Kapitulnik . In this picture, a film contains both insulating and superconducting puddles, and transport is dominated by tunneling or activated hopping between them. The SI transition then occurs as a consequence of the percolation of one phase or the other. Since the correlation length exponent in 2D classical percolation is 4/3, this is consistent with $`\nu \approx 1.3`$ observed in most experiments. This model also predicts a saturation of the resistance at very low temperatures, which seems to be supported by the experimental data of Ephron et al. , and Yazdani and Kapitulnik. Similar effects have been observed in the much earlier work of Wang et al. on underdoped high-$`T_c`$ (cuprate) films. These ideas may be relevant to similar features of the results of Kravchenko et al. on two dimensional electron gas systems . In all studies in which there is flattening in $`R(T)`$ at low temperatures, one must be concerned with the possibility of electrical noise being the source of the effect. Also, in multi-component materials such as MoGe and underdoped cuprates there is always a possibility of second phases affecting the outcome. Furthermore, it has recently been proposed that the flattening in $`R(T)`$ at low temperatures may be a signature of a Bose metal, a phase in which the Cooper pairs are mobile but do not condense . The quantitative results of the study of the magnetic field-tuned superconductor-insulator transition presented here for disordered metal systems are in serious disagreement with previous measurements of this transition, adding yet another puzzle to this problem, and calling for a re-examination of existing models. The thickness-tuned transition has also been studied in a nonzero magnetic field. This allows for the construction of a phase diagram and a direct comparison of the two different ways of tuning the SI transition, by varying thickness or magnetic field. This paper is organized as follows: the finite-size scaling procedures used to determine the critical exponents are described in Section II. Experimental details are given in Section III. Section IV focuses on the magnetic field-tuned transition, while the analysis of the thickness-tuned transition in finite magnetic field, which has not been studied before, is presented in Section V. In Section VI, the phase diagram as a function of thickness and magnetic field is presented. The critical resistance and its apparent non-universality are discussed in Section VII. The results and their implications are summarized and further discussed in Section VIII. A brief account of a portion of this work has been previously reported .
## II SCALING PROCEDURES

The scale of fluctuations on either side of a quantum phase transition is set by a diverging correlation length $`\xi \propto \delta ^{-\nu }`$ and a vanishing characteristic frequency $`\mathrm{\Omega }\propto \xi ^{-z}`$. Here $`\delta `$ is the deviation from the critical point, $`\delta =|K-K_c|`$, where $`K`$ is the control or tuning parameter which drives the system through the transition (i.e., disorder, thickness, magnetic field, etc.), $`K_c`$ is the critical value of $`K`$ at the transition, $`\nu `$ is the correlation length exponent, and $`z`$ is the dynamical critical exponent. The exponents $`\nu `$ and $`z`$ determine the universality class of the transition. They do not depend on the microscopic details of the physics of the system under study, but only on its dimensionality, the symmetry group of its Hamiltonian, and the range of the interactions. The resistance of a two-dimensional system in the quantum critical regime follows the scaling relation :

$$R(\delta ,T)=R_cf(\delta T^{-1/\nu z})$$ (1)

Here $`\delta =|d-d_c|`$ in the case of the thickness-tuned transition and $`\delta =|B-B_c|`$ in the case of the magnetic field-tuned transition. $`R_c`$ is the critical resistance at $`\delta =0`$, and $`f(x)`$ is a universal scaling function such that $`f(0)=1`$. The first step in the analysis of the experimental data is to determine the critical value of the tuning parameter and plot the resistance as a function of $`\delta `$. The $`\delta `$-axis is then re-scaled by a factor $`t`$:

$$R(\delta ,t)=R_cf(\delta t)$$ (2)

where the parameter $`t(T)`$ is determined at each temperature by performing a numerical minimization which yields the best collapse of the data. If the resistance really follows the scaling law (Eq. 1), then $`t(T)`$ has to be a power law in temperature, $`t(T)\propto T^{-1/\nu z}`$. The exponent product $`\nu z`$ is then found by plotting $`t(T)`$ as a function of $`T`$ on a log-log scale and determining the slope, which is equal to $`-1/\nu z`$. Similarly, at a constant temperature :

$$R(\delta ,E)=R_cf(\delta E^{-1/\nu (z+1)})$$ (3)

where $`E`$ is the electric field across the sample. This time, the $`\delta `$-axis is re-scaled by a field-dependent factor $`t(E)`$, which should be a power law in the electric field, $`t(E)\propto E^{-1/\nu (z+1)}`$, and the exponent product $`\nu (z+1)`$ can then be determined from the field dependence of the parameter $`t(E)`$.

The main advantage of this scaling procedure is that it requires neither prior knowledge of the critical exponents, nor the temperature and thickness dependence of the resistance. The critical exponents are determined empirically from the data, with the critical exponent product as the only adjustable parameter, while the critical value of the tuning parameter is determined independently. The temperature scaling determines the product $`\nu z`$, while the electric field scaling determines $`\nu (z+1)`$. Combining the two results, the correlation length exponent $`\nu `$ and the dynamical exponent $`z`$ can be determined separately. An alternative way to determine these critical exponent products is to evaluate the derivative of the resistance with respect to $`K`$ at its critical value $`K_c`$ :

$$(\partial R/\partial K)_{K_c}=R_cT^{-1/\nu z}f^{\prime }(0)$$ (4)

where $`K=d`$ at the thickness-tuned transition and $`K=B`$ at the magnetic field-tuned transition, and $`f^{\prime }(0)`$ is a constant. Plotting $`(\partial R/\partial K)_{K_c}`$ as a function of $`T^{-1}`$ on a log-log scale should yield a straight line, with a slope equal to $`1/\nu z`$. A schematic numerical implementation of the collapse procedure is sketched below.
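The collapse described by Eqs. (1)–(2) is straightforward to automate. The following sketch is illustrative only, not the analysis code used in this work: the synthetic data, the model scaling function `f`, and the exponent product $`\nu z=0.7`$ are placeholders chosen purely to exercise the method.

```python
# Schematic finite-size scaling collapse: find t(T) by minimization, then
# read off nu*z from the log-log slope of t(T).  All values are placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

nu_z_true, R_c = 0.7, 8.0e3               # placeholder exponent product, ohms
temps = np.array([0.2, 0.3, 0.4, 0.5])    # temperatures in K
delta = np.linspace(-0.5, 0.5, 21)        # signed distance from B_c, in kG

def f(x):                                  # a smooth stand-in for f(x), f(0)=1
    return np.exp(x)

# Synthetic isotherms obeying R = R_c f(delta * T^(-1/nu z))
R = {T: R_c * f(delta * T**(-1.0/nu_z_true)) for T in temps}

# Collapse: rescale the delta-axis of each isotherm by t(T) so that all
# curves fall onto R/R_c = f(delta*t); the coldest isotherm defines t = 1.
T_ref = temps[0]
y_ref = R[T_ref] / R_c
x_cmp = np.linspace(-0.1, 0.1, 11)        # interior window covered by all t

def mismatch(t, T):
    # distance between the rescaled isotherm and the reference isotherm
    y = np.interp(x_cmp, delta * t, R[T] / R_c)
    return np.sum((y - np.interp(x_cmp, delta, y_ref))**2)

t_of_T = [minimize_scalar(lambda t: mismatch(t, T), bounds=(0.2, 5.0),
                          method="bounded").x for T in temps]

# If the scaling law holds, t(T) ~ T^(-1/nu z), so the log-log slope
# of t versus T gives -1/(nu z).
slope = np.polyfit(np.log(temps), np.log(t_of_T), 1)[0]
print("estimated nu*z =", -1.0/slope)     # ~0.7 for this synthetic data
```

In this construction only the slope of $`\mathrm{ln}t`$ versus $`\mathrm{ln}T`$ matters, exactly as in the procedure described above; the same loop applied to electric-field isotherms would yield $`\nu (z+1)`$.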
The same derivative method can be applied to the electric field scaling to determine $`1/\nu (z+1)`$, and then $`\nu `$ and $`z`$ can be calculated from the results. In the work described below, both scaling procedures were used to obtain the critical exponents, in order to check their consistency. The exponents obtained using the two different methods were found to be the same, within the experimental uncertainty.

## III EXPERIMENTAL METHODS

Ultrathin Bi films were evaporated on top of a $`10\AA `$ thick layer of amorphous Ge, which was pre-deposited onto either $`SrTiO_3`$ or glazed alumina substrates. The substrate temperature was kept well below $`20K`$ during all depositions, and all the films were grown in situ under UHV conditions ($`\sim 10^{-10}`$ Torr). The film thickness was gradually increased through successive depositions in increments of $`0.1`$–$`0.2\AA `$. Resistance measurements were carried out between the depositions using a standard DC four-probe technique, with currents up to 50 nA. A detailed temperature dependence of the resistance in zero field and in magnetic field was recorded at each film thickness in the temperature range between 0.14 K and 15 K, where the lowest temperatures were achieved using a dilution refrigerator. As the film thickness increased from 7 Å to 15 Å, the temperature dependence of the resistance of the system changed from insulator-like to superconductor-like at low temperatures, with no sign of the reentrant behavior typically observed in granular films. The films that were superconducting in zero field were driven insulating by applying a magnetic field of up to 12 kG perpendicular to the plane of the sample using a superconducting split-coil magnet. The scaling procedures described above were applied to the magnetic field-tuned transition, as well as to the thickness-tuned transition both in zero field and in a fixed magnetic field.

## IV MAGNETIC FIELD-TUNED SI TRANSITION

The resistance as a function of temperature for seven films with varying degrees of disorder was studied in magnetic fields up to 12 kG applied perpendicular to the plane of the sample. A typical temperature dependence of the resistance as the magnetic field changes is shown in Fig. 1. In zero field, the resistance decreases with decreasing temperature, suggesting the existence of superconducting fluctuations. A magnetic field destroys this downward curvature, and at some critical magnetic field, $`B_c`$, the resistance is independent of temperature. In magnetic fields higher than $`B_c`$ the film is insulating, with $`\partial R/\partial T<0`$. Figure 2 shows the resistance as a function of magnetic field for different temperatures. If the sheet resistance is normalized by the value of the critical resistance at each thickness, $`R/R_c(d)`$, then all the data can be collapsed onto a single curve. The collapse of the normalized resistance data as a function of $`\delta t`$ for five samples is shown in Fig. 3. The critical exponent product $`\nu z`$, determined from the temperature dependence of the parameter $`t`$ (inset of Fig. 3), is found to be $`\nu z=0.7\pm 0.2`$, apparently independent of the film thickness. The same exponent products were obtained using the alternative method of plotting $`(\partial R/\partial B)_{B_c}`$ vs. $`T^{-1}`$ on a log-log plot and determining the slope, which is equal to $`1/\nu z`$, as shown in Fig. 4. Electric field scaling was also carried out for one of the samples.
Unfortunately, there was not enough data available on the insulating side of the transition to carry out a complete analysis, but the data on the superconducting side were sufficient to obtain the value of the critical exponent product $`\nu (z+1)`$. The magnetic field dependence of the sheet resistance for different values of the electric field applied across the sample is shown in Fig. 5. The resistance data were then plotted as a function of $`(B-B_c)`$ and re-scaled by a parameter $`t(E)`$ to obtain the best collapse of the data, shown in Fig. 6. For the electric field dependence of the parameter $`t(E)`$, shown in the inset of Fig. 6, the best power-law fit was obtained for $`\nu (z+1)\approx 1.4`$. Combining this result with the result of the temperature scaling, it follows that $`z\approx 1`$ and $`\nu \approx 0.7`$ for the magnetic field-tuned superconductor-insulator transition. In contrast with our findings, previous studies of thin films of amorphous $`InO_x`$ and $`MoGe`$ both showed $`\nu \approx 1.3`$ and $`z=1`$ for the magnetic field-tuned superconductor-insulator transition. Our surprising result is also in obvious disagreement with the prediction of the scaling theory (from which $`\nu \ge 1`$ for a disordered system), as well as with the percolation-based models (from which $`\nu \approx 1.3`$ would be expected).

## V THICKNESS-TUNED SI TRANSITION

For very thin films, the resistance increases exponentially with decreasing temperature, while for the thicker films the resistance goes to zero as the films become superconducting. At the critical thickness $`d_c`$, the resistance is temperature independent, and the system is expected to stay metallic down to $`T=0`$. Using the same methods described above, the critical exponent product $`\nu z`$ was determined to be $`1.2\pm 0.2`$ when the superconductor-insulator transition was tuned by changing the film thickness in zero magnetic field. A similar scaling behavior had been found in ultrathin films of Bi by Liu et al., with the critical exponent product $`\nu z=2.8`$ on the insulating side and $`\nu z=1.4`$ on the superconducting side of the transition. The fact that $`\nu z`$ was found to be different on the two sides of the transition raises the question of whether the measurements really probed the quantum critical regime. It is likely that the scaling was carried out too deep into the insulating phase, forcing the scaling form (Eq. 1) on films which were in a fundamentally different insulating regime. Such films should not be expected to scale together with the superconducting films, hence the discrepancy on the insulating side of the transition. In the present work, the measurements were carried out at lower temperatures than previously studied, and with more detail in the range of thicknesses close to the transition. Both sides of the transition scaled with $`\nu z\approx 1.2`$, which is close to the value obtained by Liu et al. on the superconducting side of the transition. This result is also in very good agreement with the predictions of the scaling theory, renormalization group calculations, and Monte Carlo simulations.

All previous experiments which studied the thickness- or disorder-tuned superconductor-insulator transition were carried out in zero magnetic field. An applied magnetic field is generally expected to change the universality class of the transition, since it breaks time-reversal symmetry. One would therefore expect the critical exponent product $`\nu z`$ to be different in the presence of a finite magnetic field.
Furthermore, the thickness-tuned transition in a finite magnetic field might be expected to be in the same universality class as the magnetic field-tuned transition at fixed thickness. The thickness-tuned superconductor-insulator transition in a finite magnetic field was probed by sorting the magnetoresistance data, which were carefully taken as a function of temperature and magnetic field for each film. A detailed scaling analysis was carried out at fixed magnetic fields of 0.5 kG, 1 kG, 2 kG, 3 kG, 4.5 kG and 7 kG for one set of films, and 12 kG for a different set of films. For each value of the magnetic field, the resistance was plotted as a function of the film thickness at different temperatures, ranging from 0.14 K to 0.5 K, in order to determine the critical thickness at that field. If the sheet resistance is normalized by the critical value at each field, $`R/R_c(B)`$, then the normalized resistance data as a function of the scaling variable for all temperatures and all values of the magnetic field collapse onto a single curve, as shown in Fig. 7. The critical exponent product determined from the parameter $`t(T)`$ (inset of Fig. 7) was found to be $`\nu z=1.4\pm 0.2`$, apparently independent of the magnetic field. Once again, the alternative scaling procedure yielded very similar results, as shown in Fig. 8. This value of the product $`\nu z`$ is a factor of two larger than that obtained for the magnetic field-tuned transition. It is, however, very close to that obtained from the analysis of the zero-field transition carried out using data from the same set of films, which was $`\nu z=1.2\pm 0.2`$. Given the experimental uncertainties, it is hard to say whether this difference in the value of the exponent products reflects a difference between the universality classes of the thickness-driven transitions in zero and finite magnetic field. These exponent products are close to those found in Monte Carlo simulations of the (2+1)-dimensional classical XY model with disorder by Cha and Girvin.

## VI THE PHASE DIAGRAM

Combining the data obtained from the thickness-tuned transitions in a fixed magnetic field and the field-tuned transitions at fixed thickness, one can construct a phase diagram with thickness and magnetic field as independent variables. This is shown in Fig. 9. The films characterized by parameters which lie above the phase boundary are “insulating” ($`\partial R/\partial T<0`$ at finite temperatures), and the ones below it are “superconducting” ($`\partial R/\partial T>0`$ at finite temperatures). The phase boundary is a power law:

$$B_c\propto \left|d-d_c\right|^x$$ (5)

The best fit to the data yields $`x=0.7`$. Near the critical thickness for the zero-field transition, a simple dimensionality argument suggests that the critical magnetic field should scale as:

$$B_c\propto \frac{\mathrm{\Phi }_0}{\xi ^2}$$ (6)

where $`\mathrm{\Phi }_0`$ is the flux quantum. Since the correlation length is $`\xi \propto \left|d-d_c\right|^{-\nu }`$, one might expect the critical field to be:

$$B_c\propto \left|d-d_c\right|^{2\nu }$$ (7)

According to the phase boundary obtained in this experiment (see Eq. 5), this would mean that $`\nu =0.35`$, a value not consistent with the results of the scaling analysis carried out on the same films. It also does not agree with $`\nu =1.3`$ obtained by Refs. and . There is no obvious physical reason for such a small value of $`\nu `$ and the implied large values of $`z`$, so this discrepancy is a mystery at this time. It is possible that the simple argument expressed in Eqs. 6 and 7 is too naive.
Another surprising feature of the experimental results is that the critical exponent product $`\nu z`$ evidently depends on whether the phase boundary is crossed vertically (changing the thickness at a constant magnetic field), in which case $`\nu z\approx 1.4`$, or horizontally (changing the magnetic field at a fixed thickness), in which case $`\nu _Bz_B\approx 0.7`$. One might expect the critical exponents not to depend on the direction in which the boundary is crossed. If, however, the actual tuning parameter were not the film thickness, but some other physical parameter which is a function of thickness, a factor of two in the critical exponent product determined from an analysis using thickness rather than the “correct” control parameter might result. The “correct” control parameter might be some measure of disorder, electron screening, damping, or Cooper pair density. The detailed functional form of the thickness dependence of these parameters for quench-condensed films is not known. Another possibility is that there are actually two phase boundaries, separating three different regimes, so that each exponent belongs to a different phase boundary. There has been some indication of a vortex liquid phase in between the superconducting (vortex glass) phase and the insulating (Bose glass) phase. Since there only appears to be one phase boundary, that is probably not the case. It is possible, however, that the two boundaries could be indistinguishable over the range of parameters explored in these studies, but would become apparent at higher fields, greater film thicknesses, or lower temperatures. These matters need to be investigated further.

## VII THE CRITICAL RESISTANCE

The critical resistance for the field-driven transition, contrary to the predictions of the dirty boson models, does not seem to be universal. Figure 10 shows that $`R_c`$ decreases as the critical field increases, roughly in a linear fashion. Since thicker films have lower normal-state resistances and higher critical fields, this also means that $`R_c`$ decreases with increasing thickness and decreasing normal-state resistance. Very similar behavior was observed by Yazdani and Kapitulnik. In order to explain the non-universal behavior of the critical resistance, these authors proposed a two-channel conduction model, in which the conductance due to the electron (fermion) channel adds to the conductance due to the boson channel. When the unpaired electrons are strongly localized, the conduction is mostly due to bosons, and the resistance is close to $`R_Q=h/4e^2`$, as predicted by the boson Hubbard model. In the opposite limit, unpaired electrons contribute significantly to the conduction at the transition. Films with lower normal-state resistances would then have lower critical resistances, due to the larger fraction of normal electrons. The critical resistances in our experiment, however, are all greater than $`R_Q`$, and their values could only be explained this way if the quantum resistance due to pairs was itself greater than $`R_Q`$. The conductance due to the electronic channel in a magnetic field might also depend on the strength of the spin-orbit interactions, which is another difference between our samples and those of Refs. and . The strength of the spin-orbit interactions is typically proportional to $`Z^4`$, where $`Z`$ is the atomic number. Since $`Bi`$ is a heavy metal, spin-orbit interactions are stronger than in the lighter $`InO_x`$ and $`MoGe`$.
It is known that in weakly localized systems with strong spin-orbit interactions the magnetoresistance is positive, while it is negative otherwise. If weakly localized unpaired electrons really contributed significantly to the conduction at the magnetic field-tuned superconductor-insulator transition at the experimentally accessible finite temperatures, the contribution to the magnetoresistance due to localization effects could have a positive or a negative sign, depending on the strength of the spin-orbit interactions. This would make $`R_c`$ bigger in the case of Bi films, and smaller in the case of $`InO_x`$ and $`MoGe`$, consistent with the experimental observations. There is, however, a striking similarity in the magnetic field and normal-state resistance dependence of the critical resistance of the $`Bi`$ films and the $`MoGe`$ films of Ref. : even though their critical resistances fall on opposite sides of $`R_Q`$, they both decrease with magnetic field roughly linearly, with almost the same slope.

Strictly speaking, the critical resistance is predicted to be universal only at $`T=0`$, while the finite-temperature corrections are expected to scale with the Kosterlitz-Thouless transition temperature, $`T_c`$ :

$$R_c(B_c,T)=R_c^{\ast }+O\left(\frac{T}{T_c}\right)^2$$ (8)

where $`R_c^{\ast }`$ is the universal resistance at $`T=0`$, and $`R_c`$ is the critical resistance at some finite temperature, as measured in the experiments. A closer look at the crossing plots, such as that of Fig. 2, reveals that the critical resistance is indeed slightly temperature dependent. A considerable amount of noise over the accessible temperature range made it hard to compare this temperature dependence with Eq. 8, but the qualitative behavior is shown in Fig. 11. Normal-state resistances of the $`MoGe`$ films are a factor of 3–10 lower than those of the $`Bi`$ films considered here, which means that our samples are probing a different part of the phase diagram (normal-state resistances are inversely proportional to the film thickness in our experiment), and the finite-temperature corrections might be more important in one case than the other. Indeed, somewhat higher critical resistances were found in $`InO_x`$ films if the temperature dependence of $`R_c`$ is taken into account. A recent analytical calculation of the critical resistance of a two-dimensional system at finite temperatures in the dirty boson model, including Coulomb interactions, yielded a critical resistance of $`1.4R_Q`$. The author suggested that the next-order correction would bring the result closer to $`R_Q`$. This result is in excellent agreement with the critical resistance found in the present measurements, which was $`1.1`$–$`1.2R_Q`$. Monte Carlo simulations of the $`(2+1)`$-dimensional XY model without disorder also find the critical resistance to be $`R_c=7.7k\mathrm{\Omega }`$, again very close to the value found in this work.

## VIII DISCUSSION

A lot of attention has been focused recently on the effects of dissipation on SI transitions. Within the picture proposed by Shimshoni et al., the transition between the superconducting and the insulating state is of a percolative nature. On the insulating side of the transition, electrical transport occurs through activation or tunneling of Cooper pairs between superconducting domains. Likewise, on the superconducting side, vortices tunnel from one insulating domain to another. Using incoherent Boltzmann transport theory, Shimshoni et al.
derive resistivity laws in different temperature regimes and predict finite dissipation at $`T=0`$ for all values of the magnetic field. Their results seem to be supported by measurements on several different systems: thin films, 2D Josephson junction arrays, Si MOSFETs, and QH systems, where a saturation of the resistance at low temperatures is observed and attributed to dissipation effects. The percolative nature of the transition can explain the value of $`\nu \approx 1.3`$ found in most of the field-tuned experiments on thin films, as well as the apparent symmetry between the insulating and the conducting phases observed in other experiments. In contrast with the above-mentioned results, we do not observe any saturation in the temperature dependence of the resistance as the temperature decreases; in other words, $`\delta R/\delta T`$ is non-zero down to the lowest temperatures, which were above 0.1 K. Of course, investigation down to even lower temperatures might lead to a different conclusion. However, a satisfactory fit to the resistivity laws predicted by Shimshoni et al. could not be obtained.

Mason and Kapitulnik recently proposed a new phase diagram for the SI transition which takes into account the possibility of a coupling of the system to a dissipative bath. They argued that this coupling, which becomes important when the critical point is approached, can result in a new, metallic-like phase. In this picture, a direct SI transition is still possible for very weak coupling, while for a stronger coupling the system goes through a metallic phase and is truly superconducting only at the lowest magnetic fields. The fact that the typical sheet resistances of our samples are about a factor of five higher than those in which resistance leveling was observed might just mean that our samples are in the weak-coupling regime. However, the correlation length exponent determined in our experiment for the magnetic field-tuned transition, using two different methods, on different physical samples, and at several levels of disorder, was found to be $`\nu \approx 0.7`$, which is not consistent with the exponent expected from classical 2D percolation theory, $`\nu =4/3`$, even with much more generous error bars. A coherence length exponent of 0.7 is also inconsistent with what was believed to be an exact theorem, which predicts $`\nu \ge 1`$ in two dimensions in the presence of disorder. It is interesting to note that our exponent agrees with the result of the classical 3D XY model, which is suggested to be relevant in the absence of disorder. Numerical simulations of a (2+1)-dimensional XY model and the boson Hubbard model at $`T=0`$ without disorder also find $`z=1`$ and $`\nu =0.7`$. However, it was recently suggested that the nature of disorder averaging may introduce a new correlation length, different from the intrinsic one, which might lead to $`\nu <1`$ even for a disordered system. There is also a possibility that local dissipation, coupled to the phase of the superconducting order parameter due to gapless electronic excitations, might change the universality class of the system and lead to a non-universal critical resistance. The critical resistance would then be expected to increase with increasing damping due to dissipation. The latter would be expected to increase with decreasing normal-state resistance. However, we observe that the critical resistance decreases as the normal-state resistance decreases, which is exactly the opposite of the behavior predicted by Wagenblast et al.
We should note that $`\nu \approx 0.7`$ was also found for the magnetic field-tuned insulator-conductor transition in Si MOSFET samples, suggesting a possible connection between the two phenomena. Our results for the magnetic field-tuned SI transition seem to be consistent with the predictions of bosonic models, rather than percolation models. This is further supported by transport studies in the insulating regime, where the magnetoresistance cannot be explained by weak localization theory alone, and the temperature dependence of the resistance fits the predictions of Das and Doniach for bosonic conduction. These observations still need to be reconciled with the results of the tunneling experiments, which find no superconducting gap in the insulating regime. The tunneling experiments might, however, be emphasizing regions of the samples containing quasi-localized single-electron states below the gap, or those in which the amplitude fluctuations break the system into superconducting “islands” with finite spectral gaps in the density of states, as recently predicted. A highly non-uniform gap has also been predicted by Herbut for the case of large disorder. This problem might be clarified using spatially resolved scanning tunneling spectroscopy at low temperatures, which may be able to detect local variations in the density of states. Such studies might also help answer the question as to why $`\nu `$ is different for the thickness- and magnetic field-tuned transitions on the same samples. In the case of the thickness-tuned transition, the correlation length exponent is close to what might be expected from percolation theory. There is a major difference between the magnetic field-tuned and the thickness-tuned transitions: when the transition is tuned by the magnetic field, the microstructure of the sample stays fixed, while in the case of the thickness-tuned transition it changes slightly with each film in the sequence. It may be that in this case percolation effects become relevant, complicating the determination of the critical exponents. Finally, the shape of the phase boundary poses a further challenge to theorists. We are currently investigating the role of dissipation in this system in more detail, using a 2D electron gas as a substrate, similar to the experiment of Rimberg et al.

We gratefully acknowledge useful discussions with A. P. Young, S. Sachdev and P. Phillips. This work was supported in part by the National Science Foundation under Grant No. NSF/DMR-9623477.
# The Mass of the Oppenheimer-Snyder Black Hole

## Abstract

The only instance when the General Relativistic (GTR) collapse equations have been solved (almost) exactly to explicitly find the metric coefficients is the case of a homogeneous spherical dust (Oppenheimer and Snyder in 1939, in Phys. Rev. 56, 455). Equation (37) of their paper showed the formation of an event horizon for a collapsing homogeneous dust ball of mass $`M`$, in that the circumference radius of the outermost surface $`r_b\to r_0=2GM/c^2`$ in a proper time $`\tau _0\propto r_0^{-1/2}`$ in the limit of large Schwarzschild time, $`t\to \infty `$. But Eq. (37) was approximated from Eq. (36), whose essential character is $`t\sim r_0\mathrm{ln}\frac{\sqrt{y}+1}{\sqrt{y}-1}`$, where, at the boundary of the star, $`y=r_b/r_0=r_bc^2/2GM`$. And since the argument of a logarithmic function cannot be negative, one must have $`y\ge 1`$, or $`2GM/r_bc^2\le 1`$. This shows that, at least in this case, (i) trapped surfaces are not formed, (ii) if the collapse indeed proceeds up to $`r=0`$, we must have $`M=0`$, and (iii) the proper time taken for collapse $`\tau \to \infty `$. Thus, the gravitational mass of OS black holes is unique and equal to zero.

One of the oldest and most fundamental problems of physics and astrophysics is that of gravitational collapse, and, specifically, that of the ultimate fate of a sufficiently massive collapsing body. Most of the astrophysical objects that we know of, viz. galaxies, stars, White Dwarfs (WD), Neutron Stars (NS), in a broad sense, result from gravitational collapse. It is well known that the essential concept of BHs, in a primitive form, was born in the cradle of Newtonian gravitation. In Newtonian gravity, the mass of a collapsing gas cloud is constant even if it is radiating, and equal to its baryonic mass, $`M_i=M_f=M_0`$. Thus, as $`r`$ decreases, naturally, the value of $`M/r=M_0/r`$ steadily increases, and at a certain stage one would have $`2GM/r=c^2`$, when the “escape velocity” from the surface of the fluid becomes equal to the speed of light $`c`$ (henceforth $`G=c=1`$). Soon afterwards, a Newtonian BH would be born. And in the context of the classical General Theory of Relativity (GTR), it is believed that the ultimate fate of sufficiently massive bodies is collapse to a Black Hole (BH). In contrast to Newtonian gravity, here the gravitational mass of a radiating fluid constantly decreases, and therefore, unlike the Newtonian case, one cannot predict with real confidence the actual value of $`M_f`$ when one would have $`2M_f/r=1`$. Given an initial gravitational mass $`M_i`$, the three quantities $`M_i`$, $`M_f`$ and $`M_0`$ are not connected amongst themselves by means of any fundamental constants or by any basic physical principles. Thus, in a strict sense, one requires to solve the Einstein equations for the collapsing fluid for realistic and ever-evolving equation of state (EOS) and radiation transport properties. Unfortunately, the reality is that even when one does away with the EOS by assuming the fluid to be a dust, whose pressure is zero everywhere including the center, there is no unique solution to the problem. Depending upon the initial conditions, like density distributions, and assumptions, like self-similarity, adopted, one may find either a BH or a “naked singularity”. It is only when the dust is assumed to be homogeneous that a unique solution can be found. Homogeneous or inhomogeneous, all dust balls have a very special property: they have no internal energy and they cannot radiate.
Consequently, the gravitational mass of a dust remains constant during the collapse process, which is, essentially, a Newtonian property. And if the dust collapse starts from a state of infinite dilution at $`r=r_\infty =\infty `$, the gravitational mass of a dust must be equal to its baryonic mass, $`M=M_0`$, another Newtonian property. Having made these remarks, we now proceed to self-consistently analyze the pioneering work of Oppenheimer and Snyder (OS) to determine the mass of the BH whose formation it suggested. Since the OS solutions are the only (asymptotic and near) exact solutions for GTR collapse, and are believed to explicitly show the formation of an “event horizon” (EH), it is extremely important to critically reexamine them.

The study of any collapse problem becomes more tractable if one uses comoving coordinates, which are free from any kind of “coordinate singularities”. By definition, for a given fluid element, the comoving coordinate $`R`$ is fixed. The most natural choice for $`R`$ is the number of baryons within a given mass shell, $`N(R)`$, or any number proportional to it. The comoving time $`\tau `$ is, of course, the time recorded by a clock attached to a fluid element at $`R=R`$. Since a dust is in perennial free fall, comoving time is synonymous with “proper time” and, therefore, the dust metric is

$$ds^2=d\tau ^2-e^{\overline{\omega }}dR^2-e^\omega (d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2)$$ (1)

It turns out that, for any spherically symmetric metric, the angular part is the same, and we have

$$e^\omega =r^2$$ (2)

where $`r`$ is the invariant circumference radius; this was Eq. (27) in the OS paper. OS tried to solve the collapse equations by using the metric (2); and, skipping several intermediate equations, we shall focus attention on the solutions they obtained. Eqs. (23)–(27) of their paper essentially lead to a relationship between $`\tau `$ and $`r`$:

$$\tau =\frac{2}{3}\frac{R^{3/2}-r^{3/2}}{(R/R_b)^{3/2}r_0^{1/2}};\qquad R\le R_b$$ (3)

By transposing, the foregoing equation yields

$$\frac{r}{R}=\left(1-\frac{3}{2}\frac{r_0^{1/2}\tau }{R_b^{3/2}}\right)^{2/3};\qquad R\le R_b$$ (4)

Apart from the comoving coordinate system, there is another useful coordinate system, the Schwarzschild system,

$$ds^2=e^\nu dt^2-e^\lambda dr^2-r^2(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2)$$ (5)

and, in principle, it should be possible to work out the collapse (or any) problem in this coordinate system. The time $`t`$ appearing in the Schwarzschild coordinate system is the proper time of a distant inertial observer. Further, by definition, the comoving coordinates cannot be extended beyond the boundary of the fluid. Thus, from either consideration, the external solutions must be expressed in terms of the Schwarzschild system. OS obtained a general form of the Schwarzschild metric coefficients which involves the derivatives $`\dot{r}=\partial r/\partial \tau `$ and $`\dot{t}=\partial t/\partial \tau `$ :

$$g_{rr}=e^\lambda =(1-\dot{r}^2)^{-1}$$ (6)

and

$$g_{tt}=e^\nu =\dot{t}^{-2}(1-\dot{r}^2)^{-1}$$ (7)

Again skipping a few intermediate steps, we note that OS obtained a general relationship between the coordinate time $`t`$ and the coordinate radius $`R`$ for the region inside the collapsing body:

$$t=\frac{2}{3}r_0(R_b^{3/2}r_0^{-3/2}-y^{3/2})-2r_0y^{1/2}+r_0\mathrm{ln}\frac{y^{1/2}+1}{y^{1/2}-1}$$ (8)

where

$$y\equiv \frac{1}{2}\left[(R/R_b)^2-1\right]+\frac{R_br}{r_0R}$$ (9)

It is the above Eq. (8) which corresponds to Eq.
(36) in the OS paper, and, for very large $`t`$, it attains the form

$$t=r_0\mathrm{ln}\frac{y^{1/2}+1}{y^{1/2}-1}$$ (10)

Now we trace some of the intermediate steps used by OS and not contained in their paper. By using the simple relation $`\mathrm{ln}(m/n)=-\mathrm{ln}(n/m)`$, one may rewrite the above equation as

$$t=-r_0\mathrm{ln}\frac{y^{1/2}-1}{y^{1/2}+1}$$ (11)

In the limit of large $`t`$ and $`y\to 1`$, the above equation becomes

$$t=-r_0\mathrm{ln}\left(\frac{y-1}{4}\right)$$ (12)

Or,

$$y-1=4e^{-t/r_0}$$ (13)

However, OS overlooked the numerical factor of “4” in their exercise. From Eqs. (9) and (13), we find that

$$y-1=\frac{1}{2}\left[(R/R_b)^2-3\right]+\frac{R_br}{r_0R}=4e^{-t/r_0}$$ (14)

The following equation, obtained from the foregoing one, is used to eliminate $`r`$ from the relevant equations:

$$\frac{R_br}{r_0R}=4e^{-t/r_0}-\frac{1}{2}\left[(R/R_b)^2-3\right]$$ (15)

And using Eqs. (4) and (15) in (13), we obtain

$$t\approx -r_0\mathrm{ln}\left\{\frac{1}{8}\left[\left(\frac{R}{R_b}\right)^2-3\right]+\frac{R_b}{4r_0}\left(1-\frac{3r_0^{1/2}\tau }{2R_b^{3/2}}\right)^{2/3}\right\}$$ (16)

However, the corresponding equation in the OS paper (their Eq. 37) contained two small errors:

$$t\approx -r_0\mathrm{ln}\left\{\frac{1}{2}\left[\left(\frac{R}{R_b}\right)^2-3\right]+\frac{R_b}{2r_0}\left(1-\frac{3r_0^{1/2}\tau }{2R_b^2}\right)^{2/3}\right\}$$ (17)

While the first error is a genuine one, due to the omission of the factor 4, the second one is a typographical error: the power of $`R_b`$ in the numerator of the last term of this equation should be $`3/2`$ and not $`2`$. From the foregoing equation, they concluded that “for a fixed value of $`r`$ as $`t`$ tends toward infinity, $`\tau `$ tends to a finite limit, which increases with $`r`$”. The fact that $`\tau `$ is finite at $`r=r_0`$ or $`r=0`$ is evident from Eq. (4) too (provided $`r_b\ne \infty `$ and $`r_0>0`$). This was essentially the idea behind the occurrence of an Event Horizon (EH). By differentiating Eq. (16) with respect to $`\tau `$, we obtain

$$\dot{t}=\frac{e^{t/r_0}}{4}\left(\frac{r_0R}{rR_b}\right)^{1/2}$$ (18)

Similarly, differentiating Eq. (15), and using the above result, we obtain

$$\dot{r}=-\frac{R^{3/2}r_0^{1/2}}{R_b^{3/2}r^{1/2}}$$ (19)

Now using Eqs. (15), (18) and (19) in Eqs. (6)–(7), we obtain

$$e^{-\lambda }=1-(R/R_b)^2\left\{4e^{-t/r_0}+\frac{1}{2}\left[3-(R/R_b)^2\right]\right\}^{-1}$$ (20)

and

$$e^\nu =e^{\lambda -2t/r_0}\left\{\frac{e^{t/r_0}}{4}+\frac{1}{8}\left[3-(R/R_b)^2\right]\right\}$$ (21)

However, in the paper of OS, these two foregoing equations appear (Eqs. \[38-39\]) in a slightly erroneous form because of the omission of the numerical factor of 4 in Eq. (13):

$$e^{-\lambda }=1-(R/R_b)^2\left\{e^{-t/r_0}+\frac{1}{2}\left[3-(R/R_b)^2\right]\right\}^{-1}$$ (22)

and

$$e^\nu =e^{\lambda -2t/r_0}\left\{e^{t/r_0}+\frac{1}{2}\left[3-(R/R_b)^2\right]\right\}$$ (23)

Note that these equations were obtained by eliminating $`r`$, and the $`t\to \infty `$ limit covers both the $`r\to r_0`$ limit as well as the further $`r\to 0`$ limit. Now, recall that the comoving coordinates $`R`$ and $`R_b`$ are fixed; when there is a total collapse to a physical point at $`r=0`$, the metric coefficients must blow up irrespective of the value of $`R`$ (i.e., the label of a given mass shell). But this does not happen for the solutions obtained by OS! OS correctly pointed out that it is only the outer boundary ($`R_b`$) for which (one of) the metric coefficients assumes the desired form: $`e^\lambda \to \infty `$.
“For $`R`$ equal to $`R_b`$, $`e^\lambda `$ tends to infinity like $`e^{t/r_0}`$ as $`t`$ approaches infinity.” But for any interior point, the limiting values are different, and in fact, $`e^\lambda `$ remains finite even when the collapse is supposed to be complete! OS noted, “For $`R`$ less than $`R_b`$, $`e^\lambda `$ tends to a finite limit as $`t`$ tends to infinity”, without trying to find why it is so. And, on the other hand, when $`e^\lambda `$ is finite, one sees that $`e^\nu \to 0`$. But, for the external boundary, for which $`e^\lambda \to \infty `$, it cannot be predicted whether $`e^\nu `$ indeed approaches zero (these limits remain unchanged even when one ignores the numerical factor of 4). They admitted this problem while they wrote, “Also for $`r\le r_0`$, $`\nu `$ tends to minus infinity”. This $`\nu \to -\infty `$ limit would correspond to the singularity $`R\to 0`$. However, they did not ponder why, for $`R=R_b`$, $`e^\nu `$ does not behave in the desired manner. Specifically, the fact that for $`R<R_b`$, $`e^\lambda `$ fails to blow up at the singularity definitely hints that there is some tacit assumption made in the OS analysis which is not realized in Nature (GTR), or that there is a basic fault in the formulation of the problem.

In our attempt at a possible resolution of this physical anomaly with regard to the unphysical aspect of the OS solutions, we see from Eq. (8) that $`t\to \infty `$ if either or both of the two following conditions are satisfied:

$$r_0\to 0;\qquad t\to \infty $$ (24)

and

$$y\to 1;\qquad t\to \infty $$ (25)

OS implicitly assumed that $`r_0`$ is finite, and then $`y\to 1`$ and then $`y<1`$ as $`t\to \infty `$: “$`\lambda `$ tends to a finite limit for $`r>r_0`$ as $`t`$ approaches infinity, and for $`r_b=r_0`$ tends to infinity. Also for $`r\le r_0`$, $`\nu `$ tends to minus infinity.” But, while doing so, they completely overlooked the most important feature of Eq. (8) (their Eq. 36), and of Eq. (10): in view of the presence of the $`t\sim \mathrm{ln}\frac{y^{1/2}+1}{y^{1/2}-1}`$ term, in order that $`t`$ be definable at all, one must have

$$y\ge 1$$ (26)

For an insight into the problem, we first focus attention on the outermost layer, where $`y_b=r_b/r_0`$, so that the above condition becomes

$$r_b\ge r_0$$ (27)

This condition tells us that $`r_b`$ can never plunge below $`r_0`$: thus, a careful analysis of the GTR homogeneous dust problem, as enunciated by OS themselves, actually tells us that trapped surfaces cannot be formed, even though one is free to chase the limit $`r\to r_0`$. We may further rewrite this condition as

$$\frac{r_b}{r_0}\ge 1;\qquad \frac{2M}{r_b}\le 1$$ (28)

This means that, if the collapse indeed proceeds up to $`r_b=0`$, the final gravitational mass of the configuration would be

$$M_f(r=0)=0$$ (29)

But then, for a dust or any adiabatically evolving fluid,

$$M_i=M_f=\mathrm{constant}$$ (30)

Therefore, we must have $`M_i=0`$ too. This means that $`r_0=0`$ in this case, and then it may be promptly verified that, irrespective of the value of $`r`$, Eqs. (20) and (21) lead to a unique limiting value for the metric coefficients:

$$e^{-\lambda }\to 1-\left(4e^{-t/r_0}+1\right)^{-1}\to 0$$ (31)

or,

$$e^\lambda \to \infty $$ (32)

and

$$e^\nu \approx e^{\lambda -2t/r_0}\to 0$$ (33)

because $`t/r_0`$ approaches $`\infty `$ much faster than $`\lambda `$, if $`r_0=0`$. Thus, technically, the final solutions of OS are correct, except for the fact that they did not organically incorporate the crucial $`y\ge 0`$ condition in the collapse equations (here we ignore the missing numerical factor of 4). And all we have done here is to rectify this colossal lacuna and fix the value of $`r_0=0`$.
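As a quick numerical illustration of the contrasting limits discussed above, the reconstructed Eq. (20) can be evaluated directly. The following sketch is only a consistency check of that reconstruction, not part of the original analysis; $`r_0`$ is set to unity as an arbitrary scale.

```python
# Evaluate e^lambda from the reconstructed Eq. (20) for a boundary shell
# (R/R_b = 1) and an interior shell (R/R_b = 0.5) at increasing t.
import numpy as np

def exp_lambda(t, ratio, r0=1.0):
    """e^lambda from Eq. (20); ratio = R/R_b, r0 an arbitrary unit."""
    em = 1.0 - ratio**2 / (4.0*np.exp(-t/r0) + 0.5*(3.0 - ratio**2))
    return 1.0/em

for ratio in (1.0, 0.5):
    print(ratio, [f"{exp_lambda(t, ratio):.3g}" for t in (1.0, 5.0, 20.0)])
# For R/R_b = 1 the values grow like e^{t/r0}/4 (about 1.7, 38, 1.2e8),
# while for R/R_b = 0.5 they saturate near 1/(1 - 0.25/1.375), about 1.22.
```

This reproduces the behaviour quoted from OS: only the outermost shell develops a diverging $`e^\lambda `$, while every interior shell saturates at a finite value.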
This conclusion, that the work of OS demands $`r_0=0`$, could have been obtained in a much more direct fashion simply from the definition of $`y`$ in Eq. (9). To this effect, we rewrite this equation as

$$y\equiv \frac{1}{2}(\alpha ^2-1)+\frac{r}{r_0\alpha }$$ (34)

Note that during the collapse process $`R/R_b\equiv \alpha `$ remains fixed for a given mass shell. Suppose we are considering the collapse of an interior shell with $`\alpha <1`$ and $`\alpha ^2-1<0`$. Then, if $`r_0>0`$, the second term on the right-hand side of the foregoing equation can be made arbitrarily small as $`r\to 0`$. Therefore, $`y`$ would become negative as the collapse progresses if $`r_0>0`$. To avoid this, we must have $`r/r_0\ge 1`$.

We may recall here that, long ago, Harrison et al. (pp. 75) mentioned that spherical gravitational collapse should come to a decisive end with $`M_f=M^{\ast }=0`$. And they termed this understanding a “Theorem” (without offering a real proof): “THEOREM 23: Provided that matter does not undergo collapse at the microscopic level at any stage of compression, then, regardless of all features of the equation of state - there exists for each fixed number of baryons a gravitationally collapsed configuration, in which the mass-energy $`M^{\ast }`$ as sensed externally is zero.” In a somewhat more realistic way, Zeldovich & Novikov (see pp. 297) discussed the possibility of having an ultracompact configuration of degenerate fermions, obeying an equation of state $`p=e/3`$, where $`e`$ is the proper internal energy density, with $`M\to 0`$. And it is also well known that the so-called “naked singularities” could be of zero gravitational mass. We have already seen that the coordinate time required for collapse to the event horizon or beyond is $`t=\infty `$. And it may be found that the proper time for collapse of the outer boundary of a dust ball to the central singularity is

$$\tau =\pi \left(\frac{r_\infty ^3}{8M}\right)^{1/2}$$ (35)

where the dust ball is assumed to be at “rest” at $`r=r_\infty `$ at $`\tau =t=0`$. This equation is also obtainable from Eq. (4), provided one chooses $`R_b`$ in such a way that

$$R_b^{3/2}=\frac{\sqrt{2}\pi }{3}r_\infty ^{3/2}$$ (36)

Since $`M=0`$ for the OS problem, the proper time for collapse is infinite, $`\tau =\infty `$, and not finite. Physically, this means that at any finite proper or coordinate time there is never any OS black hole. The collapse process goes on and on indefinitely, as spacetime becomes infinitely curved near the would-be singularity $`r=0`$. If this picture is correct, for self-consistency, Nature should not allow the existence of finite-mass BHs. And at this juncture, some readers may argue that this is not so: irrespective of the results of the OS problem as highlighted by us in this paper, it may be argued that GTR allows the existence of arbitrary-mass Schwarzschild BHs. In another related paper, we assumed the existence of a finite-mass BH which is described by the Kruskal-Szekeres metric. If this assumption of the existence of a finite-mass BH is indeed allowed by GTR, one would find the world line of a material particle to be timelike, $`ds^2>0`$, everywhere except probably at the central singularity $`r=0`$. However, we have found that, even in the Kruskal-Szekeres metric, the metric appears to be null, $`ds^2=0`$, at $`R=2M`$. If we describe the region interior to the EH by the Lemaitre metric, we have found that, in this case too, the metric becomes null at the EH.
We have further shown that, in the Kruskal-Szekeres metric, the metric continues to be null for $`R<2M`$ too. And this can be explained only when we realize that the Event Horizon itself is the end of spacetime for the free-falling particle. This implies that the mass of the BH is actually $`M\to 0`$, in complete agreement with the present exact analysis. This is also in complete agreement with the idea of Einstein that the Schwarzschild singularity cannot be realized in practice ($`\tau =0`$).
# Spatial Optical Solitons due to Multistep Cascading

## Abstract

We introduce a novel class of parametric optical solitons supported simultaneously by two second-order nonlinear cascading processes: second-harmonic generation and sum-frequency mixing. We obtain, analytically and numerically, the solutions for three-wave spatial solitons and show that the presence of an additional cascading mechanism can change dramatically the properties and stability of two-wave quadratic solitary waves.

As is known, optical cascaded nonlinearities due to parametric wave mixing can lead to a large nonlinear phase shift and spatial solitary waves, resembling those for a Kerr medium. However, solitary waves supported by cascaded nonlinearities demonstrate much richer dynamics due to the nonintegrability of the governing nonlinear equations and, unlike solitons of the Kerr nonlinearity, quadratic solitons can become unstable in a certain narrow region of their parameters.

In this Letter we introduce a novel class of parametric spatial solitons supported simultaneously by two nonlinear quadratic (or $`\chi ^{(2)}`$) optical processes: second-harmonic generation (SHG) and sum-frequency mixing (SFM). As has been recently shown by Koynov and Saltiel for continuous waves, under the condition that the two wave-mixing processes are nearly phase matched, the presence of multistep cascading leads to a fourfold reduction of the input intensity required to achieve a large nonlinear phase shift. Here, we demonstrate that multistep cascading can lead to a new type of parametric soliton. Introducing a third wave, generated via a SFM process, we find that it can alter both the general properties and the stability of two-wave $`\chi ^{(2)}`$ spatial solitons. Moreover, we reveal the existence of a new type of so-called quasi-soliton, which appears for a negative mismatch of the SFM process.

To introduce the model of multistep cascading, we consider a fundamental beam with frequency $`\omega `$ entering a noncentrosymmetric nonlinear medium with a $`\chi ^{(2)}`$ response. As a first step, the second-harmonic wave with frequency $`2\omega `$ is generated via the SHG process. As a second step, we expect the generation of higher-order harmonics due to SFM, for example, a third harmonic ($`\omega +2\omega =3\omega `$) or even a fourth harmonic ($`2\omega +2\omega =4\omega `$). When both such processes are nearly phase matched, they can lead, via down-conversion, to a large nonlinear phase shift of the fundamental wave. Additionally, as we demonstrate in this paper, multistep cascading can support a novel type of three-wave solitary wave in a diffractive $`\chi ^{(2)}`$ nonlinear medium, multistep cascading solitons. We start our analysis with the reduced amplitude equations derived in the slowly varying envelope approximation with the assumption of zero absorption of all interacting waves (see, e.g., Ref. ).
Introducing the effect of diffraction in a slab waveguide geometry, we obtain

$$\begin{array}{c}2ik_1\frac{\partial A_1}{\partial z}+\frac{\partial ^2A_1}{\partial x^2}+\chi _1A_3A_2^{\ast }e^{-i\mathrm{\Delta }k_3z}+\chi _2A_2A_1^{\ast }e^{-i\mathrm{\Delta }k_2z}=0,\hfill \\ 4ik_1\frac{\partial A_2}{\partial z}+\frac{\partial ^2A_2}{\partial x^2}+\chi _4A_3A_1^{\ast }e^{-i\mathrm{\Delta }k_3z}+\chi _5A_1^2e^{i\mathrm{\Delta }k_2z}=0,\hfill \\ 6ik_1\frac{\partial A_3}{\partial z}+\frac{\partial ^2A_3}{\partial x^2}+\chi _3A_2A_1e^{i\mathrm{\Delta }k_3z}=0,\hfill \end{array}$$ (1)

where $`\chi _{1,2}=2k_1\sigma _{1,2}`$, $`\chi _3=6k_1\sigma _3`$, and $`\chi _{4,5}=4k_1\sigma _{4,5}`$, and the nonlinear coupling coefficients $`\sigma _k`$ are proportional to the elements of the second-order susceptibility tensor, which we assume to satisfy the following relations (no dispersion): $`\sigma _3=3\sigma _1`$, $`\sigma _2=\sigma _5`$, and $`\sigma _4=2\sigma _1`$. In Eqs. (1), $`A_1`$, $`A_2`$ and $`A_3`$ are the complex electric field envelopes of the fundamental harmonic (FH), second harmonic (SH), and third harmonic (TH), respectively, $`\mathrm{\Delta }k_2=2k_1-k_2`$ is the wavevector mismatch for the SHG process, and $`\mathrm{\Delta }k_3=k_1+k_2-k_3`$ is the wavevector mismatch for the SFM process. The subscripts ‘1’ denote the FH wave, the subscripts ‘2’ denote the SH wave, and the subscripts ‘3’, the TH wave. Following the technique earlier employed in Refs. , we look for stationary solutions of Eqs. (1) and introduce the normalised envelopes $`w(z,x)`$, $`v(z,x)`$, and $`u(z,x)`$ according to the relations

$$A_1=\frac{\sqrt{2}\beta k_1}{\sqrt{\chi _2\chi _5}}e^{i\beta z}w,\qquad A_2=\frac{2\beta k_1}{\chi _2}e^{2i\beta z+i\mathrm{\Delta }k_2z}v,\qquad A_3=\frac{\sqrt{2\chi _2}\beta k_1}{\chi _1\sqrt{\chi _5}}e^{3i\beta z+i\mathrm{\Delta }kz}u,$$ (2)

where $`\mathrm{\Delta }k\equiv \mathrm{\Delta }k_2+\mathrm{\Delta }k_3`$. Renormalising the variables as $`z\to z/\beta `$ and $`x\to x/\sqrt{2\beta k_1}`$, we finally obtain a system of coupled equations,

$$\begin{array}{c}i\frac{\partial w}{\partial z}+\frac{\partial ^2w}{\partial x^2}-w+w^{\ast }v+v^{\ast }u=0,\hfill \\ 2i\frac{\partial v}{\partial z}+\frac{\partial ^2v}{\partial x^2}-\alpha v+\frac{1}{2}w^2+w^{\ast }u=0,\hfill \\ 3i\frac{\partial u}{\partial z}+\frac{\partial ^2u}{\partial x^2}-\alpha _1u+\chi vw=0,\hfill \end{array}$$ (3)

where $`\alpha =2(2\beta +\mathrm{\Delta }k_2)/\beta `$ and $`\alpha _1=3(3\beta +\mathrm{\Delta }k)/\beta `$ are two dimensionless parameters that characterise the nonlinear phase matching between the parametrically interacting waves. The dimensionless material parameter $`\chi \equiv \chi _1\chi _3/\chi _2^2=9(\sigma _1/\sigma _2)^2`$ depends on the type of phase matching, and it can take different values of order one. For example, when both SHG and SFM are due to quasi-phase matching (QPM), we have $`\sigma _j=(2/\pi m)(\pi /\lambda _1n_1)\chi ^{(2)}[\omega ;(4-j)\omega ;(3-j)\omega ]`$, where $`j=1,2`$. Then, for the first-order $`(m=1)`$ QPM processes (see, e.g., Ref. ), we have $`\sigma _1=\sigma _2`$, and therefore $`\chi =9`$. When SFM is due to the third-order QPM process (see, e.g., Ref. ), we should take $`\sigma _1=\sigma _2/3`$, and therefore $`\chi =1`$. At last, when SFM is the fifth-order QPM process, we have $`\sigma _1=\sigma _2/5`$ and $`\chi =9/25`$. The dimensionless equations (3) present a fundamental model for three-wave multistep cascading solitons in the absence of walk-off. In addition to the type I SHG solitons (see, e.g., Refs ), the multistep cascading solitons involve the phase-matched SFM interaction ($`\omega +2\omega =3\omega `$) that generates a third harmonic wave.
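Equations (3) can be integrated by a standard split-step (beam-propagation) scheme. The sketch below is a minimal illustration of such an integrator, not the code used for the figures in this paper; the grid, step sizes, the mismatch values and the Gaussian input are arbitrary choices, and the equations follow the reconstructed signs of Eqs. (3).

```python
# Minimal split-step Fourier integrator for the dimensionless model (3).
import numpy as np

nx, Lx = 512, 40.0
x = np.linspace(-Lx/2, Lx/2, nx, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(nx, d=Lx/nx)
alpha, alpha1, chi = 1.0, 1.0, 1.0
dz, nz = 1e-3, 5000

# Linear half-step factors: each field obeys m_j dA/dz = i(d2/dx2 - kappa_j)A
# with m = (1, 2, 3) and kappa = (1, alpha, alpha1).
Lw = np.exp(1j*(-k**2 - 1.0)*dz/2)
Lv = np.exp(1j*(-k**2 - alpha)*(dz/2)/2)
Lu = np.exp(1j*(-k**2 - alpha1)*(dz/2)/3)

w = 1.5*np.exp(-x**2) + 0j      # trial fundamental beam
v = 0.5*np.exp(-x**2) + 0j      # weak second-harmonic seed
u = np.zeros(nx, dtype=complex)  # third harmonic grows from zero

for _ in range(nz):
    w = np.fft.ifft(Lw*np.fft.fft(w))   # half linear step
    v = np.fft.ifft(Lv*np.fft.fft(v))
    u = np.fft.ifft(Lu*np.fft.fft(u))
    # full nonlinear step (explicit Euler, first order in dz)
    dw = 1j*(np.conj(w)*v + np.conj(v)*u)*dz
    dv = 1j*(0.5*w**2 + np.conj(w)*u)*dz/2
    du = 1j*(chi*v*w)*dz/3
    w, v, u = w + dw, v + dv, u + du
    w = np.fft.ifft(Lw*np.fft.fft(w))   # half linear step
    v = np.fft.ifft(Lv*np.fft.fft(v))
    u = np.fft.ifft(Lu*np.fft.fft(u))

print("peak amplitudes:", abs(w).max(), abs(v).max(), abs(u).max())
```

A symmetrized (Strang) splitting is used for the linear part; the simple Euler nonlinear step is adequate for a qualitative demonstration at this step size.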
If this latter process is not phase matched, we should consider $`\alpha _1`$ as a large parameter, and then look for solutions of Eqs. (3) in the form of an asymptotic series in $`\alpha _1^{-1}`$. Substituting $`w=w_0+\epsilon w_1+\mathrm{\dots }`$, $`v=v_0+\epsilon v_1+\mathrm{\dots }`$ and $`u=\epsilon u_1`$, where $`\epsilon \equiv \alpha _1^{-1}`$, we find $`u_1\approx \chi vw`$, and the system (3) reduces to a model of competing nonlinearities,

$$\begin{array}{c}i\frac{\partial w}{\partial z}+\frac{\partial ^2w}{\partial x^2}-w+w^{\ast }v+\epsilon \chi |v|^2w=0,\hfill \\ 2i\frac{\partial v}{\partial z}+\frac{\partial ^2v}{\partial x^2}-\alpha v+\frac{w^2}{2}+\epsilon \chi |w|^2v=0.\hfill \end{array}$$ (4)

In the limit $`\epsilon \to 0`$, Eqs. (4) coincide with the model of two-wave solitons due to type I SHG, analysed earlier in Refs. . For smaller $`\alpha _1`$, the system (3) cannot be reduced to Eqs. (4), and its two-parameter family of localised solutions consists of three mutually coupled waves. It is interesting to note that, similar to the case of nondegenerate three-wave mixing , Eqs. (3) possess an exact solution. To find it, we make the substitution $`w=w_0\mathrm{sech}^2(\eta x)`$, $`v=v_0\mathrm{sech}^2(\eta x)`$ and $`u=u_0\mathrm{sech}^2(\eta x)`$, and obtain the unknown parameters from the following algebraic equations:

$$w_0^2=\frac{9v_0}{3+4\chi v_0},\qquad 4\chi v_0^2+6v_0=9,\qquad u_0=\frac{2}{3}\chi w_0v_0,$$ (5)

valid for $`\eta =\frac{1}{2}`$ and $`\alpha =\alpha _1=1`$. Equations (5) have two solutions, corresponding to positive and negative values of the amplitude. This indicates a possibility of multivalued solutions, even within the class of exact solutions (both branches are evaluated numerically in the sketch below).

In a general case, three-wave solitons of Eqs. (3) can be found only numerically. Figures 1(a) and 1(b) present two examples of solitary waves for different sets of the mismatch parameters $`\alpha `$ and $`\alpha _1`$. When $`\alpha _1\gg 1`$ \[see Fig. 1(a)\], which corresponds to an unmatched SFM process, the amplitude of the third harmonic is small, and it vanishes for $`\alpha _1\to \infty `$ according to the asymptotic solution of Eqs. (4) discussed above. To summarise the different types of three-wave solitary waves, in Fig. 2 we plot the dependence of the total soliton power, defined as

$$P=\int _{-\infty }^{+\infty }dx\left(|w|^2+4|v|^2+\frac{9}{\chi }|u|^2\right),$$ (6)

on the mismatch parameter $`\alpha _1`$, for fixed $`\alpha =1`$. It is clearly seen that for some values of $`\alpha _1`$ (including the exact solution at $`\alpha _1=1`$, shown by two filled circles) there exist two different branches of three-wave solitary waves, and only one of those branches approaches, for large values of $`\alpha _1`$, the family of two-wave solitons of the cascading limit (Fig. 2, dashed). The slope of the branches changes from negative (for small $`\alpha _1`$) to positive (for large $`\alpha _1`$), indicating a possible change of the soliton stability. However, the soliton stability should be defined in terms of physical parameters, and in the case of two-parameter solitons, as we have here, the stability threshold is determined by a certain integral determinant condition, similar to that first derived for the three-wave mixing problem . Ratios of the maximum amplitudes of the soliton components for the three-wave solitons of the lower branch in the model (3) are presented in Fig. 3, where the upper dashed curve is the asymptotic limit of two-wave solitons for $`\alpha _1\to \infty `$.
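For reference, the two branches of the exact solution (5) are easily evaluated; the sketch below simply solves the quadratic $`4\chi v_0^2+6v_0-9=0`$ and applies the remaining relations of Eqs. (5) as written, for the three QPM values of $`\chi `$ quoted earlier.

```python
# Amplitudes of the exact sech^2 solution, Eq. (5), for both branches.
import math

def exact_amplitudes(chi):
    """Both roots of 4*chi*v0**2 + 6*v0 - 9 = 0, with w0, u0 from Eq. (5)."""
    disc = math.sqrt(9.0 + 36.0*chi)
    branches = []
    for v0 in ((-3.0 + disc)/(4.0*chi), (-3.0 - disc)/(4.0*chi)):
        w0 = math.sqrt(9.0*v0/(3.0 + 4.0*chi*v0))  # real for these chi values
        u0 = (2.0/3.0)*chi*w0*v0
        branches.append((v0, w0, u0))
    return branches

for chi in (9.0, 1.0, 9.0/25.0):   # the three QPM examples quoted above
    for v0, w0, u0 in exact_amplitudes(chi):
        print(f"chi={chi:.2f}: v0={v0:+.3f}, w0={w0:.3f}, u0={u0:+.3f}")
```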
Soliton solutions of the second (upper) branch in Fig. 2 correspond to large values of the total power, and they have been verified numerically to be unstable. The analysis of the asymptotics of Eqs. (3) suggests that localised solutions should not occur for $`\alpha _1<0`$. However, we reveal the existence of an extended class of very robust localised solutions, which we classify as ‘quasi-solitons’ : solitary waves with small-amplitude oscillating tails. In principle, such solitons are known in one-component models (see, e.g., Ref. ), but here the nonvanishing tails appear only due to a resonance with the third-harmonic field \[see Fig. 4(a)\]. Such solitons are expected to be weakly unstable, and this is indeed demonstrated in Fig. 4(b) for rather long propagation distances. The existence of quasi-solitons for any value of negative phase matching with a higher-order harmonic field indicates that all two-wave quadratic solitons can become unstable due to an additional SFM process. This is confirmed in Figs. 4(c,d), where we present the results of numerical simulations of the dynamics of an initially launched two-wave soliton for two cases, positive and negative phase matching of the SFM process. For $`\alpha _1>0`$ \[see Fig. 4(c)\], a very small harmonic ($`v_{\mathrm{max}}\approx 0.1`$) is generated and the initial two-component beam converges to a three-wave soliton. In contrast, for $`\alpha _1<0`$ \[see Fig. 4(d)\], the input beam decays rapidly into radiation and diffracting harmonic fields.

In conclusion, we have investigated, analytically and numerically, multistep cascading and nonlinear beam propagation in a diffractive optical medium, and introduced a novel type of three-wave parametric spatial optical soliton: multistep cascading solitons. A detailed analysis of the soliton stability, the effect of walk-off, and higher-dimensional and spatio-temporal effects are possible directions of future research.

The authors are indebted to K. Koynov, R. Schiek, and E. Kuznetsov for useful discussions. The work has been partially supported by the Australian Photonics Cooperative Research Centre and the Australian Research Council.
## 1 Introduction In many physical phenomena percolation effects play an important role . In particular, some dilute magnets are well described, as far as magnetic phase transitions are concerned, by uncorrelated diluted models . In these models, magnetic sites on a lattice are randomly replaced by non-magnetic ones, and a bond links each pair of occupied (magnetic) first neighbours. At zero temperature, the problem is purely geometrical and is described in the following way. Sites on a lattice are randomly occupied with probability $`p`$, while all bonds are considered to be present. A cluster is then defined as a collection of occupied sites which are connected to each other through steps between occupied first-neighbours. As $`p`$ increases, an infinite cluster appears for the first time at a critical probability $`p_c`$, which is lattice dependent. In analogy with thermal critical phenomena, some quantities are singular at the critical point, following a power-law behaviour near $`p_c`$ . Typical examples are the probability $`P(p)`$ that a site belongs to the infinite cluster, which behaves as $`P(p)\sim (p-p_c)^\beta `$ near $`p_c`$, and the correlation length $`\xi (p)`$, which diverges at $`p_c`$ according to $`\xi (p)\sim |p-p_c|^{-\nu }`$. Nevertheless, some systems were found to be better described by correlated percolation models, where the presence of sites (or bonds) depends also on their neighbourhood. Typical examples of correlated percolation models are bootstrap percolation and site-bond correlated percolation . The physical motivation for the introduction of the former model comes from diluted magnetic systems in which there is competition between exchange interactions (which favour a magnetic ground state) and crystal-field interactions (which favour a non-magnetic ground state). To mimic this competition at zero temperature, the bootstrap percolation model was introduced : in this model, sites on a lattice are randomly occupied with probability $`p`$, but only those with at least $`m`$ occupied first-neighbours remain occupied. In the stable final configuration, either all occupied sites have at least $`m`$ occupied first-neighbours or the whole lattice is empty. As our purpose in this paper is to study one implementation of the bootstrap percolation model, we briefly review some of its properties in what follows (for a thorough discussion of the results up to 1990, see Reference ). The $`m=0`$ case recovers the usual (uncorrelated) site percolation model, where the transition is continuous and $`p_c<1`$ in two and three dimensions. More specifically, the most precise evaluations of the critical exponents for three-dimensional lattices are $`\nu =1/y_p=0.875\pm 0.008`$ and $`\beta =0.412\pm 0.010`$ from simulations, and $`\nu =1/y_p=0.872\pm 0.070`$ and $`\beta =0.405\pm 0.025`$ from series . In the $`m=1`$ case, only isolated sites are removed by the culling process: these sites do not contribute to the critical behaviour, and $`p_c`$ and all critical exponents remain the same as for usual percolation. For $`m=2`$, on the other hand, isolated two-site clusters are eliminated, as well as some dangling structures of more compact clusters. The elimination of these dangling structures, however, does not break the infinite cluster (whenever present) and, therefore, the critical probability is the same as for usual percolation. Moreover, as the exponent $`\nu `$ is also connected to the formation of this infinite cluster, its value is the same for $`m=2`$ and $`m=0`$.
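The culling process just described is easy to state algorithmically. As a minimal illustration (a sketch in Python with NumPy — our choice of language; the function name is ours), the following applies the bootstrap condition on a periodic simple cubic lattice by repeatedly removing occupied sites with fewer than $`m`$ occupied first-neighbours:

```python
import numpy as np

def bootstrap_cull(occ, m):
    """Remove occupied sites with fewer than m occupied first neighbours
    (periodic boundaries), repeating until the configuration is stable."""
    occ = occ.copy()
    while True:
        # occupied first neighbours of every site (6 on the simple cubic lattice)
        nn = sum(np.roll(occ, s, axis=a) for a in range(occ.ndim) for s in (1, -1))
        unstable = occ & (nn < m)
        if not unstable.any():
            return occ
        occ[unstable] = False

rng = np.random.default_rng(0)
L, p, m = 32, 0.6, 3
occ0 = rng.random((L, L, L)) < p      # sites occupied with probability p
occ = bootstrap_cull(occ0, m)
print("density before/after culling:", occ0.mean(), occ.mean())
```

For $`m=3`$ and $`p`$ somewhat above the threshold quoted below, a sizeable fraction of the initially occupied sites survives the culling.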
Concerning “field” exponents ($`\beta `$, for instance), previous results for two-dimensional systems indicated a higher value of this exponent for $`m=2`$ than for usual percolation . Nevertheless, it was later shown, through simulations on bigger lattices in two dimensions and through general arguments applied to both two and three dimensions , that the exponent $`\beta `$ is the same for the uncorrelated and the $`m=2`$ bootstrap percolation models. Let us now turn our attention to higher values of $`m`$. It is generally believed that, for any value of $`m`$ for which only infinite clusters can survive the culling process, $`p_c=1`$. This is indeed the case for $`m=2d-1`$ on hypercubic lattices ($`d`$ is the dimension of the lattice) , for $`m=4`$ on cubic and triangular lattices and for $`m=5`$ on the triangular lattice . Moreover, it has been shown that, for these cases, the usual finite-size scaling relation does not hold. This finite-size scaling predicts that, if a suitable definition of a finite-lattice “critical” point $`p_{av}`$ is made, this point approaches the critical point in the thermodynamic limit, $`p_c`$, as $$p_c-p_{av}\propto L^{-1/\nu },$$ (1) where $`L`$ is the linear size of the finite system, such that $`L\gg 1`$, and $`\nu `$ is the usual critical exponent. This finite-size behaviour is indeed observed for $`m\le 2`$, but fails for high values of $`m`$. For $`m=2d-1`$ on hypercubic lattices, it has been proven that $`p_c-p_{av}\propto 1/(\mathrm{log}^{d-1}L)`$, where $`\mathrm{log}^{d-1}`$ denotes the $`(d-1)`$-times iterated logarithm — e.g., proportional to $`1/\mathrm{log}(\mathrm{log}L)`$ for $`d=3`$ dimensions. Also for $`m=4`$ on the cubic lattice, the correct finite-size behaviour is $`p_c-p_{av}\propto 1/\mathrm{log}(\mathrm{log}L)`$, with $`p_c=1`$ . These results for high $`m`$ have been conjectured or tested in numerical simulations . The results stated in the previous paragraphs do not apply to the $`m=3`$ case on the cubic lattice. For this model, one expects $`p_c`$ to be above the value for uncorrelated percolation (where $`p_c=0.311605\pm 0.000005`$ ), since the infinite cluster of usual percolation at $`p_c`$ is not stable with respect to the culling process for $`m=3`$. On the other hand, since finite clusters are still stable for this value of $`m`$ on the cubic lattice, it is expected that $`p_c<1`$. Numerical simulations confirmed this scenario , although the relatively small sizes used in those works indicate that the values might not be precise (we will return to this point later). Concerning the critical exponents, previous results indicate that the exponent $`\nu `$ is the same as for usual percolation but that $`\beta `$ is higher than its uncorrelated counterpart . However, in neither of these works is an extrapolation to the thermodynamic limit attempted, leaving open the possibility that finite-size effects are the reason for the discrepancy in the values of $`\beta `$. This possibility was first shown to be correct, in the context of two-dimensional bootstrap percolation models, in References , and was later confirmed for $`m=3`$ on the triangular lattice . The possibility of a new universality class for bootstrap percolation with $`m=3`$ on the cubic lattice is the problem we address in this work. We resort to numerical simulation methods which, together with finite-size scaling analysis, allow us to obtain more precise values for the critical parameters. We also study the so-called critical spanning probability, i.e., the probability that a given lattice has a cluster connecting its boundaries at criticality .
This quantity shows some degree of universality: it depends on the dimension and shape of the system and on the specific boundary condition, but not on the lattice type (simple cubic or f.c.c., for instance) or on the particular kind of percolation (site or bond) . It is then interesting to see whether it remains invariant for percolation models in which long-range correlations are involved, like bootstrap percolation. The remainder of the paper is organized as follows. In the next section we present the method and discuss some technical details, as well as the results for the critical parameters. In Section 3 we discuss some previous results concerning the critical spanning probability for percolation and present our results for bootstrap percolation. Finally, we summarize our results in the last section. ## 2 Method and Results The method we use is connected to real-space renormalization-group and finite-size scaling procedures . The approach requires precise values of the physical quantities, which are only obtained for large values of the linear system size $`L`$. We have therefore studied finite systems of size $`L^3`$, with $`32\le L\le 480`$; from the results for $`L\gg 1`$, it is possible to use finite-size scaling techniques to extrapolate to the thermodynamic limit ($`L=\infty `$). The critical probability $`p_c`$ and the critical exponent $`\nu `$ are calculated as follows. For a lattice of size $`L`$, we occupy each site with probability $`p`$, apply the bootstrap condition and test the lattice for percolation (here we define percolation as the presence of a cluster which connects the bottom and top planes of the finite cubic lattice; we discuss this and other technical points below). Our finite-size estimate of the critical probability, $`p^{}`$, is taken as the value of $`p`$ at which the cell percolates for the first time, when $`p`$ is increased from zero; this procedure is repeated for $`𝒩`$ different runs (which correspond to $`𝒩`$ different seeds of the random number generator). Each run leads to a different value of $`p^{}`$, since the lattice is finite. We take the average of the $`𝒩`$ values of $`p^{}`$ as our estimate of $`p_{av}`$ (see Section 1). It is then assumed that $`p_{av}`$ approaches $`p_c`$ as given by Equation 1. Moreover, it is possible to calculate the width $`\sigma =\sqrt{<p^{\ast 2}>-p_{av}^2}`$, which behaves as ($`<p^{\ast 2}>`$ stands for the average of $`p^{\ast 2}`$ over the $`𝒩`$ realizations) $$\sigma \propto L^{-1/\nu }.$$ (2) From the previous equation and a log-log plot of $`\sigma `$ versus $`L`$, it is then possible to obtain the value of the critical exponent $`\nu `$. The data are depicted in Figure 1; from the slope of the straight line we obtain $`\nu =0.89\pm 0.04`$. We compare this value with other evaluations in Table 1; within the numerical precision, this exponent is the same for bootstrap percolation with $`m=3`$ and for ordinary percolation on the cubic lattice, and it agrees with previous evaluations of $`\nu `$ for bootstrap percolation with $`m=3`$ on the cubic lattice. Note from the graph that the straight-line regime is only reached for $`L\gtrsim 128`$, while for uncorrelated percolation this regime sets in at smaller values of $`L`$. This is expected, since finite-size effects are stronger for correlated percolation problems than for their uncorrelated counterparts . Therefore, we neglect the data for $`L\le 96`$ and use only lattices with $`128\le L\le 480`$ in our linear regression analysis.
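To make the procedure concrete, here is a compact sketch (Python with NumPy/SciPy, our choices) of how a single-run threshold $`p^{}`$ can be measured. Instead of literally increasing $`p`$ from zero, the sketch freezes a random field $`u`$ and bisects on $`p`$ (a site is occupied iff $`u<p`$); since both the culling and the spanning criterion are monotone in the initial configuration, this yields the same $`p^{}`$. Lattice sizes and run counts are deliberately small, for illustration only:

```python
import numpy as np
from scipy.ndimage import label

def cull(occ, m=3):
    while True:
        nn = sum(np.roll(occ, s, axis=a) for a in range(3) for s in (1, -1))
        bad = occ & (nn < m)
        if not bad.any():
            return occ
        occ = occ & ~bad

def spans(occ):
    """True if an occupied cluster connects the bottom (z=0) and top (z=L-1) planes."""
    lab, _ = label(occ)                       # 6-connectivity by default
    bottom = set(np.unique(lab[:, :, 0])) - {0}
    top = set(np.unique(lab[:, :, -1])) - {0}
    return bool(bottom & top)

def p_star(u, m=3, tol=1e-4):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if spans(cull(u < mid, m)):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
L, N = 32, 20
ps = np.array([p_star(rng.random((L, L, L))) for _ in range(N)])
print(f"L={L}: p_av = {ps.mean():.4f}, sigma = {ps.std():.4f}")
# nu then follows from the slope of log(sigma) versus log(L) over several L, Eq. (2)
```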
In order to calculate the critical threshold, we resort to Equations 1 and 2, which imply that, for $`L\gg 1`$, $$p_c-p_{av}\propto \sigma .$$ (3) This is a convenient way to calculate $`p_c`$, since it does not depend on the value of $`\nu `$; such a dependence would appear if Equation 1 were used. Figure 2 shows a plot of $`p_{av}`$ versus $`\sigma `$: $`p_c`$ is given by the intercept of the linear fit, and the value obtained is $`p_c=0.57256\pm 0.00006`$. This value is slightly above the one calculated in Reference , in which the value of $`L`$ varied from 10 to 110. If we use the same range in our calculation, the extrapolated value of $`p_c`$ is consistent with the result of (see Table 1). Let us now discuss some technical points. An important quantity is the value of $`p`$ at which the lattice percolates for the first time, when $`p`$ is increased from zero. One needs to define what “percolates” means for a finite lattice. In this work we used the rule called $`R_1`$ in Reference , i.e., a lattice percolates if, after the bootstrap culling process, there is a path of present sites which links the boundaries of the lattice in a fixed direction (vertical, say). There are other possible definitions (see ), and it is expected that all of them lead to the same value of $`p_c`$ in the thermodynamic limit. Note, however, that the value of the critical spanning probability does depend on this definition, as we will discuss in Section 3. The numerical procedure we used to test for percolation is the Hoshen-Kopelman algorithm : for usual percolation, it requires the storage of only one plane. For bootstrap percolation, on the other hand, the bootstrap iteration requires the storage of the whole lattice, due to correlation effects. To cope with this drawback, we store the lattice in bits, instead of words: this saves memory and time, since the updates connected to the bootstrap rule can be made in parallel for sets of 32 sites . To define all six first-neighbours of sites at the boundaries of the lattice, periodic boundary conditions are used. It is expected that the boundary condition does not affect the values of the critical parameters in the thermodynamic limit, since it is a “surface” effect. Finally, let us mention that the number of realizations $`𝒩`$ (see Section 1) varied from 12000 for $`L=32`$ to 640 for $`L=480`$; the errors were calculated as three times the standard deviation over subsets of the total number of realizations. To gain access to the “magnetic” scaling power $`y_h`$, one possible way is to define a “ghost” site, which is linked to all sites of the lattice with probability $`h`$ . Within a real-space renormalization-group framework, it is possible to calculate the eigenvalue $`\lambda _h`$ through $`\lambda _h=<n>/p_c`$, where $`<n>`$ is the average number of occupied sites linked to one of the boundary planes of the finite cubic lattice at the critical threshold, $`p=p_c`$. The value of $`p_c`$ is obtained as explained above and is then used to calculate $`<n>`$ and hence $`\lambda _h(L)`$, averaging over many configurations; the number of configurations varied from $`320000`$, for the smaller lattices, to $`20000`$, for the bigger ones.
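Under our reading that a site is “linked to” a boundary plane when it belongs to a cluster touching that plane, the estimate of $`<n>`$ and $`\lambda _h(L)`$ can be sketched as follows (Python/NumPy/SciPy again; the `cull` helper is repeated from the previous sketch for self-containedness, and all run parameters are illustrative only):

```python
import numpy as np
from scipy.ndimage import label

def cull(occ, m=3):
    while True:
        nn = sum(np.roll(occ, s, axis=a) for a in range(3) for s in (1, -1))
        bad = occ & (nn < m)
        if not bad.any():
            return occ
        occ = occ & ~bad

def n_linked(occ):
    """Occupied sites in clusters touching the bottom plane z = 0."""
    lab, _ = label(occ)
    touching = set(np.unique(lab[:, :, 0])) - {0}
    return int(np.isin(lab, list(touching)).sum())

PC = 0.57256                          # critical threshold quoted in the text
rng = np.random.default_rng(2)
L, runs = 32, 50
n_av = np.mean([n_linked(cull(rng.random((L, L, L)) < PC)) for _ in range(runs)])
print(f"L={L}: lambda_h = {n_av / PC:.1f}")
# y_h follows from the slope of log(lambda_h) versus log(L) over several L, Eq. (4)
```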
We use two procedures to obtain $`y_h`$ in the thermodynamic limit. The first one is based on the fact that, for $`L\to \infty `$, $$\lambda _h(L)=L^{y_h},$$ (4) with $`y_h`$ independent of $`L`$; this equation leads to a straight line in a log-log plot of $`\lambda _h`$ versus $`L`$, for $`L\gg 1`$. As depicted in Figure 3, this is indeed the case for $`128\le L\le 480`$. From a linear fit of the data in this range of $`L`$, and resorting to the scaling relation $`\beta =(d-y_h)\nu `$, we obtained the value $`\beta =0.37\pm 0.03`$ (see Table 1). Alternatively, we can take correction-to-scaling terms into account, through $$\lambda _h=L^{y_h}\left(1+B/L\right).$$ (5) From this equation we see that local slopes of $`\mathrm{log}\lambda _h`$ versus $`\mathrm{log}L`$ provide estimates of $`y_h(L)`$ which, when extrapolated to $`L\to \infty `$, lead to an evaluation of $`y_h`$ in the thermodynamic limit. We have applied this procedure for $`32\le L\le 480`$, using three consecutive points to calculate the local slopes and extrapolating to $`L\to \infty `$ through a graph of $`y_h(L)`$ versus $`1/L`$. The value thus obtained for $`\beta `$ is the same as with the first procedure. In Table 1 we see that our estimate of $`\beta `$ disagrees with the values calculated in References and . In evaluating this exponent, the former uses lattices of linear size $`L=35`$, while the latter uses $`L=80`$; neither attempted an extrapolation to the thermodynamic limit. From Table 1 we can infer that our estimate of $`\beta `$ for the bootstrap model we study is, within the error bars, equal to the corresponding value of this exponent for usual (uncorrelated) percolation. Since we have already seen that $`\nu `$ is also the same for both models, we can draw the conclusion that usual percolation and bootstrap percolation with $`m=3`$ on the cubic lattice belong to the same universality class. This result contradicts References and ; we believe the discrepancy is caused by the small lattices used in those works. ## 3 Critical spanning probability It was established some time ago that the critical spanning probability, $`R(p_c)`$, defined as the probability of spanning a lattice at the critical point, shows some degree of universality . More precisely, this quantity does not depend on the lattice type or on the kind of percolation. This result contradicts the assumption made in early applications of real-space renormalization-group procedures to percolation, in which it was assumed that the critical spanning probability is equal to the critical percolation threshold $`p_c`$ and is, therefore, lattice dependent . Later numerical tests confirmed and expanded the universality proposal . Nevertheless, to the best of our knowledge, no study has been made of correlated models. While it is expected that short-range correlations do not change the universality scenario , it is not clear whether $`R(p_c)`$ changes when long-range correlations are introduced. A convenient model with which to test these possibilities is bootstrap percolation. While the behaviour of $`R(p_c)`$ is trivial in the cases where $`p_c=1`$, for $`m=3`$ on the simple cubic lattice the presence of correlation may lead to a nontrivial behaviour. We study this possibility using numerical simulation on simple cubic lattices of size $`L^3`$, with $`32\le L\le 480`$. The programs and algorithms used are essentially the same as those described in the previous section. The basic procedure is to generate a set of independent runs for each lattice size and compute the fraction of those which percolate after the bootstrap condition is applied and a stable configuration is reached. The number of runs is the same as that used in the calculation of $`y_h`$ (see the previous section).
The results are depicted in Figure 4: we can infer that $`R(p_c)=0.270\pm 0.005`$, where the error bar is a rough estimate. There are two previous calculations of $`R(p_c)`$ for uncorrelated site percolation on the cubic lattice with free boundary conditions: they lead to $`R(p_c)=0.265\pm 0.005`$ and $`R(p_c)=0.28`$ . Our value agrees, within the numerical error, with the first one, but the result of Reference cannot be ruled out. It is then reasonable to infer that usual percolation and $`m=3`$ bootstrap percolation on the cubic lattice belong to the same universality class also in what regards the critical spanning probability. ## 4 Summary We have calculated, using numerical simulations and finite-size scaling techniques, the critical parameters of the bootstrap percolation model with $`m=3`$ on the cubic lattice, using finite lattices of size $`L^3`$ with $`32\le L\le 480`$. Our evaluations of $`\nu `$ and $`\beta `$ strongly support the conclusion that usual percolation and the model we study belong to the same universality class. This result disagrees with previous calculations ; we believe this is due to the small sizes used in those works. To support this assumption, we note that the finite-size scaling assumptions hold only for lattices of linear size $`L\gtrsim 128`$, which is larger than the sizes studied in previous works. The critical spanning probability, $`R(p_c)`$, was also calculated. It has been shown that this quantity exhibits some degree of universality but, to the best of our knowledge, no study concerning correlated percolation models had been carried out so far. Our result for bootstrap percolation with $`m=3`$ on the cubic lattice, $`R(p_c)=0.270\pm 0.005`$, is, within the numerical accuracy, the same value as for usual percolation with free boundary conditions. Therefore, we can infer that $`R(p_c)`$ is not sensitive to short-range correlations, or even to some long-range correlations, like the one studied in this paper. We would like to thank Dr. D. Stauffer for fruitful discussions at early stages of this work and for a critical reading of the manuscript.
# Blocking and Persistence in the Zero-Temperature Dynamics of Homogeneous and Disordered Ising Models ## Abstract A “persistence” exponent $`\theta `$ has been extensively used to describe the nonequilibrium dynamics of spin systems following a deep quench: for zero-temperature homogeneous Ising models on the $`d`$-dimensional cubic lattice $`Z^d`$, the fraction $`p(t)`$ of spins not flipped by time $`t`$ decays to zero like $`t^{-\theta (d)}`$ for low $`d`$; for high $`d`$, $`p(t)`$ may decay to $`p(\infty )>0`$, because of “blocking” (but perhaps still like a power). What are the effects of disorder or changes of lattice? We show that these can quite generally lead to blocking (and convergence to a metastable configuration) even for low $`d`$, and then present two examples — one disordered and one homogeneous — where $`p(t)`$ decays exponentially to $`p(\infty )`$. In modelling the nonequilibrium dynamics of spin systems following a deep quench, the following question naturally arises : given a spin system at zero temperature, with a random starting configuration and evolving according to the usual Glauber dynamics, what is the probability $`p(t)`$ that a spin has not yet flipped by time $`t`$? For the homogeneous ferromagnetic Ising model on $`Z^d`$, this probability has been found to decay at large times as a power law, $`p(t)\sim t^{-\theta (d)}`$, for $`d<4`$. The “persistence” exponent $`\theta (d)`$ is considered to be a new universal exponent governing nonequilibrium dynamics following a deep quench . The persistence problem can be extended to positive temperatures by considering the dynamics of the local order parameter rather than that of single spins . In this paper we confine our attention to dynamics at zero temperature in infinite spin systems. In the usual case of asynchronous updating, a spin is chosen at random (this can be made precise for infinite systems, as in ) and then: it always flips if the resulting configuration has lower energy, it never flips if the energy is raised, and it flips with probability $`1/2`$ if the energy change is zero. We will consider these dynamics for random initial configurations $`\sigma ^0`$ (in which each spin is equally likely to be up or down, independently of the others) in both disordered ferromagnets and spin glasses with continuous coupling distributions, and also for uniform ferromagnets on lattices other than $`Z^d`$ (e.g., hexagonal lattices in $`2D`$). Our first result is that the persistence phenomenon as described above is unstable to the introduction of randomness into the spin couplings, or even to some changes in lattice structure. For the random ferromagnet, the spin glass, the $`2D`$ hexagonal ferromagnet, and others to be discussed below, we will see that a positive fraction of spins never flip and every spin flips only finitely many times. The “frozenness” of a nonvanishing fraction of spins (sometimes referred to as “blocked” spins ) has been reported in numerical simulations of Ising ferromagnets on $`Z^d`$ with $`d>4`$ and of $`q`$-state Potts models on square lattices for $`q>4`$ . The problem can then be recast by restricting attention to only those spins that eventually do flip, and asking for the conditional probability that such a spin has not yet flipped by time $`t`$. Simulations of Potts models appeared to indicate that this probability (proportional to $`p(t)-p(\infty )`$) also decays as a power law at long times (however, some curvature in their log-log plots was noted).
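The zero-temperature single-spin-flip rule just described is simple to simulate on a finite lattice. Below is a minimal sketch (in Python with NumPy, our choice) for the $`2D`$ homogeneous ferromagnet on a periodic square lattice, where $`p(t)`$ is known to decay to zero, so the printed never-flipped fraction shrinks as the run is lengthened; all parameter values are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
L, sweeps = 64, 50
s = rng.choice([-1, 1], size=(L, L))        # random initial configuration sigma^0
flipped = np.zeros((L, L), dtype=bool)

def local_field(s, i, j):
    # sum of the four nearest-neighbour spins, periodic boundaries
    return (s[(i + 1) % L, j] + s[(i - 1) % L, j]
            + s[i, (j + 1) % L] + s[i, (j - 1) % L])

for _ in range(sweeps * L * L):             # asynchronous random-site updates
    i, j = rng.integers(L), rng.integers(L)
    h = local_field(s, i, j)
    # flip if it lowers the energy; with probability 1/2 if the energy is unchanged
    if s[i, j] * h < 0 or (h == 0 and rng.random() < 0.5):
        s[i, j] = -s[i, j]
        flipped[i, j] = True

print("fraction of spins never flipped:", 1.0 - flipped.mean())
```

Recording the never-flipped fraction after each sweep, rather than only at the end, gives the full persistence curve $`p(t)`$.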
We will examine the same question for disordered Ising systems and also for homogeneous systems that show blocking. Although we cannot yet answer this question in general, we will present calculations on two systems, the homogeneous ferromagnet on a quasi-$`1D`$ “ladder” and the $`1D`$ disordered spin chain , showing that $`p(t)-p(\infty )`$ decays exponentially as $`t\to \infty `$. Exponential decay for $`d\ge 2`$ will also be discussed. Persistence and local nonequilibration. The analysis of persistence exponents suggests that the fraction of sites that remain in the same phase (for $`T>0`$) or keep the same spin value (at $`T=0`$) from time $`t_1`$ to time $`t_2`$ tends to zero for $`1\ll t_1\ll t_2`$. It therefore implies the presence of local nonequilibration (LNE) : in any fixed, finite region there exists no finite time after which the spins within remain in a single phase; that is, domain walls sweep across the region forever. At zero temperature, the presence of LNE means that every spin flips infinitely often (in almost every sample). Why does the decay to zero of $`p(t)`$ (coming from the analysis of persistence exponents at $`T=0`$) suggest that every spin flips infinitely many times? Suppose instead that a positive fraction of spins flip only finitely many times. Then it is reasonable to expect that a (smaller but still positive) fraction of spins never flip, and $`p(t)`$ would not decay to zero. While not proved in general, this argument applies to all systems treated here. It was proved in (see also ) that, in the homogeneous Ising ferromagnet on the square lattice with a random initial spin configuration, every spin indeed flips infinitely often at zero temperature, consistent with the persistence results in the literature. (Similar results apply to several other systems, and can be extended to positive temperature, with the local order parameter in a region replacing individual spins .) Blocking. What about the zero-temperature dynamics of systems with continuous disorder? In any dimension and on any lattice, it can be proved for these (and many other) systems that every spin flips only finitely many times. These systems exhibit “blocking”, and for them $`p(t)`$ does not decay to zero. They are examples of a general result applying to the dynamical evolution (following a deep quench) of infinite-volume Ising spin systems with Hamiltonian $$\mathcal{H}=-\underset{<xy>}{\sum }J_{xy}\sigma _x\sigma _y,$$ (1) where the sum is over nearest neighbours. If the distribution of couplings is continuous with finite mean, then it can be proved that every spin flips only finitely many times (for almost every $`\sigma ^0`$, realization $`\omega `$ of the dynamics, and realization $`𝒥`$ of the couplings). The proof of this theorem yields a more general result, which shows that, even without the continuity assumption on the distribution of couplings, for almost every $`𝒥`$, $`\sigma ^0`$, and $`\omega `$ there can be only finitely many flips of any spin that cause a nonzero energy change. This is why the above result applies to ordinary spin glasses and random ferromagnets with a continuous distribution of couplings (e.g., Gaussian or uniform): the probability of a “tie” in any sum or difference of a given spin’s nearest-neighbour coupling strengths (and therefore the probability of a spin flip costing zero energy) is zero, and the result follows. We sketch the proof here; for further details, see . Let $`\sigma _x^t`$ be the value of $`\sigma _x`$ at time $`t`$ for fixed $`\omega `$, $`\sigma ^0`$ and $`𝒥`$.
Let $$E(t)=-(1/2)\overline{\underset{y:|x-y|=1}{\sum }J_{xy}\sigma _x^t\sigma _y^t},$$ (2) where the bar indicates an average over $`𝒥`$, $`\sigma ^0`$, and $`\omega `$. By translation-ergodicity of the distributions from which $`𝒥`$, $`\sigma ^0`$, and $`\omega `$ are chosen, and using the assumption that $`\overline{|J_{xy}|}<\infty `$, it follows that $`E(t)`$ exists, is independent of $`x`$, and equals the energy density (i.e., the average energy per site) at time $`t`$ in almost every realization of $`𝒥`$, $`\sigma ^0`$, and $`\omega `$. Because every spin flip lowers the energy, $`E(t)`$ decreases monotonically in time (note that $`E(0)=0`$) and has a finite limit $`E(\infty )`$ ($`\ge -d\overline{|J_{xy}|}`$). Now choose any fixed number $`ϵ>0`$, and let $`N_x^ϵ`$ be the number of spin flips (over all time) of the spin at $`x`$ that lower the energy by an amount $`ϵ`$ or greater. Then $`-\infty <E(\infty )\le -ϵ\overline{N_x^ϵ}`$, so that for every $`x`$ and $`ϵ>0`$, $`N_x^ϵ`$ is finite. Let $`ϵ_x`$ be the minimum energy (magnitude) change resulting from a flip of $`\sigma _x`$; then although $`ϵ_x`$ varies (differently in each $`𝒥`$) with $`x`$, it is sufficient that it is strictly positive. This result applies also to homogeneous systems on certain lattices, such as Ising ferromagnets on lattices with an odd number of nearest neighbours, for which ties in energy cannot occur. Such lattices include the hexagonal (or honeycomb) lattice in $`2D`$, and the double-layered cubic lattices $`Z^d\times \{0,1\}`$ (i.e., a “ladder” when $`d=1`$, two horizontal planes separated by unit vertical distance when $`d=2`$, and so on) . As for blocking in these systems, it is elementary to show that a positive fraction of spins will never flip. Consider first the hexagonal lattice. If the spins on any single hexagon are all up or all down, they form a stable configuration that will never change. Such configurations (and of course similar larger-scale ones) occur with positive density in almost every $`\sigma ^0`$. Similarly, in the ladder, any square with all four spins up or all down is stable. The extension to general $`Z^d\times \{0,1\}`$ is straightforward. Turning to disordered systems, consider first the random ferromagnet on $`Z^2`$. For almost every $`𝒥`$, there will be a positive density of plaquettes whose couplings satisfy the following: at each of the four corners, the sum of the two couplings that connect to adjacent corners of the square is greater than the sum of the two couplings to sites outside the square. If the spins at the four corners are initially all up or all down, the spin configuration on the square is again stable. A similar construction can be used for general $`d`$ and for spin glasses. To summarize, our first result has been to prove that many ordered and disordered spin systems display two important zero-temperature dynamical properties which, taken together, lead them to exhibit novel persistence behavior. The first concerns the presence of blocking, meaning that a positive fraction of spins never flip. In the systems we treat, this is a zero-time property, in that some of the spins are blocked by the nature of $`\sigma ^0`$ (and $`𝒥`$), regardless of the dynamics realization. The second property concerns infinite time: the existence of a limiting (metastable) spin configuration $`\sigma ^{\mathrm{}}`$, since every spin flips only finitely many times. Although the second of these properties probably implies the first, the first does not imply the second .
Our next result is to show that, for at least some of these systems, these two properties lead to an exponential (as opposed to power-law) decay of the quantity $`p(t)-p(\infty )`$ at large times. Exponential decay. In this section we study the large-time behavior of $`p(t)-p(\infty )`$, the probability that a spin will flip at some time but has not yet flipped by time $`t`$. We will prove that this quantity decays exponentially by showing the same for the larger probability $`\stackrel{~}{p}(t)`$ that a spin will flip at some time after $`t`$ (whether or not it has flipped before). We consider two systems, one homogeneous (the uniform Ising ferromagnet on the ladder) and one disordered (the $`1D`$ continuously disordered Ising chain). Consider first a homogeneous system where every site has an odd number $`M`$ of neighbours. (Systems such as $`\pm J`$ spin glasses, where the signs are disordered but not the $`|J_{xy}|`$’s, also fall into this category of examples.) Consider, at time $`\tau `$, all sites $`y`$ such that the spin at $`y`$ will flip after time $`\tau `$, and denote by $`𝒞_x(\tau )`$ the cluster of such sites that contains $`x`$ (an empty cluster if the spin at $`x`$ will not flip after time $`\tau `$). We will show below for the ladder model that, with $`\tau =0`$, the distribution of the number of sites $`|𝒞_x(\tau )|`$ in these clusters has an exponential tail; i.e., the probabilities of large cluster sizes are bounded as $$P(|𝒞_x(\tau )|\ge n)\le Ae^{-kn}$$ (3) for some $`A<\infty `$ and $`k>0`$. We next show that this implies exponential decay of $`\stackrel{~}{p}(t)`$. Since each flip in $`𝒞_x(\tau )`$ lowers the energy of that cluster by at least $`2`$, and since the total energy of the cluster lies between $`-M|𝒞_x(\tau )|`$ and $`M|𝒞_x(\tau )|`$ (we take $`J=1`$ here), it follows that the entire cluster must reach its final configuration after no more than $`M|𝒞_x(\tau )|`$ flips. Let $`T_1`$ denote the (random) amount of time after $`\tau `$ until the first flip in $`𝒞_x(\tau )`$, $`T_2`$ the amount of time after $`\tau +T_1`$ until the second flip, etc. Clearly, as long as flips are possible, the $`T_i`$’s are bounded above by independent exponential (mean one) random variables $`T_i^{}`$. Thus the time of the last flip of $`x`$ is bounded above by $`\tau +T_1^{}+\mathrm{}+T_{M|𝒞_x(\tau )|}^{}`$ and so, for $`t>\tau `$, $$p(t)-p(\infty )\le \stackrel{~}{p}(t)\le \sum _{n=1}^{\infty }P(|𝒞_x(\tau )|=n)P(T_1^{}+\mathrm{}+T_{Mn}^{}\ge t-\tau ).$$ (4) The probability density of $`T_1^{}+\mathrm{}+T_j^{}`$ is $`f(s)=s^{j-1}e^{-s}/(j-1)!`$ and so, for $`t>\tau `$, $$p(t)-p(\infty )\le \sum _{n=1}^{\infty }Ae^{-kn}\int _{t-\tau }^{\infty }\frac{s^{Mn-1}}{(Mn-1)!}e^{-s}ds\le \int _{t-\tau }^{\infty }\sum _{j=1}^{\infty }A(e^{-k/M})^j\frac{s^{j-1}}{(j-1)!}e^{-s}ds=Ae^{-k/M}\int _{t-\tau }^{\infty }\mathrm{exp}(e^{-k/M}s-s)ds=A^{}e^{-k^{}t},$$ (5) where the constants $`A^{}`$ and $`k^{}`$ depend on $`A,k,M`$ and $`\tau `$. (The second step uses $`e^{-kn}=(e^{-k/M})^{Mn}`$ and extends the sum over $`j=Mn`$ to all positive integers $`j`$.) It remains to show that (3) is valid for the ladder ferromagnet with $`\tau =0`$. (Similar arguments work for the ladder antiferromagnet or the $`\pm J`$ spin glass.) To do this we note that a single plaquette has an initial probability $`p_0=1/8`$ of having its four corner spins all plus or all minus (we call such a blocked plaquette “frigid”). The lattice’s sites take integer coordinates $`(x_i,y_i)`$, with $`-\infty <x_i<\infty `$ and $`y_i=0,1`$.
A lower bound on the initial number of frigid plaquettes can be obtained by considering only those plaquettes whose left edges occur at even $`x_i`$ (we define the location of a plaquette by the position of its left edge); such plaquettes do not overlap, and so their probabilities of being frigid are independent. If the plaquette at the origin is frigid, and $`P(0,2n)`$ is the probability that there is no frigid plaquette between 0 and $`x_{2n}`$, then $$P(0,2n)\le (1-p_0)^n=\mathrm{exp}(-kn),$$ (6) where $`k=|\mathrm{log}(1-p_0)|`$. So the ladder is broken up into finite segments, bounded on either side by a frigid plaquette, whose length distribution has an exponential tail. This yields Eq. (3). Our second example is a disordered $`1D`$ spin chain in zero field. The analysis is essentially the same for either the spin glass or the random ferromagnet, so for definiteness we study a ferromagnet whose couplings $`J_z\equiv J_{z,z+1}`$ are independent random variables taken from the uniform distribution on $`[0,1]`$. The key idea here is that the chain breaks up into finite, disjoint “influence segments” whose union is the infinite chain. An influence segment is a dynamical construct defined (for a given $`𝒥`$) as follows: two sites $`x`$ and $`y`$ belong to the same influence segment if and only if either the state of $`\sigma _x`$ can dynamically induce a change in the state of $`\sigma _y`$ or vice-versa (or both) . To illustrate, suppose that the coupling $`J_x`$ is larger than both $`J_{x-1}`$ and $`J_{x+1}`$, i.e., $`J_x`$ is a local maximum. Then it is clear that the state of $`\sigma _x`$ can be dynamically influenced by $`\sigma _{x+1}`$ but not by $`\sigma _{x-1}`$ (and similarly, the state of $`\sigma _{x+1}`$ can be dynamically influenced by $`\sigma _x`$ but not by $`\sigma _{x+2}`$). That is, no state of the spin $`\sigma _{x-1}`$ can alter the sign of the energy change $`\mathrm{\Delta }_x`$ that would result from a flip of $`\sigma _x`$. Influence segments for the disordered $`1D`$ chain are then constructed as follows . Consider the doubly infinite sequences $`x=\{x_m\}`$ of sites where $`J_x`$ is a local maximum and $`y=\{y_m\}`$, with $`y_m\in (x_m,x_{m+1})`$, where $`J_y`$ is a local minimum: the couplings are strictly increasing from $`y_{m-1}`$ to $`x_m`$ and strictly decreasing from $`x_m`$ to $`y_m`$. The set of spins at the sites $`\{y_{m-1}+1,y_{m-1}+2,\mathrm{},y_m\}`$ determines a single influence segment. To see this, note that the spin at $`y_m`$ cannot influence the one at $`y_m+1`$ (or vice-versa); similarly, at the other end, $`y_{m-1}+1`$ cannot influence $`y_{m-1}`$. Now consider the spins at $`x_m`$ and $`x_m+1`$, which lie within the interval $`\{y_{m-1}+1,y_{m-1}+2,\mathrm{},y_m\}`$. Clearly, the spin at $`x_m-1`$ can never influence the spin at $`x_m`$, and the spin at $`x_m+2`$ can never influence the one at $`x_m+1`$. So once the spins at $`x_m`$ and $`x_m+1`$ agree (either initially in $`\sigma ^0`$ or after either spin flips), the final value of every spin in the interval $`\{y_{m-1}+1,y_{m-1}+2,\mathrm{},y_m\}`$ is determined through a “cascade” of influence to either side of $`\{x_m,x_m+1\}`$ (put into effect as the Poisson clocks successively ring) until $`y_{m-1}+1`$ and $`y_m`$, respectively, are reached. Given this, the analysis leading to Eqs. (4) and (5) applies as before. One needs only an estimate, analogous to Eq. (3), for the probability distribution of influence-segment sizes. In fact, the decay here for large size $`n`$ is faster than in Eq. (3): the probability that $`n`$ independent coupling random variables are ordered so as to have a single local maximum falls off as $`1/n!`$ (times an exponential factor).
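The faster-than-exponential tail of the influence-segment lengths is easy to verify numerically. The following is a quick sketch (Python/NumPy, our choice), taking the breaks between segments at the local minima of the couplings, as in the construction above:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.random(1_000_000)                 # uniform couplings J_z on [0, 1]

# local minima of the coupling sequence separate consecutive influence segments
is_min = (J[1:-1] < J[:-2]) & (J[1:-1] < J[2:])
minima = np.where(is_min)[0] + 1
lengths = np.diff(minima)                 # segment lengths, in lattice units

sizes, counts = np.unique(lengths, return_counts=True)
freq = counts / counts.sum()
for n, f in zip(sizes, freq):
    print(f"length {n}: frequency {f:.2e}")
```

The printed frequencies fall off roughly like $`1/n!`$, much faster than the exponential bound that the argument requires.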
In these two examples, different factors determine the distribution of dynamical cluster sizes: for the homogeneous ferromagnet on the ladder, it is determined by the initial spin configuration $`\sigma ^0`$; for the disordered $`1D`$ chain, by the coupling realization $`𝒥`$. In this section we considered two examples, one ordered and one disordered, but both one-dimensional (or quasi-one-dimensional). There is another system that shows the same behavior in any dimension: the highly disordered spin glass (or ferromagnet) . Using similar arguments, this system can also be shown to display an exponential decay to its final state . We expect that a related model, in which coupling magnitudes are “stretched” in the manner of references but only up to a finite length scale, would show similar behavior. This last model is of interest because its thermodynamic behavior is expected to be similar to that of the ordinary spin glass (or random ferromagnet). Discussion. Most work on persistence at zero temperature has examined systems, such as the homogeneous Ising ferromagnet on $`Z^d`$ in low $`d`$, where the quantity $`p(t)`$ decays to zero as a power law. We have shown here that there is a second class of models in which $`p(\infty )>0`$: these include systems with continuous disorder and homogeneous systems on other lattices. In several of these, the persistence decay is exponential rather than power law. It would be of interest to see whether this fast decay holds in other systems of this general class, such as the $`2D`$ homogeneous ferromagnet on a hexagonal lattice or an ordinary spin glass with $`d>1`$. Although we do not know the answer in the general case, we can speculate, using a rough argument, that the answer may be yes. If every spin flips only finitely often, then as time progresses an increasing number of spins “freeze”, i.e., they cease to flip. It is reasonable to expect that after some finite time the “unfrozen” spins no longer percolate, so that the dynamics is confined to noninteracting finite clusters, as in the examples treated here. Of course, there remain serious gaps: this is not independent percolation, and the dynamics in the localized clusters (should they exist) would need to be worked out, so the conclusion should be treated with caution. There is a third class of systems not discussed here; in these, a positive fraction of spins flip infinitely often and a positive fraction flip only finitely many times. One such system is the two-dimensional $`\pm J`$ spin glass . Although it appears that $`p(\infty )>0`$, determining the large-time behavior of $`p(t)-p(\infty )`$ remains an open problem. Acknowledgments. This research was partially supported by NSF Grants DMS-98-02310 (CMN) and DMS-98-02153 (DLS).
# Non Asymptotic Properties of Transport and Mixing ## I Introduction Transport processes play a crucial role in many natural phenomena. Among the many examples, we just mention particle transport in geophysical flows, which is of obvious interest for atmospheric and oceanic issues. The most natural framework for investigating such phenomena is to adopt a Lagrangian viewpoint, in which the particles are advected by a given Eulerian velocity field $`𝒖(𝒙,t)`$ according to the differential equation $$\frac{d𝒙}{dt}=𝒖(𝒙,t)=𝒗(t),$$ (1) where, by definition, $`𝒗(t)`$ is the Lagrangian particle velocity. Despite the apparent simplicity of (1), the problem of connecting the Eulerian properties of $`𝒖`$ to the Lagrangian properties of the trajectories $`𝒙(t)`$ is a very difficult task. In the last 20–30 years the scenario has become even more complex with the recognition of the ubiquity of Lagrangian chaos (chaotic advection). Even very simple Eulerian fields can generate very complex Lagrangian trajectories, which are practically indistinguishable from those obtained in a complex, turbulent flow . Despite these difficulties, the study of the relative dispersion of two particles can give some insight into the link between Eulerian and Lagrangian properties at different length-scales. Indeed, the evolution of the separation $`𝑹(t)=𝒙^{(2)}(t)-𝒙^{(1)}(t)`$ between two tracers is given by $$\frac{d𝑹}{dt}=𝒗^{(2)}(t)-𝒗^{(1)}(t)=𝒖(𝒙^{(1)}(t)+𝑹(t),t)-𝒖(𝒙^{(1)}(t),t)$$ (2) and thus depends on the velocity difference at scale $`𝑹`$. It is obvious from (2) that Eulerian velocity components of typical scale much larger than $`𝑹`$ will not contribute to the evolution of $`𝑹`$. Since, in incompressible flows, the separation $`𝑹`$ typically grows in time, we have the nice situation in which, from the evolution of the relative separation, we can in principle extract the contributions of all the components of the velocity field. For these reasons, in this paper we prefer to study relative rather than absolute dispersion. For spatially infinite cases without mean drift there is no difference; for closed basins the relative dispersion is, in many respects, more interesting than the absolute one, which is dominated by the sweeping induced by the large-scale flow. There are very few general results on the link between Eulerian and Lagrangian properties, and only for asymptotic behaviors. Let us suppose that the Eulerian velocity field is characterized by two typical length-scales: the (small) scale $`l_u`$ below which the velocity is smooth, and a (large) scale $`L_0`$ representing the size of the largest structures present in the flow. Of course, in most non-turbulent flows it will turn out that $`l_u\sim L_0`$. At very small separations $`R\ll l_u`$, the velocity difference in (2) can be reasonably approximated by a linear expansion in $`R`$, which in most time-dependent flows leads to an exponential growth of the separation of initially close particles, a phenomenon known as Lagrangian chaos: $$\mathrm{ln}R(t)\simeq \mathrm{ln}R(0)+\lambda t$$ (3) (the average is taken over many couples with initial separation $`R(0)`$). The coefficient $`\lambda `$ is the Lagrangian Lyapunov exponent of the system . The rigorous definition of the Lyapunov exponent requires taking the two limits $`R(0)\to 0`$ and then $`t\to \infty `$: in physical terms, these limits amount to the requirement that the separation must not exceed the scale $`l_u`$ except at very large times.
This is a very strict condition, rarely fulfilled in real flows, which often makes the experimental observation of the behavior (3) infeasible. In the opposite limit, for very long times and for separations $`R\gg L_0`$, the two trajectories $`𝒙^{(1)}(t)`$ and $`𝒙^{(2)}(t)`$ feel two velocities which can practically be considered uncorrelated. We thus expect normal diffusion, i.e. $$R^2(t)\simeq 2Dt.$$ (4) Also in this case it is necessary to remark that the asymptotic behavior (4) cannot be attained in many realistic situations, the most common obstacle being the presence of boundaries at a scale comparable with $`L_0`$. In the absence of boundaries, it is possible to formulate sufficient conditions on the nature of the Eulerian flow under which normal diffusion (4) always takes place asymptotically . Between the two asymptotic regimes (3) and (4), the behavior of $`R(t)`$ depends on the particular flow. The study of the evolution of the relative dispersion in this crossover regime is very interesting and can give insight into the Eulerian structure of the velocity field. To summarize: in all systems in which the characteristic length-scales are not sharply separated, it is not possible to describe dispersion in terms of asymptotic quantities. In such cases, different approaches are required. Let us mention some examples: the symbolic dynamics approach to the sub-diffusive behavior in a stochastic layer and to mixing in meandering jets ; the study of tracer dynamics in open flows in terms of chaotic scattering ; and the exit-time description for transport in semi-enclosed basins and open flows . The aim of the present paper is to discuss the use of an indicator – the Finite Size Lyapunov Exponent (FSLE), originally introduced in the context of predictability problems – to study and characterize non-asymptotic transport in non-ideal systems, e.g. closed basins and systems in which the characteristic length-scales are not sharply separated. In section II we introduce the basic tools of the finite-scale analysis and discuss their general properties. Section III is devoted to the evaluation of our method on some numerical examples. We shall see that even in apparently simple situations the use of finite-scale analysis avoids possible misinterpretations of the results. In section IV the method is applied to two physical problems: the analysis of experimental drifter data and the numerical study of relative dispersion in fully developed turbulence. Conclusions are presented in section V. The appendices report, for the sake of self-consistency, some technical aspects. ## II Finite size diffusion coefficient In order to introduce the finite-size analysis of the dispersion problem, let us start with a simple example. We consider a set of $`N`$ particle pairs advected by a smooth (e.g. spatially periodic) velocity field with characteristic length $`l_u`$. Denoting by $`R_i^2(t)`$ the square separation of the $`i`$-th couple, we define $$R^2(t)=\frac{1}{N}\sum _{i=1}^{N}R_i^2(t).$$ (5) We assume that the Lagrangian motion is chaotic; thus we expect the following regimes to hold: $$R^2(t)\simeq \{\begin{array}{cc}R_0^2\mathrm{exp}(L(2)t)\hfill & \text{if }R^2(t)^{1/2}\ll l_u,\hfill \\ 2Dt\hfill & \text{if }R^2(t)^{1/2}\gg l_u,\hfill \end{array}$$ (6) where $`L(2)\ge 2\lambda `$ is the generalized Lyapunov exponent , $`D`$ is the diffusion coefficient, and we assume that $`R_i(0)=R_0`$.
An alternative method for characterizing the dispersion properties is to introduce the “doubling time” $`\tau (\delta )`$ at scale $`\delta `$, as follows : given a series of thresholds $`\delta ^{(n)}=r^n\delta ^{(0)}`$, one measures the time $`T_i(\delta ^{(0)})`$ it takes for the separation $`R_i(t)`$ to grow from $`\delta ^{(0)}`$ to $`\delta ^{(1)}=r\delta ^{(0)}`$, and so on for $`T_i(\delta ^{(1)}),T_i(\delta ^{(2)}),\mathrm{}`$ up to the largest considered scale. The factor $`r`$ may be any value $`>1`$, properly chosen in order to have a good separation between the scales of motion, i.e. $`r`$ should not be too large. Strictly speaking, $`\tau (\delta )`$ is exactly the doubling time only if $`r=2`$. Performing the doubling-time experiments over the $`N`$ particle pairs, one defines the average doubling time $`\tau (\delta )`$ at scale $`\delta `$ as $$\tau (\delta )=<T(\delta )>_e=\frac{1}{N}\sum _{i=1}^{N}T_i(\delta ).$$ (7) It is worth noting that the average (7) is different from the usual time average (see Appendix A). Now we can define the Finite Size Lagrangian Lyapunov Exponent (see for a detailed discussion) in terms of the average doubling time as $$\lambda (\delta )=\frac{\mathrm{ln}r}{\tau (\delta )},$$ (8) which quantifies the average rate of separation of two particles at a distance $`\delta `$. Let us remark that $`\lambda (\delta )`$ is independent of $`r`$, for $`r`$ close to $`1`$. For very small separations (i.e. $`\delta \ll l_u`$) one recovers the Lagrangian Lyapunov exponent $`\lambda `$, $$\lambda =\underset{\delta \to 0}{lim}\frac{1}{\tau (\delta )}\mathrm{ln}r.$$ (9) In this framework the finite-size diffusion coefficient , $`D(\delta )`$, dimensionally turns out to be $$D(\delta )=\delta ^2\lambda (\delta ).$$ (10) Note the absence of the factor $`2`$, which one might expect from (6), in the denominator of $`D(\delta )`$; this is because $`\tau (\delta )`$ is a difference of times. For a standard diffusion process, $`D(\delta )`$ approaches the diffusion coefficient $`D`$ (see Eq. (6)) in the limit of very large separations ($`\delta \gg l_u`$). This result stems from the scaling of the doubling times, $`\tau (\delta )\sim \delta ^2`$, for normal diffusion. Thus the finite-size Lagrangian Lyapunov exponent $`\lambda (\delta )`$ behaves as follows: $$\lambda (\delta )\simeq \{\begin{array}{cc}\lambda \hfill & \text{if }\delta \ll l_u,\hfill \\ D/\delta ^2\hfill & \text{if }\delta \gg l_u.\hfill \end{array}$$ (11) One could naively conclude, by matching the behaviors at $`\delta \sim l_u`$, that $`D\sim \lambda l_u^2`$. This is not always true, since the crossover can span a rather large range of scales, due to nontrivial correlations which can be present in the Lagrangian dynamics .
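As a concrete illustration of the doubling-time algorithm, the following minimal sketch (in Python with NumPy, our choice) measures $`\lambda (\delta )`$ for pairs of points iterated by the standard map, used here only as a cheap stand-in for a chaotic Lagrangian flow; all parameter values are ours, and time is counted in (integer) map iterations, so the small-$`\delta `$ plateau is only coarsely resolved:

```python
import numpy as np

TWO_PI = 2.0 * np.pi

def step(x, y, K=2.0):
    """One iteration of the standard map, a stand-in chaotic 'flow'."""
    y = (y + K * np.sin(x)) % TWO_PI
    x = (x + y) % TWO_PI
    return x, y

def dist(p, q):
    d = np.abs(np.array(p) - np.array(q))
    d = np.minimum(d, TWO_PI - d)          # minimum-image distance on the torus
    return np.hypot(d[0], d[1])

rng = np.random.default_rng(0)
r, d0, n_thr, n_pairs, t_max = 2.0, 1e-7, 28, 100, 5000
thr = d0 * r ** np.arange(n_thr + 1)       # thresholds delta^(n) = r^n * delta^(0)
tot_t = np.zeros(n_thr)
n_done = np.zeros(n_thr)

for _ in range(n_pairs):
    x1, y1 = rng.uniform(0.0, TWO_PI, 2)
    x2, y2 = x1 + d0 / np.sqrt(2), y1 + d0 / np.sqrt(2)
    for k in range(n_thr):
        t = 0
        while dist((x1, y1), (x2, y2)) < thr[k + 1] and t < t_max:
            x1, y1 = step(x1, y1)
            x2, y2 = step(x2, y2)
            t += 1
        if t >= t_max:                     # separation saturated near the domain size
            break
        tot_t[k] += t
        n_done[k] += 1

ok = n_done > 0
for d, lam in zip(thr[:-1][ok], np.log(r) * n_done[ok] / np.maximum(tot_t[ok], 1)):
    print(f"delta = {d:.2e}   lambda(delta) = {lam:.3f}")   # Eq. (8)
```

For an actual flow one would replace `step` by an accurate ODE integrator with a time step much smaller than $`1/\lambda `$, which also resolves the small-$`\delta `$ plateau properly.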
One might wonder whether the introduction of $`\tau (\delta )`$ is just another way of looking at $`R^2(t)`$. This is true only in limiting cases, when the different characteristic lengths are well separated and intermittency is weak. A similar idea of using times for the computation of the diffusion coefficient in nontrivial cases was developed in Ref. . If one wants to identify the physical mechanisms acting at a given spatial scale, the use of scale-dependent quantities is more appropriate than time-dependent ones. For instance, in the presence of strong intermittency (which is indeed a rather common situation), $`R^2(t)`$ as a function of $`t`$ can be very different in each realization. Typically one has (see figure 1a) different exponential growth rates for different realizations, producing a rather odd behavior of the average $`R^2(t)`$ which is not due to any physical mechanism. For instance, in figure 1b we show the average $`R^2(t)`$ versus time $`t`$; at large times one recovers the diffusive behavior, but at intermediate times an “anomalous” diffusive regime appears, which is only due to the superposition of exponential and diffusive contributions by different samples at the same time. On the other hand, by exploiting the tool of doubling times, one obtains an unambiguous result (see figure 1c) . An important physical problem for which the behavior of $`\tau (\delta )`$ is essentially well understood is relative dispersion in 3D fully developed turbulence. Here the smallest Eulerian scale $`l_u`$ is the Kolmogorov scale, below which the flow is smooth. In the inertial range $`l_u<R<L_0`$ we expect the Richardson law to hold, $`R^2(t)\sim t^3`$; for separations larger than the integral scale $`L_0`$ we have normal diffusion. In terms of the finite-size Lyapunov exponent we thus expect three different regimes: 1. $`\lambda (\delta )=\lambda `$ for $`\delta \ll l_u`$; 2. $`\lambda (\delta )\sim \delta ^{-2/3}`$ for $`l_u\ll \delta \ll L_0`$; 3. $`\lambda (\delta )\sim \delta ^{-2}`$ for $`\delta \gg L_0`$. We will see in section IV that, even at large Reynolds numbers, the characteristic lengths $`l_u`$ and $`L_0`$ are not sufficiently separated and the different scaling regimes of $`R^2(t)`$ cannot be well detected. The fixed-scale analysis in terms of $`\lambda (\delta )`$ for fully developed turbulence presents clear advantages with respect to the fixed-time approach. ## III Numerical Results on simple flows In this section we discuss some examples of application of the indicator $`\lambda (\delta )`$ (or, equivalently, $`D(\delta )`$) introduced above to simple flows. The technical and numerical details of the finite-size Lyapunov exponent computation are set out in Appendix A. In a generic case, in addition to the two asymptotic regimes (11) discussed in section II, we expect another universal regime due to the presence of a boundary of given size $`L_B`$. For separations close to the saturation value $`\delta _{max}\simeq L_B`$, we expect the following behavior to hold for a broad class of systems : $$\lambda (\delta )=\frac{D(\delta )}{\delta ^2}\propto \frac{(\delta _{max}-\delta )}{\delta }.$$ (12) The proportionality constant is given by the second eigenvalue of the Perron-Frobenius operator, which is related to the typical time of exponential relaxation of the tracers’ density to the uniform distribution (see Appendix B). ### A A model for transport in Rayleigh-Bénard convection The advection in two-dimensional incompressible flows, in the absence of molecular diffusion, is governed by Hamiltonian equations of motion in which the stream function $`\psi `$ plays the role of the Hamiltonian: $$\frac{dx}{dt}=\frac{\partial \psi }{\partial y},\qquad \frac{dy}{dt}=-\frac{\partial \psi }{\partial x}.$$ (13) If $`\psi `$ is time-dependent, one typically has chaotic advection. As an example, let us consider time-periodic Rayleigh-Bénard convection, which can be described by the following stream function : $$\psi (x,y,t)=\frac{A}{k}\mathrm{sin}\left\{k\left[x+B\mathrm{sin}(\omega t)\right]\right\}W(y),$$ (14) where $`W(y)`$ satisfies rigid boundary conditions on the surfaces $`y=0`$ and $`y=a`$ (we use $`W(y)=\mathrm{sin}(\pi y/a)`$). The two surfaces $`y=a`$ and $`y=0`$ are the top and bottom surfaces of the convection cell.
The time-dependent term $`B\mathrm{sin}(\omega t)`$ represents lateral oscillations of the roll pattern, which mimic the even oscillatory instability . Concerning the analysis in terms of the finite-size Lyapunov exponent, one finds that, if $`\delta `$ is much smaller than the domain size, $`\lambda (\delta )=\lambda `$. At larger values of $`\delta `$ we find standard diffusion, $`\lambda (\delta )=D/\delta ^2`$, in good quantitative agreement with the value of the diffusion coefficient evaluated by the standard technique, i.e. from $`R^2(t)`$ as a function of time $`t`$. In order to study the effects of finite boundaries on the diffusion properties, we confine the tracers’ motion to a closed domain. This can be achieved by slightly modifying the stream function (14): we modulate the oscillating term in such a way that for $`|x|=L_B`$ the amplitude of the oscillation is zero, i.e. $`B\to B\mathrm{sin}(\pi x/L_B)`$, with $`L_B=2\pi n/k`$ ($`n`$ is the number of convective cells). In this way the motion is confined to $`x\in [-L_B,L_B]`$. In figure 2 we show $`\lambda (\delta )`$ for two values of $`L_B`$. If $`L_B`$ is large enough, one can distinguish the three regimes: exponential, diffusive, and the saturation regime of eq. (12). Decreasing the size of the boundary $`L_B`$, the range of the diffusive regime shrinks, and for small values of $`L_B`$ it disappears.
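A minimal integration sketch for this flow (Python/NumPy, our choice; the amplitude, wavenumber and frequency values are ours and merely illustrative) evaluates the velocities analytically from Eq. (14), with the sign convention of Eq. (13), and advects a pair of nearby tracers with a fourth-order Runge-Kutta step:

```python
import numpy as np

A, B, k, a, omega = 0.2, 0.12, 1.0, 1.0, 0.6   # illustrative parameters for Eq. (14)

def velocity(x, y, t):
    """dx/dt = dpsi/dy, dy/dt = -dpsi/dx, with psi from Eq. (14)."""
    phase = k * (x + B * np.sin(omega * t))
    u = (A / k) * (np.pi / a) * np.sin(phase) * np.cos(np.pi * y / a)
    v = -A * np.cos(phase) * np.sin(np.pi * y / a)
    return u, v

def rk4(x, y, t, dt):
    k1 = velocity(x, y, t)
    k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], t + 0.5 * dt)
    k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], t + 0.5 * dt)
    k4 = velocity(x + dt * k3[0], y + dt * k3[1], t + dt)
    x = x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
    y = y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return x, y

dt, n_steps = 0.01, 50_000
x = np.array([0.3, 0.3 + 1e-7])                # two tracers, initially 1e-7 apart
y = np.array([0.5, 0.5])
for n in range(n_steps):
    x, y = rk4(x, y, n * dt, dt)
print("final separation:", np.hypot(x[1] - x[0], y[1] - y[0]))
```

Pairs evolved in this way can be fed directly to the doubling-time bookkeeping of section II to produce the $`\lambda (\delta )`$ curves of figure 2.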
It is worth noting that the particles do not diffuse according to the same law from every point of the domain (due to the presence of the voids); hence, in order to define a diffusive-like behavior, one has to average over all possible particle positions. For a discrete random walk on a fractal lattice it is known that the diffusion follows the law $`\langle R^2(t)\rangle \sim t^{2/D_W}`$ with $`D_W>2`$, i.e. sub-diffusion . The quantity $`D_W`$ is related to the spectral (or fracton) dimension $`D_S`$ by the relation $`D_W=2D_F/D_S`$, and it depends on the detailed structure of the fractal object . We study the relative dispersion of a 2-D continuous random walk on a Sierpinski carpet with fractal dimension $`D_F=\mathrm{log}\,8/\mathrm{log}\,3`$. In our computation we use a resolution of $`3^5`$, i.e. the fractal is approximated by five steps of the recursive building rule; in practice we perform a continuous random walk in the basin obtained with the above approximation of the Sierpinski carpet. We initialize the particles inside one of the smallest resolved structures, then follow the growth of the relative dispersion with the FSLE method, and redeploy the particles in a randomly chosen small cell at the beginning of each doubling time experiment. From fig. 4 one can see that $`\lambda (\delta )\sim \delta ^{-1/0.45}`$, which is an indication of sub-diffusion; the exponent is in good agreement with the usual relative dispersion analysis (see the inset of fig. 4).

## IV Application of the FSLE

### A Drifters in the Adriatic Sea: data analysis and modeling

Lagrangian data recorded within oceanographic programs in the Mediterranean Sea offer the opportunity to apply the fixed scale analysis to a geophysical problem for which the standard characterization of the dispersion properties gives poor information. The Adriatic Sea is a semi-enclosed basin, about 800 by 200 $`km`$, connected to the rest of the Mediterranean Sea through the Otranto Strait . We adopt a reference frame in which the $`x`$ and $`y`$ axes are aligned, respectively, with the short side (transverse direction), orthogonal to the coasts, and the long side (longitudinal direction), along the coasts. We have computed the relative dispersion along the two axes, $`R_x^2(t)`$ and $`R_y^2(t)`$, and the FSLE $`\lambda (\delta )`$. The number of drifters selected for the analysis is 37, distributed over 5 different deployments in the Strait of Otranto during the period December 1994 - March 1997, containing 4, 9, 7, 7 and 10 drifters respectively. To obtain statistics as high as possible, even at the cost of losing information on the seasonal variability, we shift the time tracks of all 37 drifters to $`t-t_0`$, where $`t_0`$ is the time of deployment, so that the drifters can be treated as a single cluster. Moreover, to restrict the analysis to the Adriatic basin, we discard a drifter as soon as its latitude goes south of $`39.5^{\circ}`$ N or its longitude goes beyond $`19.5^{\circ}`$ E. Before presenting the results of the data analysis, let us introduce a simplified model for the motion of Lagrangian tracers in the Adriatic Sea. We assume, as the main features of the surface circulation, the following elements : the drifter motion is basically two-dimensional; the domain is a quasi-closed basin; there is an anti-clockwise coastal current; there are two large cyclonic gyres; and there are natural irregularities in the Lagrangian motion induced by small scale structures.
On the basis of these considerations, we introduce a deterministic chaotic model with mixing properties for the Lagrangian drifters. The stream function is given by the sum of three terms:
$$\mathrm{\Psi }(x,y,t)=\mathrm{\Psi }_0(x,y)+\mathrm{\Psi }_1(x,y,t)+\mathrm{\Psi }_2(x,y,t)$$ (15)
defined as follows:
$$\mathrm{\Psi }_0(x,y)=\frac{C_0}{k_0}[\mathrm{sin}(k_0(y+\pi ))+\mathrm{cos}(k_0(x+2\pi ))]$$ (16)
$$\mathrm{\Psi }_i(x,y,t)=\frac{C_i}{k_i}\mathrm{sin}(k_i(x+ϵ_i\mathrm{sin}(\omega _it)))\mathrm{sin}(k_i(y+ϵ_i\mathrm{sin}(\omega _it+\varphi _i))),(i=1,2),$$ (17)
where $`k_i=2\pi /\lambda _i`$, for $`i=0,1,2`$, and the $`\lambda _i`$ are the wavelengths of the spatial structures of the flow; analogously, $`\omega _j=2\pi /T_j`$, for $`j=1,2`$, where the $`T_j`$ are the periods of the perturbations. In the non-dimensional expression of the equations, the length and time units have been set to $`200km`$ and $`7.5days`$, respectively. The stationary term $`\mathrm{\Psi }_0`$ defines the large scale boundary circulation with positive vorticity. The contribution of $`\mathrm{\Psi }_1`$ contains the two cyclonic gyres and is explicitly time-dependent through a periodic perturbation. The term $`\mathrm{\Psi }_2`$ describes the motion over scales smaller than the size of the large gyres and is time-dependent as well. The zero-value isoline of $`\mathrm{\Psi }`$ is taken as the boundary of the basin. In accordance with observations, we have chosen the parameters so that the velocity range is around $`0.3\,m\,s^{-1}`$; the length scales of the Eulerian structures are $`L_B\sim 1000\,km`$ (coastal current), $`L_0\sim 200\,km`$ (gyres) and $`l_u\sim 50\,km`$ (vortices); the typical recirculation times, for gyres and vortices, are $`\sim `$ 1 month and $`\sim `$ 1 week, respectively; the oscillation periods are $`\sim 10\,days`$ (gyres) and $`\sim 2\,days`$ (vortices).

Let us now discuss the comparison between the data and the model results. The relative dispersion along the two directions of the basin, for data and model trajectories, is shown in figures 5a,b. The results for the model are obtained from the spreading of a cluster of $`10^4`$ initial conditions. When a particle reaches the boundary ($`\mathrm{\Psi }=0`$) it is eliminated. As far as the diffusion properties are concerned, one cannot expect scaling of $`R_{x,y}^2(t)`$ before the saturation regime, since the Eulerian characteristic lengths are not much smaller than the basin size. Indeed, we observe no power law behavior, either in the experimental data or in the numerical model. Let us stress that by suitably tuning the parameters we could bring the model curves even closer to the experimental ones, but this would not be very meaningful, since there is no clear theoretical expectation in a transient regime.

Let us now discuss the finite size Lyapunov exponent. The analysis of the experimental data has been averaged over all the pairs that can be formed out of the 37 trajectories, under the condition that the evolution of the distance between two drifters is no longer followed once either of the two exits the Adriatic basin. In fig. 6 we show the FSLE for data and model. In our case, as discussed above, we are far from asymptotic conditions, and therefore we do not observe the scaling $`\lambda (\delta )\sim \delta ^{-2}`$.
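For reference, a minimal sketch of the model stream function (15)-(17) used in the comparison above; the amplitudes, perturbation strengths and phases below are placeholders chosen only to respect the stated length and time scales, not the values used for the figures.

```python
import numpy as np

# Wavelengths lambda_i and periods T_j in the non-dimensional units of the
# text (length unit 200 km, time unit 7.5 days); C_i, eps_i and phi_i are
# placeholder values, chosen only to respect the stated scales.
lam = np.array([5.0, 1.0, 0.25])                # ~1000 km, ~200 km, ~50 km
C = np.array([1.0, 1.0, 0.5])
eps = np.array([0.0, 0.3, 0.1])
phi = np.array([0.0, 0.0, 0.5 * np.pi])
T = np.array([np.inf, 10.0 / 7.5, 2.0 / 7.5])   # 10 days (gyres), 2 days (vortices)
k = 2.0 * np.pi / lam
om = 2.0 * np.pi / T                            # om[0] = 0: stationary term

def Psi(x, y, t):
    """Stream function (15) = Psi_0 + Psi_1 + Psi_2."""
    total = (C[0] / k[0]) * (np.sin(k[0] * (y + np.pi))
                             + np.cos(k[0] * (x + 2.0 * np.pi)))
    for i in (1, 2):
        sx = x + eps[i] * np.sin(om[i] * t)
        sy = y + eps[i] * np.sin(om[i] * t + phi[i])
        total += (C[i] / k[i]) * np.sin(k[i] * sx) * np.sin(k[i] * sy)
    return total
```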
The $`\lambda _M(\delta )`$ obtained from the minimal chaotic model (15)-(17) shows the typical step-like shape of a system with two characteristic time scales, and suggests a scenario for how the FSLE of real trajectories may come out. The relevant fact is that the large-scale Lagrangian features are well reproduced, at least at a qualitative level, by a relatively simple model. We believe that this agreement is not due to a particular choice of the model parameters, but rather to the fact that transport is mainly dominated by the large scales, whereas small scale details play a marginal role. The major advantage of the FSLE with respect to the usual fixed time statistics of relative dispersion is evident: from the relative dispersion analysis of fig. 5 we are unable to recognize the underlying Eulerian structures, while the FSLE of fig. 6 suggests the presence of structures on different scales and with different characteristic times. In conclusion, the fixed scale analysis provides information for discriminating among different models of the Adriatic Sea.

### B Relative dispersion in fully developed turbulence

We now consider the relative dispersion of particle pairs advected by an incompressible, homogeneous, isotropic, fully developed turbulent field. The Eulerian statistics of velocity differences are characterized by the Kolmogorov scaling $`\delta v(r)\sim r^{1/3}`$ in an interval of scales $`l_u\ll r\ll L_0`$, called the inertial range; $`l_u`$ is now the Kolmogorov scale. Due to the incompressibility of the velocity field, particles typically diffuse away from each other . For pair separations less than $`l_u`$ we have exponential growth of the separation between trajectories, typical of smooth flows, whereas at separations larger than $`L_0`$ normal diffusion takes place. In the inertial range, the average pair separation is affected neither by the large scale components of the flow, which simply sweep the pair, nor by the small scale ones, whose intensity is low and which act incoherently. Accordingly, the separation $`R(t)`$ feels mainly the action of velocity differences $`\delta v(R(t))`$ at scale $`R`$. As a consequence of the Kolmogorov scaling, the separation grows according to the Richardson law
$$\langle R^2(t)\rangle \sim t^3.$$ (18)
Non-asymptotic behavior takes place in such systems whenever $`l_u`$ is not much smaller than $`L_0`$, that is, when the Reynolds number is not high enough. As a matter of fact, even at very high Reynolds numbers the inertial range is still insufficient to observe the scaling (18) without any ambiguity. On the other hand, we shall show that FSLE statistics are effective already at relatively small Reynolds numbers. In order to investigate the problem of relative dispersion at various scale separations, a practical tool is the use of synthetic turbulent fields. In fact, by means of stochastic processes it is possible to build a velocity field which reproduces the statistical properties of the velocity differences observed in fully developed turbulence . In order to avoid the difficulties related to the presence of sweeping in the velocity field, we limit ourselves to a correct representation of two-point velocity differences. In this case, if one adopts the reference frame in which one of the two tracers is at rest at the origin (the so-called Quasi-Lagrangian frame of reference), the motion of the second particle is ruled by the velocity difference in this frame of reference, which has the same single-time statistics as the Eulerian velocity differences .
The detailed construction of the synthetic Quasi-Lagrangian velocity field is presented in Appendix C. In figure 7 we show the results of simulations of pair dispersion in the synthetic turbulent field with Kolmogorov scaling of velocity differences at Reynolds number $`Re\sim 10^6`$ . The expected super-diffusive regime (18) can be well observed only for huge Reynolds numbers (see also Ref. ). To explain the depletion of the scaling range for the relative dispersion, let us consider a series of pair dispersion experiments in which a couple of particles is released at a separation $`R_0`$ at time $`t=0`$. At a fixed time $`t`$, as is customarily done, we perform an average over all the different experiments to compute $`\langle R^2(t)\rangle `$. But, unless $`t`$ is large enough that all particle pairs have "forgotten" their initial conditions, the average will be biased. This is the origin of the flattening of $`\langle R^2(t)\rangle `$ at small times, which we can call a crossover from the initial condition to self-similarity. In an analogous fashion there is a crossover at large times, of the order of the integral time-scale, since some couples may have reached a separation larger than the integral scale, and thus diffuse normally, while other pairs still lie within the inertial range, biasing the average and, again, flattening the curve $`\langle R^2(t)\rangle `$. This correction to a pure power law is far from negligible, for instance, in experimental data, where the inertial range is generally limited by the Reynolds number and by the experimental apparatus. For example, previous studies show quite clearly the difficulties that may arise in numerical simulations with the standard approach. To overcome these difficulties we exploit the approach based on fixed scale statistics. The outstanding advantage of averaging at a fixed separation scale is that it removes all crossover effects, since all sampled pairs belong to the inertial range. The expected scaling of the doubling times is obtained by a simple dimensional argument: the time it takes for the particle separation to grow from $`R`$ to $`2R`$ can be estimated as $`T(R)\sim R/\delta v(R)`$; we thus expect for the inverse doubling times the scaling
$$\frac{1}{\langle T(R)\rangle }\sim \frac{\delta v(R)}{R}\sim R^{-2/3}.$$ (19)
In figure 8 the great enhancement of the scaling range achieved by using the doubling times is evident. In addition, by using the FSLE it is possible to study in detail the effect of Eulerian intermittency on the Lagrangian statistics of relative dispersion; see Ref. for a detailed discussion and a comparison with a multifractal scenario. The conclusion that can be drawn is that in this case the doubling time statistics makes possible a much better estimate of the scaling exponent than the standard, fixed time, statistics.

## V Conclusions

In the study of the relative dispersion of Lagrangian tracers, one has to tackle situations in which the asymptotic behavior is never attained. This may happen in the presence of many characteristic Eulerian scales or, as is typical of real systems, in the presence of boundaries. It is worth stressing that systems of this kind are very common in geophysical flows , and also in plasma physics . Therefore a close understanding of non-asymptotic transport properties can give much relevant information about these natural phenomena. To face these problems, different approaches have been proposed in recent years, whose common ingredient is basically an "exit time" analysis.
Among these we recall the symbolic dynamics and chaotic scattering approaches, and the exit time descriptions of transport in semi-enclosed basins , in symplectic maps , in open flows and in plasma physics . In this paper we have discussed applications of the Finite Size Lyapunov Exponent, $`\lambda (\delta )`$, to the analysis of several situations. This method is based on the identification of the typical time $`\tau (\delta )`$ characterizing the diffusive process at scale $`\delta `$ through the exit time. This approach is complementary to the traditional one, in which one looks at the average size of a cloud of tracers as a function of time. For values of $`\delta `$ much smaller than the smallest characteristic length of the Eulerian velocity field, $`\lambda (\delta )`$ coincides with the maximum Lagrangian Lyapunov exponent. For larger $`\delta `$, the shape of $`\lambda (\delta )`$ depends on the detailed mechanisms of spreading, i.e. on the structure of the advecting velocity field and/or the presence of boundaries. The diffusive regime corresponds to the behavior $`\lambda (\delta )\simeq D/\delta ^2`$. If $`\delta `$ gets close to its saturation value, i.e. the characteristic size of the basin, the universal shape of $`\lambda (\delta )`$ can be obtained on the basis of dynamical systems theory. In addition, we have shown that the fixed scale method is able to recognize the presence of genuine anomalous diffusion. A remarkable advantage of working at fixed scale (instead of at fixed time, as in the traditional approach) is its ability to avoid misleading results, for instance an apparent anomalous scaling over a certain time interval. Moreover, with the FSLE one obtains the proper scaling laws even for a relatively small inertial range, for which the standard technique gives rather controversial answers. The proposed method can also be applied to the analysis of experimental drifter data or to numerical models of Lagrangian transport.

## VI Acknowledgments

We thank V. Artale, E. Aurell, L. Biferale, P. Castiglione, A. Crisanti, M. Falcioni, R. Pasmanter, P.M. Poulain, M. Vergassola and E. Zambianchi for collaborations and discussions over the last years. A particular acknowledgment goes to B. Marani for continuous and warm encouragement. We are grateful to the ESF-TAO (Transport Processes in the Atmosphere and the Oceans) Scientific Program for providing meeting opportunities. This paper has been partly supported by INFM (Progetto di Ricerca Avanzato PRA-TURBO), MURST (no. 9702265437), and the European Network "Intermittency in Turbulent Systems" (contract number FMRX-CT98-0175).

## A Computation of the Finite Size Lyapunov Exponent

In this appendix we discuss in detail the method for computing the Finite Size Lyapunov Exponent, for both continuous dynamics (differential equations) and discrete dynamics (maps). The practical procedure goes as follows. Having defined a norm for the distance $`\delta (t)`$ between the reference and perturbed trajectories, one defines a series of thresholds $`\delta _n=r^n\delta _0`$ ($`n=1,\ldots ,P`$), and measures the "doubling times" $`T_r(\delta _n)`$ that a perturbation of size $`\delta _n`$ takes to grow up to $`\delta _{n+1}`$. The threshold ratio $`r`$ should not be taken too large, because otherwise the error has to grow through several different scales before reaching the next threshold. On the other hand, $`r`$ cannot be too close to one, because otherwise the doubling time would be of the order of the time step of the integration. A minimal sketch of this procedure is given below.
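The following is our own illustrative implementation of the doubling-time loop, not code from the original study; `step`, which advances a single trajectory by one time step `dt`, is assumed to be supplied by the user, and the dynamics is assumed to be expanding so that every threshold is eventually reached.

```python
import numpy as np

def fsle(step, x0, dt, delta_min, delta0, r, P, n_exp, rng=np.random.default_rng(0)):
    """Estimate lambda(delta_n) = ln(r) / <T_r(delta_n)>_e  (cf. Eq. (A3))
    from n_exp error-doubling experiments.  `step(x, dt) -> x` advances one
    trajectory; distances are Euclidean."""
    thresholds = delta0 * r**np.arange(P + 1)
    times = np.zeros(P)                     # accumulated doubling times
    x = np.asarray(x0, dtype=float)
    d0 = rng.normal(size=x.size)
    y = x + delta_min * d0 / np.linalg.norm(d0)
    for _ in range(n_exp):
        # grow the error from delta_min up to the first threshold, letting
        # the perturbation align with the most unstable direction
        while np.linalg.norm(y - x) < thresholds[0]:
            x, y = step(x, dt), step(y, dt)
        for n in range(P):                  # doubling times through the thresholds
            t = 0.0
            while np.linalg.norm(y - x) < thresholds[n + 1]:
                x, y = step(x, dt), step(y, dt)
                t += dt
            times[n] += t
        # rescale the perturbed trajectory back to delta_min, keeping direction
        y = x + (y - x) * delta_min / np.linalg.norm(y - x)
    return thresholds[:P], np.log(r) / (times / n_exp)
```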
In our examples we typically use $`r=2`$ or $`r=\sqrt{2}`$. For simplicity, $`T_r`$ is called the "doubling time" even if $`r\ne 2`$. The doubling times $`T_r(\delta _n)`$ are obtained by following the evolution of the separation from its initial size $`\delta _{min}\ll \delta _0`$ up to the largest threshold $`\delta _P`$. This is done by integrating two trajectories of the system that start at an initial distance $`\delta _{min}`$. In general, one must choose $`\delta _{min}\ll \delta _0`$, in order to allow the direction of the initial perturbation to align with the most unstable direction in phase space. Moreover, one must pay attention to keep $`\delta _P<\delta _{max}`$, so that all the thresholds can be attained ($`\delta _{max}`$ is the typical distance between two uncorrelated trajectories). The evolution of the error from the initial value $`\delta _{min}`$ to the largest threshold $`\delta _P`$ constitutes a single error-doubling experiment. At this point one rescales the model trajectory to the initial distance $`\delta _{min}`$ with respect to the true trajectory and starts another experiment. After $`N`$ error-doubling experiments, we can estimate the expectation value of some quantity $`A`$ as:
$$\langle A\rangle _e=\frac{1}{N}\sum _{i=1}^{N}A_i.$$ (A1)
This is not the same as taking a time average, because different error-doubling experiments may take different times. Indeed we have
$$\langle A\rangle _t=\frac{1}{T}\int _0^TA(t)\,dt=\frac{\sum _iA_i\tau _i}{\sum _i\tau _i}=\frac{\langle A\tau \rangle _e}{\langle \tau \rangle _e}.$$ (A2)
In the particular case in which $`A`$ is the doubling time itself, we have from (A2)
$$\lambda (\delta _n)=\frac{1}{\langle T_r(\delta _n)\rangle _e}\mathrm{ln}r.$$ (A3)
The method described above assumes that the distance between the two trajectories is continuous in time. This is not true for maps or for discrete sampling in time, so the method has to be slightly modified. In this case $`T_r(\delta _n)`$ is defined as the minimum time at which $`\delta (T_r)\ge r\delta _n`$. Because $`\delta (T_r)`$ is now a fluctuating quantity, from (A2) we have
$$\lambda (\delta _n)=\frac{1}{\langle T_r(\delta _n)\rangle _e}\left\langle \mathrm{ln}\left(\frac{\delta (T_r)}{\delta _n}\right)\right\rangle _e.$$ (A4)
We conclude by observing that the computation of the FSLE is no more expensive than the computation of the Lyapunov exponent by the standard algorithm. One simply has to integrate two copies of the system, and this can be done even for very complex simulations.

## B Universal saturation behavior of $`\lambda (\delta )`$

In this appendix we present the derivation of the asymptotic behavior (12) of $`\lambda (\delta )`$ for $`\delta `$ close to saturation. The computation is carried out explicitly for the simple case of a one-dimensional Brownian motion in the domain $`[-L_B,L_B]`$ with reflecting boundary conditions; numerical simulations indicate that the result is of general applicability. The evolution of the probability density $`p`$ is ruled by the Fokker-Planck equation
$$\frac{\partial p}{\partial t}=\frac{1}{2}D\frac{\partial ^2p}{\partial x^2}$$ (B1)
with the Neumann boundary conditions $`\frac{\partial p}{\partial x}(\pm L_B)=0`$ . The general solution of (B1) is
$$p(x,t)=\sum _{k=-\mathrm{\infty }}^{\mathrm{\infty }}\widehat{p}(k,0)e^{ikx}e^{-t/\tau _k}+c.c.$$ (B2)
where
$$\tau _k=\left(\frac{D}{2}\frac{\pi ^2}{L_B^2}k^2\right)^{-1},\qquad k=0,\pm 1,\pm 2,\ldots $$ (B3)
At large times $`p`$ approaches the uniform solution $`p_0=1/2L_B`$.
Writing $`p`$ as $`p(x,t)=p_0+\delta p(x,t)`$ we have, for $`t\gg \tau _1`$,
$$\delta p\sim \mathrm{exp}(-t/\tau _1).$$ (B4)
The asymptotic behavior of the relative dispersion $`\langle R^2(t)\rangle `$ is
$$\langle R^2(t)\rangle =\frac{1}{2}\int (x-x^{})^2p(x,t)p(x^{},t)\,dx\,dx^{}$$ (B5)
For $`t\gg \tau _1`$, using (B4) we obtain $`\langle R^2(t)\rangle \simeq \left(\frac{L_B^2}{3}-Ae^{-t/\tau _1}\right)`$. Therefore, for $`\delta ^2(t)=\langle R^2(t)\rangle `$, one has $`\delta (t)\simeq \left(\frac{L_B}{\sqrt{3}}-\frac{\sqrt{3}A}{2L_B}e^{-t/\tau _1}\right)`$. The saturation value of $`\delta `$ is $`\delta _{max}=L_B/\sqrt{3}`$, so for $`t\gg \tau _1`$, or equivalently for $`(\delta _{max}-\delta )/\delta \ll 1`$, we expect
$$\frac{d}{dt}\mathrm{ln}\delta =\lambda (\delta )=\frac{1}{\tau _1}\frac{\delta _{max}-\delta }{\delta }$$ (B6)
which is (12). Let us remark that in the previous argument for $`\lambda (\delta )`$ at $`\delta \to \delta _{max}`$, the crucial point is the exponential relaxation to the asymptotic uniform distribution. In a generic deterministic chaotic system it is not possible to prove this property rigorously. Nevertheless, one can expect this requirement to be fulfilled at least in non-pathological cases. In chaotic systems, exponential relaxation to the asymptotic distribution corresponds to having the second eigenvalue $`\alpha `$ of the Perron-Frobenius operator inside the unit circle; the relaxation time is $`\tau _1=-1/\mathrm{ln}|\alpha |`$ .

## C Synthetic turbulent velocity fields

The generation of a synthetic turbulent field which reproduces the relevant statistical features of fully developed turbulence is not an easy task. Indeed, to obtain a physically sensible evolution of the velocity field, one has to take into account the fact that each eddy is subject to the action of all the other eddies. Actually, the overall effect amounts to only two main contributions, namely the sweeping exerted by larger eddies and the shearing due to eddies of comparable size. This is indeed a substantial simplification, but the problem of properly mimicking the effect of sweeping is nevertheless still unsolved. To get rid of these difficulties we shall limit ourselves to the generation of a synthetic velocity field in Quasi-Lagrangian (QL) coordinates , thus moving to a frame of reference attached to a fluid particle $`𝒓_1(t)`$. This choice bypasses the problem of sweeping, since it allows one to work only with relative velocities, unaffected by advection. As a matter of fact, there is a price to pay for the considerable advantage gained by discarding advection: only the problem of two-particle dispersion can be managed well within this framework. The properties of single-particle Lagrangian statistics cannot, on the contrary, be treated consistently. The QL velocity differences are defined as
$$𝒗(𝒓,t)=𝒖(𝒓_1(t)+𝒓,t)-𝒖(𝒓_1(t),t),$$ (C1)
where the reference particle moves according to
$$\frac{\mathrm{d}𝒓_1(t)}{\mathrm{d}t}=𝒖(𝒓_1(t),t).$$ (C2)
These velocity differences have the useful property that their single-time statistics are the same as the Eulerian ones whenever statistically stationary flows are considered . For fully developed turbulent flows, in the inertial interval of length scales where both viscosity and forcing are negligible, the QL longitudinal velocity differences show the scaling behavior
$$\left\langle \left|𝒗(𝒓)\cdot \frac{𝒓}{r}\right|^p\right\rangle \sim r^{\zeta _p}$$ (C3)
where the exponent $`\zeta _p`$ is a convex function of $`p`$, and $`\zeta _3=1`$.
This scaling behavior is a distinctive statistical property of fully developed turbulent flows, and it is the one we shall reproduce by means of a synthetic velocity field. In the QL reference frame the first particle is at rest at the origin, and the second particle is at $`𝒓_2=𝒓_1+𝑹`$, advected with respect to the reference particle by the relative velocity
$$𝒗(𝑹,t)=𝒖(𝒓_1(t)+𝑹,t)-𝒖(𝒓_1(t),t).$$ (C4)
By this change of coordinates, the problem of pair dispersion in an Eulerian velocity field has been reduced to the problem of single-particle dispersion in the velocity difference field $`𝒗(𝒓,t)`$. This yields a substantial simplification: it is indeed sufficient to build a velocity difference field with the proper scaling features in the radial direction only, that is, along the line joining the reference particle $`𝒓_1(t)`$ (at rest at the origin of the QL coordinates) to the second particle $`𝒓_2(t)=𝒓_1(t)+𝑹(t)`$. To appreciate this simplification, it must be noted that in principle all moments of the velocity differences $`𝒖(𝒓_1(t)+𝒓^{},t)-𝒖(𝒓_1(t)+𝒓,t)=𝒗(𝒓^{},t)-𝒗(𝒓,t)`$ should display power law scaling in $`|𝒓^{}-𝒓|`$. However, these latter differences never appear in the dynamics of pair separation, and so we can limit ourselves to fulfilling the weaker requirement (C3). Needless to say, already for three-particle dispersion one would need a field with proper scaling in all directions. We limit ourselves to the two-dimensional case, where we can introduce a stream function for the QL velocity differences:
$$𝒗(𝒓,t)=\nabla \times \psi (𝒓,t).$$ (C5)
The extension to a three-dimensional velocity field is not difficult, but it is more expensive in terms of numerical resources. Under isotropic conditions, the stream function can be decomposed into radial octaves as
$$\psi (r,\theta ,t)=\sum _{i=1}^{N}\sum _{j=1}^{n}\frac{\varphi _{i,j}(t)}{k_i}F(k_ir)G_{i,j}(\theta ),$$ (C6)
where $`k_i=2^i`$. Following a heuristic argument, one expects that at a given $`r`$ the stream function is essentially dominated by the contribution from the term $`i`$ such that $`r\sim 2^{-i}`$. This locality of contributions suggests a simple choice for the functional dependence of the "basis functions":
$$F(x)=x^2(1-x)\qquad \text{for}0\le x\le 1$$ (C7)
and zero otherwise,
$$G_{i,1}(\theta )=1,\qquad G_{i,2}(\theta )=\mathrm{cos}(2\theta +\phi _i)$$ (C8)
and $`G_{i,j}=0`$ for $`j>2`$ ($`\phi _i`$ is a quenched random phase). It is worth remarking that this choice is rather general, because it can be derived from the lowest order expansion for small $`r`$ of a generic stream function in Quasi-Lagrangian coordinates. It is easy to show that, under the usual locality conditions for infrared convergence, $`\zeta _p<p`$ , the leading contribution to the $`p`$-th order longitudinal structure function $`\langle |v_r(r)|^p\rangle `$ stems from the $`M`$-th term in the sum (C6), $`\langle |v_r(r)|^p\rangle \sim \langle |\varphi _{M,2}|^p\rangle `$ with $`r\sim 2^{-M}`$. If the $`\varphi _{i,j}(t)`$ are stochastic processes with characteristic times $`\tau _i=2^{-2i/3}\tau _0`$, zero mean and $`\langle |\varphi _{i,j}|^p\rangle \sim k_i^{-\zeta _p}`$, the scaling (C3) will be accomplished. An efficient way to generate the $`\varphi _{i,j}`$ is :
$$\varphi _{i,j}(t)=g_{i,j}(t)z_{1,j}(t)z_{2,j}(t)\cdots z_{i,j}(t)$$ (C9)
where the $`z_{k,j}`$ are independent, positive definite, identically distributed random processes with characteristic times $`\tau _k`$, while the $`g_{i,j}`$ are independent stochastic processes with zero mean, $`\langle g_{i,j}^2\rangle \sim k_i^{-2/3}`$ and characteristic times $`\tau _i`$.
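As an illustration of this construction, the following sketch (our own discretization, not the original code) evolves the processes $`g_{i,j}(t)`$ as independent Ornstein-Uhlenbeck processes with the stated variances and correlation times; for simplicity it takes $`z_{k,j}=1`$, i.e. the non-intermittent Kolmogorov case used in section IV B, so that $`\varphi _{i,j}=g_{i,j}`$.

```python
import numpy as np

def evolve_g(N, tau0=1.0, dt=1e-3, n_steps=10000, seed=1):
    """Evolve g_{i,j}(t), i = 1..N, j = 1, 2, as independent Ornstein-Uhlenbeck
    processes with zero mean, <g_{i,j}^2> ~ k_i^{-2/3} and correlation times
    tau_i = 2^{-2i/3} tau0.  With z_{k,j} = 1 this directly gives phi_{i,j}."""
    rng = np.random.default_rng(seed)
    i = np.arange(1, N + 1)
    tau = tau0 * 2.0 ** (-2.0 * i / 3.0)
    var = 2.0 ** (-2.0 * i / 3.0)          # <g^2> ~ k_i^{-2/3}, with k_i = 2^i
    rho = np.exp(-dt / tau)                # exact OU decay over one step
    sig = np.sqrt(var * (1.0 - rho**2))
    g = rng.normal(0.0, np.sqrt(var), size=(2, N))
    history = np.empty((n_steps, 2, N))
    for n in range(n_steps):
        g = rho * g + sig * rng.normal(size=(2, N))
        history[n] = g
    return history
```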
The scaling exponents $`\zeta _p`$ are determined by the probability distribution of the $`z_{i,j}`$ via
$$\zeta _p=\frac{p}{3}-\mathrm{log}_2\langle z^p\rangle .$$ (C10)
As a last remark, we note that by simply fixing $`z_{i,j}=1`$ we recover the Kolmogorov scaling $`\zeta _p=p/3`$, which has been used in the simulations presented in section IV B.

FIGURE CAPTIONS

Fig. 1: (a) Three realizations of $`R^2(t)`$ as a function of $`t`$, built as follows: $`R^2(t)=\delta _0^2\mathrm{exp}(2\gamma t)`$ if $`R^2(t)<1`$ and $`R^2(t)=2D(t-t_{*})`$ otherwise, with $`\gamma =0.08,0.05,0.3`$ and $`\delta _0=10^{-7}`$, $`D=1.5`$. (b) $`R^2(t)`$ as a function of $`t`$, averaged over the three realizations shown in figure 1a; the apparent anomalous regime and the diffusive one are shown. (c) $`\lambda (\delta )`$ vs $`\delta `$, with the Lyapunov and diffusive regimes.

Fig. 2: Lagrangian motion given by the Rayleigh-Bénard convection model with $`A=0.2,B=0.4,\omega =0.4,k=1.0,a=\pi `$; the number of realizations is $`𝒩=2000`$ and the series of thresholds is $`\delta _n=\delta _0r^n`$ with $`\delta _0=10^{-4}`$ and $`r=1.05`$. $`\lambda (\delta )`$ vs $`\delta `$, in a closed domain with $`6`$ (crosses) and $`12`$ (diamonds) convective cells. The lines are, respectively: (a) the Lyapunov regime with $`\lambda =0.017`$; (b) the diffusive regime with $`D=0.021`$; (c) the saturation regime with $`\delta _{max}=19.7`$; (d) the saturation regime with $`\delta _{max}=5.7`$.

Fig. 3: (a) $`R^2(t)`$ for the four-vortex system with $`\mathrm{\Gamma }_1=\mathrm{\Gamma }_2=\mathrm{\Gamma }_3=\mathrm{\Gamma }_4=1`$. The threshold parameter is $`r=1.03`$ with $`\delta _0=10^{-4}`$; the dashed line is the power law $`R^2(t)\sim t^{1.8}`$. The number of realizations is $`𝒩=2000`$. (b) $`\lambda (\delta )`$ vs $`\delta `$ for the same model and parameters. The horizontal line indicates the Lyapunov exponent ($`\lambda =0.14`$); the dashed curve is the saturation regime with $`\delta _{max}=0.76`$.

Fig. 4: FSLE computed for particle diffusion in a Sierpinski carpet of fractal dimension $`D_F=\mathrm{log}(8)/\mathrm{log}(3)`$, obtained by iterating the unit structure up to a resolution of $`3^5`$; one has $`\lambda (\delta )\sim \delta ^{-1/0.45}`$, in agreement with the value obtained from $`\langle R(t)\rangle `$ versus $`t`$ shown in the inset (i.e. $`\langle R(t)\rangle \sim t^{0.45}`$).

Fig. 5: Relative dispersion of Lagrangian trajectories, $`\langle R_{x,y}^2(t)\rangle `$ versus $`t`$, in the Adriatic Sea, for data (continuous line) and model (dashed line), along the natural axes of the basin: (a) the transverse direction ($`x`$-axis) and (b) the longitudinal direction ($`y`$-axis). Time is measured in $`days`$ and the mean square radius of the cluster in $`km^2`$.

Fig. 6: FSLE of Lagrangian trajectories in the Adriatic Sea, for data (continuous line) and model (dashed line). The scale $`\delta `$ is in $`km`$, and $`\lambda (\delta )`$ is in $`days^{-1}`$.

Fig. 7: Relative dispersion $`R(t)`$ for the $`N=20`$ octave synthetic turbulence simulation, averaged over $`10^4`$ realizations. The line is the theoretical Richardson scaling $`t^{3/2}`$.

Fig. 8: Average inverse doubling time $`\langle 1/T(R)\rangle `$ for the same simulation as in the previous figure. Observe the enhanced scaling region. The line is the theoretical Richardson scaling $`R^{-2/3}`$.
# Two-Dimensional Electron-Hole Systems in a Strong Magnetic Field: Composite Fermion Picture for Multi-Component Plasmas

## Introduction.

Recently there has been considerable interest in two-dimensional systems containing both electrons and holes in the presence of a strong magnetic field. In such systems, neutral excitons ($`X^0`$), charged excitons ($`X^-`$) and larger exciton complexes ($`X_k^-`$, i.e. $`k`$ neutral $`X^0`$'s bound to an electron) can occur. The excitonic ions $`X_k^-`$ are long-lived Fermions whose energy spectra contain Landau level structure. In this paper we investigate, by exact numerical diagonalization, small systems containing $`N_e`$ electrons and $`N_h`$ holes ($`N_e\ge N_h`$), confined to the surface of a Haldane sphere. For $`N_h=1`$ these systems serve as simple guides to understanding photoluminescence. For larger values of $`N_h`$ it is possible to form a multi-component plasma containing electrons and $`X_k^-`$ complexes. We propose a model for determining the incompressible quantum fluid states of such plasmas, and confirm the validity of the model by numerical calculations. In addition, we introduce a new generalized composite Fermion (CF) picture for the multi-component plasma and use it to predict the low lying bands of angular momentum multiplets at any value of the magnetic field.

## Bound States.

In a sufficiently strong magnetic field, the only bound electron-hole complexes are the neutral exciton $`X^0`$ and the spin-polarized charged excitonic ions $`X_k^-`$ (electron $`e^-\equiv X_0^-`$, charged exciton $`X^-\equiv X_1^-`$, charged biexciton $`X_2^-`$, etc.). All other complexes found at weaker magnetic fields (e.g. the spin-singlet charged exciton or the spin-singlet biexciton) unbind. The angular momenta of the complexes $`X^0`$ and $`X_k^-`$ on a Haldane sphere with monopole strength $`2S`$ are $`l_{X^0}=0`$ and $`l_{X_k^-}=|S|-k`$. The binding energies of an exciton, $`\epsilon _0=-E_{X^0}`$, and of the excitonic ions, $`\epsilon _k=E_{X_{k-1}^-}+E_{X^0}-E_{X_k^-}`$ ($`E_A`$ is the energy of complex $`A`$), are listed in Tab. I for several different values of $`2S`$. It is apparent that $`\epsilon _0>\epsilon _1>\epsilon _2>\epsilon _3`$. Depending on the ratio $`N_e:N_h`$, we expect to find different combinations of complexes that have the largest total binding energy. When $`N_e=N_h`$ we expect $`N_h`$ neutral excitons $`X^0`$ to form. When $`N_e\ge 2N_h`$ the low lying states will contain $`N_h`$ charged excitons $`X^-`$ and $`N_e-2N_h`$ free electrons $`e^-`$. For $`N_h<N_e<2N_h`$ we expect to find larger charged exciton complexes.

## Pseudopotentials.

Whether the states with the largest binding energy form the lowest energy band of the electron-hole system depends on the interaction between the charged complexes $`X_k^-`$. The interaction of a pair of charged particles $`A`$ and $`B`$ with angular momenta $`l_A`$ and $`l_B`$ can be described by a pseudopotential $`V_{AB}(L)`$, where $`\widehat{L}=\widehat{l}_A+\widehat{l}_B`$ is the total pair angular momentum. It is convenient to plot pseudopotentials as a function of the relative angular momentum $`\mathcal{R}=l_A+l_B-L`$. Fig. 1 shows $`V_{AB}(\mathcal{R})`$ for the pairs $`e^-e^-`$, $`e^-X^-`$, $`X^-X^-`$, and $`e^-X_2^-`$, at monopole strength $`2S=17`$. Roughly, the pseudopotential parameters $`V_{AB}(\mathcal{R})`$ calculated for different pairs $`AB`$ at a given $`2S`$ lie on the same curve.
Small differences between the energies $`V_{AB}`$ calculated for different pairs at the same $`\mathcal{R}`$ are due to the different values of $`l_A`$ and $`l_B`$ and to the finite size and polarization of the composite particles. Only the latter effect, important at small $`\mathcal{R}`$, persists for $`2S\to \mathrm{\infty }`$, i.e. in the planar geometry. The major and critical difference between the four plotted pseudopotentials lies in the allowed values of $`\mathcal{R}`$. If all $`A`$ and $`B`$ were point charges, the allowed pair angular momenta for two identical Fermions ($`A=B`$) would be $`L=2l_A-j`$, where $`j`$ is an odd integer, i.e. $`\mathcal{R}=1`$, 3, …, up to $`2l_A`$. For two distinguishable particles ($`A\ne B`$), the values of $`L`$ would satisfy $`|l_A-l_B|\le L\le l_A+l_B`$, i.e. $`\mathcal{R}=0`$, 1, 2, …, up to $`2\mathrm{min}(l_A,l_B)`$. However, if $`A`$ or $`B`$ is a composite particle, one or more pair states with the largest $`L`$ (smallest $`\mathcal{R}`$) are forbidden, and the corresponding pseudopotential parameters are effectively infinite (the $`AB`$ repulsion has a hard core). For $`A=X_{k_A}^-`$ and $`B=X_{k_B}^-`$, the smallest allowed $`\mathcal{R}`$ can be deduced from the mapping between the electron-hole and two-spin systems,
$$\mathcal{R}_{AB}^{\mathrm{min}}=2\mathrm{min}(k_A,k_B)+1.$$ (1)
Thus, in Fig. 1, $`\mathcal{R}_{e^-X^-}\ge 1`$, $`\mathcal{R}_{X^-X^-}\ge 3`$, etc. Low lying states of a system of $`N_e`$ electrons and $`N_h`$ holes can contain a number of charged complexes $`X_k^-`$ ($`X^-`$ and possibly larger ones) interacting with one another and with the electrons through the appropriate pseudopotentials. It has been shown that the Laughlin $`\nu =1/m`$ state occurs in a gas of (identical) Fermions if the pseudopotential increases faster than linearly as a function of $`L(L+1)`$ in the vicinity of $`\mathcal{R}=m`$. As seen in the inset of Fig. 1, this is true for both $`V_{e^-e^-}`$ and $`V_{X^-X^-}`$, and also (at even values of $`\mathcal{R}`$) for $`V_{e^-X^-}`$ and $`V_{e^-X_2^-}`$. In Ref. we found Laughlin states of the one-component $`X^-`$ gas formed at $`N_e=2N_h`$. In the present note we concentrate on a more general situation, in which more than one kind of charged particle occurs in the electron-hole system, and find incompressible fluid states of such a multi-component plasma.

## Numerical Results.

As an illustration, we first present the results of exact diagonalization performed for the system with $`N_e=8`$ and $`N_h=2`$. We expect low lying bands of states containing the following combinations of complexes: (i) $`4e^-+2X^-`$, (ii) $`5e^-+X_2^-`$, (iii) $`5e^-+X^-+X^0`$, and (iv) $`6e^-+2X^0`$. All groupings (i)–(iv) contain an equal number $`N=N_e-N_h`$ of singly charged complexes; however, both the angular momenta of the complexes involved and the relevant hard cores are different. The total binding energies are $`\epsilon _\mathrm{i}=2\epsilon _0+2\epsilon _1`$, $`\epsilon _{\mathrm{ii}}=2\epsilon _0+\epsilon _1+\epsilon _2`$, $`\epsilon _{\mathrm{iii}}=2\epsilon _0+\epsilon _1`$, and $`\epsilon _{\mathrm{iv}}=2\epsilon _0`$. Clearly, $`\epsilon _\mathrm{i}>\epsilon _{\mathrm{ii}}>\epsilon _{\mathrm{iii}}>\epsilon _{\mathrm{iv}}`$. However, which of the groupings contains the (possibly incompressible) ground state depends not only upon the total binding energy, but also upon the interactions between all the charged particles, which depend on $`2S`$. In Fig. 2 we show the low energy spectra of the $`8e+2h`$ system at $`2S=9`$ (a), $`2S=13`$ (c), and $`2S=14`$ (e).
Filled circles mark the non-multiplicative states, while open circles and squares mark the multiplicative states with one and two decoupled excitons, respectively. In frames (b), (d) and (f) we plot the low energy spectra of the different charge complexes interacting through the appropriate pseudopotentials (see Fig. 1), corresponding to the four possible groupings (i)–(iv). By comparing the left and right frames, we can identify low lying states of types (i)–(iv) in the electron-hole spectra. In general, the energies calculated from the pseudopotentials $`V_{AB}`$ in Fig. 2 underestimate the energies of the corresponding electron-hole system if $`N`$ and $`2S`$ are large. This can be partially understood in terms of polarization effects in the two-particle pseudopotentials. For a particular grouping and value of $`2S`$, it is possible to calculate pseudopotentials that give a very good fit to the electron-hole spectrum. The "correct" pseudopotentials for the $`8e+2h`$ system are close to those of a pair of point charges with appropriate angular momenta $`l_A`$ and $`l_B`$, except for the hard cores. It is unlikely that a system containing a large number of different species (e.g. $`e^-`$, $`X^-`$, $`X_2^-`$, etc.) will form the absolute ground state of the electron-hole system. However, different charge configurations can form low lying excited bands. An interesting example is the $`12e+6h`$ system at $`2S=17`$. The $`6X^-`$ grouping (v) has the maximum total binding energy, $`\epsilon _\mathrm{v}=6\epsilon _0+6\epsilon _1`$. Other expected low lying bands correspond to the following groupings: (vi) $`e^-+5X^-+X^0`$ with $`\epsilon _{\mathrm{vi}}=6\epsilon _0+5\epsilon _1`$, and (vii) $`e^-+4X^-+X_2^-`$ with $`\epsilon _{\mathrm{vii}}=6\epsilon _0+5\epsilon _1+\epsilon _2`$. Although we are unable to perform an exact diagonalization for the $`12e+6h`$ system in terms of individual electrons and holes, we can use the appropriate pseudopotentials and binding energies of groupings (v)–(vii) to obtain the low lying states in the spectrum. The results are presented in Fig. 3. There is only one $`6X^-`$ state (the $`L=0`$ Laughlin $`\nu _{X^-}=1/3`$ state) and two bands of states in each of the groupings (vi) and (vii). A gap of 0.0626 $`e^2/\lambda `$ separates the $`L=0`$ ground state from the lowest excited state.

## Generalized Laughlin Wavefunction.

It is known that if the pseudopotential $`V(\mathcal{R})`$ decreases quickly with increasing $`\mathcal{R}`$, the low lying multiplets avoid the (strongly repulsive) pair states with one or more of the smallest values of $`\mathcal{R}`$. For the (one-component) electron gas on a plane, avoiding pair states with $`\mathcal{R}<m`$ is achieved with the factor $`\prod _{i<j}(x_i-x_j)^m`$ in the Laughlin $`\nu =1/m`$ wavefunction. For a system containing a number of distinguishable types of Fermions interacting through Coulomb-like pseudopotentials, the appropriate generalization of the Laughlin wavefunction will contain a factor $`\prod (x_i^{(a)}-x_j^{(b)})^{m_{ab}}`$, where $`x_i^{(a)}`$ is the complex coordinate for the position of the $`i`$th particle of type $`a`$, and the product is taken over all pairs. For each type of particle, one power of $`(x_i^{(a)}-x_j^{(a)})`$ results from the antisymmetrization required for indistinguishable Fermions, and the other factors describe Jastrow type correlations between the interacting particles. Such a wavefunction guarantees that $`\mathcal{R}_{ab}\ge m_{ab}`$ for all pairings of the various types of particles, thereby avoiding the large pair repulsion.
Fermi statistics of the particles of each type requires that all $`m_{aa}`$ be odd, and the hard cores defined by Eq. (1) require that $`m_{ab}\ge \mathcal{R}_{AB}^{\mathrm{min}}`$ for all pairs.

## Generalized Composite Fermion Picture.

In order to understand the numerical results obtained in the spherical geometry (Figs. 2 and 3), it is useful to introduce a generalized CF picture by attaching to each particle fictitious flux tubes carrying an integral number of flux quanta $`\varphi _0`$. In the multi-component system, each $`a`$-particle carries flux $`(m_{aa}-1)\varphi _0`$ that couples only to the charges on all other $`a`$-particles, and fluxes $`m_{ab}\varphi _0`$ that couple only to the charges on all $`b`$-particles, where $`a`$ and $`b`$ are any of the types of Fermions. The effective monopole strength seen by a CF of type $`a`$ (CF-$`a`$) is
$$2S_a^{*}=2S-\sum _b(m_{ab}-\delta _{ab})(N_b-\delta _{ab})$$ (2)
For different multi-component systems we expect generalized Laughlin incompressible states (for two components denoted as $`[m_{AA},m_{BB},m_{AB}]`$) when all the hard core pseudopotentials are avoided and the CF's of each kind completely fill an integral number of their CF shells (e.g. $`N_a=2l_a^{*}+1`$ for the lowest shell). In other cases, the low lying multiplets are expected to contain different kinds of quasiparticles (QP-$`A`$, QP-$`B`$, …) or quasiholes (QH-$`A`$, QH-$`B`$, …) in the neighboring incompressible state. Our multi-component CF picture can be applied to the system of excitonic ions, where the CF angular momenta are given by $`l_{X_k^-}^{*}=|S_{X_k^-}^{*}|-k`$. As an example, let us first analyze the low lying $`8e+2h`$ states in Fig. 2; the corresponding bookkeeping is mechanized in the code sketch below. At $`2S=9`$, for $`m_{e^-e^-}=m_{X^-X^-}=3`$ and $`m_{e^-X^-}=1`$, we predict the following low lying multiplets in each grouping: (i) $`2S_{e^-}^{*}=1`$ and $`2S_{X^-}^{*}=3`$ give $`l_{e^-}^{*}=l_{X^-}^{*}=1/2`$. Two CF-$`X^-`$'s fill their lowest shell ($`L_{X^-}=0`$), and we have two QP-$`e^-`$'s in their first excited shell, each with angular momentum $`l_{e^-}^{*}+1=3/2`$ ($`L_{e^-}=0`$ and 2). Addition of $`L_{e^-}`$ and $`L_{X^-}`$ gives total angular momenta $`L=0`$ and 2. We interpret these states as those of two QP-$`e`$'s in the incompressible state. Similarly, for the other groupings we obtain: (ii) $`L=2`$; (iii) $`L=1`$, 2, and 3; and (iv) $`L=0`$ (the $`\nu =2/3`$ state of six electrons). At $`2S=13`$ and 14 we set $`m_{e^-e^-}=m_{X^-X^-}=3`$ and $`m_{e^-X^-}=2`$ and obtain the following predictions. First, at $`2S=13`$: (i) The ground state is the incompressible state at $`L=0`$; the first excited band should therefore contain states with one QP-QH pair of either kind. For the $`e^-`$ excitations, the QH-$`e^-`$ and QP-$`e^-`$ angular momenta are $`l_{e^-}^{*}=3/2`$ and $`l_{e^-}^{*}+1=5/2`$, respectively, and the allowed pair states have $`L_{e^-}=1`$, 2, 3, and 4. However, the $`L=1`$ state has to be discarded, as it is known to have high energy in the one-component (four electron) spectrum. For the $`X^-`$ excitations, we have $`l_{X^-}^{*}=1/2`$ and the pair states can have $`L_{X^-}=1`$ or 2. The first excited band is therefore expected to contain multiplets at $`L=1`$, $`2^2`$, 3, and 4. The low lying multiplets for the other groupings are expected at: (ii) $`2S_{X_2^-}^{*}=3`$ gives no bound $`X_2^-`$ state; setting $`m_{e^-X_2^-}=1`$ we obtain $`L=2`$; (iii) $`L=2`$ and 3; and (iv) $`L=0`$, 2, and 4.
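The bookkeeping of Eq. (2) is easily mechanized. The following small sketch is our own helper, not part of the original calculation; the inputs reproduce grouping (i) at $`2S=9`$, and it computes the effective monopole strengths $`2S_a^{*}`$ and the CF angular momenta quoted above.

```python
from fractions import Fraction

def effective_2S(two_S, N, m):
    """Eq. (2): 2S*_a = 2S - sum_b (m_ab - delta_ab)(N_b - delta_ab).
    N: dict species -> particle number; m: dict (a, b) -> m_ab (symmetric)."""
    return {a: two_S - sum((m[(a, b)] - (a == b)) * (Nb - (a == b))
                           for b, Nb in N.items())
            for a in N}

# Grouping (i) at 2S = 9: four electrons and two X^-,
# with m_ee = m_XX = 3 and m_eX = 1.
N = {"e": 4, "X": 2}
m = {("e", "e"): 3, ("X", "X"): 3, ("e", "X"): 1, ("X", "e"): 1}
two_S_star = effective_2S(9, N, m)        # {'e': 1, 'X': 3}
# CF angular momenta l*_a = |S*_a| - k_a, with k = 0 for e^- and k = 1 for X^-.
k = {"e": 0, "X": 1}
l_star = {a: Fraction(abs(s), 2) - k[a] for a, s in two_S_star.items()}
print(two_S_star, l_star)                 # l*_e = l*_X = 1/2, as quoted above
```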
Finally, at $`2S=14`$ we obtain: (i) $`L=1`$, 2, and 3; (ii) the incompressible \[3\*2\] state at $`L=0`$ ($`m_{X_2^-X_2^-}`$ is irrelevant for a single $`X_2^-`$), with the first excited band at $`L=1`$, 2, 3, 4, and 5; (iii) $`L=1`$; and (iv) $`L=3`$. For the $`12e+6h`$ spectrum in Fig. 3 the following CF predictions are obtained: (v) For $`m_{X^-X^-}=3`$ we obtain the Laughlin $`\nu =1/3`$ state with $`L=0`$. Because of the hard core of $`V_{X^-X^-}`$, this is the only state of this grouping. (vi) We set $`m_{X^-X^-}=3`$ and $`m_{e^-X^-}=1`$, 2, and 3. For $`m_{e^-X^-}=1`$ we obtain $`L=1`$, 2, $`3^2`$, $`4^2`$, $`5^3`$, $`6^3`$, $`7^3`$, $`8^2`$, $`9^2`$, 10, and 11. For $`m_{e^-X^-}=2`$ we obtain $`L=1`$, 2, 3, 4, 5, and 6. For $`m_{e^-X^-}=3`$ we obtain $`L=1`$. (vii) We set $`m_{X^-X^-}=3`$, $`m_{e^-X_2^-}=1`$, $`m_{X^-X_2^-}=3`$, and $`m_{e^-X^-}=1`$, 2, or 3. For $`m_{e^-X^-}=1`$ we obtain $`L=2`$, 3, $`4^2`$, $`5^2`$, $`6^3`$, $`7^2`$, $`8^2`$, 9, and 10. For $`m_{e^-X^-}=2`$ we obtain $`L=2`$, 3, 4, 5, and 6. For $`m_{e^-X^-}=3`$ we obtain $`L=2`$. In groupings (vi) and (vii), the sets of multiplets obtained for higher values of $`m_{e^-X^-}`$ are subsets of the sets obtained for lower values, and we would expect them to form lower energy bands, since they avoid additional small values of $`\mathcal{R}_{e^-X^-}`$. However, note that the (vi) and (vii) states predicted for $`m_{e^-X^-}=3`$ (at $`L=1`$ and 2, respectively) do not form separate bands in Fig. 3. This is because the $`V_{e^-X^-}`$ pseudopotential increases more slowly than linearly as a function of $`L(L+1)`$ in the vicinity of $`\mathcal{R}_{e^-X^-}=3`$ (see Fig. 1). In such a case the CF picture fails. The agreement of our CF predictions with the data in Figs. 2 and 3 (marked with lines) is really quite remarkable and strongly indicates that our multi-component CF picture is correct. We were indeed able to confirm the predicted Jastrow type correlations in the low lying states by calculating their coefficients of fractional parentage. We have also verified the CF predictions for other systems that we were able to treat numerically. If the exponents $`m_{ab}`$ are chosen correctly, the CF picture works well in all cases.

## Summary.

Charged excitons and excitonic complexes play an important role in determining the low energy spectra of electron-hole systems in a strong magnetic field. We have introduced general Laughlin type correlations into the wavefunctions, and proposed a generalized CF picture to elucidate the angular momentum multiplets forming the lowest energy bands for the different charge configurations occurring in the electron-hole system. We have found Laughlin incompressible fluid states of multi-component plasmas at particular values of the magnetic field, and the lowest bands of multiplets for various charge configurations at any value of the magnetic field. It is noteworthy that fictitious Chern-Simons fluxes and charges of different types or colors are needed in the generalized CF model. This strongly suggests that the effective magnetic field seen by the CF's does not physically exist, and that the CF picture should be regarded as a mathematical convenience rather than physical reality. Our model also suggests an explanation of some perplexing observations found in photoluminescence, but this topic will be addressed in a separate publication. We thank P. Hawrylak and M. Potemski for helpful discussions.
AW and JJQ acknowledge partial support from the Materials Research Program of Basic Energy Sciences, US Department of Energy. KSY acknowledges support from the Korea Research Foundation (Project No. 1998-001-D00305).
## 1 Introduction

The conventional framework for particle physics beyond the standard model (SM) assumes that the fundamental mass scale of nature is the Planck mass: $`M_{Pl}\sim 10^{19}`$ GeV. It is then natural to ask: why are the masses of the elementary particles so small? Proposed solutions to this hierarchy problem have a common feature: new non-perturbative gauge interactions dynamically generate a much lower scale, $`M_{dyn}`$, from which electroweak symmetry breaking is generated, and hence all the masses of the known elementary particles. Schematically, this mass hierarchy is
$$M_{Pl}\gg M_{dyn}\gg M_W\gg \mathrm{}\gg m_e.$$ (1)
In supersymmetric theories, $`M_{dyn}`$ is the scale at which supersymmetry is broken, and the triggering of electroweak symmetry breaking may be mediated, for example, by gravitational-scale physics, or by gauge interactions at much lower energy scales. Alternatively, $`M_{dyn}`$ may be the scale of a new gauge force, technicolor, which forms fermion condensates that directly break $`SU(2)\times U(1)`$. Finally, new strong gauge forces could bind a composite Higgs boson. Recently an alternative framework has been proposed in which spacetime is enlarged to contain large extra compact spatial dimensions. At distances smaller than the size of these extra dimensions the gravitational force varies more rapidly than the inverse square law, so that the fundamental mass scale of gravity can be made much smaller than $`M_{Pl}`$. The conventional mass hierarchy of (1) is completely avoided if this fundamental mass scale is of order the weak scale. In this case, the length scale of the extra dimensions is much larger than the scales probed experimentally at colliders, and hence this framework requires that the quarks, leptons and gauge quanta of the SM be spatially confined to a $`3+1`$ dimensional sub-space of the enlarged spacetime. The physics at the fundamental scale, $`\mathrm{\Lambda }`$, which may well be that of string theory, will be directly accessible to colliders of sufficiently high energy; but even at lower energies this physics may be probed experimentally. At energies below the fundamental mass scale, physics is described by an effective Lagrangian, which we take to be the most general set of $`SU(3)\times SU(2)\times U(1)`$ invariant operators involving the quark, lepton and Higgs doublet fields of the SM:
$$\mathcal{L}_{eff}=\mathcal{L}_{SM}+\sum _i\frac{c_i}{\mathrm{\Lambda }^p}𝒪_i^{4+p}$$ (2)
where $`\mathcal{L}_{SM}`$ is the SM Lagrangian, $`i`$ runs over all gauge invariant operators $`𝒪_i^{4+p}`$ of dimension $`4+p`$ with $`p\ge 1`$, and the $`c_i`$ are unknown dimensionless couplings. In this letter we study the consequences of several of the dimension-6 operators. First we derive bounds on the $`c_i/\mathrm{\Lambda }^2`$ from existing experimental results under very conservative assumptions about flavor-breaking in the ultraviolet theory. We then re-examine the precision electroweak bounds on the Higgs boson mass. Analyses within the standard model find a light Higgs; however, we will show that such results do not survive the addition of non-renormalizable operators, even if those operators are suppressed by scales as large as $`11\mathrm{TeV}`$. In theories with large extra dimensions there is no good argument for preferring a light Higgs over a heavy Higgs or a non-linearly realized $`SU(2)\times U(1)`$ symmetry, in which case (2) must be replaced by a chiral Lagrangian.
Finally, we examine two operators in particular and their effects on the discovery of Higgs bosons:
$$𝒪_G=\phi ^{\dagger }\phi G_{\mu \nu }^aG^{a\mu \nu }$$ (3)
$$𝒪_\gamma =\phi ^{\dagger }\phi F_{\mu \nu }F^{\mu \nu }$$ (4)
where $`G_{\mu \nu }^a`$ and $`F^{\mu \nu }`$ are the QCD and electromagnetic field strengths, and $`\phi `$ is the Higgs doublet with Re$`\phi ^0=(v+h)/\sqrt{2}`$. The first operator contributes to Higgs production at hadron colliders via gluon-gluon fusion, and the second to the Higgs decay $`h\to \gamma \gamma `$. There are two reasons why these effects provide a significant discovery potential for extra dimensions: first, they compete against a SM signal which is loop-suppressed, and second, the SM $`\mathrm{\Gamma }(h\to \gamma \gamma )`$ is further suppressed by $`e^4\sim 10^{-2}`$, where $`e`$ is the electromagnetic coupling constant. However, we assume that the physics at the scale $`\mathrm{\Lambda }`$ which generates (3)–(4) does so in such a way that the coefficients are not suppressed by powers of the SM gauge coupling constants (see also ). Such a behavior is certainly not expected if the theory at $`\mathrm{\Lambda }`$ is a 4-dimensional gauge field theory: in that case operators of the form (3)–(4) would arise by integrating out heavy fields, but these fields must couple to $`F_{\mu \nu }`$ and $`G_{\mu \nu }`$ with the usual SM gauge couplings, and further, as shown in , they must also be loop-suppressed. Thus even if the gauge theory at $`\mathrm{\Lambda }`$ were strongly-coupled, it seems unlikely that coefficients of $`𝒪(1)`$ could be generated. This is very important: the effect of the interaction $`(e^2/\mathrm{\Lambda }^2)\phi ^{\dagger }\phi F^2`$ on the $`h\to \gamma \gamma `$ branching ratio has been studied, and is small for $`\mathrm{\Lambda }\gtrsim 1\mathrm{TeV}`$ . Thus observation of the physics we will describe in Section 5 would provide support for an extra-dimensional theory.

## 2 Some Constraints on $`\mathrm{\Lambda }`$

Are the coefficients $`c_{G,\gamma }/\mathrm{\Lambda }^2`$ expected to be large enough to yield an observable $`h\to \gamma \gamma `$ signal? In general this cannot be excluded, since physics induced by the operators $`𝒪_i`$ will place bounds on
$$\frac{f_i}{\mathrm{\Lambda }_i^p}\equiv \frac{c_i}{\mathrm{\Lambda }^p}\qquad (f_i=\pm 1)$$ (5)
and not on $`\mathrm{\Lambda }`$ itself. However, it would be unreasonable to expect $`c_{G,\gamma }`$ to be orders of magnitude larger than all the other $`c_i`$. It is tempting to assume that although the dimensionless coefficients $`c_i`$ are unknown, they are all of order unity. However, in this case operators which violate baryon number constrain $`\mathrm{\Lambda }\gtrsim 10^{16}\mathrm{GeV}`$, and $`CP`$ violating operators contributing to $`ϵ_K`$ constrain $`\mathrm{\Lambda }\gtrsim 10^5\mathrm{GeV}`$. Thus the framework of large compact extra dimensions, allowing a fundamental scale close to the weak scale, is clearly excluded unless the low energy effective theory possesses an approximate flavor symmetry, in which case one expects
$$c_i=\epsilon _{Fi}c_i^{}$$ (6)
with $`c_i^{}`$ of order unity. The flavor symmetry breaking parameters $`\epsilon _{Fi}`$ depend on the flavor symmetry group and the pattern of flavor symmetry breaking. For operators which violate flavor and $`CP`$ they must be small, while for operators which conserve flavor and $`CP`$ they may be set to unity.
To allow low values of $`\mathrm{\Lambda }`$, the flavor group should be large, and its breaking should be kept to the minimum consistent with the observed quark and lepton masses and mixings. The maximal flavor group of the SM is $`U(3)^5`$. The three generations of quarks and leptons transform as $`q_L=(u_L,d_L)\sim (3,1,1,1,1)`$; $`u_R\sim (1,3,1,1,1)`$; $`d_R\sim (1,1,3,1,1)`$; $`\mathrm{}_L=(\nu _L,e_L)\sim (1,1,1,3,1)`$; $`e_R\sim (1,1,1,1,3)`$. If there are only three symmetry breaking parameters, one for each of the up, down and charged lepton mass matrices, $`\epsilon _u\sim (3,\overline{3},1,1,1)`$; $`\epsilon _d\sim (3,1,\overline{3},1,1)`$; $`\epsilon _e\sim (1,1,1,3,\overline{3})`$, then baryon number and lepton number remain unbroken. (The $`\epsilon _i`$ are equal to the Yukawa couplings up to $`𝒪(1)`$ factors $`c_i`$: $`\lambda _{u,d,e}=c_{u,d,e}\epsilon _{u,d,e}`$.) However, even after imposing such a flavor symmetry, there remain operators such as
$$𝒪_{qq}=(\overline{q}_L\gamma ^\mu \epsilon _u\epsilon _u^{\dagger }q_L)^2=c_u^{-4}(\overline{q}_L\gamma ^\mu \lambda _u\lambda _u^{\dagger }q_L)^2$$ (7)
which contribute to $`ϵ_K`$ and constrain $`\mathrm{\Lambda }\gtrsim 4.2\mathrm{TeV}\times (\sqrt{c_{qq}}/c_u^2)`$. There are two ways to avoid this bound. First, since the bound depends quadratically on $`c_u`$, values slightly larger than 1 will weaken it significantly; for example, $`c_u=1.5`$ already brings the bound down to about $`1.9\mathrm{TeV}\times \sqrt{c_{qq}}`$. This seems entirely natural to us. Second, one could postulate that $`\epsilon _{u,d}`$ are real and that the observed $`ϵ_K`$ has an exotic origin; we view this as disfavored, given that measurements of $`V_{ub}/V_{cb}`$ and $`B\overline{B}`$ mixing indicate values of the CKM matrix elements consistent with a standard model origin of $`ϵ_K`$ to better than $`30\%`$. For the $`h\to \gamma \gamma `$ signal, we are interested in the operators (3)–(4), which conserve $`U(3)^5`$. Hence, even if the higher dimension flavor violating operators, such as (7), are completely absent, it is important to study the constraints on $`\mathrm{\Lambda }`$ expected from operators which conserve $`U(3)^5`$. Such operators include flavor-conserving four-fermion operators and operators involving the Higgs doublet and the gauge fields. There have been many analyses to date which obtain constraints from these operators, and here we simply repeat the results of those analyses in the notation we are using for $`\mathrm{\Lambda }`$. (An analysis similar to ours was recently presented in .) Among the $`CP`$-conserving four-fermion operators, the strongest constraints come from atomic parity violation. The operator
$$𝒪_\mathrm{}q=(\overline{\mathrm{}}_L\gamma _\mu \mathrm{}_L)(\overline{q}_L\gamma ^\mu q_L)$$ (8)
gives a constraint $`\mathrm{\Lambda }_{lq}>3.0\mathrm{TeV}`$ at 95% CL. If the operator $`(\overline{e}_R\gamma _\mu e_R)(\overline{q}_L\gamma ^\mu q_L)`$ were generated with the same coefficient, $`P`$ would be preserved in atomic systems and the previous limit would vanish. Although we do not expect $`P`$ to be a good symmetry of the underlying theory, a partial cancellation could easily weaken this bound. Apart from $`P`$-violation, the best bounds on $`\mathrm{\Lambda }_{lq}`$ currently come from OPAL , using the $`\mathrm{}_1,q_3`$ component, and from CDF , using the $`\mathrm{}_2,q_1`$ component. Both find $`\mathrm{\Lambda }>800\mathrm{GeV}`$ at 95% CL.
The bounds on the coefficients of the operators $`𝒪_{qq,\ell q}`$ of (7)–(8) do not provide strict bounds on the scale $`\mathrm{\Lambda }`$, because $`\mathrm{\Lambda }=\mathrm{\Lambda }_i\sqrt{c_i}`$, and the $`c_i`$ are unknown. Nevertheless, if the (flavor-conserving) $`c_i^{\prime }=c_i`$ are expected to be of order unity for these operators, then $`\mathrm{\Lambda }\stackrel{>}{}3\mathrm{TeV}`$ is clearly allowed, while a value of $`\mathrm{\Lambda }`$ as low as $`1\mathrm{TeV}`$ seems disfavored. ## 3 Precision Electroweak Physics and the Higgs Mass Bound A second class of constraints arises from precision measurements in the electroweak gauge sector, namely from the $`S`$ and $`T`$ parameters (see, e.g., ). The strongest of these constraints arise from the operators: $`𝒪_{BW}`$ $`=`$ $`B_{\mu \nu }(\phi ^{\dagger }\tau ^aW^{a\mu \nu }\phi )`$ (9) $`𝒪_\mathrm{\Phi }`$ $`=`$ $`(\phi ^{\dagger }D^\mu \phi )(D_\mu \phi ^{\dagger }\phi )`$ (10) which contribute $`\mathrm{\Delta }S_{new}`$ $`=`$ $`-{\displaystyle \frac{2c_Ws_W}{\alpha }}{\displaystyle \frac{v^2}{\mathrm{\Lambda }_{BW}^2}}f_{BW}`$ (11) $`\mathrm{\Delta }T_{new}`$ $`=`$ $`{\displaystyle \frac{1}{2\alpha }}{\displaystyle \frac{v^2}{\mathrm{\Lambda }_\mathrm{\Phi }^2}}f_\mathrm{\Phi }`$ (12) where $`s_W,c_W`$ are the sine and cosine of the weak angle and $`f_{BW}`$, $`f_\mathrm{\Phi }`$ are unknown signs. A global fit to electroweak observables yields $`S_{fit}=-0.14\pm 0.12`$ and $`T_{fit}=-0.22\pm 0.15`$ assuming $`m_h=100\mathrm{GeV}`$<sup>1</sup><sup>1</sup>1The fit in uses $`m_h=600\mathrm{GeV}`$ and defines $`S=T=0`$ in the SM. We rescale to $`m_h=100\mathrm{GeV}`$ using the parameterization of Ref. (see Eqs. (16)–(17)). We then treat deviations from $`m_h=100\mathrm{GeV}`$ as “new physics.” Since each operator contributes only to one of $`S`$ or $`T`$, we can find independent bounds on each. We find that at 95% CL: $`\mathrm{\Lambda }_{BW}`$ $`>`$ $`3.6\mathrm{TeV}`$ (13) $`\mathrm{\Lambda }_\mathrm{\Phi }`$ $`>`$ $`3.0\mathrm{TeV}.`$ (14) We can also extract a bound if $`\mathrm{\Lambda }_{BW}=\mathrm{\Lambda }_\mathrm{\Phi }`$: $`\mathrm{\Lambda }>4.0\mathrm{TeV}`$, allowing the Higgs mass to vary over the range $`100\mathrm{GeV}<m_h<800\mathrm{GeV}`$. We see that the constraints from precision electroweak physics are very similar in magnitude to those obtained in the previous section. How important are these constraints for restricting $`\mathrm{\Lambda }_\gamma `$? Although the electromagnetic field strength, $`F^{\mu \nu }`$, is not $`SU(2)\times U(1)`$ invariant, the operator $`𝒪_\gamma `$ is generated, after electroweak symmetry-breaking, from the invariant operators $`𝒪_B=(\phi ^{\dagger }\phi )B_{\mu \nu }B^{\mu \nu }`$, $`𝒪_W=(\phi ^{\dagger }\phi )W_{\mu \nu }W^{\mu \nu }`$ and $`𝒪_{BW}`$ of Eq. (9): $$\frac{f_\gamma }{\mathrm{\Lambda }_\gamma ^2}=c_W^2\frac{f_B}{\mathrm{\Lambda }_B^2}+s_W^2\frac{f_W}{\mathrm{\Lambda }_W^2}+c_Ws_W\frac{f_{BW}}{\mathrm{\Lambda }_{BW}^2}.$$ (15) If all $`f_i`$ and $`\mathrm{\Lambda }_i`$ on the right side of Eq. (15) were equal, then the bound (13) on $`\mathrm{\Lambda }_{BW}`$ implies $`\mathrm{\Lambda }_\gamma >3.3\mathrm{TeV}`$. However, changes in the relative signs or sizes of each contribution significantly reduce the bound; thus we have no strong lower bound on the scale $`\mathrm{\Lambda }_\gamma `$ itself. Likewise we know of no strong constraint on the scale $`\mathrm{\Lambda }_G`$ either. Finally we wish to address the question of the Higgs mass.
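To see why these bounds land in the few-TeV range, a rough numerical sketch of the magnitudes of Eqs. (11)–(12) follows; $`\alpha (M_Z)\approx 1/128`$ and $`\mathrm{sin}^2\theta _W\approx 0.23`$ are standard inputs, and only absolute values are shown since the signs $`f_{BW},f_\mathrm{\Phi }`$ are unknown.

```python
# Rough magnitudes of |Delta S_new| and |Delta T_new| from Eqs. (11)-(12),
# to be compared with the +-0.12 and +-0.15 uncertainties of the fit.
import math

v = 246.0                      # GeV
alpha = 1.0 / 128.0
s2 = 0.23
sW, cW = math.sqrt(s2), math.sqrt(1.0 - s2)

for Lam_TeV in (2.0, 3.0, 3.6, 5.0):
    r = (v / (1000.0 * Lam_TeV)) ** 2
    dS = 2.0 * cW * sW / alpha * r     # |Delta S_new|, Eq. (11)
    dT = r / (2.0 * alpha)             # |Delta T_new|, Eq. (12)
    print(f"Lambda = {Lam_TeV:3.1f} TeV: |dS| = {dS:.3f}, |dT| = {dT:.3f}")
```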
It is well-known that fits to the electroweak data indicate a light Higgs. A simple fit can be done using only $`S`$ and $`T`$ as given above and the following parameterization of the Higgs contributions from Ref. : $`\mathrm{\Delta }S_H`$ $`=`$ $`0.091x_H-0.010x_H^2`$ (16) $`\mathrm{\Delta }T_H`$ $`=`$ $`-0.079x_H-0.028x_H^2+0.0026x_H^3`$ (17) where $`x_H=\mathrm{log}(m_h/100\mathrm{GeV})`$. Using these forms, one can do a fit demanding $`S_{fit}=\mathrm{\Delta }S_H+\mathrm{\Delta }S_{new}`$ and likewise for $`T`$. For the SM alone, a 95% CL upper bound of $`255\mathrm{GeV}`$ has been obtained . However it is clear that from the point of view of the oblique parameters, shifts in $`\mathrm{\Delta }S_H`$ and $`\mathrm{\Delta }T_H`$ can be compensated by similar shifts in $`\mathrm{\Delta }S_{new}`$ and $`\mathrm{\Delta }T_{new}`$. Thus we can derive an effective “95% CL bound” on the Higgs mass as a function of $`\mathrm{\Lambda }`$ under the requirement that the fit to the experimentally obtained $`S_{fit}`$ and $`T_{fit}`$ be no worse than that obtained for $`m_h=255\mathrm{GeV}`$ and $`\mathrm{\Lambda }\to \mathrm{}`$. (We do this by constructing a $`\chi ^2`$ distribution from $`S`$ and $`T`$ alone.) How large can the Higgs mass become with the inclusion of $`𝒪_{BW}`$ and $`𝒪_\mathrm{\Phi }`$? The answer is: quite large. Fitting to $`m_h`$ as a function of $`\mathrm{\Lambda }`$ and using $`S`$ and $`T`$ as “experimental” inputs, we find for particular choices of the signs of the operators (i.e., $`f_{BW}=f_\mathrm{\Phi }=+1`$) that the precision electroweak bound on the Higgs mass disappears completely for $`4\mathrm{TeV}\stackrel{<}{}\mathrm{\Lambda }\stackrel{<}{}11\mathrm{TeV}`$! (By “disappear” we mean that the 95% upper bound on $`m_h`$ exceeds the unitarity bound of approximately $`800\mathrm{GeV}`$ and so is meaningless.) Thus, in the context of gravitational physics at or below $`10\mathrm{TeV}`$, the usual claims that electroweak physics prefers a light Higgs do not hold. And even for $`\mathrm{\Lambda }`$ as high as $`17\mathrm{TeV}`$, the upper limit on the Higgs mass exceeds $`500\mathrm{GeV}`$. These results are summarized in Figure 1 where we show the 95% CL allowed range for $`m_h`$ as a function of $`\mathrm{\Lambda }\equiv \mathrm{\Lambda }_{BW}=\mathrm{\Lambda }_\mathrm{\Phi }`$. The hatched region at small $`\mathrm{\Lambda }`$ is ruled out because of its large contribution to $`S`$ and $`T`$, while the region at large $`\mathrm{\Lambda }`$ and large $`m_h`$ is ruled out because the new operators contribute too little to $`S`$ and $`T`$ to significantly affect the SM fit to the Higgs mass. However for intermediate $`\mathrm{\Lambda }`$ (unhatched region) it is clear that there is effectively no limit on the Higgs mass thanks to the effects of the new operators. (If the physics at $`\mathrm{\Lambda }`$ were weakly-coupled then we would expect that $`c_{BW}\sim e^2c_Ws_W`$; then allowing $`c_\mathrm{\Phi }\sim 1/4`$ would reproduce Fig. 1, only with the $`\mathrm{\Lambda }`$ rescaled by $`1/2`$. Thus the preference for a light Higgs in the SM is removed even for a weakly-coupled gauge theory if $`\mathrm{\Lambda }\sim `$ 2–5 TeV.) Finally, we note that the one other argument for a light Higgs, namely triviality, is no longer applicable in these models either. With such a low ultraviolet cutoff ($`\mathrm{\Lambda }\sim `$ few TeV), the Higgs self-coupling cannot run to its Landau pole for $`m_h\stackrel{<}{}1\mathrm{TeV}`$.
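The $`\chi ^2`$ exercise described above can be sketched in a few lines. The relative signs below (in particular a compensating sign for $`\mathrm{\Delta }S_{new}`$) and the use of the natural logarithm for $`x_H`$ are our reading of the text rather than established conventions, so the output should be taken as qualitative only:

```python
# Qualitative sketch of the S-T chi^2 fit: Higgs contributions from
# Eqs. (16)-(17), operator contributions from Eqs. (11)-(12) with
# f_BW = f_Phi = +1 (the sign choice the text says removes the bound).
import math

S_fit, sig_S = -0.14, 0.12     # sign conventions assumed; see the text
T_fit, sig_T = -0.22, 0.15
v, alpha = 246.0, 1.0 / 128.0
sWcW = math.sqrt(0.23 * (1.0 - 0.23))

def chi2(m_h, Lam_GeV, f_BW=+1, f_Phi=+1):
    x = math.log(m_h / 100.0)                  # x_H, natural log assumed
    dS_H = 0.091 * x - 0.010 * x**2
    dT_H = -0.079 * x - 0.028 * x**2 + 0.0026 * x**3
    r = (v / Lam_GeV) ** 2
    dS_new = -2.0 * sWcW / alpha * r * f_BW    # sign chosen to compensate dS_H
    dT_new = r / (2.0 * alpha) * f_Phi
    return (((S_fit - dS_H - dS_new) / sig_S) ** 2
            + ((T_fit - dT_H - dT_new) / sig_T) ** 2)

chi2_ref = chi2(255.0, 1.0e9)                  # m_h = 255 GeV, Lambda -> inf
for m_h in (255.0, 500.0, 800.0):
    for Lam in (4000.0, 7000.0, 11000.0):
        tag = "OK " if chi2(m_h, Lam) <= chi2_ref else "excl"
        print(f"m_h={m_h:5.0f} GeV, Lambda={Lam/1000:4.1f} TeV: "
              f"chi2={chi2(m_h, Lam):5.2f} ({tag})")
```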
## 4 Implications for Electroweak Symmetry Breaking The mechanism for electroweak symmetry breaking (EWSB) is unknown. Nevertheless, it is commonly believed that the Higgs boson exists, and is light. The two indirect indications for this are: * The successful prediction of the weak mixing angle from gauge coupling constant unification. This prediction results in theories with weak scale supersymmetry which are perturbative to a high scale; such theories have a light Higgs boson, $`m_h\stackrel{<}{}150\mathrm{GeV}`$ . * The experimental values of the precision electroweak observables are consistent with the standard model, at 95% C.L., only if $`m_h\stackrel{<}{}255\mathrm{GeV}`$ . If there are large extra dimensions allowing the fundamental scale, $`\mathrm{\Lambda }`$, to be in the TeV domain, neither of these points can be used to argue that the Higgs boson is light. For the first: it has not been demonstrated that it is possible to predict the weak mixing angle to the percent level of accuracy in these theories; furthermore, there is no need for the field theory below $`\mathrm{\Lambda }`$ to be supersymmetric since there is no large hierarchy between the weak scale and $`\mathrm{\Lambda }`$. The argument from fits to the precision electroweak observables applies only if the standard model is the correct theory up to scales of at least 10 TeV; it is a very weak bound which is immediately evaded by large extra dimensions, allowing several scenarios for EWSB: * Light Higgs ($`m_h<200\mathrm{GeV}`$): For $`\mathrm{\Lambda }\stackrel{>}{}20\mathrm{TeV}`$ some protection mechanism for the Higgs mass would be required; if this is supersymmetry, the Higgs will be light. For $`\mathrm{\Lambda }\sim `$ 1–3 TeV, if the tree level Higgs mass happened to vanish, EWSB and a light Higgs boson could result from 1 loop radiative corrections. * Heavy Higgs ($`m_h>200\mathrm{GeV}`$): This could arise for $`\mathrm{\Lambda }\sim `$ 1–3 TeV, if the Higgs mass parameter is somewhat less than $`\mathrm{\Lambda }`$, or alternatively for $`\mathrm{\Lambda }\sim `$ 3–10 TeV if the Higgs mass parameter vanishes at tree level but arises at 1 loop. In both cases a large value for the Higgs self coupling is needed, and the operators (9) and (10) must mimic the effects of a light Higgs in the $`S`$ and $`T`$ parameters. * No Higgs: Physics at the fundamental scale $`\mathrm{\Lambda }\sim `$ 1–3 TeV may itself cause EWSB. An example of this has already been proposed . In this case the theory below $`\mathrm{\Lambda }`$ will have $`SU(2)\times U(1)`$ realized non-linearly, and the chiral Lagrangian will have operators analogous to (9) and (10) which mimic the effects of a light Higgs in the $`S`$ and $`T`$ parameters. A light Higgs boson is just one possibility amongst several for EWSB, and is not preferred. We have shown that, in theories with large extra dimensions having $`𝒪_{BW,\mathrm{\Phi }}`$ with $`c_{BW,\mathrm{\Phi }}`$ of order unity, the precision electroweak data provide a lower bound on the fundamental scale, $`\mathrm{\Lambda }_{min}\sim 3\mathrm{TeV}`$. For values of $`\mathrm{\Lambda }`$ in the range (1–3)$`\times \mathrm{\Lambda }_{min}`$, the signs $`f_{BW,\mathrm{\Phi }}`$ are critical. For two sign choices, no successful fit can be found for any Higgs mass. For a third choice, a good fit to the data is found for Higgs masses all the way up to $`m_h=800\mathrm{GeV}`$. For the final choice, masses up to $`800\mathrm{GeV}`$ are also obtained, though the fits are less convincing.
Only in the case of very large $`\mathrm{\Lambda }`$ does the data still prefer a light Higgs, but then the quadratic fine-tuning of the light Higgs mass, to one part in $`\mathrm{\Lambda }^2/m_h^2`$, is reintroduced. In view of the bounds on $`\mathrm{\Lambda }_{min}`$ of 3–4 TeV from each of $`𝒪_{qq}`$ (7), $`𝒪_{\ell q}`$ (8), $`𝒪_{BW}`$ (9), and $`𝒪_\mathrm{\Phi }`$ (10), it may be felt that the exciting possibility of $`\mathrm{\Lambda }`$ in the 1–3 TeV range is unlikely. Why would all the relevant $`c_i`$ coefficients be small? One possibility is that the dominant interactions of the new physics at $`\mathrm{\Lambda }`$ preserve symmetries that are broken by the electroweak gauge interactions, including $`P`$, $`CP`$ and custodial $`SU(2)`$. If these symmetries are broken by sub-dominant interactions at $`\mathrm{\Lambda }`$, then the smallness of the relevant $`c_i`$ can be naturally explained. ## 5 Higgs Production and Decay For the case that there is a Higgs boson, either light or heavy, we now study the effects of $`𝒪_{G,\gamma }`$ of (3)–(4) on the signal for $`h\to \gamma \gamma `$ at hadron colliders. These operators have two immediate consequences. First, when both Higgs fields are set to their vacuum expectation values (vev's), the gauge couplings of QED and QCD are shifted. But these shifts can be reabsorbed into the definition of the gauge couplings and therefore have no observable implications. (If one attempts to unify the SM gauge couplings at some ultraviolet scale, or otherwise define theoretical relations among them, then these shifts will enter into the relation between the theoretical couplings and those extracted from data. However, for all but the lightest $`\mathrm{\Lambda }`$, this shift is smaller than the experimental uncertainties.) The second consequence is the possibility of unusual production and decay modes of the (physical) Higgs bosons. Taking one of the Higgs fields to its vev, one obtains terms in the effective Lagrangian: $`\mathcal{L}_{eff}=\mathrm{\cdots }+f_\gamma {\displaystyle \frac{v}{\mathrm{\Lambda }_\gamma ^2}}hF_{\mu \nu }F^{\mu \nu }+f_G{\displaystyle \frac{v}{\mathrm{\Lambda }_G^2}}hG_{\mu \nu }^aG^{a\mu \nu }+\mathrm{\cdots }`$ (18) where $`h`$ is the physical Higgs boson, $`v=246\mathrm{GeV}`$ and $`f_{\gamma ,G}=\pm 1`$ are unknown signs. First, $`𝒪_G`$ can contribute to the gluon fusion process $`gg\to h`$. It is well-known that the dominant production mode for Higgs bosons at the Tevatron and the LHC is through gluon fusion, via a loop of $`t`$-quarks. Because the process occurs at one-loop, non-renormalizable operators are more likely to provide a significant correction to the cross-section. Integrating out the $`t`$-quark, the relevant low-energy operator is then (for a recent discussion of the relevant SM Higgs physics, see ): $$\mathcal{L}_{G,eff}=\left(\frac{g\alpha _s}{24\pi M_W}I_G+f_G\frac{v}{\mathrm{\Lambda }_G^2}\right)hG_{\mu \nu }^aG^{a\mu \nu }$$ (19) where $`g`$ is the SU(2) coupling constant and $`I_G\to 1`$ $`(0)`$ for $`m_t^2\gg m_h^2`$ $`(m_t^2\ll m_h^2)`$. For $`\mathrm{\Lambda }\stackrel{<}{}4.5\mathrm{TeV}`$, the new physics will actually dominate the production of Higgs bosons. Note that the cross-section is maximized for constructive interference, $`f_G=-1`$, and minimized for $`f_G=+1`$. The operator $`𝒪_\gamma `$ does not contribute to Higgs production<sup>2</sup><sup>2</sup>2However, a large coefficient to $`𝒪_\gamma `$ could turn the NLC into an $`s`$-channel Higgs factory when run in $`\gamma \gamma `$ mode.
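The $`4.5\mathrm{TeV}`$ crossover can be checked directly by equating the two coefficients in Eq. (19); a minimal sketch with standard inputs:

```python
# Compare the SM coefficient in Eq. (19), g*alpha_s/(24*pi*M_W) with
# I_G -> 1, against the new-physics coefficient v/Lambda_G^2.
import math

g, alpha_s, M_W, v = 0.65, 0.118, 80.4, 246.0   # standard inputs (GeV units)

c_SM = g * alpha_s / (24.0 * math.pi * M_W)     # GeV^-1
Lam_cross = math.sqrt(v / c_SM)                  # scale where v/Lam^2 = c_SM
print(f"SM coefficient  : {c_SM:.3e} GeV^-1")
print(f"crossover scale : {Lam_cross/1000.0:.1f} TeV  (text quotes ~4.5 TeV)")
```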
However it can contribute to the decay of the Higgs into photons: $$\mathrm{\Gamma }(h\to \gamma \gamma )=\frac{|\beta |^2m_h^3}{4\pi }$$ (20) for $`\mathcal{L}=\beta hF_{\mu \nu }F^{\mu \nu }`$. In the SM, this process is dominated by loops of $`W`$-bosons and $`t`$-quarks. Integrating them out yields an effective operator: $$\mathcal{L}_{\gamma ,eff}=\left(\frac{g\alpha }{4\pi M_W}I_\gamma +f_\gamma \frac{v}{\mathrm{\Lambda }_\gamma ^2}\right)hF_{\mu \nu }F^{\mu \nu }$$ (21) where $`I_\gamma `$ varies from roughly $`0.5`$ to $`1.3`$ as $`m_h`$ is varied. Once again, the new physics will dominate the width for $`h\to \gamma \gamma `$ given $`\mathrm{\Lambda }_\gamma \stackrel{<}{}7\mathrm{TeV}`$. If $`m_h\stackrel{<}{}150\mathrm{GeV}`$, its decay width is dominated by final state $`b`$-quarks; then $`h\to \gamma \gamma `$ becomes the dominant decay mode given $`\mathrm{\Lambda }_\gamma \stackrel{<}{}1.5\mathrm{TeV}`$. However, even for larger $`\mathrm{\Lambda }_\gamma `$, the branching ratio for $`h\to \gamma \gamma `$ may be more than sufficient to provide a strong signal. The signal is maximized for $`f_\gamma =+1`$ (i.e., constructive interference of the SM and new physics) and minimized for $`f_\gamma =-1`$. (In the context of LEP, Ref. recently examined the effect of $`𝒪_\gamma `$ and related operators on $`e^+e^{-}\to 3\gamma ,qq\gamma \gamma `$ and found sensitivity there to new physics roughly below a scale $`\mathrm{\Lambda }\stackrel{<}{}600\mathrm{GeV}`$.) Unfortunately, the operator $`𝒪_G`$ can also contribute to the Higgs decay width via $`h\to gg`$, which is unobservable among the QCD backgrounds. In fact, to lowest order, $$\mathrm{\Gamma }(h\to gg)=8\left(\frac{\mathrm{\Lambda }_\gamma }{\mathrm{\Lambda }_G}\right)^4\mathrm{\Gamma }(h\to \gamma \gamma ).$$ (22) In the limit in which the new physics is dominating the Higgs decays and $`\mathrm{\Lambda }_\gamma \approx \mathrm{\Lambda }_G`$, the $`h\to gg`$ decays suppress the branching ratio for $`h\to \gamma \gamma `$ by about a factor of 10. However, once final state $`WW/ZZ`$ dominate the Higgs width, the decays to gluons provide no real additional suppression of the $`h\to \gamma \gamma `$ branching fraction. Finally we note that the interference of $`𝒪_G`$ with the SM gives simultaneously larger (smaller) Higgs cross-sections and larger (smaller) $`\mathrm{\Gamma }(h\to gg)`$. The sensitivity of any experiment to new physics in the Higgs channel is then a function of several variables: $`m_h`$, $`f_\gamma `$, $`f_G`$, $`\mathrm{\Lambda }_\gamma `$ and $`\mathrm{\Lambda }_G`$. There are four sign choices for $`f_\gamma ,f_G`$; we choose to study the two cases which maximize/minimize the signal at current and future colliders. The maximum signal case has $`f_\gamma =+1`$ and $`f_G=-1`$; we checked that over the entire range of interest the increase in the cross-section implied by $`f_G=-1`$ more than offset the corresponding increase in $`Br(h\to gg)`$. The minimum signal case has the opposite choice of both signs. Our analysis then has two parts. First we ignore the $`𝒪_G`$ operator (i.e., $`f_G=0`$) and determine the sensitivity of current and future experiments to new physics through $`𝒪_\gamma `$ alone. In this case, the production cross-section is simply that of the SM. Then in a second analysis we include both $`𝒪_\gamma `$ and $`𝒪_G`$. As we already noted, the effect of $`𝒪_G`$ is both to enhance the production but also to diminish the relative branching ratio of $`h\to \gamma \gamma `$. For the purposes of doing the numerical calculations, we have used (in a greatly modified form) the programs of M.
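An analogous check for Eq. (21) reproduces the quoted $`7\mathrm{TeV}`$ scale, and Eq. (22) is just the color-factor ratio; a short sketch:

```python
# Scale below which the O_gamma term overtakes the SM loop coefficient
# g*alpha*I_gamma/(4*pi*M_W) of Eq. (21), plus the factor-8 ratio of Eq. (22).
import math

g, alpha, M_W, v = 0.65, 1.0 / 128.0, 80.4, 246.0

for I_gamma in (0.5, 1.0, 1.3):                 # range quoted in the text
    c_SM = g * alpha * I_gamma / (4.0 * math.pi * M_W)
    Lam = math.sqrt(v / c_SM) / 1000.0          # TeV
    print(f"I_gamma = {I_gamma:3.1f}: crossover at Lambda_gamma ~ {Lam:.1f} TeV")

ratio = 8.0 * 1.0 ** 4                          # Lambda_gamma = Lambda_G
print(f"Gamma(h->gg)/Gamma(h->gamma gamma) = {ratio:.0f} for equal scales")
```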
Spira and collaborators . In all cases, we will work only to leading order. In the SM it has been found that NLO QCD corrections can change the cross-sections and decay widths by $`\sim 60\%`$ . Naively such changes appear to correspond only to $`\sim 10\%`$ shifts in $`\mathrm{\Lambda }`$, which are too small for the physics we are interested in here. However, it is possible that interference effects and enhanced backgrounds (i.e., $`h\to \gamma \gamma `$ in the SM) could produce a larger effect; we will not consider that possibility here. Throughout our analysis we also have to address issues of acceptances and backgrounds in an approximate manner. In Run I, CDF reported an efficiency times acceptance approaching 15% in inclusive $`\gamma \gamma +X`$ Higgs searches ; we will assume that this figure prevails at all future facilities. There are also two major sources of backgrounds for our $`\gamma \gamma `$ signal: SM processes which produce or fake $`\gamma \gamma `$, and the usual SM decay $`h\to \gamma \gamma `$ itself. The latter can be calculated explicitly. The former we estimate by fitting to the CDF background spectrum , appropriately scaled to the luminosity of future Tevatron runs, or the ATLAS background spectrum , appropriately scaled for LHC runs. In Figures 2(a)-(b) we show the sensitivity to $`\mathrm{\Lambda }_\gamma `$ that can be obtained at various machines by plotting their $`5\sigma `$ discovery reaches (with no $`𝒪_G`$ contribution). The colliders shown are: the Tevatron with $`\sqrt{s}=1.8\mathrm{TeV}`$ and $`100\text{pb}^{-1}`$ of luminosity (Run I), with $`\sqrt{s}=2\mathrm{TeV}`$ and $`2\text{fb}^{-1}`$ of luminosity (Run II), with $`\sqrt{s}=2\mathrm{TeV}`$ and $`30\text{fb}^{-1}`$ (a proposed Run III), and the LHC with $`\sqrt{s}=14\mathrm{TeV}`$ and $`10\text{fb}^{-1}`$ (initial luminosity) and $`100\text{fb}^{-1}`$ (final luminosity) respectively. (Note that the TeV Run I line falls below the region of parameter space plotted.) As one expects, once the $`h\to WW,ZZ`$ threshold opens up at $`m_h\sim 150\mathrm{GeV}`$, the large $`\mathrm{\Gamma }(h\to WW,ZZ)`$ is sufficient to overwhelm the photonic width and our experimental sensitivity drops significantly. Nonetheless, given the possibility of a light Higgs (and the robust arguments for one in supersymmetric frameworks), experimentalists should be encouraged to view $`h\to \gamma \gamma `$ as a viable and potentially large signal. In terms of extracting a conservative discovery reach for $`\mathrm{\Lambda }`$, Figure 2(b) should be used since it chooses $`f_\gamma `$ in order to minimize the signal. We note, for example, that the data from Run I cannot presently probe (or exclude) $`\mathrm{\Lambda }`$ above $`1\mathrm{TeV}`$, but that Run II should have a reach of approximately 1 – $`1.5\mathrm{TeV}`$ for a light Higgs. However it is important to realize that for generic $`f_\gamma `$, the various colliders may have reaches as high as those shown in Figure 2(a). Thus, for example, if the Higgs mass is below the $`WW`$ threshold, the LHC can possibly find a signal for $`\mathrm{\Lambda }`$ up to $`8\mathrm{TeV}`$ for a light Higgs! (Unfortunately, that scale could also be as low as $`4\mathrm{TeV}`$.) Figures 3(a)-(b) repeat the same analysis, but now with $`𝒪_G`$ included such that $`\mathrm{\Lambda }_G=\mathrm{\Lambda }_\gamma \equiv \mathrm{\Lambda }`$. We view these results as more realistic compared to those above in which only the $`𝒪_\gamma `$ operator was kept. We again show the same set of 5 collider options.
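The reach curves rest on a counting-experiment criterion. A toy version (with placeholder numbers, not values from the paper, apart from the 15% efficiency times acceptance) is:

```python
# Toy counting criterion: 5 sigma discovery requires S/sqrt(B) >= 5, with
# S = sigma * BR * L * eff and eff ~ 0.15 (the CDF figure quoted above).
# Cross-section, branching ratio and background count are hypothetical.
import math

def significance(sigma_fb, br, lumi_fb, bkg_events, eff=0.15):
    signal = sigma_fb * br * lumi_fb * eff
    return signal / math.sqrt(bkg_events)

# e.g. a hypothetical 300 fb cross-section, BR(h -> gamma gamma) = 0.3,
# 2 fb^-1 of Run II luminosity, 50 background events in the mass window
print(f"significance = {significance(300.0, 0.3, 2.0, 50.0):.1f} sigma")
```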
Figure 3(b) is the conservative $`5\sigma `$ discovery reach, chosen to minimize the $`pp,p\overline{p}\to h\to \gamma \gamma `$ rate. It is interesting that for a light Higgs, the limits are slightly stronger than those obtained with $`f_G=0`$; now even the Tevatron Run I data has the ability to probe scales above $`1\mathrm{TeV}`$. However the more noticeable difference is the ability to produce larger numbers of heavy Higgs bosons and observe their $`\gamma \gamma `$ decays. For example, the LHC is capable of probing scales near $`2\mathrm{TeV}`$ even for $`m_h=1\mathrm{TeV}`$. Figure 3(a) shows the maximal reach of the various colliders, with the LHC now extending its sensitivity to $`\mathrm{\Lambda }`$ as high as $`10\mathrm{TeV}`$ for a light Higgs! Finally, we summarize a few of our results for $`m_h=110`$, $`200`$ and $`500\mathrm{GeV}`$ for both exclusion and discovery in Table 1. All bounds assume $`\mathrm{\Lambda }_\gamma =\mathrm{\Lambda }_G`$. For each choice of the Higgs mass, we have shown a conservative limit on $`\mathrm{\Lambda }`$ which can be excluded, and a maximum $`\mathrm{\Lambda }`$ below which a signal may be discovered. Thus for the exclusion bounds ($`2\sigma `$) we have taken the interference effects to minimize the signal; for the maximum discovery reaches ($`5\sigma `$), we have chosen the interference effects to maximize the signal. We have attempted in this analysis to be rather conservative. For one thing, the $`2\sigma `$ exclusion limits of the various colliders are often several TeV higher than the $`5\sigma `$ discovery limits. Secondly, we have treated the discovery of the $`h\to \gamma \gamma `$ signal as simply a counting experiment, throwing away useful experimental information, for example on the shape of the diphoton mass spectrum, which would be available experimentally to help extract the signal from the backgrounds. Lastly, we have not included QCD corrections to the amplitudes, which we believe could increase the signal (though also increasing the “background” SM $`h\to \gamma \gamma `$ signal) by $`\sim 50\%`$. Therefore we believe that the reaches given here are to be taken as conservative values, insofar as one should take the scales deduced from naive power-counting seriously. ## 6 Conclusions In this paper we have studied two consequences of large extra dimensions for electroweak symmetry breaking: a relaxation of the precision electroweak bound on the Higgs boson mass, and an enhanced rate for $`\gamma \gamma `$ events at hadron colliders from Higgs decay. The relaxation of the precision electroweak bound on the Higgs mass applies when any new physics generates (9) and (10) at a scale of several TeV. It is well known that $`S`$ and $`T`$ depend only logarithmically on the Higgs boson mass, but it may not be appreciated that the mass bound can be evaded completely for a wide range of values of $`\mathrm{\Lambda }`$, extending as high as 10 TeV. For example, even a weakened bound of $`m_h<500\mathrm{GeV}`$ only applies if the standard model is the correct description of nature up to energies of $`17\mathrm{TeV}`$. We find this implausible, since it implies a fine tuning in the Higgs mass squared parameter of 1 part in 2000. There is only one strong argument for a light Higgs boson: the successful prediction of the weak mixing angle at the percent level of accuracy requires weak scale supersymmetry, and therefore a light Higgs boson.
In theories with large extra dimensions this argument is not applicable, since the percent level prediction for the weak mixing angle is lost. Hence, in these theories, there is no preference for a light Higgs boson, and thus alternatives with a heavy Higgs or no Higgs should be considered seriously. If there is a Higgs boson, we have shown that a generic signal of large extra dimensions is an anomalously large $`\gamma \gamma `$ signal at machines capable of producing Higgs bosons. Expectations from the SM put such a signal out of reach of the Tevatron. In Figure 3 we showed the $`5\sigma `$ discovery reaches for $`h\to \gamma \gamma `$ at the Tevatron and LHC. At Run II of the Tevatron collider this signal would be discovered for a light Higgs if $`\mathrm{\Lambda }`$ is less than 2 (3) TeV for destructive (constructive) interference. The LHC not only increases the discovery potential for a light Higgs boson, up to $`\mathrm{\Lambda }\sim 10\mathrm{TeV}`$ for constructive interference, but also has significant discovery potential up to the largest Higgs masses. This signal compares favorably with that of graviton production at colliders , especially if the scale which sets the size of the $`4+n`$ dimensional gravitational coupling is somewhat larger than the scale $`\mathrm{\Lambda }`$. ## Acknowledgements We are grateful to Nima Arkani-Hamed, Michael Chanowitz, Savas Dimopoulos and Henry Frisch for many useful conversations. This work was supported in part by the U.S. Department of Energy under contract DE–AC03–76SF00098 and by the National Science Foundation under grant PHY–95–14797.
no-problem/9904/cond-mat9904326.html
ar5iv
text
# Freezing by Heating in A Driven Mesoscopic System ## Abstract We investigate a simple model corresponding to particles driven in opposite directions and interacting via a repulsive potential. The particles move off-lattice on a periodic strip and are subject to random forces as well. We show that this model, which can be considered as a continuum version of some driven diffusive systems, exhibits a paradoxical, new kind of transition called here “freezing by heating”. One interesting feature of this transition is that a crystallized state with a higher total energy is obtained from a fluid state by increasing the amount of fluctuations. Most of the phenomena in our natural environment occur under far-from-equilibrium conditions resulting in a rich behavior both in time and space. An important class of such processes takes place in so-called driven systems, which have attracted considerable interest recently. In many of these systems, particles are driven either by an external field (force) , or they are self-propelled , and their collective behavior manifests itself in new kinds of transitions, including noise-induced ordering or an ordering in a continuous 2d velocity space. Phase transitions are also common in equilibrium systems, and the related analogies have represented an important contribution to the understanding of non-equilibrium processes. In most cases, lattice models have been considered to demonstrate non-equilibrium transitions. For example, jamming transitions have been seen in discretized traffic models and driven lattice gases . However, off-lattice (i.e., continuum) symmetry is known to bring in qualitatively new behaviour; in particular, this is definitely so in 2d, see, e.g., the XY model versus the Ising model (in equilibrium). A continuum model may lead to new effects due to the fact that the notions of order and disorder have extra facets in this case, and it can describe compressible systems in a more delicate way. In this paper we consider a simple continuum model exhibiting a paradoxical, new kind of transition that we call “freezing by heating”, which is closely related to situations of practical relevance. The model consists of particles driven in opposite directions and interacting through a simple repulsive potential. The particles move off-lattice on a periodic strip (in a two-dimensional tube) and are subject to random forces as well. The most interesting feature of the transition we find for this system is that a crystallized state with a higher total energy is reached from a fluid state, via a transient disordered state, by increasing the amount of fluctuations. In addition to the interest in the properties of driven systems in their own right, there are several further motivations to study such a model. A system of light (rising) and heavy (sinking) particles in a vertical column of fluid, pedestrians moving in a passage, or a system of oppositely charged colloidal particles in an electric field represent potential applications of our model. In fact, the system we study is a generalization to the continuum case of a two-species driven lattice gas model proposed recently , with a number of relevant modifications arising from the adaptation to the off-lattice case. In a wider context, these models can be considered as simplified paradigms of systems consisting of entities with opposing interests (drives).
In the present work, we consider the behavior of a limited number of particles in a confined geometry, and our results are primarily valid for this “mesoscopic” situation. In the quickly growing literature on mesoscopic systems there are many examples of the potential practical relevance of phenomena occurring in various models for finite sizes . We denote the location of particle $`i`$ at time $`t`$ by $`𝒙_i(t)`$ and its velocity $`d𝒙_i(t)/dt`$ by $`𝒗_i(t)`$. Furthermore, we assume the acceleration equation $`m{\displaystyle \frac{d𝒗_i(t)}{dt}}`$ $`=`$ $`m{\displaystyle \frac{v_0𝒆_i-𝒗_i(t)}{\tau }}+𝝃_i(t)`$ (1) $`+`$ $`{\displaystyle \underset{j(\ne i)}{\sum }}𝒇_{ij}(𝒙_i(t),𝒙_j(t))+𝒇_\mathrm{b}\left(𝒙_i(t)\right).`$ (2) $`m`$ is the mass of the particle, $`v_0`$ the velocity with which it tends to move in the absence of interactions, $`\tau `$ the corresponding relaxation time, and $`𝒆_i\in \{(1,0),(-1,0)\}`$ the direction into which particle $`i`$ is driven. $`\gamma =m/\tau `$ may be interpreted as a friction coefficient. $`𝒇_{ij}`$ represents the repulsive interactions between particles $`i`$ and $`j`$, $`𝒇_\mathrm{b}`$ the interactions with the boundaries, and $`𝝃_i`$ the fluctuations of the individual velocities. For the interactions between the particles, we have chosen the simple function $`𝒇_{ij}(𝒙_i,𝒙_j)`$ $`=`$ $`-\mathbf{\nabla }_{𝒙_i}A(d_{ij}-D)^{-B},`$ (3) depending on the parameters $`A`$ and $`B`$, and the distance $`d_{ij}(t)=|𝒙_i(t)-𝒙_j(t)|>D`$ only. Thus, $`𝒇_{ij}`$ describes the effect of a soft repulsive potential of particle $`j`$ with a hard core of diameter $`D`$, reflecting the space occupied by the particle. Our choice of these details of the model corresponds to a motion of finite sized particles tending to avoid collisions and maintaining, if possible, a given velocity $`v_0`$. In addition, the interactions with the boundaries were assumed to be $`𝒇_\mathrm{b}(𝒙_i)`$ $`=`$ $`-\mathbf{\nabla }_{𝒙_i}A(d_i-D/2)^{-B},`$ (4) where $`d_i`$ denotes the shortest distance to the closest wall. In contrast to previous studies of similar models , we will now investigate the decisive role of the fluctuations $`𝝃_i(t)`$, which have been assumed to be uncorrelated and distributed according to a truncated normal distribution with vanishing mean value and finite variance $`\theta `$ . We started our simulations with N particles randomly distributed on a strip without allowing overlaps. For half of the particles a driving into the $`(1,0)`$ direction, and for the other half a driving into the $`(-1,0)`$ direction was assigned. Numerical integration of equation (2), using periodic boundary conditions, has produced the following results: For small noise amplitudes $`\theta `$ and sufficiently small particle densities, our simulations lead, depending on the strip width and the initial condition, to the formation of two or more coherently moving linear structures (just as if the particles moved along traffic lanes) (cf. Fig. 1a). For relatively large $`N`$ (if the available area is too small to allow freely moving lanes), jamming occurs. For a small intermediate density region, we find lane formation or jamming, depending on the respective initial condition. At small noise amplitudes, the mechanism of lane formation, which produces a “fluid” state, is very dominant and robust in our model. This can be understood as follows: Particles moving against the stream or in areas of mixed directions of motion will have frequent and strong interactions, because of high relative velocities.
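For readers who want to experiment, a minimal molecular-dynamics sketch of Eqs. (2)–(4) follows. It is not the authors' code: the integrator is a plain Euler scheme, overlapping pairs are simply skipped, and all parameter values are illustrative.

```python
# Minimal sketch of Eqs. (2)-(4): soft repulsion V(d) = A (d - D)^(-B),
# half of the particles driven to +x, half to -x, periodic boundary in x,
# repulsive walls in y.  Parameters are illustrative; the Euler scheme is
# deliberately crude (a proper Langevin step would scale noise by sqrt(dt)).
import numpy as np

N, Lx, Ly = 20, 10.0, 3.0
m, tau, v0 = 1.0, 0.5, 1.0
A, B, D = 0.05, 2.0, 0.4
theta, dt, steps = 0.1, 0.005, 2000

rng = np.random.default_rng(1)
x = rng.uniform(0.0, [Lx, Ly], size=(N, 2))
v = np.zeros((N, 2))
e = np.zeros((N, 2))
e[: N // 2, 0] = 1.0        # driven to +x
e[N // 2 :, 0] = -1.0       # driven to -x

def forces(x):
    f = np.zeros_like(x)
    for i in range(N):
        r = x[i] - x                                  # vectors from j to i
        r[:, 0] -= Lx * np.round(r[:, 0] / Lx)        # minimum image in x
        d = np.linalg.norm(r, axis=1)
        ok = (d > D) & (np.arange(N) != i)            # skip self and overlaps
        mag = A * B * (d[ok] - D) ** (-B - 1.0)       # |dV/dd|, Eq. (3)
        f[i] += (mag[:, None] * r[ok] / d[ok, None]).sum(axis=0)
        for dw, sgn in ((x[i, 1], 1.0), (Ly - x[i, 1], -1.0)):  # walls, Eq. (4)
            if dw > D / 2.0:
                f[i, 1] += sgn * A * B * (dw - D / 2.0) ** (-B - 1.0)
    return f

for _ in range(steps):
    xi = np.clip(rng.normal(0.0, theta, (N, 2)), -3.0 * theta, 3.0 * theta)
    a = (v0 * e - v) / tau + (forces(x) + xi) / m     # Eq. (2) divided by m
    v += a * dt
    x += v * dt
    x[:, 0] %= Lx                                     # periodic in x

E_inst = np.mean(np.sum(v * e, axis=1)) / v0          # instantaneous Eq. (5)
print(f"instantaneous efficiency E ~ {E_inst:.2f}")
```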
In each interaction, the encountering particles move a little aside to pass each other. This sidewards movement tends to separate oppositely moving particles. Nevertheless, jamming may sometimes occur, but in most cases it also supports lane formation (see next paragraph). Particles moving in uniform lanes have very rare and weak interactions. Hence, the tendency to break up existing lanes is negligible when the fluctuations are small. Furthermore, the most stable configuration corresponds to a state with a minimal interaction rate and is related with a maximum efficiency of motion . Whereas spontaneous lane formation was also observed in previous studies of related models with deterministic dynamics only , in the present, more realistic model we have discovered a surprising phenomenon when we increased the noise amplitude. If the fluctuations and the particle number are large enough, the particles crystallize into a hexagonal lattice. This is a consequence of several subsequent steps: First, the fluctuations are able to prevent lane formation or even to destroy previously existing lanes. This is so because sufficiently strong diffusion can prevent structure formation. Second, some of the oppositely moving particles block each other locally from time to time. Third, this gives rise to jamming since, meanwhile, additional particles arrive at the boundaries of the blocked area. Fourth, if the jam exists long enough, both of its ends expand over the full width of the strip and develop “flat” boundaries perpendicular to $`𝒆_i`$, in order to reach a balance of forces. For the same reason, the particles tend to arrange in a hexagonal lattice structure, very much like in a crystal. Fifth, the crystal is stationary only if the interface between the oppositely moving particles is also, by chance, flat enough (cf. Fig. 1c). In most cases, however, the interface is rough (cf. Fig. 1b), i.e. in some of the horizontal layers, a majority of particles is pushing in one direction. As a consequence, the most advanced part(s) of the interface eventually break(s) through, which requires a continuous model, where the distance kept among the particles is flexible enough. In this way, particles with uniform directions of motion form “channels”, which tend to produce lanes at sufficiently small densities and noise intensities; otherwise the particles jam again and again (as described above), until they end up in a stationary crystal. Due to the above described mechanism the crystallized state is metastable, i.e., sensitive to structural perturbations (like the interchange of a few particles in our case). The crystallized state can also be destroyed by ongoing fluctuations with extreme noise amplitudes, giving rise to a third, disordered (“gaseous”) state with randomly distributed particles. Thus, with increasing “temperature” $`\theta `$, we have the untypical sequence of transitions fluid $`\to `$ solid $`\to `$ gaseous. Interestingly, for a range of moderate densities we find a “fluid” state with lanes at small noise amplitudes most of the time, but a crystallized (“frozen”) state if the noise amplitude (“temperature”) is large. We call this transition “freezing by heating”. Starting with random initial conditions, the transition is rather smooth (cf. Fig. 2).
This is partially so because the system can also become frozen at a relatively low noise amplitude, if the disorder in the initial state (in the sense of the deviation from a freely moving lane state) is large enough (which has an effect similar to additional fluctuations). The transition becomes sharper, if we always start with a two-lane state but with different random seeds. In any case, the transition is hysteretic, since the noise-induced “frozen” state remains when the noise amplitude is reduced again. To characterize the state of the system, we calculated various quantities. The expression $$E=\underset{T\to \mathrm{}}{lim}\frac{1}{T}\underset{0}{\overset{T}{\int }}𝑑t\frac{1}{N}\underset{i=1}{\overset{N}{\sum }}\frac{𝒗_i(t)\cdot 𝒆_i}{v_0},$$ (5) for which we expect the relation $`0\le E\le 1`$, is a measure for the “efficiency” of motion, i.e., $`Ev_0`$ is the average speed at which the particles are able to move in their respective “target direction” $`𝒆_i`$. $`E\approx 1`$ corresponds to lanes, $`E=0`$ to a crystallized state. Representative simulation results for the ensemble average $`\langle E\rangle `$ as a function of the noise intensity $`\theta `$ are displayed in Figure 2. We observed the following parameter dependencies: Crystallization is more pronounced for large $`\tau `$ and large strip lengths $`L_x`$, while small $`\tau `$ and large strip widths $`L_y`$ are in favour of lane formation (but “freezing by heating” still exists in the overdamped limit $`\tau \to 0`$). The number of particles required for crystallization does not depend on the length $`L_x`$ (if it is considerably larger than $`L_y`$), while it is roughly proportional to the width $`L_y`$ (for large enough $`L_y`$). Given a fixed aspect ratio $`L_x/L_y`$, for the system sizes that we could numerically handle there was no clear tendency whether the transition becomes sharper or smoother with increasing system size $`N\propto L_xL_y`$ (see Fig. 2). Another interesting quantity is the sum of the potential and kinetic energies associated with a given state: $$W=\underset{T\to \mathrm{}}{lim}\frac{1}{T}\underset{0}{\overset{T}{\int }}𝑑t\left[\underset{i}{\sum }\frac{m}{2}𝒗_i^2+\frac{1}{2}\underset{i\ne j}{\sum }A(d_{ij}-D)^{-B}\right].$$ (6) The paradox here is that the above mentioned crystallized state is usually more unstable than the fluid state, in the sense that the total energy (6) of the system is higher in the crystallized state than in the “fluid” one (see inset of Fig. 2). Note that both the “solid” (crystallized) state and the “fluid” state (i.e., lanes) are destabilized if the friction term $`𝒗_i/\tau `$ is dropped, even in the case $`\theta =0`$ (see Ref. for a related inverse phenomenon). That is, without friction and due to the permanent driving, the undamped repulsive interactions eventually become destructive to any ordered state, which gives rise to a “gaseous” state. Therefore, we point out that the energetically less favourable crystallized state is maintained by the propulsion term $`m(v_0𝒆_i-𝒗_i)/\tau `$ in Eq. (2) which, by the way, is also relevant for lane formation. Note that the absolute value of this term becomes largest for $`𝒗_i=\mathrm{𝟎}`$ (i.e. blocking), while it is small for the “fluid” state with $`𝒗_i\approx v_0𝒆_i`$. We consider the transition to a stationary state with a higher total energy by increasing the noise intensity to be a signature of a novel class of behavior in certain non-equilibrium systems, which may have interesting applications. However, here we have demonstrated “freezing by heating” only for limited sizes and a specific geometry.
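A sketch of how the two diagnostics, Eqs. (5)–(6), can be evaluated on stored trajectories (arrays of shape `[T, N, 2]`); periodic images are ignored for brevity, and the interaction parameters match the illustrative ones of the integration sketch above.

```python
# Evaluate the efficiency (Eq. (5)) and total energy (Eq. (6)) on stored
# trajectory arrays xs, vs of shape [T, N, 2].
import numpy as np

def efficiency(vs, e, v0):
    """Time and particle average of v_i . e_i / v0, Eq. (5)."""
    return np.mean(np.einsum('tnd,nd->tn', vs, e)) / v0

def total_energy(xs, vs, m=1.0, A=0.05, B=2.0, D=0.4):
    """Time-averaged kinetic plus pair-potential energy, Eq. (6)."""
    W = 0.0
    for x, v in zip(xs, vs):
        W += 0.5 * m * np.sum(v ** 2)
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        dd = d[np.triu_indices(len(x), k=1)]          # each pair counted once
        W += np.sum(A * (dd[dd > D] - D) ** (-B))     # hard-core pairs skipped
    return W / len(xs)
```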
Nevertheless, we point out that “freezing by heating” does not require walls, but occurs for periodic boundary conditions in $`y`$-direction as well. Also, we do not need periodic boundary conditions in $`x`$-direction. Any sufficiently long simulation area will produce both the organized fluid state (i.e. lanes) and the crystallized state, provided the system is continuously entered by particles at the left-hand and right-hand boundaries. Why is freezing by heating new? Some glasses may crystallize when slowly heated. However, this is a well understood phenomenon: the amorphous state is metastable for those temperatures, and crystallization means an approach to the more stable state with smaller total energy. In general, one can distinguish three cases when fluctuations (i.e. temperature or external perturbations) are increased: (i) Total energy increases and order is destroyed (e.g., melting). (ii) Total energy decreases, ordering takes place, and the system goes from a disordered metastable to an ordered stable state (e.g., in metallic glasses and some granular systems). (iii) Total energy increases and ordering takes place, while the system goes from a partially ordered stable to a highly ordered metastable state, which corresponds to the new situation presented here. In our case, crystallization is achieved by spontaneously driving the system with the help of noise uphill towards higher total energy. The system would like to maximize its efficiency , but instead it ends up with minimal efficiency due to noise-induced crystallization. The role of “temperature” or noise here is to destroy the energetically more favourable fluid state, which inevitably leads to jamming and finally to crystal-like lattices. The corresponding transition seems to be related to the off-lattice nature of our model and is different from those reported for driven diffusive systems on a lattice . It should be noted that the transition we find is not sharp, which is a consequence partly of the mesoscopic nature of the phenomenon and partly of the disorder in the initial state. We would like to point out that “freezing by heating” is likely to be relevant to situations involving pedestrians under extreme conditions (panics). Imagine a very smoky situation, caused by a fire, in which people do not know which is the right way to escape. When panicking, people will just try to get ahead, with a reduced tendency to follow a certain direction. Thus, fluctuations will be very large, which can lead to fatal blockings. Our results demonstrate that in driven mesoscopic systems phenomena qualitatively different from those occurring in thermodynamical systems can be observed. While most non-equilibrium transitions have analogies to equilibrium ones, the noise-induced ordering observed in the effect of “freezing by heating” is just opposite to the transitions occurring in equilibrium systems. This suggests that future studies along the lines of the present approach are likely to lead to further unexpected findings. Acknowledgments: D.H. wants to thank the DFG for financial support by a Heisenberg scholarship and D. Mukamel for helpful suggestions. This work was supported by OTKA F019299 and FKFP 0203/1997.
no-problem/9904/hep-ph9904264.html
ar5iv
text
# Next-to-Leading Order Description of Nucleon Structure Function In Valon Model ## I INTRODUCTION Our understanding of hadrons is based on QCD for the interpretation of Deep Inelastic Scattering (DIS) data, together with the spectroscopic description of hadrons in terms of massive constituent quarks. At low energies the static properties of the hadrons can be deduced from the latter. The most striking feature of the hadron structure is intimately related to its nonvalence quark composition. This substructure can be generated in QCD; however, the perturbative approach to QCD does not provide absolute values of the observables and requires the input of non-perturbative matrix elements. Experimental data accumulated during the past ten years have shown that the Gottfried Sum Rule is violated, suggesting strong $`SU(2)`$ symmetry breaking in the nucleon sea. It is possible to resolve the violation of the Gottfried sum rule by allowing a non-perturbative component to the nucleon sea, or in a chiral quark model: in the region between the chiral symmetry-breaking scale $`\mathrm{\Lambda }_\chi \sim 1GeV`$ and the confinement scale $`\mathrm{\Lambda }_{QCD}\sim `$ 0.1–0.3 GeV , a hadron can be treated as a weakly bound state of effective constituent quarks. The large violation of the Ellis-Jaffe sum rule implies that only a small fraction of the proton helicity is carried by the quarks, leading to the so-called ”SPIN CRISIS”. The process of quark evolution produces a large asymmetry for the gluon, which at a scale of $`10(\frac{GeV}{c})^2`$ can be quite large, $`\mathrm{\Delta }g\left[Q^2=10(\frac{GeV}{c})^2\right]\sim 4`$, and counterintuitive. In fact, at this scale $`\mathrm{\Delta }g`$ can be anything; so, something has to compensate for the spin component carried by the radiated gluon in the course of evolution. It turns out that the orbital angular momentum of the produced $`q\overline{q}`$ and $`qg`$ pairs is the compensating agent, which finds a natural place in the constituent picture of the nucleon, provided that a constituent quark is viewed as an extended quasiparticle object composed of a valence quark swirling with a cloud of gluons and sea $`q\overline{q}`$ pairs. All these fairly successful theoretical attempts suggest the presence of clusters in the nucleon. It seems that there is a certain relationship between the constituent quark model of the hadron and its partonic structure. Therefore, it is interesting to describe the nucleon structure in terms of these quasiparticle constituent quarks. The picture that emerges is as follows: at high enough $`Q^2`$ values it is the structure of the constituent quark that is being probed, while at sufficiently low $`Q^2`$ the constituent structure can no longer be resolved. Of course, the above presented picture is not new. In 1974 Altarelli et al. pioneered such a model, and more recently, the pion structure function was constructed by Altarelli and co-workers in a constituent model. In the early 1980’s R.C. Hwa proposed a similar model, the so-called valon model, which was more elaborate and very successful in analyzing a range of observed phenomena . The rise of $`F_2`$ at small $`x`$ and the larger scale of $`Q^2\sim 5GeV^2`$ possesses a Lipatov-type behavior, whereas at the smaller scale of $`Q^2\sim `$ 1–3 $`GeV^2`$ it is predicted to develop a Regge-type $`x`$-dependence; that is, parton distributions and $`F_2`$ are expected to be flat or almost flat .
Although this latter prediction has not been confirmed yet at HERA, there is some evidence of flattening of $`F_2`$ at the smallest $`x`$-bins in the HERA data as well as in the EMC and NMC data. In this paper we will utilize the essence of the valon model of Ref. in an attempt to investigate the above mentioned qualitative regularities in the constituent quark model of hadronic structure in the HERA region, which is now extended to very low-$`x`$ values by the H1 , , and ZEUS collaborations. The organization of the paper is as follows: in Section II we give a brief description of the valon model on which this work is based, and then calculate the nucleon structure in terms of the structure of the valons. In Section III, an explicit parameterization of the parton distributions and the numerical calculations are outlined. In Section IV we discuss some qualitative implications of the model for the spin structure of the nucleon. Finally, in Section V, we discuss the results and elaborate on the conclusions. ## II THE VALON MODEL The valon model is a phenomenological model which has proven to be very useful in its application to many areas of hadron physics. The main features of the model are as follows (detailed work can be found in Ref. and references therein): A valon is defined to be a dressed valence quark in QCD with a cloud of gluons and sea quarks and antiquarks. Its structure can be resolved at high enough $`Q^2`$ probes. The process of dressing is an interesting subject of its own; in fact, progress has been made to derive the dressing process in the context of QCD . A valon is an effective quark behaving as a quasiparticle. In the scattering process the virtual emission and absorption of gluons in a valon becomes bremsstrahlung and pair creation, which can be calculated in QCD. At sufficiently low $`Q^2`$ the internal structure of a valon cannot be resolved and hence it behaves as a structureless valence quark. At such a low value of $`Q^2`$, the nucleon is considered as a bound state of three valons, UUD for the proton. The binding agent is assumed to be very soft gluons or pions. The constituent picture of the hadron can also be based on the Nambu-Jona-Lasinio (NJL) model including the six-fermion $`U(1)`$ breaking term. In this model, the transition to the partonic picture is described by the introduction of the chiral symmetry breaking scale $`\mathrm{\Lambda }_\chi `$ . Nevertheless, for our purpose, the combination of all these effects is summarized in finding the parton distributions in the constituent quark and arriving at the nucleon structure functions. One subtle point is that the valons, or constituent quarks, are not free; gluons are also needed. Thus, in addition to the valon degrees of freedom, gluon degrees should also be considered. However, it is not known whether, in an infinite-momentum frame, the valons carry all of the hadron momentum. This concern ultimately ought to be settled by a reliable theory of confinement. As a working hypothesis we shall assume that the valons exhaust the hadron momentum. Let us denote the distribution of a valon in a hadron by $`G_{\frac{v}{h}}(y)`$ for each valon $`v`$. It satisfies the normalization condition $$\int _0^1G_{\frac{v}{h}}(y)𝑑y=1.$$ (1) and the momentum sum rule: $$\underset{v}{\sum }\int _0^1G_{\frac{v}{h}}(y)y𝑑y=1$$ (2) where the sum runs over all valons in the hadron $`h`$.
Nucleon structure function $`F^N(x,Q^2)`$ is related to the valon structure function $`f^v(\frac{x}{y},Q^2)`$ by the convolution theorem as follows: $$F^N(x,Q^2)=\underset{v}{\sum }\int _x^1𝑑yG_{\frac{v}{N}}(y)f^v(\frac{x}{y},Q^2)$$ (3) We note that valons are a universal property of the hadron and therefore their distribution is independent of the nature of the probe and the $`Q^2`$ value. As the $`Q^2`$ evolution matrix of $`F_2`$ does not depend on the target, the $`Q^2`$ evolution of partonic density is the same for partons in a proton or in a valon. It follows, then, that if the convolution is valid at one $`Q^2`$, it will remain valid at all $`Q^2`$. Notice that the moments of the parton densities $$M(n,Q^2)=\int _0^1𝑑xx^{n-1}P(x,Q^2)$$ (4) are simply given by the sum of products of the moments $$M(n,Q^2)=\underset{v}{\sum }M_{\frac{v}{h}}(n)M^v(n,Q^2)$$ (5) At sufficiently high $`Q^2`$, $`f^v(\frac{x}{y},Q^2)`$ can be calculated accurately in the leading-order (LO) and Next-to-leading order (NLO) results in QCD and its moments are expressed in terms of the evolution parameter defined by: $$s=\mathrm{ln}\frac{\mathrm{ln}(Q^2/\mathrm{\Lambda }^2)}{\mathrm{ln}(Q_0^2/\mathrm{\Lambda }^2)}$$ (6) where $`\mathrm{\Lambda }`$ and $`Q_0`$ are the scale parameters to be determined from the experiments. From the theoretical point of view, both $`\mathrm{\Lambda }`$ and $`Q_0`$ should depend on the order of the moments; however, in our approximation we will take them independent of $`n`$. Since $`f^v(z,Q^2)`$ is free of bound state complications, which are summarized in $`G_{\frac{v}{h}}(y)`$, we can describe a U-type valon structure function, say $`F_2^U`$, as: $$F_2^U=\frac{4}{9}(G_{\frac{u}{U}}+G_{\frac{\overline{u}}{U}})+\frac{1}{9}(G_{\frac{d}{U}}+G_{\frac{\overline{d}}{U}}+G_{\frac{s}{U}}+G_{\frac{\overline{s}}{U}})+\mathrm{\cdots }$$ (7) where $`G_{\frac{q}{U}}`$ are the probability functions for quarks and antiquarks to have momentum fraction $`z`$ in a $`U`$-type valon at $`Q^2`$. Similar expressions can be written for the $`D`$-type valon. Structure of a valon, then, can be written in terms of flavor singlet (S) and flavor nonsinglet (NS) components as: $$F_2^U(z,Q^2)=\frac{2}{9}z\left[G^S(z,Q^2)+G^{NS}(z,Q^2)\right]$$ (8) $$F_2^D(z,Q^2)=\frac{1}{9}z\left[2G^S(z,Q^2)-G^{NS}(z,Q^2)\right]$$ (9) For the electron or muon scattering $`G^S`$ and $`G^{NS}`$ in eqs. (8)–(9) are defined as: $$G^S=\underset{i=1}{\overset{f}{\sum }}(G_{\frac{q_i}{v}}+G_{\frac{\overline{q}_i}{v}}),\qquad G^{NS}=\underset{i=1}{\overset{f}{\sum }}(G_{\frac{q_i}{v}}-G_{\frac{\overline{q}_i}{v}})$$ (10) For the neutrino and anti-neutrino scattering, similar relations can be written, but we will not present them here since we are mainly concerned with HERA data. In the moment representation we will have: $$M_2(n,Q^2)=\int _0^1𝑑xx^{n-2}F_2(x,Q^2)$$ (11) $$M_\gamma (n,Q^2)=\int _0^1𝑑xx^{n-1}G_\gamma (x,Q^2)$$ (12) where $`\gamma =\frac{v}{N}`$, $`S`$, $`NS`$. From these equations, for a nucleon eq. (5) follows: $$M^N(n,Q^2)=\underset{v}{\sum }M_{\frac{v}{N}}(n)M^v(n,Q^2)$$ (13) Solution of the renormalization group equation of QCD provides the moments of singlet and nonsinglet valon structure functions in the LO and NLO, and they can be expressed in terms of the evolution parameter of eq. (6). These moments are given in , and . To evaluate the nucleon structure function we need the distribution of valons in a nucleon.
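The statement that moments factorize under the convolution (3) is easy to verify numerically. In the sketch below, $`G`$ is taken as the $`U`$-valon shape of Eq. (18) (which appears later in the text), while $`f`$ is a toy shape chosen only for illustration; the moment weights follow Eqs. (11)–(12).

```python
# Sanity check of the convolution structure in Eq. (3): the x^(n-2) moment
# of F(x) = int_x^1 dy G(y) f(x/y) equals the y^(n-1) moment of G times the
# z^(n-2) moment of f (the weights of Eqs. (11)-(12)).
from scipy.integrate import quad, dblquad

G = lambda y: 7.98 * y**0.65 * (1.0 - y)**2   # U-valon shape, Eq. (18)
f = lambda z: (1.0 - z)**3                    # toy valon structure function
n = 3

lhs, _ = dblquad(lambda y, x: x**(n - 2) * G(y) * f(x / y),
                 0.0, 1.0, lambda x: x, lambda x: 1.0)
m_G = quad(lambda y: y**(n - 1) * G(y), 0.0, 1.0)[0]
m_f = quad(lambda z: z**(n - 2) * f(z), 0.0, 1.0)[0]
print(f"moment of convolution = {lhs:.6f}")
print(f"product of moments    = {m_G * m_f:.6f}")
```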
To proceed, we take a simple form for the exclusive valon distribution $$G_{UUD}(y_1,y_2,y_3)=Ny_1^\alpha y_2^\alpha y_3^\beta \delta (y_1+y_2+y_3-1)$$ (14) The inclusive distribution can be obtained by double integration over the unspecified variables, for example: $$G_{U/p}(y_1)=\int 𝑑y_2\int 𝑑y_3G_{UUD}(y_1,y_2,y_3)=B(\alpha +1,\alpha +\beta +2)^{-1}y_1^\alpha (1-y_1)^{\alpha +\beta +1}$$ (15) The normalization factor is fixed by the requirement that $$\int G_{U/p}(y)𝑑y=\int G_{D/p}(y)𝑑y=1$$ (16) The moments of these inclusive valon distributions are calculated according to equation (12) and they are given by $$U(n)=\frac{B(a+n,a+b+2)}{B(a+1,a+b+2)},\qquad D(n)=\frac{B(b+n,2a+2)}{B(b+1,2a+2)}$$ (17) where $`B(i,j)`$ is the Euler beta function, with $`a=0.65`$ and $`b=0.35`$. After performing the inverse Mellin transformation we get for the valon distributions: $$G_{U/p}=7.98y^{0.65}(1-y)^2,\qquad G_{D/p}=6.01y^{0.35}(1-y)^{2.3}$$ (18) ## III PARTON DISTRIBUTIONS To determine the parton distributions, we denote their moments by $`M_x(n,s)`$, where the subscript $`x`$ stands for the valence quarks $`u_v`$ and $`d_v`$, as well as for the sea quarks, antiquarks and the gluon. The moments are functions of the evolution parameter, $`s`$. They are given as $$M_{u_v}(n,s)=2U(n)M^{NS}(n,s),\qquad M_{d_v}(n,s)=D(n)M^{NS}(n,s)$$ (19) $$M_{sea}(n,s)=\frac{1}{2f}\left[2U(n)+D(n)\right]\left[M^S(n,s)-M^{NS}(n,s)\right]$$ (20) $$M_g=\left[2U(n)+D(n)\right]M_{gQ}(n,s)$$ (21) where $`M^S(n,s)`$ and $`M^{NS}(n,s)`$ are the moments of the singlet and nonsinglet valon structure functions and $`M_{gQ}(n,s)`$ is the quark-to-gluon evolution function. $`U(n)`$ and $`D(n)`$ are the $`U`$ and $`D`$ type valon moments, respectively, and are given in the previous section. Calculation of $`M_x`$ is simple; in what follows, instead, the results are presented in parametric form in the NLO. In determining the parton distributions, we have used a procedure which is consistent with our physical picture. To assure that we are at a high enough $`Q^2`$ value, we first used a set of data at $`Q^2=12GeV^2`$ from the 1992 run of the H1 collaboration, which covers the $`x`$ interval $`x=[0.000383,0.0133]`$. That is, for a single value of $`s`$, or $`Q^2`$, we fit the moments by a sum of beta functions that are the moments of the form: $$xq_v(x)=a(1-x)^bx^c,\qquad xq_{sea(gluon)}(x)=\underset{i=1}{\overset{3}{\sum }}a_i(1-x)^{b_i}$$ (22) The fit is effective, as seen in Figure 1. The parameters $`a`$, $`b`$, $`c`$, $`a_i`$, $`b_i`$ are further considered to be functions of $`s`$, the evolution parameter. For the valence sector: $$a=a_0+a_1s+a_2s^2$$ (23) and similarly for $`b`$ and $`c`$. For the non-valence sector we have $$a_i=\alpha _i+\beta _i\mathrm{exp}(s/\gamma _i)$$ (24) with a similar form for $`b_i`$. The values of these parameters are given in the appendix. Since the formalism has to include both the valence and the sea quarks, it is further required that the valence distributions satisfy the normalization conditions $$\int _0^1q_{u_v}(x,Q^2)𝑑x=2,\qquad \int _0^1q_{d_v}(x,Q^2)𝑑x=1$$ (25) at all $`Q^2`$ values, reflecting the number of valence quarks of each type. For the sea quark distributions we will assume $`SU(2)`$ flavor symmetry breaking, inferred from the violation of the Gottfried sum rule . Implementation of $`\overline{u}<\overline{d}`$ in the nucleon sea follows that obtained in Ref. , where we extracted the ratio $$\frac{\overline{u}}{\overline{d}}=(1-x)^{3.6}$$ (26) using low $`p_T`$ physics in the valon-recombination model. We take $`xs(x)=\frac{x(u_{sea}(x)+d_{sea}(x))}{4}`$ and $`xc(x)=\frac{1}{10}x(u_{sea}(x)+d_{sea}(x))`$.
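The numbers in Eq. (18), the momentum sum rule, and the normalization fixing the prefactor in Eq. (22) via Eq. (25) can all be checked in a few lines; a minimal sketch (the valence exponents in the last two lines are hypothetical, since the fitted values live in the appendix):

```python
# Checks of Eqs. (15)-(18) and of the valence normalization of Eq. (25).
from scipy.special import beta

a, b = 0.65, 0.35

N_U = 1.0 / beta(a + 1, a + b + 2)     # prefactor of y^a (1-y)^(a+b+1)
N_D = 1.0 / beta(b + 1, 2 * a + 2)     # prefactor of y^b (1-y)^(2a+1)
print(f"N_U = {N_U:.2f}  (Eq. (18) quotes 7.98)")
print(f"N_D = {N_D:.2f}  (Eq. (18) quotes 6.01)")

y_U = beta(a + 2, a + b + 2) / beta(a + 1, a + b + 2)   # <y>_U, Eq. (17), n=2
y_D = beta(b + 2, 2 * a + 2) / beta(b + 1, 2 * a + 2)   # <y>_D
print(f"2<y>_U + <y>_D = {2 * y_U + y_D:.3f}  (should be 1)")

# Eq. (25): with xq_v = a (1-x)^b x^c, int q_v dx = a * B(c, b+1) = n_q,
# so the prefactor is a = n_q / B(c, b+1); exponents below are hypothetical.
print(f"a(u_v) = {2.0 / beta(0.55, 3.5 + 1.0):.3f}")
print(f"a(d_v) = {1.0 / beta(0.40, 4.0 + 1.0):.3f}")
```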
These choices are made based on the mass ratios of the involved quarks. All distributions refer to the proton. The parameterization of the parton distributions is that given in equations (22)–(24). From the fit to the data at $`Q^2=12GeV^2`$ we determined the scale parameters $`Q_0^2`$ and $`\mathrm{\Lambda }`$: $$Q_0^2=0.28GeV^2,\qquad \mathrm{\Lambda }=0.22GeV.$$ (27) Figure 2 shows the shape of the distributions in equations (22) for the valence and the sea quarks at typical values of $`Q^2`$. Our sea quark distribution, although it rises sharply at small $`x`$, is also damped very fast as $`x`$ increases. It appears that there is some evidence in the H1 data at low-$`x`$ bins and low but fixed $`Q^2`$ that supports a flat shape for $`F_2(x)`$. Figure 3 presents this behavior. Such a flattening of $`F_2`$ is also elucidated by Gluck, Reya and Vogt in Ref. . Our results favor a somewhat flat or almost flat behavior which sets in at some $`x_0\sim 10^{-5}`$ for low $`Q^2`$. This is attractive in the sense that one may argue that the observed flatness corresponds to the small-$`x`$ Regge behavior. It is possible to generate the rise of $`F_2`$ as $`x`$ decreases either from the DGLAP evolution in $`\mathrm{ln}Q^2`$ or from BFKL evolution in $`\mathrm{ln}(1/x)`$, although, due to the large partonic densities at low $`x`$, they both must be modified to account for the parton recombination effects. Fortunately, in the extreme limit of $`x\to 0`$, the behavior of parton densities can be calculated analytically, though this limit will not be reached within the kinematic range accessible to HERA. It is evident from Figure 3 that our calculation of the proton structure function at low $`Q^2`$ ($`Q^2\sim 0.4GeV^2`$) and small $`x\sim 10^{-5}`$–$`10^{-4}`$ gives good agreement with the HERA data; we have included the GRV(94) results of Ref. for comparison. In Figure 4, $`F_2`$ is plotted as a function of $`x`$ for various $`Q^2`$ values. In Figure 5, $`F_2`$ data is plotted as a function of $`Q^2`$ for different $`x`$ bins. The data now extend over four orders of magnitude both in $`x`$ and $`Q^2`$, and our constituent model NLO calculation agrees well with the experimental data. In our calculation for $`Q^2<4GeV^2`$ we have considered only three flavors, whereas for higher $`Q^2`$, four active flavors are taken into consideration. Since our main input in determining the parton distributions was the HERA data at $`Q^2=12GeV^2`$, which is limited to the low-$`x`$ interval $`[0.000383,0.0133]`$, we need to check whether a satisfactory result for $`F_2`$ emerges for the entire range of $`x`$. This is presented in Figure 6 for $`Q^2=20GeV^2`$ and $`x=[0.000562,0.875]`$ with the combined data from H1 , BCDMS, SLAC and EMC taken from the compilation in Ref. . The functional form of the gluon distribution is treated similarly to that of the sea quarks, as given in equations (22)–(24). The pertinent parameters are obtained by imposition of the momentum sum rule. Unfortunately, there are only a few experimental data points for the gluon distribution to check against. These data points are the result of a direct measurement of the gluon density in the proton at low $`x`$ . Figure 7 shows the accuracy of our gluon density against the data of Ref. ; the GRV results are also shown. Finally, in Figure 8 we plot $`\frac{d_v}{u_v}`$ for our model and compare it with the world data.
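With the fitted scales of Eq. (27), the evolution parameter of Eq. (6) is readily evaluated; for example, at the $`Q^2=12GeV^2`$ input point one finds $`s\approx 1.14`$. A minimal sketch:

```python
# Evolution parameter s of Eq. (6) with the fitted scales of Eq. (27):
# Q_0^2 = 0.28 GeV^2 and Lambda = 0.22 GeV.
import math

Q0sq, Lam2 = 0.28, 0.22**2

def s(Q2):
    return math.log(math.log(Q2 / Lam2) / math.log(Q0sq / Lam2))

for Q2 in (0.4, 4.0, 12.0, 20.0, 5000.0):
    print(f"Q^2 = {Q2:7.1f} GeV^2  ->  s = {s(Q2):.3f}")
```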
Our result for the Gottfried sum rule with four flavors gives $$S_G=\int_0^1\frac{dx}{x}\left[F_2^p-F_2^n\right]=0.27.$$ (28) Exactly the same result is obtained from the latest MRST parameterization; it is slightly larger than the experimental value of $S_G=0.235\pm 0.026$. This discrepancy originates from the form of $\frac{\bar{u}}{\bar{d}}$ used in eq. (26). The point here is to demonstrate that a constituent quark model is able to describe the DIS data, even without fine tuning. ## IV IMPLICATIONS FOR THE NUCLEON SPIN The spin structure of the nucleon merits separate consideration of its own. Here we mention a couple of points relevant to the present work. (i) In the naive quark model the polarized structure function $g_2$ is zero. However, if we allow quarks to have an intrinsic $p_{\perp}$ inside the nucleon we can achieve a nonzero value for $g_2$: $$g_2(x)=\frac{1}{2}\sum_q e_q^2\left(\frac{m_q}{xM}-1\right)\Delta q(x)$$ (29) with obvious notation. Neglecting the $p_{\perp}$ component leads to $m_q=xM$ and hence $g_2=0$ is recovered. In the parton model, even such an allocation of $p_{\perp}$, yielding a nonzero value for $g_2$, is not free of ambiguity, for the parton model assumes the validity of the impulse approximation and neglects the binding effects of the struck parton in large transverse-momentum reactions. Measurements of polarization asymmetries may reveal that they depend on the binding energy. In the model described above, the binding effects are summarized in the constituent-quark distributions in the nucleon, and the structure of a constituent quark is free of the binding problem. In this model the $p_{\perp}$ distribution of quarks within the constituent quark can be obtained by recognizing that there exists a size hierarchy: the hadron size, the constituent-quark size, and the point-like partons, as stated in Ref. . The hadronic structure is determined by the constituent-quark wave function, and its size is related to the low-$Q^2$ form factor of the nucleon. Constituent quarks have a smaller size and are described by their own form factors. Partons are the contents of the constituent quarks and are manifested only in high-$Q^2$ reactions. In high-energy collisions one encounters two scales in the $p_{\perp}$ distribution: the average transverse momentum of pions in multiparticle production processes is about $0.35$ GeV, whereas in massive lepton-pair production one needs to give a primordial $\langle p_{\perp}\rangle\sim 0.8$ GeV to the partons in order to describe the data. These two scales are related to the hierarchy of sizes. Pion production is a soft process and its scale is related to the average transverse momentum of the constituent quarks, characterizing the hadronic size. Lepton-pair production in $q\bar{q}$ annihilation is a hard process and its scale is set by the transverse momenta of partons in the constituent quark. Using lepton-pair production data, the transverse-momentum distribution of quarks in a constituent quark can be parameterized in a Gaussian form $$p_q^c(k^2)=\exp(-1.2k^2)$$ (30) which leads to the ratio of sizes $\frac{\langle r^2\rangle_c}{\langle r^2\rangle_{hadron}}\approx\frac{1}{5}$. Now we can calculate $\langle L_{z_{q\bar{q}}}\rangle$. For a proton of radius $0.85\ fm$, $$\langle L_{z_{q\bar{q}}}\rangle=r_c\langle k_{\perp}\rangle=0.321$$ (ii) Another point related to the spin content of the proton is also relevant here. H. Kleinert was the first to suggest considering the vacuum of the massive constituent quark as a coherent superposition of Cooper pairs of massless quarks, in analogy with the theory of superconductivity.
In the BCS theory, the gauge symmetry associated with particle-number conservation is spontaneously broken and the Noether current $j_5^\mu$ is not conserved. To restore it, one realizes that there are both collective and single-particle excitations. Recently Gaitan has shown that the bare current $j_5^\mu$ becomes dressed by a virtual cloud of Goldstone excitations ($q\bar{q}$ pairs in our case) and the conserved dressed current $j^\mu$ is the sum of two parts, $$j^\mu=j_s^\mu+j_{back}^\mu$$ (31) where $j_{back}^\mu$ describes the backflow current. The situation is very similar in our model, where for the dressed constituent quark the generator of the gauge transformation induces a rotation of the $q\bar{q}$ pair correlations, which can be identified with the orbital angular momentum. To this end, the pairing correlation will have axial symmetry around an anisotropic direction acting as the local $z$-axis, and the particles forming the cloud of the constituent quark rotate about this anisotropic direction. What has been said so far is only a qualitative description; to make it more quantitative, let us consider the spin of a constituent quark, say $U$. It can be written as $$J_z^U=\frac{1}{2}=\frac{1}{2}\Delta\Sigma^U+L_{z_{q\bar{q}}}$$ (32) The DIS polarization data suggest that $\Delta\Sigma^p\approx\frac{1}{3}$. Within the $SU(6)$ model $$\Delta\Sigma^p=(\Delta U+\Delta D)\,\Delta\Sigma^U=\Delta\Sigma^U$$ (33) since $\Delta U+\Delta D=1$ in $SU(6)$. Comparing with equation (32) we see that $L_{z_{q\bar{q}}}\approx\frac{1}{3}$, i.e. about $70$ percent of the spin of a constituent quark is due to the orbital angular momentum of the quark pairs in its surrounding cloud, which screens the spin of the valence quark: $$\frac{1}{2}\Delta\Sigma^U=S_{u_{val.}}+S_{sea}=\frac{1}{2}+S_{sea}=\frac{1}{6}$$ (34) resulting in $S_{sea}=-L_{z_{q\bar{q}}}=-\frac{1}{3}$. We can take the transverse-momentum distribution from eq. (30) and calculate $\langle L_{z_{q\bar{q}}}\rangle$ directly. For a proton of mean radius $0.86\ fm$ the mean radius of the constituent quark equals $0.385\ fm$ and hence $$\langle L_{z_{q\bar{q}}}\rangle=r_c\langle k_{\perp}\rangle=0.321,$$ which agrees well with the results stated above. Notice that our calculation of $J_U$ and the conclusion reached did not include gluonic effects. In reality we should have included the gluon degree of freedom; however, an inclusion of those effects at the constituent-quark level would reduce $(\Delta U+\Delta D)$ by some $30$ percent from unity . Nevertheless, the point is clear. ## V CONCLUSION We have used the valon model to describe deep inelastic scattering. The model handles the bound-state problem and sets the scale parameters. In determining the parton distributions no arbitrary theoretical assumptions are made for low $Q^2$. Our parton distributions nicely accommodate the data over a wide range of $x=[10^{-6},1]$ and a broad range of $Q^2$ spanning from a few $GeV^2$ up to $5000\ GeV^2$. Our findings indicate that at $x\sim 10^{-5}$ and fixed $Q^2$ the structure function $F_2$ flattens, which may be interpreted as a manifestation of Regge dynamics. As $Q^2$ increases, however, the almost flat shape of $F_2$ gets washed out or pushed towards a yet smaller region of $x$. Thus, we conclude that in a region of the $x$–$Q^2$ plane Regge dynamics is at play.
This region sets in at some $x_0$ and not too high $Q^2$. Scale violation is evident from both the calculations and the data. The rise of $F_2$ at $Q^2<1\ GeV^2$ in our model indicates that even at $Q^2$ as low as a few $GeV^2$ the evolution has run its course. We further find that certain issues related to the spin structure of the proton find an interesting place in the framework of the model described.

## VI Appendix

In this appendix we give the numerical values for our parton distributions at NLO, for both three and four flavors. These relations are functions of the evolution parameter $s$ defined in equation (6). For details see the text.

$f=4$:

i: For the valence sector, using Eqs. (22)–(23), we have:

$q_v=u_v$:
$a=14.132-15.759s+6.795s^2-1.082s^3$,
$b=1.608-1.206s+0.37s^2-0.0523s^3$,
$c=2.083+0.753s+0.214s^2-0.0106s^3$,

$q_v=d_v$:
$a=14.509-5.309s+2.795s^2-0.562s^3$,
$b=1.235-1.063s+0.467s^2-0.086s^3$,
$c=2.215+0.639s+0.439s^2+0.02s^3$,

ii: The sea distribution is parametrized as in Eqs. (22)–(24) with the following values; the antiquark distributions follow from Eq. (26) and the note thereafter.

$a_1=0.202+0.045\exp(-s/0.791)$,
$b_1=3.09+4.187\exp(-s/0.538)$,
$a_2=0.064+0.152\exp(-s/0.89)$,
$b_2=2.574+11.156\exp(-s/0.303)$,
$a_3=0.196+0.014\exp(-s/0.404)$,
$b_3=3.809+1.856\exp(-s/0.669)$,

iii: The gluon distribution is given by Eqs. (22)–(24) with the following numerical values:

$a_1=2.81+0.46\exp(-s/0.349)$,
$b_1=4.565+8.319\exp(-s/0.498)$,
$a_2=0.948+0.297\exp(-s/1.099)$,
$b_2=1.493+1.291\exp(-s/0.809)$,
$a_3=0.481+0.721\exp(-s/0.948)$,
$b_3=0.22+1.526\exp(-s/1.306)$,

$f=3$:

i: For the valence sector, using Eqs. (22)–(23), we have:

$q_v=u_v$:
$a=10.891-12.525s+4.87s^2$,
$b=1.457-1.18s+0.384s^2$,
$c=1.979+0.254s+0.537s^2$,

$q_v=d_v$:
$a=5.977-7.55s+3.025s^2$,
$b=1.095-0.44s-0.067s^2$,
$c=2.532+0.282s+0.473s^2$,

ii: The sea distribution is parametrized as in Eqs. (22)–(24) with the following values; the antiquark distributions follow from Eq. (26) and the note thereafter.

$a_1=0.31+0.064\exp(-s/0.807)$,
$b_1=3.905+26.187\exp(-s/0.658)$,
$a_2=0.064+0.023\exp(-s/0.305)$,
$b_2=2.644+14.156\exp(-s/0.152)$,
$a_3=0.063+0.023\exp(-s/0.509)$,
$b_3=3.508+12.856\exp(-s/0.452)$.
# Computer analysis of undulators with block-periodic structure

A.F. Medvedev, M.L. Schinkeev, Tomsk Polytechnic University

Abstract

Methods to detect the spectral sensitivity of an object using undulator radiation without monochromators or any other spectral devices are developed. The spectral transmission function of the object is calculated from its response to the spectrum-integrated undulator radiation with a known spectral distribution. This response is measured as a function of the electron energy.

1. Introduction

At present, synchrotron radiation (SR) is widely used in spectroscopy as a standard source, the monochromatic components usually being obtained by monochromators or other spectral devices. However, a large intensity loss and variation of the transmission function during the operating time are inherent to such measurements. Recently, undulator radiation (UR) has been discussed as an alternative. As is known , monochromatization of UR can be partially achieved by increasing the period number. UR serving as a standard source in spectroscopy without monochromators has been discussed in ref. . It appears that the UR resolution is not only limited by the spectral line width, which depends on the period number, but is also limited by the spread of angles and electron energies, the nonuniformity of the undulator magnetic field over the beam cross section, the finite diaphragm size, and other factors. In this connection, a monochromatorless computer spectroscopy method (MCS method) has been proposed , in which the computer algorithm plays the part of the monochromator. The point is that the radiation from an undulator installed in the synchrotron ring is not pure UR as assumed in the ideal theoretical model , but also includes the SR components from the edges of bending magnets and focusing elements adjacent to the undulator. In practice, one uses the frequency partition of the SR and UR spectra to exclude the admixed SR and to conserve the ideal UR properties . But we cannot consider this method consistent with the requirements of metrology, according to which all the ideal UR properties (angular monochromatization, polarization, independence of the UR spectral form from the particle energy) need to be confirmed by adequate quantitative measurements.

2. Amplitude–time modulation

In the MCS method, the electron-energy-invariant spectral form of the radiation source turns out to be the kernel of the integral equation whose solution is just the MCS problem. In order to make the UR kernel metrologically pure, we have to exclude the admixed SR. To this end we can use a procedure consisting of a series of consecutive measurements of the undulator radiation at various states of the undulator magnetic system and the subsequent combination of these results. Such a procedure will be referred to as amplitude–time modulation (ATM). The parameters varied in the ATM process are those of the undulator which do not disturb the properties of the admixed SR. This means that the phase relations for the SR components must be invariant during ATM. This implies the constancy of the time $t$ for a charge travelling along a straight section of length $l$ between the edge elements adjacent to the undulator: $$t=\frac{1}{c}\left[l+\frac{l+k^2L}{2\gamma^2}\right].$$ (1) Here $L$ is the undulator length, $\gamma$ the Lorentz factor of the charge ($\gamma^{-1}\ll 1$), $c$ the light velocity, and $k$ the undulator dipole parameter.
The directional modulation of the magnetic field in the undulator, or in some of its blocks, is a special case of ATM, since the electron transit time is conserved. The magnetic field in any block satisfies the balancing condition, i.e., the field integral over the block length vanishes. Suppose $A(\omega)$ and $B(\omega)$ are the complex Fourier amplitudes of the ideal UR and the SR, respectively. Every $A(\omega)$ corresponds to one state of the undulator magnetic field. Let us consider the following four states: the basic state $A$; the state $(-A)$, which differs from $A$ in that the undulator magnetic field is switched on with an orientation opposite to the field in the elements forming the electron orbit and adjoining the undulator; the phase-discontinuous state $\tilde{A}$, obtained by reverse-sign switching of the magnetic field in some undulator blocks; and finally $(-\tilde{A})$. Adding the radiation intensities in the first and second states and subtracting those in the third and the fourth states, we obtain $$|A+B|^2+|-A+B|^2-|\tilde{A}+B|^2-|-\tilde{A}+B|^2=2\left[|A|^2-|\tilde{A}|^2\right].$$ (2) Thus, the four-step ATM with phase switching enables us to exclude the SR and to obtain the metrologically pure UR kernel for MCS as an element of the ideal UR (eq. (2)). The practical realization will be especially simple when one ATM step corresponds to one acceleration cycle. In this case, we need only four cycles to achieve a metrologically pure procedure with UR.

3. Undulator magnetic system

Among the various possible realizations of the undulator magnetic system for MCS with UR, the ironless electromagnetic system is preferable, since only such a system has the desired properties of linearity, predictability, reproducibility and the ability to change undulator states quickly. A block-periodic organization of the ironless electromagnetic undulator system, in which the resulting distribution of the magnetic field is a superposition of fields from standard elements switched on with given weights, seems to be the most suitable one for many types of ATM. In the general case, because of the common standard elements, the blocks may overlap one another, with an equivalent summed weight in the overlap region. The MCS technique needs no monochromatization devices, since the computer algorithm plays the part of the monochromator. Thus we avoid the loss of many orders of magnitude in source intensity which is usually inevitable with radiation monochromatization. It appears from this that an undulator with the dipole regime of UR excitation $(k\ll 1)$ will be quite effective in achieving an adequate reaction of the object in the MCS technique. The dipole regime is favourable also with regard to the operation of the undulator magnetic system, since the decreased heat and electromagnetic loads permit the ironless variant of this system to be used. Therefore, we can now regard the undulator dipole regime as the basic one for the MCS technique with UR, and so the model investigations of MCS with UR, based on the dipole-approximation expression for UR, become legitimate.
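Equation (2) is an exact algebraic identity provided the SR amplitude $B(\omega)$ is identical in all four steps, which is precisely the phase-invariance condition stated above. A short numerical check (our own illustration, not from the paper):

```python
# Numerical check of the four-step ATM combination, eq. (2); our illustration.
# B is held fixed across the four steps, i.e. the SR phase relations are
# invariant during ATM, exactly the condition stated in Section 2.
import numpy as np

rng = np.random.default_rng(0)

def amp():
    # A random complex Fourier amplitude at some fixed frequency.
    return complex(rng.normal(), rng.normal())

A, At, B = amp(), amp(), amp()   # ideal UR in states A and A~, admixed SR B

lhs = (abs(A + B)**2 + abs(-A + B)**2
       - abs(At + B)**2 - abs(-At + B)**2)
rhs = 2.0 * (abs(A)**2 - abs(At)**2)
print(np.isclose(lhs, rhs))      # True: the SR amplitude B cancels exactly
```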
In ATM it is important to ensure the coincidence of the low-frequency trend components of the motion in the undulator for the different steps of the ATM period. The amplitudes of the low-frequency components of the spectrum depend essentially on the values of the integrals $$J_1=\int_0^b H(x)\,dx,\qquad J_2=\int_0^b dx'\int_0^{x'}H(x)\,dx,\qquad\ldots$$ where $H(x)$ is the magnetic field along the undulator axis and $b$ is the block length. We shall call the motion of a particle in a block an $m$-times balanced motion when $J_1=J_2=\ldots=J_m=0$. The Fourier structure of the UR line of a block with motion-balancing degree $m$ contains the factor $$\left|\cos^m\bar{\omega}\,\sin\bar{\omega}(N-m)/\sin\bar{\omega}\right|^2,$$ where $\bar{\omega}=\pi(\nu+1)/2$, $\nu=\eta(1+\psi^2)$ is the number of the UR harmonic at an angle $\theta$ to the motion axis, $\psi=\gamma\theta/\sqrt{1+k^2}$, $\eta=p\omega(1+k^2)/2\pi c\gamma^2$, $\omega$ is the radiation frequency, $p$ is the magnetic-field half-period length, and $N$ is the number of standard elements in the block. The low-frequency asymptote for such a spectrum is $\propto\nu^{2m}$ . It follows that in the region $0<\nu<1$ the UR spectral density in a given direction $\theta$ is suppressed, and the number of zeros of the spectral function decreases when $m$ increases. This causes the oscillating part of the angle-integrated UR spectrum to be depressed. As a result, the difference UR kernel of the type of eq. (2) formed by the ATM is localized in the region of the high-frequency cut-off of the fundamental harmonic. The maximal balancing degree in a block consisting of $N$ standard elements equals $N-1$. In this case, the maximal smoothness of the UR integral spectrum and the maximal frequency, angle and polarization localization of the difference kernel of type (2) are obtained. The desired balancing degree is achieved by the proper choice of the standard-element weights. For example, for $m=1$ we have the standard-element weight distribution $1,2,3,\ldots,2,1$, and for $m=2$ we get $1,3,4,4,\ldots,4,3,1$. With increasing $m$ the weight distribution tends from a trapezoidal to a binomial one, the latter corresponding to $m=N-1$.

4. Basic equation

In the case where the object reaction depends linearly on the incident radiation amplitude, its response to a part of the flux of the ideal UR (2) can be written as follows: $$J\left(\frac{1+k^2}{\gamma^2}\right)=\int_0^{\infty}\frac{d\Phi_{\Delta}(\eta)}{d\omega'}\,\Pi(\omega)\,d\omega',$$ (3) where $$\frac{d\Phi_{\Delta}}{d\omega'}=\frac{d\Phi}{d\omega'}\Big|_A-\frac{d\Phi}{d\omega'}\Big|_{\tilde{A}},\qquad d\omega'=\frac{d\omega}{\omega}.$$ $d\Phi/d\omega'|_{A,\tilde{A}}$ is the integral spectral density of the photon flux for the undulator magnetic-field states $A$ and $\tilde{A}$, and $\Pi(\omega)$ is the spectral sensitivity of the object. By the following change of variables in eq. (3):
$$\eta=e^{\tau},\qquad \omega=\frac{2\pi c}{p}e^{-s},\qquad \frac{1+k^2}{\gamma^2}=e^{x},$$ we obtain the standard form of the object reaction to the UR as a convolution-type Fredholm integral equation of the first kind: $$U(x)=\int_{-\infty}^{\infty}K(x-s)Z(s)\,ds.$$ (4) Here, $U(x)=J(e^x)$, $K(\tau)=\frac{d\Phi_{\Delta}}{d\omega'}(e^{\tau})$, and $Z(s)=\Pi\left(\frac{2\pi c}{p}e^{-s}\right)$. Eq. (4) is the basic one in the MCS problem. In order to solve this problem the object reaction is measured as a function of the particle energy. The spectral density $K(\tau)$ of the UR photon flux is measured experimentally or calculated using known formulas . The spectral sensitivity of the object, $Z(s)$, is found from eq. (4) by numerical calculation using the regularization methods applied to solve ill-posed problems . The angle-integrated density of the UR photon flux can be written down in the dipole approximation for the $\sigma$- and $\pi$-polarization components: $$\frac{d\Phi}{d\omega'}\Big|_{\pi}^{\sigma}=\frac{4\pi L\alpha k^2}{3Tp}\,\frac{I_1}{I_2}\,\varphi(\eta)\Big|_{\pi}^{\sigma},$$ $$\varphi(\eta)\Big|_{\pi}^{\sigma}=\frac{4}{3I_1}\,\eta\int_{\eta}^{\infty}|H(\omega_x)|^2\left[\begin{array}{c}1-2\frac{\eta}{\nu}+3\left(\frac{\eta}{\nu}\right)^2\\[4pt] 1-2\frac{\eta}{\nu}+\left(\frac{\eta}{\nu}\right)^2\end{array}\right]\frac{d\nu}{\nu^2},$$ with the normalization condition $$\int_0^{\infty}\left[\varphi(\eta)|_{\sigma}+\varphi(\eta)|_{\pi}\right]\frac{d\eta}{\eta}=1.$$ Here $\alpha\approx 1/137$ is the fine-structure constant, $T$ is the orbit period for a charge in an accelerator or storage ring, $$I_j=\int_0^{\infty}|H(\omega_x)|^2\,\frac{d\nu}{\nu^j},\qquad H(\omega_x)=\int_{-\infty}^{\infty}H(x)\,e^{i\omega_x x}\,dx,\qquad k^2=\left(\frac{\mu e_0}{\pi m_0 c}\right)^2\frac{p}{L}\,I_2,$$ where $k$ is the dipole parameter, $e_0$ is the electron charge, $m_0$ is the electron mass, $\omega_x=\pi\nu/p$, and $\mu=4\pi\times 10^{-7}\ H/m$. In the numerical experiment on the ATM with phase reversal and weight modulation we have used the wide-undulator approximation, which is close to that in practice, the field being dependent only on $x$. The phase modulation has been realized by alternately switching the polarity of the undulator block supplies. The criterion for the choice of the correct modulation parameters $\{N,n,m,D/p\}$ is the vanishing of the integral difference given by: $$\Delta=\int_0^{\infty}\frac{d\Phi_{\Delta}}{d\omega'}(\eta)\,\frac{d\eta}{\eta}=\frac{4\pi\alpha}{3T}\left(\frac{\mu e_0}{\pi m_0 c}\right)^2\int_0^{\infty}\left[|H(\omega_x)|_A^2-|H(\omega_x)|_{\tilde{A}}^2\right]\frac{d\nu}{\nu},$$ (5) where $n$ is the number of blocks and $D$ is the distance between the neighbouring blocks.
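Since eq. (4) is a convolution, a standard way to carry out the regularized inversion is Tikhonov filtering in Fourier space. The sketch below is our own schematic of this step, with synthetic stand-ins for $K$ and $Z$; the authors' actual regularization algorithm is not specified in this excerpt and may differ.

```python
# Regularized deconvolution of the Fredholm equation (4); a sketch of our own.
# The paper's actual regularization algorithm may differ; Tikhonov filtering in
# Fourier space is one standard choice for convolution kernels.
import numpy as np

def solve_mcs(U, K, lam=1e-3):
    """Recover Z from U = K (*) Z, treated as a circular convolution on a
    uniform grid in s; lam is the Tikhonov regularization parameter."""
    Uf, Kf = np.fft.fft(U), np.fft.fft(K)
    # Minimizes |K*Z - U|^2 + lam |Z|^2  =>  Zf = conj(Kf) Uf / (|Kf|^2 + lam)
    Zf = np.conj(Kf) * Uf / (np.abs(Kf)**2 + lam)
    return np.fft.ifft(Zf).real

# Synthetic test with a hypothetical difference kernel K(tau) and a two-line
# spectral sensitivity Z(s); both are stand-ins, not measured quantities.
s = np.linspace(-5.0, 5.0, 512)
K = np.exp(-0.5 * ((s + 1.0) / 0.3)**2)
Z = np.exp(-((s - 0.5) / 0.2)**2) + 0.5 * np.exp(-((s + 1.5) / 0.2)**2)
U = np.fft.ifft(np.fft.fft(K) * np.fft.fft(Z)).real   # forward model, eq. (4)
Z_rec = solve_mcs(U, K)                                # regularized inverse
print(np.abs(Z_rec - Z).max())   # small for noise-free data; lam trades
                                 # noise amplification against bias
```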
The geometric parameters of the standard elements and their number $M$ are found from the basic initial parameters, namely, the undulator length and the undulator gap. Knowing $M$ one can start a model run in order to find the remaining parameters $\{N,n,m,D/p\}$. If the neighbouring blocks overlap, then $D<0$, with $D/p$ an integer. The qualitative criteria for the results of this run are: (i) the vanishing of $\Delta$ in eq. (5) with $D/p<0$; (ii) the correctness of the solution of the MCS problem with the obtained UR kernel.

5. Conclusions

(1) When the motion-balancing degree $m$ increases, an effective depression of the low-frequency component of the UR kernel $K(x-s)$ in the region $x-s<0$ takes place in the undulator blocks.

(2) When $m$ increases, the UR kernel becomes localized in the region $\eta\sim 1$ of the high-frequency cut-off of the total ideal kernel. This monochromatization effect corresponds to the angular localization of the UR kernel in the angular range $\psi\in(0,1/\sqrt{N})$, in accordance with the known formula $\nu=\eta(1+\psi^2)$. This angular localization near the direction $\psi=0$ causes the preferential selection of the $\sigma$-component of the UR polarization and the strong depression of the $\pi$-component, since the intensity of the $\pi$-component is small for directions $\psi<1/\sqrt{N}$.

References

1. E.E. Koch (ed.), Handbook on Synchrotron Radiation (North-Holland, Amsterdam, 1983).
2. M.M. Nikitin and G. Zimmerer, Report DESY SR 85-04 (Hamburg, 1985); Nucl. Instr. and Meth. A240 (1985) 188.
3. V.G. Bagrov, A.F. Medvedev, M.M. Nikitin and M.L. Shinkeev, Nucl. Instr. and Meth. A261 (1987) 337.
4. A.F. Medvedev, M.M. Nikitin and M.L. Shinkeev, Tomsk Research Centre preprint N 2-90 (Tomsk, 1990), in Russian.
5. M.M. Nikitin and V.Ya. Epp, Undulator Radiation (Energoatomizdat, Moscow, 1988), in Russian.
6. A.G. Valentinov, P.D. Vobly and S.F. Mikhailov, INP preprint N 89-174 (Novosibirsk, 1989), in Russian.
7. L. Landau and E. Lifshitz, The Classical Theory of Fields (Pergamon, London, 1962), section 77.
# Luminosity Profiles of Merger Remnants

## 1. Introduction

The merger hypothesis for elliptical galaxy formation, as put forth by Toomre & Toomre (1972; see also Toomre (1977)), posits that two spiral galaxies can fall together under their mutual gravitational attraction, eventually evolving into an elliptical-like remnant. An early objection to this hypothesis was that the cores of ellipticals are too dense to result from the dissipationless merging of two spirals (Carlberg (1986); Gunn (1987); Hernquist, Spergel & Heyl (1993)). An obvious solution to this objection is to include a dissipative (gaseous) component in the progenitors (for other solutions, see e.g. Veeraraghavan & White (1985); Barnes (1988); Lake (1989)). Numerical experiments including such a component readily showed that gaseous dissipation can efficiently drive large amounts of material into the central regions (Negroponte & White (1983); Noguchi & Ishibashi (1986); Barnes & Hernquist (1991)). Indeed, it was considered a great success for early hydrodynamical work to be able to reproduce dense knots of gas within the central regions of simulated merger remnants, similar to the gas concentrations observed in IR luminous mergers (Barnes & Hernquist (1991); Sanders et al. 1988b). Since the inferred central gas mass densities in the observed gas knots are comparable to the stellar mass density seen in the cores of normal ellipticals ($\sim 10^2\ M_\odot\ \mathrm{pc}^{-3}$), it seems natural that they could be the seed of a high surface brightness remnant (Kormendy & Sanders (1992)). Subsequent numerical work by Mihos & Hernquist (1994; hereafter MH94) finds that the dissipative response of the simulated gas component is so efficient that the resulting mass profiles of the simulated remnants are unlike those seen in normal ellipticals. In particular, ensuing star formation leaves behind a dense stellar core whose surface density profile does not join smoothly onto the de Vaucouleurs $r^{1/4}$ profile of the pre-existing stellar population. Instead, the profiles exhibit a "spike" at small radii, with a suggested increase in surface brightness by factors of $\sim$100. While the predicted break in the mass density profile occurs at spatial scales comparable to the gravitational softening length of the simulations, making the precise slope somewhat questionable, the conclusion that the profiles should exhibit a clear break was considered firm (MH94). This prediction, if confirmed, offers a means to constrain the frequency of highly dissipative mergers in the past by searching for their fossil remnants in the cores of nearby ellipticals. However, the numerical formalisms used in MH94 to model gaseous dissipation, star formation, and energy injection back into the ISM from massive stars and SNe ("feedback") are necessarily ad hoc in nature. As such, these predictions should be viewed as preliminary (as Mihos & Hernquist themselves note). We investigate this question observationally by converting the observed gas column densities of on-going or late-stage mergers into optical surface brightness by assuming a stellar mass-to-light ratio appropriate for an evolved population. This light is added to the observed luminosity profile after allowing it to age passively. The resulting luminosity profile is examined for anomalous features such as the sharp break predicted by MH94. We conduct this experiment with the late-stage mergers NGC 3921 and NGC 7252 and with the ultraluminous infrared (ULIR) merger Arp 220.
These systems were chosen because they have been observed in the CO(1-0) molecular line transition with resolutions (full width at half maximum) of $2''-2.5''$ (Yun & Hibbard 1999a; Wang et al. (1992); Scoville et al. (1997)). The resulting spatial radial resolution (300–400 pc; $H_o$ = 75 km s$^{-1}$ Mpc$^{-1}$) is similar to the hydrodynamical smoothing length used in MH94 ($\sim$350 pc, assuming scaling parameters appropriate for a Milky Way-like progenitor), indicating that the molecular line observations have sufficient resolution to resolve the types of mass concentrations found in the simulations. The molecular gas surface densities of each of these systems are plotted in Figure 1, converted from CO fluxes by adopting a conversion factor of $N_{H_2}/I_{CO}=3\times 10^{20}\ \mathrm{cm}^{-2}\,(\mathrm{K\ km\ s}^{-1})^{-1}$ (Young & Scoville (1991)). Detailed studies on each of these systems, which fully discuss their status as late-stage mergers, can be found in Schweizer (1996) and Hibbard & van Gorkom (1996) for NGC 3921; Schweizer (1982) and Hibbard et al. (1994) for NGC 7252; and Scoville et al. (1997) for Arp 220.

## 2. Results

### 2.1. NGC 3921 & NGC 7252

For these moderately evolved merger remnants (ages of $\sim$0.5–1 Gyr since their tidal tails were launched; Hibbard et al. (1994); Hibbard & van Gorkom (1996)), the observed gas and luminosity profiles are used to predict the expected luminosity profile of a 2 Gyr old remnant. We assume that all of the molecular gas is turned into stars at the same radii, adopting an exponentially declining star formation history. The present luminosity profile is allowed to fade due to passive aging effects, and the final luminosity profile is the sum of these two populations. The molecular gas profiles plotted in Fig. 1 are converted into gas mass densities by multiplying by a factor of 1.36 to account for the expected contribution of helium. Optical luminosity profiles have been obtained by Schweizer (1982, 1996), Whitmore et al. (1993), and Hibbard et al. (1994), showing them to be well fitted by an $r^{1/4}$ profile over all radii, with no apparent luminosity spikes. The gas surface densities ($\Sigma_{gas}$ in $M_\odot\ \mathrm{pc}^{-2}$) are converted to optical surface brightnesses ($\mu_B$) by dividing by the stellar mass-to-light ratio ($M_*/L_B$) expected for a 2 Gyr old population. We adopt the stellar mass-to-light ratios given by de Jong (1995, Table 1 of ch. 4), which were derived from the population synthesis models of Bruzual & Charlot (1993) for an exponentially declining star formation history, a Salpeter IMF, and Solar metallicity ($M_*/L_B=0.82\ M_\odot\,L_\odot^{-1}$ at 2 Gyr). Noting that 1 $L_\odot\ \mathrm{pc}^{-2}$ corresponds to $\mu_B$ = 27.06 mag arcsec$^{-2}$ (adopting $M_{B,\odot}=+5.48$), the conversion from gas surface density to optical surface brightness is given by $\mu_B(r)=27.06\ \mathrm{mag\ arcsec}^{-2}-2.5\times\log[\Sigma_{gas}(r)/(M_*/L_B)]$. The luminosity profiles of the evolved remnants are estimated from the observed $B$-band profiles (Schweizer (1996); Hibbard et al. (1994)), allowing for a fading of +1 mag arcsec$^{-2}$ in the $B$-band over the next 2 Gyr (Bruzual & Charlot (1993); Schweizer (1996)), and adding in the expected contribution of the population formed from the molecular gas, calculated as above.
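The conversion just described is compact enough to state as code. The following sketch is ours (not the authors'); it uses the numbers quoted above, with a hypothetical gas column as input.

```python
# Sketch of the conversion described above (our code, not the authors'):
# gas surface density -> B-band surface brightness of the stars it would form,
# then co-addition with the faded pre-existing profile.
import numpy as np

ML_B = 0.82        # stellar M/L_B (solar units) for a 2 Gyr old population
MU1_SUN = 27.06    # mu_B of 1 L_sun pc^-2, i.e. M_B,sun = +5.48

def mu_from_gas(sigma_H2):
    """mu_B (mag arcsec^-2) of stars formed from a gas column sigma_H2
    (M_sun pc^-2); the factor 1.36 adds helium, as in the text."""
    return MU1_SUN - 2.5 * np.log10(1.36 * sigma_H2 / ML_B)

def coadd(mu1, mu2):
    """Co-add two surface-brightness profiles given in mag arcsec^-2."""
    return -2.5 * np.log10(10**(-0.4 * mu1) + 10**(-0.4 * mu2))

# Example with hypothetical numbers: a 10^3 M_sun pc^-2 molecular peak yields
# mu_B ~ 19.0; co-added with a faded (+1 mag) mu_B = 19 profile, ~ 18.25.
print(mu_from_gas(1e3), coadd(18.0 + 1.0, mu_from_gas(1e3)))
```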
We emphasize that this procedure should favor the production of a luminous post-merger population, since it assumes that none of the molecular gas is lost to stellar winds or SNe, and the adopted IMF favors the production of many long-lived low-mass stars. The results of this exercise are plotted in Figure 2. This plot shows that the observed gas densities in NGC 3921 and NGC 7252, although high, are not high enough to significantly affect the present luminosity profiles. The profile of NGC 3921 is basically indistinguishable from an $r^{1/4}$ profile. The profile of NGC 7252 does show a slight rise at small radii, but not the clear break predicted by MH94. Therefore the resulting luminosity profiles of these remnants are now, and should remain, fairly typical of normal elliptical galaxies, and the conclusions of MH94 are not applicable to all mergers of gas-rich galaxies. Since both of these systems also obey the Faber-Jackson relationship (Lake & Dressler (1986)) and NGC 7252 falls upon the fundamental plane defined by normal ellipticals (Hibbard et al. (1994); Hibbard (1995)), we conclude that at least some mergers of gas-rich systems can evolve into normal elliptical galaxies as far as their optical properties are concerned.

### 2.2. Arp 220

Since Arp 220 is an extremely dusty object, its optical luminosity profile is poorly suited for a similar analysis. Instead, we use a luminosity profile measured in the near-infrared, where the dust obscuration is an order of magnitude less severe. Arp 220 was recently observed with camera 2 of NICMOS aboard the HST (Scoville et al. 1998), and we use the resulting $K$-band luminosity profile, kindly made available by N. Scoville. Since Arp 220 is presently undergoing a massive starburst, the fading factor is much less certain than for the already evolved systems treated above, and depends sensitively on what fraction of the current light is contributed by recently formed stars. We adopt a situation biased towards the production of a discrepant luminosity profile by assuming that the entire population was pre-existing, converting the observed $K$-band profile to an evolved $B$-band profile by adopting a $B-K$ color of 4, appropriate for a 10 Gyr old population (de Jong (1995)). The contribution due to the population formed from the molecular disk is calculated exactly as before. The resulting profile is shown in Figure 2. This figure shows that Arp 220 is predicted to evolve a luminosity profile with a noticeable rise at small radii. This is due to the peak in the molecular gas surface density at radii less than 0.5 kpc (Fig. 1). We conclude that Arp 220 has the potential to evolve a similar feature in its luminosity profile, if indeed all of the current molecular gas is converted into stars. However, the expected rise of $\sim$2 mag arcsec$^{-2}$ in surface brightness (a factor of $\sim$6) is considerably lower than the two orders of magnitude increase predicted by the simulations (see Fig. 1 of MH94).

## 3. Discussion

From the above exercise, we conclude that neither NGC 3921 nor NGC 7252 is expected to show a significant deviation in its luminosity profile, and that the maximum rise expected for Arp 220 is considerably lower than the two orders of magnitude increase predicted by the simulations of MH94. We conclude that the numerical formalisms adopted in the simulations to treat the gas and star formation are incomplete. Mihos & Hernquist enumerate various possible shortcomings of their code.
For example, their star-formation criterion is extrapolated from studies of quiescent disk galaxies, and may not apply to violent starbursts. Perhaps most importantly, their simulations fail to reproduce the gas outflows seen in ULIR galaxies ("superwinds"; e.g. Heckman, Armus & Miley (1990); Heckman, Lehnert & Armus (1993)), suggesting that the numerical treatment of feedback is inadequate. In spite of these results, it is still interesting that under some conditions there might be an observational signature of a past merging event in the light profile of the remnant. The question is: for which mergers might this be the case? Since $\Sigma_{H_2}$ is tightly correlated with IR luminosity (Yun & Hibbard 1999b), we infer that only the ultraluminous IR galaxies retain the possibility to evolve into ellipticals with a central rise in their luminosity profiles. While such profiles are not typical of ellipticals in general, they are not unheard of. For example, $\sim$10% of the Nuker sample profiles presented by Byun et al. (1996) show such anomalous cores (e.g., NGC 1331, NGC 4239). It is therefore possible that such systems evolved from ultraluminous IR galaxies. This can be tested by careful "galactic archaeology" in such systems to search for signatures of a past merger event (e.g., Schweizer & Seitzer (1992); Malin & Hadley (1997)). However, it is not a foregone conclusion that systems like Arp 220 will evolve anomalous profiles. This system presently hosts a very powerful expanding "superwind" (Heckman et al. (1996)), which may be able to eject a significant fraction of the cold gas in a "mass-loaded flow" (e.g. Heckman et al. (1999)). Such winds are common in ULIR galaxies (Heckman, Armus & Miley (1990); Heckman, Lehnert & Armus (1993)). Another related possibility is that the IMF may be biased towards massive stars (i.e. "top heavy"; Young et al. (1986); Scoville & Soifer (1991)). Such mass functions would leave far fewer stellar remnants than the IMF adopted here. A third possibility is that the standard Galactic CO-to-H$_2$ conversion factor is inappropriate for ULIR galaxies, and that the high gas surface densities derived from CO observations (and thus the resulting stellar luminosity profile) may be over-estimated (see Downes et al. (1993); Bryant & Scoville (1996)). Some support for the idea that central gas cores may be depleted by the starburst is given by a population synthesis model of NGC 7252, which suggests that it experienced an IR luminous phase (Fritze-von Alvensleben & Gerhard (1994)). While the current radial distribution of molecular gas in NGC 7252 is flat and lacks the central core seen in Arp 220, it appears to connect smoothly with that of Arp 220 in Fig. 1. Therefore one may speculate that NGC 7252 did indeed once have a radial gas density profile much like Arp 220's, but has since lost its high-density gas core as a result of prodigious massive star formation and/or superwind blowout. However, the burst parameters are not strongly constrained by the available observations, and a weaker burst spread over a longer period may also be allowed (Fritze-von Alvensleben & Gerhard (1994)). Further insight into this question could be obtained by constraining the past star formation history in other evolved merger remnants.
In conclusion, a comparison of the peak molecular column densities and optical surface brightnesses in NGC 3921 and NGC 7252 suggests that some mergers between gas-rich disks will evolve into elliptical-like remnants with typical luminosity profiles, even considering their present central gas supply. For ULIR galaxies like Arp 220 the case is less clear. Such systems will either produce an excess of light at small radii, as seen in a small number of ellipticals, or require some process such as mass-loaded galactic winds or a top-heavy IMF to deplete the central gas supply without leaving too many evolved stars. If the latter possibility can be excluded, then the frequency of such profiles may be used to constrain the number of early-type systems formed via ULIR mergers. (We note that any subsequent dissipationless merging of these cores with other stellar systems will tend to smooth out these profiles.)

## 4. Summary

* Even under assumptions that favor the production of a luminous post-merger population, the dense molecular gas complexes found in the centers of NGC 3921 and NGC 7252 should not significantly alter their luminosity profiles, which are already typical of elliptical galaxies (Schweizer (1982, 1996)). Since these systems also obey the Faber-Jackson relationship (Lake & Dressler (1986)) and NGC 7252 falls upon the fundamental plane defined by normal ellipticals (Hibbard et al. (1994); Hibbard (1995)), it appears that at least some mergers of gas-rich systems can evolve into normal elliptical galaxies as far as their optical properties are concerned.
* The dense molecular gas complex found in the center of Arp 220 may result in a moderate rise in the remnant's luminosity profile at small radii. Since the molecular gas column density is a tight function of IR luminosity (Yun & Hibbard 1999b), we conclude that this condition may apply to all of the ultraluminous infrared galaxies. However, this does not preclude a merger origin for elliptical galaxies since (1) about 10% of the Nuker sample ellipticals (Byun et al. 1996) show such rises in their radial light profiles, and (2) it is possible that much of the gas in such systems is blown into intergalactic space by the mass-loaded superwinds found emanating from such objects (Heckman et al. (1999); Heckman, Lehnert & Armus (1993)).
* The maximum expected rises in the luminosity profiles are considerably lower than the orders of magnitude increase predicted by the simulations (MH94). We therefore suggest that the numerical formalisms adopted in the simulations to treat the gas and star formation are incomplete.

The authors thank N. Scoville for kindly providing the NICMOS K-band profile for Arp 220. We thank F. Schweizer and J. van Gorkom for comments on an earlier version of this paper, R. Bender and C. Mihos for useful discussions, and the referee, J. Barnes, for a thorough report.
# The Properties of the Galactic Bar Implied by Gas Kinematics in the Inner Milky Way

## 1. Introduction

The structure and morphology of the inner Milky Way are difficult to determine due both to dust obscuration and to our edge-on view. The canonical picture of the Milky Way as an axisymmetric spiral galaxy was enshrined in the models of Schmidt (1965), Bahcall & Soneira (1980), Ostriker & Caldwell (1983), Kent (1992), and others. However, the suggestion by de Vaucouleurs (1964) that the Galaxy is barred has been supported by many recent studies (cf. the reviews of Blitz et al. 1993 and Kuijken 1996). What was once thought of as the bulge now seems to be, at least in part, a thickened bar. Lines of evidence for a bar include: the infrared surface brightness distribution (Blitz & Spergel 1991; Dwek et al. 1995), the distribution of Mira variables (Whitelock & Catchpole 1992), IRAS point sources (Weinberg 1992, Nikolaev & Weinberg 1997), the magnitude offset of bulge stars at positive and negative longitudes (Stanek 1995; Stanek et al. 1997), OH/IR stars (Sevenster 1995), and the gas motions near the Galactic center (e.g. Liszt & Burton 1980; Binney et al. 1991). Several groups have used infrared photometry, especially from the COBE/DIRBE data, to deduce the density distribution in the Galactic bar (e.g. Blitz & Spergel 1991; Dwek et al. 1995; Binney, Gerhard & Spergel 1997). It has long been known, from both 21 cm and mm observations of gaseous emission lines, that the kinematics of gas toward the Galactic center ($|\ell|\lesssim 10°$) are inconsistent with purely circular motions (e.g. Rougoor & Oort 1960, Kerr & Westerhout 1965, Oort 1977). A variety of explanations for the non-circular motions have been proposed, including explosive outflows (cf. Oort 1977), spiral density waves (e.g. Scoville, Solomon & Jefferts 1974), and barlike perturbations. Figure 1 shows the H I longitude–velocity ($\ell$–$V$) diagram constructed from the data of Liszt & Burton (1980; see also Burton & Liszt 1983). This diagram shows the distribution of H I radial velocities at galactic longitudes $13°>\ell>-11°$. Most gas is approaching at negative longitudes and receding at positive, which is the general sense of rotation of the Milky Way, but there is significant emission from gas moving in the opposite sense on both sides; such gas is inconsistent with simple circular orbits and is said to have "forbidden velocities." Forbidden velocities in excess of 100 km s$^{-1}$ are observed throughout the range $-6°<\ell<6°$. If the non-circular motions do result from gas flow in a non-axisymmetric potential, observation and detailed modeling of the gas kinematics should provide strong constraints on the mass distribution in the inner Galaxy. In fact, flow patterns in barred galaxy models have already been shown to provide qualitative fits to the observations (e.g. Peters 1975; Liszt & Burton 1980; van Albada 1985b; Mulder & Liem 1986; Binney et al. 1991). Features in diagrams such as Figure 1 contain information about the distribution of gas in space and velocity within the disk of the Galaxy. But because we cannot determine the distance to individual parcels of gas, there is no unique way to invert the observed $\ell$–$V$ diagram to determine the two-dimensional distribution of gas in the Galaxy; the projection into $\ell$ and $V$ space is highly degenerate.
Even if such a deprojection were available, we still could not use the flow pattern to deduce the galactic gravitational potential directly, since the gas is also subject to pressure forces and its motion is governed by the non-linear equations of fluid dynamics. Thus the data need to be interpreted by comparison with models. Binney et al. (1991) compare stellar orbits in a barred model with the CO and H I $\ell$–$V$ diagrams, which offers some insight but omits the effects of the strong shocks expected in gas flows in a bar. Subsequently, several numerical methods have been employed to construct improved models for the gas. Jenkins & Binney (1994) used sticky particles, Englmaier & Gerhard (1998) used smoothed particle hydrodynamics (SPH), while Fux (1997, 1999) combined SPH and $N$-body techniques to attempt to build a fully self-consistent model of the inner Milky Way. Fux (1999) has compared the gas kinematics in such a model to "arm" features in the CO and H I $\ell$–$V$ diagrams to constrain the properties of the bar; his approach is complementary to ours, concentrating on high-density regions of the $\ell$–$V$ diagram. Most modeling efforts have been devoted to observations of the dense molecular gas, while comparatively little attention has been devoted to the H I data. Here we focus on the $\ell$–$V$ diagram for the H I, which is less affected by two principal limitations of the molecular data: the $\ell$–$V$ diagram for the H I is both more symmetric and more complete than the corresponding CO plots. In particular, CO (Dame et al. 1987; Bally et al. 1988) is not detected where H I emission is present in some significant regions of the $\ell$–$V$ plane; for example, between $\ell=0°$ and $-6°$, the H I emission extends to $-270$ km s$^{-1}$ while the CO emission extends only to $-220$ km s$^{-1}$ (Figure 4 of Dame et al.). More importantly, H I emission extends to higher forbidden velocities over a wider angular range in comparison with that observed in CO. We attempt to place constraints on the properties of the Galactic bar by comparing the H I $\ell$–$V$ diagram with similar plots synthesized from many fluid-dynamical models in various potentials. The full gas velocity field allows us to determine which regions of the Galaxy are responsible for prominent features of the $\ell$–$V$ diagram. Our goal is not to identify a unique model, but rather to infer properties of the inner Galaxy that appear to be required by the data. We conclude that the Galaxy must have a strong bar that rotates fairly quickly and has a central density high enough to produce an inner Lindblad resonance. The bar must have a semi-major axis $a\sim 3$ kpc, and be viewed obliquely, with the bar major axis between 30° and 40° to the Sun–Galactic Center line.

## 2. The Galactic longitude–velocity diagram

### 2.1. Observational data

We use the H I observations of the inner Galaxy by Burton & Liszt (1978, 1983; and Liszt & Burton 1980), which produced the $\ell$–$V$ diagram shown in Figure 1. These data have uniform coverage of the longitude range $\ell=-11°$ to $+13°$, with spatial resolution 0.5°, well matched to the resolution of our simulations, and good velocity resolution (2.75 km s$^{-1}$) and sensitivity. H. Liszt kindly provided the data in electronic form. The spectra are taken on a 0.5° grid in $\ell$ and $b$; because we are comparing to 2-D simulations, we summed the data along the $b$ axis. We also smoothed in $V$ with a Gaussian of $\sigma=5.5$ km s$^{-1}$. A high-velocity H I cloud at $\ell=8°$, $b=4°$ and $V=210$ km s$^{-1}$ ("Shane's feature," Saraber & Shane 1974) was excluded from the dataset.
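In code, this reduction amounts to a sum over latitude and a one-dimensional Gaussian convolution in velocity. The sketch below is our own schematic, with hypothetical array shapes and a synthetic cube standing in for the real data.

```python
# Schematic of the reduction just described; our sketch, with hypothetical
# array shapes and a synthetic cube standing in for the real observations.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
nb, nl, nv, dv = 25, 49, 301, 2.75       # (b, l, V) grid sizes; dv in km/s
cube = rng.random((nb, nl, nv))           # stand-in for T_B(b, l, V)

lv = cube.sum(axis=0)                     # sum over latitude -> (l, V) plane
lv = gaussian_filter1d(lv, sigma=5.5 / dv, axis=1)  # smooth in V, sigma = 5.5 km/s

# The EVC is then read off as the most extreme velocity with emission above
# some threshold at each longitude (threshold and axis values hypothetical).
v = -412.5 + dv * np.arange(nv)
evc_neg = np.array([v[row > 0.5].min() for row in lv])
```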
Plots of individual latitude slices (Burton & Liszt 1978; Liszt & Burton 1980) show that the velocity "peaks" in Figure 1 are prominent at latitudes near $b=0°$. The broad band of emission (sometimes called the "main maximum") at $-100<V<100$ km s$^{-1}$ at all longitudes is present over the entire latitude range observed by Liszt & Burton ($-6°<b<6°$). Since the half-thickness of the gas layer is approximately 250 pc from inside the Solar radius down to a radius of 4 kpc, and the thickness may be only 100 pc inside 4 kpc (Mihalas & Binney 1981; Jackson & Kellman 1974), the band of emission is presumably from disk gas that is relatively close by. The velocity extent of the band is large for the velocity dispersion of the gas as derived by Gunn, Knapp & Tremaine (1979), even given the 1000:1 density contrast. It is presumably attributable to line-of-sight integration over substantial bulk motions in the disk, such as spiral arm streaming motions (Burton & Liszt 1983). Foreground gas is also responsible for 21 cm absorption against the central continuum source at $\ell=0°$, $b=0°$. This absorption appears at negative velocity (Burton & Liszt 1978, 1993) and is visible in the summed data in some of the intermediate contours in Figure 1, although it is not conspicuous in the extreme contour. The absorption at negative velocities implies that the negative-velocity gas at $\ell=0°$, $b=0°$ is between the Sun and the Galactic Center, while the positive-velocity gas at that position is behind the Center (Burton & Liszt 1978). The filled circles in Figure 1 mark the points of the observed extreme-velocity contour (EVC) we will use for comparison to the simulations. Because we are interested in the motions of the gas in the inner Galaxy, we do not use that portion of the EVC that appears to be substantially influenced by foreground disk gas, but we retain the data point at $\ell=0°$ since the extreme contour there is not much affected by absorption. The $\ell$–$V$ diagram is not perfectly two-fold rotationally symmetric in many respects. Here we simply note that the shapes of the velocity peaks in the EVC differ: the peak at positive longitude lies at $\ell=3°$, while the most negative velocity is reached at $\ell=-4°$, although the magnitudes are similar. More detailed plots of the H I $\ell$–$V$ diagram reveal other non-symmetric features in the interior of the diagram, including the well-known "3-kpc expanding arm" (e.g. Peters 1975; Burton & Liszt 1983, 1993), which is marginally visible in Figure 1 at $-3°>\ell>-9°$ near $-100$ km s$^{-1}$. Some investigators (e.g. Kerr 1967; Liszt & Burton 1980) have also presented evidence that the H I gas distribution in the inner Galaxy is tilted out of the Galactic plane. By summing the data over $b$, we have suppressed this aspect, which would be difficult to address in any case since our models are two-dimensional.

### 2.2. Interpreting the extreme-velocity contour

One must make assumptions in order to extract information on the structure of the Galaxy from the $\ell$–$V$ diagram. The simplest approach is to assume that the Galaxy is axisymmetric and the gas moves on circular orbits.
With this assumption (and others noted below), the $\ell$–$V$ diagram can be used to determine the rotation curve of the Galaxy interior to the Solar circle by the tangent-point method (cf. Gunn et al. 1979, Mihalas & Binney 1981). The critical feature of the $\ell$–$V$ diagram in this method is the extreme-velocity contour (EVC), which is the outer contour of the gas distribution in longitude–velocity space; it is the highest absolute radial velocity observed along the line of sight at each $\ell$. In the tangent-point method, only the EVC in the upper left and lower right quadrants is used; gas at forbidden velocities is ignored. The extreme observed velocity needs to be corrected for instrumental resolution and the velocity dispersion of the gas, which is assumed to have a uniform value (Gunn et al. 1979), to find the terminal velocity at each longitude, $v_t(\ell)$ (see Section 4.1 below). With the further assumptions that some H I gas exists at every tangent point and that the circular angular frequency, $\Omega(R)$, decreases monotonically from the center, $v_t(\ell)$ yields the Galactic rotation curve $\Theta(R)$ directly through the equation $\Theta(R_0|\sin\ell|)=|v_t(\ell)|+\Theta_0|\sin\ell|$. As the correction term for the circular velocity of the LSR is small at longitudes near 0°, the EVC on the maximum side (positive $V$ at positive $\ell$, negative $V$ at negative $\ell$) is approximately the rotation curve under the axisymmetric assumption. For circular orbits, on one side of the Galactic center all the gas should be coming towards the Sun, and on the other side it should be going away. Hence, the EVC on the non-maximum side, in the upper right and lower left quadrants of the $\ell$–$V$ diagram, should be featureless and close to 0 km s$^{-1}$ (as long as the circular frequency at $R_0$ is less than the circular frequency in the inner Galaxy, which is true for any reasonable rotation curve). The velocity dispersion of the gas and bulk motions in the disk will push the EVC beyond 0 km s$^{-1}$, but apart from these effects the non-maximum EVC should not tell us much. Figure 2 shows an $\ell$–$V$ diagram for a model with gas all on circular orbits. The rotation curve that gives rise to this $\ell$–$V$ diagram is plotted in Figure 3. The contrast with Figure 1 is instructive. Gas at forbidden velocities in the Milky Way is clearly inconsistent with a simple circular flow pattern. The EVC is still a useful probe of the Galactic mass distribution even when the gas is not on circular orbits, provided that the observed tracer is ubiquitous in the disk and that the non-circular motions are caused by streaming in a non-axisymmetric potential, as first proposed by de Vaucouleurs (1964). As long as the observations are sensitive enough to pick up the tracer in regions of low density, the EVC depends almost solely on the velocity field, and variations in the fraction of gas mass in a given tracer phase are much less important. Here we discount the alternative possibility that non-circular motions arise from explosions or other violent events near the Galactic Center (cf. Oort 1977).
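Under the circular-orbit assumption, both directions of the tangent-point method reduce to one line each. The following sketch is our own illustration, using a hypothetical rotation curve; only the $R_0$ and $\Theta_0$ values are taken from the text (Section 3.2).

```python
# Forward and inverse tangent-point relations for circular orbits; our sketch.
import numpy as np

R0, Theta0 = 8.5, 220.0                     # kpc and km/s, as adopted in Sec. 3.2
Theta = lambda R: 220.0 * np.tanh(R / 0.5)  # hypothetical rising-then-flat curve

def v_terminal(l_deg):
    """Terminal velocity at longitude l for an axisymmetric circular flow."""
    sl = np.abs(np.sin(np.radians(l_deg)))
    return Theta(R0 * sl) - Theta0 * sl     # peak line-of-sight speed, tangent point

def rotation_curve(l_deg, vt):
    """Invert terminal velocities via Theta(R0 |sin l|) = |vt| + Theta0 |sin l|."""
    sl = np.abs(np.sin(np.radians(l_deg)))
    return R0 * sl, np.abs(vt) + Theta0 * sl

l = np.arange(1.0, 13.5, 0.5)
R, Th = rotation_curve(l, v_terminal(l))    # recovers Theta(R) by construction
print(np.allclose(Th, Theta(R)))            # True
```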
Neutral hydrogen is ubiquitous in the Galactic disk and is readily detectable through its 21 cm emission. It is clearly more widespread than CO in the inner Galaxy, since there are no "holes" in the H I $\ell$–$V$ diagram (Figure 1), in contrast with that for the CO (e.g. Figure 4 of Dame et al. 1987, and Figure 4 of Bally et al. 1988). Additionally, as noted earlier, the negative-velocity peak of the CO $\ell$–$V$ diagram reaches only to $-220$ km s$^{-1}$ at $\ell=-2°$, while that peak reaches $-270$ km s$^{-1}$ at $\ell=-4°$ in the H I $\ell$–$V$ diagram, and the forbidden emission extends further in H I than in CO, especially for negative velocities at $0°<\ell<5°$. The interior of an $\ell$–$V$ diagram for CO shows much substructure with strong density contrasts, whereas that for H I exhibits only mild variations (Figure 1). Interior features, in both molecular and atomic gas, provide extra information to constrain models; e.g. Fux (1999) attempts to match them to an SPH gas flow in a model of the Galaxy. The additional substructure in molecular emission, which traces gas of higher density, is probably caused by variations both in the atomic fraction and in molecular emissivity (e.g. temperature). Such variations, even if they were well understood, would be very hard to model, however. The EVC of the H I $\ell$–$V$ diagram, on the other hand, is insensitive to density variations. All successful models of the inner Milky Way should therefore match it, provided only that there is some atomic gas everywhere in the flow. The smoothness of the EVC in Figure 1 gives us grounds to hope that this requirement is fulfilled.

## 3. Simulations of the gas flow

We use a two-dimensional grid-based gas dynamical code to simulate the gas flow in models for the galactic potential. The code was originally written by G. D. van Albada to model gas flow in barred galaxy potentials (van Albada 1985a, 1985b) and kindly provided by E. Athanassoula. She used it (Athanassoula 1992b) to study gas flow patterns in various barred potentials.

### 3.1. The fluid code

The code is a second-order, flux-splitting Eulerian grid code for an isothermal gas in an imposed gravitational potential representing the stellar component and halo of the Galaxy. We neglect the self-gravity of the gas in order to reduce computational requirements. We justify this omission on the grounds that the gas surface density is considerably less than that of the stellar bulge and disk, especially in the inner regions of the Galaxy with which we are primarily concerned (see Section 5.1 below). Our grid has 200 by 400 cells, each 50 pc square, and we enforce a 180° rotation symmetry, so that the grid is effectively 400 by 400. The grid is fixed with respect to the barred potential, and both rotate at a steady pattern speed; the bar is aligned at 45° to the grid axes. The time step is variable, chosen automatically via a Courant condition, and is generally approximately 0.1 Myr. The sound speed of the gas is taken to be 8 km s$^{-1}$ (cf. Gunn et al. 1979), corresponding to a temperature of $10^4$ K. Varying the sound speed within reasonable limits of a few km s$^{-1}$ does not materially affect the derived gas flow. By its nature, the code approximates the interstellar medium as an Eulerian fluid, smooth on scales of the grid cell size. Without some idealization it is hopeless to simulate the extremely complex dynamics of the multiphase ISM, which has structure on all observed scales and a vast assortment of energy inputs and outputs.
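As a quick numerical aside (ours, not in the original), the adopted sound speed and temperature are mutually consistent for gas with a roughly atomic mean molecular weight, here assumed to be $\mu\approx 1.27$:

```python
# Consistency check (ours): the isothermal sound speed c_s = sqrt(kT/(mu m_H))
# for T = 10^4 K and a roughly atomic mean molecular weight is ~8 km/s.
import numpy as np

k_B, m_H = 1.380649e-23, 1.6735575e-27   # J/K and kg
mu = 1.27                                 # mean molecular weight (assumed value)
T = 1.0e4                                 # K
print(np.sqrt(k_B * T / (mu * m_H)) / 1.0e3)   # ~8.1 km/s
```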
Some authors (Jenkins & Binney 1994; Combes 1996) have suggested that smooth-fluid models using the Euler equations, such as grid codes and smoothed-particle hydrodynamics, are not appropriate for the clumpy ISM, and have advocated various sticky-particle methods. Sticky-particle codes may be well suited to simulating the dynamics of the self-gravitating molecular cloud component, which Jenkins & Binney implicitly probed by comparing to CO observations. However, the H I in the neutral ISM is much less clumpy; it is not clear that the neutral ISM is made up of discrete clouds, especially over scales of 50 pc, the grid scale we use. Essentially, applying the Euler equations to the ISM simply asserts that the ISM has a pressure or sound speed defined in a coarse-grained sense, over scales greater than the code's resolution.

Englmaier & Gerhard (1997) used an SPH code to simulate flow in one of the model potentials that Athanassoula (1992b) used with the Eulerian grid code. For equivalent input parameters, Englmaier & Gerhard obtained results very similar to Athanassoula's, which reassures us that the simulations are not dependent on the fluid-dynamical algorithm. (Footnote: Englmaier & Gerhard found that increasing the sound speed of the gas to 20–25 $\mathrm{km\,s^{-1}}$ changed the flow pattern. However, such a large value implies an unreasonably high temperature for the ISM, and is inconsistent with the value found by Gunn et al. 1979.)

A limitation of particle codes is their inability to represent large density contrasts. By design, spatially adaptive particle codes resolve structure well in high density regions, but the finite number of particles precludes adequate representation of the fluid properties in very low density regions. Grid codes, on the other hand, cannot resolve spatial structure below a few grid cells, but can handle nearly any density contrast with no increase in overhead, and represent low and high density regions equally. In a case such as the gas in the Milky Way bar, where the geometry and scales of interest are largely fixed by the stellar potential, spatial adaptivity is less essential and grid codes are generally more efficient. The grid's advantage in density contrast is especially important since the gas in low density regions will prove crucial to match the observed emission in the forbidden quadrants of the $\ell$–$V$ diagram, as discussed further in Section 5.2.

### 3.2. Simulation procedure

We begin each simulation in a quasi-equilibrium state, with the mass of the bar redistributed in an axisymmetric configuration, the gas on circular orbits, and a uniform gas surface density of $5\ \mathrm{M_\odot\,pc^{-2}}$. We turn on the bar by linear interpolation between the initial axisymmetric state and its fully barred shape, reaching its final state in 0.1 Gyr. The bar growth time is approximately equal to the orbital period at a radius of 3 kpc. Different choices for the growth time and initial density do not particularly affect the results, save that the final gas density distribution scales overall proportionally to the constant chosen for the initial density. We continue the simulation to 0.2 Gyr to allow the gas flow to “settle” after the bar has grown, and to 0.3 Gyr to verify that the flow has stabilized.
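The linear bar turn-on described above can be sketched as follows; this is a hedged illustration under the stated assumptions, and `phi_axisymmetric` and `phi_barred` are hypothetical placeholders, not functions from the actual code.

```python
def total_potential(x, y, t, t_grow=0.1):
    """Grow the bar linearly over t_grow (in Gyr): the bar's mass starts
    redistributed axisymmetrically and is gradually replaced by the fully
    barred shape, so the total mass is conserved throughout."""
    f = min(t / t_grow, 1.0)        # 0 = axisymmetric start, 1 = full bar
    return (1.0 - f) * phi_axisymmetric(x, y) + f * phi_barred(x, y)
```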
The gas response can never reach a completely steady state, because the gas inside co-rotation continuously loses energy in shocks and flows toward the center. (Footnote: The gas build-up in the center can be significant if the code is run for many rotation periods, e.g. several Gyr. This effect can be lessened by the use of a “gas-recycling” provision in the code. However, we found that gas recycling caused long-period oscillations in the flow with the fine grid used here, probably because it redistributes energy over the grid (G. van Albada, private communication). The oscillations do not occur on coarser grids, such as those used by Athanassoula (1992b), presumably due to the higher numerical diffusivity. Since we are not interested in the long-term evolution of the flow, we avoid this numerical problem by turning gas recycling off.) Gas continues to accumulate in the center, but there is very little change in the gas velocity field from 0.2 to 0.3 Gyr.

We use the gas density and velocity fields at 0.2 Gyr to construct $\ell$–$V$ diagrams as would be seen by an observer in the plane of the simulation. The observer is placed 8.5 kpc from the Galactic center and in the LSR, moving with a velocity of $\Theta_0=220\ \mathrm{km\,s^{-1}}$ toward $\ell=90^\circ$, and at a given viewing angle – the angle between the bar major axis and the Sun–Galactic Center line. (The effect of a different LSR motion is discussed below in Section 6.2.) The viewing angle is varied to find the best value, as detailed below in Section 4.2. For each cell in the simulation grid, we calculate the longitude of the cell and the angle it subtends, and the radial velocity of the gas in the cell. The gas density in the cell and its distance from the Sun determine the observed brightness. The brightness distribution is convolved and sampled in longitude to model the angular beamwidth of the telescope and the $0.5^\circ$ sampling of the observed positions, and convolved in velocity to include the effects of the sound speed of the gas ($c_s=8\ \mathrm{km\,s^{-1}}$) and the velocity resolution of the observations (smoothed with a Gaussian of $\sigma=5.5\ \mathrm{km\,s^{-1}}$).

### 3.3. Model gravitational potentials

Our models for the gravitational potential are similar to those used by Athanassoula (1992a,b). They have three components: an ellipsoidal bar, a centrally concentrated bulge, and an extended component to represent both the disk and halo. We model the bar as a prolate Ferrers $n=1$ ellipsoid with semimajor axis $a$ and semiminor axis $b$. The bar density is given by

$$\rho(x,y,z)=\begin{cases}\rho_{0,\mathrm{bar}}\,(1-u^2)&\text{if }u^2<1,\\ 0&\text{if }u^2>1,\end{cases}\qquad(1)$$

where

$$u^2=\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{b^2}.\qquad(2)$$

This model for the bar is convenient because its gravitational field is analytic (Binney & Tremaine 1987), but it is a crude model for the real bar (e.g. Dwek et al. 1995). We compensate for one of its principal weaknesses by adding a bulge component. Ferrers bars are not very centrally concentrated; the bulge component allows us to increase the central concentration and to adjust its strength relative to the bar.
The bulge is a modified Hubble profile sphere with core radius $r_c$ and density given by

$$\rho(r)=\rho_{0,\mathrm{bul}}\left[1+\left(\frac{r}{r_c}\right)^2\right]^{-3/2}.\qquad(3)$$

The “bulge” component can be viewed as effectively part of the bar; our treatment of the two as separate analytical components does not imply that we regard them as distinct, either photometrically or kinematically. We use $M_{\mathrm{bul}}$ to refer to the bulge mass within 1 kpc of the Galactic center, since this is most analogous to the central concentration of the bar; the total mass of a modified Hubble profile sphere diverges at large radii.

The extended component has the potential

$$\Phi(R)=\Phi_0\,\ln\left(1+\sqrt{1+(R/R_c)^2}\right),\qquad(4)$$

where $R_c$ is the scale length. If all the mass that gives rise to this potential were to reside in the disk, it would have the surface density of a Rybicki disk (given by Zang 1976 and derived independently by Hunter, Ball & Gottesman 1984):

$$\Sigma(R)=\Sigma_0\,\frac{R_c}{\sqrt{R_c^2+R^2}},\qquad(5)$$

with $\Phi_0=2\pi G\Sigma_0 R_c$. The rotation curve of this potential becomes asymptotically flat at large radius, making it suitable for modeling the contribution both of the axisymmetric part of the stellar disk and of the dark matter halo. As the simulation is two-dimensional, it is insensitive to the three-dimensional forms of these density distributions; any distribution that yielded similar forces in the plane could be substituted. Thus, mass can be traded off between the axisymmetric components; for example, it is unimportant that the density of the Hubble bulge falls off slowly, since the small additional contribution to the rotation curve (cf. Figure 3) could be absorbed into the rotation curve of the disk or halo.

The total potential is specified by seven parameters: a central density and scale length for each of the bulge and “disk,” and a central density and two axis lengths for the bar. Our only constraint is that the rotation curve should be roughly flat outside $R_0$, with a circular velocity of 200–220 $\mathrm{km\,s^{-1}}$ at 8.5 kpc. An eighth parameter, the Lagrange or corotation radius $R_L$, is required to fully specify a model; choosing $R_L$ is equivalent to specifying a pattern speed for the bar. The gas flow pattern is determined by the adopted potential, but the $\ell$–$V$ diagram further depends on the viewing angle $\varphi_{\mathrm{LSR}}$ between the Sun–Galactic center line and the major axis of the bar.

We varied the parameters by trial and error and examined the $\ell$–$V$ diagrams after each run to learn the effects of changes in bar size, bar mass, bulge mass, Lagrange radius and so on. Our goal was to find a model or models that matched the observations reasonably well, rather than systematically to explore the parameter space, which is impractical given the large number of parameters. We did run some series to explore the effect of varying a parameter, most notably, varying the Lagrange radius while holding all other parameters constant. In all, we ran 51 models; their parameters are given in Table 1. The table is sorted by the goodness of fit as measured by the RMS deviation in velocity between model and data (discussed further in Section 4.2). The best fit viewing angle and the goodness of fit are tabulated in the last two columns of Table 1.
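For concreteness, here is a minimal sketch (ours, with placeholder parameter values rather than any model's actual numbers) of the circular-velocity contributions implied by equations (3)–(5): the enclosed mass of the modified Hubble sphere has a closed form, and the disk+halo term follows from $v_c^2 = R\,d\Phi/dR$ applied to equation (4).

```python
import numpy as np

G = 4.301e-6                                  # kpc (km/s)^2 / Msun

def v_circ_bulge(R, rho0, rc):
    """v_c from the analytic enclosed mass of the modified Hubble sphere."""
    x = R / rc
    M = 4.0 * np.pi * rho0 * rc**3 * (np.arcsinh(x) - x / np.sqrt(1 + x**2))
    return np.sqrt(G * M / R)

def v_circ_disk(R, Phi0, Rc):
    """v_c^2 = R dPhi/dR for Phi = Phi0 ln(1 + sqrt(1 + (R/Rc)^2))."""
    u = np.sqrt(1.0 + (R / Rc)**2)
    return np.sqrt(Phi0 * (R / Rc)**2 / (u * (1.0 + u)))

R = np.linspace(0.1, 12.0, 200)               # kpc
# Placeholder values: with Phi0 = (220 km/s)^2 the disk+halo curve tends
# to ~220 km/s at large R, as required by the flat-rotation constraint.
vc = np.hypot(v_circ_bulge(R, rho0=5e9, rc=0.2),
              v_circ_disk(R, Phi0=220.0**2, Rc=1.0))
```

Quadrature addition of the components is exact here because the squared circular speeds of the separate potentials add linearly.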
The models are numbered best to worst; the number, naturally, does not correspond to the order in which the models were run, since we improved the models by learning from past results – Model 1 was actually the 46th model run.

## 4. Comparison to observations

We compared the outer envelope – the extreme-velocity contour – of the synthesized $\ell$–$V$ diagrams to that of the data. The observed EVC used is the contour of 0.125 K degrees of antenna temperature summed over $b$, or $1.25\ \mathrm{M_\odot\,kpc^{-2}\,deg^{-1}/(km\,s^{-1})}$ of atomic gas (H and He), using the calibration given by Liszt & Burton (1980). The data points used are shown by the filled circles in Figure 1. As discussed in Section 2, those portions of the EVC that show signs of contamination from foreground disk emission are excluded from comparisons to models.

### 4.1. The EVC contour level

The position of the observed extreme-velocity contour is determined by the actual terminal velocity envelope of the gas, extended by the velocity broadening due to the gas sound speed and the instrumental resolution. Since the flux level at which the EVC can be observed is also limited by the noise in the observations, the EVC is not an intrinsic property of the Galaxy, but also depends on the observational parameters. In the tangent-point method, the observed EVC must be corrected to yield the terminal velocity envelope. In practice, it is conventional to assume that (1) the difference between the terminal velocity $v_t(\ell)$ and the EVC is some constant $\Delta V$, and (2) $\Delta V$ can be determined by observations near $\ell=\pm 90^\circ$, where the actual terminal velocity is expected to be zero (cf. Gunn et al. 1979). As the data we are using do not cover $\ell=\pm 90^\circ$, we cannot make use of this method to derive $\Delta V$.

In order to compare the observations and simulations, we have constructed simulated $\ell$–$V$ diagrams which take into account the velocity dispersion of the gas and the instrumental resolution. But the absolute level at which to place the EVC in the simulated $\ell$–$V$ diagram is not constrained, since we do not know $\Delta V$ for the observations. Fortunately, both simulated and observed $\ell$–$V$ diagrams have fairly sharp edges, in the sense that the flux falls off rapidly with increasing $|V|$ – see Figures 1 and 4. The lowest contours simply trace the falloff profile of the velocity dispersion and instrumental resolution. We compared simulated $\ell$–$V$ diagrams to that observed, examining the fall-off at the edges of the distribution, to set the level for the EVC in the simulated $\ell$–$V$ diagram. Placing the EVC at $1.7\ \mathrm{M_\odot\,kpc^{-2}\,deg^{-1}/(km\,s^{-1})}$ of total simulation gas produced a reasonably good match, but EVC levels of 1.25–2.5 $\mathrm{M_\odot\,kpc^{-2}\,deg^{-1}/(km\,s^{-1})}$ were almost equally acceptable. Because the $\ell$–$V$ diagrams do have sharp edges, changing the flux level of the comparison EVC, even by a factor of 2, does not have a strong effect. We ran comparisons of the entire series of models at EVC contour levels from 0.625–5.0 $\mathrm{M_\odot\,kpc^{-2}\,deg^{-1}/(km\,s^{-1})}$ and verified that the small changes caused by different choices for the EVC level produce only minor changes in the rank ordering of models, and do not affect our conclusions.
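Reading an EVC off a synthesized $\ell$–$V$ map at a chosen contour level can be sketched as below; the array layout and the ascending velocity axis are our assumptions for illustration.

```python
import numpy as np

def extreme_velocity_contour(lv_map, v_axis, level):
    """lv_map: (n_l, n_v) intensities; v_axis ascending [km/s].
    Returns, per longitude, the most negative and most positive velocity
    at which the intensity still reaches the contour level."""
    v_lo = np.full(lv_map.shape[0], np.nan)
    v_hi = np.full(lv_map.shape[0], np.nan)
    for i, row in enumerate(lv_map):
        above = np.nonzero(row >= level)[0]
        if above.size:
            v_lo[i], v_hi[i] = v_axis[above[0]], v_axis[above[-1]]
    return v_lo, v_hi
```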
We note that comparing the simulation EVC at 1.7 $\mathrm{M_\odot\,kpc^{-2}\,deg^{-1}/(km\,s^{-1})}$ of total gas to the observed H I EVC at 1.25 $\mathrm{M_\odot\,kpc^{-2}\,deg^{-1}/(km\,s^{-1})}$ of atomic gas could be interpreted to mean that the gas is 75% atomic; levels of 1.25–2.5 $\mathrm{M_\odot\,kpc^{-2}\,deg^{-1}/(km\,s^{-1})}$ would imply atomic fractions of 100%–50%. However, the comparison is not reliable for this purpose. The edges of the EVC largely represent gas in low density regions, and the molecular fraction is undoubtedly higher in high density regions – CO emission does not generally extend to the velocities of the H I EVC (Dame et al. 1987). Additionally, the inferred fraction would be changed if the value of the initial gas surface density used in the simulations were changed. It is, however, comforting that the inferred atomic fraction is close to but less than 1. (The actual atomic mass fraction in the inner Galaxy is perhaps 50%; cf. Bronfman et al. 1988; Bloemen et al. 1986.)

### 4.2. Best fit and viewing angle

To rank the models by the quality of their fit to the data, we compute the root-mean-square deviation in velocity between the location of the simulated EVC and the observed data points. The RMS velocity deviation is not an “error” in a statistical sense; it serves as a figure-of-merit for ranking the models. The RMS places a relatively high weight on large deviations, which penalizes gross differences between model and data more than large numbers of small differences.

For a given model, the position of the observer with respect to the bar must be specified to construct an $\ell$–$V$ diagram. We define the “viewing angle” $\varphi_{\mathrm{LSR}}$ to be the angle between the bar major axis and the Galactic Center–to–Sun line, so that $0^\circ$ is an end-on bar, $90^\circ$ is side-on, and values between $0^\circ$ and $90^\circ$ put the near end of the bar in the first Galactic quadrant ($0^\circ<\ell<90^\circ$). We determined the best-fit viewing angle for each model iteratively by synthesizing $\ell$–$V$ diagrams and computing the RMS deviation at viewing angle intervals of $10^\circ$, $4^\circ$, and $1^\circ$, successively, narrowing the search interval at each step. The best-fit viewing angle for each model and the corresponding RMS velocity deviation are tabulated in Table 1, sorted by the goodness of fit. The viewing angle given is for the best fit between $0^\circ$ and $90^\circ$; these are the realistic models since many lines of evidence place the near end of the bar in this quadrant. For the few models that have a better fit outside this quadrant, that result is given in the table footnotes.

## 5. Results: I. The best model

### 5.1. Properties of the model

Our primary result is that we have found a model which reproduces the outer contour of the $\ell$–$V$ diagram fairly well. This model is model 1 in Table 1; a number of the models that are runners-up are closely related to it. Model 1 has a bar with semimajor axis 3.6 kpc and Lagrangian radius 5.0 kpc, corresponding to a pattern speed of 41.9 $\mathrm{km\,s^{-1}\,kpc^{-1}}$. The best-fit $\ell$–$V$ diagram is shown in Figure 4 and the RMS velocity deviation is 16.54 $\mathrm{km\,s^{-1}}$. The minimum in RMS deviation is well localized at a viewing angle of $34^\circ$ to the bar major axis, although changes of a few degrees ($<5^\circ$) are possible without greatly worsening the fit. The localization in RMS deviation is similar for all of the better models.
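The coarse-to-fine angle search of Section 4.2 is simple to sketch; `rms_deviation` is assumed (hypothetically) to synthesize the $\ell$–$V$ diagram at a trial angle and return the RMS velocity offset of its EVC from the observed points.

```python
def best_viewing_angle(rms_deviation, lo=0.0, hi=90.0):
    """Search on successively finer grids of 10, 4, then 1 degree."""
    phi_best = None
    for step in (10.0, 4.0, 1.0):
        grid = [lo + k * step for k in range(int((hi - lo) / step) + 1)]
        phi_best = min(grid, key=rms_deviation)
        lo = max(0.0, phi_best - step)          # narrow the bracket
        hi = min(90.0, phi_best + step)
    return phi_best
```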
The effects of changes in the viewing angle are discussed further in Section 6.

Figure 5 shows the surface density distribution of the combined bar, bulge, and disk+halo components, as projected along the $z$ axis of the Galaxy. The figure shows the central 8 kpc by 8 kpc region of the model, in what is essentially a face-on view – although the contours are in surface density of mass, not light. This plot demonstrates the influence of the bulge component, which makes the central concentration of the bar much higher than that of a Ferrers bar in isolation. It is also clear that the full surface density distribution is less elongated than the bar component alone, with an axis ratio of about 3:1 as compared to the bar component's axis ratio of 4:1. When the mass density is integrated over $-100<z<100$ pc, the lower estimate for the thickness of the gas layer, the resulting distribution is similar to that of Figure 5 but with mass surface density lower by a factor of about four. Within this range of $z$, at the bar end the mass surface density is 140 $\mathrm{M_\odot\,pc^{-2}}$; in the central region at $R<0.5$ kpc the mean mass surface density is 2900 $\mathrm{M_\odot\,pc^{-2}}$. A comparison of these mass surface densities with the gas surface density suggests our neglect of gas self-gravity is justified.

The gas density in the innermost 8 kpc square of this model is shown in Figure 6, also in a face-on view. The long, straight high density features in the bar are shocks, with transverse velocity jumps of $\sim 200\ \mathrm{km\,s^{-1}}$, extending out to 2.9 kpc from the galactic center. They are parallel but offset from each other; near the center the straight shocks join onto an oval or nuclear ring of high density gas that is also the location of shocks. The semi-major axis of this oval is 0.5 kpc. The gas surface density in the shocks is 5–20 $\mathrm{M_\odot\,pc^{-2}}$; within $R<0.5$ kpc the mean gas surface density is 130 $\mathrm{M_\odot\,pc^{-2}}$. The straight, offset shocks and the inner oval are characteristic of gas flow in strongly barred potentials with an inner Lindblad resonance (Athanassoula 1992b).

Dust lanes with morphologies similar to the high density gas in Figure 6 are observed in many barred galaxies. The dust lanes are presumably caused by the high gas density at the shock (Prendergast 1962, unpublished; see also van Albada & Sanders 1982; Prendergast 1983; Athanassoula 1992b). Spectroscopy of barred galaxies shows sharp velocity jumps at the location of the dust lane (e.g. Pence & Blackman 1984; Lindblad et al. 1996; Regan, Vogel & Teuben 1997; Weiner et al. 1999). Figure 6 also shows that there is little gas in the lens region of the galaxy, inside 3 kpc; barred galaxies often show a central hole in the gas distribution swept clear by the angular momentum transport of the bar (e.g. NGC 1300, England 1989; NGC 1398, Moore & Gottesman 1995; and NGC 4123, Weiner et al. 1999). Outside 3 kpc, the gaseous disk is relatively quiescent; the bar does not drive a large response in the outer disk. The disk does not exhibit spiral patterns outside the bar radius; spirals in the outer disk could be driven by spirals in the stellar disk and/or the self-gravity of the gas, which we have neglected in order to concentrate on the inner Galaxy.

The gas velocity field as seen in a non-rotating frame, in the inner 8 kpc $\times$ 8 kpc region, is shown in Figure 7. For clarity, we have plotted only every fourth cell.
The velocity changes abruptly at the shocks along the density peaks. Essentially, gas in the bar moves up to the shock at relatively high velocity and hits the shock, dissipating energy. The post-shock gas then streams back down the bar, gaining velocity quickly as it moves away from the shock and falls down the potential well. Gas streamlines in the bar are elongated along the bar, in the manner of the $x_1$ family of stellar orbits in bars, but are clearly not symmetric about the major axis of the bar, unlike the $x_1$ orbits. The shocks are located along the leading edge of the streamlines and are approximately parallel to the bar; the major axis of the elongated streamlines is rotated approximately $5^\circ$ ahead (toward the leading side) of the bar major axis. This angle, which we will refer to as the “lead angle,” is closely related to the pattern speed of the bar, to be discussed further in Section 6.3. Near the center of the bar, the major axes of the streamlines change, so that the streamlines are elongated across the bar more than along it, similar to the $x_2$ family of stellar orbits present in bars with inner Lindblad resonances (Athanassoula 1992a,b). The central oval of high gas density corresponds to this family of streamlines. Again, the streamlines are rotated by an oblique angle with respect to the bar, unlike the $x_2$ stellar orbits, which are perpendicular to the bar major axis.

### 5.2. Inverting the projection into $\ell$–$V$ space

The plot of the gas streamlines offers some understanding of the features in the $\ell$–$V$ diagram, but the effect of projection into $\ell$–$V$ space is much clearer in Figure 8. This figure plots the radial velocity observed in Model 1 as a function of position on the grid, i.e. over the plane of the Galaxy, showing the radial velocity before it is projected into the $\ell$–$V$ diagram. The gas at forbidden velocities moves toward the Sun at $\ell>0^\circ$, the side where most of the gas is moving away, and vice versa at $\ell<0^\circ$. It is clear from Figure 7 and Figure 8 that forbidden velocities belong to low-density gas approaching the shocks. The preshock region with forbidden velocities extends all the way out to the shock tip at 2.9 kpc. However, the magnitude of the forbidden radial velocities in the preshock region falls below 100 $\mathrm{km\,s^{-1}}$ at about 1.5 kpc from the Galactic center, which corresponds roughly to $\ell=\pm 6^\circ$ for the viewing angle of $34^\circ$. Past this point, emission at forbidden velocities is obscured in the Milky Way by the band of emission from foreground gas.

Identifying the emission in the forbidden quadrants with the low-density preshock gas may explain why the forbidden emission is much more extensive in H I than in CO, while the peaks in the H I $\ell$–$V$ diagram, which come from higher density regions, are present in the CO $\ell$–$V$ diagram. The identification of forbidden velocities with the preshock gas also illuminates some of the difficulty Jenkins & Binney (1994) had matching their sticky-particle models to the data. The $\ell$–$V$ diagrams they presented have very little emission in the forbidden quadrants. However, their maps of gas density in the plane of the simulation show that the apparent lack of emission is because there are very few particles in the preshock regions at any given time, as discussed in Section 3.
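A map like Figure 8 follows directly from the in-plane velocity field; here is a hedged sketch under our own conventions (Galactic center at the origin, Sun at $(R_0,0)$, LSR moving at $\Theta_0$ toward $\ell=90^\circ$, i.e. along $+y$), with names that are ours rather than the paper's.

```python
import numpy as np

R0, Theta0 = 8.5, 220.0                       # kpc, km/s
sun_pos = np.array([R0, 0.0])
sun_vel = np.array([0.0, Theta0])             # LSR motion toward l = 90 deg

def radial_velocity_map(pos, vel):
    """pos [kpc], vel [km/s]: (N, 2) arrays in the inertial frame.
    Returns the line-of-sight velocity of each point relative to the LSR."""
    d = pos - sun_pos
    dhat = d / np.linalg.norm(d, axis=1, keepdims=True)
    return np.einsum('ij,ij->i', vel - sun_vel, dhat)

def longitude_deg(pos):
    """Galactic longitude: l = 0 toward the center, positive toward +y."""
    d = pos - sun_pos
    return np.degrees(np.arctan2(d[:, 1], -d[:, 0]))
```

With these conventions, gas on circular orbits interior to the Sun gives positive radial velocities throughout the first quadrant, so negative velocities at $0^\circ<\ell<90^\circ$ are indeed “forbidden.”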
The high and narrow peaks in the EVC at $\ell\approx+3^\circ$ and $-4^\circ$, or $\sim 0.6$ kpc projected distance from the Galactic center, have no counterparts in the equivalent axisymmetric model (Figure 2). The origin of these peaks can also be understood from Figure 7; the elongation of the orbits caused by the strong ellipticity of the gravitational potential results in high gas velocities roughly parallel to the bar major axis. The observed high radial velocities arise from the gas on elongated orbits just as it passes the oval of high-density gas (Figure 8). The EVC declines rapidly beyond the peak because the gas at larger radii does not fall as deeply into the bar's potential well, and is on less elongated orbits. Many authors (e.g. Gunn et al. 1979; Gerhard & Vietri 1986; Liszt 1992; Burton & Liszt 1993) have noted that the peaks in the EVC and the rapid decline imply an unusual rotation curve if the gas is assumed to move on circular orbits; the inferred rotation curve also shows a sharp rise and rapid decline. These features are more naturally explained by gas flow in a triaxial potential (e.g. Gerhard & Vietri 1986, Burton & Liszt 1993). Simulations such as model 1 show that not only the EVC peaks, but also the forbidden emission, are accounted for by gas flows in a strong bar. As Burton & Liszt emphasize, comparisons with a derived rotation curve instead of the full $\ell$–$V$ diagram both embody incorrect assumptions about the inner Galaxy and discard valuable data from the forbidden quadrants of the $\ell$–$V$ diagram.

Figure 8 can also be used to determine the location within the plane of the Galaxy of a feature in the $\ell$–$V$ diagram, or an object whose longitude and radial velocity are known but whose distance is uncertain. For example, the 3-kpc expanding arm goes approximately through the points $(\ell,V)$ = ($-10^\circ$, $-100\ \mathrm{km\,s^{-1}}$), ($-5^\circ$, $-75\ \mathrm{km\,s^{-1}}$), ($0^\circ$, $-50\ \mathrm{km\,s^{-1}}$), ($+2.5^\circ$, $-35\ \mathrm{km\,s^{-1}}$) (Liszt & Burton 1980). Locating these points on Figure 8 shows that they lie approximately on an arc centered on the Galactic center and of $\sim 2.5$ kpc radius, suggesting that the 3-kpc arm could be a spiral arm at about that radius with a small pitch angle, and that its motion is consistent with the overall Galactic velocity field, removing the need for large anomalous expansion velocities. In fact, an arm at approximately the right position is visible in Figure 6.

We note that even though the simulation is bisymmetric, the synthesized $\ell$–$V$ diagram has some asymmetry because one end of the bar is closer to the Sun than the other. The observed $\ell$–$V$ diagram is somewhat more asymmetric than the model, however. We cannot rule out the possibility that the observed asymmetry is due to actual asymmetries in the gas distribution or the shape of the Galaxy. However, the asymmetries in H I are considerably smaller than those in the CO $\ell$–$V$ diagram (Dame et al. 1987; Bally et al. 1988). The most obvious deviation of this model from the observations is that it is not as strongly peaked at positive $\ell$ as the data, although this is essentially due to the asymmetry in the peaks of the data, since the model compromises by slightly overestimating the peak at negative $\ell$.
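The distance diagnostic described above can likewise be sketched: scan along a sight line and keep the positions where a model radial-velocity field (e.g. the `radial_velocity_map` of the previous sketch, suitably interpolated onto arbitrary points) matches the observed velocity. The tolerance is an arbitrary illustrative choice.

```python
import numpy as np

def locate_feature(ell_deg, V_obs, model_vrad, R0=8.5, tol=5.0):
    """Return in-plane (x, y) positions [kpc] along sight line ell_deg
    where the model LSR radial velocity lies within tol [km/s] of V_obs."""
    s = np.linspace(0.1, 16.0, 800)            # distance from the Sun, kpc
    l = np.deg2rad(ell_deg)
    x = R0 - s * np.cos(l)                     # same conventions as above
    y = s * np.sin(l)
    hit = np.abs(model_vrad(x, y) - V_obs) < tol
    return x[hit], y[hit]
```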
The model also produces a strong diagonal feature in the interior of the $\ell$–$V$ diagram from about ($+3^\circ$, $+100\ \mathrm{km\,s^{-1}}$) to ($-3^\circ$, $-100\ \mathrm{km\,s^{-1}}$) which is not present in Figure 1. This feature is caused by the nuclear ring of high density gas, which is not seen in the observations both because the gas is probably in the $\mathrm{H_2}$ phase (cf. Rubin, Kenney & Young 1997) and because those parts of this feature with $|V|<100\ \mathrm{km\,s^{-1}}$ are obscured by the “main maximum” of foreground emission from disk gas.

The gas density and velocity fields in our model 1 are consistent with those observed in external barred galaxies. In particular, the straight shock regions with high gas densities can be identified with the straight dust lanes along the bar seen in many barred galaxies, which are generally thought to be the locations of shocks (Prendergast 1962, unpublished; see also e.g. Prendergast 1983; van Albada & Sanders 1982; Athanassoula 1992b).

Model 1 provides the best fit among our models, but it is by no means a unique solution to the problem of reproducing the $\ell$–$V$ diagram. A slightly different choice of parameters could conceivably do better, and it is almost certain that some potential with components other than the particular analytic forms we used could improve on Model 1. However, the Galaxy is likely to resemble Model 1 in certain major respects, such as viewing angle, bar size, possession of an ILR, and high pattern speed. These conclusions are partly drawn from our experience with other, less well-fitting models, which we now discuss.

## 6. Results: II. Other models

In this section we describe other models to illustrate the influence of variations in some of the major parameters. This exercise allows us to infer the properties that a successful model is likely to possess in order to reproduce the observations. A natural question to ask is whether the adverse consequences of changing one parameter can be compensated for by changes to other parameters. In general, the effects of the parameters are sufficiently interlinked that attempting to compensate by making one change has other unintended consequences. Given the number of parameters, it is impractical to test for all possible compensatory changes, but we do not believe that large variations in important parameters can be compensated away. Although Model 1 was one of the last models to be run, it is a close variant of Model 3, which we had tried much earlier (our 24th run); we tried a number of variations to improve Model 3 before actually succeeding. Our experience makes it seem unlikely that some other radically different model could fit equally well or better, but we cannot rule out the possibility.

### 6.1. Changes in viewing angle

Changing the angle from which Model 1 is viewed is not properly a different model for the potential, but can drastically change the resulting $\ell$–$V$ diagram. Figure 9 illustrates the systematic changes that occur when Model 1 is viewed at angles $-10^\circ$, $-5^\circ$, $+5^\circ$, and $+10^\circ$ from the optimum value of $\varphi_{\mathrm{LSR}}=34^\circ$. Viewing a model more nearly end-on than optimum, as in Figure 9(a) and (b), produces both higher peaks in the EVC and steeper declines from the peaks. It also reduces the extent of the gas in the forbidden quadrants, relative to the height of the peaks. Once again, reference to Figure 7 reveals the reasons for these changes.
In a more end-on view of the bar, the elongated orbits that produce the velocity peaks are projected more onto the line of sight of the observer, making the peaks higher. Counter to what might be expected, the peaks do not move significantly closer together in a more end-on view because the streamlines in this part of the flow are curved, and the region contributing to the peaks rotates somewhat. The curve of the streamlines is caused by the presence of an ILR, because the $x_2$ orbit family forces the elongated streamlines in the inner region of the bar away from the center. The more end-on view also means that the region with highly to moderately elongated orbits subtends a smaller angle, and so the fall-off with increasing $|\ell|$ is more rapid. The relative deficiency of gas in the forbidden quadrants occurs because the shocks, and the preshock regions responsible for the forbidden emission, subtend a smaller angle when the bar is viewed more end-on. In the more end-on view, the projected components of the velocities of the preshock gas are larger, which compensates somewhat, but the slope of the decline in the EVC from the peaks into the forbidden quadrants is steeper. Clearly, the peaks could be lowered by reducing the central density of the model, but the more end-on view would then yield too little emission in the forbidden quadrants. The effects of a more side-on view of the bar, as seen in Figure 9(c) and (d), are essentially exactly the opposite. The velocity peaks drop and their slope is gentler. The extent of the gas in the forbidden quadrants increases, but the lower projected velocities give a gentler slope to the EVC.

Gross variations in viewing angle, to the point where, for example, a model is viewed fully side-on at $\varphi_{\mathrm{LSR}}\approx 90^\circ$, can produce $\ell$–$V$ diagrams that deviate somewhat from these rules of thumb. For example, some models such as numbers 6, 9, 12, and 18 can produce high velocity peaks at side-on viewing angles because the innermost streamlines derived from $x_2$ orbits (approximately perpendicular to the bar) are viewed end-on. These models are of little practical interest, since a number of other lines of evidence rule out such large viewing angles – for example, a grossly side-on view cannot produce the magnitude offset between bulge stars at positive and negative longitudes, as shown by Stanek et al. (1997). (We note that models 6, 9, 12, and 18 are all slow bars in which $R_L\gtrsim 2.4a$; see below.)

### 6.2. Motion of the LSR

The $\ell$–$V$ diagrams were constructed by assuming that the LSR is moving with a circular (tangential) velocity $\Theta_0=220\ \mathrm{km\,s^{-1}}$ relative to the Galactic Center, with no radial motion. We tested the effect of assuming a different velocity of the LSR relative to the Galactic Center. A radial motion of $-5$ to $+10\ \mathrm{km\,s^{-1}}$, positive outward, can be accommodated; values outside this range significantly worsen the models' fit to the data. The best values of the radial motion are between 0 and $+5\ \mathrm{km\,s^{-1}}$. The fits are not sensitive to reasonable variations of the circular speed, since the data are near $\ell=0^\circ$; values of $\Theta_0$ from 160 to 240 $\mathrm{km\,s^{-1}}$ were tested and yielded acceptable fits. Varying the LSR motion has a minimal effect on the relative ranking of the models. The non-circular motion predicted by the models for gas at the solar position is small.
For Model 1, the gas at the solar position has a tangential velocity of 211 $\mathrm{km\,s^{-1}}$, and a radial motion of $-0.7\ \mathrm{km\,s^{-1}}$ (inward). Model 1 has an outer Lindblad resonance (OLR) near the solar position, but the gas is on an essentially circular orbit. The OLR could have observable effects on the kinematics of stars in the solar neighborhood, in either mean velocity or dispersion. The nature of the effects is not simple to predict (cf. Kalnajs 1992, Kuijken & Tremaine 1992, Weinberg 1994); moreover Dehnen's (1998) analysis of Hipparcos data shows that the velocity structure of nearby stars is quite complicated.

### 6.3. Varying the pattern speed

We created a sequence of models including Model 1 to test the effect of varying the Lagrange radius or, equivalently, the pattern speed of the bar. The sequence in Lagrange radius $R_L=4$, 5, 6, 7, and 8 kpc yielded models 2, 1, 5, 8, and 4 respectively. This sequence includes most of the best-fitting models (Model 3 is closely related). (Footnote: The groups of models {33, 26, 22, 6}, {23, 12, 9}, {19, 13}, and {21, 18} also comprise sets where only the Lagrange radius is changed – the effects are similar.) Figure 11 shows face-on views of the gas density in this sequence of models, like that of Figure 6 for Model 1; Figure 10 shows $\ell$–$V$ plots for Models 2, 5, 8, and 4, to be compared with Figure 4.

The streamlines in Model 1 are not symmetric about the bar major axis; in fact the major axis of the streamlines is rotated by about $5^\circ$ with respect to it, the “lead angle” referred to in Section 5. Figure 11 shows that the lead angle increases with the Lagrange radius, as far as $25^\circ$ for the slowest bar. The somewhat surprising result that several models with grossly different Lagrange radii and lead angles all appear to fit the $\ell$–$V$ data reasonably well arises because the models simply compensate by moving the best-fit viewing angle synchronously with the changes in the lead angle. The best-fit viewing angle stays roughly constant with respect to the shocks, which means that it also changes in a clockwise sense with respect to the bar, causing $\varphi_{\mathrm{LSR}}$ to decrease. Thus changes in viewing angle are strongly coupled to the angle the gas streamlines make with the bar.

The systematic change in the location of the shocks has a relatively simple explanation. As the Lagrange radius is increased, the bar pattern speed slows (for $R_L=4.0$, 5.0, 6.0, 7.0, 8.0 kpc, $\Omega_p=54.2$, 41.9, 34.9, 30.2, 26.6 $\mathrm{km\,s^{-1}\,kpc^{-1}}$ respectively). Inside the Lagrange or corotation radius, gas overtakes the gravitational potential well of the bar; the shocks are caused as the gas climbs out of the well, slows down, and piles up (Prendergast 1983). Although the shape of the gas streamlines is dependent on the full gas-dynamics, the magnitude of the velocity for gas at a given radius is roughly set by the gravitational acceleration from the mass interior to it, which is the same in all five models. In a frame co-rotating with the bar, if the bar is slower, the gas is moving faster as it overtakes the bar, so it climbs farther out of the potential well before the shock pile-up occurs. Therefore, in slower pattern speed models, the shocks are farther ahead of the bar, in the sense of more positive lead angle. The increased speed of the gas relative to the potential also increases the strength of the shocks.
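Since choosing $R_L$ fixes the pattern speed through $\Omega_p=v_c(R_L)/R_L$, the quoted values can be checked with a flat-rotation-curve approximation; the flat speed used below is our assumption.

```python
def pattern_speed(R_L, v_flat=210.0):
    """Omega_p = v_c(R_L) / R_L in km/s/kpc, assuming a flat curve."""
    return v_flat / R_L

for R_L in (4.0, 5.0, 6.0, 7.0, 8.0):
    print(R_L, round(pattern_speed(R_L), 1))
# 52.5, 42.0, 35.0, 30.0, 26.2 -- within a few percent of the quoted
# 54.2, 41.9, 34.9, 30.2, 26.6 (the true curve is not exactly flat there)
```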
The behavior of the shocks rules out slow bars, if we demand that the Milky Way bar should resemble bars in other galaxies. In external galaxies, the prominent dust lanes frequently seen along the bar run along the “leading” sides of the bar; the morphology of these dust lanes and exemplary galaxies are discussed by Athanassoula (1992b). Strong bars generally have straight dust lanes while weaker bars sometimes have curved dust lanes; in both cases, the dust lanes are generally parallel to the bar, as in the shocks of Model 1, or angled slightly in the sense of smaller lead angle. These dust lanes are identified with the high-density shocks, such as those in Figure 6, as discussed above. We know of no barred galaxies that have dust lanes with a lead angle of more than a few degrees; Athanassoula (1992b) argued that therefore strong bars rotate quickly. Merrifield & Kuijken (1995) have also shown that the bar in NGC 936 rotates quickly, via a completely independent method. The position of the shocks in models 4 and 8 ($R_L=7.0$ and 8.0 kpc), and in all other slow bar models we have run, is grossly inconsistent with what we know about barred galaxies. We reject these models for this reason, even though some of them formally fit the $\ell$–$V$ diagram well.

### 6.4. Bar strength and shape

The streamline plot of Figure 7 and the contours of observed velocity shown in Figure 8 suggest that the bar has to be strong and fairly elongated. Only a massive bar can produce the large non-circular motions needed to put gas at the forbidden velocities observed in the $\ell$–$V$ diagram. If the gas streamlines are less elongated than those seen in Figure 7, the regions with forbidden velocities are smaller and subtend a smaller range of Galactic longitude. In our best model, the bar component has a mass of $M_{\mathrm{bar}}=9.8\times 10^9\ \mathrm{M_\odot}$, and the mass of the bulge component (within 1 kpc radius) is $M_{\mathrm{bul}}=5.4\times 10^9\ \mathrm{M_\odot}$.

The effect of a weaker bar on the $\ell$–$V$ plots is shown in Figure 12. Models 10, 11, 13, and 14 show a significant deficit of gas in the forbidden quadrants, notably at $\ell\approx -5^\circ$. These models have smaller bars than Model 1 with lower $M_{\mathrm{bar}}$ (even though the bar density $\rho_{0,\mathrm{bar}}$ is somewhat higher). Models 6, 9, and 12, which also have less massive bars than Model 1, do somewhat better at producing material in the forbidden quadrants, but only because the weaker forcing potential is partly compensated for by the stronger shocks that occur in a slow-rotating bar, as noted above. However, models 6, 9, and 12, like all the other slow-bar models, have shocks in an implausible position and are not viable models for the Galaxy.

The bar must also be strong in the sense of having a large axis ratio. The formal axis ratio of the Ferrers bar in Model 1 is 4:1, although the actual axis ratio of the total mass distribution, when the bulge and disk are included, is closer to 3:1 (cf. Figure 5). Models with smaller axis ratios generally do not reproduce the data well. Figure 13 shows $\ell$–$V$ diagrams for several models whose bar components have axis ratios smaller than that of Model 1, with $a:b$ from 3.1:1 to 2.3:1. The axis ratios of the total mass distributions are fatter still. Although these models have different bar lengths, their appearance in $\ell$–$V$ diagrams is similar: they produce EVCs that are gently sloped, not sharply peaked as seen in the observations.
In particular, the decline of the EVC away from the peaks is fairly sharp in the observations, but much too gentle in the models with low axis ratio bars. Even an extremely centrally concentrated but wide-barred potential, such as Model 46, which has a small dense bulge, does not produce sharp peaks. More massive bars – longer, more dense, or both – do not successfully produce sharper peaks or better models: the most massive bars are models 45, 42, 37, 27, 43 and 44 ($M_{\mathrm{bar}}$ = 41.2, 27.2, 22.7, 17.5, 15.9, and 15.9 $\times 10^9\ \mathrm{M_\odot}$, respectively). As discussed above, the peaks in the $\ell$–$V$ diagram are produced by the strongly non-circular motions inside the bar, while the steep decline in the EVC as $|\ell|$ increases further is linked to the weakening of the non-circular motions as the quadrupole field decays quickly with Galactocentric distance. An axisymmetric model with an unusual mass distribution could be made to produce this behavior but could not, of course, give rise to forbidden velocities. A strong and elongated bar is favored to produce both forbidden velocities and the narrow peaks in the EVC.

### 6.5. The presence of an inner Lindblad resonance

As already noted, the peaks of the $\ell$–$V$ diagram arise from orbits just outside the oval of high density gas in the center, where the streamlines rotate to be highly angled to the bar rather than closely aligned with it. This rotation of the streamlines is related to the presence of an inner Lindblad resonance (ILR) (Athanassoula 1992a,b). Bars with an ILR have a family of stellar orbits near the center that are elongated perpendicular to the bar rather than along it, and the rotated streamlines are related to these orbits. Bars without an ILR have only streamlines elongated along the bar; these streamlines would yield peak gas velocities as they pass the center (Athanassoula 1992b). An inner Lindblad resonance forces the elongated orbits away from the center, causing the highest bar-induced streaming velocities to occur some distance out, and producing sharply defined peaks in the $\ell$–$V$ diagram that are several degrees apart. If the bar did not have an ILR, the EVC peaks would not necessarily be as sharply defined, nor could they be separated by several degrees in longitude, as is observed.

Model 27 is the least centrally concentrated of all our models, and its best-fit $\ell$–$V$ diagram has EVCs without dominant peaks; the positive velocity EVC is nearly flat from $\ell=3^\circ$ to $\ell=10^\circ$. The least centrally concentrated potentials are models 27, 51, 45, 50, 48, and 42 ($M_{\mathrm{bul}}/M_{\mathrm{bar}}$ = 0.019, 0.096, 0.11, 0.13, 0.16, and 0.17 respectively). Figure 14 shows $\ell$–$V$ diagrams for four of these weakly-concentrated potentials, which have EVCs with weak or gentle peaks. The observed strength and separation of the peaks in the $\ell$–$V$ diagram suggests that the Galactic bar must have an ILR. The central mass concentration, represented in our model by the “bulge” component, is responsible for the ILR, and is also necessary to cause the sharply rising peaks in the EVC. Our adopted modified Hubble profile for the central mass component has a uniform density core, whereas the luminosity density in the Milky Way rises all the way to the center as the $-1.8$ power of the radius (Becklin & Neugebauer 1968).
The finite resolution of the grid code vitiates attempts to simulate the effects of a central cusp; strong gradients in the angular velocity on scales below a few grid cells cannot be accurately represented. However, the small core radius in our best model, $r_c=0.2$ kpc (four simulation grid cells), is well inside the ILR feature at $R\approx 0.4$ kpc. The existence of the ILR implied by the EVC peaks requires only a concentrated mass within that radius, so our conclusion is little affected by the details of the density profile.

## 7. Discussion

We have shown that gas flow in a barred model of the Galaxy can fit many of the observed features of the H I $\ell$–$V$ diagram, most notably the emission in the forbidden quadrants and the sharp peaks in velocity. Our best fit model was arrived at through adjusting the free parameters by trial and error. Although the model has been tuned, the number of parameters is relatively small for a model of the Galactic potential. Furthermore, the complexity of the dynamics governing the gas response to the potential makes constructing a reasonably good model a non-trivial pursuit. This model does show that it is possible, and that the reservations of Jenkins & Binney (1994) regarding the ability of simple gas-dynamical models to reproduce the data are perhaps too pessimistic.

Our preferred model has a bar semi-major axis of 3.6 kpc. The bar component itself has an axis ratio of 4:1, although the “bulge” in our model should also be considered as part of the bar, and the axis ratio of the bar+bulge is somewhat fatter, approximately 3:1. The bar in this model rotates quickly, with a Lagrange radius of 5.0 kpc (bar pattern speed 42 $\mathrm{km\,s^{-1}\,kpc^{-1}}$), and the bar major axis is inclined at $34^\circ$ to our line of sight.

Our model 1 differs in a number of important respects from the models of the Milky Way bar proposed by Binney et al. (1991), and further developed by Jenkins & Binney (1994) and by Englmaier & Gerhard (1998). These authors favor a considerably smaller bar and a higher pattern speed, placing corotation at $R\approx 3.5$ kpc, because they employ a cusped $x_1$ orbit to give the narrow peaks at $\ell\approx\pm 3^\circ$. The $x_2$ orbit family, which is much less extensive in their models than in ours, gives a smaller peak very close to $\ell=0^\circ$. While their models were developed to interpret the CO $\ell$–$V$ diagram, they fail to account for the large and extensive forbidden velocities seen in H I.

The strength and size of the bar are required by forbidden velocities in excess of 100 $\mathrm{km\,s^{-1}}$ extending as far as $\ell=\pm 6^\circ$. Were the true viewing angle much less than our preferred $34^\circ$, as favored in some studies, the bar would have to be considerably longer to produce the observed forbidden velocities. Our constraint on viewing angle is not independent of the bar pattern speed, however, since slower bars give better fits when viewed at smaller angles. Models with a Lagrange radius of $R_L$ = 4.0 to 6.0 kpc (bar pattern speed $\Omega_p$ = 54 to 35 $\mathrm{km\,s^{-1}\,kpc^{-1}}$), i.e. fast-rotating bars, are favored; models with higher $R_L$ (lower $\Omega_p$) have shock patterns in the gas that differ drastically from those observed in other barred galaxies. It is unlikely that the viewing angle could be forced below $25^\circ$ to the bar major axis.
We interpret the narrow velocity peaks at $\ell\approx\pm 3^\circ$ as the signature of gas streaming along the bar past a nuclear ring in the Milky Way which lies close to the location of the inner Lindblad resonance. If the bar is strong, the high speed of these streams does not require an unusual radial mass profile – the flow patterns shown in Figures 2 & 4 arise from two mass distributions that both, when azimuthally averaged, give the circular velocity curve shown in Figure 3. The mass distribution in the inner Galaxy does have to be sufficiently concentrated for an ILR to be present, however; if this were not the case, the peaks would lie much closer to $\ell=0^\circ$. The location of the peaks at $\ell\approx\pm 3^\circ$ requires the semi-major axis of the nuclear ring to be $\sim 400$ pc – on the small end of the distribution of nuclear rings seen in other barred galaxies (Buta & Crocker 1993). As nuclear rings in external galaxies are generally highly gas rich (Helfer & Blitz 1995; Sofue 1996; Rubin, Kenney & Young 1997), it is no surprise that the associated velocity peaks in the Milky Way stand out in CO as well as H I.

We note that the rotation curve of our preferred model, shown in Figure 3, indicates that the bulge and bar components together dominate the rotation curve in the inner few kpc of the Galaxy. We cannot isolate the contribution of the dark halo component, since our analytical model lumps the dark halo and the axisymmetric part of the disk together. However, since the Galaxy does have a disk, it is clear that the dark halo cannot be very dominant in this model. Although this potential is not a unique model of the Galaxy, as discussed above in Section 5, we believe that any model that fits the $\ell$–$V$ diagram will have to have non-axisymmetric motions as strong as those in Model 1 and, hence, a bulge+bar which dominates the rotation curve in the inner part of the Galaxy. Englmaier & Gerhard (1998) modeled the gas flow in the inner Galaxy, using models derived from COBE photometry. They found that the luminous matter must dominate over dark matter inside the solar circle, in order to match the terminal velocity curve in the non-forbidden quadrants.

We do not claim that because our model 1 gives a reasonable fit, the mass distribution in the inner Galaxy must necessarily be very close to the analytic form we have assumed. The real mass distribution in the inner Galaxy is undoubtedly more complex than our simple analytical model. A different form of mass distribution will yield somewhat different results for the best-fitting model parameters. However, we believe that the real mass distribution will resemble Model 1 in its chief details: the strength and size of the bar, presence of an ILR, and viewing angle which is not too close to end-on.

We have not attempted to satisfy the many other constraints on the shape of the inner Galaxy, such as COBE photometry, simultaneously. The model is broadly consistent with some results, such as the bar viewing angle determined by the IRAS point sources (Weinberg 1992), the magnitude offset of red clump stars (Stanek et al. 1997), and the distribution of OH/IR stars (Sevenster et al. 1999). Fux (1999) has compared the appearance of arm features produced in a self-consistent model with features in the CO and H I $\ell$–$V$ diagrams; his preferred model has a bar of similar length, with an ILR, and which rotates quickly, but the preferred viewing angle is somewhat smaller, $25^\circ$, and the bar is fatter.
Fux's comparison of models to data emphasizes high-density gas, while ours probes mostly low-density gas, which may be responsible for some of the differences. The viewing angles in Fux's best model and in ours are both incompatible with models which invoke a fairly end-on bar to account for the high microlensing optical depth towards the Galactic Bulge (Zhao & Mao 1996; see also Fux 1997).

The $\ell$–$V$ diagrams synthesized from fluid models are sensitive to the details of the potential and the viewing angle, and the comparison with the data is unaffected by extinction. For these reasons we believe that the technique has great power to discriminate among candidate models of the inner Milky Way. We may eventually hope to identify a model of the Galactic bar that satisfies photometric constraints and fits both the CO and H I kinematic data.

We are grateful to Dick van Albada and Lia Athanassoula for providing us with the gas dynamics code and for helpful comments and advice on its use, to Harvey Liszt for providing the data of Liszt & Burton (1980) in electronic form, and to an anonymous referee for a thoughtful report. This work was supported by NSF grant AST 96/17088 and NASA LTSA grant NAG 5-6037. BJW acknowledges support from a Carnegie postdoctoral fellowship.
## 1 Abstract

A sample of 323 Ultraluminous IRAS galaxies (ULIRGs) has been correlated with the ROSAT All-Sky Survey and ROSAT public pointed observations. 22 objects are detected in ROSAT survey observations, and 6 ULIRGs are detected in addition in ROSAT public pointed observations. The detection is based on a visual inspection of the X-ray contour maps overlaid on optical images of ULIRGs taken from the Digitized Sky Survey. Simple power law fits were used to compute the absorption-corrected fluxes of the ROSAT detected ULIRGs. The ratio of the soft X-ray flux to the far-infrared luminosity is used to estimate the contribution from starburst and AGN emitting processes. These results are compared with the ISO SWS ULIRG diagnostic diagram.

## 2 Class properties of ULIRGs

### 2.1 Specific observational results

#### 2.1.1 IRAS 10026+4347

The ULIRG IRAS 10026+4347 can also be classified as a narrow-line quasar. The FWHM of the $\mathrm{H}\beta$ line is about 2500 $\mathrm{km\,s^{-1}}$, and strong optical Fe II multiplet emission is a prominent feature of its optical spectrum. The X-ray spectrum exhibits a steep X-ray continuum slope, with a photon index for a simple power law fit of $\Gamma=3.2\pm 0.5$, typical of narrow-line Seyfert 1 galaxies and narrow-line quasars. The (0.1–2.4 keV) luminosity of IRAS 10026+4347, obtained via a simple power law fit to the data, and corrected for absorption by neutral hydrogen along the line of sight, is $1.12\times 10^{45}\ \mathrm{erg\,s^{-1}}$. The ratio of the soft X-ray (0.1–2.4 keV) to far-infrared (40–120 $\mu$m) flux is 0.25. In Section 2.3 we argue that values of this ratio above about 0.003 require a contribution of an AGN component, in addition to starburst processes.

#### 2.1.2 Mrk 231

Mrk 231 (IRAS 12540+5708) is detected both in the ROSAT All-Sky Survey and in ROSAT public pointed observations. The X-ray light curve (cf. Fig. 1), obtained from a ROSAT pointed observation, suggests some indication of variability with a doubling time scale of about 0.4 days.

### 2.2 ROSAT-detected ULIRGs

22 of the 323 ULIRGs from the IRAS 1.2 Jansky redshift catalogue (Fisher et al. 1995) are detected in the ROSAT All-Sky Survey (cf. Table 1). By inspecting the structure of the X-ray emission in overlays on optical images taken from the Digitized Sky Survey, it is strongly believed that the objects in Table 1 are potential identifications of ULIRGs in soft X-rays. Table 2 lists the ULIRGs detected in public ROSAT pointed observations, in addition to the objects detected in the ROSAT All-Sky Survey. 6 objects are detected in pointed observations, resulting in a total number of 28 ULIRGs detected with ROSAT. Although the ROSAT energy range does not allow the probing of highly obscured regions in ULIRGs, the ROSAT All-Sky Survey allows at least a statistical approach to the class properties of ULIRGs. In Section 2.3 we discuss the ratio of the soft X-ray to far-infrared luminosity of ROSAT detected ULIRGs, which can be used to estimate the relative fraction of starburst emitting processes and emission due to accretion onto supermassive black holes.

### 2.3 The soft X-ray to far-infrared flux ratio

Since the total X-ray luminosity of a star-forming galaxy is proportional to its total star-formation rate, one might assume that a high X-ray luminosity might just reflect a high star-formation rate.
A problem with this picture arises when one compares the soft X-ray (0.1–2.4 keV) flux with the far-infrared (40–120 $\mu$m) flux. Both quantities are proportional to the star-formation rate, and an increase in the star-formation rate results to first order in a horizontal shift of an object in Fig. 2. We (Boller & Bertoldi 1996) found that in equilibrium, the ratio of the soft X-ray to far-infrared flux is about 0.003. Considering variable star-formation rates, the ratio between both quantities varies by about a factor of 3 (see the evolutionary tracks in the right panel of Fig. 2, where an increase of the star-formation rate by a factor of 10 for a time scale of $10^8$ years is assumed).

The total far-infrared fluxes were computed from the IRAS 60 and 100 $\mu$m fluxes following Helou (1985). To compute the soft X-ray fluxes from the PSPC count rate, a simple power law spectrum of the form $\mathrm{E}^{-\alpha}$ was assumed. The fluxes were converted into luminosities using eq. 7 of Schmidt & Green (1986). A Hubble constant of $\mathrm{H}_0=50\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$ and a cosmological deceleration parameter of $\mathrm{q}_0=0.5$ were adopted. In Fig. 2 (left panel) the ratio between the soft X-ray and far-infrared flux is plotted against the far-infrared flux. The right panel gives the corresponding distribution for luminosities. The ratio between the soft X-ray and far-infrared flux ranges over 4 orders of magnitude. An additional AGN contribution is necessary to reach the X-ray to far-infrared ratio for those objects with values above 0.003.

## 3 Comparison with ISO SWS results

In Fig. 3 the ROSAT results are compared with the diagnostic diagram obtained from ISO SWS measurements to distinguish between starburst and AGN processes in ULIRGs. The ISO SWS diagnostic diagram shows the ratio between the high- and low-excitation fine structure lines versus the strength of the PAH 7.7 $\mu$m feature (see Lutz, this proceedings). The circles indicate ROSAT-detected ULIRGs which have measured values in the ISO diagnostic diagram. The size of the circle scales with increasing soft X-ray to far-infrared flux ratio. All ROSAT-detected ULIRGs in this diagram are predominantly powered by star-formation processes, as the ratio for all objects is below the critical value of 0.003. This is in agreement with the prediction from the ISO measurements, as all objects are located in Fig. 3 in the region where the AGN contribution is less than 50 per cent. For ROSAT-detected ULIRGs with ratios of the soft X-ray to far-infrared flux above a value of 0.003, no ISO SWS [O IV], Ne II or PAH feature measurements are available.

## 4 Future prospects for the study of ULIRGs

We have proposed to observe the most interesting ULIRGs within the guaranteed time program of XMM. We intend to extend our studies by precisely determining the spectral and timing properties of ULIRGs, to further disentangle starburst- and AGN-emitting processes in ULIRGs.
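A sketch of the diagnostic used in Section 2.3 follows. The FIR(40–120 $\mu$m) prescription below is the commonly used IRAS combination attributed to Helou and collaborators; its coefficients are quoted from memory and should be checked against the original reference, and the 0.003 threshold is the Boller & Bertoldi (1996) equilibrium value discussed above.

```python
def fir_flux(S60_Jy, S100_Jy):
    """FIR (40-120 micron) flux in W/m^2 from IRAS 60 and 100 micron flux
    densities in Jy.  Coefficients assumed, not verified here."""
    return 1.26e-14 * (2.58 * S60_Jy + S100_Jy)

def power_source(f_x, f_fir, threshold=3e-3):
    """Ratios above ~0.003 require an AGN contribution (Section 2.3)."""
    return "AGN contribution required" if f_x / f_fir > threshold \
        else "consistent with star formation alone"
```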
no-problem/9904/astro-ph9904226.html
ar5iv
text
# Abstract ## Abstract A multi-cloud model is presented which explains the soft X-ray excess in NGC 4051 and, consistently, the optical line spectrum and the SED of the continuum. The clouds are heated and ionized by the photoionizing flux from the active center and by shocks. Diffuse radiation, partly absorbed throughout the clouds, nicely fits the bump in the soft X-ray domain, while bremsstrahlung radiation from the gaseous clouds contributes to the fit of the continuum SED. Debris of fragmented high-density clouds are necessary to explain the oxygen absorption troughs observed at 0.87 and 0.74 keV. The debris are heated by shocks of about 200-300 $`\mathrm{km}\mathrm{s}^{-1}`$. Low velocity (∼100 $`\mathrm{km}\mathrm{s}^{-1}`$), low density (100 $`\mathrm{cm}^{-3}`$) clouds contribute to the line and continuum spectra, as do high velocity (1000 $`\mathrm{km}\mathrm{s}^{-1}`$), high density (8000 $`\mathrm{cm}^{-3}`$) clouds, which are revealed by the FWHM of the line profiles. The SED in the IR is explained by reradiation of dust; the dust-to-gas ratio, however, is not particularly high ($`3\times 10^{-15}`$ by number). Radio emission is well fitted by synchrotron radiation created at the shock front by the Fermi mechanism. ## 1 Introduction NGC 4051 is a SAB galaxy at z=0.0023. It is classified as a Seyfert 1 galaxy and is characterized by unusually narrow permitted lines, only moderately wider than the forbidden lines (Osterbrock 1977). Simultaneous ROSAT-IUE and GINGA observations of a sample of 8 Seyfert 1 galaxies (Walter et al. 1994), including NGC 4051, show that the UV to X-ray spectral energy distribution (SED) can be decomposed into two major distinct components: a nonthermal hard X-ray continuum and a broad emission excess (bump) spanning from the UV to soft X-rays. All models (power-law, thin disk, bremsstrahlung, black body) are able to reproduce the soft X-ray spectra in the Walter et al. sample, except the power-law model for NGC 4051. The evidence that the power-law model is not a good representation of the X-ray spectrum of NGC 4051, when the absorbing column density is fixed to the Galactic value, contrasts with the long-established result that the power law is the best $`simple`$ fit to the X-ray spectra of active galactic nuclei (AGN). This indicates that the spectrum is more complicated and, for example, could be affected by intrinsic absorption. The need for an additional soft component in the X-ray spectrum, accounting for an excess in the 0.1-2 keV band, is also raised by Fiore et al. (1992). From the study of GINGA data they found a spectral variability consistent with a constant underlying power-law slope modified by partial covering or by a 'warm absorber'. On the other hand, the soft excesses frequently found in EXOSAT spectra were formerly interpreted as thermal emission from the innermost regions of a viscously heated accretion disk (Arnaud et al. 1985). As suggested by Pounds et al. (1994), both the soft excess and the blue bump emission may arise, instead, from reprocessing of the hard X-rays in dense cold cloudlets surviving close to the central source. The ionizing photons absorbed in optically thick material will be reemitted at the black-body equilibrium temperature ($`10^5`$–$`10^6`$ K) as long as the density of the absorbing gas is sufficiently high. Maximum temperatures of the bump component are estimated at about 5$`\times 10^5`$ K (Walter et al. 1994). Small-scale variability of NGC 4051 is used to reveal the characteristics of the emitting clouds and of the velocity field. 
The observed spectral variability on a time scale of hours can be explained in terms of a change in the ionization parameter plus an emerging soft excess (Pounds et al. 1994). The assumption of a typical variability time scale of one hour leads to a matter density of $`5\times 10^7`$ $`\mathrm{cm}^{-3}`$ and a thickness of the photoionized gas of $`1.4\times 10^{14}`$ cm. Considering that the radius of the source of the optical and UV continuum is larger than a few $`10^{14}`$ cm, Walter et al. (1994) claim that it can be covered in less than 3 years if the velocity of the absorbing clouds is larger than 200 $`\mathrm{km}\mathrm{s}^{-1}`$. From the spectral observations in the optical range (De Robertis & Osterbrock 1984) it appears that there may not be a simple dichotomy between the broad line region (BLR) and the narrow line region (NLR) in NGC 4051. Instead, the continuum of line widths suggests that the emitting regions may be inhomogeneously filled with clouds or filaments spanning a very large range of densities, so that the division into a BLR and a NLR is an extreme simplification. The asymmetry of the narrow-line profiles is consistent with radial outflow or expansion of the gas. The presence of strong blue wings (Veilleux 1991) favors models with radial motion and a source of obscuration. NGC 4051 is peculiar also because its UV spectrum is very steep and probably affected by intrinsic reddening. The presence of dust in the nucleus of NGC 4051 has been suggested by several groups (Walter et al. 1994). Veilleux (1991) argues that dust is probably present and is the source of the line asymmetry, and that the differences between the profiles of H$`\beta `$ and H$`\alpha `$ are due to reddening and/or optical depth effects. Balmer fluxes are known to vary over periods shorter than one year. The basic model of a warm absorber (Pounds et al. 1994) consists of a gaseous region close to the BLR, photoionized by the central radiation, which imprints absorption features on the central X-ray radiation. This radiation is represented by a power law characterized by a photon index and normalized by the intensity at 1 keV (photons $`\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{keV}^{-1}`$). The procedure is first to fit the power law plus a cold absorbing column, and then to add one or more components seeking the best fit. The warm absorber model introduces two free parameters: the column density, N, and the ionization parameter, U. The principal absorption edges are identified with ionized carbon, nitrogen, and oxygen. The effect of a warm absorber can also be interpreted as a set of emission features. Komossa & Fink (1997) have recently modeled the absorbed spectrum of NGC 4051 from ROSAT observations in terms of warm absorption. Their excellent model can explain most of the features observed in the X-ray spectrum. However, some discrepancies with a general scenario remain: e.g., reduced (sub-solar) abundances are in contrast with the assumed absence of dust, and a single-component warm absorber is in contrast with the plurality of cloud conditions found by De Robertis & Osterbrock (1984). In this work we consider NGC 4051 in the light of a multi-cloud model, as was found appropriate also for other galaxies (e.g. the Circinus galaxy model by Contini, Prieto & Viegas 1998). The clouds move radially outward in the galaxy. A composite model which consistently accounts for the effects of the radiation from the active center and of the shocks on the emitting clouds is adopted. 
In particular, we suggest that the soft X-ray excess can be reproduced by reprocessed (diffuse) radiation emitted from the hot slabs of gas after being partially absorbed by optically thick regions throughout the clouds. The consistency of the model is checked by fitting both the SED of the continuum over a large frequency range and the emission lines. In §2 the general model is described. In §3 the soft X-ray excess is modeled. The results of the model calculations are compared with the observed continuum SED and the line spectrum in §4 and §5, respectively. Final remarks follow in §6. ## 2 The model The model assumes clouds in the NLR with different physical conditions and with radial outward motion. A shock forms on the outer edge of the clouds, while the power-law radiation from the active center (a.c.) reaches the inner edge. The flux from the central source and the shock are the primary sources of ionization and heating of the gas. A cloud may be depicted as consisting of a large number of parallel slabs in which the conditions within any given increment are essentially uniform. The ionized gas gives rise to a diffuse radiation field through recombination continua and lines. The intensity of the diffuse radiation depends on the source function, which cannot be determined unless the ionization equilibrium is already known (Williams 1967). The effects of collisional ionization are therefore implicit in the source function. The role of diffuse secondary radiation in shocked clouds is illustrated by Viegas, Contini & Contini (1998). The diffuse radiation which emerges from the heated slabs is partially absorbed throughout the cloud. We suggest that the soft X-ray excess may be produced by this reprocessed (diffuse) radiation, emitted from a group of clouds in different physical conditions. The SUMA code is adopted (Viegas & Contini 1994 and references therein). A composite ionizing spectrum with spectral indices $`\alpha _{UV}`$ = 1.4 and $`\alpha _X`$ = 0.4 is assumed for all models, as in previous modeling (see Contini, Prieto, & Viegas 1998). The other input parameters, i.e., the shock velocity, $`\mathrm{V}_\mathrm{s}`$, the preshock density, $`\mathrm{n}_0`$, the radiation flux intensity at 1 Ryd, $`\mathrm{F}_\nu `$ (in photons $`\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{eV}^{-1}`$), and the dust-to-gas ratio by number, d/g, are chosen from the observational evidence and are adjusted by the modeling. Cosmic abundances are assumed (Allen 1973) and the preshock magnetic field is $`\mathrm{B}_0`$ = $`10^{-4}`$ gauss. ## 3 The soft X-ray excess The observational data are taken from Komossa & Fink (1997, Fig. 5). We assume that many clouds, in different physical conditions, contribute to the 'warm absorption'. A large range of densities is considered; indeed, a continuous distribution of densities could be present. The velocities are in agreement with the observed emission-line FWHM (De Robertis & Osterbrock 1984, Veilleux 1991). Some clouds are shock dominated, others are radiation dominated, with radiation intensities in the range usually found for the NLR of Seyfert galaxies. The input parameters of the models used to fit the X-ray excess are listed in Table 1. The best fit of the X-ray excess by the model calculations is shown in Figure 1. Models 12, 13, and 14 are not included in the figure and will be discussed further on. Each model represents one type of cloud with a characteristic geometrical width, D, which is also given in Table 1. 
Shock dominated (SD) models are calculated adopting $`\mathrm{F}_\nu `$ = 0. Notice that model 8 is the SD model corresponding to the radiation dominated (RD) model 9. For each model, the diffuse radiation emitted by the cloud is calculated. The thick solid line in Fig. 1 represents the weighted summed spectrum of the models. The last column of Table 1 gives the weights, W, adopted for the single-cloud models in the weighted sum. The weights reflect the dilution factor $`(\mathrm{r}/\mathrm{d})^2`$ (r is the distance from the clouds to the center and d is the distance to Earth) and the covering factor. An acceptable fit to the observations is obtained by the present sample of models, considering that the observed data are contaminated by residuals and errors. X-ray data at energies higher than 1.3 keV can be well fitted by the flat power-law radiation from the central source, which reaches the observer directly. Lower energy photons are generally absorbed by the cloud. This can easily be seen in the curve representing model 5, which corresponds to high density clouds. The spectrum given by model 7 shows a trend similar to the observed one, but lacks the absorption troughs at the critical edges. The two primary-flux models have the lowest weights. High density clouds are invoked to fit the deep trough at about 0.85 keV, which is due to the O VIII edge. The geometrical thicknesses of the clouds are small, particularly for the dense clouds, indicating that fragmentation is rather strong. This is consistent with a regime of turbulence in the presence of shocks. The weights of the high density models are very high, particularly for model 3. In the corresponding clouds the cooling rate is high, owing to the high density; moreover, the hot gas emitting region is very small. Consequently, the flux emitted by each cloud is weak and many clouds are necessary to fit the data. This is consistent with the small D. Model 6, with a larger D, also has a high weight. In this case most of the gas inside the cloud is cold and neutral because the intensity of the central radiation is relatively low. Modeling implies the choice of a composite model, which is seldom unique. The validity of the present composite model for the warm absorber will be checked in the next sections by the consistent fit to the observed continuum SED and to the line spectrum. ## 4 The continuum References for the observed continuum below $`10^{16}`$ Hz are given in Table 2. Data in the X-ray range are from Komossa & Fink (1997). The SED of the observed continuum is plotted in Fig. 2. Optical observations integrated over the whole galaxy are not included. For energies less than 13.6 eV, the continuum calculated by the models, which is essentially the sum of the bremsstrahlung radiation emitted from the gas within the clouds, roughly fits the data. The weights adopted to sum up the models are the same as those listed in Table 1. Reradiation by dust in the IR depends strongly on the shock velocity. The observed IR maximum constrains the dust-to-gas ratio (d/g), while the frequency corresponding to the maximum depends on the dust temperature. The grains are heated by radiation and by collisions. Mutual heating and cooling of dust and gas determine the temperature of the dust, which follows the temperature of the gas (Viegas & Contini 1994). For all the models, dust-to-gas ratios in the range 1-3$`\times 10^{-15}`$ are adopted. 
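Both the X-ray fit of §3 and the continuum fit discussed here rest on the same weighted co-addition of single-cloud spectra. A minimal sketch of that bookkeeping follows; the SED shapes, model identifiers, and weight values below are purely illustrative stand-ins for the actual SUMA outputs and the W column of Table 1.

```python
import numpy as np

# Frequency grid [Hz] and a few stand-in single-cloud SEDs (nu*F_nu,
# arbitrary units). Real inputs would be the SUMA spectra of Table 1.
nu = np.logspace(8, 19, 500)

def mock_sed(nu_peak, amp):
    """Toy bremsstrahlung-like bump peaking near nu_peak (illustrative only)."""
    x = nu / nu_peak
    return amp * x**2 * np.exp(-x)

single_clouds = {          # model id -> (SED, weight W)
    3: (mock_sed(3e16, 1.0), 1e-2),
    6: (mock_sed(1e14, 0.5), 1e-3),
    9: (mock_sed(3e15, 2.0), 1e-5),
}

# The weight W absorbs the dilution factor (r/d)^2 and the covering factor,
# so the composite spectrum is a straight weighted co-addition of the clouds.
weights = np.array([w for _, w in single_clouds.values()])
seds = np.array([s for s, _ in single_clouds.values()])
composite = (weights[:, None] * seds).sum(axis=0)
```

Because each weight already contains the geometric dilution, no further normalization is needed; a model with a very small emitting region (small D) simply requires a correspondingly large W, as noted above for model 3.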
The SEDs of the continua corresponding to the single models 1, 2, 3, 4, 6, 8, 9, 10, and 11 (dotted lines) and their weighted sum (solid lines) are shown in Fig. 2. The three components originating in the clouds (synchrotron emission due to the Fermi mechanism, dust emission, and free-free emission) are shown separately. Notice that most of the models fall below the lower edge of the figure because their weights are very low. The weighted sum corresponds to the SED of model 3, whose weight largely prevails. Depending on the model, the bremsstrahlung component peaks at a different frequency, so the weighted average shows two peaks, one at $`10^{14}`$ Hz and another at 3 $`\times 10^{16}`$ Hz. Actually, absorption by the ISM peaks at 3 $`\times 10^{16}`$ Hz (Zombeck 1990). In the radio range ($`<10^{10}`$ Hz), the calculated free-free emission is higher than the observational data, which are nicely fitted by synchrotron emission due to the Fermi mechanism at the shock front. As happens for Circinus, the bremsstrahlung emission at such low frequencies is probably absorbed. In fact, if we assume an average temperature for the clouds of about 10<sup>4</sup> K, the optical depth for free-free absorption is greater than unity for frequencies below about $`10^{11}`$ Hz, increasing at lower frequencies. Notice, however, that the observed optical continuum is not yet well fitted. Moreover, the reradiation by dust calculated by the models peaks at $`10^{13}`$ Hz, while the data peak at $`3\times 10^{12}`$ Hz. Therefore, the ensemble of clouds which explains the X-ray data is not complete, and models representing other clouds in different physical conditions must be included in the multi-cloud model. A final choice of the best-fitting models will be possible after discussing the line spectrum. In fact, modeling the line and continuum spectra simultaneously implies cross-checking of one against the other until a fine tuning of the models is obtained. ## 5 The optical - near-UV line spectrum The observed line spectrum is taken from Malkan (1986, Table 1). A typographical error which crept into the published data has been corrected (H$`\beta `$ = 31. and not 3.1; M. Malkan, private communication). The data are reddening corrected adopting E(B-V) = 0.32, which represents the obscuration inside the clouds (Malkan 1986). This is higher than the intrinsic reddening, E(B-V)=0.08, and the Galactic reddening, E(B-V)=0.02. Notice that Walter et al. (1994) obtain E(B-V)=0.05-0.13. The calculated line intensities relative to H$`\beta `$ are compared to the observations in Table 3. Radiation dominated models provide relatively high HeII/H$`\beta `$ (e.g. model 9), while shock dominated models provide higher \[OII\]/H$`\beta `$ and \[OIII\]4363/H$`\beta `$ (e.g. model 8). The results presented in Table 3 indicate that both radiation-dominated and shock-dominated clouds should be taken into account in the final multi-cloud model. Model AV0 corresponds to the weighted average of the single-cloud models accounting for the X-ray data. This average model gives line ratios practically identical to those of model 3, which, in fact, largely prevails. However, the fit to the observed line ratios is not good enough. Therefore, models 12, 13, and 14, which are negligible in the fit of the soft X-ray excess, are invoked to improve the fit of the line ratios observed in the optical-near UV range. 
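As a brief numerical aside to the radio-absorption argument of §4 above: the claim that free-free absorption sets in below ∼$`10^{11}`$ Hz can be checked with the standard radio-frequency approximation for the free-free optical depth. The emission measure adopted below is our own assumption, chosen to be of the order reached by the densest clumps (electron densities of ∼$`10^7`$ $`\mathrm{cm}^{-3}`$ over path lengths of ∼$`10^{15}`$ cm).

```python
def tau_ff(nu_hz, t_e=1e4, em_pc_cm6=5e10):
    """Free-free optical depth in the standard radio approximation:
    tau ~ 8.235e-2 * (T_e/K)^-1.35 * (nu/GHz)^-2.1 * EM[pc cm^-6]."""
    return 8.235e-2 * t_e**-1.35 * (nu_hz / 1e9)**-2.1 * em_pc_cm6

# Assumed EM: n_e ~ 1e7 cm^-3 over ~1.5e15 cm (~5e-4 pc) gives
# EM = n_e^2 * L ~ 5e10 pc cm^-6.
for nu in (1e10, 1e11, 1e12):
    print(f"nu = {nu:.0e} Hz : tau_ff = {tau_ff(nu):.3f}")
```

For these (assumed) parameters the optical depth passes through unity near $`10^{11}`$ Hz and grows steeply toward lower frequencies, consistent with the statement that the calculated free-free emission is suppressed in the radio range.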
The most noticeable features in the spectrum of NGC 4051 are that the ratio of the high ionization line widths to the low ionization line widths is considerably smaller than for other objects in the sample of De Robertis & Osterbrock (1984), and that blue wings up to -800 $`\mathrm{km}\mathrm{s}^{-1}`$ are present in all the forbidden line profiles (Veilleux 1991). Veilleux (1991) also noticed four 'shoulders' (at -40, -110, -180, and -350 $`\mathrm{km}\mathrm{s}^{-1}`$) in the observed line profiles. The line intensity ratios observed by Veilleux are not included in Table 3, which shows the line ratios relative to H$`\beta `$; in fact, the narrow and broad components of H$`\beta `$ could not be deblended. The 'shoulders' indicate the velocities of emitting clouds, which are represented by models 14, 13, and 3, in addition to those represented by models 4 to 10. Model 12 represents the high velocity gas. Its preshock density is high enough to cause rapid cooling of the gas downstream. Consequently, the low ionization line ratios relative to H$`\beta `$ are particularly high (Table 3), and the bremsstrahlung emission in the X-ray domain is low (Fig. 3). Dust reemission is completely suppressed by the sputtering of the grains in the immediate postshock region. Model 13 is characterized by a low $`\mathrm{V}_\mathrm{s}`$ (100 $`\mathrm{km}\mathrm{s}^{-1}`$), a low $`\mathrm{n}_0`$ (100 $`\mathrm{cm}^{-3}`$), and a low primary flux ($`\mathrm{F}_\nu `$ = 5$`\times 10^{10}`$ units). These conditions generally represent the clouds either in the outer NLR or in LINERs (Contini 1997). This model yields too high a \[OIII\] 4959/\[OII\] 3727 line ratio, while model 14, which is an SD model characterized by a low $`\mathrm{V}_\mathrm{s}`$ (50 $`\mathrm{km}\mathrm{s}^{-1}`$), provides a very strong \[OII\]/H$`\beta `$. The averaged spectrum (AV1) is given in the last column of Table 3 and shows an acceptable fit to the observations. The relative contributions of models 3, 12, 13, and 14 to the single line fluxes are shown in Table 4. As the models are distinguished particularly by their shock velocities, the results can be related to the line profile features observed by Veilleux (1991). Model 3 is chosen to represent the contribution of the clouds generating the X-ray excess. Interestingly, the shock velocities adopted to fit the spectra confirm the prevailing observed FWHM of about 200-300 $`\mathrm{km}\mathrm{s}^{-1}`$ in the line profiles of H$`\beta `$, \[OIII\] 4363, and \[OI\]. Models calculated with $`\mathrm{V}_\mathrm{s}`$ = 100 $`\mathrm{km}\mathrm{s}^{-1}`$ provide 98-99 % of the \[OIII\] 4959+5007 and HeII 4686 lines, 94 % of HeII 3200, and 85-86 % of the \[NeV\] 3426 and \[NeIII\] 3869 lines, respectively. Models calculated with $`\mathrm{V}_\mathrm{s}`$ = 50 $`\mathrm{km}\mathrm{s}^{-1}`$ contribute essentially to the \[OII\], \[NII\], and \[SII\] 6717,6730 lines. The high velocity model (12) contributes to all the lines except \[OIII\], \[NeIII\], \[NeV\], and HeII 4686. The model results are not in full agreement with the observations for all the lines, but they show roughly how complex the structure of the NLR is, extending from the edge of the BLR to the outskirts of the galaxy. Models 13 and 14 are chosen also to improve the fit of the continuum SED. In fact, the mutual heating of dust and gas provides a dust temperature low enough to place the IR peak at about 3$`\times 10^{12}`$ Hz, and the trend of model AV1 in the optical range improves on the fit obtained with model AV0 (Fig. 3). 
Clouds corresponding to models 12, 13, and 14 contribute mostly to the emission lines but not to the X-ray excess. ## 6 Final Remarks Komossa & Fink find that the X-ray spectrum consists of a power law modified by absorption edges and an additional soft excess during the high state in source flux. Their results indicate a column density of ionized material of log N = 22.7 and an ionization parameter of log U = 0.4. The underlying power law is in its steepest observed state, with photon index $`\mathrm{\Gamma }_X`$ = -2.3. They assume that the absorber is a one-component medium with a gas temperature of 3 $`\times 10^5`$ K, metal abundances up to 0.2 $`\times `$ solar, an electron density $`\mathrm{n}_\mathrm{e}`$ ∼ $`3\times 10^7`$ $`\mathrm{cm}^{-3}`$, and a thickness D ∼ $`2\times 10^{15}`$ cm, at a distance r ∼ $`3\times 10^{16}`$ cm from the central power source, and no dust. Moreover, they claim that no emission line component can be fully identified with the warm absorber. Notice that in photoionization models a high temperature is associated with low chemical abundances. However, it is well known that galaxies often show an abundance gradient, indicating higher abundances in the central regions. Thus, unless the depletion is due to dust, it seems unrealistic to assume such low abundances in the central region of NGC 4051. In the previous sections we have selected the models which consistently fit the observed continuum in all the frequency ranges as well as the line ratios. Our results show that the so-called warm absorber is composed of many clouds in different physical conditions. The column density within each cloud contributing to the soft X-ray excess does not exceed 5$`\times 10^{20}`$ $`\mathrm{cm}^{-2}`$, considering a postshock compression of ∼10 for low velocity clouds (200-300 $`\mathrm{km}\mathrm{s}^{-1}`$). Compared with the results obtained by Komossa & Fink, this indicates that hundreds of clouds form the warm absorber. The preshock densities span from the values which fit the NLR to values approaching those of the BLR, i.e. between 400 and $`10^7`$ $`\mathrm{cm}^{-3}`$, in agreement with the prediction of De Robertis & Osterbrock (1984) that a large range of densities characterizes the emitting clouds in NGC 4051. Obviously, the clouds responsible for the edge absorption are the densest, in agreement with the density indicated by Komossa & Fink. Moreover, they predict no dust, while the modeling of the continuum in the present work shows that dust must be present inside the clouds to explain the IR emission. However, the dust-to-gas ratio is rather low, even lower, by a factor of ∼3, than that found for LINERs by Viegas & Contini (1994). The central radiation flux reaching the clouds ranges from $`10^{11}`$ to $`10^{13}`$ photons $`\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{eV}^{-1}`$ at the Lyman limit, which are "normal" values for the NLR of AGN (Viegas & Contini 1994). However, the high density clouds which explain the soft X-ray excess at ∼1 keV are all shock dominated. This is an interesting result which shows that the gas is heated by the shock. Temperatures of about 6$`\times 10^5`$ K correspond in fact to shocks of ∼200 $`\mathrm{km}\mathrm{s}^{-1}`$. These temperatures are in agreement with the temperatures predicted by previous models (Komossa & Fink 1997 and references therein), which were explained by very strong radiation from the active center photoionizing and heating a gas characterized by low metal abundances, in order to reach such high temperatures. 
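The quoted correspondence between shock velocity and temperature follows from the strong-shock (Rankine-Hugoniot) jump condition for an adiabatic shock; a quick numerical check, with an assumed mean molecular weight of μ ≈ 0.6 for a fully ionized gas of roughly cosmic composition, is sketched below.

```python
K_B = 1.380649e-16   # Boltzmann constant [erg/K]
M_H = 1.6726e-24     # hydrogen mass [g]

def postshock_temperature(v_s_kms, mu=0.6):
    """Strong adiabatic shock (gamma = 5/3): T = 3*mu*m_H*V_s^2 / (16*k_B).
    mu ~ 0.6 is an assumed value for fully ionized cosmic-abundance gas."""
    v = v_s_kms * 1e5   # km/s -> cm/s
    return 3.0 * mu * M_H * v**2 / (16.0 * K_B)

for v in (100.0, 200.0, 300.0):
    print(f"V_s = {v:3.0f} km/s -> T ~ {postshock_temperature(v):.1e} K")
# V_s ~ 200 km/s indeed gives T ~ 5e5-6e5 K, as quoted in the text.
```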
Our model shows that shock dominated clouds are present in large numbers, indicating that the central source radiation is screened, probably by the BLR clouds, and that the filling factor is high. Because the high temperature is due to the shock, the fit to the observations is obtained with cosmic abundances. Finally, in the present warm absorber model we assume that some very dense clouds are characterized by relatively low velocities (Table 1, models 1, 2, and 3). In the nuclear region of the Circinus galaxy, some clouds characterized by velocities of 250 $`\mathrm{km}\mathrm{s}^{-1}`$ and preshock densities of 5000 $`\mathrm{cm}^{-3}`$ were invoked in order to fit the emission spectra (Contini et al. 1998). Generally, in AGN, higher densities correspond to higher velocities, as for the clouds corresponding to model 12 ($`\mathrm{n}_0`$ = 8000 $`\mathrm{cm}^{-3}`$, $`\mathrm{V}_\mathrm{s}`$ = 1000 $`\mathrm{km}\mathrm{s}^{-1}`$). Notice that the high density, low velocity clumps are characterized by a very small geometrical thickness in NGC 4051; therefore, they could be identified with the debris of high density, high velocity clouds from the BLR edge which have been fragmented by cloud collisions in a turbulent regime. Fragmentation is generally accompanied by a considerable loss of kinetic energy. If model 3 represents these debris, their distance from the active center can be calculated from $`\mathrm{F}(\mathrm{H}\beta )_{\mathrm{obs}}`$ $`\mathrm{d}^2`$ = $`\mathrm{F}(\mathrm{H}\beta )_{\mathrm{calc}}`$ $`\mathrm{r}^2`$, where $`\mathrm{F}(\mathrm{H}\beta )_{\mathrm{obs}}`$ is the absolute flux of H$`\beta `$ observed at Earth (Malkan 1986), d is the distance of the galaxy from Earth (d = 14 Mpc), $`\mathrm{F}(\mathrm{H}\beta )_{\mathrm{calc}}`$ is the absolute flux of H$`\beta `$ calculated at the gaseous clump, and r is the distance of the clumps from the active center. Adopting $`\mathrm{F}(\mathrm{H}\beta )_{\mathrm{obs}}`$ = 31$`\times 10^{-14}`$ $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ and $`\mathrm{F}(\mathrm{H}\beta )_{\mathrm{calc}}`$ = 27 $`\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ (see Table 3), r is about 1.5 pc (a numerical check is sketched below). The high density components of the warm absorber are thus located between the NLR and the BLR. In conclusion, a multi-cloud model can explain the soft X-ray excess in NGC 4051, the optical emission line spectrum, and the SED of the continuum. Indeed, modeling implies the choice of some conditions which should actually prevail, so the single-cloud models must be considered as prototypes. The clouds are heated and ionized by the photoionizing flux from the active center and by the shocks. Owing to the high gas temperature, diffuse radiation, partly absorbed throughout the clouds, is used to explain the bump in the soft X-ray domain, while free-free emission from the lower temperature gas and dust reradiation from the ensemble of the clouds mainly fit the far-IR to optical continuum. The fit of the continuum in the IR shows that the dust-to-gas ratio is not particularly high ($`3\times 10^{-15}`$ by number). Radio emission is well fitted by synchrotron radiation created at the shock front by the Fermi mechanism. Debris of fragmented high density clouds are necessary to explain the oxygen absorption troughs observed at 0.87 and 0.74 keV. The debris are heated by shocks of about 200-300 $`\mathrm{km}\mathrm{s}^{-1}`$. 
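For completeness, the inverse-square distance estimate just quoted can be verified directly with the values given in the text:

```python
import numpy as np

PC_CM = 3.0857e18                 # 1 pc in cm
d_gal = 14.0e6 * PC_CM            # distance to NGC 4051 [cm]
f_obs = 31.0e-14                  # observed Hbeta flux at Earth [erg/cm^2/s]
f_calc = 27.0                     # Hbeta flux at the clump surface, model 3 [erg/cm^2/s]

# F_obs * d^2 = F_calc * r^2  =>  r = d * sqrt(F_obs / F_calc)
r = d_gal * np.sqrt(f_obs / f_calc)
print(f"r ~ {r / PC_CM:.2f} pc")   # ~1.5 pc, between the NLR and the BLR
```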
Low velocity (∼100 $`\mathrm{km}\mathrm{s}^{-1}`$), low density (100 $`\mathrm{cm}^{-3}`$) clouds and high velocity (1000 $`\mathrm{km}\mathrm{s}^{-1}`$), high density (8000 $`\mathrm{cm}^{-3}`$) clouds, which are revealed by the FWHM of the line profiles, contribute to the line and continuum spectra. Acknowledgements. This paper was partially supported by the Brazilian funding agencies PRONEX/Finep, CNPq, and FAPESP. References Allen, C.W. 1973, in "Astrophysical Quantities" (Athlone) Arnaud, K.A. et al. 1985, MNRAS, 217, 105 Balzano, V.A., & Weedman, D.W. 1981, ApJ, 243, 756 Contini, M. 1997, A&A, 323, 71 Contini, M., Prieto, M.A., & Viegas, S.M. 1998, ApJ, 505, 621 De Robertis, M.M., & Osterbrock, D.E. 1984, ApJ, 286, 171 de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H.G. et al. 1991, Third Reference Catalogue of Bright Galaxies, Version 3.9 de Vaucouleurs, A., & Longo, G. 1988, Catalogue of Visual and Infrared Photometry of Galaxies from 0.5 $`\mu `$m to 10 $`\mu `$m (1961-1985) Ficarra, A., Grueff, G., & Tomasetti, G. 1985, A&AS, 59, 255 Fiore, F. et al. 1992, A&A, 262, 37 Gregory, P.C., & Condon, J.J. 1991, ApJS, 75, 1011 Komossa, S., & Fink, H. 1997, A&A, 322, 719 Lebofsky, M.J., & Rieke, G.H. 1979, ApJ, 229, 111 McAlary, C.W., McLaren, R.A., & Crabtree, D.R. 1979, ApJ, 234, 471 Malkan, M. 1986, ApJ, 310, 679 Moshir, M. et al. 1990, Infrared Astronomical Satellite Catalogs, The Faint Source Catalog, Version 2.0 Osterbrock, D.E. 1977, ApJ, 215, 733 Penston, M.V., Penston, M.J., Selmes, R.A., Becklin, E.E., & Neugebauer, G. 1974, MNRAS, 169, 357 Pounds, K.A., Nandra, K., Fink, H.H., & Makino, F. 1994, MNRAS, 267, 193 Rieke, G.H. 1978, ApJ, 226, 550 Rieke, G.H., & Low, F.J. 1972, ApJL, 176, L95 Stein, W.A., & Weedman, D.W. 1976, ApJ, 205, 44 Soifer, B.T., Boehmer, L., Neugebauer, G., & Sanders, D.B. 1989, AJ, 98, 766 Veilleux, S. 1991, ApJ, 369, 331 Viegas, S.M., & Contini, M. 1994, ApJ, 428, 113 Viegas, S.M., Contini, M., & Contini, T. 1998, A&A, submitted Walter, R. et al. 1994, A&A, 285, 119 Williams, R.E. 1967, ApJ, 147, 556 Wisniewski, W.Z., & Kleinmann, D.E. 1968, AJ, 73, 866 Zombeck, M.V. 1990, in "Handbook of Space Astronomy and Astrophysics", Cambridge University Press, p. 199 Figure Captions Fig. 1 The X-ray spectrum of NGC 4051. Filled squares indicate the data. Single model results are indicated by numbers which refer to Table 1. The thick solid line represents the weighted sum which best fits the data. Fig. 2 The fit of the SED of the continuum by the ensemble of models which form the warm absorber. Dotted lines represent the single models and solid lines the weighted sum. Filled squares refer to observations in the X-ray range (see Fig. 1). Open squares indicate the observational data at lower frequencies. Fig. 3 Same as Fig. 2, including model 12 (dotted lines), model 13 (short-dashed lines), and model 14 (long-dashed lines). The dot-dashed line shows that the free-free emission is absorbed in the radio range. The thick solid lines refer to model AV1 and the thin solid lines to AV0.
no-problem/9904/astro-ph9904206.html
ar5iv
text
# Bar Diagnostics in Edge-On Spiral Galaxies. II. Hydrodynamical Simulations. ## 1 Introduction The importance of bars in the structure of spiral galaxies is recognised in the Hubble sequence for the classification of galaxies (Sandage 1961). In the first paper of this series (Bureau & Athanassoula 1999, hereafter Paper I), we highlighted the difficulties involved in the identification of bars in edge-on systems. It is clear that the detection of such bars based on photometric or morphological criteria (e.g. de Carvalho & da Costa 1987; Hamabe & Wakamatsu 1989) is uncertain. Kuijken & Merrifield (1995, hereafter KM95; see also Merrifield 1996) were the first to show that the particular kinematics of barred disks could be used to identify bars in edge-on spirals. They showed that the periodic orbits in an edge-on barred galaxy produce characteristic double-peaked line-of-sight velocity distributions, which can be taken as the signature of a bar. In Paper I, we improved on the work of KM95. We studied the signatures of individual periodic orbit families in the position-velocity diagrams (PVDs) of edge-on barred spirals (before combining them to model real galaxies), and examined how the PVDs depend on the viewing angle. We adopted a widely used mass model and a well-defined method to populate the periodic orbits. Our aim was to provide insight into the projected kinematical structure of barred disks, and to provide guidance to interpret stellar and gaseous kinematical observations of edge-on spiral galaxies. We showed in Paper I that the global appearance of a PVD can be used as a diagnostic to detect the presence of a bar in an edge-on disk. The signatures of the various periodic orbit families leave gaps in the PVDs which are a direct consequence of the non-homogeneous distribution of orbits in a barred spiral. The signature of the $`x_1`$ periodic orbits is parallelogram-shaped and occupies all four quadrants of the PVDs, reaching very high radial velocities when the bar is seen end-on and only low velocities when the bar is seen side-on. The signature of the $`x_2`$ orbits, when present, is similar to that of the $`x_1`$, but reaches its maximum radial velocities at opposite orientations. Those features can be used to determine the viewing angle with respect to the bar in an edge-on disk. However, even if carefully chosen and populated, periodic orbits provide only an approximation to the structure and kinematics of the stars and gas in spiral galaxies. For example, a number of stars may be on irregular orbits, and shocks can develop in the gas. In this paper (Paper II), we concentrate on developing bar diagnostics for edge-on disks using the gaseous component alone. We use the hydrodynamical simulations of Athanassoula (1992b, hereafter A92b), designed to study the gas flow and shock formation in barred spiral galaxies. Unlike Paper I, these simulations properly take into account the fact that the gas is not a collisionless medium. The shocks and inflows which develop in the simulations lead to better bar diagnostics than those of Paper I. In addition, we run simulations covering a large fraction of the parameter space likely to be occupied by real galaxies. The PVDs produced can thus be directly compared with observations not only to detect the presence of bars in edge-on spiral galaxies, but also to constrain the mass distribution of the systems observed. 
In particular, Bureau & Freeman (1999, hereafter BF99) have applied those diagnostics to long-slit spectroscopic observations of a large number of edge-on spiral galaxies, most of which have a boxy or peanut-shaped bulge, to determine the formation mechanism of these objects and study the vertical structure of bars. In Paper III (Athanassoula & Bureau 1999), using fully self-consistent three-dimensional (3D) $`N`$-body simulations, we will develop similar bar diagnostics for the stellar (collisionless) component of barred spiral galaxies. We describe the mass model and hydrodynamical simulations used in this paper in § 2. In § 3, we study the signatures in the PVDs of the various components present in the simulations and discuss the influence of the parameters of the mass model. The effects of dust extinction are illustrated in § 4. We develop the bar diagnostics for edge-on disks and discuss the limitations of our models for the interpretation of real data in § 5. We conclude in § 6 with a brief summary of our main results. ## 2 Hydrodynamical Simulations For the hydrodynamical simulations, we use the flux-splitting second-order scheme of G. D. van Albada (van Albada & Roberts 1981; van Albada, van Leer, & Roberts 1982; van Albada 1985). It is the same code as that used by A92b so we will only briefly review its main properties here. The simulations are two-dimensional and time-dependent, and the gas is treated as ideal, isothermal, and non-viscous. The simulations are not self-consistent and we do not consider the self-gravity of the gas; the flow is calculated using the potential described in Paper I (see also Athanassoula 1992a, hereafter A92a). The mass model has two axisymmetric components, a Kuzmin/Toomre disk (Kuzmin 1956; Toomre 1963) and a bulge-like spherical density distribution. They combine to yield a flat rotation curve in the outer parts of the disk. We use a homogeneous ($`n=0`$) or inhomogeneous ($`n=1`$) Ferrers spheroid (Ferrers 1877) as a third component representing the bar. Each model is described by four main parameters: the bar axial ratio $`a/b`$, the quadrupole moment of the bar $`Q_m`$ (proportional to the mass of the bar), the Lagrangian radius $`r_L`$ (the radius of the Lagrange points $`L_1`$ and $`L_2`$ on the major axis of the bar, approximately inversely proportional to the bar pattern speed), and the central concentration $`\rho _c`$. The other quantities are fixed, including the semi-major axis of the bar $`a=5`$ kpc. We refer the reader to Paper I for a more detailed description of the mass model. A92a discusses at great length its relevance to real galaxies. Suffice it to say here that the properties of the mass model are in excellent agreement with those of early-type barred spirals. The simulations are started with a massless bar and an additional axisymmetric component of mass equal to that of the desired final bar. Mass is transferred from that component to the bar over $`10^8`$ yr and the simulations are run until the gas flow is roughly stationary in the frame of reference corotating with the bar (about 10 bar revolutions). An $`80\times 160`$-cell grid is used to cover a disk of 16 kpc radius, assuming bisymmetry. The inner half of the simulations is then regridded to the $`80\times 160`$ grid and the simulations continued for another 5 bar revolutions using this increased spatial resolution ($`0.1\times 0.1`$ kpc<sup>2</sup> cell size; see van Albada & Roberts 1981). Star formation and mass loss are modeled crudely. 
The gas density is lowered artificially in high density regions and gas is added uniformly over the grid. This process is governed by the equation $$d\rho _g/dt=\alpha \rho _{g,init}^2-\alpha \rho _g^2,$$ (1) where $`\rho _g`$ is the gas density, $`\alpha `$ is a constant (set to 0.3 $`M_{\odot }^{-1}`$ pc<sup>2</sup> Gyr<sup>-1</sup> in most runs), and $`\rho _{g,init}=1`$ $`M_{\odot }`$ pc<sup>-2</sup> is the uniform initial gas density. It therefore takes about 3 Gyr for the gas to be reprocessed. There is no artificial viscosity in the code. Our main tool in this paper will be PVDs, representing the projected density of material in the edge-on disks as a function of line-of-sight velocity and projected position along the major axis. Since we are only interested in the bar region, we use only the inner 8 kpc $`\times `$ 8 kpc region of the simulations, covered by the high resolution grid. At larger radii, the motion of the gas can, to first order, be considered as circular, since the force of the bar decreases steeply with radius. Including the outer regions thus makes no difference to our PVDs. As in Paper I, the models considered are those of A92a (see her Table 1). We will also use her units: $`10^6`$ $`M_{\odot }`$ for masses, kpc for lengths, and km s<sup>-1</sup> for velocities. It is essential to understand the orbital structure of the models to interpret properly the results of the simulations. Orbital properties have been discussed in Paper I and A92a, but also in Athanassoula et al. (1983), Papayannopoulos & Petrou (1983), Teuben & Sanders (1985), and others. Here, we will mainly use Paper I and A92a for comparison. We will also draw heavily on the results of A92b, which used the same set of simulations but for different purposes. She discussed in detail the gas flow and compared the results with the properties of real galaxies. ## 3 Bar Diagnostics In this section, we will concentrate on understanding the PVDs of two inhomogeneous bar models which are prototypes of models with and without inner Lindblad resonances (ILRs). As suggested in A92a, we will identify the existence and position of the ILRs with the existence and extent of the $`x_2`$ periodic orbits. The two models considered are the same as in Paper I: model 001 ($`n=1`$, $`a/b=2.5`$, $`Q_m=4.5\times 10^4`$, $`r_L=6.0`$, $`\rho _c=2.4\times 10^4`$) and model 086 ($`n=1`$, $`a/b=5.0`$, $`Q_m=4.5\times 10^4`$, $`r_L=6.0`$, $`\rho _c=2.4\times 10^4`$). We will then extend our results to other models and analyse how the properties of the PVDs vary as the parameters of the mass model are changed. ### 3.1 Model 001 (ILRs) Figure 1 shows PVDs for model 001, which has ILRs. The figure shows, for the inner half of the simulation, the face-on surface density of the gas and PVDs obtained by projecting the simulation edge-on and using various viewing angles with respect to the bar. Unlike the models of Paper I, which are symmetric around both the minor and major axes of the bar, the present simulations are bisymmetric, so we need to cover a viewing angle range of 180°. The viewing angle $`\psi `$ is defined to be 0° for a line-of-sight parallel to the major axis of the bar and 90° for a line-of-sight perpendicular to it, increasing counterclockwise in the surface density plots. A92b showed convincingly that the two strong parallel narrow segments present in the surface density plot of Figure 1 represent offset shocks on the leading sides of the bar, displaying strong density enhancements and sharp velocity gradients. 
They can be identified with the dust lanes observed in barred spiral galaxies. The structures seen at the ends of the bar and perpendicular to it are also shocks. In the inner bar region there is a very intense two-arm nuclear spiral, connecting with the offset shocks. There is little gas in the barred region outside the nuclear spiral, which we will hereafter refer to as the outer bar region. Beyond the bar the surface density is almost featureless; only a few spiral arms are seen. A92b showed that, in the outer bar region, the streamlines have roughly the shape and orientation of the $`x_1`$ periodic orbits. As one moves inward, the streamlines change gradually to the shape and orientation corresponding to the $`x_2`$ periodic orbits. This flow pattern leads to the offset shocks and results in an inflow of gas toward the nuclear region, accounting for the gas distribution in the bar: low densities in the outer bar region and high densities in the center. The velocities are small (in the reference frame corotating with the bar) around the Lagrange points on the minor axis of the bar and the flow is close to circular outside the bar region. The PVDs in Figure 1 show the existence of three distinct regions: an inner region, corresponding to the signature of the nuclear spiral; an intermediate region, corresponding to the signature of the outer parts of the bar; and an external region, corresponding to the signature of the parts outside the bar. It is important to notice how low the density of the intermediate region is, compared to that of the other two regions. This is not surprising since, as mentioned in A92b and above, most of the bar region has very low gas density, with the exception of the central part (the nuclear spiral) and the two shock loci. This will form the basis of the bar diagnostics which we will develop later on. Let us now examine each of the three regions separately, by keeping only the gas in the targeted region (and masking out the gas in the other regions) when calculating the PVDs. Figure 2 shows the surface density and PVDs of model 001 when considering the low density outer bar region only. Because the high density regions have been masked out, and because we are looking only at relatively low density regions, we see much more structure in the PVDs of Figure 2 than in the corresponding sections of the PVDs of Figure 1. When compared with Figure 4 of Paper I, Figure 2 shows, at first glance, many similarities, but also, when scrutinised more closely, a number of differences. This could be expected since the gas streamlines in that region follow loosely, but far from exactly, the shape and orientation of the $`x_1`$ orbits, which are elongated parallel to the bar. The parallelogram-shaped signature of the $`x_1`$ orbits in Figure 4 of Paper I is again observed here and the “forbidden” quadrants are again populated. This is due to the fact that both the $`x_1`$ orbits and the streamlines in the outer bar region are not circular, but elongated. In the gas, however, the parallelogram shape is also present for $`\psi =0\mathrm{°}`$, while, for this viewing angle, the PVD of the $`x_1`$ orbits showed a bow-shaped feature. The reason is that, unlike the $`x_1`$ orbits, the streamlines do not have their major axes parallel to that of the bar, but rather at an angle of about 20° to it (see A92b). Also, at $`\psi =90\mathrm{°}`$, the $`x_1`$ orbits show a near-linear PVD, while the gas shows an additional faint bow-shaped feature reaching high velocities near the center. 
By masking out a larger fraction of the simulation (not shown), it is possible to show that this extra feature arises from the very low density areas just inside the offset shocks, close to the major axis of the bar. In this section of the bar, the flow differs substantially from the behaviour of the $`x_1`$ periodic orbits. The elongated “hole” in the center of the PVDs, at intermediate viewing angles, is present in both sets of PVDs, and is due to the way the $`x_1`$ family was terminated and the way the gas density was masked out. This causes, in both cases, an absence (or low density) of quasi-circular streamlines near the end of the bar. In the hydrodynamical simulation, there is also a reduced degree of symmetry with respect to viewing angles of 0° or 90°, in particular for the angles 67.5° and 112.5°. They are, of course, identical in the PVDs of Paper I. This is because the gas flow is bisymmetric, as opposed to being symmetric with respect to both the minor and major axes of the bar, as are the $`x_1`$ orbits (see Fig. 2 of A92b). In Figure 2, we see that the radial velocities are maximum (compared to the circular velocity) for lines-of-sight almost parallel to the bar (highest values for $`\psi =0\mathrm{°}`$ and 157.5°), and decrease as $`\psi `$ increases, to reach a minimum when the line-of-sight is roughly along the bar minor axis (lowest values for $`\psi =90\mathrm{°}`$ and 67.5°). The same behaviour was seen for the PVDs of the $`x_1`$ orbits (Paper I), but with a slight shift of the viewing angle, so that the maximum and minimum occurred at exactly 0° and 90°, respectively. This was to be expected since, like the $`x_1`$ orbits, the streamlines in the outer bar region are elongated parallel to it (with a small offset). Similarly, the position where the maximum occurs moves out as the viewing angle increases from $`\psi =0\mathrm{°}`$ to $`\psi =90\mathrm{°}`$. This was also found for the $`x_1`$ orbits (Paper I), where it was explained by considering the trace of an elongated orbit in a PVD as a function of viewing angle and distance from the center. The explanation is analogous here. However, in the hydrodynamical simulation, the maxima generally occur at a larger distance from the center than in the periodic orbit approach (this is easily seen by comparing Fig. 2 to Fig. 4 in Paper I). This is due to the fact that the $`x_1`$-like flow does not extend up to the center of the simulated galaxy (contrary to the $`x_1`$ orbits in Paper I), but is superseded by an $`x_2`$-like flow at a certain radius. For example, the nuclear spiral (which has an $`x_2`$-like behaviour) extends about 1.6 kpc on the minor axis of the bar, which is the projected distance at which the envelope of the signature of the outer bar region in the PVD for $`\psi =0\mathrm{°}`$ drops abruptly. To see which features of the gas distribution contribute to the PVDs, we have repeated the blanking exercise, this time masking out all areas except either the shock loci along the leading sides of the bar or the density enhancements near the ends of the bar major axis (not shown). The gas in the shock loci does not contribute any outstanding features to the PVDs, mainly because the amount of gas integrated along most lines-of-sight is small, the features being very narrow. At $`\psi =22.5\mathrm{°}`$, when the line-of-sight is nearly parallel to the shock loci, they contribute the two straight and parallel segments separating the intense and faint regions of the PVD just inside $`\text{D}=\pm 2`$. 
The high density enhancements near the ends of the bar major axis contribute the very strong linear segment going through the center of the PVDs at $`\psi =0\mathrm{°}`$ and 22.5°. For these angles, they are seen roughly as segments of circles. For viewing angles near 90°, they give rise to the undulating parts of the PVDs at large radius. We can repeat the masking process to keep only the region of the simulation containing the nuclear spiral, as shown in Figure 3. Comparing the PVDs thus obtained with those of the $`x_2`$ family of periodic orbits (Fig. 5 of Paper I), we find again many similarities, but also some differences. In both cases, we see either an inverted S-shaped feature or a near straight segment passing through the center of the PVDs. We refer the reader to Paper I for a discussion and explanation of these shapes. More similarity is found, however, if we compare PVDs at viewing angles differing by about 20°, e.g. comparing the gaseous PVD at $`\psi =112.5\mathrm{°}`$ to the orbital one at $`\psi =90\mathrm{°}`$. This offset seems to correspond to an offset of the nuclear spiral with respect to the minor axis of the bar (and to the offset of the straight shocks with respect to the major axis, see A92b), although this is hard to measure in the surface density plot. Also, the PVDs of the nuclear spiral are more asymmetric with respect to the viewing angles 0° or 90° than the PVDs of the outer bar region (e.g. comparing the PVDs at $`\psi =22.5\mathrm{°}`$ and $`\psi =-22.5\mathrm{°}=157.5\mathrm{°}`$). This is due to the weaker symmetry of the nuclear spiral. Because the streamlines in the central region are elongated roughly perpendicular to the bar, like the $`x_2`$ orbits, higher radial velocities (compared to the circular velocity) are reached when the bar is seen side-on than when the bar is seen end-on. Considering the maximum velocity of the PVDs as a function of the viewing angle, we find that it is highest for $`\psi =112.5\mathrm{°}`$ and 135° and lowest for $`\psi =0\mathrm{°}`$ and 22.5°. The “hole” in the parallelogram-shaped signature of the $`x_2`$ orbits (see Fig. 5 of Paper I) has completely disappeared in the hydrodynamical simulation, because the $`x_2`$-like behaviour of the gas flow persists past the inner ILR and the flow becomes almost circular in the very center. Figure 4 isolates the signature of the outer parts of the simulation in the PVDs. Because the influence of the bar decreases rapidly with radius, the flow outside the bar is close to circular. A perfectly circular orbit would yield an identical inclined straight line passing through the origin in all PVDs. The structure seen here can thus be thought of as a succession of near-circular orbits of increasing radii, yielding the “bow tie” signature observed. The “hole” seen in the center of the PVDs at certain viewing angles (e.g. $`\psi =45\mathrm{°}`$) is again due to the fact that the orbits are not perfectly circular. The strong almost solid-body features forming loop-like structures near the upper and lower limits of the envelope of the signature are due to tightly wound spiral arms in the outer disk. ### 3.2 Model 086 (no-ILRs) Figure 5 shows the face-on surface density distribution of model 086, which has no ILRs (or, equivalently, has no $`x_2`$ periodic orbits), and the PVDs obtained using an edge-on projection. The main difference with the density distribution of model 001 is the absence of any significant nuclear spiral. 
The strong straight and narrow features in the center of the bar are centered shock loci, caused by the high curvature of the streamlines near the major axis of the bar (see Fig. 4 in A92b). Similarly, the $`x_1`$ orbits in this model have high curvatures or loops near their major axes (A92a). The shock loci do not curve near the center because no streamline perpendicular to the bar exists; there are no $`x_2`$ periodic orbits in this model. The shocks, however, still drive an inflow of gas, resulting in the entire bar region being gas depleted. As expected, the region outside the bar is almost unaffected by the change in the bar axial ratio compared to model 001. The PVDs for model 086 reflect all of the above properties. In particular, they lack the central feature associated with the nuclear spiral in model 001, and the signature of the bar region is very faint (Fig. 5 should be contrasted with Fig. 1). Because of the absence of an $`x_2`$-like flow in the center of the simulation, the $`x_1`$-like flow extends all the way to the center. In good agreement with what happens for the periodic orbits (see Paper I or A92a), the streamlines are much more eccentric than those of model 001. The parallelogram-shaped envelope of the signature of the bar region in the PVDs therefore reaches even more extreme radial velocities (compared to the circular velocity) than in model 001, and the rising part of the envelope extends almost all the way to the center for viewing angles close to 0° (see also Fig. 9 in Paper I). When the bar is seen close to end-on, radial velocities more than twice those in the outer parts are reached. These velocities decrease rapidly as the viewing angle approaches $`\psi =90\mathrm{°}`$. Unfortunately, the signature of the bar region in the PVDs is very faint, because of the strong inflow of gas toward the center. As in the case of model 001, the regions outside the bar lead to a strong almost solid-body component in all PVDs. ### 3.3 Other Models In this section, we will analyse sequences of simulations corresponding to the ranges along the axes of parameter space likely to be occupied by real galaxies. We hope thereby to further our understanding of the PVDs of edge-on barred spiral galaxies, and to be able to extend the bar diagnostics which we will develop. We also wish to develop criteria allowing us to constrain the bar properties and mass distribution of edge-on systems. To achieve this, we will concentrate on understanding the PVDs of the gas flow within the barred region of the simulations, and will use extensively for that purpose the results of A92a and A92b. In particular, A92b showed that nuclear spirals occur in models with an extended $`x_2`$ family of periodic orbits and lead to offset shocks, while centered shocks occur in models with absent or only modestly extended $`x_2`$ orbits. She also showed that $`x_1`$ periodic orbits with a high curvature near the major axis of the bar are essential to the formation of shocks, and that such shocks lead to an inflow of gas toward the central regions of the simulations, depleting the outer (or entire) bar regions. Figure 6 shows a sequence of simulations with varying bar axial ratio $`a/b`$ (the other parameters are kept fixed at those of model 001). The figure shows, for each simulation, the face-on surface density distribution and the velocity field (with a few streamlines) in the frame of reference corotating with the bar. 
It also shows the PVDs obtained for various viewing angles with respect to the bar when the simulation is viewed edge-on. To limit the size of the figure, we show a slightly reduced number of viewing angles compared to Figures 1-5. For small bar axial ratios, when the shocks are short and curved, the outer bar region is not strongly gas depleted and its signature is easily visible in the PVDs. As we consider simulations with increasingly high bar axial ratios, the density of the region of the PVDs which corresponds to the outer bar region drops considerably. It reaches its lowest values for the highest bar axial ratio considered ($`a/b=5.0`$), for which the shock loci are straight and close to the bar major axis. The envelope of the signature of the outer bar region in the PVDs becomes more extreme as the bar axial ratio is increased: the maximum radial velocities reached for small viewing angles increase and the positions of the maxima get closer to the center. These trends are roughly linear, and the opposite holds for large viewing angles. For example, at $`\psi =0\mathrm{°}`$, the maximum velocity of the outer bar region is about 250 for $`a/b=1.5`$, 360 for $`a/b=3.0`$, and 430 for $`a/b=5.0`$. At $`\psi =90\mathrm{°}`$, the opposite is observed, with maximum velocities of about 225 and 215, respectively. This is easily understood because, as the bar axial ratio increases, the eccentricity of the streamlines in the outer bar region (and that of the $`x_1`$ orbits, see A92a) also increases, leading to higher velocities along the line-of-sight when the bar is seen end-on, and lower velocities when the bar is seen side-on. The most important effect of an increase of the bar axial ratio is the disappearance of the nuclear spiral for axial ratios larger than about $`a/b=2.7`$. Because the nuclear spiral is associated with an $`x_2`$-like flow, and because the range of radii occupied by the $`x_2`$ orbits decreases rapidly as the bar axial ratio is increased (see Fig. 6 of A92a), the inverted S-shaped signature of the nuclear spiral in the PVDs disappears for large bar axial ratios. The maximum radial velocity reached by the nuclear spiral signature varies little with the bar axial ratio (when the spiral is present). Figure 7 shows how the PVDs of the simulated disks change when the Lagrangian radius of the mass model is varied. A92b showed that the gas flow in a bar has shock loci offset toward the leading sides of the bar and of the form observed in early-type barred spiral galaxies only for a restricted range of Lagrangian radii, namely $`r_L=(1.2\pm 0.2)a`$. All the observational estimates for early-type strongly barred spirals also give values within this range (see, e.g., A92b; Elmegreen 1996). We will thus concentrate on this range here. Model 028, the limiting case with $`r_L=5.0`$, shows strong spiral arms starting at the ends of the bar and extending to large radii. The spiral arms are easily identified in the PVDs as long filamentary structures. This model has no nuclear spiral, and thus no corresponding inverted S-shaped feature in the PVDs. The opposite is true for models with larger Lagrangian radii or, equivalently, lower pattern speeds. Those models have extended $`x_2`$ families of periodic orbits and therefore nuclear spirals (see A92b). As the pattern speed of the bar is further decreased, the radial range occupied by the $`x_2`$ periodic orbits is increased and the nuclear spiral becomes more predominant. The outer bar region decreases accordingly (A92a). 
This effect is clearly seen in the PVDs of Figure 6. For increasing Lagrangian radii, they show an increase of the radial extent of the nuclear spiral signature, and a decrease of the radial extent of the outer bar region signature. There is also an increase of the maximum radial velocity reached by the nuclear spiral, but the effect is rather small. Figure 6 shows the behaviour of the PVDs as the central concentration of the mass model is varied. For low central concentrations, the gas streamlines are oval-shaped and aligned with the bar, there are no shocks, and the bar region is only slightly gas depleted. The signature of the bar region in the PVDs is then clear. As the central concentration is increased, the streamlines become more eccentric and the envelope of the signature of the bar region extends to higher velocities. Once the central concentration reaches $`\rho _c\approx 2.2\times 10^4`$, an $`x_2`$-like flow appears in the center of the bar and a nuclear spiral and offset shocks are formed. These changes are also easily seen in the PVDs, which acquire an inverted S-shaped feature, while the density in the outer bar region drops. The region occupied by the $`x_2`$ orbits then increases with increasing central concentration (A92a), and so does the radial extent of the nuclear spiral signature in the PVDs. An increase in the central concentration of the mass model has similar effects to an increase of the Lagrangian radius (see Fig. 6). This is not surprising, since both changes influence the location of the resonances in a similar way. Figure 6 shows a sequence of simulations with varying bar quadrupole moment. For the lowest quadrupole moment, the bar is weak and the flow is close to circular, with only a weak nuclear spiral and curved shocks in the center. This does not cause substantial inflow, and the gas density in the barred region remains high. The velocity field shows that there is a transition from an $`x_1`$-like flow to an $`x_2`$-like flow in the central region, but the effect is not strong since the eccentricity of the streamlines is small. All these effects are reflected in the PVDs. For somewhat higher bar quadrupole moments (model 058), the nuclear spiral is well-developed and its signature is strong in the PVDs, with a gap present between it and the solid-body signature of the outer parts of the galaxy. As the bar quadrupole moment is increased further, the envelope of the signature of the outer bar region in the PVDs becomes more extreme, reaching larger radial velocities and extending closer to the center. For $`Q_m\gtrsim 5.5\times 10^4`$, the nuclear spiral disappears. These two effects are due to the fact that the eccentricity of the streamlines increases significantly with increasing bar quadrupole moment, while the region occupied by the $`x_2`$ orbits shrinks until it disappears completely (see A92a). Because the bar is so strong for high quadrupole moments, the flow is non-circular even in the outer parts of the simulations, and a “hole” is present in the center of the PVDs at intermediate viewing angles. Not surprisingly, the variations in the gas distribution and kinematics are similar when the bar quadrupole moment is increased and when the axial ratio of the bar is increased. ## 4 Dust Extinction It is interesting to note that the PVDs discussed so far have a certain degree of symmetry with respect to the viewing angles 0° or 90°.
This means that although it is relatively easy to determine whether a line-of-sight is close to the major or the minor axis of a bar ($`|\psi |`$), it is considerably harder to determine in which half of the PVD (positive or negative projected distances from the center) the near side of the bar is located ($`\pm \psi `$). This situation can be contrasted with that in the Galaxy, where most studies have no difficulty identifying the quadrant in which the near side of the Galactic bar is located. Studies using infrared photometry (e.g. Dwek et al. 1995; Binney, Gerhard, & Spergel 1997), star counts (e.g. Weinberg 1992; Stanek et al. 1997), gaseous or stellar kinematics (e.g. Binney et al. 1991; Wada et al. 1994; Zhao, Spergel, & Rich 1994; see also Beaulieu 1996; Sevenster et al. 1997), and microlensing events (e.g. Paczynski et al. 1994) all indicate a bar making an angle of 15° to 45° with respect to the line-of-sight to the Galactic center (positive values indicating that the near side of the bar is at positive Galactic longitude). In the case of the Galaxy, at least two effects help the observer determine the exact orientation of the bar. Firstly, projection effects (both in longitude and latitude) mean that two lines-of-sight on each side of the Galactic center reach correspondingly different parts of the bar (e.g. Binney et al. 1991). Secondly, the large difference between the distances to each side of the bar means that point sources in the far side of the bar will appear significantly fainter than the corresponding sources in the near side (e.g. Stanek et al. 1997). In addition, the far side of the bar will appear thinner than the near side (Dwek et al. 1995). For a galaxy at infinity, all lines-of-sight are parallel, and no projection or distance effects are present. However, extinction within an edge-on disk can play a similar role to that of distance in the Galaxy, and can help constrain the orientation of a bar. Because the velocity spread in the inner parts of the PVDs is so large, it is unlikely that self-absorption by any line would be significant. Given the prominence of the dust lane in many edge-on spiral galaxies, extinction by dust is likely to be the dominant factor affecting the PVDs. If dust is present in significant amounts, the spectroscopic signature of the far side of the bar in a PVD should be fainter than that of the nearer side. Figure 6 shows the surface density and PVDs of model 001 when considering a dust distribution proportional to the gas surface density. Because our simulations are not self-consistent, only the relative values of the density are important and the value of the dust absorption coefficient per unit mass $`\kappa `$ we use is meaningless. Our goal in this section being merely to illustrate the effects of dust on the simulated PVDs, and not to reproduce quantitatively the situation in real galaxies, we have simply increased the value of $`\kappa `$ (in which the dust-to-gas ratio is also folded) until the PVDs were significantly affected. In Figure 6, we have decreased the surface densities in the surface density plot to reflect the effective contribution of each point to the projected density for a viewing angle $`\psi =45\mathrm{°}`$. Unlike Figure 6, the PVDs are now far from symmetric with respect to viewing angles of 0° or 90°. In addition, PVDs at intermediate viewing angles are no longer antisymmetric with respect to the center.
This is mainly because the nuclear spiral obscures most of the material behind it, breaking the symmetry of the parallelogram-shaped signature of the $`x_1`$-like flow in the outer bar region. For viewing angles $`0\mathrm{°}<\psi <90\mathrm{°}`$, the nuclear spiral obscures mostly material moving away from the observer in the outer bar region, and obscures only a small amount of material moving toward the observer in the same region. Thus, the signature of the $`x_1`$-like flow in the PVDs is weakened for positive radial velocities, leading to a much fainter signature of the outer bar region in the upper halves of the PVDs than in the lower halves. The opposite is true for viewing angles $`90\mathrm{°}<\psi <180\mathrm{°}`$, where the signature of the outer bar region is much fainter in the lower halves. This effect is strongest and least extended for viewing angles $`\psi \approx 110\mathrm{°}`$ to 135°, as the line-of-sight is then roughly parallel to the major axis of the nuclear spiral (which is slightly offset from that of the $`x_2`$ periodic orbits). Some effects on the signature of the nuclear spiral itself and on the signature of the outer parts of the simulation are present, but they are less pronounced. In addition to the diagnostics suggested in the previous sections to identify a bar in an edge-on spiral galaxy and determine whether it is seen end-on or side-on, the introduction of dust in the simulations has allowed us to develop criteria to determine in which half of the galaxy the near side of the bar is located. Of course, the distribution of dust in real galaxies will inevitably be more complex than the highly idealised distribution adopted here. Nevertheless, the features due to dust in the PVDs of Figure 6 can probably still be used as a guide to interpret asymmetries present in real data. ## 5 Discussion ### 5.1 Bar Diagnostics Unlike periodic orbit studies (KM95; Paper I), where one has to adopt a way of populating the orbits, hydrodynamical simulations provide both the velocity and density of the gas. In particular, if some periodic orbit families intersect, it is not necessary to make a choice between them, because the simulations will reveal which family the gas follows in each region (and to what extent). Nevertheless, as we will discuss later in this section, the comparison with observations can still present some problems, since observations involve the strength of a given emission line rather than the gas density. Notwithstanding the problem of populating the orbits, there is generally good agreement between the PVDs obtained from periodic orbit calculations in Paper I and those obtained here from hydrodynamical simulations. This is because there are several similarities between the periodic orbit structure of the models and the gas flow (A92b). We have in both cases a central PVD component, which we identified in Paper I with the $`x_2`$ orbits and in this paper with the nuclear spiral. Further out, we have the domain of the $`x_1`$ orbits, which covers the outer (or entire) bar regions of the simulations. In Paper I, by studying in detail the signatures of various periodic orbit families in the PVDs, we gained useful insight into the projected structure and kinematics of the gas from first principles. This allowed us to obtain a deeper understanding of the PVDs produced in the hydrodynamical simulations.
The main feature of the PVDs is the gap (fainter region), present at all viewing angles, between the signature of the nuclear spiral and that of the outer parts of the simulations. Such a structure would not be possible in an axisymmetric galaxy and it unmistakably reveals the presence of a bar or oval in an edge-on disk. This gap occurs because of the large-scale shocks which are present in bars, and which drive an inflow of gas toward the center, depleting the outer bar regions. The gaps present in the PVDs produced with periodic orbits (see KM95; Paper I) are different in nature from the ones observed here, being mainly due to the absence of populated orbits in certain regions of the models, particularly near corotation. As a result, in these studies, the low density region extends well beyond corotation, although its exact extent depends on which orbits are neglected (e.g. as self-intersecting) and how the other orbits are populated. For example, in Figure 1 of KM95, the gap extends to almost twice the corotation radius. We recall that 3D $`N`$-body simulations of rotating disks produce bars which, when viewed edge-on, appear boxy-shaped if seen end-on and peanut-shaped if seen side-on (see, e.g., Combes & Sanders 1981; Combes et al. 1990; Raha et al. 1991). Taking the maximum height of the peanut-shaped bars to occur at half the corotation radius (Combes et al. 1990), we find that, in the KM95 case, the ratio of the radial extent of the gap in the PVDs to the radius where the maximum height of the peanut-shaped bulges occurs should be approximately 4 (or slightly less because of projection effects, associating the bulges with edge-on bars). On the other hand, in the hydrodynamical simulations presented here, the low density region of the PVDs reflects the low density region in the outer parts of the bar, and should thus be within corotation. When a bar is seen side-on, the ratio of the extent of the gap to the radius of maximum height of the peanut-shaped bulge should thus be approximately 2 (or slightly less). KM95 report that this ratio is about 2 for the two galaxies they studied, in agreement with the prediction of our hydrodynamical simulations. The same can be said for the galaxies studied by Bureau & Freeman (1997) and BF99. These results can only be reconciled with the periodic orbits approach if the maximum height of the peanut-shaped bulges occurs near corotation (rather than halfway). As we pointed out in § 3.1, the shapes of the features present in the PVDs vary with the viewing angle, and can thus in principle be used to constrain that quantity in an observed galaxy. The envelope of the signature of the outer bar region in the PVDs reaches very high radial velocities (compared to those in the outer parts of the simulations) for viewing angles close to the bar, and relatively low velocities for viewing angles perpendicular to it. However, the envelope is so faint in most cases that it is unlikely to be of much use with real data (see § 3.3). The signature of the nuclear spiral has an inverted S-shape for some viewing angles and is almost solid-body for others. That feature is rather thick and does not show much fine structure, however, so it may be hard to use in conjunction with observations. The best viewing angle diagnostic is probably provided by the ratio of the maximum radial velocity reached by the nuclear spiral component to the velocity in the outer parts of a galaxy. 
This ratio should be greater than unity for viewing angles roughly perpendicular to the bar and smaller than unity for viewing angles parallel to it. Analysing their sample, BF99 find that the above is in good agreement with their results, provided they make the reasonable assumption that bars are peanut-shaped when seen close to side-on and boxy-shaped when seen close to end-on (see, e.g., Combes & Sanders 1981; Combes et al. 1990). Other properties of the PVDs, such as the maximum radial velocity reached or the faintness of the signature of the outer (or entire) bar regions, can also help to constrain the mass distribution and bar properties of an observed galaxy (see § 3.3). In practice, however, the analysis and modeling of spectroscopic data will not be a trivial task. If a barred system has no nuclear spiral (or, equivalently, does not have an extended $`x_2`$ family of periodic orbits), there will be no nuclear spiral signature in the PVDs, and hence no gap or double-peaked structure. In addition, the surface density in the bar region can be extremely low. An example of such a model was discussed in § 3.2 (model 086). In such cases, the only component likely to be detected observationally is the solid-body signature of the outer parts of the galaxy, and the kinematical detection of the bar will not be straightforward. The first step would be to rule out such a slowly rising rotation curve (the rotation curve being defined as the upper limit of the envelope of an observed PVD), for example by calculating the shape of the rotation curve expected from surface photometry. Unfortunately, this would only show that the type of gas observed (ionised, neutral, or molecular) is not present in large quantities in the central region of the galaxy, but it would not provide any information on the cause of this depletion. It is thus probably necessary to use stellar kinematics to identify a bar in such systems. Because a large percentage of the stars in the central regions of barred disks are expected to be trapped around the $`x_1`$ orbits, there should be a clear bar signature in the PVDs. We shall explore possible diagnostics based on the stellar kinematics (and corresponding PVDs) in Paper III. Cases with no nuclear spiral component should be a minority, however, at least among strongly barred early-type spiral galaxies, since observational evidence argues that these systems possess ILRs (see, e.g., Athanassoula 1991; A92b). In particular, the fact that the dust lanes in most early-type strongly barred galaxies are offset along the leading sides of the bar, rather than centered and close to its major axis, is a strong argument for the existence of ILRs (A92b). In addition, out of 17 galaxies with a boxy/peanut-shaped bulge in the sample of BF99, only four have no nuclear component in their PVD. The lack of a nuclear component in these four galaxies could be due either to the lack of ILRs, or to the lack of emitting gas around the ILR region. At least 13 out of 17 galaxies, i.e. an overwhelming majority, have ILRs. Nevertheless, the existence of ILRs deserves further study, particularly for later type spirals. This can be done, for example, by high resolution kinematical studies, to show the change of direction of the orbits or streamline ellipticities in the central parts of galaxies (e.g. Teuben et al. 1986), or by calculating the families of periodic orbits in galaxy potentials derived from observations, to show the existence or non-existence of the $`x_2`$ (and $`x_3`$) family.
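Since all of these diagnostics are read off PVDs, it may help to spell out how such a diagram is computed from simulation output. The following is a minimal sketch of our own (the array names, the sign conventions, and the binning are assumptions, not the code actually used for the figures in this paper); $`\psi =0\mathrm{°}`$ corresponds to an end-on view of the bar, whose major axis lies along $`x`$:

```python
import numpy as np

def pvd(x, y, vx, vy, sigma, psi_deg, bins=100):
    """Position-velocity diagram of an edge-on disk viewed at angle psi_deg
    from the bar major axis (the x-axis). Inputs are flat arrays of cell
    positions, velocities and gas surface densities in the disk plane."""
    psi = np.radians(psi_deg)
    los = np.array([np.cos(psi), np.sin(psi)])    # line-of-sight direction
    perp = np.array([-np.sin(psi), np.cos(psi)])  # sky-plane direction
    xp = x*perp[0] + y*perp[1]                    # projected position
    vlos = vx*los[0] + vy*los[1]                  # radial (los) velocity
    # density-weighted 2D histogram: this is the PVD
    H, xedges, vedges = np.histogram2d(xp, vlos, bins=bins, weights=sigma)
    return H, xedges, vedges
# For the dust experiment of Section 4 one would additionally attenuate each
# cell's weight by exp(-kappa * foreground gas column) before binning.
```

The histogram is weighted by the gas density; as stressed in § 5.3 below, an observed PVD is instead weighted by the strength of a particular emission line.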
### 5.2 NGC 5746-Like PVDs Although our simulations cover a fair fraction of the available parameter space (see Table 1 in A92a), we have never come across a PVD showing a clear “figure-of-eight”, as suggested by KM95. In other words, the upper envelope of the low density region of the simulated PVDs is never as pronounced as it is in NGC 5746 or, to a lesser extent, in NGC 5965 and NGC 6722 (see KM95; Bureau & Freeman 1997; BF99). Based on periodic orbit calculations, we expect this region of the PVDs to originate from material near the ends of the bar on its minor axis (see, e.g., Fig. 2b in Paper I). Some models, particularly $`n=0`$ models, indeed have secondary density enhancements near the edge of the bar, just outside the offset shocks which we have already discussed (see Fig. 3 in A92b). These enhancements are due to the small distance between the outer ILR and the tip of the $`x_1`$ family characteristic curve in the characteristic diagram (A92b). This leads to orbit crowding, particularly near the bar minor axis. In these models, the upper envelope of the gap in the PVDs is stronger than that in most other models. Figure 6 shows two density distributions for model 088 and the corresponding PVDs for a viewing angle $`\psi =45\mathrm{°}`$. On the left (Fig. 6a,b), the entire density distribution and PVD are displayed. On the right (Fig. 6c,d), most of the PVD was masked out, leaving only the region of interest in objects like NGC 5746. The corresponding density distribution was obtained using a simple inversion scheme. The strong upper envelope of the low density region of the PVD is clearly due to the secondary density enhancement on the leading side of the bar. A similar effect is observed in many $`n=0`$ models with low bar axial ratio $`a/b`$, small Lagrangian radius $`r_L`$, and/or low central concentration $`\rho _c`$ (e.g. models 013-016, 040-043, 061-063, 087-088, and 108-110). Although the intensities obtained in these simulations are much lower than the strong envelopes observed in objects like NGC 5746, we must remember that it is the density which is plotted here, and the gaseous emission in the secondary density enhancements could be somehow enhanced, e.g. through shocks. Features similar to these density enhancements are observed in real barred galaxies, where “plumes” are sometimes seen on the leading sides of bars (in NGC 1365 for example). We suggest here that these plumes may be at the origin of the strong upper envelope of the gap observed in the PVD of some galaxies. We must stress, however, that a strong envelope is unusual; it is absent from the PVD of most galaxies. In the sample of BF99, only 2 galaxies out of 17 present clear indications of such a feature. This seems to be in agreement with the explanation presented above, as strong “plumes” are not often observed. ### 5.3 Limitations of the Models When using the bar diagnostics developed in this paper to interpret observational data, one has to take into account the fact that the PVDs were calculated using the gas density in the simulations and not the strength of a given gaseous emission line, which is what one usually observes. Physical conditions in the gas vary across the simulated disks, and will lead to different excitation mechanisms dominating in different parts of the disks. Regions of high density and low shear are likely to have higher star formation rates and thus more photoionised gas than elsewhere.
Conversely, the gas in or near shocks, such as the nuclear spiral, will be mainly shock-excited (see Binette, Dopita, & Tuohy 1985; Dopita & Sutherland 1996), with possible star formation depending on the shear. Because different excitation mechanisms lead to different emission line ratios, the relative amplitude of the various components of a PVD (e.g. the intensity of the signature of the outer parts of the disk versus that of the nuclear spiral) will depend on the emission line used in the observations. For example, Bureau & Freeman (1997) and BF99 found that, for many objects, the signature of the nuclear spiral was very strong in the \[N II\] $`\lambda `$6548,6584 lines but was almost absent in H$`\alpha `$, probably indicating that it is shock-excited (the other components of the PVDs showed \[N II\] $`\lambda `$6584/H$`\alpha `$ ratios typical of H II regions). The PVDs were not corrected for stellar absorption, however, so these results should be interpreted with caution. Nevertheless, the PVDs produced should only be used as a guide when interpreting kinematical data, and one should have a basic understanding of the mechanisms involved in the production of a given line before using the PVDs for comparison purposes. We stress that the morphology of the PVDs (the multiple components) is a more significant bar diagnostic than the distribution of intensity (the relative amplitude of the components). Conversely, under certain assumptions about the physical conditions in the gas, it is possible to use the observed emission line ratios (e.g. H$`\alpha `$/H$`\beta `$) to measure the extinction due to dust in the data. This could prove useful to interpret asymmetries present in observed PVDs (see § 4 for a more complete discussion of likely dust effects). Contrary to the “building blocks” approach of Paper I, which used combinations of periodic orbit families to model the structure and kinematics of barred galaxies, the hydrodynamical simulations presented here inherently take into account the collisional nature of the gas, so the kinematics of the gaseous component is more accurately modeled. Approximations to the gas properties are nevertheless necessary, and we recall that the gas was treated as ideal, isothermal, infinitely thin, and non-self-gravitating. How much do our results depend on these assumptions or, in other words, how model dependent are they? The interstellar medium is a complicated multi-phase mixture, which can only be described schematically, particularly in non-local studies covering an object the size of a galaxy. Two different approaches have been developed so far. In the first one, the gas is treated as ballistic particles which, when they collide, lose energy according to pre-specified recipes (e.g. Miller, Prendergast, & Quirk 1970; Schwarz 1981, 1984; Combes & Gerin 1985). Unfortunately, the results of these simulations can depend on the adopted collision law (Guivarch & Athanassoula 1999), at least as far as the shocks in the barred region are concerned. In the second approach, a hydrodynamical treatment is used, solving the Euler equations with the help of a grid (e.g. van Albada & Roberts 1981; van Albada, van Leer, & Roberts 1982; Mulder 1986; Piner, Stone, & Teuben 1995), Smoothed Particle Hydrodynamics (SPH; Lucy 1977; Gingold & Monaghan 1977; Hernquist & Katz 1989; etc), or beams (Sanders & Prendergast 1974).
A number of these studies assume an isothermal equation of state, following the model of Cowie (1980), who calculated the equation of state for an ensemble of clouds and found it could be described as isothermal, provided the clouds have an equilibrium mass spectrum. Comparing the results of all these schemes is beyond the scope of this paper. In all approaches, the more reliable codes produce shocks in the bar region which are more or less offset from the bar major axis towards its leading sides. These shocks result in an inflow of gas and a substantial lowering of the density in the outer bar region. Although the precise location, shape, and persistence of the shocks differ, these are relatively small effects compared to the fact that we weight by density rather than by the strength of an emission line. The main reason for adopting the present hydrodynamical code is that it gave very good results in many previous studies (see A92b). Because dissipation ensures that the gas layer in spiral galaxies remains thin, the two-dimensional nature of the simulations should not be a factor limiting the applicability of the results to the interpretation of real data. Furthermore, vertical motions have no direct effect on the PVDs produced, as the motion is perpendicular to the line-of-sight. It is somewhat harder to gauge the consequences of the fact that we have ignored the gas self-gravity. Because our mass model best represents (barred) early-type spirals (see A92a), where the fraction of the total mass in gas is typically less than 10%, the contribution of the gas to the potential and large scale forces is negligible. Indeed, Lindblad, Lindblad, & Athanassoula (1996) found that including self-gravity in their model of NGC 1365 made very little difference to the global picture. It may well be, however, that near high density structures such as the nuclear spiral, the self-gravity of the gas is important. ## 6 Summary and Conclusions Our main aim in this paper, the second in a series, was to develop diagnostics to identify bars in edge-on spiral galaxies using the particular kinematics of the gaseous component of barred disks. To achieve this goal, we ran two-dimensional hydrodynamical simulations of the gas flow in the potential of a barred spiral galaxy mass model. We constructed position-velocity diagrams (PVDs) from those simulations, using an edge-on projection and various viewing angles with respect to the major axis of the bar. The presence of shocks and inflows in the simulations allowed us to develop better bar diagnostics than those presented in our first paper (Bureau & Athanassoula 1999), based on periodic orbit calculations. We analysed in detail two simulations which are prototypes of simulations for models with and without inner Lindblad resonances (which we associate with the existence of $`x_2`$ periodic orbits). We showed that, for models allowing $`x_2`$ orbits, the nuclear spiral which is created in the center of the simulations produces a strong inverted S-shaped signature in the PVDs. This signature reaches high radial velocities (compared to those in the outer parts of the simulations) when the bar is seen side-on, and relatively low velocities when the bar is seen end-on. The flow in the outer bar region (the entire bar region if no nuclear spiral is present) produces a parallelogram-shaped signature in the PVDs, being associated with the $`x_1`$ periodic orbits.
Because the flow is mostly along the bar in that region, the highest velocities are now reached for viewing angles close to the bar major axis. In the outer parts of the simulations, the flow is almost circular and produces a strong almost solid-body signature in the PVDs for all viewing angles. Shocks within the bar are present in most simulations and lead to a depletion of the gas in the region of the bar occupied by an $`x_1`$-like flow. Thus, if a nuclear spiral is present, a bar can easily be identified in an edge-on spiral galaxy, as there will be a gap in the PVD between the signature of the nuclear spiral and that of the outer parts of the galaxy. If there is no nuclear spiral, it may still be possible to detect a bar, but only with the help of photometry and/or stellar kinematics. The envelope of the signature of the nuclear spiral, and to a lesser extent that of the outer bar region, is most useful to determine the orientation of a bar with respect to the line-of-sight. It is nevertheless hard to discriminate between two viewing angles on either side of the bar. We showed that adding dust to the simulations helps break this degeneracy. We also produced PVDs for a range of simulations covering most of the fraction of parameter space likely to be occupied by real galaxies. These simulations can be used to constrain the mass distribution and bar properties of an observed system. In particular, the presence or absence of the signature of a nuclear spiral in a PVD places strong constraints on the values the parameters of our mass model may take. The nuclear spiral can be absent for high bar axial ratios and/or bar quadrupole moments, and for low Lagrangian radii and/or central concentrations. We thank K. C. Freeman, A. Bosma, and A. Kalnajs for comments on the manuscript and J.-C. Lambert for his computer assistance. E. A. thanks G. D. Van Albada for making available to her his version of the FS2 code. M. B. acknowledges the support of an Australian DEETYA Overseas Postgraduate Research Scholarship and a Canadian NSERC Postgraduate Scholarship during the conduct of this research. M. B. would also like to thank the Observatoire de Marseille for its hospitality and support during a large part of this project.
# Delayed choice for entanglement swapping ASHER PERES Department of Physics, Technion—Israel Institute of Technology, 32 000 Haifa, Israel Abstract. Two observers (Alice and Bob) independently prepare two sets of singlets. They test one particle of each singlet along an arbitrarily chosen direction and send the other particle to a third observer, Eve. At a later time, Eve performs joint tests on pairs of particles (one from Alice and one from Bob). According to Eve’s choice of test and to her results, Alice and Bob can sort into subsets the samples that they have already tested, and they can verify that each subset behaves as if it consisted of entangled pairs of distant particles, that have never communicated in the past, even indirectly via other particles. To appear in a special issue of Journal of Modern Optics. 1. Introduction and notations Since the early days of quantum mechanics, it has been known that after two quantum systems interact, their joint state is usually entangled and it may remain so even after these systems separate and are far away from each other. However, a direct interaction is not necessary in order to produce entanglement between distant systems. For example, any existing entanglement between two particles can be teleported to other, distant particles, by performing suitable joint measurements and broadcasting their results as classical information. Protocols known as entanglement swapping have recently been proposed and experimentally realized. In the present article, I propose an even more paradoxical experiment, where entanglement is produced a posteriori, after the entangled particles have been measured and may no longer exist. To simplify the discourse and the notations, the experiment will be described in terms of spin-$`\frac{1}{2}`$ particles. (In the real world, it would be easier to use polarized photons. The spin sphere then has to be understood as a Poincaré sphere, and the argument is exactly the same.) The eigenstates of $`\sigma _z`$ will be denoted as $$\left(\genfrac{}{}{0pt}{}{1}{0}\right)\leftrightarrow |0\rangle \qquad \text{and}\qquad \left(\genfrac{}{}{0pt}{}{0}{1}\right)\leftrightarrow |1\rangle .$$ (1) In that basis, the other spin components have the following nonvanishing matrix elements: $$\langle 0|\sigma _x|1\rangle =\langle 1|\sigma _x|0\rangle =1,\qquad \langle 0|\sigma _y|1\rangle =-\langle 1|\sigma _y|0\rangle =-i.$$ (2) We shall also need the spin components along directions at 45° from the $`x`$ and $`y`$ axes: $$\sigma _\pm =(\sigma _x\pm \sigma _y)/\sqrt{2}.$$ (3) These give $$\langle 0|\sigma _\pm |1\rangle =\langle 1|\sigma _{\mp }|0\rangle =(1\mp i)/\sqrt{2}.$$ (4) For a pair of particles, it is convenient to define “Bell states” $$\mathrm{\Phi }^\pm =(|00\rangle \pm |11\rangle )/\sqrt{2}\qquad \text{and}\qquad \mathrm{\Psi }^\pm =(|01\rangle \pm |10\rangle )/\sqrt{2}.$$ (5) Note that $`\mathrm{\Psi }^{-}`$ is the singlet state. In the proposed experiment, two distant observers, conventionally called Alice and Bob, independently prepare two sets of singlets, whose states are denoted as $`\mathrm{\Psi }_A^{-}`$ and $`\mathrm{\Psi }_B^{-}`$. Alice and Bob keep one particle of each singlet and send the other particle to a third observer, Eve, who also arranges them in pairs (one from Alice and one from Bob). The three observers keep records specifying to which pair each particle belongs.
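Since the sign conventions above carry all the weight in what follows, here is a minimal numerical check of the matrix elements of Eqs. (2) and (4) — a sketch of our own, not part of the original paper:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # the basis of Eq. (1)
ket1 = np.array([0, 1], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sp = (sx + sy)/np.sqrt(2)   # sigma_+
sm = (sx - sy)/np.sqrt(2)   # sigma_-

# Eq. (2): <0|sx|1> = <1|sx|0> = 1 and <0|sy|1> = -<1|sy|0> = -i
# (the kets are real here, so bras are plain transposes)
print(ket0 @ sx @ ket1, ket1 @ sx @ ket0)   # (1+0j) (1+0j)
print(ket0 @ sy @ ket1, ket1 @ sy @ ket0)   # -1j  1j
# Eq. (4): <0|sigma_+-|1> = <1|sigma_-+|0> = (1 -+ i)/sqrt(2)
print(ket0 @ sp @ ket1, ket1 @ sm @ ket0)   # both (1-1j)/sqrt(2)
print(ket0 @ sm @ ket1, ket1 @ sp @ ket0)   # both (1+1j)/sqrt(2)
```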
The joint state of a pair of singlets can thus be written as $$\mathrm{\Psi }_A^{-}\mathrm{\Psi }_B^{-}=(\mathrm{\Psi }_E^+\mathrm{\Psi }^+-\mathrm{\Psi }_E^{-}\mathrm{\Psi }^{-}-\mathrm{\Phi }_E^+\mathrm{\Phi }^++\mathrm{\Phi }_E^{-}\mathrm{\Phi }^{-})/2,$$ (6) where the subscript $`E`$ refers to the particles that are sent to Eve, and the symbols $`\mathrm{\Phi }^\pm `$ and $`\mathrm{\Psi }^\pm `$ without a subscript refer to the particles that Alice and Bob keep. Note the minus signs in Eq. (6). It would be possible to eliminate them by using, instead of the Bell basis, the “magic basis” where $`\mathrm{\Phi }^{-}`$ and $`\mathrm{\Psi }^+`$ are multiplied by $`i`$. 2. Analysis of the results of measurements Alice and Bob now measure the values of spin components (along arbitrary directions) of the particles that they kept. For example, Alice measures spin component $`\sigma _x`$ or $`\sigma _y`$ (randomly chosen) of her particles, and likewise Bob measures $`\sigma _\pm `$ on his particles. These components were chosen because Alice and Bob want to test Bell inequalities, in the form given by Clauser, Horne, Shimony, and Holt (CHSH). The results that Alice and Bob get, namely $`\pm 1`$, are of course completely random and uncorrelated. At a later time, Eve performs joint tests on her pairs of particles. Just as in the teleportation scenario, she performs Bell measurements and she informs the other observers of the results that she found. Using that information, Alice and Bob sort the records of their measurements into four subsets, according to whether Eve found $`\mathrm{\Phi }^\pm `$ or $`\mathrm{\Psi }^\pm `$. It then follows from Eq. (6) that, in each subset, the state of the particles that Alice and Bob kept was the same as the state later found by Eve. Thus, in each subset, there are nonvanishing expectation values $`\langle \mathrm{\Phi }^\pm |\sigma _a\sigma _b|\mathrm{\Phi }^\pm \rangle `$ and $`\langle \mathrm{\Psi }^\pm |\sigma _a\sigma _b|\mathrm{\Psi }^\pm \rangle `$. Explicitly, owing to Eqs. (2) and (4), we have $$\langle \mathrm{\Psi }^+|\sigma _x\sigma _\pm |\mathrm{\Psi }^+\rangle =-\langle \mathrm{\Psi }^{-}|\sigma _x\sigma _\pm |\mathrm{\Psi }^{-}\rangle =1/\sqrt{2},$$ (7) $$\langle \mathrm{\Psi }^+|\sigma _y\sigma _\pm |\mathrm{\Psi }^+\rangle =-\langle \mathrm{\Psi }^{-}|\sigma _y\sigma _\pm |\mathrm{\Psi }^{-}\rangle =\pm 1/\sqrt{2},$$ (8) $$\langle \mathrm{\Phi }^+|\sigma _x\sigma _\pm |\mathrm{\Phi }^+\rangle =-\langle \mathrm{\Phi }^{-}|\sigma _x\sigma _\pm |\mathrm{\Phi }^{-}\rangle =1/\sqrt{2},$$ (9) $$\langle \mathrm{\Phi }^+|\sigma _y\sigma _\pm |\mathrm{\Phi }^+\rangle =-\langle \mathrm{\Phi }^{-}|\sigma _y\sigma _\pm |\mathrm{\Phi }^{-}\rangle =\mp 1/\sqrt{2}.$$ (10) Thus, in each subset, one of the following CHSH inequalities is violated (and the others are satisfied): $$-2\le \langle \sigma _x\sigma _+\rangle +\langle \sigma _x\sigma _{-}\rangle +\langle \sigma _y\sigma _+\rangle -\langle \sigma _y\sigma _{-}\rangle \le 2,$$ (11) $$-2\le \langle \sigma _x\sigma _+\rangle +\langle \sigma _x\sigma _{-}\rangle -\langle \sigma _y\sigma _+\rangle +\langle \sigma _y\sigma _{-}\rangle \le 2.$$ (12) For the subset associated with $`\mathrm{\Psi }_E^{-}`$, it is the left hand side of Eq. (11) that is violated; for $`\mathrm{\Psi }_E^+`$, it is the right hand side. For the subset associated with $`\mathrm{\Phi }_E^{-}`$, it is the left hand side of Eq. (12) that is violated; and for $`\mathrm{\Phi }_E^+`$, it is the right hand side. In other words, Alice and Bob find experimentally that each one of the four postselected subsets produces statistical results identical to those arising from maximally entangled pairs.
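These identities are easy to verify numerically. The sketch below (ours, purely illustrative) rebuilds the four-particle state, recovers the coefficients of Eq. (6) — up to a global sign fixed by our tensor-ordering convention — and evaluates the CHSH combinations of Eqs. (11) and (12) on each Bell state, obtaining $`\pm 2\sqrt{2}`$ in the predicted pattern:

```python
import numpy as np

kron = np.kron
k0 = np.array([1, 0], dtype=complex)
k1 = np.array([0, 1], dtype=complex)
s2 = np.sqrt(2)
bell = {                                    # Bell states of Eq. (5)
    "Phi+": (kron(k0, k0) + kron(k1, k1))/s2,
    "Phi-": (kron(k0, k0) - kron(k1, k1))/s2,
    "Psi+": (kron(k0, k1) + kron(k1, k0))/s2,
    "Psi-": (kron(k0, k1) - kron(k1, k0))/s2,
}

# |Psi-_A>|Psi-_B> with factor ordering (a1, a2, b1, b2); Alice and Bob
# keep (a1, b1), Eve receives (a2, b2).
state = kron(bell["Psi-"], bell["Psi-"]).reshape(2, 2, 2, 2)
state = state.transpose(1, 3, 0, 2).reshape(16)   # reorder to (a2, b2, a1, b1)

for name in bell:   # coefficients of magnitude 1/2, signs as in Eq. (6)
    amp = np.vdot(kron(bell[name], bell[name]), state)
    print(name, np.round(amp, 3))   # global sign set by the ordering choice

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sp, sm = (sx + sy)/s2, (sx - sy)/s2               # sigma_+ and sigma_-

def ev(psi, a, b):                                 # <psi| a (x) b |psi>
    return np.vdot(psi, kron(a, b) @ psi).real

for name, psi in bell.items():
    chsh11 = ev(psi, sx, sp) + ev(psi, sx, sm) + ev(psi, sy, sp) - ev(psi, sy, sm)
    chsh12 = ev(psi, sx, sp) + ev(psi, sx, sm) - ev(psi, sy, sp) + ev(psi, sy, sm)
    # exactly one of the two combinations equals +-2*sqrt(2) ~ +-2.828
    print(name, round(chsh11, 3), round(chsh12, 3))
```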
3. The paradox There can be no doubt that the particles that were independently produced and tested by Alice and Bob were uncorrelated and therefore unentangled. Each one of these particles may well have disappeared (e.g., been absorbed) before the next particle was produced, and before Eve performed her tests. Only the records kept by the three observers remain, to be examined objectively. How can the appearance of entanglement arise in these circumstances? The point is that it is meaningless to assert that two particles are entangled without specifying in which state they are entangled, just as it is meaningless to assert that a quantum system is in a pure state without specifying that state. If this simple rule is forgotten, or if we attempt to attribute an objective meaning to the quantum state of a single system, curious paradoxes appear: quantum effects mimic not only instantaneous action-at-a-distance but also, as seen here, influence of future actions on past events, even after these events have been irrevocably recorded. Note in particular that even after Alice and Bob have recorded the results of all their measurements, Eve still has the freedom of deciding which experiment she will perform. It can be a Bell measurement as proposed above, but the latter can also be preceded by arbitrary bilateral rotations of the two spin-$`\frac{1}{2}`$ particles (this corresponds to an arbitrary real orthogonal transformation of the magic basis). Eve can also perform an incomplete Bell measurement \[10–12\], or any other measurement she decides, represented by any positive operator valued measure (POVM) of her choice. The only demand is that at least one of her outcomes corresponds to a definite entangled state of the other pair of particles, namely those retained by Alice and Bob (though not necessarily a maximally entangled state, nor even a pure state of these particles). Proof: Let $`\mathbb{1}_E`$ and $`\mathbb{1}_{AB}`$ denote the unit matrices (of order 4) for Eve’s pairs and for the particles kept by Alice and Bob. Let the elements of Eve’s POVM be denoted as $`E^\mu `$ (satisfying $`\sum _\mu E^\mu =\mathbb{1}_E`$). Then the joint state of Alice and Bob’s particles, that corresponds to outcome $`\mu `$ registered by Eve, is the partial trace $$\rho _{AB}^\mu =\text{Tr}_E[\rho (E^\mu \otimes \mathbb{1}_{AB})],$$ (13) where $`\rho `$ is the initial state, given by Eq. (6). Note that $`\rho _{AB}^\mu `$ is not normalized: its trace is the probability that Eve observes outcome $`\mu `$. The above relationship readily follows from the fact that if $`\{F^k\}`$ is any POVM chosen by Alice and Bob, with $`\sum _kF^k=\mathbb{1}_{AB}`$, then the probability of the joint result $`\mu k`$ is $`\text{Tr}[\rho (E^\mu \otimes F^k)]`$. It is not even necessary for Alice and Bob to know which experiments Eve will do. If they know nothing, they just measure $`\sigma _x`$, $`\sigma _y`$, or $`\sigma _z`$ (randomly chosen) on each one of their particles, instead of the specific components that appear in Eqs. (2) and (4). Later, they will learn from Eve that a definite subset of her experiments ascertained the existence of a definite entangled state of their particles. Alice and Bob don’t even have to know which state this was. They will simply set apart the results of their prior measurements for the corresponding subset of particles and compute the correlations $`\langle \sigma _a\sigma _b\rangle `$. From the latter, they can obtain the density matrix $`\rho `$ of the pairs of particles in that subset.
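A minimal sketch (ours; the correlation table shown is the idealised one for the singlet subset) of this last step — building $`\rho `$ from the measured correlations and then applying the partial-transposition test described next:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, sx, sy, sz]

def rho_from_correlations(E):
    """E[a, b] = <sigma_a sigma_b>, with index 0 standing for the identity
    (so E[0, 0] = 1 by normalization); standard two-qubit tomography."""
    rho = np.zeros((4, 4), dtype=complex)
    for a in range(4):
        for b in range(4):
            rho += E[a, b]*np.kron(paulis[a], paulis[b])/4.0
    return rho

def min_pt_eigenvalue(rho):
    # partial transpose on the second particle; a negative eigenvalue
    # certifies entanglement for a pair of spin-1/2 particles
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(rho_pt).min()

# ideal correlations for a subset Eve tags as the singlet:
E = np.zeros((4, 4)); E[0, 0] = 1; E[1, 1] = E[2, 2] = E[3, 3] = -1
print(min_pt_eigenvalue(rho_from_correlations(E)))   # -0.5 < 0: entangled
```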
Subjecting that density matrix to a partial transposition, they will find that the latter has a negative eigenvalue, thus verifying that the corresponding subset of particles, if it still existed, would have an entangled state. It is obvious that from the raw data collected by Alice and Bob it is possible to select in many different ways subsets that correspond to entangled pairs. The only role that Eve has in this experiment is to tell Alice and Bob how to select such a subset. Clearly, Eve has to be honest: if she does not perform her measurements in the correct way and if she reports fake data, Alice and Bob will not select good subsets, and then their analysis will readily expose Eve’s misbehaviour. In summary, there is nothing paradoxical in the experiments outlined above. However, one has to clearly understand quantum mechanics and to firmly believe in its correctness to see that there is no paradox. Acknowledgments I am grateful to Chris Fuchs for improving the presentation of this article. This work was supported by the Gerard Swope Fund and the Fund for Encouragement of Research.
# 1/8 Doping Anomalies and Oxygen Vacancies in Underdoped Superconducting Cuprates ## Introduction There is mounting experimental evidence that novel charge- and spin-ordered phases are generic to underdoped cuprates \[Cuprates\]. The doped holes in Nd-doped La<sub>1-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> segregate into periodic stripes that separate antiferromagnetically ordered, hole-poor domains. Lattice distortions pin these stripes, and this pinning is most effective for planar hole concentrations near $`p`$=1/8 where the stripe modulation wavelength is commensurate with the lattice. In the absence of pinning, stripe modulations are presumed to be fluctuating and/or disordered, and this is the emerging picture for La<sub>1-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> (La-214) and YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> (Y-123) \[DynamicStripes\]. We have recently demonstrated, through measurements of thermal conductivity ($`\kappa `$) \[CohnHg\], that both Y-123 and HgBa<sub>2</sub>Ca<sub>m-1</sub>Cu<sub>m</sub>O<sub>2m+2+δ</sub> \[Hg-12($`m`$-1)$`m`$, $`m`$=1, 2, 3\] exhibit doping anomalies near $`p`$=1/8 that can be attributed to the presence of localized charge and associated lattice distortions. Here we discuss the oxygen doping behavior of the anomalies in Hg cuprates and present new results on vacancy-free Ca-doped YBa<sub>2</sub>Cu<sub>4</sub>O<sub>8</sub> (Y-124) that suggest elastic properties may reflect stripe dynamics and stripe fragments may pin near oxygen-vacancy clusters. The $`p`$=1/8 features (Fig. 1) are evident in the doping behavior of the normal-state thermal resistivity, $`W=1/\kappa `$, and the normalized change in temperature derivative of $`\kappa `$ that occurs at $`T_c`$, $`\mathrm{\Gamma }\equiv d(\kappa ^s/\kappa ^n)/dt|_{t=1}`$ \[$`t=T/T_c`$ and $`\kappa ^s(\kappa ^n)`$ is the thermal conductivity in the superconducting (normal) state\]. For Y-123, there is a compelling correlation of $`W(p)`$ and $`\mathrm{\Gamma }(p)`$ with the doping behavior of anomalous <sup>63</sup>Cu NQR spectral weight \[HamScal\], attributed to localized holes, and the electronic specific heat jump \[Loram\], $`\mathrm{\Delta }\gamma /\gamma `$, respectively \[crosses in Figs. 1 (a) and (b)\]. Thus $`W`$ probes lattice distortions associated with localized holes, and $`\mathrm{\Gamma }`$ the change in low-energy spectral weight induced by superconductivity. Since both $`\mathrm{\Gamma }`$ and $`\mathrm{\Delta }\gamma /\gamma `$ provide bulk measures of the superfluid volume, it is significant that the muon spin rotation ($`\mu `$SR) depolarization rate \[$`\sigma _0`$ in Fig. 1 (b)\], proportional to the superfluid density, exhibits no anomalous behavior near 1/8 doping. The $`\mu `$SR signal originates in regions of the specimen where there is a flux lattice. The apparent discrepancy between $`\sigma _0`$ and $`\mathrm{\Gamma }`$ or $`\mathrm{\Delta }\gamma /\gamma `$ is resolved if the material is inhomogeneous, composed of non-superconducting clusters embedded in a superconducting network. The suppression of $`\mathrm{\Gamma }`$ and $`\mathrm{\Delta }\gamma /\gamma `$ below the scaled $`\sigma _0`$ curve in Fig. 1 (a) is then a measure of the non-superconducting volume fraction. Taken together, the $`W`$ and $`\mathrm{\Gamma }`$ data imply that the non-superconducting regions are comprised of localized holes and associated lattice distortions, i.e. polarons. It is plausible that these hole-localized regions are stripe domains akin to those inferred from neutron scattering \[Cuprates\].
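For concreteness, the two diagnostics can be extracted from measured $`\kappa (T)`$ curves roughly as follows; this is a sketch of our own, with hypothetical array names and a simple finite-difference choice, not the analysis code used for Fig. 1:

```python
import numpy as np

def diagnostics(T, kappa_s, kappa_n, Tc):
    """W = 1/kappa_n at Tc and Gamma = d(kappa_s/kappa_n)/dt at t = 1,
    with t = T/Tc; kappa_s is the measured curve through the transition,
    kappa_n the normal-state curve extrapolated from above Tc."""
    t = T/Tc
    ratio = kappa_s/kappa_n
    i = np.argmin(np.abs(t - 1.0))                        # point closest to Tc
    Gamma = (ratio[i] - ratio[i - 1])/(t[i] - t[i - 1])   # one-sided slope at t = 1
    W = 1.0/kappa_n[i]                                    # thermal resistivity at Tc
    return W, Gamma
```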
That these regions produce lattice thermal resistance implies they are static on the timescale of the average phonon lifetime, estimated as a few ps in the ab-plane of Y-123 \[CohnHg\]. That $`T_c`$ is not substantially suppressed near $`p`$=1/8 implies that these regions do not percolate, and thus the picture is one of small clusters of localized holes and their associated lattice distortions, perhaps no larger than the stripe unit cell ($`2a\times 8a`$, where $`a`$ is the lattice constant \[Cuprates\]), separated by a distance comparable to the phonon mean free path (about $`100\mathrm{\AA }\approx 25a`$ at 100 K). For the Hg materials the $`p`$=1/8 enhancement of $`W`$ and suppression of $`\mathrm{\Gamma }`$ is most prominent in single-layer Hg-1201, less so in double-layer Hg-1212, and absent or negligible in three-layer Hg-1223. This trend follows that of the oxygen vacancy concentration: a single HgO<sub>δ</sub> layer per unit cell contributes charge to $`m`$ planes in Hg-12($`m`$-1)$`m`$ so that the oxygen vacancy concentration, $`1-\delta `$, increases with decreasing $`m`$ \[Occupancy\]. The absence of suppression in $`\mathrm{\Gamma }`$ near 1/8 doping for Hg-1223 \[Fig. 1 (d)\] suggests that this material has sufficiently few localized-hole domains that their effects in $`W`$ and $`\mathrm{\Gamma }`$ are unobservable. Thus we employ the Hg-1223 $`W(p)`$ data as a reference and plot the differences for the other two compounds in Fig. 1 (c). Comparing Figs. 1 (c) and (d) we see that for both Hg-1201 and Hg-1212 $`W`$ is enhanced and $`\mathrm{\Gamma }`$ is suppressed relative to values for Hg-1223 in common ranges of $`p`$, with maximal differences near $`p`$=1/8. ## Stripes Alter the Phonon Dispersion? That 1/8 doping anomalies are observed in both Y-123 and Hg cuprates suggests they are a generic feature of underdoped CuO<sub>2</sub> planes. In this context the doping behavior of $`\kappa `$ for Hg-1223 \[Fig. 2 (a)\] is of interest since it presumably approximates the phase behavior in the absence of localized holes. We expect the electronic contribution, $`\kappa _e`$, to increase smoothly with increasing $`p`$. The upturn in $`\kappa `$ above optimal doping ($`p`$=0.16) is presumably attributable to this rising $`\kappa _e`$. The most reliable estimate of $`\kappa _e`$ in the cuprates comes from thermal Hall conductivity measurements \[Krishana\] on Y-123, which imply $`\kappa _e/\kappa \approx 0.1`$ for in-plane heat conduction near optimal doping. The data in Figs. 1 (a) and (b) suggest that this ratio is roughly the same in polycrystals, thus motivating the dotted and dashed curves in Fig. 2 (a) as an educated guess for the electronic and lattice terms, respectively, in Hg-1223. The lattice conductivity, $`\kappa _L=\kappa -\kappa _e`$, predominates in the underdoped regime and is peaked near $`p`$=1/8. There is experimental support for the proposal that this peak in $`\kappa `$ is associated with doping-dependent changes in the phonon dispersion which are generic to underdoped cuprates. Figure 2 (b) shows the behavior of the normal-state, transverse shear elastic constant (proportional to the square of the sound velocity) for single-crystal La-214 \[LSCOelastic\], the only material for which doping-dependent measurements for all the main symmetry directions have been reported, to our knowledge. A substantial hardening of the lattice in the underdoped regime, with a maximum near $`p=x\approx `$ 1/8, was observed for all symmetries, indicating that the changes with doping are systemic.
It is possible that the phase behavior of the elastic constants for La-214 and $`\kappa _L`$ for Hg-1223 reflect a renormalization of the lattice dispersion due to changes in stripe dynamics with doping. At $`p`$=1/8 the stripes are maximally commensurate with the lattice, and their fluctuations should be minimal. Enhanced fluctuations are to be expected at higher doping, due to the destabilizing role of repulsive interactions in stripes that neutron scattering results (Yamada et al. \[DynamicStripes\]) suggest are charge compressed. For $`p<`$1/8, the cluster spin-glass state observed by $`\mu `$SR studies \[Niedermayer\] is characterized by increasing magnetic disorder with decreasing $`p`$ in the range $`0.08<p<0.12`$, possibly associated with increasing disorder \[Emery\] in the stripe period. It is plausible that disorder in the stripe system at both higher and lower doping about $`p`$=1/8 induces a softening of the lattice that is reflected in Fig. 2. Theoretical investigations of elastic coupling to the stripe system would certainly be of interest. ## Oxygen Vacancies and 1/8 Anomalies Returning now to the enhancement of $`W`$ and suppression of $`\mathrm{\Gamma }`$ near $`p`$=1/8, our measurements suggest these phenomena reflect stripe pinning (or reduced stripe fluctuations) associated with oxygen vacancy clusters. Raman studies of defect modes in the Hg materials \[HgRaman\] provide a measure of the density of vacancy clusters. The 590 cm<sup>-1</sup> Raman mode in Hg cuprates is attributed to c-axis vibrations of apical oxygen in the presence of oxygen vacancies on each of the four nearest-neighbor sites in the HgO<sub>δ</sub> layers. The amounts by which the thermal resistivity and $`\mathrm{\Gamma }`$ values of Hg-1201 and Hg-1212 differ from those of Hg-1223 at $`p`$=1/8 both correlate well with the integrated oscillator strength of this vibrational mode, normalized by that of the 570 cm<sup>-1</sup> mode (Fig. 3), the latter attributed to apical vibrations in the presence of fewer than four vacancies, and common in the spectra \[HgRaman\] of all three Hg materials. We infer that oxygen-vacancy clusters, of size four or more in the Hg cuprates, can pin a stripe fragment. Supporting the role of oxygen vacancies in the observed 1/8 features for Y-123 are our recent measurements on oxygen-stoichiometric Y<sub>1-x</sub>Ca<sub>x</sub>-124 \[CohnYCa124\]. The $`\mathrm{\Gamma }`$ values for $`x`$=0, 0.10, 0.15 are plotted in Fig. 1 (b). There is no evidence for suppression of $`\mathrm{\Gamma }`$ near 1/8 doping, and the data follow the scaled $`\mu `$SR curve quite well. Within the context of the interpretation we have outlined, we conclude that Ca substitution for Y (at the level of $`\sim `$15%) is not as effective in stripe pinning as are oxygen vacancy clusters. It remains to be determined by what mechanism these clusters induce pinning. ## Acknowledgements This work was supported by NSF Grant No. DMR-9631236.
# Is there a symmetry between absorption and amplification in disordered media? Recent observations of laser-like emission from dye solutions containing TiO<sub>2</sub> nanoparticles have stimulated intensive theoretical efforts to investigate the properties of disordered media which are optically active. From considerations of enhanced optical paths through multiple scattering, random systems are expected to possess a reduced gain threshold for lasing. Correspondingly, one would expect the transmission in disordered systems to be enhanced with gain. The longer the system, the larger the enhancement. Surprisingly, numerical calculations based on time-independent wave equations showed that for large systems the wave propagation is still attenuated, indicating a localization of waves even with gain. The rate of the exponential attenuation is the same as if the system were absorbing. Such a symmetry between absorption and amplification was subsequently shown to hold for the time-independent wave equation. Intuitively one would expect that the presence of amplification should facilitate wave propagation, not suppress it, even in disordered systems. The peculiar result of reduced transmission in gain media is also absent when wave propagation in disordered media is described by time-dependent diffusion equations, which always predict an increased output and a gain threshold above which the system becomes unstable. The diffusion equation neglects the phase coherence of the wave and is adequate only when the wavelength is much smaller than the mean free path. Thus it was proposed that the paradoxical phenomenon may indicate enhanced localization due to the amplification of coherent backscattering. However, amplification of backscattering does not necessarily imply a reduction in transmission, since no conservation of the total photon flux is required in gain media. To fully understand the origin of this non-intuitive result, we will examine the validity of the solutions derived from the time-independent wave equation, which has been commonly employed in describing wave propagation in active media. Linearized time-independent wave equations with a complex dielectric constant have been successfully utilized to find lasing modes by locating the poles in the complex frequency plane and to investigate the spontaneous emission noise below the lasing threshold in distributed feedback semiconductor lasers. Such equations are known to be inadequate to describe the actual lasing phenomena due to their simplicity in dealing with the interactions between radiation and matter. However, it is generally believed that the time-independent equation should suffice to describe the wave propagation in amplifying media, before any oscillations occur. We will unambiguously show that the so-called symmetry between amplification and absorption is an artifact due to the unphysical assumption of a finite output in solving the wave equations. We show that for each system, there is a frequency-dependent gain threshold above which both the total transmission and the total reflection become divergent. Solving the time-independent wave equations by incorrectly assuming a fixed output leads to unphysical solutions that do not correspond to the true behavior of the system. To demonstrate our point, we take the simplest example of a uniform active medium sandwiched between two mirrors as feedback (see the inset in Fig. 1), the classical Fabry-Perot setup.
Wave propagation within the active medium is simply described with the following phenomenological wave equation, $$\frac{d^2E(z)}{dz^2}+\frac{\omega ^2}{c^2}\epsilon (z)E(z)=0,$$ (1) where $`E(z)`$ is the electric field and the dielectric constant $`\epsilon (z)=\epsilon ^{\prime }(z)-i\epsilon ^{\prime \prime }(z)`$, with the imaginary part signifying amplification ($`\epsilon ^{\prime \prime }>0`$) or absorption ($`\epsilon ^{\prime \prime }<0`$). We point out that in electromagnetic theory, the imaginary part of the dielectric constant is proportional to the conductivity of the material and thus cannot be negative. The negative dielectric constant is, strictly speaking, only an effective way to introduce coherent amplification. Complex potentials, known as optical potentials, have long been employed in nuclear physics to describe nuclear scattering processes. The transmission and the reflection amplitude can be easily obtained by solving Eq. (1) to yield: $$t=\frac{t_1t_2e^{ikL}}{1-r_1r_2e^{2ikL}}$$ (2) where $`t_1=2k/(k+k_0)`$, $`t_2=2k_0/(k+k_0)`$, and $`r_1=r_2=(k-k_0)/(k+k_0)`$ are the transmission and reflection coefficients at the left and right interfaces, respectively. $`k_0=\sqrt{\epsilon _0}\frac{2\pi }{\lambda }`$ and $`k=k^{\prime }-ik^{\prime \prime }=\sqrt{\epsilon ^{\prime }-i\epsilon ^{\prime \prime }}\frac{2\pi }{\lambda }`$ are the wavevectors outside and inside the system, and $`L`$ is the distance between the mirrors (system size). The oscillation condition for lasing is correctly given by $`1-r_1r_2e^{2ikL}=0`$, at which both the transmission and reflection coefficients diverge. However, Eq. (2) also predicts an exponential decrease of the transmission coefficient for large system sizes. In fact, the term in the numerator with $`e^{ikL}`$ increases exponentially as the length of the system $`L`$ increases because of the gain, but the term in the denominator with $`e^{2ikL}`$ increases even faster, making the transmission coefficient decay asymptotically as $`|t|\sim |e^{-ikL}|=e^{-k^{\prime \prime }L}`$. This is clearly shown in Fig. 1, where we plot $`ln(T)`$ versus $`L`$ for a one-layer system. Notice that for large $`L`$, $`ln(T)`$ decreases as $`L`$ increases, despite the fact that we have gain here. Gain effectively becomes loss at large lengths! Remember that the system is homogeneous, so disorder is definitely not responsible for this strange behavior. Thus the inhibition of wave propagation for large systems is clearly not a result of amplification in backscattering. A clear picture of what is going on can be obtained from the path integral method. In such an approach, the total transmission coefficient can be obtained by adding the paths of the successively reflected and transmitted rays. In doing so, we obtain $$t=t_1t_2e^{ikL}[1+(r_1r_2e^{2ikL})+(r_1r_2e^{2ikL})^2+\mathrm{}]$$ (3) where the first term represents the direct transmission of the incoming wave, and the second term represents the wave which was reflected first by $`r_2`$ at the right interface and then by $`r_1`$ at the left interface and subsequently transmitted through. More terms from sequences of multiple transmissions and reflections from the two mirrors follow. It is clear that Eq. (3) represents an infinite series whose sum reproduces Eq. (2), provided that the following condition is met, $$|r_1r_2e^{2ik^{\prime }L}e^{2k^{\prime \prime }L}|<1.$$ (4)
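To make the convergence issue concrete, here is a short numerical check — a sketch of our own, not from the paper — using the slab parameters quoted below for Fig. 2 ($`\lambda =800`$ nm, $`L=4300`$ nm, $`\epsilon ^{\prime }=9`$):

```python
import numpy as np

# Fabry-Perot slab with gain: compare the closed form, Eq. (2), with
# partial sums of the multiple-reflection series, Eq. (3).
lam, L, eps_p = 800e-9, 4300e-9, 9.0
for eps_pp in [0.10, 0.15]:                    # below and above the ~0.12 threshold
    k0 = 2*np.pi/lam
    k = np.sqrt(eps_p - 1j*eps_pp)*k0          # k = k' - i k'', with k'' > 0 (gain)
    t1, t2 = 2*k/(k + k0), 2*k0/(k + k0)
    r1 = r2 = (k - k0)/(k + k0)
    ratio = r1*r2*np.exp(2j*k*L)               # Eq. (4) requires |ratio| < 1
    t_closed = t1*t2*np.exp(1j*k*L)/(1 - ratio)
    partial = t1*t2*np.exp(1j*k*L)*sum(ratio**n for n in range(60))
    print(f"eps''={eps_pp}: |ratio|={abs(ratio):.3f}, "
          f"|t_closed|={abs(t_closed):.3f}, |partial sum|={abs(partial):.3e}")
```

Below the threshold of Eq. (4) the partial sums converge to the closed form of Eq. (2); above it they grow without bound, while Eq. (2) still returns a finite — and hence misleading — number.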
(2), provided that the following condition is met, $`|r_1r_2e^{2ik^{}L}e^{2k^{\prime \prime }L}|<1.`$ (4) When the condition given by Eq. (4) is violated, such as when the system size or the gain is large, the physical output represented by the sum diverges, even away from the oscillation pole. We note here that the sum on the right side of Eq. (3) includes the phases of all paths, so that it includes all the interference of the scattered waves. We also note that the results of Eq. (3) are consistent with time-dependent theory, because the different terms of Eq. (3) can be interpreted as the outputs at different times from a single incident wave, or as the output at the same time from a series of waves incident at different times. Viewed this way, the output of a system with gain much larger than the threshold will increase exponentially with time after a plane wave is incident, whereas the time-independent theory assumes a small output, as if the system were made of absorbing material. Based on the time-dependent Maxwell equations, the well-developed FDTD (finite-difference time-domain) method lets us see the clearly different time dependence of the transmission for the two kinds of systems. We choose a plane wave with $`\lambda =800`$ nm, incident from the left on the setup of Fig. 1 with $`L=4300`$ nm. The dielectric constant is taken to be $`\epsilon _0=1`$ outside and $`Re(\epsilon )=\epsilon ^{}=9`$ inside. The threshold of this system is $`\epsilon ^{\prime \prime }=0.12`$ from Eq. (4). Our numerical results show that, after a short transient, the output of the system settles to a stable value, which is the transmission of the system, provided the system has below-threshold gain or absorption. As long as neither the gain nor the length exceeds the threshold, the results of the time-independent theory are the same as those of the time-dependent theory. But if the gain of the system is above the threshold, the output increases exponentially with time, as predicted by Eq. (3). In Fig. 2, we plot the logarithm of the output amplitude at the right side of the system versus time for different gains; the numerical results are exactly as we predicted. We also examined the slopes of the lines with gain larger than the threshold in Fig. 2, and found that the slope is exactly as predicted by Eq. (3), equal to $`log_{10}|r_1r_2e^{2k^{\prime \prime }L}|`$. The divergence of the transmission above threshold, even away from the oscillation pole, is the key to understanding the failure of Eq. (2), which is conventionally derived from the boundary condition by implicitly assuming that the output is finite. Normally the physical boundary condition is satisfied when $`t_1t_2e^{ikL}+r_1r_2e^{2ikL}t=t`$, resulting in Eq. (2). Obviously this condition loses its meaning when $`t`$ is infinite. As a result, Eq. (2) becomes unphysical. For a lossy system, the condition given by Eq. (4) is always satisfied. Thus the divergence is a new phenomenon occurring only in systems with gain. The breakdown of the time-independent wave equation signals large fluctuations of the transmission in time and calls for more sophisticated theories that can take into account the interaction between radiation and matter to correctly describe the response of the system. To show that the exponentially attenuated results of the time-independent theory with gain do not come from the backscattering effect, but from a theoretical mistake, we perform the same numerical experiment on a random system with gain.
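As a cross-check of the quoted threshold, one can solve the equality case of Eq. (4) numerically for the parameters given above ($`\lambda =800`$ nm, $`L=4300`$ nm, $`\epsilon ^{}=9`$). The sketch below, using a standard root finder, is our own verification, not part of the original calculation.

```python
import numpy as np
from scipy.optimize import brentq

lam, L = 800e-9, 4300e-9          # values quoted in the text
eps_out, eps_re = 1.0, 9.0

def round_trip(eps_im):
    """|r1 r2| e^{2 k'' L} - 1: vanishes at the threshold of Eq. (4).
    (|e^{2ik'L}| = 1, so only the moduli matter.)"""
    k0 = np.sqrt(eps_out) * 2 * np.pi / lam
    k = np.sqrt(eps_re - 1j * eps_im) * 2 * np.pi / lam   # k = k' - i k''
    r1 = r2 = (k - k0) / (k + k0)
    return abs(r1 * r2) * np.exp(2 * (-k.imag) * L) - 1.0

print(f"threshold eps'' = {brentq(round_trip, 1e-4, 1.0):.3f}")  # ~0.12
```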
The system is made of 50 cells; each cell contains two kinds of layers, with dielectric constants $`\epsilon _1=1`$ and $`\epsilon _2=4-i\epsilon ^{\prime \prime }`$. To introduce disorder, we choose the width of the first layer of the $`n`$th cell to be a random variable $`a_n=a_0(1+W\gamma )`$, where $`a_0=95`$ nm, $`W`$ describes the strength of the randomness, which is 0.8 in our model, and $`\gamma `$ is a random number between $`(-0.5,0.5)`$. The width of the second layer in the $`n`$th cell is $`b_n=215\mathrm{nm}-a_n`$. The wavelength of the incident wave is $`\lambda =1200`$ nm. In Fig. 3, we plot the logarithm of the output at the right side of the system versus time, for a plane-wave incidence and different gains. We again obtain the results we predicted: the output increases exponentially with time when the gain is over the threshold value, as shown in Fig. 3. The time-dependent results also show that the threshold of the random system is quite small, as previously predicted and shown experimentally . Actually, all time-dependent theories for a system with the gain or the length over the threshold, such as the Letokhov diffusion theory and the Lamb theory , give a divergent output of the system. The physical meaning of the divergent output is that the rate of generation of photons by induced radiation is larger than the escape rate of photons through the interfaces of the system. The field in the system will then become stronger and stronger, even if the frequency of the wave is not a resonant frequency (at a resonant frequency, the system becomes a laser). For multilayer systems, to see whether the transmission or reflection coefficients of the time-independent theory are physical, a simple approach is to check every layer of the system by the following method. When we examine a certain layer, say the $`m`$-th layer, in the multilayer system, we can assume that the left part of the system forms an effective interface of the layer with transmission and reflection coefficients $`t_l`$ and $`r_l`$, and the right part forms the other effective interface with $`t_r`$ and $`r_r`$. Here we assume both the right and left subsystems are below threshold. The convergence condition for every layer of the system is then given by Eq. (4), with ($`t_1`$, $`r_1`$) and ($`t_2`$, $`r_2`$) replaced by ($`t_l`$, $`r_l`$) and ($`t_r`$, $`r_r`$), respectively. For every layer, Eq. (4) gives us a line. The cusps are formed by the lowest limit out of all the lines. Fig. 4 shows the results of such a calculation for a 40-layer system, which is formed by two kinds of layers with widths $`L_1=95\mathrm{nm}`$, $`L_2=120\mathrm{nm}`$ and dielectric constants $`\epsilon _1^{}=1`$, $`\epsilon _2^{}=2`$, respectively. In reality, the curve only gives an indication of the magnitude of the gain above which the results from the time-independent equation become suspect. In solving the time-independent equations, it is difficult to know exactly when the solutions break down. One certainly cannot tell from the expressions for the total transmission and the total reflection coefficients, which are well behaved except exactly at the oscillation pole. However, a rough indication is that when the gain for a fixed system approaches the lasing threshold for nearby poles, or when the system size exceeds the threshold length, the solutions should not be trusted. However, the threshold condition is still given by the poles in the complex plane.
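For readers who wish to reproduce the stationary transmission of such a random stack, a minimal transfer-matrix sketch is given below. The random seed, the gain value $`\epsilon ^{\prime \prime }=0.05`$, and the vacuum leads are our assumptions for illustration; the text's FDTD calculation is a different (time-domain) method.

```python
import numpy as np

rng = np.random.default_rng(0)        # assumed seed, for reproducibility only

lam = 1200e-9
a0, W, cell = 95e-9, 0.8, 215e-9
eps1, eps2 = 1.0, 4.0 - 0.05j         # illustrative below-threshold gain

def layer(eps, d):
    """Transfer matrix of one homogeneous layer acting on (E, E')."""
    k = np.sqrt(eps) * 2 * np.pi / lam        # Im k < 0 for gain
    return np.array([[np.cos(k * d), np.sin(k * d) / k],
                     [-k * np.sin(k * d), np.cos(k * d)]])

M = np.eye(2)
for _ in range(50):                   # 50 cells with a random first-layer width
    a_n = a0 * (1 + W * rng.uniform(-0.5, 0.5))
    M = layer(eps2, cell - a_n) @ layer(eps1, a_n) @ M

k0 = 2 * np.pi / lam                  # vacuum on both sides
t = 2j * k0 / (1j * k0 * (M[0, 0] + M[1, 1]) + k0**2 * M[0, 1] - M[1, 0])
print(f"|t|^2 = {abs(t)**2:.4g}")     # grows without bound as gain nears threshold
```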
A study of the distribution of these poles has been carried out by carefully locating the pole positions, continuously tuning up the gain at a fixed frequency until the transmission coefficient diverges. The implication of the analysis above is that the calculation of the transmission and reflection coefficients with the traditional method becomes suspect once the gain or the system size reaches a certain value. Care has to be taken to ensure that the system is not above threshold and that the solution is physical. This remark, unfortunately, also pertains to the application of the powerful $`invariant`$ $`embedding`$ method , in which finite reflection and transmission are implicitly assumed. In summary, this note aims to bring attention to some peculiar aspects of the time-independent wave equation when the gain is above a certain threshold for a fixed length, or equivalently, when the length of the system exceeds a certain value for a fixed gain. Thus some of the conclusions on the statistical properties of the reflection and transmission coefficients in media with gain become suspect at large system sizes. The time-independent equation is inadequate to describe the amplification of light under these conditions. Nevertheless, the simplicity of the time-independent equation can be used effectively to locate resonant conditions, even in disordered systems. A complete treatment of wave propagation in gain media may require the construction of the time-dependent solution out of the continuous and discrete solutions of the time-independent equation . Unlike the Hermitian case, the completeness of these solutions cannot be easily proved when the potential is complex. Ames Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No. W-7405-Eng-82. This work was supported by the director for Energy Research, Office of Basic Energy Sciences.
no-problem/9904/cond-mat9904451.html
ar5iv
text
# Does spin-orbit coupling play a role in metal-nonmetal transition in two-dimensional systems? ## Abstract We propose an experiment which would allow one to pinpoint the role of spin-orbit coupling in the metal-nonmetal transition observed in a number of two-dimensional systems at low densities. Namely, we demonstrate that in a parallel magnetic field the interplay between the spin-orbit coupling and the Zeeman splitting leads to a characteristic anisotropy of the resistivity with respect to the direction of the in-plane magnetic field. Though our analytic calculation is done in the deeply insulating regime, the anisotropy is expected to persist far beyond that regime. In a recent paper an interesting experimental observation was reported. It was demonstrated that the period of the beats of the Shubnikov-de Haas oscillations in a two-dimensional hole system is strongly correlated with the zero-magnetic-field temperature dependence of the resistivity. The beats of the Shubnikov-de Haas oscillations have their origin in the splitting of the spin subbands in zero magnetic field. The authors of Ref. were able to tune the zero-field splitting by changing the gate voltage. They observed that, while in the absence of the subband splitting the zero-magnetic-field resistivity was temperature-independent below $`T=0.7`$ K, a pronounced rise (by about $`5`$ percent) in resistivity with temperature emerged in the interval $`0.2K<T<0.7K`$ at the maximal subband splitting, indicating a metallic-like behavior. This close correlation suggests that it is the mechanism causing the spin subband splitting that plays an important role in the crossover from the metallic-like to the insulating-like temperature dependence of the resistivity with decreasing carrier density (the metal-nonmetal transition). This transition has by now been experimentally observed in a number of different two-dimensional electron and hole systems. By challenging the commonly accepted concepts, it has attracted a lot of theoretical interest and attempts to identify the underlying mechanism. The possible relevance of zero-field splitting to the transition was first conjectured in Ref. . The evidence presented in Ref. about the importance of the subband splitting for the metallic-like behavior of the resistivity is further supported by the very recent data reported in Ref. . Another important feature of the metal-nonmetal transition, which might also provide a clue to the understanding of its origin, is that the metallic phase is destroyed by a relatively weak parallel magnetic field. At the same time, no quenching of the metallic phase in a parallel magnetic field was observed in a SiGe hole gas, in which the strain caused by the lattice mismatch splits the light and heavy holes. As far as the theory is concerned, the role of the parallel magnetic field was previously accounted for exclusively through the Zeeman energy, which either alters the exchange interactions (and, thus, the electron-ion binding energy) or suppresses the liquid phase, or affects the transmittance of the point contact between the phase-coherent regions. It is appealing to combine the observations of the subband splitting in zero field and the results in a parallel magnetic field within a single picture. The spin-orbit (SO) coupling appears to be a promising candidate for such a unifying mechanism. Indeed, on one hand, it is known to lead to spin subband splitting.
On the other hand, a parallel magnetic field, though not affecting the orbital in-plane motion, destroys the SO coupling and, thus, suppresses the intersubband transitions. The possible importance of these transitions was emphasized in Ref. . Their suppression with increasing magnetic field is caused by the fact that the corresponding subband wave functions become orthogonal for all wave vectors. At the present moment there is no consensus in the literature about the role of the SO coupling. Several authors have explored the role of the SO coupling as a possible source of the metallic-like behavior, by considering a noninteracting two-dimensional system and including the SO terms in the calculation of the weak-localization corrections. At the same time, the majority of theoretical works , stimulated by the experimental observation of the transition, disregarded the SO coupling. To pinpoint the role of the SO coupling in the metal-nonmetal transition, it seems important to find a qualitative effect which exists only in the presence of the SO coupling. Such an effect is proposed in the present paper. We show that an interplay between the SO coupling and the Zeeman splitting gives rise to a characteristic anisotropy of the resistivity with respect to the direction of the parallel magnetic field. Obviously, the Zeeman splitting alone cannot induce any anisotropy. To demonstrate the effect, we consider the deeply insulating regime, where the physical picture of transport is transparent. We choose the simplest form for the spin-orbit Hamiltonian $$\widehat{H}_{SO}=\alpha 𝐤\cdot (𝝈\times \widehat{𝐳}).$$ (1) Here $`\alpha `$ is the SO coupling constant, $`𝐤`$ is the wave vector, $`\widehat{𝐳}`$ is the unit vector normal to the 2D plane, and $`𝝈=(\sigma _1,\sigma _2,\sigma _3)`$ are the Pauli matrices. In the presence of the parallel magnetic field, the single-particle Hamiltonian can be written as $$\widehat{H}=\frac{\hbar ^2k^2}{2m}+\alpha 𝐤\cdot (𝝈\times \widehat{𝐳})+g\mu _B𝝈\cdot 𝐁=\left(\begin{array}{cc}\frac{\hbar ^2k^2}{2m}& \frac{\mathrm{\Delta }_Z}{2}e^{-i\varphi _𝐁}-i\alpha ke^{-i\varphi _𝐤}\\ \frac{\mathrm{\Delta }_Z}{2}e^{i\varphi _𝐁}+i\alpha ke^{i\varphi _𝐤}& \frac{\hbar ^2k^2}{2m}\end{array}\right),$$ (2) where $`m`$ is the effective mass, $`g`$ and $`\mu _B`$ are the g-factor and the Bohr magneton, respectively; $`\mathrm{\Delta }_Z=2g\mu _BB`$ is the Zeeman splitting; $`\varphi _𝐁`$ and $`\varphi _𝐤`$ are, correspondingly, the azimuthal angles of the magnetic field $`𝐁`$ (Fig. 1) and of the wave vector $`𝐤`$. The energy spectrum of the Hamiltonian Eq. (2) is given by $$E_\pm (𝐤)=\frac{\hbar ^2k^2}{2m}\pm \frac{1}{2}\sqrt{\mathrm{\Delta }_Z^2+4\alpha ^2k^2+4\alpha k\mathrm{\Delta }_Z\mathrm{sin}(\varphi _𝐁-\varphi _𝐤)}.$$ (3) Note that the spectrum is anisotropic only if both $`\mathrm{\Delta }_Z`$ and $`\alpha `$ are nonzero. The standard procedure for the calculation of the hopping conductance is the following. Denote by $`P_{12}`$ the hopping probability between the localized states $`1`$ and $`2`$. The logarithm of $`P_{12}`$ is the sum of two terms $$\mathrm{ln}P_{12}=-\frac{\epsilon _{12}}{T}+\mathrm{ln}|G(𝐑)|^2,$$ (4) where the first term originates from the activation; $`\epsilon _{12}`$ is the activation energy and $`T`$ is the temperature. The second term in Eq. (4) describes the overlap of the wave functions of the localized states centered at the points $`𝐑_1`$ and $`𝐑_2`$, so that $`𝐑=𝐑_1-𝐑_2`$. In Eq.
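As a quick consistency check of Eq. (3), the following sketch (in units with $`\hbar =m=1`$ and with assumed values of $`\alpha `$, $`\mathrm{\Delta }_Z`$ and $`\varphi _𝐁`$) diagonalizes the $`2\times 2`$ Hamiltonian of Eq. (2) numerically and compares the result with the analytic spectrum.

```python
import numpy as np

hbar = m = 1.0                  # units; alpha, Dz, phiB below are assumed values
alpha, Dz, phiB = 0.3, 0.5, 0.7

def H(k, phik):
    d = hbar**2 * k**2 / (2 * m)
    off = 0.5 * Dz * np.exp(-1j * phiB) - 1j * alpha * k * np.exp(-1j * phik)
    return np.array([[d, off], [np.conj(off), d]])

def E_pm(k, phik):              # analytic spectrum, Eq. (3)
    root = np.sqrt(Dz**2 + 4 * alpha**2 * k**2
                   + 4 * alpha * k * Dz * np.sin(phiB - phik))
    d = hbar**2 * k**2 / (2 * m)
    return d - root / 2, d + root / 2

k, phik = 1.1, 2.0
print(np.linalg.eigvalsh(H(k, phik)))   # numerical eigenvalues (ascending)
print(E_pm(k, phik))                    # matches Eq. (3)
```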
(4) we use the fact that, within a prefactor, the overlap integral coincides with the Green function $`G(𝐑)`$. For the matrix Hamiltonian Eq. (2), the Green function is also a matrix $$\widehat{G}(𝐑)=\int \frac{d^2𝐤}{(2\pi )^2}\frac{e^{i𝐤\cdot 𝐑}}{E-\widehat{H}(𝐤)}.$$ (5) By projecting onto the eigenspaces of the Hamiltonian Eq. (2), the above expression can be presented as $$\widehat{G}(𝐑)=\int \frac{dkkd\varphi _𝐤}{(2\pi )^2}e^{ikR\mathrm{cos}(\varphi _𝐤-\varphi _𝐑)}\left[\frac{\widehat{P}_+(𝐤)}{E-E_+(𝐤)}+\frac{\widehat{P}_{-}(𝐤)}{E-E_{-}(𝐤)}\right],$$ (6) where the projection operators $`\widehat{P}_\pm (𝐤)`$ are defined as $$\widehat{P}_+(𝐤)=\frac{1}{2}\left(\begin{array}{cc}1& O(𝐤)\\ O^{}(𝐤)& 1\end{array}\right),\widehat{P}_{-}(𝐤)=1-\widehat{P}_+(𝐤),$$ (7) where $`O^{}(𝐤)`$ is the complex conjugate of $`O(𝐤)`$, which is defined as $$O(𝐤)=\frac{\frac{\mathrm{\Delta }_Z}{2}\mathrm{exp}(-i\varphi _𝐁)-i\alpha k\mathrm{exp}(-i\varphi _𝐤)}{E_+(𝐤)-E_{-}(𝐤)}.$$ (8) When the distance $`R`$ is much larger than the localization radius $`a_0`$, the integral over $`\varphi _𝐤`$ is determined by a narrow interval $`|\varphi _𝐤-\varphi _𝐑|\lesssim (kR)^{-1/2}\ll 1`$. This allows us to replace $`\varphi _𝐤`$ by $`\varphi _𝐑`$ in the square brackets and perform the angular integration. Then we obtain $$\widehat{G}(𝐑)=\sqrt{\frac{2\pi }{iR}}\int _0^{\mathrm{\infty }}\frac{dk\sqrt{k}}{(2\pi )^2}e^{ikR}\left[\frac{\widehat{P}_+(k,\varphi _𝐑)}{E-E_+(k,\varphi _𝐑)}+\frac{\widehat{P}_{-}(k,\varphi _𝐑)}{E-E_{-}(k,\varphi _𝐑)}\right].$$ (9) The next step of the integration is also standard. Namely, for large $`R`$, $`\widehat{G}(𝐑)`$ is determined by the poles of the integrand. However, in the case under consideration, the equation $`E_\pm (k)=E`$ leads to a fourth-order algebraic equation. To simplify the calculations we will restrict ourselves to the strongly localized regime $`|E|\gg m\alpha ^2/\hbar ^2`$. In this case the poles can be found by successive approximations. In the zero-order approximation, we get the standard result $`k=ik_0`$, where $`k_0`$ is defined as $$k_0=a_0^{-1}=\frac{\sqrt{2m|E|}}{\hbar }.$$ (10) In the first-order approximation, we have $`k=ik_0+k_1`$, where $`k_1`$ is given by $$k_1=\pm i\frac{m\alpha }{\hbar ^2}\sqrt{\mathrm{\Delta }_1^2-1+2i\mathrm{\Delta }_1\mathrm{sin}(\varphi _𝐁-\varphi _𝐑)},$$ (11) where the dimensionless Zeeman splitting $`\mathrm{\Delta }_1`$ is defined as $$\mathrm{\Delta }_1=\mathrm{\Delta }_Z/2\alpha k_0.$$ (12) Within this approximation, the long-distance asymptotics of the Green function is $$\widehat{G}(R)\propto e^{-R/a(\varphi _𝐁,\varphi _𝐑)},$$ (13) where the decay length is given by $$a(\varphi _𝐁,\varphi _𝐑)^{-1}=k_0\left(1-\frac{m\alpha }{\hbar ^2k_0}Re\sqrt{\mathrm{\Delta }_1^2-1+2i\mathrm{\Delta }_1\mathrm{sin}(\varphi _𝐁-\varphi _𝐑)}\right).$$ (14) In the last equation it is assumed that the real part is taken with a positive sign. Our main observation is that the decay length and, concomitantly, the probability of hopping are anisotropic when the parallel magnetic field and the SO coupling are present simultaneously. By evaluating the real part in Eq. (14) we obtain $$a(\varphi _𝐁,\varphi _𝐑)^{-1}=k_0\left(1-\frac{m\alpha }{\sqrt{2}\hbar ^2k_0}\sqrt{\mathrm{\Delta }_1^2-1+\sqrt{1+\mathrm{\Delta }_1^4-2\mathrm{\Delta }_1^2\mathrm{cos}2(\varphi _𝐁-\varphi _𝐑)}}\right).$$ (15) To characterize the anisotropy quantitatively, we introduce the perpendicular decay length $`a_{\perp }=a(\varphi _𝐁-\varphi _𝐑=\pm \frac{\pi }{2})`$ and the parallel decay length $`a_{\parallel }=a(\varphi _𝐁=\varphi _𝐑)`$.
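The angular dependence of the decay length in Eq. (15) is easy to tabulate. The sketch below (with an assumed small parameter $`m\alpha /\hbar ^2k_0=0.05`$) evaluates $`a(\theta )`$ for several values of $`\mathrm{\Delta }_1`$; the anisotropy is largest near $`\mathrm{\Delta }_1=1`$, as discussed next.

```python
import numpy as np

g = 0.05   # assumed value of m*alpha/(hbar^2 k0), the small expansion parameter

def inv_a(theta, D1):
    """a(theta)^{-1} in units of k0, from Eq. (15); theta = phiB - phiR."""
    inner = np.sqrt(1 + D1**4 - 2 * D1**2 * np.cos(2 * theta))
    return 1 - (g / np.sqrt(2)) * np.sqrt(np.maximum(D1**2 - 1 + inner, 0.0))

theta = np.linspace(0, np.pi, 361)
for D1 in (0.5, 1.0, 2.0):      # below, at, and above the optimal field
    a = 1 / inv_a(theta, D1)    # decay length in units of 1/k0
    print(D1, a.max() - a.min())   # anisotropy peaks near D1 = 1
```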
Then a quantitative measure of the anisotropy can be defined as $$\frac{a_{\perp }-a_{\parallel }}{a_0}=\frac{m\alpha }{\hbar ^2k_0}f(\mathrm{\Delta }_1),$$ (16) where the function $`f(x)`$ is given by $$f(x)=x-(x^2-1)^{1/2}\theta (x-1),$$ (17) where $`\theta (x)`$ is the step function. In the strongly localized regime ($`\alpha k_0\ll |E|`$) the anisotropy is weak. The magnetic field dependence of the anisotropy is shown in Fig. 2. It can be seen that the maximal anisotropy corresponds to $`\mathrm{\Delta }_1=1`$ and that it vanishes both in strong and in weak magnetic fields. The theory of hopping transport in systems with an anisotropic localization radius is presented in Ref. . The principal outcome of this theory is that the anisotropy of the localization radius (and, consequently, the exponential anisotropy of the hopping probability (4)) does not lead to an exponential anisotropy of the hopping resistance. In fact, the exponent of the resistance is the same as for isotropic hopping with localization radius $`\sqrt{a_{\perp }a_{\parallel }}`$. However, the anisotropy in the Green function manifests itself in the prefactor of the hopping resistance $$\frac{\rho _{\perp }-\rho _{\parallel }}{\rho _{\perp }+\rho _{\parallel }}=C\frac{a_{\perp }-a_{\parallel }}{a_{\perp }+a_{\parallel }}\approx C\frac{m\alpha }{2\hbar ^2k_0}f(\mathrm{\Delta }_1),$$ (18) where $`C\sim 1`$ is a numerical factor, determined by perturbation theory in the method of invariants for the random bond percolation problem. The exact value of the constant $`C`$ depends on the regime of hopping (nearest-neighbor or variable-range hopping). The microscopic origin of the SO Hamiltonian Eq. (1) is the asymmetry of the confinement potential. In III-V semiconductor quantum wells there exists another mechanism of the SO coupling, which originates from the absence of inversion symmetry in the bulk (the Dresselhaus mechanism). Within this mechanism $`\widehat{H}_{SO}=\beta (\sigma _xk_x-\sigma _yk_y)`$ (for the \[001\] growth direction). Then a calculation similar to the above leads to the following result for the anisotropic decay length $$a(\varphi _𝐁,\varphi _𝐑)^{-1}=k_0\left(1-\frac{m\beta }{\sqrt{2}\hbar ^2k_0}\sqrt{\mathrm{\Delta }_2^2-1+\sqrt{1+\mathrm{\Delta }_2^4+2\mathrm{\Delta }_2^2\mathrm{cos}(\varphi _𝐁+\varphi _𝐑)}}\right),$$ (19) where $`\mathrm{\Delta }_2`$ is related to the Zeeman splitting as $$\mathrm{\Delta }_2=\mathrm{\Delta }_Z/2\beta k_0.$$ (20) By using Eq. (19) we get for the anisotropy $$\frac{a_{\perp }-a_{\parallel }}{a_0}=\frac{m\beta }{\hbar ^2k_0}f(\mathrm{\Delta }_2),$$ (21) where the function $`f`$ is determined by Eq. (17). In conclusion, we have demonstrated that, due to the SO coupling, the rotation of an in-plane magnetic field with respect to the direction of the current should lead to a characteristic angular variation of the resistivity with a period $`\pi `$. The anisotropy is maximal for intermediate magnetic fields and vanishes in the weak- and strong-field limits. In the strongly localized regime, considered in the present paper, the magnitude of the anisotropy is small. However, as seen from Eqs. (16) and (18), the magnitude of the anisotropy should increase as the Fermi level moves up with increasing carrier concentration (since $`k_0`$ decreases). So the resistivity is expected to remain anisotropic, perhaps with a modified angular dependence, far beyond the deeply insulating regime. For high enough concentrations the SO coupling (subband splitting) is negligible, so the anisotropy should also be weak.
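A short numerical sketch of Eqs. (16)-(18) makes the field dependence explicit: with assumed values $`C=1`$ and $`m\alpha /\hbar ^2k_0=0.05`$, the resistivity anisotropy vanishes at weak and strong fields and peaks at $`\mathrm{\Delta }_1=1`$.

```python
import numpy as np

def f(x):
    """f(x) = x - sqrt(x^2 - 1) * theta(x - 1), Eq. (17)."""
    x = np.asarray(x, dtype=float)
    return x - np.sqrt(np.maximum(x**2 - 1, 0.0)) * (x > 1)

C, g = 1.0, 0.05                      # assumed prefactor and m*alpha/(hbar^2 k0)
D1 = np.linspace(0.0, 5.0, 501)
anis = C * g / 2 * f(D1)              # prefactor anisotropy of Eq. (18)
print(D1[np.argmax(anis)], anis.max())   # maximum at Delta_1 = 1
```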
If the intersubband scattering governs the metal-nonmetal transition, then the resistivity anisotropy should reach a maximum around the critical density. Finally, let us discuss two possible complications for the experimental observation of the anisotropy in the resistivity. Both of them stem from the fact that a realistic two-dimensional system has a finite thickness. Firstly, with a finite thickness, even a small deviation of the magnetic field direction from the in-plane position would cause a certain anisotropy even without SO coupling. However, in this case, the anisotropy would only increase with increasing magnetic field, while the SO-induced anisotropy should vanish in the strong-field limit. The second effect of the finite thickness is that it causes an anisotropy of the Dresselhaus term with respect to the crystalline axes. As is shown in Ref. , the interplay of the anisotropic Dresselhaus and isotropic Bychkov-Rashba terms results in a crystalline anisotropy of the resistivity in the weak-localization regime. This effect should be distinguished from the anisotropy with respect to the direction of the current predicted in the present paper. Acknowledgements. The authors are grateful to R. R. Du for helpful discussions. M.E.R. acknowledges the support of the NSF grant INT-9815194, and Y.S.W. the NSF grant PHY-9601277.
no-problem/9904/astro-ph9904293.html
ar5iv
text
# Correlations between kHz QPO and Low Frequency Features Attributed to Radial Oscillations and Diffusive Propagation in the Viscous Boundary Layer Around a Neutron Star ## 1 Introduction The discovery of kilohertz quasiperiodic oscillations (QPO’s) in the low-mass X-ray neutron star (NS) binaries (Strohmayer et al. 1996; Van der Klis et al. 1996 and Zhang et al. 1996) has stimulated both theoretical and observational studies of these sources. In the upper part of the spectrum (400–1200 Hz) for most of these sources, two frequencies $`\nu _k`$ and $`\nu _h`$ have been seen. Initially, the fact that for some sources the peak separation frequency $`\mathrm{\Delta }\nu =\nu _h-\nu _k`$ does not change much led to the beat frequency interpretation (Strohmayer et al. 1996; Van der Klis 1998), which was first presented as a concept in the paper by Alpar & Shaham (1985). Beat-frequency models, where the peak separation is identified with the NS spin rate, have been challenged by observations: for Sco X-1, $`\mathrm{\Delta }\nu `$ varies by 40% (van der Klis et al. 1997, hereafter VK97) and for the source 4U 1608-52, $`\mathrm{\Delta }\nu `$ varies by 26% (Mendez et al. 1998). Mounting observational evidence that $`\mathrm{\Delta }\nu `$ is not constant demands a new theoretical approach. For Sco X-1, in the lower part of the spectrum, VK97 identified two branches (presumably the first and second harmonics) with frequencies 45 and 90 Hz which slowly increase in frequency when $`\nu _k`$ and $`\nu _h`$ increase. Furthermore, in the spectra observed by the Rossi X-ray Timing Explorer (RXTE) for 4U 1728-34, Ford and van der Klis (1998, herein FV98) found low frequency Lorentzian (LFL) oscillations with frequencies between 10 and 50 Hz. These frequencies, as well as the break frequency $`\nu _{break}`$ of the power spectral density (PSD) for the same source, were shown to be correlated with $`\nu _k`$ and $`\nu _h`$. It is clear that the low and high parts of the PSD of the kHz QPO sources should be related within the framework of the same theory. The difficulties which the beat frequency model faces are amplified by the requirement of relating the observed low frequency features, described above, with $`\nu _k`$ and $`\nu _h`$. Recently, a different approach to this problem has been suggested: kHz QPO’s in the NS binaries have been modeled by Osherovich & Titarchuk (1999) as Keplerian oscillations in a rotating frame of reference. In this new model the fundamental frequency is the Keplerian frequency $`\nu _k`$ (the lower frequency of the two kHz QPO’s) $$\nu _k=\frac{1}{2\pi }\left(\frac{GM}{R^3}\right)^{1/2},$$ (1) where G is the gravitational constant, M is the NS mass, and R is the radius of the corresponding Keplerian orbit. The high QPO frequency $`\nu _h`$ is interpreted as the upper hybrid frequency of the Keplerian oscillator under the influence of the Coriolis force $$\nu _h=[\nu _k^2+(\mathrm{\Omega }/\pi )^2]^{1/2},$$ (2) where $`\mathrm{\Omega }`$ is the angular rotational frequency of the NS magnetosphere. For three sources (Sco X-1, 4U 1608-52 and 4U 1702-429), we demonstrated that solid body rotation ($`\mathrm{\Omega }=\mathrm{\Omega }_0=const`$) is a good first order approximation. The slow variation of $`\mathrm{\Omega }`$ as a function of $`\nu _k`$ within the second order approximation is related to the differential rotation of the magnetosphere controlled by a frozen-in magnetic structure.
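For illustration, the hybrid relation (2) already shows why $`\mathrm{\Delta }\nu `$ cannot be constant even for a rigidly rotating magnetosphere. The short sketch below uses $`\mathrm{\Omega }/2\pi =340`$ Hz (the value adopted for 4U 1728-34 in section 3); the grid of $`\nu _k`$ values is our choice.

```python
import numpy as np

Omega_over_2pi = 340.0                        # Hz, value used in Sec. 3
nu_k = np.linspace(350.0, 900.0, 6)           # representative nu_k range [Hz]
nu_h = np.sqrt(nu_k**2 + (2 * Omega_over_2pi)**2)   # Eq. (2): Omega/pi = 2*(Omega/2pi)
print(np.round(nu_h - nu_k, 1))               # Delta nu shrinks as nu_k grows
```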
This model allows us to address the relation between the high and low frequency features in the PSD of these neutron star systems. We interpreted the $`45`$ and $`90`$ Hz oscillations as the 1st and 2nd harmonics of the lower branch of the Keplerian oscillations in the rotating frame of reference: $$\nu _L=(\mathrm{\Omega }/\pi )(\nu _k/\nu _h)\mathrm{sin}\delta ,$$ (3) where $`\delta `$ is the angle between $`𝛀`$ and the vector normal to the plane of the Keplerian oscillations. For Sco X-1, we found that the angle $`\delta =5.5^o`$ fits the observations. In this Letter we include the LFL oscillations and the related break frequency phenomenon in our classification. We attribute the LFL oscillations to radial oscillations in the viscous boundary layer surrounding a neutron star. According to the model of Shakura & Sunyaev (1973, hereafter SS73), the innermost part of the Keplerian disk adjusts itself to the rotating central object (i.e. the neutron star). The recent modelling by Titarchuk, Lapidus & Muslimov (1998, hereafter TLM) led to the determination of the characteristic thickness of the viscous boundary layer $`L`$. In the following section, we present the extension of this work to relate the frequency of the viscous oscillations $`\nu _v`$ and $`\nu _{break}`$ with $`\nu _k`$. A comparison with the observations is carried out for 4U 1728-34. The last section of this Letter contains our theoretical classification of kHz QPO’s and the related low frequency phenomena. ## 2 Radial Oscillations and Diffusion in the Viscous Boundary Layer We define the boundary layer as a transition region confined between the NS surface and the first Keplerian orbit. The radial motion in the disk is controlled by the friction and the angular momentum exchange between adjacent layers, resulting in the loss of the initial angular momentum by the accreting matter. The corresponding radial transport of the angular momentum in a disk is described by the equation (e.g. SS73): $$\dot{M}\frac{d}{dR}(\omega R^2)=2\pi \frac{d}{dR}(W_{r\phi }R^2),$$ (4) where $`\dot{M}`$ is the accretion rate, and $`W_{r\phi }`$ is the component of the viscous stress tensor which is related to the gradient of the rotational frequency $`\omega `$, namely $$W_{r\phi }=2\eta HR\frac{d\omega }{dR},$$ (5) where $`H`$ is the half-thickness of the disk, and $`\eta `$ is the turbulent viscosity. The nondimensional parameter which is essential for equation (4) is the Reynolds number for the accretion flow $$\gamma =\frac{\dot{M}}{4\pi \eta H}=\frac{3Rv_r}{v_tl_t},$$ (6) which is the inverse of the $`\alpha `$-parameter in the SS73 model; $`v_r`$ is a characteristic velocity, and $`v_t`$ and $`l_t`$ are the turbulent velocity and the related turbulent scale, respectively. The conditions $`\omega =\omega _0`$ at $`R=R_0`$ (the NS radius), $`\omega =\omega _K`$ at $`R=R_{out}`$ (the radius where the boundary layer adjusts to the Keplerian motion), and $`d\omega /dR=d\omega _K/dR`$ at $`R=R_{out}`$ were assumed by TLM as boundary conditions. Thus the profile $`\omega (R)`$ and the outer radius of the viscous boundary layer $`R_{out}`$ are uniquely determined by these boundary conditions.
Presenting $`\omega (R)`$ in terms of dimensionless variables, namely the angular velocity $`\theta =\omega /\omega _0`$, the radius $`r=R/R_0`$ ($`R_0=x_0R_s`$, where $`R_s=2GM/c^2`$ is the Schwarzschild radius), and the mass $`m=M/M_{}`$, we express the Keplerian angular velocity as $$\theta _K=6/(a_Kr^{3/2}),$$ (7) where $`a_K=m(x_0/3)^{3/2}(\nu _0/363\mathrm{Hz})`$ and the NS rotational frequency $`\nu _0`$ has a particular value for each star. The particular coefficient, 6, in formula (7) is obtained for the frequency of the nearly coherent (burst) oscillations of 4U 1728-34, i.e. for $`\nu _0=363`$ Hz. The solution of equations (4-5) satisfying the above boundary conditions is $$\theta (r)=D_1r^{-\gamma }+(1-D_1)r^{-2},$$ (8) where $`D_1=(\theta _{out}-r_{out}^{-2})/(r_{out}^{-\gamma }-r_{out}^{-2})`$ and $`\theta _{out}=\theta _K(r_{out})`$. The equation $`\theta ^{}(r_{out})=\theta _K^{}(r_{out})`$ determines $`r_{out}`$: $$\frac{3}{2}\theta _{out}=\gamma D_1r_{out}^{-\gamma }+2(1-D_1)r_{out}^{-2}.$$ (9) The solution of equations (4-5) subject to the inner sub-Keplerian boundary condition has a regime corresponding to super-Keplerian rotation (TLM). In such a regime matter piles up in the vertical direction, thus disturbing the hydrostatic equilibrium. The vertical component of the gravitational force prevents this matter from further accumulation in the vertical direction and drives relaxation oscillations. The radiation drag force, which is proportional to the vertical velocity, determines the characteristic decay time of the vertical oscillations (TLM). The characteristic time $`t_r`$ over which the matter moves inward through this region, bounded between the innermost disk and the relaxation oscillation zone, is $$t_r\sim \frac{L}{v_r},$$ (10) where $`L=R_{out}-R_0`$ is the characteristic thickness of this region. Even though the specific mechanism providing the modulation of the observed X-ray flux over this timescale needs to be understood, this timescale apparently “controls” the supply of accreting matter into the innermost region of the accretion disk. Any local perturbation in the transition region would propagate diffusively outward over a timescale $$t_{diff}\sim \left(\frac{L}{l_{fp}}\right)^2\frac{l_{fp}}{v_r},$$ (11) where $`l_{fp}`$ is the mean free path of a particle. Note that the $`\gamma `$-parameter is proportional to the accretion rate (see Eq. 6), and therefore $`v_r\propto \gamma `$. Using this relationship, we can exclude $`v_r`$ from the above equations and obtain relations for the corresponding inverse timescales (frequencies). For the frequency of the viscous oscillations $$\nu _v\propto \frac{\gamma }{r_{out}-r_0},$$ (12) and for the break frequency, related to the diffusion, $$\nu _{break}\propto \frac{\gamma }{(r_{out}-r_0)^2}.$$ (13) In the following section, we compare the predictions of this model with the observations and also establish the theoretical relation between $`\nu _v`$ and $`\nu _{break}`$. ## 3 Comparisons with Observations The results of FV98 for the low frequency Lorentzian in the X-ray binary 4U 1728-34 are presented in Figure 1, and those for the break frequency $`\nu _{break}`$ in Figure 2. In Figure 1, crosses represent the frequencies (with the appropriate error bars) observed during four days. The data collected on February 16 (open circles) are situated apart from the rest of the observations and are not included in the empirical power law fit suggested by FV98.
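The outer radius $`r_{out}(\gamma )`$ follows from Eqs. (7)-(9) by a one-dimensional root search. A minimal sketch is shown below; the value $`a_K=1.03`$ anticipates the best fit of section 3, while the chosen $`\gamma `$ values and the proportionality constants in Eqs. (12)-(13) (set to unity, with $`r_0=1`$ in dimensionless units) are ours.

```python
import numpy as np
from scipy.optimize import brentq

aK = 1.03                              # best-fit value found in Sec. 3

def residual(r, gam):
    """Outer boundary condition, Eq. (9), with theta_K from Eq. (7)."""
    th = 6.0 / (aK * r**1.5)
    D1 = (th - r**-2) / (r**-gam - r**-2)
    return gam * D1 * r**-gam + 2 * (1 - D1) * r**-2 - 1.5 * th

for gam in (2.5, 3.0, 4.0):            # Reynolds numbers within 1 < gamma < 5
    rs = np.linspace(1.05, 10.0, 500)  # bracket the root by a coarse scan
    v = [residual(r, gam) for r in rs]
    i = next(j for j in range(len(rs) - 1) if v[j] * v[j + 1] < 0)
    r_out = brentq(residual, rs[i], rs[i + 1], args=(gam,))
    nu_v = gam / (r_out - 1.0)         # Eq. (12), up to a constant factor
    nu_b = gam / (r_out - 1.0)**2      # Eq. (13), up to a constant factor
    print(f"gamma={gam}: r_out={r_out:.2f}  nu_v~{nu_v:.2f}  nu_break~{nu_b:.2f}")
```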
In the work discussed above, the authors plotted the observed low frequencies versus the high-frequency QPO, which for all days except February 16 was $`\nu _k`$; apparently for February 16 it was $`\nu _h`$. Our theoretical curve for $`\nu _v`$ versus $`\nu _k`$ is based on equation (12). The $`\chi ^2`$ dependence on this parameter is rather strong: the parabola $`\chi ^2=38024-73076a_k+35732a_k^2`$ has a minimum at $`a_k=1.03`$, which determines the best fit. Using $`\mathrm{\Omega }/2\pi =340`$ Hz in the upper hybrid relation (2), we calculate $`\nu _k`$ for the points observed on February 16 and show that they belong to the set of frequencies modeled by our theoretical curve for the viscous radial oscillations (closed circles). The identification of the observed $`\nu _{break}`$ with the inverse diffusion time (formulas 11 and 13) is illustrated by the theoretical curves in Figure 2. It is worth noting that these two correlations with the kHz frequencies are fit by two theoretical curves using only one parameter, $`a_k`$. The $`\chi ^2`$ dependence on $`a_k`$ is obtained with the inclusion of all data points for the break and low frequency correlations (75 data points). The theoretical dependences of $`\nu _k`$ and $`r_{out}`$ on the $`\gamma `$-parameter are calculated numerically using equations (7) and (9) and employed here for the calculation of the theoretical curves in Figures 2 and 3 using equations (12) and (13). We were unable to interpret the data for $`\nu _{break}`$ collected on February 16 (open circles). Neither $`\nu _v`$ nor $`\nu _{break}`$ in our theory has a power law relation with $`\nu _k`$. However, the theoretical relation between $`\nu _{break}`$ and $`\nu _v`$, shown in Figure 3 by a solid curve, is close to a straight line (in a log-log diagram), suggesting the approximate power law $$\nu _{break}=0.041\nu _v^{1.61}.$$ (14) This relation is derived from the theoretical dependence for the best fit parameter $`a_k=1.03`$. The observations of FV98 (except February 16) are also presented in Figure 3. ## 4 Discussion and Conclusions We present a model for the radial oscillations and diffusion in the viscous boundary layer surrounding a neutron star. Our dimensional analysis has identified the corresponding frequencies $`\nu _v`$ and $`\nu _{break}`$: the former is consistent with the low frequency Lorentzian for 4U 1728-34, and the predicted values of $`\nu _{break}`$, related to the diffusion in the boundary layer, are consistent with the break frequency observed for the same source. Both the oscillations (Keplerian and radial) and the diffusion in the viscous boundary layer are controlled by the same parameter - the Reynolds number $`\gamma `$, which in turn is related to the accretion rate. It is shown in TLM that $`\nu _k`$ is a monotonic function of $`\gamma `$. Therefore, the observed range of $`\nu _k`$ (350-900 Hz) corresponds to the range $`1<\gamma <5`$ (or $`0.2<\alpha <1`$). The results in this Letter extend the classification of kHz QPO’s and the related low frequency phenomena suggested by Osherovich & Titarchuk 1999. Figure 4 summarizes the new classification. Solid lines represent our theoretical curves and open circles the observations for Sco X-1 (from VK97). As one can see, formulas (2) and (3), for the Keplerian oscillator under the influence of the Coriolis force, reproduce the observations well. Indeed, $`\mathrm{\Delta }\nu =\nu _h-\nu _k`$ is not constant, as observed (see OT99 for details of the comparison of the data with the theory).
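The near power-law character of Eq. (14) can be reproduced with the same root search: sweeping $`\gamma `$ and fitting $`\mathrm{log}\nu _{break}`$ against $`\mathrm{log}\nu _v`$ gives a slope close to 1.6. The sketch below is self-contained but uses the same assumed conventions as the previous one; the exact slope depends on the chosen $`\gamma `$ range.

```python
import numpy as np
from scipy.optimize import brentq

aK = 1.03

def residual(r, gam):
    th = 6.0 / (aK * r**1.5)
    D1 = (th - r**-2) / (r**-gam - r**-2)
    return gam * D1 * r**-gam + 2 * (1 - D1) * r**-2 - 1.5 * th

nv, nb = [], []
for gam in np.linspace(2.5, 4.5, 9):
    rs = np.linspace(1.05, 10.0, 800)
    v = [residual(r, gam) for r in rs]
    i = next(j for j in range(len(rs) - 1) if v[j] * v[j + 1] < 0)
    r_out = brentq(residual, rs[i], rs[i + 1], args=(gam,))
    nv.append(gam / (r_out - 1.0))
    nb.append(gam / (r_out - 1.0)**2)

slope = np.polyfit(np.log(nv), np.log(nb), 1)[0]
print(f"log-log slope ~ {slope:.2f}")   # roughly the 1.61 index of Eq. (14)
```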
Effectively, the main viscous frequency $`\nu _v`$ and the diffusive $`\nu _{break}`$ introduce a second oscillator with two new branches in the lower part of the spectra. The unifying characteristic of the spectra for both oscillators is the strong dependence on $`\nu _k`$. This common dependence on $`\nu _k`$ can be viewed as a result of the interaction between the Keplerian oscillator and the viscous oscillator, which share the common boundary at the outer edge of the viscous transition layer. Our parametric study indicates that the power law index 1.6 in Eq. (14) should be the same for different neutron stars. We expect a similar relation for black holes, but with a distinctly different index. The value of $`a_k`$ found here ultimately leads to independent constraints in the determination of the mass and radius of the neutron star (Haberl & Titarchuk 1995). LT thanks NASA for support under grants NAS-5-32484 and the RXTE Guest Observing Program. The authors acknowledge discussions with Alex Muslimov, Jean Swank, Lorella Angelini, Will Zhang, and Joe Fainberg, and fruitful suggestions by the referee. In particular, we are grateful to Eric Ford and Michiel van der Klis for the data which enabled us to make detailed comparisons.
no-problem/9904/quant-ph9904077.html
ar5iv
text
# Arbitrary Phase Rotation of the Marked State Can not Be Used for Grover’s Quantum Search Algorithm ## Abstract A misunderstanding that an arbitrary phase rotation of the marked state together with the inversion about average operation in Grover’s search algorithm can be used to construct a (less efficient) quantum search algorithm is cleared up. The $`\pi `$ rotation of the phase of the marked state is not only the choice for efficiency, but is also vital to Grover’s quantum search algorithm. Grover’s quantum search algorithm is one of the most important developments in quantum computation. It achieves a quadratic speedup over classical search algorithms in finding a marked state in an unordered list. As the algorithm involves only simple operations, it is easy to implement in experiment. By now, it has been realized in NMR quantum computers. Bennett et al have shown that no quantum algorithm can solve the search problem in fewer than $`O(\sqrt{N})`$ steps. Boyer et al have given analytical expressions for the amplitudes of the states in Grover’s search algorithm and derived tight bounds. Zalka has improved these tight bounds and showed that Grover’s algorithm is optimal. Zalka also proposed an improvement on Grover’s algorithm. In another development, Biron et al generalized Grover’s algorithm to an arbitrarily distributed initial state. Pati recast the algorithm in geometric language and studied the bounds on the algorithm. In each iteration of Grover’s search algorithm, there are two steps: 1) a selective inversion of the amplitude of the marked state, which is a phase rotation of $`\pi `$ of the marked state; 2) an inversion about the average of the amplitudes of all basis states. This second step can be realized by two Hadamard-Walsh transformations and a rotation of $`\pi `$ of all basis states different from $`|0\rangle `$. Grover’s search algorithm is a series of rotations in an SU(2) space spanned by $`|n_0\rangle `$, the marked state, and $`|c\rangle =\frac{1}{\sqrt{N-1}}\sum _{n\ne n_0}|n\rangle `$. Each iteration rotates the state vector of the quantum computer system by an angle $`\psi =2\mathrm{arcsin}\frac{1}{\sqrt{N}}`$ towards the $`|n_0\rangle `$ basis of the SU(2) space. Grover further showed that the Hadamard-Walsh transformation can be replaced by almost any unitary transformation. Instead of inversions, the amplitudes can be rotated by arbitrary phases. It is believed that if one rotates the phases of the states arbitrarily, the resulting transformation is still a rotation of the state vector of the quantum computer towards the $`|n_0\rangle `$ basis in the SU(2) space, but with an angle of rotation smaller than $`\psi `$; from the consideration of efficiency, the phase rotation of $`\pi `$ should be adopted. This fact has recently been used to advantage by Zalka to improve the efficiency of the quantum search algorithm. According to the proposal, the inversion of the amplitude of the marked state in step 1 is replaced by a rotation through an angle between 0 and $`\pi `$, to produce a smaller angle of SU(2) rotation towards the end of a quantum search calculation, so that the amplitude of the marked state in the computer system state vector is exactly 1. In this Letter, we show by explicit construction that the above concept is actually wrong. When the rotation of the phase of the marked state is not $`\pi `$, one simply cannot construct a quantum search algorithm at all.
Suppose the initial state of the quantum computer is $$|\varphi \rangle =B_0|n_0\rangle +A_0\frac{1}{\sqrt{N-1}}\sum _{n\ne n_0}|n\rangle .$$ (1) The modified quantum search algorithm now consists of the following two steps: 1) $`|n_0\rangle e^{i\theta }|n_0\rangle `$; 2) an inversion about the average operation $`D`$, whose matrix elements are: $$D_{ij}=\{\begin{array}{cc}\frac{2}{N},\hfill & i\ne j\hfill \\ \frac{2}{N}-1,\hfill & i=j\hfill \end{array}$$ (4) After each iteration of the modified Grover search, the state vector still has the form of (1). The recurrence formulas for the amplitudes are $$B_{j+1}=-\frac{N-2}{N}e^{i\theta }B_j+\frac{2\sqrt{N-1}}{N}A_j,$$ (5) $$A_{j+1}=\frac{2\sqrt{N-1}}{N}e^{i\theta }B_j+\frac{N-2}{N}A_j.$$ (6) Denoting $`\mathrm{cos}\psi =\frac{N-2}{N}`$, $`\mathrm{sin}\psi =\frac{2\sqrt{N-1}}{N}`$, we can rewrite the recurrence relation in matrix form: $$\left(\begin{array}{c}B_{j+1}\\ A_{j+1}\end{array}\right)=\left(\begin{array}{cc}\hfill -\mathrm{cos}\psi e^{i\theta }& \hfill \mathrm{sin}\psi \\ \hfill \mathrm{sin}\psi e^{i\theta }& \hfill \mathrm{cos}\psi \end{array}\right)\left(\begin{array}{c}B_j\\ A_j\end{array}\right).$$ (13) It is not difficult to diagonalize the transformation matrix. The eigenvalues are: $$\lambda _{1,2}=e^{i\gamma _{1,2}},$$ (14) with $$\mathrm{sin}\gamma _{1,2}=\frac{-\mathrm{sin}\theta \mathrm{cos}\psi \pm 2\sqrt{1-\mathrm{cos}^2\psi \mathrm{sin}^2\frac{\theta }{2}}\mathrm{sin}\frac{\theta }{2}}{2}.$$ (15) It is worth pointing out that the two eigenphases satisfy $`\gamma _1+\gamma _2=\pi +\theta `$. The corresponding normalized eigenvectors are the column vectors of the matrix $`U`$, $$U=\left(\begin{array}{cc}\frac{\mathrm{sin}\psi }{\sqrt{2(1-\mathrm{cos}\psi \mathrm{cos}\gamma _2)}}& \frac{-\mathrm{cos}\psi +e^{i\gamma _2}}{\sqrt{2(1-\mathrm{cos}\psi \mathrm{cos}\gamma _2)}}\\ \frac{\mathrm{cos}\psi e^{i\theta }+e^{i\gamma _1}}{\sqrt{2(1-\mathrm{cos}\psi \mathrm{cos}\gamma _2)}}& \frac{\mathrm{sin}\psi e^{i\theta }}{\sqrt{2(1-\mathrm{cos}\psi \mathrm{cos}\gamma _2)}}\end{array}\right).$$ (18) This matrix $`U`$ is unitary and diagonalizes the transformation matrix in (13); that is, $`U^{-1}TU`$ is diagonal. The amplitude of the marked state after $`j+1`$ iterations is $$B_{j+1}=\frac{\mathrm{sin}\psi }{2(1-\mathrm{cos}\psi \mathrm{cos}\gamma _2)}e^{i(j+1)\gamma _1}\left[\mathrm{sin}\psi B_0+(\mathrm{cos}\psi e^{-i\theta }+e^{-i\gamma _1})A_0\right]+\frac{-\mathrm{cos}\psi +e^{i\gamma _2}}{2(1-\mathrm{cos}\psi \mathrm{cos}\gamma _2)}e^{i(j+1)\gamma _2}\left[(-\mathrm{cos}\psi +e^{-i\gamma _2})B_0+\mathrm{sin}\psi e^{-i\theta }A_0\right].$$ (20) When $`\theta =\pi `$ and $`B_0=\sqrt{\frac{1}{N}}`$, $`A_0=\sqrt{\frac{N-1}{N}}`$, we recover the original Grover quantum search algorithm, and $`B_{j+1}=\mathrm{sin}((j+1+1/2)\psi )`$ as given by Boyer et al. To see the effect of the rotation angle $`\theta `$ on the quantum search algorithm, we plot the norm $`|B_{j+1}|`$ with respect to $`\theta `$. As examples, we draw $`|B_4|`$ in Fig. 1 and $`|B_7|`$ in Fig. 2. For simplicity, $`N=100`$, $`B_0=\sqrt{\frac{1}{N}}`$ and $`A_0=\sqrt{\frac{N-1}{N}}`$. From these studies, we see the following points: 1) For $`\theta =\pi `$, $`|B_j|`$ increases as $`j`$ increases, for small $`j`$ values. When $`\theta =\pi `$, Grover’s original quantum search algorithm is working.
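The claims below are easy to verify by iterating the $`2\times 2`$ map of Eq. (13) directly. A minimal sketch (our own check, with the same $`N=100`$ and initial amplitudes) follows.

```python
import numpy as np

N = 100
psi = 2 * np.arcsin(1 / np.sqrt(N))
B0, A0 = np.sqrt(1 / N), np.sqrt((N - 1) / N)

def B_norms(theta, steps):
    """Iterate Eq. (13) and return |B_j| for j = 1..steps."""
    T = np.array([[-np.cos(psi) * np.exp(1j * theta), np.sin(psi)],
                  [np.sin(psi) * np.exp(1j * theta), np.cos(psi)]])
    v = np.array([B0, A0], dtype=complex)
    out = []
    for _ in range(steps):
        v = T @ v
        out.append(abs(v[0]))
    return np.array(out)

print(B_norms(np.pi, 8).max())        # ~1: the standard Grover case
print(B_norms(np.pi / 3, 100).max())  # stays well below 1 (cf. Fig. 4)
```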
2) For other values of $`\theta `$ between 0 and $`2\pi `$, the dependence of $`|B_{j+1}|`$ on $`\theta `$ is not monotonic: there are oscillations, with peaks and valleys in the values of $`|B|`$ for a given $`j`$. What is more, when $`j`$ changes, the positions of these peaks and valleys change too. In other words, at a given $`\theta `$ value, $`|B_{j+1}|`$ does not always increase when $`j`$ increases. For instance, when $`j=3`$, there is only one peak for $`\theta `$ between 0 and $`\pi `$, whereas for $`j=6`$, there are 3 peaks. This is contrary to the common expectation that for a small number of iterations, $`|B_{j+1}|`$ should increase monotonically, though not as rapidly as in the standard Grover quantum search algorithm. 3) For $`\theta `$ different from $`\pi `$, even if one increases the number of iterations, the norm of the amplitude of the marked state cannot reach one. There is a limit which the norm of the amplitude can reach. In Fig. 3 and Fig. 4, we plot $`|B_{j+1}|`$ versus $`j`$ for $`\theta =\frac{\pi }{4}`$ and $`\theta =\frac{\pi }{3}`$, respectively. The behavior is quite interesting. For $`\theta =\frac{\pi }{4}`$, there are rapid irregular oscillations in the norm. In particular, the maximum height is only about 0.15. The minimum is not zero; it is about 0.07. For $`\theta =\frac{\pi }{3}`$, the plot can be seen as three lines, formed by taking every third point. Again, the maximum height is small, only about 0.18. The norm of the amplitude lies in a range between 0.06 and 0.18. Even if one increases the number of iterations, the norm cannot be increased any further. In this case, we have plotted $`j`$ up to 100, which is equal to the number of items in the unsorted system. 4) In the vicinity of $`\pi `$, the algorithm still works, though the height of the norm cannot reach 1. But it can still reach a considerably large value. This shows that Grover’s quantum search algorithm is robust with respect to $`\theta `$ near $`\pi `$. This is important, as an imperfect gate operation may lead to a phase rotation not exactly equal to $`\pi `$. Grover’s quantum search algorithm has a good tolerance of the phase rotation angle near $`\pi `$: a small deviation from $`\pi `$ will not destroy the algorithm. To summarize, we see that $`\theta =\pi `$ is not only a requirement for efficiency, but also a necessary condition for the algorithm. At this angle, the algorithm is also robust. To achieve a smaller increase in the marked state amplitude (or a smaller rotation towards the marked state basis in the SU(2) space), one has to resort to more complicated modifications of Grover’s quantum search algorithm. Encouragement from Prof. Haoming Chen is gratefully acknowledged. We thank Prof. Grover for helpful email discussions regarding Grover’s quantum search algorithms and for bringing to our attention new references on the algorithm.
no-problem/9904/cond-mat9904308.html
ar5iv
text
# On the thermodynamics of strongly correlated integrable electron systems ## Abstract We reexamine the Yang-Yang-Takahashi method of deriving the thermodynamic Bethe ansatz equations which describe strongly correlated electron systems of fundamental physical interest, such as the Hubbard, $`sd`$ exchange (Kondo) and Anderson models. It is shown that these equations contain some additional terms which may play an important role in the physics of the systems. It is well known that many one-dimensional (1D) and effective 1D models describing strongly correlated electron systems of fundamental physical interest are diagonalized exactly by the following set of Bethe ansatz (BA) equations: $$\mathrm{exp}(ik_jL)=\prod _{\alpha =1}^{M}e_1(u_j-\lambda _\alpha )$$ (2) $$\prod _{j=1}^{N}e_1(\lambda _\alpha -u_j)=\prod _{\beta =1}^{M}e_2(\lambda _\alpha -\lambda _\beta ).$$ (3) Here, $`e_n(x)=(x+in/2)/(x-in/2)`$, $`k_j`$ are the electron momenta, $`u_j\equiv u(k_j)`$, $`N`$ is the total number of electrons on an interval of size $`L`$, and $`M`$ is the number of electrons with spin “down”. The eigenenergy $`E`$ and the $`z`$ component of the total spin of the system $`S^z`$ are given by $$E=\sum _{j=1}^{N}\omega _j,S^z=\frac{1}{2}N-M,$$ (4) where $`\omega _j=\omega (k_j)`$ are the electron energies. At $`\omega (k)\propto k^2`$, $`u(k)\propto k`$ and $`\omega (k)\propto \mathrm{cos}k`$, $`u(k)\propto \mathrm{sin}k`$, Eqs. (1) correspond to a 1D electron gas with $`\delta `$-function interaction and to the 1D Hubbard model, respectively. In the theory of dilute magnetic alloys , these equations describe the excitation spectrum of a free host in terms of interacting Bethe excitations, while impurity terms, omitted in Eqs. (1), account for the elastic scattering of Bethe excitations at the impurity site. The case $`\omega (k)=k`$, $`u(k)=0`$ corresponds to the $`sd`$ exchange (Kondo) model , while the case $`\omega (k)=k`$, $`u(k)\propto k^2`$ corresponds to the Anderson model . In addition, Eqs. (1) can be used to study the Anderson and Kondo impurities embedded in a host with a nonmetallic behavior of the density of band states . The equations (1) correspond to a finite system of electrons. A general method of deriving the thermodynamic BA equations has been developed by Yang and Yang for the case of spinless particles, and then generalized by Takahashi to the case of particles with internal degrees of freedom. In this Letter, we reexamine the Yang-Yang-Takahashi method and show that the thermodynamic Bethe ansatz equations contain some additional terms which may play an important role in the physics of strongly correlated electron systems. These additional terms result from a more accurate computation of the contribution of the spin degrees of freedom to a variation of the system’s energy. The structure of the BA equations shows that it is natural to treat the system of $`N-M`$ electrons with spin “up” and $`M`$ electrons with spin “down” in terms of $`N`$ charge excitations with “rapidities” $`u_j=u(k_j)`$, $`j=1,\mathrm{},N`$, and spin excitations (spin waves) with rapidities $`\lambda _\alpha `$, $`\alpha =1,\mathrm{},M`$. In accordance with the Yang-Yang-Takahashi method, to derive the thermodynamic BA equations one needs to note first that in the limit of a sufficiently large physical system Eqs.
(1) admit bound spin complexes in which the spin rapidities $`\lambda _\alpha `$ are grouped into “strings” of order $`n`$ in the complex $`\lambda `$ plane, $$\lambda _\alpha ^{(n,j)}=\lambda _\alpha ^n+\frac{i}{2}(n+1-2j),j=1,\mathrm{},n.$$ In addition, for many of the above-mentioned physical systems the BA equations also admit charge complexes in which charge excitations are bound to spin complexes . Nevertheless, to avoid more tedious expressions we start our analysis with the case in which charge complexes are absent. Then, Eqs. (1) take the form of equations for the charge excitation (particle) momenta $`k_j`$ and the rapidities of the spin complexes, $`\lambda _\alpha ^n`$, $`\alpha =1,\mathrm{},M_n`$, where $`M_n`$ is the number of complexes of order $`n`$ and hence $`\sum _nnM_n=M`$: $$2\pi N_j=k_jL+\sum _{n=1}^{\mathrm{\infty }}\sum _{\alpha =1}^{M_n}p_n[u(k_j)-\lambda _\alpha ^n],$$ (6) $$2\pi J_\alpha ^n=\sum _{j=1}^{N}\theta _n[\lambda _\alpha ^n-u(k_j)]-\sum _{m=1}^{\mathrm{\infty }}\sum _{\beta =1}^{M_m}\mathrm{\Theta }_{nm}(\lambda _\alpha ^n-\lambda _\beta ^m),$$ (7) where $`p_n(x)=\pi +\theta _n(x)`$, and $`N_j`$ and $`J_\alpha ^n`$ are the sets of quantum numbers of the system corresponding to particles and spin complexes, respectively. Here, $`\theta _n(x)=2\mathrm{arctan}(2x/n)`$ and $$\mathrm{\Theta }_{nm}(x)=(1-\delta _{nm})\theta _{|n-m|}(x)+\theta _{n+m}(x)+2\sum _{k=1}^{\mathrm{min}(n,m)-1}\theta _{|n-m|+2k}(x).$$ In the continuous limit, $`L\mathrm{\infty }`$, $`N\mathrm{\infty }`$, $`M\mathrm{\infty }`$, with $`N/L`$ and $`M/L`$ constant, Eqs. (2) take the form of integral equations for the “particle” \[$`\rho (k)`$ and $`\sigma _n(\lambda )`$\] and “hole” \[$`\stackrel{~}{\rho }(k)`$ and $`\stackrel{~}{\sigma }_n(\lambda )`$\] density distributions of the charge excitations and spin complexes, respectively: $$\rho (k)+\stackrel{~}{\rho }(k)=\frac{1}{2\pi }+u^{}(k)\sum _{n=1}^{\mathrm{\infty }}\int d\lambda a_n[u(k)-\lambda ]\sigma _n(\lambda ),$$ (9) $$\stackrel{~}{\sigma }_n(\lambda )=\int dka_n[\lambda -u(k)]\rho (k)-\sum _{m=1}^{\mathrm{\infty }}\int d\lambda ^{}A_{nm}(\lambda -\lambda ^{})\sigma _m(\lambda ^{}),$$ (10) where $`a_n(x)=(2n/\pi )(n^2+4x^2)^{-1}`$, $`u^{}(k)=du/dk`$, and the matrix $`A_{nm}(x)`$ is given by $$A_{nm}(x)=\delta _{nm}\delta (x)+(1-\delta _{nm})a_{|n-m|}(x)+a_{n+m}(x)+2\sum _{k=1}^{\mathrm{min}(n,m)-1}a_{|n-m|+2k}(x).$$ In terms of the densities, the energy of the system is found to be $$\frac{1}{L}E=\int dk\omega (k)\rho (k).$$ (11) The number of unknown functions in Eqs. (3) is twice the number of equations. The needed additional equations for the fundamental (renormalized) energies of the Bethe excitations $$\epsilon (k)=T\mathrm{ln}\frac{\stackrel{~}{\rho }(k)}{\rho (k)},\kappa _n(\lambda )=T\mathrm{ln}\frac{\stackrel{~}{\sigma }_n(\lambda )}{\sigma _n(\lambda )},$$ are derived from the condition of a minimum of the thermodynamic potential of the system $$\mathrm{\Omega }=E-TS-HS^z-AN,$$ (12) where $`T`$, $`S`$, $`A`$, and $`H`$ are the temperature, entropy, chemical potential, and an external magnetic field, respectively.
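A small numerical remark: the passage from the discrete equations (2) to the density equations (3) uses the identity $`\theta _n^{}(x)=2\pi a_n(x)`$ (and hence $`p_n^{}=2\pi a_n`$), which follows immediately from the definitions above. The sketch below verifies it numerically.

```python
import numpy as np

def theta_n(n, x):
    return 2 * np.arctan(2 * x / n)

def a_n(n, x):
    return (2 * n / np.pi) / (n**2 + 4 * x**2)

x = np.linspace(-5, 5, 2001)
for n in (1, 2, 3):
    dtheta = np.gradient(theta_n(n, x), x)
    err = np.max(np.abs(dtheta - 2 * np.pi * a_n(n, x)))
    print(n, err)    # ~0 up to finite-difference error
```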
The condition $`\delta \mathrm{\Omega }=0`$, where the energy’s variation is given by $$\frac{1}{L}\delta E=\int dk\omega (k)\delta \rho (k),$$ (13) results in $$\epsilon (k)=\omega (k)-\frac{1}{2}H-A-\sum _{n=1}^{\mathrm{\infty }}\int d\lambda a_n[u(k)-\lambda ]F[\kappa _n(\lambda )],$$ (15) $$F[\kappa _n(\lambda )]=nH+\int dku^{}(k)a_n[\lambda -u(k)]F[\epsilon (k)]+\sum _{m=1}^{\mathrm{\infty }}\int d\lambda ^{}A_{nm}(\lambda -\lambda ^{})F[\kappa _m(\lambda ^{})],$$ (17) where $`F[f(x)]\equiv T\mathrm{ln}\{1+\mathrm{exp}[-f(x)/T]\}`$. The analysis of the thermodynamic BA equations (7) in the limiting case of the Kondo system shows that the expression (6) is incomplete. Although, in accordance with Eq. (3a), the variation of the density distribution of particles is related to the variation of the density distributions of the spin complexes by the relation $$\delta \rho (k)+\delta \stackrel{~}{\rho }(k)=u^{}(k)\sum _{n=1}^{\mathrm{\infty }}\int d\lambda a_n[u(k)-\lambda ]\delta \sigma _n(\lambda ),$$ the latter contains only the derivative of the charge rapidity $`u^{}(k)`$. It is easy to see that if one sets $`u^{}(k)=0`$, as is the case in the Kondo model, the dependence of the energy variation on the spin degrees of freedom completely disappears from Eq. (6), and Eq. (7b) therefore immediately leads to an unphysical result: all $`\kappa _n(\lambda )`$ are positive, and hence spin excitations are absent in the ground state of the system. The expression (6) thus does not account for the total contribution of the spin degrees of freedom to a variation of the energy of the system. In the continuous limit for the spin excitations only, Eq. (2a) for a particle momentum, $$\frac{2\pi }{L}N_j=k_j+\sum _{n=1}^{\mathrm{\infty }}\int d\lambda p_n[u(k_j)-\lambda ]\sigma _n(\lambda ),$$ (18) clearly shows, however, that the momentum of a charge excitation, and hence its energy $`\omega (k_j)`$, are determined by both the quantum number of the charge excitation $`N_j`$ and the density distributions of the spin complexes $`\sigma _n(\lambda )`$, even if $`u(k)=0`$. Therefore, the total variation of the system energy should be written in the form $$\frac{1}{L}\delta E=\int dk\omega (k)\delta \rho (k)+\int dk\rho (k)\delta \omega (k),$$ (20) where $`\delta \omega (k)`$ is the variation of the energy of a particle at fixed quantum numbers of the charge excitations, $`\delta N_j=0`$: $$\delta \omega (k)=\frac{d\omega (k)}{dk}\delta k=-\frac{d\omega (k)}{dk}\frac{\sum _n\int d\lambda p_n[u(k)-\lambda ]\delta \sigma _n(\lambda )}{1+2\pi u^{}(k)\sum _n\int d\lambda a_n[\lambda -u(k)]\sigma _n(\lambda )}.$$ (21) Making use of Eq. (3a) for the denominator on the right-hand side of Eq. (9b), one easily finds the final expression for the total variation of the energy of the system, $$\frac{1}{L}\delta E=\int dk\omega (k)\delta \rho (k)-\int \frac{dk}{2\pi }\frac{d\omega (k)/dk}{1+\mathrm{exp}[\epsilon (k)/T]}\sum _{n=1}^{\mathrm{\infty }}\int d\lambda p_n[u(k)-\lambda ]\delta \sigma _n(\lambda ).$$ (22) Thus, while the energy of the system is given by Eq. (4a), its variation contains two terms. The first term corresponds to the contribution of a variation of the charge quantum numbers at fixed spin quantum numbers of the system, while the second corresponds to the contribution of a variation of the spin quantum numbers at fixed charge quantum numbers. Taking into account Eq.
(10), we immediately find that the term $$-\int\frac{dk}{2\pi}\frac{d\omega(k)/dk}{1+\exp[\epsilon(k)/T]}\,p_n[u(k)-\lambda]$$ (23) should be added to the right-hand side of Eq. (7a). Then, in the Kondo limit, where $u(k)=0$, $d\omega(k)/dk=1$, and $$\int\frac{dk}{2\pi}\frac{1}{1+\exp[\epsilon(k)/T]}=\frac{N}{L},$$ Eq. (7b) with the extra term (11) correctly reduces to the standard equation describing the thermodynamics of the Kondo model . If the Bethe spectrum of a system also contains charge complexes, Eq. (8) takes the form $$\frac{2\pi}{L}N_j=k_j+\sum_{n=1}^{\infty}\int d\lambda\, p_n[u(k_j)-\lambda]\left[\sigma_n(\lambda)+\sigma_n^{\prime}(\lambda)\right],$$ (25) where $\sigma_n^{\prime}(\lambda)$ stands for the density distribution of charge complexes of order $n$. The total variation of the energy of the system is then found to be $$\frac{1}{L}\delta E=\int dk\,\omega(k)\delta\rho(k)-\int\frac{dk}{2\pi}\frac{d\omega(k)/dk}{1+\exp[\epsilon(k)/T]}\sum_{n=1}^{\infty}\int d\lambda\, p_n[u(k)-\lambda]\,\delta\left[\sigma_n(\lambda)+\sigma_n^{\prime}(\lambda)\right].$$ (26) The Yang-Yang-Takahashi method then results in the following set of thermodynamic Bethe ansatz equations: $$\epsilon(k) = \omega(k)-\frac{1}{2}H-A-\sum_{n=1}^{\infty}\int d\lambda\, a_n[u(k)-\lambda]\left\{F[\kappa_n(\lambda)]-F[\xi_n(\lambda)]\right\},$$ (28) $$F[\kappa_n(\lambda)] = nH-\int\frac{dk}{2\pi}\frac{d\omega(k)/dk}{1+\exp[\epsilon(k)/T]}p_n[u(k)-\lambda]+\int dk\, u^{\prime}(k)a_n[\lambda-u(k)]F[\epsilon(k)]+\sum_{m=1}^{\infty}\int d\lambda^{\prime}A_{nm}(\lambda-\lambda^{\prime})F[\kappa_m(\lambda^{\prime})],$$ (30) $$F[\xi_n(\lambda)] = \xi_n^{(0)}(\lambda)-2nA-\int\frac{dk}{2\pi}\frac{d\omega(k)/dk}{1+\exp[\epsilon(k)/T]}p_n[u(k)-\lambda]+\int dk\, u^{\prime}(k)a_n[\lambda-u(k)]F[\epsilon(k)]+\sum_{m=1}^{\infty}\int d\lambda^{\prime}A_{nm}(\lambda-\lambda^{\prime})F[\xi_m(\lambda^{\prime})],$$ (32) where $\xi_n(\lambda)=T\ln[\tilde{\sigma}_n^{\prime}(\lambda)/\sigma_n^{\prime}(\lambda)]$ are the renormalized energies of the charge complexes, while $\xi_n^{(0)}(\lambda)$ are their bare energies, which differ from one physical system to another. In summary, the additional terms in the thermodynamic BA equations derived in this Letter should substantially affect the thermodynamic properties of a system at finite temperature. It is clear that the additional terms also substantially affect the low-energy physics of systems in which particles (charge excitations not bound into charge complexes) contribute appreciably to the ground state, as is the case in the Kondo and Hubbard models. If unpaired charge excitations are absent from the ground state of a system, as is the case in the Anderson model in the absence of an external magnetic field, the additional terms disappear from the zero-temperature limit of Eqs. (13). In a weak magnetic field, the ground state of the Anderson model also contains a small fraction of unpaired charge excitations, and the additional term in Eq. (13c) should then result in small corrections to the standard solution.
# Electrostatics in a Schwarzschild black hole pierced by a cosmic string

## 1 Introduction

It has long been known that a point charge at rest in a static spacetime feels an electrostatic self-force. The calculation is performed by considering the global electrostatic potential, determined as the solution of the Maxwell equations in the background metric of the spacetime. For a long time, however, its existence seemed a mere curiosity. The situation has recently changed: Bekenstein and Mayo and Hod have derived the upper entropy bound for a charged object by requiring the validity of thermodynamics of the Reissner-Nordström black hole. Their proof makes essential use of the expression for the electrostatic self-energy of a point charge at rest in a Schwarzschild black hole, which had previously been determined in closed form . The purpose of this work is to extend these results to a new case in which the electrostatic self-energy can be determined explicitly. We consider the spacetime, introduced by Aryal et al , which describes a Schwarzschild black hole pierced by a cosmic string. It represents a straight, infinitely thin cosmic string passing through a spherically symmetric black hole, and is obtained by cutting a wedge in the Schwarzschild geometry. So, in the coordinate system $(t,r,\theta,\phi)$ with $0\le\phi<2\pi$, the metric can be written $$ds^2=-\left(1-\frac{2m}{r}\right)dt^2+\left(1-\frac{2m}{r}\right)^{-1}dr^2+r^2d\theta^2+B^2r^2\sin^2\theta\,d\phi^2$$ (1) where $m$ is a positive parameter and $B$ is related to the linear mass density $\mu$ of the cosmic string by $B=1-4\mu$ with $0<B<1$. We only consider the spacetime outside the horizon, i.e. for $r>2m$. In section 2, we summarise the Maxwell equations in metric (1) and, in the case $1/2<B<1$, we give explicitly the expression of the electrostatic potential generated by a point charge at rest. The proof that this expression obeys the electrostatic equation is carried out in section 3. Taking into account the expression found for the electrostatic self-energy, we derive in section 4 the entropy bound for a charged object by employing thermodynamics of the black hole. We add some concluding remarks in section 5.

## 2 Electrostatic potential

The Maxwell equations in metric (1), having as source a point charge $e$ located at the position $(r_0,\theta_0,\phi_0)$ with $r_0>2m$, reduce to $$\partial_i\left(\sqrt{-g}\,F^{i0}\right)=\frac{e}{4\pi}\,\delta(r-r_0)\delta(\theta-\theta_0)\delta(\phi-\phi_0)\quad\mathrm{with}\quad F_{i0}=\partial_iA_0$$ (2) where $A_0$ is the electrostatic potential. According to (2), the electrostatic equation for $A_0$ can be written $$\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}A_0\right)+\frac{1}{r(r-2m)\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}A_0\right)+\frac{1}{B^2r(r-2m)\sin^2\theta}\frac{\partial^2}{\partial\phi^2}A_0=-\frac{e}{4\pi Br^2\sin\theta}\,\delta(r-r_0)\delta(\theta-\theta_0)\delta(\phi-\phi_0).$$ (3) We point out that the application of the Gauss theorem to equation (2) yields $$A_0(r,\theta,\phi)\sim\frac{e}{Br}\quad\mathrm{as}\quad r\to\infty$$ (4) if there is no electric flux through the horizon. Furthermore, we require that the electromagnetic field be regular at the horizon by imposing that $F^{\mu\nu}F_{\mu\nu}$ is finite as $r\to 2m$.
We limit ourselves to the case where $1/2<B<1$, which is physically justified for a cosmic string since $\mu\ll 1$. Without loss of generality, we put $\phi_0=\pi$ to simplify. The expression of the electrostatic potential $A_0$ satisfying equation (2) with the desired boundary conditions can be written as the following sum $$A_0(r,\theta,\phi)=V^{\ast}(r,\theta,\phi)+V_B(r,\theta,\phi)+\frac{em}{Brr_0}$$ (5) where the expressions of $V^{\ast}$ and $V_B$ are given below. To express the potential $V^{\ast}$, we must consider the regions of the spacetime delimited by the hypersurfaces $\phi=$ constant, as shown on Figure 1. We have $$V^{\ast}(r,\theta,\phi)=\begin{cases}V_C[r,\sigma_0(\theta,\phi)]+V_C[r,\sigma_1(\theta,\phi)]&0<\phi<\pi/B-\pi\\ V_C[r,\sigma_0(\theta,\phi)]&\pi/B-\pi<\phi<3\pi-\pi/B\\ V_C[r,\sigma_0(\theta,\phi)]+V_C[r,\sigma_{-1}(\theta,\phi)]&3\pi-\pi/B<\phi<2\pi\end{cases}$$ (6) where $V_C$ is the Copson potential, which is a solution of electrostatic equation (2) with $B=1$, i.e. for the Schwarzschild black hole. Its expression is $$V_C[r,\sigma]=\frac{e}{rr_0}\frac{(r-m)(r_0-m)-m^2\sigma}{[(r-m)^2+(r_0-m)^2-m^2-2(r-m)(r_0-m)\sigma+m^2\sigma^2]^{1/2}}$$ (7) The variables $\sigma_n$ in formula (6) are the following functions of $\theta$ and $\phi$: $$\sigma_0(\theta,\phi)=\cos\theta\cos\theta_0+\sin\theta\sin\theta_0\cos B(\phi-\pi),$$ $$\sigma_1(\theta,\phi)=\cos\theta\cos\theta_0+\sin\theta\sin\theta_0\cos B(\phi+\pi),$$ (8) $$\sigma_{-1}(\theta,\phi)=\cos\theta\cos\theta_0+\sin\theta\sin\theta_0\cos B(\phi-3\pi).$$ The potential $V_B$ is given by the integral expression $$V_B(r,\theta,\phi)=\frac{1}{2\pi B}\int_0^{\infty}V_C[r,k(\theta,x)]\,F_B(\phi,x)\,dx$$ (9) where the function $k$ is given by $$k(\theta,x)=\cos\theta\cos\theta_0-\sin\theta\sin\theta_0\cosh x$$ (10) and the function $F_B$ by $$F_B(\phi,x)=-\frac{\sin(\phi-\pi/B)}{\cosh x/B+\cos(\phi-\pi/B)}+\frac{\sin(\phi+\pi/B)}{\cosh x/B+\cos(\phi+\pi/B)}.$$ (11) In the Schwarzschild black hole, sum (5) with $V_B=0$ and $V^{\ast}=V_C$ yields the electrostatic potential that we have already obtained . On the other hand, for the cosmic string, i.e. $m=0$, we find our previous result , already known in the case of a wedge in flat space . The electrostatic self-potential in a neighbourhood of the point charge is $A_0(r,\theta,\phi)-V_C[r,\sigma_0(\theta,\phi)]$. Consequently, the electrostatic self-energy $W_{self}$ is $$W_{self}(r_0,\theta_0)=\frac{e^2m}{2Br_0^2}-\frac{e\sin\pi/B}{2\pi B}\int_0^{\infty}V_C[r_0,k(\theta_0,x)]\frac{dx}{\cosh x/B-\cos\pi/B}.$$ (12) From (12), we can deduce the electrostatic self-force, which has already been obtained in the Schwarzschild black hole and in the cosmic string .

## 3 Checking of the electrostatic solution

We must first verify that sum (5) is a solution of equation (2). The potential $V^{\ast}$ is obviously a local solution, since the Copson potential $V_C$ expressed in the variables $r$, $\theta$ and $\varphi$ with $\varphi=B\phi$ obeys the electrostatic equation for the Schwarzschild black hole.
As a consequence, the function $V_C[r,k(\theta,x)]$ satisfies $$\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}V_C\right)+\frac{1}{r(r-2m)\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}V_C\right)-\frac{1}{r(r-2m)}\frac{\partial^2}{\partial x^2}V_C=0.$$ Then, according to its expression (9), the potential $V_B$ obeys electrostatic equation (2) without the second member if we have $$\int_0^{\infty}\frac{\partial^2}{\partial x^2}V_C[r,k(\theta,x)]\,F_B(\phi,x)\,dx+\frac{1}{B^2}\int_0^{\infty}V_C[r,k(\theta,x)]\,\frac{\partial^2}{\partial\phi^2}F_B(\phi,x)\,dx=0.$$ Now, from expression (11), we verify that $$\frac{\partial^2}{\partial x^2}F_B(\phi,x)+\frac{1}{B^2}\frac{\partial^2}{\partial\phi^2}F_B(\phi,x)=0$$ (13) which ensures the condition mentioned above after two successive integrations by parts. We notice that the potential $em/Brr_0$ is a homogeneous solution of the electrostatic equation which is regular at the horizon. Secondly, we check that sum (5) is continuous. At $\phi=0$, this is clear because $V^{\ast}(r,\theta,0)=V^{\ast}(r,\theta,2\pi)$, since $\sigma_1(r,\theta,0)=\sigma_{-1}(r,\theta,2\pi)$. At $\phi=\pi/B-\pi$, we introduce $\epsilon$ by setting $\phi=\pi/B-\pi+\epsilon$, and the potential $V_B$ then becomes $$V_B(r,\theta,\pi/B-\pi+\epsilon)=\frac{\sin\epsilon}{2\pi B}\int_0^{\infty}V_C[r,k(\theta,x)]\frac{dx}{\cosh x/B-\cos\epsilon}-\frac{\sin(2\pi/B+\epsilon)}{2\pi B}\int_0^{\infty}V_C[r,k(\theta,x)]\frac{dx}{\cosh x/B-\cos(2\pi/B+\epsilon)}.$$ (14) We write down the following integral: $$\sin\epsilon\int_0^x\frac{dy}{\cosh y-\cos\epsilon}=2\arctan\left(\tanh\frac{x}{2}\cot\frac{\epsilon}{2}\right)\quad\mathrm{with}\quad\epsilon\ne 0.$$ By integrating by parts the first term of expression (14), we get $$\frac{1}{2}V_C[r,k(\theta,\infty)]-\frac{1}{\pi}\int_0^{\infty}\frac{\partial}{\partial y}V_C[r,k(\theta,By)]\arctan\left(\tanh\frac{y}{2}\cot\frac{\epsilon}{2}\right)dy.$$ But the function $\arctan$ is bounded by $\pi/2$, and consequently we may take the limit $\epsilon\to 0$ inside the integral. We thereby obtain $$\frac{1}{2}V_C[r,k(\theta,\infty)]-\frac{1}{2}\left\{V_C[r,k(\theta,\infty)]-V_C[r,k(\theta,0)]\right\}\quad\mathrm{as}\quad\epsilon\to 0,\ \epsilon>0.$$ We have thus proved that integral expression (14) verifies $$\lim_{\epsilon\to 0,\,\epsilon>0}V_B(r,\theta,\pi/B-\pi+\epsilon)=\frac{1}{2}V_C[r,k(\theta,0)]-\frac{\sin 2\pi/B}{2\pi}\int_0^{\infty}V_C[r,k(\theta,By)]\frac{dy}{\cosh y-\cos 2\pi/B}$$ (15) and an analogous relation with $\epsilon<0$, yielding $-V_C[r,k(\theta,0)]/2$ in formula (15). On the other hand, the potential $V^{\ast}$ verifies $$\lim_{\epsilon\to 0,\,\epsilon>0}\left(V^{\ast}(r,\theta,\pi/B-\pi+\epsilon)-V^{\ast}(r,\theta,\pi/B-\pi-\epsilon)\right)=-V_C[r,k(\theta,0)].$$ (16) By combining results (15) and (16), we thus find that the potentials $V_B$ and $V^{\ast}$ are both discontinuous at $\phi=\pi/B-\pi$, whereas their sum is regular. Of course, the potential $V^{\ast}+V_B$ is also continuous at $\phi=3\pi-\pi/B$ by symmetry. In conclusion, the electrostatic potential $A_0$ is a smooth function, singular only at the position of the point charge. Furthermore, it is easy to show that the derivative of the electrostatic potential $A_0$ with respect to $\phi$ is everywhere continuous. Thirdly, we determine the asymptotic form of the electrostatic potential $A_0$.
From expression (6) of $V^{\ast}$, we have immediately $$V^{\ast}(r,\theta,\phi)\sim\begin{cases}2e(r_0-m)/rr_0&0<\phi<\pi/B-\pi\\ e(r_0-m)/rr_0&\pi/B-\pi<\phi<3\pi-\pi/B\\ 2e(r_0-m)/rr_0&3\pi-\pi/B<\phi<2\pi\end{cases}\quad\mathrm{as}\quad r\to\infty.$$ (17) On the other hand, from expression (9) of $V_B$ we get $$V_B(r,\theta,\phi)\sim\frac{e(r_0-m)}{rr_0}g(\phi)\quad\mathrm{as}\quad r\to\infty$$ (18) with $$g(\phi)=\frac{1}{2\pi}\int_0^{\infty}\left[\frac{\sin(\phi+\pi/B)}{\cosh x+\cos(\phi+\pi/B)}-\frac{\sin(\phi-\pi/B)}{\cosh x+\cos(\phi-\pi/B)}\right]dx$$ which can be integrated by elementary methods: $$g(\phi)=\begin{cases}1/B-2&0<\phi<\pi/B-\pi\\ 1/B-1&\pi/B-\pi<\phi<3\pi-\pi/B\\ 1/B-2&3\pi-\pi/B<\phi<2\pi\end{cases}.$$ (19) By using (17) and (18) with (19), we find that the electrostatic potential (5) has the desired asymptotic form (4). Finally, we must verify that the electromagnetic field derived from the electrostatic potential (5) is regular at the horizon. This follows from the fact that $V_C$ tends to $em/rr_0$ when $r\to 2m$.

## 4 Entropy bound for a charged object

We now consider the spacetime which describes a Reissner-Nordström black hole pierced by a cosmic string. It is obtained by cutting a wedge in the Reissner-Nordström geometry. In the coordinates $(t,r,\theta,\phi)$ with $0\le\phi<2\pi$, the metric can be written $$ds^2=-\left(1-\frac{2E}{Br}+\frac{q^2}{B^2r^2}\right)dt^2+\left(1-\frac{2E}{Br}+\frac{q^2}{B^2r^2}\right)^{-1}dr^2+r^2d\theta^2+B^2r^2\sin^2\theta\,d\phi^2$$ (20) where $E$ and $q$ are two parameters. We only consider the spacetime outside the outer horizon, i.e. $r>(E+\sqrt{E^2-q^2})/B$, by assuming that $E^2>q^2$. Following , we interpret $E$ as the energy of the black hole. Clearly, $q$ is the electric charge of the black hole. For $q=0$, metric (4) reduces to metric (1) by setting $m=E/B$. The horizon area $\mathcal{A}$ of the black hole defined by metric (4) has the expression $$\mathcal{A}(E,q)=\frac{4\pi}{B}\left(E+\sqrt{E^2-q^2}\right)^2$$ (21) and the entropy $S_{BH}$ of the black hole is given by $$S_{BH}(E,q)=\frac{1}{4}\mathcal{A}(E,q).$$ (22) The Reissner-Nordström black hole pierced by a cosmic string, linearised with respect to its electric charge $q$, is described by metric (1) plus an electromagnetic test field having the electrostatic potential $$A_0^{ext}(r,\theta,\phi)=\frac{q}{Br}.$$ (23) Moreover, the black hole entropy (22) reduces to $$S_{BH}(E,q)\approx\frac{2\pi}{B}\left(2E^2-q^2\right).$$ (24) The original method of Bekenstein for finding the entropy bound for a neutral object in the Schwarzschild black hole has recently been extended to charged objects in the Reissner-Nordström black hole . Referring to , we recall that the energy $\mathcal{E}$ of a charged object with a mass $\mu$, an electric charge $e$ and a radius $R$, located at the position $(r_0,\theta_0)$ in metric (1) in the presence of the exterior electrostatic potential (23), has the expression $$\mathcal{E}=\mu\sqrt{1-\frac{2E}{Br_0}}+\frac{eq}{Br_0}+W_{self}(r_0,\theta_0)$$ (25) where $W_{self}$ is the electrostatic self-energy (12).
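Before taking the near-horizon limit of (25), it may help to see the self-energy (12) evaluated numerically. The sketch below is ours, in geometric units $G=c=1$, with purely illustrative parameter values; it performs the $x$-quadrature of Eq. (12) with the Copson potential (7):

```python
import numpy as np

def V_C(r, r0, m, sigma, e=1.0):
    """Copson potential (7)."""
    num = (r - m) * (r0 - m) - m**2 * sigma
    den = np.sqrt((r - m)**2 + (r0 - m)**2 - m**2
                  - 2.0 * (r - m) * (r0 - m) * sigma + m**2 * sigma**2)
    return e * num / (r * r0 * den)

def W_self(r0, theta0, B, m, e=1.0, xmax=80.0, nx=40000):
    """Self-energy (12) for 1/2 < B < 1 by straightforward quadrature."""
    x = np.linspace(1e-6, xmax, nx)
    k = np.cos(theta0)**2 - np.sin(theta0)**2 * np.cosh(x)   # k(theta_0, x)
    f = V_C(r0, r0, m, k, e) / (np.cosh(x / B) - np.cos(np.pi / B))
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))   # trapezoid rule
    return (e**2 * m / (2.0 * B * r0**2)
            - e * np.sin(np.pi / B) / (2.0 * np.pi * B) * integral)

# charge in the equatorial plane, away from the string (theta_0 = pi/2)
print(W_self(r0=10.0, theta0=np.pi / 2, B=0.9, m=1.0))
```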
When the charged object is just outside the horizon, its energy (25), for a very small proper length $R$, is $$\mathcal{E}_{last}\approx\frac{\mu RB}{4E}+\frac{eq}{2E}+W_{self}(2E/B,\theta_0)\quad\mathrm{as}\quad R\to 0.$$ (26) In this state, the system formed by the black hole and the charged object has an entropy $S_{BH}(E,q)+S$, where $S$ is the entropy of the charged object. When the charged object falls through the horizon, the final state is a Reissner-Nordström black hole with the new parameters $$E_f=E+\mathcal{E}_{last}\quad\mathrm{and}\quad q_f=q+e.$$ (27) In this final state, the entropy is $S_{BH}(E_f,q_f)$. We now write down the generalised second law of thermodynamics, $$S_{BH}(E_f,q_f)\ge S_{BH}(E,q)+S.$$ (28) We can calculate $\Delta S_{BH}=S_{BH}(E_f,q_f)-S_{BH}(E,q)$ from expression (24). We keep only linear terms in $\mathcal{E}_{last}$. In this way, we exclude a possible gravitational self-force, which should be quadratic in $\mu$ as in a cosmic string . We find $$\Delta S_{BH}=\frac{4\pi}{B}\left[2E\mathcal{E}_{last}-eq-\frac{e^2}{2}\right].$$ (29) By inserting (26) into (29), we get $$\Delta S_{BH}=\frac{4\pi}{B}\left[\frac{\mu RB}{2}+2E\left(\frac{e^2}{8E}+\frac{e^2B}{8E}g(\pi)\right)-\frac{e^2}{2}\right],$$ (30) where $g(\pi)=1/B-1$ by formula (19). According to inequality (28), we then obtain from (30) the desired entropy bound $$S\le 2\pi\left[\mu R-\frac{e^2}{2}\right],$$ (31) initially derived by Zaslavskii in another context.

## 5 Conclusion

We have determined the explicit expressions of the electrostatic potential and self-energy in the Schwarzschild black hole pierced by a cosmic string. Our method can be extended to the static, spherically symmetric spacetimes pierced by a cosmic string whenever the electrostatic potential is known in the absence of a cosmic string: Brans-Dicke and Reissner-Nordström . We have recovered the upper entropy bound for a charged object by employing thermodynamics of the Reissner-Nordström black hole pierced by a cosmic string. To prove this, we have used the value of the electrostatic self-energy at the horizon of the Schwarzschild black hole pierced by a cosmic string. This result confirms the physical importance of the electrostatic self-force.
# Kinematics of LMC stellar populations and self-lensing optical depth

## 1 Present observational constraints

The bulk of the mass of the lmc resides in a nearly face-on disk, with an inclination usually taken to equal the canonical value of $i=33^{\circ}$ (Westerlund (1997)), although both lower ($27^{\circ}$) and higher (up to $45^{\circ}$) values have also been derived from morphological or kinematical studies of the lmc. This disk is observed to rotate with a circular velocity $V_C\simeq 80$ km/s out to at least $8^{\circ}$ from the lmc center (Schommer et al. (1992)). If all the stars belong to the same population, with a vertical (i.e. perpendicular to the disk) velocity dispersion $\sigma_W$, the microlensing optical depth of such a disk upon its own stars is given by $\tau\simeq 2\sigma_W^2\sec^2 i/c^2$ (Gould (1995)). Considering the measured velocity of lmc carbon stars (Cowley & Hartwick 1991), Gould (1995) assumed $\sigma_W=20$ km/s as a typical velocity dispersion for lmc stars. He thus concluded that $\tau\simeq 10^{-8}$, i.e. that self-lensing (first suggested by Sahu (1994) and Wu (1994)) contributes very little to the observed optical depth towards this line of sight. Carbon stars, however, may not be the ultimate probe of the velocity dispersion of lmc populations: they actually comprise various ill-defined classes of objects (Menessier 1999), and their prevalence is a complex function of age, metallicity and probably other factors (Gould 1999). Both observational and theoretical arguments favour the existence of a wide range of velocity dispersions among the various lmc stellar populations. First, Meatheringham et al. (1988) have determined the radial velocities of a sample of planetary nebulae (PN) in the lmc. They measured a velocity dispersion of 19.1 km/s, much larger than the value of 5.4 km/s found for the HI. This was interpreted as being suggestive of orbital heating and diffusion operating in the lmc in the same way as observed in the solar neighbourhood. Second, the observations of Hughes et al. (1991) show clear evidence for an increase in the velocity dispersion of long period variables (LPV) as a function of their age. For young LPVs, the velocity dispersion is 12 km/s, whereas for old LPVs it reaches 35 km/s. More recently, Zaritsky et al. (1999) found a velocity dispersion of $\sigma=18.4\pm 1.4$ km/s for 190 vertical red clump (VRC) stars (see Zaritsky et al. 1999 and Beaulieu and Sackett 1998 for a definition of RC and VRC stars), whereas for the red clump (RC) they measured a value of $\sigma=32.2\pm 3.8$ km/s on a sample of 75 objects (throughout this paper, error bars are converted from Zaritsky's 95 % confidence levels to standard $1\sigma$). A general trend appears: the velocity dispersion is an increasing function of age. Just as in our own Milky Way, stars of the lmc disk have been continuously undergoing dynamical scattering by, for instance, molecular clouds or other gravitational inhomogeneities. This results in an increase of the velocity dispersion of a given stellar population with its age, as will be further discussed in section 3. Notice that the main argument against an lmc self-lensing explanation is precisely the low value of the measured vertical velocity dispersions. However, the stellar populations so far surveyed predominantly consist of red giants.
They are shown in the next section not to be representative of the bulk of the lmc disk stars, and actually to be biased towards young ages: they are on average $\sim 2$ Gyr old, to be compared to an lmc age of $\sim 12$ Gyr.

## 2 The age bias

The red clump population will illustrate the main thrust of our argument. Clump stars have helium-burning cores whose size is approximately independent of the total mass of the object. They also have the same luminosity, and hence they spend a fixed amount of time $\tau_{\mathrm{He}}$ in the clump, irrespective of their mass $m$. Such objects are evolved post-MS stars, which does not mean that they are necessarily old. We have assumed a Salpeter Initial Mass Function for the various lmc stellar populations, $$\frac{dN}{dm}\propto m^{-(1+\alpha)},$$ (1) with $\alpha=1.35$. The stellar formation history has been borrowed from Geha et al. (1998). Their preferred model (e) corresponds to a stellar formation rate $\mathcal{R}(t)$ that remained constant for 10 Gyr after the formation of the lmc 12 Gyr ago. Then, two Gyr ago, $\mathcal{R}(t)$ increased by a factor of three. The number of stars that formed at time $t$ with mass between $m$ and $m+dm$ may be expressed as $$\frac{d^2N}{dm\,dt}=\mathcal{R}(t)\,m^{-(1+\alpha)}.$$ (2) We have assumed a mass-luminosity relation $L\propto m^{\beta}$ on the MS, so that the stellar lifetime may be expressed as $\tau_{\mathrm{MS}}(m)=12\,\mathrm{Gyr}/m^{\beta-1}$ (since $\tau\propto m/L$). With these oversimplified but natural assumptions, a star whose initial mass is $\le 1\,\mathrm{M}_{\odot}$ is still on the MS today and cannot have reached the clump. Conversely, a heavier star with $m\gtrsim 1\,\mathrm{M}_{\odot}$ may well be today in a helium core burning stage, provided that its formation epoch lies in the range between $t=\tau_{\mathrm{MS}}(m)$ (the object has just begun core helium burning) and $t=\tau_{\mathrm{MS}}(m)-\tau_{\mathrm{He}}(m)$ (the star is about to leave the red clump). The number of RC stars observed today with progenitor mass between $m$ and $m+dm$ is therefore given by $$dN_{\mathrm{RC}}=\mathcal{R}(\tau_{\mathrm{MS}}(m))\times m^{-(1+\alpha)}dm\times\tau_{\mathrm{He}}.$$ (3) To get more insight into the age bias at stake, we can parameterize the progenitor mass $m$ in terms of the age $\tau\equiv\tau_{\mathrm{MS}}(m)$. The previous relation simplifies into $$\frac{dN_{\mathrm{RC}}}{d\tau}=\frac{\mathcal{R}(\tau)\,\tau_{\mathrm{He}}}{(\beta-1)}\,\tau^{(\gamma-1)},$$ (4) where $\gamma=\alpha/(\beta-1)$. This may be directly compared to the age distribution of the bulk of the lmc stars, which goes like $\mathcal{R}(\tau)$. With a Salpeter mass function and $\beta=4.5$, we get a value of $\gamma=0.4$. The excess of young RC stars goes as $1/\tau^{0.6}$ and the bias is obvious. Other IMFs are possible, and a spectral index as large as $\alpha\simeq\beta-1=3.5$ would be required to invalidate the effect. HST data analyzed by Holtzman et al. (1997) nevertheless point towards a spectral index $\alpha$ that extends from 0.6 up to 2.1 for stars in the mass range $0.6\le m\le 3\,\mathrm{M}_{\odot}$. The average value corresponds actually to a Salpeter law. Furthermore, there has been a recent burst in the lmc stellar formation rate. In order to model it, we may express the total number of today's RC stars as an integral in which the progenitor mass $m$ runs from $m_1=1\,\mathrm{M}_{\odot}$ up to the tip of the IMF, whose actual value is irrelevant and has been set equal to infinity here for simplicity.
Notice that the specific progenitor mass $m_2\simeq 1.7\,\mathrm{M}_{\odot}$ corresponds to stars born 2 Gyr ago, when the stellar formation rate increased by a factor of 3. Stars which formed before that epoch will be referred to as old. Their number is given by $$N_{\mathrm{RC}}^{\mathrm{old}}=\int_{m_1}^{m_2}\mathcal{R}\left(\tau_{\mathrm{MS}}\right)m^{-(1+\alpha)}dm\;\tau_{\mathrm{He}}.$$ (5) On the other hand, the number $N_{\mathrm{RC}}^{\mathrm{young}}$ of young clump stars is obtained similarly, with masses in excess of $m_2$. We readily infer a fraction of young stars $$N_{\mathrm{RC}}^{\mathrm{young}}/N_{\mathrm{RC}}=\frac{3}{2+(m_2/m_1)^{\alpha}}\simeq 0.75.$$ (6) Three quarters of the clump stars observed today in the lmc have thus formed less than 2 Gyr ago, during the recent period of stellar formation mentioned above. Integrating $\tau_{\mathrm{MS}}$ over the RC population, $$\langle\tau\rangle=\frac{1}{N_{\mathrm{RC}}}\int_{m_1}^{\infty}\tau_{\mathrm{MS}}\,dN_{\mathrm{RC}},$$ (7) yields the average age $$\langle\tau\rangle=\left(12\,\mathrm{Gyr}\right)\times\frac{\alpha}{\alpha+\beta-1}\times\frac{m_1^{1-\alpha-\beta}+2m_2^{1-\alpha-\beta}}{m_1^{-\alpha}+2m_2^{-\alpha}}.$$ (8) This gives a numerical value of $\simeq 1.95$ Gyr. We thus conclude that today's clump stars are, on average, much younger than the lmc disk.

## 3 Distributions of velocity dispersions

This simple analytical result has been checked by means of a Monte Carlo study. We have randomly generated a sample of $10^8$ lmc stars. The progenitor mass was drawn in the range $0.1\le m\le 10\,\mathrm{M}_{\odot}$ according to a Salpeter law. The age of formation was drawn in the range $-12\,\mathrm{Gyr}\le t\le 0$ according to the stellar formation history $\mathcal{R}(t)$ favoured by Geha et al. (1998). The vertical velocity dispersion $\sigma_W$ was then evolved in time from formation up to now according to Wielen's (1977) relation: $$\sigma_W^2=\sigma_0^2+C_Wt.$$ (9) This purely diffusive relation is known to be inadequate for describing velocity dispersions in our Galaxy (Edvardsson et al. 1993). We will nevertheless use it in our model, as heating processes in the lmc may differ from those in the Galaxy. The lmc is indeed subject to tidal heating by the Milky Way (Weinberg (1999)) and has most probably suffered encounters with the smc. Although this simple relation lacks a theoretical motivation, it will be shown to account for several features of the velocity distributions in the lmc, without being at variance with any observation. The initial velocity dispersion $\sigma_0$ was taken to be 10 km/s, and the diffusion coefficient in velocity space along the vertical direction $C_W$ to be 300 $\mathrm{km}^2\,\mathrm{s}^{-2}\,\mathrm{Gyr}^{-1}$, so that our oldest stars have a vertical velocity dispersion reaching up to $\sigma_W^{\mathrm{MAX}}=60$ km/s. For each star, the actual vertical velocity was then randomly drawn, assuming a Gaussian distribution of width $\sigma_W$. In order to compare our Monte Carlo results with the Zaritsky et al. (1999) measurements of the radial velocities of lmc clump stars, we selected two groups of stars according to their position in the HR diagram. Following Zaritsky et al., we use their colour index $$C\equiv 0.565\,(B-I)+0.825\,(U-V+1.15),$$ (10) so that the RC population is defined by $3.1<C<3.4$ with a magnitude $19<V<19.3$, whereas the VRC stars have the same colour index $C$ and brighter magnitudes $18<V<18.75$.
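A minimal version of this Monte Carlo can convey the age bias without the isochrone machinery. The sketch below is ours: it replaces the colour-magnitude selection by a crude helium-burning window of assumed width 0.1 Gyr, and its output should land near the analytic values of Eqs. (6) and (8):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2_000_000                       # 1e8 in the text; reduced here for speed

# Salpeter IMF, dN/dm ~ m^-(1+alpha), alpha = 1.35, for 0.1 <= m <= 10 Msun
alpha, m_lo, m_hi = 1.35, 0.1, 10.0
u = rng.random(N)
mass = (m_lo**-alpha + u * (m_hi**-alpha - m_lo**-alpha))**(-1.0 / alpha)

# Formation epochs from the Geha et al. history: rate R from 12 to 2 Gyr ago,
# 3R for the last 2 Gyr, i.e. a fraction 3*2/(1*10 + 3*2) of stars is recent
recent = rng.random(N) < 6.0 / 16.0
age = np.where(recent, rng.uniform(0.0, 2.0, N), rng.uniform(2.0, 12.0, N))

# Vertical velocities: Wielen-type diffusion of Eq. (9), then a Gaussian draw
sigma0, C_W = 10.0, 300.0           # km/s and km^2 s^-2 Gyr^-1
v_z = rng.normal(0.0, np.sqrt(sigma0**2 + C_W * age))

# Crude helium-burning proxy instead of the isochrone-based colour selection:
# stars currently within tau_He of the end of their MS life, tau_MS = 12/m^3.5
tau_MS = 12.0 / mass**3.5           # Gyr, for beta = 4.5
tau_He = 0.1                        # Gyr, illustrative constant
clump = (age > tau_MS) & (age < tau_MS + tau_He)
print("young fraction :", np.mean(age[clump] < 2.0))   # ~0.75, cf. Eq. (6)
print("mean age [Gyr] :", np.mean(age[clump]))         # ~2, cf. Eq. (8)
print("sigma_W [km/s] :", np.std(v_z[clump]))
```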
In order to infer the colours and magnitudes of the stars that we generated, we used the isochrones computed by Bertelli et al. (1994) for a typical lmc metallicity and helium abundance of $Z=0.008$ and $Y=0.25$. A random sample of 190 stars that passed the VRC selection criteria is presented in Fig. 1, where the vertical velocities are displayed. This histogram may be compared to Fig. 10 of Zaritsky et al. (1999), where no VRC star is found with a velocity in excess of 60 km/s. With the full statistics, our Monte Carlo generated a population of $\simeq 2{,}900$ VRC objects whose vertical velocity distribution has an RMS of $\simeq 18$ km/s. The agreement between the Zaritsky et al. observations and our Monte Carlo results is noteworthy. The average age of our VRC sample is $\simeq 0.87$ Gyr. We also selected a random sample of 75 RC stars whose velocity distribution is featured in Fig. 2. Even with a diffusion coefficient as large as $C_W=300\,\mathrm{km}^2\,\mathrm{s}^{-2}\,\mathrm{Gyr}^{-1}$, chosen so as to comply with a large lmc self-lensing optical depth, our full statistics of 18,000 RC objects has a velocity dispersion of $\simeq 23$ km/s. This is slightly below the value of $\sigma=32.2\pm 3.8$ km/s quoted by Zaritsky et al. Observations are nevertheless fairly scarce, with only 75 RC stars. When Zaritsky et al. fitted a Gaussian to the RC radial velocity distribution featured in Fig. 11 of their paper, they obtained a 95 % C.L. dispersion of $\sigma=32_{-16}^{+19}$ km/s, with a large uncertainty. Our Monte Carlo velocity dispersion of 23 km/s is definitely compatible with that result. We infer an average age for the RC population of $\simeq 1.8$ Gyr, to be compared to our analytical result of $\simeq 1.95$ Gyr. This agrees well with Beaulieu and Sackett's conclusion that isochrones younger than 2.5 Gyr are necessary to fit the red clump. Notice finally that our age estimates for these various clump populations are in no way related to lmc kinematics. They merely result from the postulated Salpeter IMF, the Geha et al. preferred stellar formation history and the Bertelli et al. isochrones. With this model, 70% in mass of the lmc disk consists of objects whose vertical velocity dispersion is in excess of 25 km/s, although the average vertical velocity dispersion of RC stars, for instance, is only $\simeq 23$ km/s. What about the other measurements? The velocity dispersion of PNs has been found equal to 19.1 km/s (Meatheringham et al. 1988). These authors estimate that the bulk of the PNs have an age near 3.5 Gyr. They also note that younger objects are present, down to ages of order $0.5-1.3$ Gyr. Meatheringham et al. finally come to the conclusion that the indicative age of the PN population is 2.1 Gyr. This value once again agrees well with our analytical estimate. Our Monte Carlo gives a slightly larger value of 2.4 Gyr for the age of the PNs, with a velocity dispersion of 24.7 km/s. Because the observed sample contains only 94 objects, the measured value of 19.1 km/s presumably suffers from significant uncertainties. Quite interesting also are the measurements by Hughes et al. (1991) of the velocity dispersions of LPVs as a function of their age. Their sample of 63 "old" LPVs has a velocity dispersion of $\sigma=35_{-4}^{+10}$ km/s. For the bulk of the lmc populations, we obtain an average velocity dispersion of $\simeq 37$ km/s. The problem at stake is actually the age of those old LPVs. These stars indeed display an age-period relation. However, Hughes et al.
derived this relation from kinematic considerations, using precisely Eq. 9 and postulating the same diffusion coefficient as in the Milky Way. They thus inferred an average age of 9.5 Gyr. Finding instead the position of these stars in a colour-magnitude diagram and using lmc isochrones would have led to a clean determination of the age-period relation. A direct determination of the age of LPVs is nevertheless spoilt by a few biases. Some LPVs are carbon stars, and the ejected material around them may considerably dim their luminosities. These stars may also pulsate on a harmonic of the fundamental mode. Both effects lead to an underestimate of their luminosity and hence to an overestimate of their age (Menessier 1999). As a matter of fact, Groenewegen and de Jong (1994) conclude that lmc stars whose progenitor mass is less than 1.15 $\mathrm{M}_{\odot}$ never reach the instability strip on the AGB. This yields an upper limit on the age of LPVs of $\simeq 7.3$ Gyr, in clear contradiction with the average age of 9.5 Gyr inferred by Hughes et al. for old LPVs. Finally, Schommer et al. (1992) have obtained a velocity dispersion of $21-24$ km/s for 9 old lmc clusters. Their large $1\sigma$ error of $\simeq 10$ km/s is due to the small size of the sample. It is not clear whether or not these clusters formed in the disk. If they nevertheless had, they would have undergone fairly restricted orbital heating with respect to the lmc stars. Those systems and the giant molecular clouds actually have comparable masses, and the energy exchange between them does not result in a significant increase of the velocity dispersion of the clusters, unlike what happens to the stars.

## 4 Multi-component model of the lmc

We model the lmc as containing several stellar populations, each associated with a different velocity dispersion $\sigma_{W,i}$ which has evolved according to Eq. 9. We describe each of the ten components of our model by an ellipsoidal density profile $$\rho_i(R,z)=\frac{\Lambda_i}{R^2+z^2/(1-e_i^2)},$$ (11) up to a cut-off radius $R_{\mathrm{MAX}}=15\,\mathrm{kpc}$ (Aubourg et al. 1999). The multi-component model based on these profiles is self-consistent in the sense that it satisfies the Poisson equation and results in a flat rotation curve with the desired $V_C$ of 80 km/s. We define the set of $\sigma_{W,i}$ so as to sample linearly the range between $\sigma_0=10\,\mathrm{km/s}$ and $\sigma_W^{\mathrm{MAX}}=60\,\mathrm{km/s}$ (see previous section). The parameters $\Lambda_i$ and the ellipticities $e_i$ are determined so that the model reproduces the set of velocity dispersions $\sigma_{W,i}$ and surface mass densities $\Sigma_i$, where $d\Sigma_i/d\sigma_i\propto\sigma_i\,\mathcal{R}(t)$ with $\mathcal{R}(t)$ the stellar formation history of the lmc mentioned in section 2. Assuming a typical M/L of 3, which is a free parameter in our model, we reproduce the observed surface brightness of the LMC. For a given distribution of objects, one can compute the total self-lensing optical depth $\tau$ and the event rate $\Gamma$. Both quantities are integrated over all deflectors and sources, considering that only main sequence stars brighter than $V=20$ and red giants can be potential sources, since they are the only objects bright enough to be visible in microlensing surveys.
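As an illustration of how the surface mass densities $\Sigma_i$ follow from Eq. (9) and the adopted stellar formation history, the short sketch below (ours) maps the ten dispersion bins onto formation-age bins and weights them by the stellar mass formed in each; it roughly reproduces the $\approx$70% in mass above 25 km/s quoted above:

```python
import numpy as np

# Surface-density weights for ten components sampling 10-60 km/s linearly;
# sigma_W^2 = sigma_0^2 + C_W*t (Eq. 9) maps dispersion bins onto age bins.
sigma0, sigmaMAX, C_W = 10.0, 60.0, 300.0
edges = np.linspace(sigma0, sigmaMAX, 11)        # dispersion bin edges, km/s
ages = (edges**2 - sigma0**2) / C_W              # corresponding ages in Gyr

def sfh_mass(t0, t1):
    """Stellar mass formed between ages t0 < t1 (Gyr), Geha et al. model (e):
    rate 1 from 12 to 2 Gyr ago, rate 3 during the last 2 Gyr."""
    recent = 3.0 * max(0.0, min(t1, 2.0) - min(t0, 2.0))
    old = 1.0 * max(0.0, min(t1, 12.0) - max(t0, 2.0))
    return recent + old

Sigma = np.array([sfh_mass(ages[i], ages[i + 1]) for i in range(10)])
Sigma /= Sigma.sum()
for s0, s1, w in zip(edges[:-1], edges[1:], Sigma):
    print(f"sigma_W in [{s0:4.1f},{s1:4.1f}] km/s -> mass fraction {w:.3f}")
print("mass fraction above 25 km/s:", Sigma[edges[1:] > 25.0].sum())
```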
The computation of $\Gamma$ requires an estimate of the relative transverse velocity of deflector and source, for which we have assumed a horizontal velocity dispersion equal to the vertical one predicted by the model. Details of this computation can be found in Aubourg et al. (1999). For the model described above, one obtains $\tau=9.3\times 10^{-8}$ and $\Gamma=3.5\times 10^{-7}\,\mathrm{yr}^{-1}$. This can be compared to the eros and macho optical depths, respectively $8.2\times 10^{-8}$ (Ansari et al. (1996)) and $2.9_{-0.9}^{+1.4}\times 10^{-7}$ (Alcock et al. 1997). A combination of those two results yields an average optical depth of $2.1_{-0.8}^{+1.3}\times 10^{-7}$ (Bennett (1998)), but preliminary macho results from their five-year analysis (Sutherland 1999) hint at a reduced optical depth as compared to their two-year analysis. Another relevant prediction of the model is the distribution of event durations, $d\Gamma/d\Delta t$. Figure 3 illustrates this prediction for our model, along with the distribution of observed macho events. Our model thus reproduces both the total observed optical depth towards the lmc and the observed event duration distribution, while complying with the velocity dispersion measurements. A self-lensing interpretation of all the microlensing events observed so far towards the lmc therefore appears plausible.

###### Acknowledgements.

We wish to thank M.O. Menessier for useful discussions, and the members of the EROS collaboration for their comments. We thank Andy Gould, our referee, for his useful remarks and suggestions.
# Pauli susceptibility of nonadiabatic Fermi liquids

## Abstract

The nonadiabatic regime of the electron-phonon interaction leads to behaviors of some physical measurable quantities qualitatively different from those expected from the Migdal-Eliashberg theory. Here we identify the Pauli paramagnetic susceptibility $\chi$ as one such quantity and show that the nonadiabatic corrections reduce $\chi$ with respect to its adiabatic limit. We also show that the nonadiabatic regime induces an isotope dependence of $\chi$ which could, in principle, be measured.

When the Fermi energy $E_F$ is anomalously small, as in high-$T_c$ cuprates and in the fullerene compounds , the Migdal-Eliashberg (ME) approach may be inadequate for describing the interplay between charge carriers and phonons. For example, the alkali-doped fullerenes (A<sub>3</sub>C<sub>60</sub>) have Fermi energies of order $0.25$ eV and intramolecular phonon modes with frequencies $\omega_0$ in the range between $20$ meV and $0.2$ eV . In this case, the adiabatic parameter $\omega_0/E_F$ lies somewhere between $0.1$ and $0.9$, depending on which phonon modes couple most strongly to the electrons. The main consequence is that the electron-phonon vertex corrections may no longer be negligible, as assumed in the ME framework, and a generalization of the theory is required to include the nonadiabatic contributions . In terms of the electron-phonon coupling $\lambda$ and the adiabatic parameter $\omega_0/E_F$, the ME regime applies for $\lambda\lesssim 1$ and $\omega_0/E_F\ll 1$. Therefore, a generalization beyond the ME framework is required when $\lambda\gtrsim 1$ and/or $\omega_0/E_F$ is no longer negligible. However, when $\lambda$ is larger than some critical value $\lambda_c$ (which is of order one or larger), the system evolves toward a polaronic regime characterized by strong electron-lattice correlations. This holds true even in the adiabatic case, in which the charge carriers acquire large effective masses. On the other hand, a region in the $\lambda$-$\omega_0/E_F$ plane different from the one leading to polaronic states is defined by $\lambda\lesssim 1$ and $\omega_0/E_F$ finite. Within this region, where the charge carriers interact weakly but nonadiabatically with phonons, the nature of the quasiparticles differs from both the ME and the polaronic ones. In such a nonadiabatic regime we shall speak of nonadiabatic Fermi liquids (or nonadiabatic fermions), to stress the difference from the ME and polaronic limits. In practice, such a regime can be described by a perturbative approach in which $\lambda\omega_0/E_F$ plays the role of the small parameter of the theory . Various comparisons with exact results (for the one-electron case) and quantum Monte Carlo calculations point toward the reliability of such a perturbative description. At the zeroth order in $\lambda\omega_0/E_F$, the nonadiabatic theory coincides with the ME limit, while for finite values of $\lambda\omega_0/E_F$ the nonadiabatic fermions display anomalous behaviors. In this situation, several properties are modified, and an important question concerns the possibility of observing fingerprints of such a nonadiabatic regime.
Furthermore, in order to be considered as possible evidence, such fingerprints should be sought among those physical quantities for which some well-established property of the ME regime is qualitatively modified in the nonadiabatic one. To clarify this statement, let us consider for example the electron-phonon renormalized charge-carrier mass $m^{\ast}$. In the ME regime $m^{\ast}=(1+\lambda)m$ , where $m$ is the bare mass and $\lambda$ is the electron-phonon coupling. Since $\lambda$ is independent of the ion mass , no isotope effect is expected for $m^{\ast}$. However, when the nonadiabatic contributions are no longer negligible, $m^{\ast}$ acquires an ion-mass dependence which leads to a non-zero isotope coefficient $\alpha_{m^{\ast}}$ . The effective mass $m^{\ast}$ therefore represents a clear example of a quantity for which a well-established property of the ME regime ($\alpha_{m^{\ast}}=0$) is drastically modified in the nonadiabatic one ($\alpha_{m^{\ast}}\ne 0$). So far, strong evidence for an isotope-dependent $m^{\ast}$ has been reported for YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> and La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>, and theoretical calculations have shown that already the inclusion of the first nonadiabatic vertex correction to the ME limit provides values of $\alpha_{m^{\ast}}$ with sign and order of magnitude in agreement with those estimated from the experiments . Another property typical of the ME regime which is instead strongly altered by the nonadiabatic contributions is the dependence of the critical temperature $T_c$ of a homogeneous $s$-wave superconductor on non-magnetic impurities. For a conventional superconductor, weak disorder does not influence the critical temperature, as stated by Anderson's theorem . On the contrary, since the electron-phonon vertex corrections are very sensitive to the amount of disorder, the critical temperature of an $s$-wave nonadiabatic superconductor can be strongly lowered by impurities . Such a peculiar behavior is also accompanied by an anomalous impurity dependence of the isotope coefficient of $T_c$. So far, reduction of $T_c$ driven by disorder in $s$-wave superconductors has been reported for K<sub>3</sub>C<sub>60</sub> and Nd<sub>2-x</sub>Ce<sub>x</sub>CuO<sub>4-δ</sub> . In this paper we consider another measurable quantity which could be regarded as a test for the breakdown of Migdal's theorem: the Pauli paramagnetic susceptibility $\chi$. Here, the characteristic feature of the ME regime ($\omega_0/E_F\ll 1$) is that the electron-phonon interaction does not renormalize the Pauli susceptibility, so that $\chi$ is independent of $\lambda$ and $\omega_0$ . In the ME regime, therefore, $\chi\simeq\chi_P=\mu_B^2N(0)$, where $\mu_B$ is the Bohr magneton and $N(0)$ is the electron density of states at the Fermi level. In principle, therefore, a measurement of $\chi$ via, for example, electron paramagnetic resonance (EPR) is unaffected by the electron-phonon interaction and provides an estimate of the electronic density of states $N(0)$, which is however renormalized by many-electron effects (Stoner enhancement; in the present discussion we shall consider the many-electron effects as being already contained in $N(0)$). The interesting aspect of $\chi$ is that, as we show below, when $\omega_0/E_F$ is no longer negligible, $\chi$ acquires a phonon renormalization and becomes dependent on both $\lambda$ and $\omega_0$.
This result can be of importance for two reasons. First, it leads one to reconsider the estimates of the electron density of states obtained by EPR measurements, since these estimates have been based on the phonon-independent ME form of $\chi$. Second, and more importantly, the nonadiabatic renormalization of $\chi$ induces a non-zero isotope effect which, in principle, could be measured. To evaluate the Pauli susceptibility we make use of the static limit of the Kubo formula: $$\chi(T)=\lim_{\mathbf{q}\to 0}\mu_B^2\int_0^{\beta}d\tau\,\langle T_{\tau}S_z(\mathbf{q},\tau)S_z(-\mathbf{q},0)\rangle,$$ (1) where $\beta$ is the inverse temperature $T$ and $$S_z(\mathbf{q})=\sum_{\mathbf{k},\sigma=\pm 1}\sigma\, c_{\mathbf{k}+\mathbf{q}\sigma}^{\dagger}c_{\mathbf{k}\sigma},$$ (2) where $c_{\mathbf{k}\sigma}^{\dagger}$ ($c_{\mathbf{k}\sigma}$) is the creation (annihilation) operator for an electron with momentum $\mathbf{k}$ and spin direction $\sigma=\pm 1$. In what follows, we shall focus on the evaluation of Eq. (1) for a system of electrons interacting with phonons through the coupling $g(\mathbf{q})$. In terms of electron and phonon Green's functions, Eq. (1) reduces to the following general expression: $$\chi(T)=-\lim_{\mathbf{q}\to 0}\mu_B^2T\sum_{m}\sum_{\mathbf{k}}G(m,\mathbf{k}+\mathbf{q})G(m,\mathbf{k})\Gamma(\mathbf{k}+\mathbf{q},\mathbf{k};m),$$ (3) where $\omega_m=(2m+1)\pi T$ and $$G(m,\mathbf{k})=\left[i\omega_m-\epsilon(\mathbf{k})-\Sigma(m,\mathbf{k})\right]^{-1}$$ (4) is the Green's function for an electron with dispersion $\epsilon(\mathbf{k})$ and electron-phonon self-energy $\Sigma(m,\mathbf{k})$. In Eq. (3), $\Gamma(\mathbf{k}+\mathbf{q},\mathbf{k};m)$ is the irreducible electron-phonon vertex function, given by all diagrams which cannot be separated into two different parts by cutting a single electron or phonon propagator line. The reducible part of the vertex function in fact gives zero contribution when the summation over the spin indices is performed in Eqs. (1-2) . In this paper we compute Eq. (3) by employing a self-consistent calculation which amounts to evaluating $\Sigma(m,\mathbf{k})$ in the non-crossing approximation. For dispersionless phonons with frequency $\omega_0$, we therefore consider the electron-phonon self-energy as given by: $$\Sigma(n,\mathbf{k})=T\sum_{m\mathbf{k}^{\prime}}g(\mathbf{k}-\mathbf{k}^{\prime})^2\frac{\omega_0^2}{(\omega_n-\omega_m)^2+\omega_0^2}G(m,\mathbf{k}^{\prime}).$$ (5) In the above equation we have implicitly assumed that the phonons are already renormalized and that $\omega_0$ is a dressed phonon frequency. In a conserving approach, the vertex function resulting from the non-crossing approximation for $\Sigma(m,\mathbf{k})$ is given by all the ladder contributions. Therefore the vertex function satisfies the following ladder equation: $$\Gamma(\mathbf{k}+\mathbf{q},\mathbf{k};n+m,n)=1+T\sum_{m^{\prime}\mathbf{k}^{\prime}}g(\mathbf{k}-\mathbf{k}^{\prime})^2\frac{\omega_0^2}{(\omega_n-\omega_{m^{\prime}})^2+\omega_0^2}G(m^{\prime},\mathbf{k}^{\prime})G(m^{\prime}+m,\mathbf{k}^{\prime}+\mathbf{q})\,\Gamma(\mathbf{k}^{\prime}+\mathbf{q},\mathbf{k}^{\prime};m^{\prime}+m,m^{\prime}).$$ (6) Actually, from Eq. (3), to evaluate $\chi$ we only need the static limit of Eq. (6), which is obtained by first setting $\omega_m=0$ and then $\mathbf{q}=0$. As already shown in Refs. , if we exchange the order of the two limits, the resulting dynamical limit of the vertex will in general differ from the static one. Therefore, setting $\omega_m=0$ and $\mathbf{q}=0$ on both sides of Eq. (6) may give an ill-defined result, because at that point the vertex is non-analytic.
However, as we shall show below, the computational procedure we employ in handling the vertex function automatically provides the correct static limit by simply setting $\omega_m=0$, $\mathbf{q}=0$ in Eq. (6), regardless of the order of the two limits. Therefore, setting $\lim_{\mathbf{q}\to 0}\Gamma(\mathbf{k}+\mathbf{q},\mathbf{k};n,n)=\Gamma_s(\mathbf{k},n)$, the static limit of Eq. (6) reduces to: $$\Gamma_s(\mathbf{k},n)=1+T\sum_{m^{\prime}\mathbf{k}^{\prime}}g(\mathbf{k}-\mathbf{k}^{\prime})^2\frac{\omega_0^2}{(\omega_n-\omega_{m^{\prime}})^2+\omega_0^2}G(m^{\prime},\mathbf{k}^{\prime})^2\,\Gamma_s(\mathbf{k}^{\prime},m^{\prime}).$$ (7) Without loss of generality, the solution of the set of equations (4), (5) and (7) can be found by using a structureless electron-phonon interaction $g(\mathbf{q})\to g^2$. The resulting self-energy is then momentum independent, and for a system with a half-filled electron band of constant DOS over the entire bandwidth $2E_F$, the self-energy can be written as $\Sigma(n)=i\omega_n-iW_n$, where $$W_n=\omega_n+\lambda\pi T\sum_{m}\frac{\omega_0^2}{(\omega_n-\omega_m)^2+\omega_0^2}\,\frac{2}{\pi}\arctan\left(\frac{E_F}{W_m}\right)$$ (8) is the renormalized electron frequency and $\lambda=g^2N(0)$ is the electron-phonon coupling. Within the same approximation scheme, $\Gamma_s(\mathbf{k},n)$ becomes momentum independent, and the resulting vertex function $\Gamma_s(n)$ satisfies the following equation: $$\Gamma_s(n)=1-\lambda T\sum_{m^{\prime}}\frac{\omega_0^2}{(\omega_n-\omega_{m^{\prime}})^2+\omega_0^2}\,\frac{2E_F}{W_{m^{\prime}}^2+E_F^2}\,\Gamma_s(m^{\prime}).$$ (9) We can verify that the above equation indeed gives the static limit of the vertex by neglecting the renormalization of the frequency, $W_{m^{\prime}}\simeq\omega_{m^{\prime}}$, and by taking the zero-temperature limit. In this way, to first order in $\lambda$ and at zero external frequency, Eq. (9) becomes: $$\Gamma_s(0)=1-\lambda\int\frac{d\omega}{2\pi}\frac{\omega_0^2}{\omega^2+\omega_0^2}\frac{2E_F}{\omega^2+E_F^2}=1-\lambda\frac{\omega_0}{\omega_0+E_F}.$$ (10) $\Gamma_s(0)$ therefore coincides with the static limit already calculated in perturbation theory. We are now in a position to evaluate the Pauli susceptibility. Since both the self-energy and the vertex function are independent of momentum, equation (3) can be integrated analytically over the energy, and the final expression for $\chi(T)$ reduces to: $$\chi(T)=\chi_P\,T\sum_{m}\frac{2E_F}{E_F^2+W_m^2}\,\Gamma_s(m),$$ (11) where $\chi_P=\mu_B^2N(0)$, and $W_m$ and $\Gamma_s(m)$ are the solutions of equations (8) and (9), respectively. We solve the set of equations (8), (9) and (11) for a temperature $T/\omega_0=0.02$ and different values of $\lambda$ and $\omega_0/E_F$. The frequency summations appearing both in the self-energy (8) and in the vertex function (9) are truncated at the frequency cut-off $\omega_c=(2N+1)\pi T$ with $N=400$, corresponding to $\omega_c\simeq 50\,\omega_0$. The solutions of (8) and (9) are then calculated by iteration and the results are plugged into Eq. (11). The high-frequency part ($\omega_m>\omega_c\gg\omega_0$) of the summation in Eq. (11) is calculated by setting $W_m=\omega_m$ and $\Gamma_s(m)=1$, since in this high-frequency region the contribution from the electron-phonon coupling is negligible. The procedure outlined above permits an estimate of the zero-temperature susceptibility $\chi$ also for the smallest value of $\omega_0/E_F$ we used in the calculations ($\omega_0/E_F=0.01$).
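The numerical procedure just described can be condensed into a few lines. The following sketch is ours: it iterates Eq. (8), solves the linear equation (9) directly, and adds the free high-frequency tail to the sum (11); grid sizes and the plain fixed-point iteration are arbitrary numerical choices:

```python
import numpy as np

def susceptibility(lam, w0_over_EF, T=0.02, Ncut=400):
    """chi/chi_P from Eqs. (8), (9), (11); energies in units of omega_0."""
    EF = 1.0 / w0_over_EF
    n = np.arange(-Ncut, Ncut)
    wn = (2 * n + 1) * np.pi * T                        # fermionic frequencies
    D = 1.0 / ((wn[:, None] - wn[None, :])**2 + 1.0)    # phonon kernel, w0 = 1

    W = wn.copy()                                       # Eq. (8), iterated
    for _ in range(300):
        Wnew = wn + lam * np.pi * T * (D @ ((2 / np.pi) * np.arctan(EF / W)))
        converged = np.max(np.abs(Wnew - W)) < 1e-12
        W = Wnew
        if converged:
            break

    # Eq. (9) is linear in Gamma_s: (1 + K) Gamma_s = 1
    K = lam * T * D * (2 * EF / (W**2 + EF**2))[None, :]
    Gam = np.linalg.solve(np.eye(2 * Ncut) + K, np.ones(2 * Ncut))

    chi = T * np.sum(2 * EF / (EF**2 + W**2) * Gam)     # Eq. (11)
    mt = np.arange(Ncut, 200 * Ncut)                    # free tail, W_m = w_m
    chi += 2 * T * np.sum(2 * EF / (EF**2 + ((2 * mt + 1) * np.pi * T)**2))
    return chi

print(susceptibility(lam=0.5, w0_over_EF=0.3))          # expect a value below 1
```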
In Fig. 1 we show the calculated zero-temperature Pauli susceptibility as a function of the adiabatic parameter $\omega_0/E_F$ and for different values of the electron-phonon coupling constant $\lambda$. When $\omega_0/E_F\to 0$, $\chi$ approaches its free-electron value $\chi_P$, irrespective of the value of $\lambda$, and we therefore recover the result of the ME theory. Instead, when $\omega_0/E_F$ is larger than zero, $\chi$ becomes dependent on $\lambda$ and is always lowered with respect to $\chi_P$. In Fig. 2, $\chi/\chi_P$ is plotted as a function of the electron-phonon coupling $\lambda$ for different values of $\omega_0/E_F$. For small values of $\omega_0/E_F$, $\chi/\chi_P$ decreases almost linearly with $\lambda$. The main result of our calculations is therefore that $\chi(0)/\chi_P<1$ as soon as $\omega_0/E_F>0$. Preliminary calculations including higher-order vertex corrections confirm this feature. The reduction of the Pauli susceptibility induced by the electron-phonon interaction when $\omega_0/E_F$ is finite requires one to reconsider the estimates of the electron density of states based on EPR measurements . In these estimates, in fact, the measured $\chi$ is fitted with the ME expression of the susceptibility $$\chi\propto N(0)=\frac{N_0(0)}{1-I},$$ (12) where in the last equality we have explicitly separated $N(0)$ into the free-electron form $N_0(0)$ and the Stoner enhancement $1/(1-I)$ given by the many-electron effects. Theoretical estimates of $1/(1-I)$ therefore permit one to obtain $N_0(0)$ from the experimental $\chi$ . However, this procedure may systematically underestimate $N_0(0)$ if $\omega_0/E_F$ is no longer negligible, as in the fullerene compounds. In fact, in view of the previous results, $N_0(0)$ in Eq. (12) should be replaced by $N_0^{\ast}(0)\equiv N_0(0)f(\lambda,\omega_0/E_F)$, where the function $f$ takes into account the phonon renormalization effects and is less than unity. From the calculations shown in Figs. 1 and 2, $f$ can be as small as 0.8-0.7, leading to an underestimate of the bare density of states $N_0(0)$ of 20-30%. Another remarkable feature of the nonadiabatic phonon renormalization is the lattice-induced isotope effect on $\chi$. From Fig. 1 it is in fact obvious that a change in the frequency $\omega_0$ induces a lowering of $\chi$. Such a change of $\omega_0$ can be induced by isotope substitution, leading therefore to a non-zero value of the isotope coefficient: $$\alpha_{\chi}=-\frac{d\log\chi}{d\log M}=\frac{1}{2}\frac{d\log\chi}{d\log(\omega_0/E_F)},$$ (13) where $M$ is the ion mass and, in the last equality, we have used $\omega_0\propto M^{-1/2}$ (note that in the nonadiabatic regime $\chi$ depends also on $\lambda$; however, $\lambda$ is independent of $M$). In Fig. 3 we show the numerical evaluation of Eq. (13) as a function of $\omega_0/E_F$ and for different values of $\lambda$. As expected, the resulting isotope coefficient $\alpha_{\chi}$ vanishes in the adiabatic limit. However, for nonzero values of $\omega_0/E_F$ it becomes negative, and for ordinary values of $\lambda$ its magnitude can be of order 0.05. This is a rather small value; nevertheless, it provides a clear indication of nonadiabaticity.
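For completeness, the isotope coefficient (13) can be estimated by a logarithmic finite difference, reusing the susceptibility() sketch given earlier; the step size h is an arbitrary numerical choice:

```python
import numpy as np

def alpha_chi(lam, r, h=0.02):
    """Finite-difference estimate of Eq. (13) at r = omega_0/E_F; assumes
    the susceptibility() function of the previous sketch is in scope."""
    lo = susceptibility(lam, r * np.exp(-h))
    hi = susceptibility(lam, r * np.exp(+h))
    return 0.5 * (np.log(hi) - np.log(lo)) / (2.0 * h)

print(alpha_chi(lam=0.7, r=0.2))   # small and negative, as in Fig. 3
```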
It would be extremely interesting to investigate experimentally the presence or absence of an isotope effect on $\chi$ in the fullerene compounds. The outcome of such an experiment could provide us with an estimate of $\omega_0/E_F$, and therefore of the degree of nonadiabaticity, in these narrow-band materials.
# Detecting Josephson effect in the excitonic condensate by coherent emission of light

## ACKNOWLEDGMENTS

I am grateful to J. Ketterson and Yi Sun for pointing out Ref. . I am also grateful to J. Fernandez-Rossier for drawing my attention to Ref. , as well as for sending me the preprint .
# Time-of-arrival distribution for arbitrary potentials and Wigner's time-energy uncertainty relation.

## Abstract

A realization of the concept of "crossing state" invoked, but not implemented, by Wigner, allows us to advance in two important aspects of the formalization of the time of arrival in quantum mechanics: (i) For free motion, we find that the limitations described by Aharonov et al. in Phys. Rev. A 57, 4130 (1998) for the time-of-arrival uncertainty at low energies for certain measurement models are in fact already present in the intrinsic time-of-arrival distribution of Kijowski; (ii) We have also found a covariant generalization of this distribution for arbitrary potentials and positions.

In spite of the emphasis of quantum theory on the concept of "observable", the formalization of "time observables" is still a major open and challenging question. The "arrival time" has in particular received much attention in the last few years (see for a recent review). Considering several candidates proposed for the time-of-arrival distribution in the simple free-motion, one-dimensional case, some of us have recently argued in favor of a distribution originally proposed by Kijowski , $\Pi_K$, because it satisfies a number of desirable conditions. This distribution can be associated with a POVM and obtained in terms of the eigenfunctions $|T,\alpha;X\rangle$ ($\alpha=\pm$) of the time-of-arrival operator $\widehat{T}$ introduced by Aharonov and Bohm, $$\widehat{T}=-\frac{m}{2}\left[(\widehat{q}-X)\frac{1}{\widehat{p}}+\frac{1}{\widehat{p}}(\widehat{q}-X)\right],$$ (1) $$\Pi_K(t+T;X;\psi(t))=\sum_{\alpha}|\langle T,\alpha;X|\psi(t)\rangle|^2,$$ (2) (We consider here the general 1D case with both positive and negative momenta, as in .) where $m$ is the mass, $X$ is the arrival point, $\widehat{q}$ and $\widehat{p}$ are the position and momentum operators, and $$|T,\alpha;X\rangle=e^{i\widehat{H}_0T/\hbar}(|\widehat{p}|/m)^{1/2}\Theta(\alpha\widehat{p})|X\rangle.$$ (3) (The operator $|\widehat{p}|^{1/2}$ is defined by its action on momentum plane waves, $|\widehat{p}|^{1/2}|p\rangle=|p|^{1/2}|p\rangle$.) $\Pi_K(t+T;X;\psi(t))$ represents the probability density of arriving at $X$, at the instant $T+t$, for a given wave packet $\psi(t)$, and $t$ is the parametric time that characterizes the evolution of the state $\psi(t)$. (Typically one sets $t=0$, so that $T$ is the "nominal arrival time".) This distribution satisfies in particular the important covariance condition under time translations, $\Pi_K(t+T-t^{\prime};X;\psi(t+t^{\prime}))=\Pi_K(t+T;X;\psi(t))$. For other properties see . In this letter we study two major aspects of this distribution that had not been addressed: (i) for states with positive or negative momenta we shall obtain the states of minimum time uncertainty (for given energy width), and find the same type of limitation pointed out by Aharonov et al.; (ii) we shall also generalize (2) to arbitrary potentials. To handle these two issues conveniently, let us first elaborate on the form of $\Pi_K$. For $\alpha=+$ the contribution in (2) can be interpreted as a quantum version of the positive flux at time $T+t$ due to right-moving particles. Similarly, for $\alpha=-$, one has a quantum version of minus the negative flux due to left-moving particles, again a positive quantity.
Explicitly, $`\mathrm{\Pi }_{K,\alpha }(t+T)`$ $`=`$ $`\psi (t+T)|(|\widehat{p}|/m)^{1/2}\mathrm{\Theta }(\alpha \widehat{p})\delta (\widehat{q}X)`$ (4) $`\times `$ $`\mathrm{\Theta }(\alpha \widehat{p})(|\widehat{p}|/m)^{1/2}|\psi (t+T),`$ (5) where $`\delta (\widehat{q}X)=|XX|`$. The positive operator in (5) corresponds to the classical dynamical variable $$\delta (qX)\frac{\alpha p}{m}\mathrm{\Theta }(\alpha p),$$ (6) whose average represents the modulus of the flux of particles of the classical ensemble that arrive at $`X`$ from one side at a given time. This connexion was pointed out by Delgado . There are of course many possible quantizations of this quantity but the symmetrical one in (5) turns out to be the only one that satisfies the symmetry and minimum variance properties of $`\mathrm{\Pi }_K`$. It is useful to write the positive operator in (5) in the form $`|u_\alpha u_\alpha |`$, where $$|u_\alpha =|T=0,\alpha ;X$$ (7) is the “crossing state”. As emphasized in it is important to keep in mind that, being non-normalizable, its literal interpretation is problematic. Only normalized wave packets peaked around these states have properties as close as desired to the sharp crossing behaviour expected on intuitive grounds. Let us first discuss the point (i) related to the time-energy uncertainty principle. Since the Hamiltonian and the time operator (1) are conjugate variables a minimum uncertainty product can be established in the usual fashion . However, Aharonov et al. have proposed, based on a series of models, a second limitation on the possible values of the time-of-arrival uncertainty: $`\delta tE_k>\mathrm{}`$, where $`\delta t`$ is the width of the “pointer variable” used to measure the arrival time, and $`E_k`$ “the typical initial kinetic energy of the particle” . It is to be stressed that this relation is based on measurement models for the arrival time where some extra (clock) degree of freedom is coupled continuously with the particle. We shall see that in fact the “intrinsic” distribution $`\mathrm{\Pi }_K`$ (there is no explicit recourse to additional pointer degrees of freedom to define $`\mathrm{\Pi }_K`$) is consistent with the behaviour that Aharonov et al. described for their models . There are many other time-energy uncertainty relations , but here we shall be mainly interested in the version of E. P. Wigner , because his formalism is particularly suited for the time of arrival. In the original paper Wigner did not consider in detail any application and described a variational method to find the states of minimum uncertainty product, but did not actually obtain these states, except in two analytically solvable cases. We shall extend Wigner’s work in several directions by evaluating the states of minimum uncertainty product and applying the formalism to the arrival time. For completeness we shall next briefly summarize the main results obtained by Wigner in , and add a number of comments and observations relevant for our application. He defines the basic amplitude as $`\chi (t)=u|\psi (t)`$, where $`|u`$ is in principle any state vector. (Wigner’s formalism encompasses many different time-energy uncertainty products, depending on the $`|u`$ chosen, each with its own physical interpretation.) Note that $`t`$ is considered the independent variable of $`\chi `$, and $`|u`$ is fixed. $`|u`$ is not necessarily a Hilbert space normalizable vector. 
It may also be, for example, a position, or a time-of-arrival eigenvector provided the minimum uncertainty product state obtained is square integrable. $`P(t)|\chi (t)|^2/_{\mathrm{}}^{\mathrm{}}|\chi (t)|^2𝑑t`$ plays the role of a normalized distribution for being found at $`|u`$ at time $`t`$. This is not a standard quantity in the ordinary formulation of quantum mechanics (which assigns probabilities only at fixed time $`t`$), but the interpretation is consistent with the ordinary formulation in the following way: Here “being found” implies operationally to perform measurements of $`|uu|`$ at a given time $`t`$ for the members of an ensemble prepared at $`t_0<t`$ according to $`\psi (t_0)`$. This is repeated at different times but the ensemble is always prepared anew at $`t_0`$ with the same specifications. The distribution of positive counts as a function of time is proportional to $`|u|\psi (t)|^2`$, and $`P(t)`$ is obtained when this distribution can be normalized (which is not always the case). ($`P(t)`$ does not correspond to a continuous measurement that modifies $`\psi (t)`$.) The moments of $`P(t)`$ are defined in the usual way, and in particular the second moment with respect to $`t_0`$ is defined as $$\tau ^2_{\mathrm{}}^{\mathrm{}}|\chi (t)|^2(tt_0)^2𝑑t/_{\mathrm{}}^{\mathrm{}}|\chi (t)|^2𝑑t.$$ (8) The information contained in $`\chi (t)`$ can also be encoded in its Fourier transform, $$\eta (E)h^{1/2}\chi (t)e^{iEt/\mathrm{}}𝑑t.$$ (9) The conjugate variable $`E`$ has dimensions of energy, but $`\eta (E)`$ is not, in general, an energy amplitude in the conventional sense. $`\eta `$ and the conventional energy amplitude of $`\psi (t=0)`$, can be related by expanding $`\psi (t)`$ in a basis of energy eigenstates $`|E,\alpha `$, $$\eta (E)=h^{1/2}\underset{\alpha }{}u|E,\alpha E,\alpha |\psi (0)\mathrm{\Theta }(E).$$ (10) In a general case $`\alpha `$ is an index to account for the possible degeneracy. In particular, for free motion, $`\alpha =\pm `$, and $$|E,\alpha =\left(\frac{m}{2E}\right)^{1/4}|p=\alpha (2mE)^{1/2}.$$ (11) Analogously to (8) the second energy moment with respect to $`E_0`$ is defined as $$ϵ^2_0^{\mathrm{}}|\eta (E)|^2(EE_0)^2𝑑E/_0^{\mathrm{}}|\eta (E)|^2𝑑E.$$ (12) Neither $`t_0`$ nor $`E_0`$ should in general be identified with the average values of time and energy. They are instead reference parameters fixed beforehand to evaluate the moments. As a consequence, $`ϵ^2`$ and $`\tau ^2`$ should not in general be identified with the “variances” $`(\mathrm{\Delta }E)^2`$ and $`(\mathrm{\Delta }t)^2`$, which are the second moments with respect to the average values. Since $`\eta (E)`$ and $`\chi (t)`$ are Fourier transforms of each other the uncertainty product $`ϵ\tau `$ is greater than $`\mathrm{}/2`$ (a peculiarity of time and energy with respect to position and momentum is that the equality is never satisfied ). In fact the bound increases substantially as $`E_0`$ decreases. Wigner sought for the function $`\eta (E)`$ that renders $`\tau `$ to a minimum for fixed $`ϵ`$. In order to have a finite second moment $`\tau ^2`$, $`\eta (E)`$ must vanish at the origin, $`\eta (0)=0`$, so $`\eta `$ must vanish at both ends of integration $`E=0,\mathrm{}`$. 
Using partial integration, and the notation $`\eta _0=\eta e^{iEt_0/\mathrm{}}`$ (Wigner showed that the minimum of $`\tau ϵ`$ must correspond to a real $`\eta _0`$.), one finds that $$\tau ^2=\mathrm{}^2_0^{\mathrm{}}|\eta _0(E)/E|^2𝑑E/_0^{\mathrm{}}|\eta _0(E)|^2𝑑E.$$ (13) The product $`\tau ^2ϵ^2`$ subject to the constraint of fixed $`ϵ^2`$ is minimized by variational calculus. This leads, using (12) and (13), to $$\mathrm{}^2\frac{^2\eta _0}{E^2}+\frac{\lambda ^{}}{ϵ^2}(EE_0)^2\eta _0=(\tau ^2+\lambda ^{})\eta _0,$$ (14) where $`\lambda ^{}`$ is a Lagrange multiplier. This equation is formally similar to the Schr̈odinger equation for the harmonic oscillator, except for the boundary condition at $`E=0`$, $`\eta _0(0)=0`$, and the subsidiary condition for $`ϵ`$ (12). The minimum $`\tau `$ is obtained from the lowest eigenvalue corresponding to the value of $`\lambda ^{}`$ where the subsidiary condition is satisfied. Fortunately the solution depends only on the ratio $`E_0/ϵ`$, namely $`\eta (E;E_0;ϵ)=g(E/ϵ;E_0/ϵ)`$, where $`g`$ is the solution of (14) with $`ϵ=1`$. Note also that, since $`|\eta _0|^2=|\eta |^2`$, the value of $`t_0`$ does not play any role in the minimization process. (Physically the state $`\chi _0(t)`$ corresponding to $`t_0=0`$ is valid for any other time $`t_0`$ by a shift of the argument.) The minimization of $`\tau `$ for fixed $`E_0/ϵ`$ requires a method to solve the differential equation (14) for many different values of $`\lambda ^{}`$, until $`ϵ^2=1`$ is satisfied. In our calculation the successive values $`\lambda ^{}(n)`$ have been obtained with a Newton-Raphson algorithm, and the lowest eigenstate and eigenvalue of (14) for each $`\lambda ^{}(n)`$ with a very efficient “relaxation method” . The only explicit case considered in the original paper by Wigner was $`|u=|X`$. The corresponding $`P(t)`$ provides a time distribution for the presence of the particle at $`X`$ but not for its arrival. In an intriguing “general observation”, Wigner stated that, instead of asking for the probability that the particle be at a definite landmark in space, just at the time $`t`$, “it would be more natural to ask … for the probability that the particle crosses the aforementioned landmark at the time $`t`$ from the left, and also that it crosses the landmark, at a given time, from the right.” But the paragraph ends with “This point …, interesting though it may be, will not be elaborated further”. Precisely, it is our aim here to elaborate further this question. Indeed the probabilities mentioned by Wigner can be obtained by means of the crossing states $`|u_+`$ and $`|u_{}`$ discussed before, see (7). Specializing to states having only positive/negative momentum $`\mathrm{\Pi }_K=\mathrm{\Pi }_{K,\alpha }(t)=|u_\alpha |\psi (t)|^2`$ provides the arrival time distribution at $`t`$. Considering $`\chi (t)=u_\alpha |\psi (t)`$, we see that Wigner’s probability density is nothing but Kijowski’s distribution, $`P(t)=\mathrm{\Pi }_{K,\alpha }(t)=|\chi (t)|^2`$. Moreover, the Fourier transform of $`\chi (t)`$ is in this case the standard energy amplitude, $`\eta (E)=E,\alpha |\psi (0)`$, so that $`ϵ`$ becomes the spread (around $`E_0`$) of the ordinary energy distribution. For a given ratio $`E_0/ϵ`$ there is a minimum value of $`\tau ϵ`$. The family of states of minimal uncertainty $`\eta (E;E_0;ϵ)`$ with the same ratio $`E_0/ϵ`$ have in common the same value of $`E/ϵ`$. Figure 1 shows $`\tau ϵ`$ versus $`E/ϵ`$ for the states of minimized time–energy uncertainty product. 
For comparison we also show the curve $`ϵ/E`$. Clearly $$\tau >\mathrm{}/E,$$ (15) which has the same form as the relation proposed by Aharonov et al. based on measurement models . However, $`\tau `$ is not due to the effect of any measuring apparatus, it is an intrinsic uncertainty associated with an intrinsic time-of-arrival distribution. It is not the coupling introduced in a measurement between the particle and other degrees of freedom that leads to this relation but the very quantum mechanical nature of the particle alone and the lower bound of the energy. To elaborate the figure, $`E_0/ϵ`$ has been increased regularly from the minimum possible value $`1`$. (For smaller values it is impossible to satisfy the subsidiary condition.) For each value the minimization of $`\tau ϵ`$ is performed and the corresponding $`E/ϵ`$ is obtained. As $`E/ϵ\mathrm{}`$ the minimum uncertainty product tends to the (global) minimum $`\mathrm{}/2`$, the same value found for position and momentum, because the effect of the lower bound of the energy tends to disappear in that limit, and $`\eta (E)`$ becomes closer and closer to a Gaussian centered at $`E_0`$ with variance $`ϵ^2`$ . However, in the opposite limit the lower bound at $`E=0`$ plays an important role. Indeed, $`E/ϵ0`$ corresponds to the limit $`E_0/ϵ1`$, and the only way to satisfy the constraint is by strictly localizing $`\eta (E)`$ at $`E=0`$, ($`\mathrm{\Delta }E0`$), but this completely delocalizes the conjugate time variable, namely $`\tau \mathrm{}`$. Thus, (15) appears as a consequence of the ordinary uncertainty principle, due to the tendency of the minimum uncertainty product states to have smaller variances for smaller energies. (The precise dependence for arbitrary values of $`E_0/ϵ`$ has to be obtained numerically.) The second question we shall address is the generalization of the free motion distribution (2) for arbitrary potentials and positions. A generalization based on a quantization of classical expressions as in (1) is problematic: the classical expressions for the time of arrival will rarely be analytical, not all phase space points lead to arrival, and the ordering problems may be formidable. The way out though, will be surprisingly simple in terms of crossing states. There is in fact nothing that limits (6), and the corresponding operator in (5) to free motion. In particular, the classical motivation for considering $`|u_\alpha `$ a “crossing state” is equally valid when an arbitrary potential is present. The state (3) may be regarded as one that has evolved “backwards” with $`H_0`$ a time $`T`$ from the crossing state $`|u_\alpha `$ so that it becomes $`|u_\alpha `$ precisely at the nominal arrival time $`T`$. In the same vein we construct for an arbitrary Hamiltonian $`H`$ $$|T,\alpha ;X=e^{i\widehat{H}T/\mathrm{}}|u_\alpha ,$$ (16) so that at the nominal arrival time $`T`$, $`|u_\alpha `$ is recovered. The generalization of the arrival time distribution for arbitrary potentials is therefore $$\mathrm{\Pi }(t+T;X;\psi (t))=\underset{\alpha }{}|u_\alpha |e^{i\widehat{H}T/\mathrm{}}|\psi (t)|^2.$$ (17) It is evidently covariant under time translations as it should; in general it is not normalized, and it may be unnormalizable (its classical counterpart shares these properties). For example, it may be constant for stationary states, or periodic for oscillating coherent states in a harmonic potential, but it provides in any case relative information by comparison of two times. 
Consistent with this, the states $`|T,\alpha ;X`$ do not form in general a complete basis. According to its classical analog it takes into account any crossings (not only first, or last). To illustrate the distribution we have evaluated $`\mathrm{\Pi }_+`$ for a collision with tunnelling at three different positions before, in, and after a square barrier, see Figure 2. The only previous attempt to generalize $`\mathrm{\Pi }_K`$ applied only to asymptotic positions where the motion is essentially free . Our generalization is instead valid for arbitrary positions. One of us (J.G.M.) acknowledges C. R. Leavens for helpful discussions. This work was supported by Gobierno Autónomo de Canarias (PB/95), MEC (PB97-1482), and CERION.
no-problem/9904/astro-ph9904195.html
ar5iv
text
# An Efficient Implementation of Flux Formulae in Multidimensional Relativistic Hydrodynamical Codes ## Abstract We derive and analyze a simplified formulation of the numerical viscosity terms appearing in the expression of the numerical fluxes associated to several High-Resolution Shock-Capturing schemes. After some algebraic pre-processing, we give explicit expressions for the numerical viscosity terms of two of the most widely used flux formulae, which implementation saves computational time in multidimensional simulations of relativistic flows. Additionally, such treatment explicitely cancells and factorizes a number of terms helping to amortiguate the growing of round-off errors. We have checked the performance of our formulation running a 3D relativistic hydrodynamical code to solve a standard test-bed problem and found that the improvement in efficiency is of high practical interest in numerical simulations of relativistic flows in Astrophysics. thanks: This work has been supported by the Spanish DGES (grant PB97-1432). The calculations were carried out on a SGI Origin 2000, at the Centre de Informàtica de la Universitat de València. PACS 47.11.+j, 47.75.+f, 95.30.Lz Key words: Non-linear Systems of Conservation Laws. High Resolution Shock Capturing methods. Special Relativistic Hydrodynamics. General Relativistic Hydrodynamics. The numerical study of the evolution of multidimensional relativistic flows turns out to be a topic of crucial interest in, at least, two different scientific fields: Nuclear Physics (studies of the properties of the equation of state for nuclear matter via comparison of simulations and experiments of heavy ion collisions) and Relativistic Astrophysics. The field of Numerical Relativistic Astrophysics is recently undergoing an extraordinary developement after the important efforts of people working in building up robust codes able to describe many different astrophysical scenarios, such that relativistic jets in quasars and microquasars, accretion onto compact objects, collision of compact objects, stellar core collapse and recent models of Gamma-Ray bursts (see, e.g., the recent review in and references therein). Thus, the improvement in the efficiency of multidimensional hydro-codes becomes a necessity. It is well known the performance of modern high-resolution shock-capturing techniques (HRSC) in simulations of complex classical flows. Most of the HRSC methods are based on the solution of local Riemann problems (i.e., initial value problems with discontiuous initial data) and since 1991 several Riemann Solvers or Flux Formulae have been specifically designed in relativistic fluid dynamics (see, e.g., for a review on Riemann solvers in Relativistic Astrophysics). In addition, in a recent paper we showed the way for applying special relativistic Riemann solvers in General Relativistic Hydrodynamics, hence any future new Riemann solver, exhaustively analyzed in Special Relativistic Hydrodynamics (SRH), could be applied to get the numerical solution of local Riemann problems in General Relativistic Hydrodynamics. Consequently, the interest of the results we obtain in this note goes beyond the domain of SRH and can be easily extended to General Relativistic Hydrodynamics. For consistency, we start by summarizing the basics of the HRSC techniques. A system of conservation laws is $$\frac{𝐮}{t}+\underset{i=1}{\overset{3}{}}\frac{𝐟^i(𝐮)}{x_i}=0$$ (1) where u$`\mathrm{}^d`$ is the vector of unknowns and $`𝐟^i`$(u) is the flux in the $`i`$-direction. 
In the above system (1) we can define a $`5\times 5`$-Jacobian matrix $`^i`$(u) associated to the flux in the $`i`$-direction as: $$^i=\frac{𝐟^i(𝐮)}{𝐮}.$$ (2) The system is said to be hyperbolic if the Jacobian matrices have real eigenvalues. The main ingredients of a HRSC algorithm are: i) A finite discretization of the equations in conservation form (1). Using a method of lines, this discretization reads: $$\frac{d𝐮_{i,j,k}(t)}{dt}=\frac{\widehat{𝐟}_{i+\frac{1}{2},j,k}\widehat{𝐟}_{i\frac{1}{2},j,k}}{\mathrm{\Delta }x}\frac{\widehat{𝐠}_{i,j+\frac{1}{2},k}\widehat{𝐠}_{i,j\frac{1}{2},k}}{\mathrm{\Delta }y}\frac{\widehat{𝐡}_{i,j,k+\frac{1}{2}}\widehat{𝐡}_{i,j,k\frac{1}{2}}}{\mathrm{\Delta }z}$$ (3) where subscripts $`i,j,k`$ are related, respectively, with $`x`$, $`y`$ and $`z`$-discretizations, and refer to cell-centered quantities. The cell width, in the three coordinate directions are, respectively, $`\mathrm{\Delta }x`$, $`\mathrm{\Delta }y`$ and $`\mathrm{\Delta }z`$. ii) Quantities $`\widehat{𝐟}_{i+\frac{1}{2},j,k}`$, $`\widehat{𝐠}_{i,j+\frac{1}{2},k}`$ and $`\widehat{𝐡}_{i,j,k+\frac{1}{2}}`$ are called the numerical fluxes at the cell interfaces. These numerical fluxes are, in general, functions of the states of the system at each side of the cell interface. Some HRSC methods derive expressions for the numerical fluxes by giving a consistent flux formulae or solving local Riemann problems, with an exact or approximate Riemann solver, after a cell reconstruction procedure that gives the state at both sides of the interface, denoted by $`L`$ (left state) and $`R`$ (right state). Several monotonic cell reconstruction prescriptions have been given in the scientific literature to achieve different orders of spatial accuracy . For clarity, from now on we will omit the indexes relative to the grid and restrict our study to the $`x_1`$-splitting of the above system (1), assuming that the vector of unknowns satisfies $`𝐮=𝐮(x_1,t)`$. We have focussed our analysis to some of the most popular HRSC algorithms, and analyzed their expressions for the numerical fluxes. Hence, the sample considered is: HLLE , Roe , Marquina (M) , and a modified Marquina’s flux formula (MM) . The above selection gathers the most fundamental differences among the large sample of HRSC flux formulae. HLLE is the simplest one, it does not need the full spectral decomposition of the Jacobian matrices. Roe’s solver linearizes the information contained in the spectral decomposition into an average state. Marquina’s (and its modified version) flux formula considers the information coming from each side of a given interface (it is not a Riemann solver) and, in some astrophysical applications, has produced better results in modelling complex flows. After some algebraic work, all these flux formulae can be cast into the following general form: $$\widehat{𝐟}(𝐮^L,𝐮^R)=\frac{1}{2}\left((+\stackrel{~}{}^L)𝐟^L+(\stackrel{~}{}^R)𝐟^R+(𝒬^L𝐮^L𝒬^R𝐮^R)\right)$$ (4) where $`𝐟^{L,R}`$ stands for the fluxes evaluated at the states $`𝐮^{L,R}`$ and $``$ is the unit matrix. Following Harten , the $`𝒬^,`$ terms in the above equation will be referred as the numerical viscosity matrix. 
Matrices $`\stackrel{~}{}^{L,R}`$ and $`𝒬^{L,R}`$ can be expressed as linear combinations of the projectors onto each eigenspace, i.e., the direct product of the corresponding left and right eigenvectors $`𝐥_p,𝐫_p`$ associated to the p-th characteristic field (p=1,…,d), $$\stackrel{~}{}^{L,R}=\underset{p=1}{\overset{d}{}}b_p𝐥_p^{L,R}𝐫_p^{L,R}$$ (5) $$𝒬^{L,R}=\underset{p=1}{\overset{d}{}}c_p𝐥_p^{L,R}𝐫_p^{L,R}$$ (6) where superscripts $`L,R`$ indicate that the eigenvectors are evaluated at the state $`𝐮^{L,R}`$. The values of the coefficients $`b_p`$ and $`c_p`$ appearing in the above definitions of matrices $`\stackrel{~}{}^{L,R}`$ and $`𝒬^{L,R}`$ depend on the eigenvalues $`\lambda _p`$ as shown in Table I, for the four flux formulae analyzed. Several comments concerning Table I are in order: i) If we take into account the orthonormality relations between the right and left eigenvectors $$\underset{p=1}{\overset{d}{}}𝐥_p𝐫_p=$$ (7) and the fact that the coefficients $`b_p`$ and $`c_p`$ are, in the case of HLLE, independents of $`p`$, then matrices $`\stackrel{~}{}^{L,R}`$ and $`𝒬^{L,R}`$ are, trivially, the unit matrix multiplied by the corresponding factors. ii) For HLLE’s and Roe’s flux formulae their corresponding matrices $`\stackrel{~}{}^{L,R}`$ and $`𝒬^{L,R}`$ satisfy the relations: $`\stackrel{~}{}^L`$ = $`\stackrel{~}{}^R`$, $`𝒬^L`$ = $`𝒬^R`$ iii) As it is well known, the knowledge of the spectral decomposition of the Jacobian matrices is a basic ingredient to build up Riemann solvers or many flux formulae. Nevertheless, while HLLE’s flux formula only needs the values of the maximum and minimum speeds of propagation of the signals, Roe’s and Marquina’s flux formulae need explicitly the full knowledge of the spectral decomposition, including right and left eigenvectors. The system governing the evolution of multidimensional relativistic perfect fluids can be written in Cartesian coordinates in the form (1), with $`d=5`$, where, in units such that the speed of light $`c=1`$, the vector of unknowns $`𝐮`$ is given by $$𝐮=(D,S^1,S^2,S^3,\tau )^T,$$ (8) the fluxes are defined by $$𝐟^i=(Dv^i,S^1v^i+p\delta ^{1i},S^2v^i+p\delta ^{2i},S^3v^i+p\delta ^{3i},S^iDv^i)^T$$ (9) where $`D(=\rho W)`$ is the rest mass density, $`S^j(=\rho hW^2v^j)`$ is the j-component of the momentum density, and $`\tau (=\rho hW^2p\rho W)`$ is the energy density, $`W=(1\mathrm{v}^2)^{1/2}`$ is the Lorentz factor, $`\rho `$ is the rest–mass density, $`p`$ the pressure and $`h`$ the specific enthalpy given by $`h=1+\epsilon +p/\rho `$ with $`\epsilon `$ being the specific internal energy. The system of SRH is closed with an equation of state $`p=p(\rho ,\epsilon )`$ from which the local sound speed, $`c_s`$, can be obtained $$hc_s^2=\frac{p}{\rho }+(p/\rho ^2)\frac{p}{ϵ},$$ (10) In a previous paper we derived the explicit analytical expressions for the full (right and left) spectral decomposition. We denote the five characteristic fields by $`p=1,\mathrm{},5,0,0,0,+`$, in the standard ordenation. A very worthy simplification on the calculation of matrices $`𝒬`$ arises when some eigenvalue is degenerate, i.e., when the system is not strictly hyperbolic. In SRH, like in multidimensional Newtonian hydrodynamics, there is a linearly degenerate field, $`p=0`$, such that the corresponding eigenvalue $`\lambda _0`$ is triple (the system in the $`j`$-direction splitting is not strictly hyperbolic, although the set of eigenvectors is complete). 
According to equation (6), and using the orthonormality relations between the right and left eigenvectors $$\underset{k=1}{\overset{3}{}}r_{0,k}^ml_{0,k}^n=\delta ^{mn}r_+^ml_+^nr_{}^ml_{}^n$$ (11) where $`m,n=1,\mathrm{}5`$ denote the components of the 5-vector, it is possible to eliminate the three eigenvectors associated to the degenerate field and to write down the following simplified expression (omitting $`L`$,$`R`$ superscripts) $$𝒬^{mn}=c_0\delta ^{mn}+(c_+c_0)r_+^ml_+^n+(c_{}c_0)r_{}^ml_{}^n.$$ (12) Notice that only $`r_\pm `$ and $`l_\pm `$ are needed to evaluate the numerical viscosity. The same procedure can be applied to any system of conservation laws where one of the eigenvectors has multiple degeneracy, because orthogonality relations always allow us to skip the explicit dependence on one of the vector subspaces of the spectral decomposition. In particular, it could be of great interest in the case of the equations of relativistic magnetohydrodynamics where, in the ansatz of a directional splitting, similar degeneracy arises in the structure of the characteristic fields associated to each one of the fluxes. The explicit formulae for the numerical viscosity term corresponding to the system of equations of special relativistic hydrodynamics are: HLLE’s flux formulae. Since the numerical viscosity matrix is proportional to the identity, the application of the above recipes is obvious. Roe’s flux formulae. The numerical flux across some given interface can be written $$\widehat{𝐟}(𝐮^L,𝐮^R)=\frac{1}{2}[𝐟^L+𝐟^R+𝐪]$$ (13) $`𝐪`$ being the five–vector calculated from the corresponding numerical viscosity matrices of Table I: $$𝐪=𝒬(𝐮^L𝐮^R)𝒬\mathrm{\Delta }𝐮$$ (14) In Roe’s Riemann solver the quantities relative to the spectral decomposition are evaluated using the corresponding Roe-averages of the left and right states, denoted by $`\stackrel{~}{𝐮}`$ (see , for the Newtonian case and for the relativistic case). In practice, other particular averaging (e.g., arithmetic means) have also been used. Note that in the following expressions (15) all quantities are evaluated using Roe’s average, except for $`\mathrm{\Delta }u_m`$. After some algebra, the viscosity vector associated to the numerical flux in the $`j`$-direction is $`q_1`$ $`=`$ $`\lambda _0\mathrm{\Delta }u_1+\chi _a`$ $`q_2`$ $`=`$ $`\lambda _0\mathrm{\Delta }u_2+hW(v_x\chi _a+\chi _b\delta _{jx})`$ $`q_3`$ $`=`$ $`\lambda _0\mathrm{\Delta }u_3+hW(v_y\chi _a+\chi _b\delta _{jy})`$ $`q_4`$ $`=`$ $`\lambda _0\mathrm{\Delta }u_4+hW(v_z\chi _a+\chi _b\delta _{jz})`$ $`q_5`$ $`=`$ $`\lambda _0\mathrm{\Delta }u_5+hW(\chi _a+v_j\chi _b)\chi _a`$ (15) where $`\chi _a`$ $`=`$ $`{\displaystyle \underset{m=1}{\overset{5}{}}}\left[(\lambda _+\lambda _0)l_+^m+(\lambda _{}\lambda _0)l_{}^m\right]\mathrm{\Delta }u_m`$ (16) $`\chi _b`$ $`=`$ $`{\displaystyle \underset{m=1}{\overset{5}{}}}\left[(\lambda _+\lambda _0)V_+^jl_+^m+(\lambda _{}\lambda _0)V_{}^jl_{}^m\right]\mathrm{\Delta }u_m`$ (17) $`V_\pm ^j`$ $`=`$ $`{\displaystyle \frac{\lambda _\pm v^j}{1v^j\lambda _\pm }}`$ (18) M and MM- flux formulae. 
The numerical flux across a given interface can be written like equation (13) with $$𝐪=𝐪^L𝐪^R$$ (19) $$𝐪^{L,R}=𝒬^{L,R}𝐮^{L,R}$$ (20) Omitting the superscripts $`L,R`$ and taken into account the expressions in Table I for MM and the results in , the viscosity vector in the $`x`$-splitting is: $`q_1^{L,R}`$ $`=`$ $`{\displaystyle \frac{h^2}{\mathrm{\Delta }}}\left\{M\left[𝒜_{}\mathrm{\Omega }_+𝒜_+\mathrm{\Omega }_{}\right]+p(c_+\mathrm{}_+c_{}\mathrm{}_{})\right\}+`$ $`c_0p{\displaystyle \frac{W}{h}}\left\{{\displaystyle \frac{𝒦}{𝒦1}}+{\displaystyle \frac{v_y^2+v_z^2}{1v_x^2}}\right\}`$ $`q_2^{L,R}`$ $`=`$ $`{\displaystyle \frac{h^2W}{\mathrm{\Delta }}}\left\{M𝒜_+𝒜_{}\left[\mathrm{\Omega }_+\lambda _+\mathrm{\Omega }_{}\lambda _{}\right]+p(c_+\lambda _+𝒜_+\mathrm{}_+c_{}\lambda _{}𝒜_{}\mathrm{}_{})\right\}+`$ $`c_0pW^2v_x\left\{{\displaystyle \frac{1}{𝒦1}}+2{\displaystyle \frac{v_y^2+v_z^2}{1v_x^2}}\right\}`$ $`q_3^{L,R}`$ $`=`$ $`{\displaystyle \frac{h^2W}{\mathrm{\Delta }}}v_y\left\{M\left[\mathrm{\Omega }_+𝒜_{}\mathrm{\Omega }_{}𝒜_+\right]+p(c_+\mathrm{}_+c_{}\mathrm{}_{})\right\}+`$ $`c_0p\left\{{\displaystyle \frac{W^2}{𝒦1}}+{\displaystyle \frac{1+2W^2(v_y^2+v_z^2)}{1v_x^2}}\right\}`$ $`q_4^{L,R}`$ $`=`$ $`{\displaystyle \frac{h^2W}{\mathrm{\Delta }}}v_z\left\{M\left[\mathrm{\Omega }_+𝒜_{}\mathrm{\Omega }_{}𝒜_+\right]+p(c_+\mathrm{}_+c_{}\mathrm{}_{})\right\}+`$ $`c_0p\left\{{\displaystyle \frac{W^2}{𝒦1}}+{\displaystyle \frac{1+2W^2(v_y^2+v_z^2)}{1v_x^2}}\right\}`$ $`q_5^{L,R}`$ $`=`$ $`{\displaystyle \frac{h^2}{\mathrm{\Delta }}}\left\{M\left[𝒜_{}\mathrm{\Omega }_+𝒟_+𝒜_+\mathrm{\Omega }_{}𝒟_{}\right]+p[c_+\mathrm{}_+𝒟_+c_{}\mathrm{}_{}𝒟_{}]\right\}+`$ (21) $`c_0p{\displaystyle \frac{W}{h}}\left\{{\displaystyle \frac{hW𝒦}{𝒦1}}+{\displaystyle \frac{(2hW1)(v_y^2+v_z^2)}{1v_x^2}}\right\},`$ with the following auxiliary quantities $`M=\rho hW^2(𝒦1),\mathrm{\Omega }_\pm =c_\pm (v_x\lambda _{}),𝒟_\pm =hW𝒜_\pm 1,`$ $`𝒦{\displaystyle \frac{\stackrel{~}{\kappa }}{\stackrel{~}{\kappa }c_s^2}},\stackrel{~}{\kappa }={\displaystyle \frac{1}{\rho }}{\displaystyle \frac{p}{\epsilon }}|_\rho ,𝒜_\pm {\displaystyle \frac{1v_xv_x}{1v_x\lambda _\pm }}`$ $`\mathrm{\Delta }=h^3W(𝒦1)(1v_xv_x)(𝒜_+\lambda _+𝒜_{}\lambda _{})`$ $`\mathrm{}_\pm =\pm \left\{v_xW^2(\mathrm{v}^2v_xv_x)(2𝒦1)(v_x𝒜_\pm \lambda _\pm )+𝒦𝒜_\pm \lambda _\pm \right\}`$ (22) where quantities $`c_{\pm ,0}`$ are given in Table I and $`\mathrm{\Delta }`$ is the determinant of the matrix of right-eigenvectors. The corresponding viscosity vectors in the other directions are trivially obtained by a cyclic permutation of subindices $`x,y,z`$. We have tested the efficiency of our numerical proposal, for Roe’s and MM’s flux formulae, running GENESIS (a 3D special relativistic hydro-code ), without any optimization at compilation level, in a SGI Origin 2000. A standard initial value problem has been chosen: $`\rho _L=10`$, $`ϵ_L=2`$, $`v_L=0`$, $`\gamma _L=5/3`$, $`\rho _R=1`$, $`ϵ_R=10^6`$, $`v_R=0`$ and $`\gamma _R=5/3`$, where the subscript $`L`$ ($`R`$) denotes the state to the left (right) of the initial discontinuity. This test problem has been considered by several authors in the past (see for details in 1D, 2D and 3D). We have compared the performance of two different implementations of the numerical flux subroutine: i) Case A, stands for the results obtained using our analytical prescription. This means to write down, in the numerical flux routine, just the expressions derived here for the viscosity vector $`𝐪`$. 
ii) Case B, stands for the results obtained running the code with a standard high-efficiency subroutine for inverting matrices (we use a LU decomposition plus an implicit pivoting which is, for general matrices, $`O(N^3)`$). This subroutine is called to get the left eigenvectors from the matrix of right eigenvectors and is adapted to the particular dimensions of the matrices ($`3\times 3`$, in 1D, $`4\times 4`$, in 2D and $`5\times 5`$, in 3D). Hence, unlike case A, now we have to calculate numerically the following quantities: the matrix of left eigenvectors, the characteristic variables and, finally, the components of the viscosity vector $`𝐪`$. Table II summarizes the results: the direct implementation of our numerical viscosity formulae leads to an improvement of the efficiency (in terms of CPU time) of the numerical fluxes subroutine in a factor which, in 3D calculations, ranges between about eleven and twelve depending on the particular flux formula used. When comparing Roe’s and MM’s cases a factor two –in favour of Roe– arises due to the fact that MM’s flux formulae needs to compute two viscosity vectors (one per each side of a given interface), unlike Roe’s flux formula which needs only one viscosity vector evaluated at the average state. As it must be, the efficiency increases with the number of spatial dimensions involved in the problem due to the computationally expensive matrix inverting operations performed at each interface to get the numerical fluxes. Since the numerical flux routine is, typically, one of the most time-consuming, it translates into a speed up factor between two and four in the total execution time, depending on the specific weight of the flux formulae routine in each particular application. Our formulation also gives a unified description of the numerical fluxes (4), permitting a unique implementation with the possibility of switching in cases when the utilisation of a specific flux formula is more appropriate. In addition, due to the fact that we have eliminated, in the generalized MM’s flux formula, all the conditional clauses, the efficiency is ensured either for scalar or vectorial processors. Another worthy by-products of our algebraic pre-processing concerns with the significant reduction of round-off errors, as a consequence of the number of operations suppressed and factorization. One of the important issues in designing a multidimensional hydro-code is the accurate preservation of any symmetries present in a physical problem. A numerical violation of these symmetries could arise as a consequence of accumulation of round-off errors in the calculation of the numerical fluxes, as we have explained in a previous paper . The algebraic simplifications, shown in the present paper, reduce the number of operations and cure such problem. Two last additional consequences arise from our work. First is that similar expressions can be worked out for any non-linear hyperbolic system of conservation laws for which the full spectral decomposition is known. In particular, when some of the vectorial subspaces has multiple degeneracy, a similar algebraic preprocessing is very convenient. The other important consequence is that an appropriate combination of a simplified formulation of the numerical viscosity together with the use of special relativistic Riemann solvers in General Relativistic Hydrodynamics , should allow a very easy and efficient extension to General Relativistic Hydrodynamics.
no-problem/9904/cond-mat9904065.html
ar5iv
text
# Morphological phase transitions of thin fluid films on chemically structured substrates ## Abstract Using an interface displacement model derived from a microscopic density functional theory we investigate thin liquidlike wetting layers adsorbed on flat substrates with an embedded chemical heterogeneity forming a stripe. For a wide range of effective interface potentials we find first-order phase transitions as well as continuous changes between lateral interfacial configurations bound to and repelled from the stripe area. We determine phase diagrams and discuss the conditions under which these morphological changes arise. A variety of experimental techniques have emerged which allow one to endow solid surfaces with a rich, well-defined and permanent chemical pattern while keeping the surface flat on a molecular scale (see, *e.g.*, ref. ). An important application of these structures is microfluidics , *i.e.*, the guidance of tiny amounts of adsorbed liquids along these chemical microstructures. This enables one to control the microscopic flow of liquids on designated chemical channels and to faciliate the fabrication of “chemical chips” which may act as microlaboratories for the investigation and processing of rare and valuable liquids . Although these applications involve dynamical processes, as a prerequisite it is important to investigate these systems in thermal equilibrium as a function of the thermodynamic parameters pressure (or, equivalently, chemical potential $`\mu `$) and temperature $`T`$. Some recent studies were concerned with the static behaviour of liquid channels on chemical lanes within the micrometre range (see refs. and ). On this scale the morphology of the adsorbed liquid is determined by gross features such as the various surface tensions involved. However, with the rapidly proceeding miniaturisation of microstructures in mind, here we are interested in a much smaller scale within which details of the molecular forces become relevant . The paradigmatic system considered here is a chemically heterogeneous surface which, in top view, exhibits a single stripe. The substrate is flat and composed of two different chemical species such that one of them (denoted by “$`+`$”) forms a single slab of width $`a`$ embedded in the other one (denoted by “$``$”; see fig. 1(a)). Based on a density functional approach it turns out that the liquid-vapour interface of a thin liquidlike layer in contact with the wall interacts with the wall via an *effective* interface potential $`\mathrm{\Lambda }(x,l(x))`$, where $`l(x)`$ is the local thickness of the layer at the lateral position $`x`$. The substrate potential entering into $`\mathrm{\Lambda }(x,l(x))`$ has been obtained from a pairwise summation over all substrate-fluid particle interactions assuming sharp chemical steps at $`x=\pm a/2`$. Since the system we consider here is translationally invariant in the $`y`$ direction, $`\mathrm{\Lambda }`$ does not depend on $`y`$. We take both the substrate-fluid ($`s`$) and the fluid-fluid ($`f`$) interaction potential to be of Lennard-Jones type: $`\varphi _{s,f}(r)=4ϵ_{s,f}((\sigma _{s,f}/r)^{12}(\sigma _{s,f}/r)^6)`$. We choose the two chemical species such that a flat, semi-infinite, and *homogeneous* substrate composed of each of the species *alone* exhibits an effective interface potential $`\mathrm{\Lambda }_\pm (l)`$ as depicted in fig. 1(b). 
(One may interpret these interface potentials as corresponding to a “hydrophilic” ($`\mathrm{\Lambda }_{}(l)`$) and a “hydrophobic” ($`\mathrm{\Lambda }_+(l)`$) surface even for a nonvolatile liquid.) For our choice of potential parameters the outer part of the substrate undergoes a critical wetting transition at $`k_BT_w/ϵ_f=1.2`$, whereas a homogeneous substrate filled with species corresponding to the stripe part exhibits a first-order wetting transition at $`k_BT_w/ϵ_f1.102`$. For the temperature $`k_BT/ϵ_f=1.1`$ considered throughout the paper both substrate types are only partially wet at bulk liquid-vapour coexistence $`\mu =\mu _0`$, *i.e.*, $`\mathrm{\Delta }\mu =\mu _0\mu =0`$. Within a simple interface displacement model, which can be derived systematically from density functional theory (see ref. ) the equilibrium contour $`l(x)`$ of the liquid-vapour interface minimizes the functional $$\mathrm{\Omega }_s[l(x)]=_A𝑑x𝑑y\mathrm{\Lambda }(x,l(x))+\sigma _{lg}_A𝑑x𝑑y\sqrt{1+\left(\frac{dl(x)}{dx}\right)^2}$$ (1) with the surface area $`A=L_xL_y`$ of the substrate surface and $`\mathrm{\Lambda }(x,l)=\mathrm{\Delta }\mu \mathrm{\Delta }\rho l+\omega (x,l)`$ where, as it turns out, $`\omega (x,l)=_{i2}a_i(x)l^i`$ for Lennard-Jones potentials, $`\mathrm{\Delta }\rho =\rho _l\rho _g`$ is the difference in number densities between the bulk phases, and $`\sigma _{lg}`$ is the surface tension associated with the area of the liquid-vapour interface. Instead of the chemical potential one may use the pressure $`p`$ of the bulk vapour phase as a thermodynamic control parameter. In this case $`\mathrm{\Delta }\mu =0`$ corresponds to $`p=p_{sat}(T)`$ at which the vapour phase is saturated. In a pressure-temperature ensemble one has to replace $`\mathrm{\Delta }\mu \mathrm{\Delta }\rho `$ by the pressure difference $`\mathrm{\Delta }p=p_{sat}p`$. In eq. (1) we have omitted the free energy contributions from the wall-liquid interface at $`z=0`$ which are constant with respect to $`l(x)`$. Subtraction of the surface free energy $`\mathrm{\Omega }_s(l_{})`$ corresponding to the homogeneous outer (“$``$”) substrate yields the line contribution $`L_y\mathrm{\Omega }_l[l(x)]=\mathrm{\Omega }_s[l(x)]\mathrm{\Omega }_s(l_{})`$ to the free energy of the fluid configuration associated with the presence of the chemical stripe. $`\mathrm{\Omega }_s[l(x)]`$ (or equivalently, $`\mathrm{\Omega }_l[l(x)]`$) is minimized numerically with respect to $`l(x)`$ yielding the equilibrium contour within mean field theory. Using the interface potential $`\mathrm{\Lambda }(x,l)`$ we find equilibrium interfacial morphologies, pertinent examples of which are shown in fig. 2(a). This figure depicts the interface profiles for varying stripe widths $`a`$ at a fixed value $`\mathrm{\Delta }\mu =0.003ϵ_f`$. Within a wide range of values for $`a`$ there are two different minimal interfaces, one “bound” to and the other “repelled” from the stripe area. Figure 2(b) displays the values of the line free energy density $`\mathrm{\Omega }_l`$ corresponding to the interface profiles shown in fig. 2(a). For large $`a`$ the solution bound to the stripe area has a lower line free energy $`\mathrm{\Omega }_l`$ than the solution repelled from the stripe. As $`a`$ is decreased the latter solution is favoured in terms of the free energy. Thus at some value $`a=a_t`$ there is a phase transition between both interfacial configurations. Due to the break in slope of $`\mathrm{\Omega }_l(a)`$ at $`a=a_t`$ this transition is first order. 
We refer to this phenomenon as “morphological phase transition” in the sense that the interface profile $`l(x)`$ undergoes an abrupt structural change. The repelled solution exhibits a coverage which is even larger than that corresponding to the homogeneous “$``$” substrate. This counterintuitive result shows that compared with a homogeneous substrate, one can *increase* the total adsorption by the immersion of a slab of a material that favours *thinner* liquidlike films. The occurrence of this phenomenon persists if the depth of the slab is not macroscopicly large as in fig. 1(a) but only molecularly small corresponding to a different material within an imprinted overlayer covering a homogeneous underlying substrate. This example shows that gradual changes in the architecture of chemically microstructured devices can lead to abrupt changes in the morphology of adsorbed liquids. We emphasise that such phase transitions are not only of theoretical interest. Their existence illustrates the care required to avoid the unwanted filling of the nonwet space between liquid channels in such devices. For $`|x|\mathrm{}`$ the interface profile $`l(x)`$ asymptotically approaches the equilibrium film thickness $`l_{}`$ corresponding to the outer part of the substrate. If the stripe width is sufficiently large, in the middle of the stripe area $`l(x)`$ also approaches the equilibrium film thickness $`l_+`$ corresponding to the stripe part. In this case, as expected, $`l(x)`$ smoothly interpolates between the two minima I and II of the interface potentials of the respective homogeneous and flat “$`+`$” and “$``$” substrate taking full advantage of the deep minimum I (see fig. 1(b)). If the stripe width shrinks this gain in free energy decreases accordingly and, moreover, the relative cost for this benefit in terms of the associated increased area of the liquid-vapour interface increases. For $`a<a_t`$ the loss of free energy by occupying the higher minimum III instead of I is outweighed by the gain in free energy due to a reduced area of the liquid-vapour interface, leading to the repelled solution because the position of the local minimum III occurs at a larger value of $`l`$ than for II. The physical nature of this transition is similar to the “unbending” transition on a corrugated wall as described in ref. . As explained in ref. similar effects such as “out-of-phase behaviour” can also occur on nonplanar walls if there are competing minima of the effective interface potential. For values of $`\mathrm{\Delta }\mu `$ which are larger than a certain critical value $`\mathrm{\Delta }\mu _c`$ the morphological phase transition does not occur, *i.e.*, there is only one stable solution for every value of $`a`$. Figure 3 shows the behaviour of the liquid-vapour interfaces for $`\mathrm{\Delta }\mu =0.014ϵ_f>\mathrm{\Delta }\mu _c`$. In this case the interface profile gradually changes from a repelled configuration for small $`a`$ to a bound configuration for large $`a`$. The corresponding line free energy $`\mathrm{\Omega }_l`$ is an analytic function that does not exhibit a cusp singularity and associated metastable branches. Figure 4 displays the line of first-order phase transitions in the $`(a,\mathrm{\Delta }\mu )`$ plane at which the bound and the repelled solutions coexist. Within the present mean field description this line ends at a critical point with $`\mathrm{\Delta }\mu _c0.010ϵ_f`$ and $`a_c3.8\sigma _f`$. 
The number of degrees of freedom involved in the morphological phase transition is proportional to the volume $`L_ya(l_{}l_+)`$ and thus quasi-onedimensional. Therefore the system cannot support a phase transition in the strict sense of statistical mechanics. Actually the fluctuations are so strong that at the phase boundary shown in fig. 4 the system cannot sustain the sharp coexistence between the bound and the repelled configuration. Instead the system will break up into domains along the $`y`$ direction with alternating regions of increased and reduced coverage with the positions of the domain boundaries fluctuating. This leads to a rounding of the first-order phase transition due to the aforementioned finite-size effect in two directions and eliminates the critical point shown in fig. 4. However, for large values of $`a`$ or $`l_{}l_+`$ the number of degrees of freedom involved becomes quasi-twodimensional so that the rounding of the first-order phase transition sharpens up turning it ultimately into a true discontinuity. Following the general ideas of finite size scaling of first-order phase transitions one can estimate the width $`2\delta \mu `$ of the rounding around the mean field location $`\mathrm{\Delta }\mu _t`$ of the phase transition according to the implicit equation $$|\mathrm{\Delta }\mathrm{\Omega }_l(T,a,\mathrm{\Delta }\mu _t(a)+\delta \mu (a))|\frac{k_BT}{\xi _b}\mathrm{exp}\left(\frac{a\mathrm{\Sigma }_l(T,\mathrm{\Delta }\mu _t(a))}{k_BT}\right)$$ (2) where $`\xi _b`$ is the bulk correlation length (which is of the order of $`\sigma _f`$) and $`\mathrm{\Sigma }_l\sigma _{lg}(l_{}l_+)`$ estimates the cost in free energy to maintain an interface between two domains as described above; $`\mathrm{\Delta }\mathrm{\Omega }_l`$ is the difference in line free energies between the bound and the repelled solution (compare refs. and ). A rough numerical estimate of $`\delta \mu `$ yields the boundaries of the smeared out transition region which are indicated by the dashed lines in fig. 4. Whereas far from the mean field critical point the width of the transition region is exponentially small such that the transition is quasi-first order, for $`\mathrm{\Delta }\mu `$ increasing towards $`\mathrm{\Delta }\mu _c`$ the rounding of the phase transition becomes significant. In closing we note that the only prerequisite for the occurrence of the morphological phase transition described here is that the effective interface potentials of the materials involved exhibit the gross features shown in fig. 1(b). This holds both for volatile and nonvolatile liquids and does not depend on a particular type of interaction potential or thermodynamic ensemble. Such morphological transitions also occur for other shapes of chemical substrate heterogeneities such as periodic arrays of stripes and circular or even irregular areas. A variety of experimental techniques such as reflection interference contrast microscopy (RICM) or a force microscope used in tapping mode are being developed which allow one to scan the morphology of fluid films with a spatial resolution down to fractions of a nanometre. The detection of the morphological phase transition predicted here could serve as a promising testing ground for the development of such experimental techniques. ###### Acknowledgements. C.B. and S.D. gratefully acknowledge financial support by the German Science Foundation within the Special Research Initiative *Wetting and Structure Formation at Interfaces*.
no-problem/9904/hep-ph9904442.html
ar5iv
text
# Generalized polarizabilities of the pion in chiral perturbation theory ## I Introduction Compton scattering of real photons (RCS) is one of the simplest reactions for obtaining information on the structure of a stable composite system. When expanded in the frequency of the photon, the leading-order term of the low-energy scattering amplitude is specified by the model-independent Thomson limit in terms of the charge and the mass of the target. Genuine structure effects first appear at second order and can be parametrized in terms of the electric and magnetic polarizabilities (for an overview see, e.g., Refs. ). As there is no stable pion target, the empirical information on the electromagnetic polarizabilities has been extracted from high-energy pion-nucleus bremsstrahlung and radiative pion photoproduction off the nucleon . In principle, the electromagnetic polarizabilities of the pion also enter into the crossed process $`\gamma \gamma \pi \pi `$. However, there is some debate concerning the accuracy of extracting these quantities from the crossed channel . From a theoretical point of view, a precise determination of the pion polarizabilities is of great importance, since (approximate) chiral symmetry allows one to predict the electromagnetic polarizabilities of the charged pion in terms of the radiative decay $`\pi ^+e^+\nu _e\gamma `$ . Corrections to the leading-order PCAC result have been calculated at $`𝒪(p^6)`$ in chiral perturbation theory and turn out to be be rather small . New experiments are presently being carried out or have been proposed to significantly reduce the uncertainties in the empirical results and thus subject the predictions of chiral symmetry to a stringent test. Clearly, the possibilities to investigate the structure of the target increase substantially if virtual photons are used, because energy and three-momentum can be varied independently and, furthermore, the longitudinal component of the transition current can be explored. In particular, virtual Compton scattering (VCS) off the nucleon, as tested in the reaction $`e^{}+pe^{}+p+\gamma `$, has attracted considerable interest (see, e.g., Refs. ). The pion-VCS amplitude of $`\gamma ^{}+\pi \gamma +\pi `$ can, in principle, be studied through the inelastic scattering of high-energy pions off atomic electrons, $`\pi +e^{}\pi +e^{}+\gamma `$. Such events are presently analyzed as part of the SELEX E781 experiment . In this paper, we will investigate the VCS reaction $`\gamma ^{}+\pi \gamma +\pi `$ in the framework of chiral perturbation theory at $`𝒪(p^4)`$. We will first give a short survey of chiral perturbation theory and then define our conventions for the VCS invariant amplitude. We then discuss the result for the soft-photon and residual amplitudes, respectively. Finally, the model-dependent residual amplitude is analyzed in terms of alternative definitions of generalized polarizabilities. ## II The chiral Lagrangian Chiral perturbation theory (ChPT) is based on the chiral $`\text{SU(2)}_L\times \text{SU(2)}_R`$ symmetry of QCD in the limit of vanishing $`u`$\- and $`d`$-quark masses. The assumption of spontaneous symmetry breaking down to $`\text{SU(2)}_V`$ gives rise to three massless pseudoscalar Goldstone bosons with vanishing interactions in the limit of zero energies. These Goldstone bosons are identified with the physical pion triplet, the nonzero pion masses resulting from an explicit symmetry breaking in QCD through the quark masses. 
The effective Lagrangian of the pion interaction is organized in a so-called momentum expansion, $$_{\text{eff}}=_2+_4+\mathrm{},$$ (1) where the subscripts refer to the order in the expansion. Interactions with external fields, such as the electromagnetic field, as well as explicit symmetry breaking due to the finite quark masses, are systematically incorporated into the effective Lagrangian. Covariant derivatives and quark-mass terms count as $`𝒪(p)`$ and $`𝒪(p^2)`$, respectively. Weinberg’s power counting scheme allows for a classification of the Feynman diagrams by establishing a relation between the momentum expansion and the loop expansion. The most general chiral Lagrangian at $`𝒪(p^2)`$ is given by $$_2=\frac{F^2}{4}\text{Tr}\left[D_\mu U(D^\mu U)^{}+\chi U^{}+U\chi ^{}\right],$$ (2) where $`U`$ is a unimodular unitary $`(2\times 2)`$ matrix, transforming as $`V_RUV_L^{}`$ for $`(V_L,V_R)\text{SU(2)}_L\times \text{SU(2)}_R`$. As a parametrization of $`U`$ we will use $$U(x)=\frac{\sigma (x)+i\stackrel{}{\tau }\stackrel{}{\pi }(x)}{F},\sigma ^2(x)+\stackrel{}{\pi }^2(x)=F^2,$$ (3) where $`F`$ denotes the pion-decay constant in the chiral limit: $`F_\pi =F[1+𝒪(\widehat{m})]=92.4`$ MeV. We will work in the isospin-symmetric limit $`m_u=m_d=\widehat{m}`$. The quark mass is contained in $`\chi =2B_0\widehat{m}=m_\pi ^2`$ at $`𝒪(p^2)`$, where $`B_0`$ is related to the quark condensate $`<\overline{q}q>`$. The covariant derivative $`D_\mu U=_\mu U+\frac{i}{2}eA_\mu [\tau _3,U]`$ contains the coupling to the electromagnetic field $`A_\mu `$. The most general structure of $`_4`$, first obtained by Gasser and Leutwyler (see Eq. (5.5) of Ref. ), reads, in the standard trace notation, $`_4^{GL}`$ $`=`$ $`{\displaystyle \frac{l_1}{4}}\left\{\text{Tr}[D_\mu U(D^\mu U)^{}]\right\}^2+{\displaystyle \frac{l_2}{4}}\text{Tr}[D_\mu U(D_\nu U)^{}]\text{Tr}[D^\mu U(D^\nu U)^{}]+{\displaystyle \frac{l_3}{16}}\left[\text{Tr}(\chi U^{}+U\chi ^{})\right]^2`$ (6) $`+{\displaystyle \frac{l_4}{4}}\text{Tr}[D_\mu U(D^\mu \chi )^{}+D_\mu \chi (D^\mu U)^{}]+l_5\left[\text{Tr}(F_{\mu \nu }^RUF_L^{\mu \nu }U^{}){\displaystyle \frac{1}{2}}\text{Tr}(F_{\mu \nu }^LF_L^{\mu \nu }+F_{\mu \nu }^RF_R^{\mu \nu })\right]`$ $`+i{\displaystyle \frac{l_6}{2}}\text{Tr}[F_{\mu \nu }^RD^\mu U(D^\nu U)^{}+F_{\mu \nu }^L(D^\mu U)^{}D^\nu U]{\displaystyle \frac{l_7}{16}}\left[\text{Tr}(\chi U^{}U\chi ^{})\right]^2+\mathrm{},`$ where three terms containing only external fields have been omitted. For the electromagnetic interaction, the field-strength tensors are given by $`F_L^{\mu \nu }=F_R^{\mu \nu }=\frac{e}{2}\tau _3(^\mu A^\nu ^\nu A^\mu )`$. ## III Conventions In the following, we will discuss the VCS amplitude for $`\gamma ^{}(q,ϵ)+\pi ^i(p_i)\gamma (q^{},ϵ^{})+\pi ^j(p_f)`$ ($`q^20`$, $`q^2=0`$, $`q^{}ϵ^{}=0`$). Throughout the calculation we use the conventions of Bjorken and Drell with $`e^2/4\pi 1/137`$, $`e>0`$. For the isospin decomposition of the invariant amplitude we use $$_{ij}=\delta _{ij}𝒜+(\delta _{ij}\delta _{i3}\delta _{j3}),$$ (7) where $`i`$ and $`j`$ denote the cartesian isospin indices of the initial and final pions, respectively. 
With the definition $$|\pi ^\pm (p)>=\frac{1}{\sqrt{2}}[a_1^{}(p)\pm ia_2^{}(p)]|0>,|\pi ^0(p)>=a_3^{}(p)|0>,$$ we may express the physical amplitudes in terms of the isospin amplitudes $`_{\pi ^+}=_\pi ^{}`$ $`=`$ $`{\displaystyle \frac{1}{2}}(_{11}+_{22})=𝒜+,`$ (8) $`_{\pi ^0}`$ $`=`$ $`_{33}=𝒜.`$ (9) We split the contributions to $`_{ij}`$ into a pole piece ($`P`$) and a one-particle-irreducible, residual part ($`R`$), $`𝒜=𝒜_P+𝒜_R`$, $`=_P+_R`$ (see Fig. 1). Since the $`\pi ^0`$ is its own antiparticle, the electromagnetic vertex $`\pi ^0\pi ^0\gamma ^{}`$ vanishes due to charge-conjugation invariance and hence $`𝒜_P0`$. In general, the pole piece $`_P`$ and the one-particle-irreducible piece $`_R`$ are not separately gauge invariant. ## IV Soft-photon amplitude According to Weinberg’s power counting, a calculation of the $`s`$\- and $`u`$-channel pole terms at $`𝒪(p^4)`$ involves the renormalized irreducible vertex at $`𝒪(p^4)`$, $$\mathrm{\Gamma }^\mu (p^{},p)=(p^{}+p)^\mu F(q^2)+(p^{}p)^\mu \frac{p^2p^2}{q^2}[1F(q^2)],q=p^{}p,$$ (10) where $`F(q^2)`$ is the prediction for the electromagnetic form factor of the pion (see Eq. (15.3) of Ref. ). To that order, the renormalized propagator is simply given by $$i\mathrm{\Delta }_R(p)=\frac{i}{p^2m_\pi ^2+i0^+},$$ (11) with $`m_\pi ^2`$ the $`𝒪(p^4)`$ result for the pion mass squared (see Eq. (12.2) of Ref. ). Note that Eqs. (10) and (11) satisfy the Ward-Takahashi identity $`q_\mu \mathrm{\Gamma }^\mu (p^{},p)=\mathrm{\Delta }_R^1(p^{})\mathrm{\Delta }_R^1(p)`$. With these ingredients the result for $`_P`$ at $`𝒪(p^4)`$ reads $$_P=ie^2\left\{F(q^2)\left[\frac{2p_fϵ^{}(2p_i+q)ϵ}{sm_\pi ^2}+\frac{(2p_fq)ϵ\mathrm{\hspace{0.17em}2}p_iϵ^{}}{um_\pi ^2}\right]+2qϵqϵ^{}\frac{1F(q^2)}{q^2}\right\},$$ (12) which is easily seen not to be gauge invariant by itself. The set of one-particle-irreducible diagrams is shown in Fig. 2 and gives rise to a residual part of the form $$_R=ie^2\left[2ϵϵ^{}+2(q^2ϵϵ^{}qϵqϵ^{})\frac{F(q^2)1}{q^2}\right]+\stackrel{~}{}_R.$$ (13) We combine Eqs. (12) and (13) into the form $$=\stackrel{~}{}_P+\stackrel{~}{}_R,$$ (14) where $$\stackrel{~}{}_P=ie^2F(q^2)\left[\frac{2p_fϵ^{}(2p_i+q)ϵ}{sm_\pi ^2}+\frac{(2p_fq)ϵ\mathrm{\hspace{0.17em}2}p_iϵ^{}}{um_\pi ^2}2ϵϵ^{}\right],$$ (15) with the result that $`\stackrel{~}{}_P`$ and $`\stackrel{~}{}_R`$ are now separately gauge invariant. In particular, $`\stackrel{~}{}_P`$ has the form of the soft-photon result obtained in Eq. (10) of Ref. . A somewhat different approach for obtaining the soft-photon result can be found in Ref. . ## V Residual amplitudes As has been discussed in detail in Ref. , a gauge-invariant parametrization of the residual amplitude for $`\gamma ^{}+\pi \gamma +\pi `$ can be written in terms of three invariant functions $`f_i(q^2,qq^{},qP)`$, where $`P=p_i+p_f`$. At $`𝒪(p^4)`$, the result for the residual isospin amplitudes $`𝒜_R`$ and $`\stackrel{~}{}_R`$ reads: $`𝒜_R`$ $`=`$ $`ie^2(q^{}ϵqϵ^{}qq^{}ϵϵ^{}){\displaystyle \frac{m_\pi ^2+2qq^{}q^2}{8\pi ^2F_\pi ^2qq^{}}}𝒢(q^2,qq^{}),`$ (16) $`\stackrel{~}{}_R`$ $`=`$ $`ie^2(q^{}ϵqϵ^{}qq^{}ϵϵ^{})\left[{\displaystyle \frac{4(2l_5^rl_6^r)}{F_\pi ^2}}{\displaystyle \frac{2m_\pi ^2+2qq^{}q^2}{16\pi ^2F_\pi ^2qq^{}}}𝒢(q^2,qq^{})\right],`$ (17) where the combination $`2l_5^rl_6^r=(2.85\pm 0.42)\times 10^3`$ is determined through the decay $`\pi ^+e^+\nu _e\gamma `$. In Eqs. 
(16) and (17) we have introduced the abbreviation $$𝒢(q^2,qq^{})=1+\frac{m_\pi ^2}{qq^{}}\left[J^{(1)}(a)J^{(1)}(b)\right]\frac{q^2}{2qq^{}}\left[J^{(0)}(a)J^{(0)}(b)\right],$$ (18) where $$J^{(n)}(x):=_0^1𝑑yy^n\mathrm{ln}[1+x(y^2y)i0^+]$$ and $$a:=\frac{q^2}{m_\pi ^2},b:=\frac{q^22qq^{}}{m_\pi ^2}.$$ The one-loop integrals $`J^{(0)}`$ and $`J^{(1)}`$ are given by (see Appendix C of Ref. <sup>*</sup><sup>*</sup>*In reproducing these results we found Refs. and useful.) $`J^{(0)}(x)`$ $`=`$ $`\{\begin{array}{c}2\sigma \mathrm{ln}\left(\frac{\sigma 1}{\sigma +1}\right)(x<0),\hfill \\ 2+2\sqrt{\frac{4}{x}1}\text{arccot}\left(\sqrt{\frac{4}{x}1}\right)(0x<4),\hfill \\ 2\sigma \mathrm{ln}\left(\frac{1\sigma }{1+\sigma }\right)i\pi \sigma (4<x),\hfill \end{array}`$ $`J^{(1)}(x)`$ $`=`$ $`\{\begin{array}{c}\frac{1}{2}\mathrm{ln}^2\left(\frac{\sigma 1}{\sigma +1}\right)(x<0),\hfill \\ \frac{1}{2}\mathrm{arccos}^2\left(1\frac{x}{2}\right)(0x<4),\hfill \\ \frac{1}{2}\mathrm{ln}^2\left(\frac{1\sigma }{1+\sigma }\right)\frac{\pi ^2}{2}+i\pi \mathrm{ln}\left(\frac{1\sigma }{1+\sigma }\right)(4<x),\hfill \end{array}`$ with $$\sigma (x)=\sqrt{1\frac{4}{x}},x[0,4].$$ A comparison of Eqs. (16) and (17) with Eq. (18) of Ref. shows that, at $`𝒪(p^4)`$, only one of the three functions $`f_i(q^2,qq^{},qP)`$ contributes, i.e., $`f_2=f_3=0`$. Furthermore, at this order in the chiral expansion, the function $`f_1`$ does not depend on $`qP=q^{}P`$. Our result for $`𝒜_R`$ is in agreement with Ref. , where the photoproduction of neutral pion pairs in the Coulomb field of a nucleus was studied. ## VI Generalized polarizabilities of Guichon, Liu, and Thomas In order to discuss the generalized polarizabilities, we expand the function $`𝒢`$ of Eq. (18) for negative $`q^2`$ around $`qq^{}=0`$, $$𝒢(q^2,qq^{})=\frac{qq^{}}{m_\pi ^2}J^{(0)^{}}\left(\frac{q^2}{m_\pi ^2}\right)+𝒪[(qq^{})^2],J^{(0)^{}}(x)=\frac{dJ^{(0)}(x)}{dx},$$ (21) where $$J^{(0)^{}}(x)=\frac{1}{x}\left[1\frac{2}{x\sigma }\mathrm{ln}\left(\frac{\sigma 1}{\sigma +1}\right)\right]=\frac{1}{x}\left[1+2J^{(1)^{}}(x)\right],x<0.$$ (22) For the charged and neutral pion we obtain, respectively, $`f_1^{\pi ^\pm }(q^2,qq^{},qP)`$ $`=`$ $`{\displaystyle \frac{4(2l_5^rl_6^r)}{F_\pi ^2}}+{\displaystyle \frac{2qq^{}q^2}{16\pi ^2F_\pi ^2qq^{}}}𝒢(q^2,qq^{})`$ (23) $`=`$ $`{\displaystyle \frac{4(2l_5^rl_6^r)}{F_\pi ^2}}+{\displaystyle \frac{q^2}{16\pi ^2F_\pi ^2m_\pi ^2}}J^{(0)^{}}\left({\displaystyle \frac{q^2}{m_\pi ^2}}\right)+𝒪(qq^{}),`$ (24) $`f_1^{\pi ^0}(q^2,qq^{},qP)`$ $`=`$ $`{\displaystyle \frac{1}{8\pi ^2F_\pi ^2}}\left(1{\displaystyle \frac{q^2}{m_\pi ^2}}\right)J^{(0)^{}}\left({\displaystyle \frac{q^2}{m_\pi ^2}}\right)+𝒪(qq^{}).`$ (25) We will first discuss the generalized polarizabilities as defined in Ref. , where the residual amplitude was analyzed in the photon-pion center-of-mass frame in terms of a multipole expansion. Only terms linear in the frequency of the final photon were kept, and the result was parametrized in terms of “generalized polarizabilities.” The connection with the covariant approach was established in Ref. , where it was also found that only two of the three polarizabilities $`P^{(01,01)0}`$, $`P^{(11,11)0}`$, and $`\widehat{P}^{(01,1)0}`$ of Ref. are independent, once the constraints due to charge conjugation are combined with particle-crossing symmetry. According to Eqs. (35) and (36) of Ref. 
we define generalized electric and magnetic polarizabilities $`\alpha (|\stackrel{}{q}|^2)`$ and $`\beta (|\stackrel{}{q}|^2)`$, respectively, as $`\alpha (|\stackrel{}{q}|^2)`$ $``$ $`{\displaystyle \frac{e^2}{4\pi }}\sqrt{{\displaystyle \frac{3}{2}}}P^{(01,01)0}(|\stackrel{}{q}|)`$ (26) $`=`$ $`{\displaystyle \frac{e^2}{8\pi m_\pi }}\sqrt{{\displaystyle \frac{m_\pi }{E_i}}}\left[f_1(\omega _0^2|\stackrel{}{q}|^2,0,0)+2m_\pi {\displaystyle \frac{|\stackrel{}{q}|^2}{\omega _0}}f_2(\omega _0^2|\stackrel{}{q}|^2,0,0)\right],`$ (27) $`\beta (|\stackrel{}{q}|^2)`$ $``$ $`{\displaystyle \frac{e^2}{4\pi }}\sqrt{{\displaystyle \frac{3}{8}}}P^{(11,11)0}(|\stackrel{}{q}|)={\displaystyle \frac{e^2}{8\pi m_\pi }}\sqrt{{\displaystyle \frac{m_\pi }{E_i}}}f_1(\omega _0^2|\stackrel{}{q}|^2,0,0),`$ (28) where $`\omega _0=q_0|_{\omega ^{}=0}=m_\pi \sqrt{m_\pi ^2+|\stackrel{}{q}|^2}`$. A few remarks are in order at this point. 1. In our present work, we strictly stick to the convention of Ref. . This is why Eqs. (26) and (28) differ by an overall factor $`1/2m_\pi `$ from Ref. , where in Eq. (1) an additional factor $`2m_\pi `$ was introduced for the spin-0 case. 2. The variable $`q^2`$ only appears in the combination $`q^2/m_\pi ^2`$, resulting in $$\frac{q^2}{m_\pi ^2}|_{\omega ^{}=0}=2\frac{m_\pi E_i}{m_\pi },E_i=\sqrt{m_\pi ^2+|\stackrel{}{q}|^2}.$$ 3. The factor $`\sqrt{m_\pi /E_i}`$ originates from an additional normalization factor $`𝒩`$ in Eq. (32) of Ref. , such that $$\frac{2m_\pi }{\sqrt{4E_iE_f}}\stackrel{\omega ^{}0}{}\sqrt{\frac{m_\pi }{E_i}}.$$ Using the results of Eqs. (23) and (25) together with $`f_2=0`$, we then obtain $`\alpha _{\pi ^\pm }(|\stackrel{}{q}|^2)`$ $`=`$ $`\beta _{\pi ^\pm }(|\stackrel{}{q}|^2)={\displaystyle \frac{e^2}{8\pi m_\pi }}\sqrt{{\displaystyle \frac{m_\pi }{E_i}}}\left[{\displaystyle \frac{4(2l_5^rl_6^r)}{F_\pi ^2}}2{\displaystyle \frac{m_\pi E_i}{m_\pi }}{\displaystyle \frac{1}{(4\pi F_\pi )^2}}J^{(0)^{}}\left(2{\displaystyle \frac{m_\pi E_i}{m_\pi }}\right)\right],`$ (29) $`\alpha _{\pi ^0}(|\stackrel{}{q}|^2)`$ $`=`$ $`\beta _{\pi ^0}(|\stackrel{}{q}|^2)={\displaystyle \frac{e^2}{4\pi }}{\displaystyle \frac{1}{(4\pi F_\pi )^2m_\pi }}\sqrt{{\displaystyle \frac{m_\pi }{E_i}}}\left(12{\displaystyle \frac{m_\pi E_i}{m_\pi }}\right)J^{(0)^{}}\left(2{\displaystyle \frac{m_\pi E_i}{m_\pi }}\right).`$ (31) At the one-loop level, the $`|\stackrel{}{q}|^2`$ dependence is entirely given in terms of the pion mass $`m_\pi `$ and the pion-decay constant $`F_\pi `$, i.e., no additional $`𝒪(p^4)`$ low-energy constant enters. At $`|\stackrel{}{q}|^2=0`$, Eqs. (29) and (31) reduce to the RCS polarizabilities $`\overline{\alpha }_{\pi ^\pm }`$ $`=`$ $`\overline{\beta }_{\pi ^\pm }={\displaystyle \frac{e^2}{4\pi }}{\displaystyle \frac{2}{m_\pi F_\pi ^2}}(2l_5^rl_6^r)=(2.68\pm 0.42)\times 10^4\text{fm}^3,`$ (32) $`\overline{\alpha }_{\pi ^0}`$ $`=`$ $`\overline{\beta }_{\pi ^0}={\displaystyle \frac{e^2}{4\pi }}{\displaystyle \frac{1}{96\pi ^2F_\pi ^2m_\pi }}=0.50\times 10^4\text{fm}^3,`$ (33) where we made use of $`J^{(0)^{}}(0)=\frac{1}{6}`$. At $`𝒪(p^6)`$, the RCS predictions for the charged pion read $`\overline{\alpha }_{\pi \pm }=(2.4\pm 0.5)\times 10^4\text{fm}^3`$ and $`\overline{\beta }_{\pi ^\pm }=(2.1\pm 0.5)\times 10^4\text{fm}^3`$ . The corresponding corrections amount to a 12% (24%) change of the $`𝒪(p^4)`$ result, indicating a good convergence. We also note that the original degeneracy $`\overline{\alpha }=\overline{\beta }`$ is lifted at $`𝒪(p^6)`$. 
The predictions of ChPT have to be compared with the empirical results $`\overline{\alpha }_{\pi \pm }=(6.8\pm 1.4)\times 10^4\text{fm}^3`$ , $`\overline{\alpha }_{\pi \pm }=(20\pm 12)\times 10^4\text{fm}^3`$ , and $`\overline{\beta }_{\pi ^\pm }=(7.1\pm 4.6)\times 10^4\text{fm}^3`$ . Clearly, an improved accuracy is required to test the chiral predictions. For the neutral pion, the $`𝒪(p^6)`$ corrections turn out to be much larger, $`\overline{\alpha }_{\pi ^0}=(0.35\pm 0.10)\times 10^4\text{fm}^3`$ and $`\overline{\beta }_{\pi ^0}=(1.50\pm 0.20)\times 10^4\text{fm}^3`$ . ## VII Alternative definition of the generalized dipole polarizabilities Another generalization of the RCS polarizabilities is obtained by parametrizing the invariant amplitude as A detailed discussion will be given in Ref. . $$i=B_1F^{\mu \nu }F_{\mu \nu }^{}+\frac{1}{4}B_2(P_\mu F^{\mu \nu })(P^\rho F_{\rho \nu }^{})+\frac{1}{4}B_5(P^\nu q^\mu F_{\mu \nu })(P^\sigma q^\rho F_{\rho \sigma }^{}),$$ (34) where $`F^{\mu \nu }`$ and $`F_{\mu \nu }^{}`$ refer, respectively, to the gauge-invariant combinations $$F^{\mu \nu }=iq^\mu ϵ^\nu +iq^\nu ϵ^\mu ,F_{\mu \nu }^{}=iq_\mu ^{}ϵ_\nu ^{}iq_\nu ^{}ϵ_\mu ^{}.$$ The functions $`B_1`$, $`B_2`$, and $`B_5`$ are even functions of $`P`$. Introducing the suggestive notation $$\stackrel{}{E}=i(q_0\stackrel{}{ϵ}\stackrel{}{q}ϵ_0),\stackrel{}{B}=i\stackrel{}{q}\times \stackrel{}{ϵ},\stackrel{}{E}^{}=i(q_0^{}\stackrel{}{ϵ}^{}\stackrel{}{q}^{}ϵ_0^{}),\stackrel{}{B}^{}=i\stackrel{}{q}^{}\times \stackrel{}{ϵ}^{},$$ the structures of Eq. (34) are particularly simple when evaluated in the pion Breit frame (p.B.f.) defined by $`\stackrel{}{P}=0`$, $`F^{\mu \nu }F_{\mu \nu }^{}`$ $`=`$ $`[2\stackrel{}{E}\stackrel{}{E}^{}+2\stackrel{}{B}\stackrel{}{B}^{}]_{p.B.f.},`$ $`P_\mu F^{\mu \nu }P^\rho F_{\rho \nu }^{}`$ $`=`$ $`[P_0^2\stackrel{}{E}\stackrel{}{E}^{}]_{p.B.f.},`$ $`P^\nu q^\mu F_{\mu \nu }P^\rho q^\sigma F_{\sigma \rho }^{}`$ $`=`$ $`[P_0^2\stackrel{}{q}\stackrel{}{E}\stackrel{}{q}\stackrel{}{E}^{}]_{p.B.f.}.`$ Note that by definition $`[P_0^2]_{p.B.f.}=P^2`$. In the p.B.f., Eq. (34) can thus be expressed as $$i=\left[2B_1\stackrel{}{B}\stackrel{}{B}^{}\left(2B_1+\frac{P^2}{4}B_2\right)\stackrel{}{E}\stackrel{}{E}^{}+\frac{P^2}{4}B_5\stackrel{}{q}\stackrel{}{E}\stackrel{}{q}\stackrel{}{E}^{}\right]_{p.B.f.}.$$ (35) Since $`\stackrel{}{E}=\stackrel{}{E}_T+\stackrel{}{E}_L`$, $`\stackrel{}{E}\stackrel{}{E}^{}`$ contains both transverse and longitudinal components with respect to $`\widehat{q}`$, for which reason we will introduce the quantities $`\alpha _T`$ and $`\alpha _L`$ below: $$i=\left\{2B_1\stackrel{}{B}\stackrel{}{B}^{}\left(2B_1+\frac{P^2}{4}B_2\right)\stackrel{}{E}_T\stackrel{}{E}^{}+\left[\frac{P^2}{4}B_5|\stackrel{}{q}|^2\left(2B_1+\frac{P^2}{4}B_2\right)\right]\stackrel{}{E}_L\stackrel{}{E}^{}\right\}_{p.B.f.}.$$ (36) We now consider the limit $`\omega ^{}0`$ of the residual amplitudes, for which $`B_i^rb_i^r(q^2)`$, and define three generalized dipole polarizabilities in terms of the invariants of Eq. (34), $`8\pi m_\pi \beta (q^2)`$ $``$ $`2b_1^r(q^2),`$ (37) $`8\pi m_\pi \alpha _T(q^2)`$ $``$ $`2b_1^r(q^2)\left(M^2{\displaystyle \frac{q^2}{4}}\right)b_2^r(q^2),`$ (38) $`8\pi m_\pi \alpha _L(q^2)`$ $``$ $`2b_1^r(q^2)\left(M^2{\displaystyle \frac{q^2}{4}}\right)[b_2^r(q^2)+q^2b_5^r(q^2)],`$ (39) the superscript $`r`$ referring to the residual amplitudes beyond the soft-photon result. 
In general, the transverse and longitudinal electric polarizabilities $`\alpha _T`$ and $`\alpha _L`$ will differ by a term, vanishing however in the RCS limit $`q^2=0`$. Comparing with Eq. (36), the generalized dipole polarizabilities are seen to be defined such that they multiply the structures $`\stackrel{}{B}\stackrel{}{B}^{}`$, $`\stackrel{}{E}_T\stackrel{}{E}^{}`$, and $`\stackrel{}{E}_L\stackrel{}{E}^{}`$, respectively, as $`\omega ^{}0`$. We note that $`[\stackrel{}{B}\stackrel{}{B}^{}]_{p.B.f.}`$ and $`[\stackrel{}{E}_L\stackrel{}{E}^{}]_{p.B.f.}`$ are of $`𝒪(\omega ^{})`$ whereas $`[\stackrel{}{E}_T\stackrel{}{E}^{}]_{p.B.f.}=𝒪(\omega ^2)`$, i.e., that different powers of $`\omega ^{}`$ have been kept. At $`q^2=0`$, the usual RCS polarizabilities are recovered, $$\beta (0)=\overline{\beta },\alpha _L(0)=\alpha _T(0)=\overline{\alpha }.$$ (40) The connection to the generalized polarizabilities of Guichon et al. can either be established by direct comparison or via the results of Ref. , $`\alpha (|\stackrel{}{q}|^2)`$ $`=`$ $`{\displaystyle \frac{e^2}{4\pi }}\sqrt{{\displaystyle \frac{3}{2}}}P^{(01,01)0}(|\stackrel{}{q}|)=\sqrt{{\displaystyle \frac{m_\pi }{E_i}}}\alpha _L(\omega _0^2\stackrel{}{q}^2),`$ (41) $`\beta (|\stackrel{}{q}|^2)`$ $`=`$ $`{\displaystyle \frac{e^2}{4\pi }}\sqrt{{\displaystyle \frac{3}{8}}}P^{(11,11)0}(|\stackrel{}{q}|)=\sqrt{{\displaystyle \frac{m_\pi }{E_i}}}\beta (\omega _0^2\stackrel{}{q}^2),`$ (42) with $`\omega _0=q_0|_{\omega ^{}=0}=m_\pi E_i`$, $`\omega _0^2\stackrel{}{q}^2=2m_\pi (m_\pi E_i)`$, and $`E_i=\sqrt{m_\pi ^2+\stackrel{}{q}^2}`$, and all variables referring to the cm frame. Using Eqs. (35) - (37) of Ref. , we find that the transverse electric dipole polarizability is part of a second-order contribution in $`\omega ^{}`$ beyond the approximation of Guichon et al., $$\alpha _T(q^2)=\alpha _L(q^2)+\frac{e^2}{4\pi }(4M^2q^2)q^2\stackrel{~}{f}_3(q^2,0,0),$$ (43) where $`qP\stackrel{~}{f}_3f_3`$. At $`𝒪(p^4)`$, $`f_2=f_3=0`$, with the result of particularly simple expressions for the generalized dipole polarizabilities, $`\alpha _L^{\pi ^\pm }(q^2)`$ $`=`$ $`\alpha _T^{\pi ^\pm }(q^2)=\beta ^{\pi ^\pm }(q^2)={\displaystyle \frac{e^2}{8\pi m_\pi }}\left[{\displaystyle \frac{4(2l_5^rl_6^r)}{F_\pi ^2}}{\displaystyle \frac{q^2}{m_\pi ^2}}{\displaystyle \frac{1}{(4\pi F_\pi )^2}}J^{(0)^{}}\left({\displaystyle \frac{q^2}{m_\pi ^2}}\right)\right],`$ (44) $`\alpha _L^{\pi ^0}(q^2)`$ $`=`$ $`\alpha _T^{\pi ^0}(q^2)=\beta ^{\pi ^0}(q^2)={\displaystyle \frac{e^2}{4\pi }}{\displaystyle \frac{1}{(4\pi F_\pi )^2m_\pi }}\left(1{\displaystyle \frac{q^2}{m_\pi ^2}}\right)J^{(0)^{}}\left({\displaystyle \frac{q^2}{m_\pi ^2}}\right).`$ (45) The results for the generalized dipole polarizabilities are shown in Fig. 3. Even though chiral perturbation theory is only applicable for small external momenta, for the sake of completeness we also quote the asymptotic behavior as $`q^2\mathrm{}`$, $`\alpha _L^{\pi ^\pm }(q^2)`$ $``$ $`\overline{\alpha }_{\pi ^\pm }+3\overline{\alpha }_{\pi ^0}=1.18\times 10^4\text{fm}^3,`$ (46) $`\alpha _L^{\pi ^0}(q^2)`$ $``$ $`6\overline{\alpha }_{\pi ^0}=3.0\times 10^4\text{fm}^3.`$ (47) As in the case of real Compton scattering, we expect the degeneracy $`\alpha _L(q^2)=\alpha _T(q^2)=\beta (q^2)`$ to be lifted at the two-loop level. ## VIII Summary We have calculated the invariant amplitudes for virtual Compton scattering off the pion, $`\gamma ^{}+\pi \gamma +\pi `$, at the one-loop level, $`𝒪(p^4)`$, in chiral perturbation theory. 
For the charged pion, the result may be decomposed into a gauge-invariant soft-photon amplitude involving the electromagnetic form factor of the pion and a gauge-invariant residual amplitude. For the neutral pion, the soft-photon amplitude vanishes. We have analyzed the low-energy behavior of the residual amplitudes in terms of generalized polarizabilities. In this context we have introduced two alternative definitions of the generalized polarizabilities, a first one based on a multipole expansion in the center-of-mass frame, and a second one based on a covariant approach interpreted in the pion Breit frame. The connection between the different approaches has been established. In the framework of ChPT at $`𝒪(p^4)`$, the momentum dependence of the generalized polarizabilities is entirely predicted in terms of the pion mass and the pion-decay constant, i.e., no additional counter-term contribution appears. As in the case of real Compton scattering, the results at $`𝒪(p^4)`$ show a degeneracy of the polarizabilities, $`\alpha _L(q^2)=\alpha _T(q^2)=\beta (q^2)`$, which we expect to be lifted at the two-loop level. ## IX Acknowledgements This work was supported by the Deutsche Forschungsgemeinschaft (SFB 443). A. L. thanks the theory group of the Institut für Kernphysik for the hospitality and support during his stay in Mainz where part of his work was done.
no-problem/9904/quant-ph9904037.html
ar5iv
text
# Quantum Information is physical too… ## ACKNOWLEDGMENTS The author wishes to acknowledge many useful discussions and several stimulating arguments with R. Spekkens, as well as useful discussions with both A. Steinberg and J. Sipe.
no-problem/9904/quant-ph9904064.html
ar5iv
text
# Tunnelling series in terms of perturbation theory for quantum spin systems ## I acknowledgment The work of O. Z. was supported by International Science Education Program (ISEP), grant No. QSU082068.
no-problem/9904/cs9904019.html
ar5iv
text
# Bounds for Small-Error and Zero-Error Quantum Algorithms ## 1 Motivation and summary of results A general goal in the design of randomized algorithms is to obtain fast algorithms with small error probabilities. Along these lines is also the goal of obtaining fast algorithms that are zero-error (a.k.a. Las Vegas), as opposed to bounded-error (a.k.a. Monte Carlo). We examine these themes in the context of quantum algorithms, and present a number of new upper and lower bounds that contrast with those that arise in the classical case. The error probabilities of many classical probabilistic algorithms can be reduced by techniques that are commonly referred to as amplification. For example, if an algorithm $`A`$ that errs with probability $`\frac{1}{3}`$ is known, then an error probability bounded above by an arbitrarily small $`\epsilon >0`$ can be obtained by running $`A`$ independently $`\mathrm{\Theta }(\mathrm{log}(1/\epsilon ))`$ times and taking the majority value of the outcomes. This amplification procedure increases the running time of the algorithm by a multiplicative factor of $`\mathrm{\Theta }(\mathrm{log}(1/\epsilon ))`$ and is optimal (assuming that $`A`$ is only used as a black-box). We first consider the question of whether or not it is possible to perform amplification more efficiently on a quantum computer. A classical probabilistic algorithm $`A`$ is said to $`(p,q)`$-compute a function $`f:\{0,1\}^{}\{0,1\}`$ if $`\mathrm{Pr}[A(x)=1]`$ $`\{\begin{array}{cc}p\hfill & \text{if }f(x)=0\hfill \\ q\hfill & \text{if }f(x)=1\text{.}\hfill \end{array}`$ Algorithm $`A`$ can be regarded as a deterministic algorithm with an auxiliary input $`r`$, which is uniformly distributed over some underlying sample space $`S`$ (usually $`S`$ is of the form $`\{0,1\}^{l(|x|)}`$). We will focus our attention on the one-sided-error case (i.e. when $`p=0`$) and prove bounds on quantum amplification by translating them to bounds on quantum search. In this case, for any $`x\{0,1\}^n`$, $`f(x)=1`$ iff $`(rS)(A(x,r)=1)`$. Grover’s quantum search algorithm (and some refinements of it ) can be cast as a quantum amplification method that is provably more efficient than any classical method. It amplifies a $`(0,q)`$-algorithm to a $`(0,\frac{1}{2})`$-quantum-computer with $`O(1/\sqrt{q})`$ executions of $`A`$, whereas classically $`\mathrm{\Theta }(1/q)`$ executions of $`A`$ would be required to achieve this. It is natural to consider other amplification problems, such as amplifying $`(0,q)`$-computers to $`(0,1\epsilon )`$-quantum-computers ($`0<q<1\epsilon <1`$). We give a tight analysis of this. ###### Theorem 1 Let $`A:\{0,1\}^n\times S\{0,1\}`$ be a classical probabilistic algorithm that $`(0,q)`$-computes some function $`f`$, and let $`N=|S|`$ and $`\epsilon 2^N`$. Then, given a black-box for $`A`$, the number of calls to $`A`$ that are necessary and sufficient to $`(0,1\epsilon )`$-quantum-compute $`f`$ is $`\mathrm{\Theta }\left(\sqrt{N}\left(\sqrt{\mathrm{log}(1/\epsilon )+qN}\sqrt{qN}\right)\right).`$ (1) The lower bound is proven via the polynomial method and with adaptations of techniques from . The upper bound is obtained by a combination of ideas, including repeated calls to an exact quantum search algorithm for the special case where the exact number of solutions is known . 
From Theorem 1 we deduce that amplifying $`(0,\frac{1}{2})`$ classical computers to $`(0,1\epsilon )`$ quantum computers requires $`\mathrm{\Theta }(\mathrm{log}(1/\epsilon ))`$ executions, and hence cannot be done more efficiently in the quantum case than in the classical case. These bounds also imply a remarkable algorithm for amplifying a classical $`(0,\frac{1}{N})`$-computer $`A`$ to a $`(0,1\epsilon )`$ quantum computer. Note that if we follow the natural approach of composing an optimal $`(0,\frac{1}{N})(0,\frac{1}{2})`$ amplifier with an optimal $`(0,\frac{1}{2})(0,1\epsilon )`$ amplifier then our amplifier makes $`\mathrm{\Theta }(\sqrt{N}\mathrm{log}(1/\epsilon ))`$ calls to $`A`$. On the other hand, Theorem 1 shows that, in the case where $`N=|S|`$, there is a more efficient $`(0,\frac{1}{N})(0,1\epsilon )`$ amplifier that makes only $`\mathrm{\Theta }(\sqrt{N\mathrm{log}(1/\epsilon )})`$ calls to $`A`$ (and this is optimal). Next we turn our attention to the zero-error (Las Vegas) model. A zero-error algorithm never outputs an incorrect answer but it may claim ignorance (output ‘inconclusive’) with probability $`1/2`$. Suppose we want to compute some function $`f:\{0,1\}^N\{0,1\}`$. The input $`x\{0,1\}^N`$ can only be accessed by means of queries to a black-box which returns the $`i`$th bit of $`x`$ when queried on $`i`$. Let $`D(f)`$ denote the number of variables that a deterministic classical algorithm needs to query (in the worst case) in order to compute $`f`$, $`R_0(f)`$ the number of queries for a zero-error classical algorithm, and $`R_2(f)`$ for bounded-error. There is a monotone function $`g`$ with $`R_0(g)O(D(g)^{0.753\mathrm{}})`$ , and it is known that $`R_0(f)\sqrt{D(f)}`$ for any function $`f`$ . It is a longstanding open question whether $`R_0(f)\sqrt{D(f)}`$ is tight. We solve the analogous question for monotone functions for the quantum case. Let $`Q_E(f)`$, $`Q_0(f)`$, $`Q_2(f)`$ respectively be the number of queries that an exact, zero-error, or bounded-error quantum algorithm must make to compute $`f`$. For zero-error quantum algorithms, there is an issue about the precision with which its gates are implemented: any slight imprecisions can reduce an implementation of a zero-error algorithm to a bounded-error one. We address this issue by requiring our zero-error quantum algorithms to be self-certifying in the sense that they produce, with constant probability, a certificate for the value of $`f`$ that can be verified by a classical algorithm. As a result, the algorithms remain zero-error even with imperfect quantum gates. The number of queries is then counted as the sum of those of the quantum algorithm (that searches for a certificate) and the classical algorithm (that verifies a certificate). Our upper bounds for $`Q_0(f)`$ will all be with self-certifying algorithms. We first show that $`Q_0(f)\sqrt{D(f)}`$ for every monotone $`f`$ (even without the self-certifying requirement). Then we exhibit a family of monotone functions that nearly achieves this gap: for every $`\epsilon >0`$ we construct a $`g`$ such that $`Q_0(g)O(D(g)^{0.5+\epsilon })`$. In fact even $`Q_0(g)O(R_2(g)^{0.5+\epsilon })`$. These $`g`$ are so-called “AND-OR-trees”. They are the first examples of functions $`f:\{0,1\}^N\{0,1\}`$ whose quantum zero-error query complexity is asymptotically less than their classical zero-error or bounded-error query complexity. 
It should be noted that $`Q_0(\text{OR})=N`$ , so the quadratic speedup from Grover’s algorithm is lost when zero-error performance is required. Furthermore, we apply the idea behind the above zero-error quantum algorithms to obtain a new result in communication complexity. We derive from the AND-OR-trees a communication complexity problem where an asymptotic gap occurs between the zero-error quantum communication complexity and the zero-error classical communication complexity (there was a previous example of a zero-error gap for a function with restricted domain in and bounded-error gaps in ). This result includes a new lower bound in classical communication complexity. We also state a result by Hartmut Klauck, inspired by an earlier version of this paper, which gives the first total function with quantum-classical gap in the zero-error model of communication complexity. Finally, a class of black-box problems that has received wide attention concerns the determination of monotone graph properties . Consider a directed graph on $`n`$ vertices. It has $`n(n1)`$ possible edges and hence can be represented by a black-box of $`n(n1)`$ binary variables, where each variable indicates whether or not a specific edge is present. A nontrivial monotone graph property is a property of such a graph (i.e. a function $`P:\{0,1\}^{n(n1)}\{0,1\}`$) that is non-constant, invariant under permutations of the vertices of the graph, and monotone. Clearly, $`n(n1)`$ is an upper bound on the number of queries required to compute such properties. The Aanderaa-Karp-Rosenberg or evasiveness conjecture states that $`D(P)=n(n1)`$ for all $`P`$. The best known general lower bound is $`\mathrm{\Omega }(n^2)`$ . It has also been conjectured that $`R_0(P)\mathrm{\Omega }(n^2)`$ for all $`P`$, but the current best bound is only $`\mathrm{\Omega }(n^{4/3})`$ . A natural question is whether or not quantum algorithms can determine monotone graph properties more efficiently. We show that they can. Firstly, in the exact model we exhibit a $`P`$ with $`Q_E(P)<n(n1)`$, so the evasiveness conjecture fails in the case of quantum computers. However, we also prove $`Q_E(P)\mathrm{\Omega }(n^2)`$ for all $`P`$, so evasiveness does hold up to a constant factor for exact quantum computers. Secondly, we give a nontrivial monotone graph property for which the evasiveness conjecture is violated by a zero-error quantum algorithm: let STAR be the property that the graph has a vertex which is adjacent to all other vertices. Any classical (zero-error or bounded-error) algorithm for STAR requires $`\mathrm{\Omega }(n^2)`$ queries. We give a zero-error quantum algorithm that determines STAR with only $`O(n^{3/2})`$ queries. Finally, for bounded-error quantum algorithms, the OR problem trivially translates into the monotone graph property “there is at least one edge”, which can be determined with only $`O(n)`$ queries via Grover’s algorithm . ## 2 Basic definitions and terminology See for details and references for the quantum circuit model. For $`b\{0,1\}`$, a query gate $`O`$ for an input $`x=(x_0,\mathrm{},x_{N1})\{0,1\}^N`$ performs the following mapping, which is our only way to access the bits $`x_j`$: $$|j,b|j,bx_j.$$ We sometimes use the term “black-box” for $`x`$ as well as $`O`$. A quantum algorithm or gate network $`A`$ with $`T`$ queries is a unitary transformation $`A=U_TOU_{T1}O\mathrm{}OU_1OU_0`$. Here the $`U_i`$ are unitary transformations that do not depend on $`x`$. 
Without loss of generality we fix the initial state to $`|\stackrel{}{0}`$, independent of $`x`$. The final state is then a superposition $`A|\stackrel{}{0}`$ which depends on $`x`$ only via the $`T`$ query gates. One specific qubit of the final state (the rightmost one, say) is designated for the output. The acceptance probability of a quantum network on a specific black-box $`x`$ is defined to be the probability that the output qubit is 1 (if a measurement is performed on the final state). We want to compute a function $`f:\{0,1\}^N\{0,1\}`$, using as few queries as possible (on the worst-case input). We distinguish between three different error-models. In the case of exact computation, an algorithm must always give the correct answer $`f(x)`$ for every $`x`$. In the case of bounded-error computation, an algorithm must give the correct answer $`f(x)`$ with probability $`2/3`$ for every $`x`$. In the case of zero-error computation, an algorithm is allowed to give the answer ‘don’t know’ with probability $`1/2`$, but if it outputs an answer (0 or 1), then this must be the correct answer. The complexity in this zero-error model is equal up to a factor of 2 to the expected complexity of an optimal algorithm that always outputs the correct answer. Let $`D(f)`$, $`R_0(f)`$, and $`R_2(f)`$ denote the exact, zero-error and bounded-error classical complexities, respectively, and $`Q_E(f)`$, $`Q_0(f)`$, $`Q_2(f)`$ be the corresponding quantum complexities. Note that $`ND(f)Q_E(f)Q_0(f)Q_2(f)`$ and $`ND(f)R_0(f)R_2(f)Q_2(f)`$ for every $`f`$. ## 3 Tight trade-offs for quantum searching In this section, we prove Theorem 1, stated in Section 1. The search problem is the following: for a given black-box $`x`$, find a $`j`$ such that $`x_j=1`$ using as few queries to $`x`$ as possible. A quantum computer can achieve error probability $`1/3`$ using $`T\mathrm{\Theta }(\sqrt{N})`$ queries . We address the question of how large the number of queries should be in order to be able to achieve a very small error $`\epsilon `$. We will prove that if $`T<N`$, then $`T\mathrm{\Theta }\left(\sqrt{N\mathrm{log}(1/\epsilon )}\right).`$ This result will actually be a special case of a more general theorem that involves a promise on the number of solutions. Suppose we want to search a space of $`N`$ items with error $`\epsilon `$, and we are promised that there are at least some number $`t<N`$ solutions. The higher $`t`$ is, the fewer queries we will need. In the appendix we give the following lower bound on $`\epsilon `$ in terms of $`T`$, using tools from . ###### Theorem 2 Under the promise that the number of solutions is at least $`t`$, every quantum search algorithm that uses $`TNt`$ queries has error probability $$\epsilon \mathrm{\Omega }\left(e^{4bT^2/(Nt)8T\sqrt{tN/(Nt)^2}}\right).$$ Here $`b`$ is a positive universal constant. This theorem implies a lower bound on $`T`$ in terms of $`\epsilon `$. To give a tight characterization of the relations between $`T`$, $`N`$, $`t`$ and $`\epsilon `$, we need the following upper bound on $`T`$ for the case $`t=1`$: ###### Theorem 3 For every $`\epsilon >0`$ there exists a quantum search algorithm with error probability $`\epsilon `$ and $`O\left(\sqrt{N\mathrm{log}(1/\epsilon )}\right)`$ queries. Proof Set $`t_0=\mathrm{log}(1/\epsilon )`$. Consider the following algorithm: 1. Apply exact search for $`t=1,\mathrm{},t_0`$, each of which takes $`O(\sqrt{N/t})`$ queries. 2. 
If no solution has been found, then conduct $`t_0`$ searches, each with $`O(\sqrt{N/t_0})`$ queries. 3. Output a solution if one has been found, otherwise output ‘no’. The query complexity of this algorithm is bounded by $$\underset{t=1}{\overset{t_0}{}}O\left(\sqrt{\frac{N}{t}}\right)+t_0O\left(\sqrt{\frac{N}{t_0}}\right)=O\left(\sqrt{N\mathrm{log}(1/\epsilon )}\right).$$ If the real number of solutions was in $`\{1,\mathrm{},t_0\}`$, then a solution will be found with certainty in step 1. If the real number of solutions was $`>t_0`$, then each of the searches in step 2 can be made to have error probability $`1/2`$, so we have total error probability at most $`(1/2)^{t_0}\epsilon `$. $`\mathrm{}`$ A more precise analysis gives $`T2.45\sqrt{N\mathrm{log}(1/\epsilon )}`$. It is interesting that we can use this to prove something about the constant $`b`$ of the Coppersmith-Rivlin theorem (see appendix): for $`t=1`$ and $`\epsilon o(1)`$, the lower bound asymptotically becomes $`T\sqrt{N\mathrm{log}(1/\epsilon )/4b}`$. Together these two bounds imply $`b1/4(2.45)^20.042`$. The main theorem of this section tightly characterizes the various trade-offs between the size of the search space $`N`$, the promise $`t`$, the error probability $`\epsilon `$, and the required number of queries: ###### Theorem 4 Fix $`\eta (0,1)`$, and let $`N>0`$, $`\epsilon 2^N`$, and $`t\eta N`$. Let $`T`$ be the optimal number of queries a quantum computer needs to search with error $`\epsilon `$ through an unordered list of $`N`$ items containing at least $`t`$ solutions. Then $$\mathrm{log}(1/\epsilon )\mathrm{\Theta }\left(\frac{T^2}{N}+T\sqrt{\frac{t}{N}}\right).$$ Proof From Theorem 2 we obtain the upper bound $`\mathrm{log}(1/\epsilon )O\left({\displaystyle \frac{T^2}{N}}+T\sqrt{{\displaystyle \frac{t}{N}}}\right).`$ To prove a lower bound on $`\mathrm{log}(1/\epsilon )`$ we distinguish two cases. Case 1: $`T\sqrt{tN}`$. By Theorem 3, we can achieve error $`\epsilon `$ using $`T_uO(\sqrt{N\mathrm{log}(1/\epsilon )})`$ queries. Now (leaving out some constant factors): $$\mathrm{log}(1/\epsilon )\frac{T_u^2}{N}\frac{1}{2}\left(\frac{T^2}{N}+T\frac{T}{N}\right)\frac{1}{2}\left(\frac{T^2}{N}+T\sqrt{\frac{t}{N}}\right).$$ Case 2: $`T<\sqrt{tN}`$. We can achieve error $`1/2`$ using $`O(\sqrt{N/t})`$ queries, and then classically amplify this to error $`1/\epsilon `$ using $`O(\mathrm{log}(1/\epsilon ))`$ repetitions. This takes $`T_uO(\sqrt{N/t}\mathrm{log}(1/\epsilon ))`$ queries in total. Now: $$\mathrm{log}(1/\epsilon )T_u\sqrt{\frac{t}{N}}\frac{1}{2}\left(T\sqrt{\frac{t}{N}}+T\sqrt{\frac{t}{N}}\right)$$ $$\frac{1}{2}\left(\frac{T^2}{N}+T\sqrt{\frac{t}{N}}\right).$$ $`\mathrm{}`$ Rewriting Theorem 4 (with $`q=t/N`$) yields the general bound of Theorem 1. For $`t=1`$ this becomes $`T\mathrm{\Theta }(\sqrt{N\mathrm{log}(1/\epsilon )})`$. Thus no quantum search algorithm with $`O(\sqrt{N})`$ queries has error probability $`o(1)`$. Also, a quantum search algorithm with $`\epsilon 2^N`$ needs $`\mathrm{\Omega }(N)`$ queries. For the case $`\epsilon =1/3`$ we re-derive the bound $`\mathrm{\Theta }(\sqrt{N/t})`$ from . ## 4 Applications of Theorem 1 to amplification In this section we apply the bounds from Theorem 1 to examine the speedup possible for amplifying classical one-sided error algorithms via quantum algorithms. 
Observe that searching for items in a search space of size $`N`$ and figuring out whether a probabilistic one-sided error algorithm $`A`$ with sample space $`S`$ of size $`N`$ accepts are essentially the same thing. Let us analyze some special cases more closely. Suppose that we want to amplify an algorithm $`A`$ that $`(0,\frac{1}{2})`$-computes some function $`f`$ to $`(0,1\epsilon )`$. Then substituting $`|S|=N`$ and $`q=\frac{1}{2}`$ into Eq. (1) in Theorem 1 yields ###### Theorem 5 Let $`A:\{0,1\}^n\times S\{0,1\}`$ be a classical probabilistic algorithm that $`(0,\frac{1}{2})`$-computes some function $`f`$, and $`\epsilon 2^{|S|}`$. Then, given a black-box for $`A`$, the number of calls to $`A`$ that any quantum algorithm needs to make to $`(0,1\epsilon )`$-compute $`f`$ is $`\mathrm{\Omega }(\mathrm{log}(1/\epsilon ))`$. Hence amplification of one-sided error algorithms with fixed initial success probability cannot be done more efficiently in the quantum case than in the classical case. Since one-sided error algorithms are a special case of bounded-error algorithms, the same lower bound also holds for amplification of bounded-error algorithms. A similar but slightly more elaborate argument as above shows that a quantum computer still needs $`\mathrm{\Omega }(\mathrm{log}(1/\epsilon ))`$ applications of $`A`$ when $`A`$ is zero-error. Some other special cases of Theorem 1: in order to amplify a $`(0,\frac{1}{N})`$-computer $`A`$ to a $`(0,\frac{1}{2})`$-computer, $`\mathrm{\Theta }(\sqrt{N})`$ calls to $`A`$ are necessary and sufficient (and this is essentially a restatement of known results of Grover and others about quantum searching ). Also, in order to amplify a $`(0,\frac{1}{N})`$-computer with sample space of size $`N`$ to a $`(0,1\epsilon )`$-computer, $`\mathrm{\Theta }(\sqrt{N\mathrm{log}(1/\epsilon )})`$ calls to $`A`$ are necessary and sufficient. Finally, consider what happens if the size of the sample space is unknown and we only know that $`A`$ is a classical one-sided error algorithm with success probability $`q`$. Quantum amplitude amplification can improve the success probability to $`1/2`$ using $`O(1/\sqrt{q})`$ repetitions of $`A`$. We can then classically amplify the success probability further to $`1\epsilon `$ using $`O(\mathrm{log}(1/\epsilon ))`$ repetitions. In all, this method uses $`O(\mathrm{log}(1/\epsilon )/\sqrt{q})`$ applications of $`A`$. Theorem 4 implies that this is best possible in the worst case (i.e. if $`A`$ happens to be a classical algorithm with very large sample space). ## 5 Zero-error quantum algorithms In this section we consider zero-error complexity of functions in the query (a.k.a. black-box) setting. The best general bound that we can prove between the quantum zero-error complexity $`Q_0(f)`$ and the classical deterministic complexity $`D(f)`$ for total functions is the following (the proof is similar to the $`D(f)O(Q_E(f)^4)`$ result given in and uses an unpublished proof technique of Nisan and Smolensky): ###### Theorem 6 For every total function $`f`$ we have $`D(f)O(Q_0(f)^4)`$. We will in particular look at monotone increasing $`f`$. Here the value of $`f`$ cannot flip from 1 to 0 if more variables are set to 1. For such $`f`$, we improve the bound to: ###### Theorem 7 For every total monotone Boolean function $`f`$ we have $`D(f)Q_0(f)^2`$. Proof Let $`s(f)`$ be the sensitivity of $`f`$: the maximum, over all $`x`$, of the number of variables that we can individually flip in $`x`$ to change $`f(x)`$. 
Let $`x`$ be an input on which the sensitivity of $`f`$ equals $`s(f)`$. Assume without loss of generality that $`f(x)=0`$. All sensitive variables must be 0 in $`x`$, and setting one or more of them to 1 changes the value of $`f`$ from 0 to 1. Hence by fixing all variables in $`x`$ except for the $`s(f)`$ sensitive variables, we obtain the OR function on $`s(f)`$ variables. Since OR on $`s(f)`$ variables has $`Q_0(\text{OR})=s(f)`$ \[3, Proposition 6.1\], it follows that $`s(f)Q_0(f)`$. It is known (see for instance ) that $`D(f)s(f)^2`$ for monotone $`f`$, hence $`D(f)Q_0(f)^2`$. $`\mathrm{}`$ Important examples of monotone functions are AND-OR trees. These can be represented as trees of depth $`d`$ where the $`N`$ leaves are the variables, and the $`d`$ levels of internal nodes are alternatingly labeled with ANDs and ORs. Using techniques from , it is easy to show that $`Q_E(f)N/2`$ and $`D(f)=N`$ for such trees. However, we show that in the zero-error setting quantum computers can achieve significant speed-ups for such functions. These are in fact the first total functions with superlinear gap between quantum and classical zero-error complexity. Interestingly, the quantum algorithms for these functions are not just zero-error: if they output an answer $`b\{0,1\}`$ then they also output a $`b`$-certificate for this answer. This is a set of indices of variables whose values force the function to the value $`b`$. We prove that for sufficiently large $`d`$, quantum computers can obtain near-quadratic speed-ups on $`d`$-level AND-OR trees which are uniform, i.e. have branching factor $`N^{1/d}`$ at each level. Using the next lemma (which is proved in the appendix) we show that Theorem 7 is almost tight: for every $`\epsilon >0`$ there exists a total monotone $`f`$ with $`Q_0(f)O(N^{1/2+\epsilon })`$. ###### Lemma 1 Let $`d1`$ and let $`f`$ denote the uniform $`d`$-level AND-OR tree on $`N`$ variables that has an OR as root. There exists a quantum algorithm $`A_1`$ that finds a 1-certificate in expected number of queries $`O(N^{1/2+1/2d})`$ if $`f(x)=1`$ and does not terminate if $`f(x)=0`$. Similarly, there exists a quantum algorithm $`A_0`$ that finds a 0-certificate in expected number of queries $`O(N^{1/2+1/d})`$ if $`f(x)=0`$ and does not terminate if $`f(x)=1`$. ###### Theorem 8 Let $`d1`$ and let $`f`$ denote the uniform $`d`$-level AND-OR tree on $`N`$ variables that has an OR as root. Then $`Q_0(f)O(N^{1/2+1/d})`$ and $`R_2(f)\mathrm{\Omega }(N)`$. Proof Run the algorithms $`A_1`$ and $`A_0`$ of Lemma 1 side-by-side until one of them terminates with a certificate. This gives a certificate-finding quantum algorithm for $`f`$ with expected number of queries $`O(N^{1/2+1/d})`$. Run this algorithm for twice its expected number of queries and answer ‘don’t know’ if it hasn’t terminated after that time. By Markov’s inequality, the probability of non-termination is $`1/2`$, so we obtain an algorithm for our zero-error setting with $`Q_0(f)O(N^{1/2+1/d})`$ queries. The classical lower bound follows from combining two known results. First, an AND-OR tree of depth $`d`$ on $`N`$ variables has $`R_0(f)N/2^d`$ \[20, Theorem 2.1\] (see also ). Second, for such trees we have $`R_2(f)\mathrm{\Omega }(R_0(f))`$ . Hence $`R_2(f)\mathrm{\Omega }(N)`$. $`\mathrm{}`$ This analysis is not quite optimal. 
It gives only trivial bounds for $`d=2`$, but a more refined analysis shows that we can also get speed-ups for such 2-level trees: ###### Theorem 9 Let $`f`$ be the AND of $`N^{1/3}`$ ORs of $`N^{2/3}`$ variables each. Then $`Q_0(f)\mathrm{\Theta }(N^{2/3})`$ and $`R_2(f)\mathrm{\Omega }(N)`$. Proof A similar analysis as before shows $`Q_0(f)O(N^{2/3})`$ and $`R_2(f)\mathrm{\Omega }(N)`$. For the quantum lower bound: note that if we set all variables to 1 except for the $`N^{2/3}`$ variables in the first subtree, then $`f`$ becomes the OR of $`N^{2/3}`$ variables. This is known to have zero-error complexity exactly $`N^{2/3}`$ \[3, Proposition 6.1\], hence $`Q_0(f)\mathrm{\Omega }(N^{2/3})`$. $`\mathrm{}`$ If we consider a tree with $`\sqrt{N}`$ subtrees of $`\sqrt{N}`$ variables each, we would get $`Q_0(f)O(N^{3/4})`$ and $`R_2(f)\mathrm{\Omega }(N)`$. The best lower bound we can prove here is $`Q_0(f)\mathrm{\Omega }(\sqrt{N})`$. However, if we also require the quantum algorithm to output a certificate for $`f`$, we can prove a tight quantum lower bound of $`\mathrm{\Omega }(N^{3/4})`$. We do not give the proof here, which is a technical and more elaborate version of the proof of the classical lower bound of Theorem 10. ## 6 Zero-error communication complexity The results of the previous section can be translated to the setting of communication complexity . Here there are two parties, Alice and Bob, who want to compute some relation $`R\{0,1\}^N\times \{0,1\}^N\times \{0,1\}^M`$. Alice gets input $`x\{0,1\}^N`$ and Bob gets input $`y\{0,1\}^N`$. Together they want to compute some $`z\{0,1\}^M`$ such that $`(x,y,z)R`$, exchanging as few bits of communication as possible. The often studied setting where Alice and Bob want to compute some function $`f:\{0,1\}^N\times \{0,1\}^N\{0,1\}`$ is a special case of this. In the case of quantum communication, Alice and Bob can exchange and process qubits, potentially giving them more power than classical communication. Let $`g:\{0,1\}^N\{0,1\}`$ be one of the AND-OR-trees of the previous section. We can derive from this a communication problem $`f:\{0,1\}^N\times \{0,1\}^N\{0,1\}`$ by defining $`f(x,y)=g(xy)`$, where $`xy\{0,1\}^N`$ is the vector obtained by bitwise AND-ing Alice’s $`x`$ and Bob’s $`y`$. Let us call such a problem a “distributed” AND-OR-tree. Buhrman, Cleve, and Wigderson show how to turn a $`T`$-query quantum black-box algorithm for $`g`$ into a communication protocol for $`f`$ with $`O(T\mathrm{log}N)`$ qubits of communication. Thus, using the upper bounds of the previous section, for every $`\epsilon >0`$, there exists a distributed AND-OR-tree $`f`$ that has a $`O(N^{1/2+\epsilon })`$-qubit zero-error protocol. It is conceivable that the classical zero-error communication complexity of these functions is $`\omega (N^{1/2+\epsilon })`$; however, we are not able to prove such a lower bound at this time. Nevertheless, we are able to establish a quantum-classical separation for a relation that is closely related to the AND-OR-tree functions, which is explained below. For any AND-OR tree function $`g:\{0,1\}^N\{0,1\}`$ and input $`x\{0,1\}^N`$, a certificate for the value of $`g`$ on input $`x`$ is a subset $`c`$ of the indices $`\{0,1,\mathrm{},N1\}`$ such that the values $`\{x_i:ic\}`$ determine the value of $`g(x)`$. It is natural to denote $`c`$ as an element of $`\{0,1\}^N`$, representing the characteristic function of the set. 
For example, for $$g(x_0,x_1,x_2,x_3)=(x_0x_1)(x_2x_3),$$ (2) a certificate for the value of $`g`$ on input $`x=1011`$ is $`c=1001`$, which indicates that $`x_0=1`$ and $`x_3=1`$ determine the value of $`g`$. We can define a communication problem based on finding these certificates as follows. For any AND-OR tree function $`g:\{0,1\}^N\{0,1\}`$ and $`x,y\{0,1\}^N`$, a certificate for the value of $`g`$ on distributed inputs $`x`$ and $`y`$ is a subset $`c`$ of $`\{0,1,\mathrm{},N1\}`$ (denoted as an element of $`\{0,1\}^N`$) such that the values $`\{(x_i,y_i):ic\}`$ determine the value of $`g(xy)`$. Define the relation $`R\{0,1\}^N\times \{0,1\}^N\times \{0,1\}^N`$ such that $`(x,y,c)R`$ iff $`c`$ is a certificate for the value of $`g`$ on distributed inputs $`x`$ and $`y`$. For example, when $`R`$ is with respect to the function $`g`$ of equation (2), $`(1011,1111,1001)R`$, because, for $`x=1011`$ and $`y=1111`$, an appropriate certificate is $`c=1001`$. The zero-error certificate-finding algorithm for $`g`$ of the previous section, together with the -translation from black-box algorithms to communication protocols, implies a zero-error quantum communication protocol for $`R`$. Thus, Theorem 8 implies that for every $`\epsilon >0`$ there exists a relation $`R\{0,1\}^N\times \{0,1\}^N\times \{0,1\}^N`$ for which there is a zero-error quantum protocol with $`O(N^{1/2+\epsilon })`$ qubits of communication. Although we suspect that the classical zero-error communication complexity of these relations is $`\mathrm{\Omega }(N)`$, we are only able to prove lower bounds for relations derived from 2-level trees: ###### Theorem 10 Let $`g:\{0,1\}^N\{0,1\}`$ be an AND of $`N^{1/3}`$ ORs of $`N^{2/3}`$ variables each. Let $`R\{0,1\}^N\times \{0,1\}^N\times \{0,1\}^N`$ be the certificate-relation derived from $`g`$. Then there exists a zero-error $`O(N^{2/3}\mathrm{log}N)`$-qubit quantum protocol for $`R`$, whereas, any zero-error classical protocol for $`R`$ needs $`\mathrm{\Omega }(N)`$ bits of communication. Proof The quantum upper bound follows from Theorem 9 and the -reduction. For the classical lower bound, suppose we have a classical zero-error protocol $`P`$ for $`R`$ with $`T`$ bits of communication. We will show how we can use this to solve the Disjointness problem on $`k=N^{1/3}(N^{2/3}1)`$ variables. (Given Alice’s input $`x\{0,1\}^k`$ and Bob’s $`y\{0,1\}^k`$, the Disjointness problem is to determine if $`x`$ and $`y`$ have a 1 at the same position somewhere.) Let $`Q`$ be the following classical protocol. Alice and Bob view their $`k`$-bit input as made up of $`N^{1/3}`$ subtrees of $`N^{2/3}1`$ variables each. They add a dummy variable with value 1 to each subtree and apply a random permutation to each subtree (Alice and Bob have to apply the same permutation to a subtree, so we assume a public coin). Call the $`N`$-bit strings they now have $`x^{}`$ and $`y^{}`$. Then they apply $`P`$ to $`x^{}`$ and $`y^{}`$. Since $`f(x^{},y^{})=1`$, after an expected number of $`O(T)`$ bits of communication $`P`$ will deliver a certificate which is a common 1 in each subtree. If one of these common 1s is non-dummy then Alice and Bob output 1, otherwise they output 0. It is easy to see that this protocol solves Disjointness with success probability 1 if $`xy=\stackrel{}{0}`$ and with success probability $`1/2`$ if $`xy\stackrel{}{0}`$. It assumes a public coin and uses $`O(T)`$ bits of communication. 
Now the well-known $`\mathrm{\Omega }(k)`$ bound for classical bounded-error Disjointness on $`k`$ variables implies $`T\mathrm{\Omega }(k)=\mathrm{\Omega }(N)`$. $`\mathrm{}`$ The relation of Theorem 10 is “total”, in the sense that, for every $`x,y\{0,1\}^N`$, there exists a $`c`$ such that $`(x,y,c)R`$. It should be noted that one can trivially construct a total relation from any partial function by allowing any output for inputs that are outside the domain of the function. In this manner, a total relation with an exponential quantum-classical zero-error gap can be immediately obtained from the distributed Deutsch-Jozsa problem of . The total relation of Theorem 10 is different from this in that it is not a trivial extension of a partial function. After reading a first version of this paper, Hartmut Klauck proved a separation which is the first example of a total function with superlinear gap between quantum and classical zero-error communication complexity . Consider the iterated non-disjointness function: Alice and Bob each receive $`s`$ sets of size $`n`$ from a size-$`poly(n)`$ universe (so the input length is $`N\mathrm{\Theta }(sn\mathrm{log}n)`$ bits), and they have to output 1 iff all $`s`$ pairs of sets intersect. Klauck’s function $`f`$ is an intricate subset of this iterated non-disjointness function, but still an explicit and total function. Results of about limited non-deterministic communication complexity imply a lower bound for classical zero-error protocols for $`f`$. On the other hand, because $`f`$ can be written as a 2-level AND-OR-tree, the methods of this paper imply a more efficient quantum zero-error protocol. Choosing $`s=n^{5/6}`$, Klauck obtains a polynomial gap: ###### Theorem 11 (Klauck ) For $`N\mathrm{\Theta }(n^{11/6}\mathrm{log}n)`$ there exists a total function $`f:\{0,1\}^N\times \{0,1\}^N\{0,1\}`$, such that there is a quantum zero-error protocol for $`f`$ with $`O(N^{10/11+\epsilon })`$ qubits of communication (for all $`\epsilon >0`$), whereas every classical zero-error protocol for $`f`$ needs $`\mathrm{\Omega }(N/\mathrm{log}N)`$ bits of communication. ## 7 Quantum complexity of graph properties Graph properties form an interesting subset of the set of all Boolean functions. Here an input of $`N=n(n1)`$ bits represents the edges of a directed graph on $`n`$ vertices. (Our results hold for properties of directed as well as undirected graphs.) A graph property $`P`$ is a subset of the set of all graphs that is closed under permutation of the nodes (so if $`X,Y`$ represent isomorphic graphs, then $`XP`$ iff $`YP`$). We are interested in the number of queries of the form “is there an edge from node $`i`$ to node $`j`$?” that we need to determine for a given graph whether it has a certain property $`P`$. Since we can view $`P`$ as a total function on $`N`$ variables, we can use the notations $`D(P)`$, etc. A property $`P`$ is evasive if $`D(P)=n(n1)`$, so if in the worst case all $`N`$ edges have to be examined. The complexity of graph properties has been well-studied classically, especially for monotone graph properties (a property is monotone if adding edges cannot destroy the property). In the sequel, let $`P`$ stand for a (non-constant) monotone graph property. Much research revolved around the so-called Aanderaa-Karp-Rosenberg conjecture or evasiveness conjecture, which states that every $`P`$ is evasive. This conjecture is still open; see for an overview. 
It has been proved for $`n`$ equals a prime power , but for other $`n`$ the best known general bound is $`D(P)\mathrm{\Omega }(n^2)`$ . (Evasiveness has also been proved for bipartite graphs .) For the classical zero-error complexity, the best known general result is $`R_0(P)\mathrm{\Omega }(n^{4/3})`$ , but it has been conjectured that $`R_0(P)\mathrm{\Theta }(n^2)`$. To the best of our knowledge, no $`P`$ is known to have $`R_2(P)o(n^2)`$. In this section we examine the complexity of monotone graph properties on a quantum computer. First we show that if we replace exact classical algorithms by exact quantum algorithms, then the evasiveness conjecture fails. However, the conjecture does hold up to a constant factor. ###### Theorem 12 For all $`P`$, $`Q_E(P)\mathrm{\Omega }(n^2)`$. There is a $`P`$ such that $`Q_E(P)<n(n1)`$ for every $`n>2`$. Proof For the lower bound, let $`deg(f)`$ denote the degree of the unique multilinear multivariate polynomial $`p`$ that represents a function $`f`$ (i.e. $`p(X)=f(X)`$ for all $`X`$). proves that $`Q_E(f)deg(f)/2`$ for every $`f`$. Dodis and Khanna \[12, Theorem 5.1\] prove that $`deg(P)\mathrm{\Omega }(n^2)`$ for all monotone graph properties $`P`$. Combining these two facts gives the lower bound. Let $`P`$ be the property “the graph contains more than $`n(n1)/2`$ edges”. This is just a special case of the Majority function. Let $`f`$ be Majority on $`N`$ variables. It is known that $`Q_E(f)N+1e(N)`$, where $`e(N)`$ is the number of 1s in the binary expansion of $`N`$. This was first noted by Hayes, Kutin and Van Melkebeek . It also follows immediately from classical results that show that an item with the Majority value can be identified classically deterministically with $`Ne(N)`$ comparisons between bits (a comparison between two black-box-bits is the XOR of two bits, which can be computed with 1 quantum query ). One further query to this item suffices to determine the Majority value. For $`N=n(n1)`$ and $`n>2`$ we have $`e(N)2`$ and hence $`Q_E(f)Ne(N)+1<N`$. $`\mathrm{}`$ In the zero-error case, we can show polynomial gaps between quantum and classical complexities, so here the evasiveness conjecture fails even if we ignore constant factors. ###### Theorem 13 For all $`P`$, $`Q_0(P)\mathrm{\Omega }(n)`$. There is a $`P`$ such that $`Q_0(P)O(n^{3/2})`$ and $`R_2(P)\mathrm{\Omega }(n^2)`$. Proof The quantum lower bound follows from $`D(P)Q_0(P)^2`$ (Theorem 7) and $`D(P)\mathrm{\Omega }(n^2)`$. Consider the property “the graph contains a star”, where a star is a node that has edges to all other nodes. This property corresponds to a 2-level tree, where the first level is an OR of $`n`$ subtrees, and each subtree is an AND of $`n1`$ variables. The $`n1`$ variables in the $`i`$th subtree correspond to the $`n1`$ edges $`(i,j)`$ for $`ji`$. The $`i`$th subtree is 1 iff the $`i`$th node is the center of a star, so the root of the tree is 1 iff the graph contains a star. Now we can show $`Q_0(P)O(n^{3/2})`$ and $`R_2(P)\mathrm{\Omega }(n^2)`$ analogously to Theorem 9. $`\mathrm{}`$ Combined with the translation of a quantum algorithm to a polynomial , this theorem shows that a “zero-error polynomial” for the STAR-graph property can have degree $`O(n^{3/2})`$. Thus proving a general lower bound on zero-error polynomials for graph properties will not improve Hajnal’s randomized lower bound of $`n^{4/3}`$ further then $`n^{3/2}`$. In particular, a proof that $`R_0(P)\mathrm{\Omega }(n^2)`$ cannot be obtained via a lower bound on degrees of polynomials. 
This contrasts with the case of exact computation, where the $`\mathrm{\Omega }(n^2)`$ lower bound on $`deg(P)`$ implies both $`D(P)\mathrm{\Omega }(n^2)`$ and $`Q_E(P)\mathrm{\Omega }(n^2)`$. Finally, for the bounded-error case we have quadratic gaps between quantum and classical: the property “the graph has at least one edge” has $`Q_2(P)O(n)`$ by Grover’s quantum search algorithm. Combining that $`D(P)\mathrm{\Omega }(n^2)`$ for all $`P`$ and $`D(f)O(Q_2(f)^4)`$ for all monotone $`f`$ , we also obtain a general lower bound: ###### Theorem 14 For all $`P`$, we have $`Q_2(P)\mathrm{\Omega }(\sqrt{n})`$. There is a $`P`$ such that $`Q_2(P)O(n)`$. Acknowledgments We thank Hartmut Klauck for informing us about Theorem 11, Ramamohan Paturi for Lemma 3, David Deutsch, Wim van Dam, and Michele Mosca for helpful discussions which emphasized the importance of small-error quantum search, and Mosca and Yevgeniy Dodis for helpful discussions about graph properties. ## Appendix A Proof of Theorem 2 Here we prove a lower bound on small-error quantum search. The key lemma of gives the following relation between a $`T`$-query network and a polynomial that expresses its acceptance probability as a function of the input $`X`$ (such a relation is also implicit in some of the proofs of ): ###### Lemma 2 The acceptance probability of a quantum network that makes $`T`$ queries to a black-box $`X`$, can be written as a real-valued multilinear $`N`$-variate polynomial $`P(X)`$ of degree at most $`2T`$. An $`N`$-variate polynomial $`P`$ of degree $`d`$ can be reduced to a single-variate one in the following way (due to ). Let the symmetrization $`P^{sym}`$ be the average of $`P`$ over all permutations of its input: $$P^{sym}(X)=\frac{_{\pi S_N}P(\pi (X))}{N!}.$$ $`P^{sym}`$ is an $`N`$-variate polynomial of degree at most $`d`$. It can be shown that there is a single-variate polynomial $`Q`$ of degree at most $`d`$, such that $`P^{sym}(X)=Q(|X|)`$ for all $`X\{0,1\}^N`$. Here $`|X|`$ denotes the Hamming weight (number of 1s) of $`X`$. Note that a quantum search algorithm $`A`$ can be used to compute the OR-function of $`X`$ (i.e. decide whether $`X`$ contains at least one 1): we let $`A`$ return some $`j`$ and then we output the bit $`x_j`$. If OR$`(X)=0`$, then we give the correct answer with certainty; if OR$`(X)=1`$ then the probability of error $`\epsilon `$ is the same as for $`A`$. Rather than proving a lower bound on search directly, we will prove a lower bound on computing the OR-function; this clearly implies a lower bound for search. The main idea of the proof is the following. By Lemma 2, the acceptance probability of a quantum computer with $`T`$ queries that computes the OR with error probability $`\epsilon `$ (under the promise that there are either 0 or at least $`t`$ solutions) can be written as a multivariate polynomial $`P`$ of degree $`2T`$ of the $`N`$ bits of $`X`$. This polynomial has the properties that > $`P(\stackrel{}{0})=0`$ <sup>1</sup><sup>1</sup>1Since we can always test whether we actually found a solution at the expense of one more query, we can assume the algorithm always gives the right answer ‘no’ if the input contains only 0s. Hence $`s(0)=0`$. However, our results remain unaffected up to constant factors if we also allow a small error here (i.e. $`0s(0)\epsilon `$). 
> $`1-\epsilon \le P(X)\le 1`$ whenever $`|X|\in [t,N]`$ By symmetrizing, $`P`$ can be reduced to a single-variate polynomial $`s`$ of degree $`d\le 2T`$ with the following properties: > $`s(0)=0`$ > $`1-\epsilon \le s(x)\le 1`$ for all integers $`x\in [t,N]`$ We will prove a lower bound on $`\epsilon `$ in terms of $`d`$. Since $`d\le 2T`$, this will imply a lower bound on $`\epsilon `$ in terms of $`T`$. Our proof uses three results about polynomials. The first gives a general bound for polynomials that are bounded by 1 at integer points \[11, p. 980\]: ###### Theorem 15 (Coppersmith & Rivlin) For every polynomial $`p`$ of degree $`d`$ that has absolute value $$|p(x)|\le 1\text{ for all integers }x\in [0,n],$$ we have $$|p(x)|<ae^{bd^2/n}\text{ for all real }x\in [0,n],$$ where $`a,b>0`$ are universal constants. (No explicit values for $`a`$ and $`b`$ are given in .) The second two tools concern the Chebyshev polynomials $`T_d`$, defined as: $$T_d(x)=\frac{1}{2}\left(\left(x+\sqrt{x^2-1}\right)^d+\left(x-\sqrt{x^2-1}\right)^d\right).$$ $`T_d`$ has degree $`d`$ and its absolute value $`|T_d(x)|`$ is bounded by 1 if $`x\in [-1,1]`$. Among all polynomials with those two properties, $`T_d`$ grows fastest on the interval $`[1,\mathrm{})`$ (\[36, p.108\] and \[32, Fact 2\]): ###### Theorem 16 If $`q`$ is a polynomial of degree $`d`$ such that $`|q(x)|\le 1`$ for all $`x\in [-1,1]`$ then $`|q(x)|\le |T_d(x)|`$ for all $`x\ge 1`$. Paturi (\[32, before Fact 2\] and personal communication) proved ###### Lemma 3 (Paturi) $`T_d(1+\mu )\le e^{2d\sqrt{2\mu +\mu ^2}}`$ for all $`\mu \ge 0`$. Proof For $`x=1+\mu `$: $`T_d(x)\le (x+\sqrt{x^2-1})^d=(1+\mu +\sqrt{2\mu +\mu ^2})^d\le (1+2\sqrt{2\mu +\mu ^2})^d\le e^{2d\sqrt{2\mu +\mu ^2}}`$. $`\mathrm{\square }`$ Now we can prove: ###### Theorem 17 Let $`1\le t<N`$ be an integer. Every polynomial $`s`$ of degree $`d\le N-t`$ such that $`s(0)=0`$ and $`1-\epsilon \le s(x)\le 1`$ for all integers $`x\in [t,N]`$ has $$\epsilon \ge \frac{1}{a}e^{-bd^2/(N-t)-4d\sqrt{tN/(N-t)^2}},$$ where $`a,b`$ are as in Theorem 15. Proof A polynomial $`p`$ with $`p(0)=0`$ and $`p(x)=1`$ for all integers $`x\in [t,N]`$ must have degree $`>N-t`$. Since $`d\le N-t`$ for our $`s`$, we have $`\epsilon >0`$. Now $`p(x)=1-s(N-x)`$ has degree $`d`$ and > $`0\le p(x)\le \epsilon `$ for all integers $`x\in [0,N-t]`$ > $`p(N)=1`$ Applying Theorem 15 to $`p/\epsilon `$ (which is bounded by 1 at integer points) with $`n=N-t`$ we obtain: $$|p(x)|<\epsilon ae^{bd^2/(N-t)}\text{ for all real }x\in [0,N-t].$$ Now we rescale $`p`$ to $`q(x)=p((x+1)(N-t)/2)`$ (i.e. the domain $`[0,N-t]`$ is transformed to $`[-1,1]`$), which has the following properties: > $`|q(x)|<\epsilon ae^{bd^2/(N-t)}\text{ for all real }x\in [-1,1]`$ > $`q(1+\mu )=p(N)=1`$ for $`\mu =2t/(N-t)`$. Thus $`q`$ is “small” on all $`x\in [-1,1]`$ and “big” somewhere outside this interval ($`q(1+\mu )=1`$). Linking this with Theorem 16 and Lemma 3 we obtain $`1`$ $`=`$ $`q(1+\mu )`$ $`\le `$ $`\epsilon ae^{bd^2/(N-t)}|T_d(1+\mu )|`$ $`\le `$ $`\epsilon ae^{bd^2/(N-t)}e^{2d\sqrt{2\mu +\mu ^2}}`$ $`=`$ $`\epsilon ae^{bd^2/(N-t)+2d\sqrt{4t/(N-t)+4t^2/(N-t)^2}}`$ $`=`$ $`\epsilon ae^{bd^2/(N-t)+4d\sqrt{tN/(N-t)^2}}.`$ Rearranging gives the bound. 
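Lemma 3 is also easy to sanity-check numerically from the closed form of $`T_d`$; the following small sketch (ours) confirms the inequality at a few sample points:

```python
import math

def T(d, x):
    # closed form of the Chebyshev polynomial, valid for x >= 1
    s = math.sqrt(x * x - 1.0)
    return 0.5 * ((x + s) ** d + (x - s) ** d)

for d in (1, 2, 5, 10, 20):
    for mu in (0.0, 1e-3, 0.1, 0.5, 2.0):
        assert T(d, 1.0 + mu) <= math.exp(2 * d * math.sqrt(2 * mu + mu * mu)) + 1e-9
print("T_d(1+mu) <= exp(2d*sqrt(2mu+mu^2)) at all sampled points")
```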
$`\mathrm{\square }`$ Since a quantum search algorithm with $`T`$ queries induces a polynomial $`s`$ with the properties mentioned in Theorem 17 and $`d\le 2T`$, we obtain the following bound for quantum search under the promise (if $`T\le N-t`$, then $`\epsilon >0`$): Theorem 2 Under the promise that the number of solutions is at least $`t`$, every quantum search algorithm that uses $`T\le N-t`$ queries has error probability $$\epsilon \in \mathrm{\Omega }\left(e^{-4bT^2/(N-t)-8T\sqrt{tN/(N-t)^2}}\right).$$ ## Appendix B Proof of Lemma 1 Lemma 1 Let $`d\ge 1`$ and let $`f`$ denote the uniform $`d`$-level AND-OR tree on $`N`$ variables that has an OR as root. There exists a quantum algorithm $`A_1`$ that finds a 1-certificate in expected number of queries $`O(N^{1/2+1/2d})`$ if $`f(X)=1`$ and does not terminate if $`f(X)=0`$. Similarly, there exists a quantum algorithm $`A_0`$ that finds a 0-certificate in expected number of queries $`O(N^{1/2+1/d})`$ if $`f(X)=0`$ and does not terminate if $`f(X)=1`$. Proof By induction on $`d`$. Base step. For $`d=1`$ the bounds are trivial. Induction step (assume the lemma for $`d-1`$). Let $`f`$ be the uniform $`d`$-level AND-OR tree on $`N`$ variables. The root is an OR of $`N^{1/d}`$ subtrees, each of which has $`N^{(d-1)/d}`$ variables. We construct $`A_1`$ as follows. First use multi-level Grover-search as in \[9, Theorem 1.15\] to find a subtree of the root whose value is 1, if there is one. This takes $`O(N^{1/2}(\mathrm{log}N)^{d-1})`$ queries and works with bounded-error. By the induction hypothesis there exists an algorithm $`A_0^{\prime }`$ with expected number of $`O((N^{(d-1)/d})^{1/2+1/(d-1)})=O(N^{1/2+1/2d})`$ queries that finds a 1-certificate for this subtree (note that the subtree has an AND as root, so the roles of 0 and 1 are reversed). If $`A_0^{\prime }`$ has not terminated after, say, 10 times its expected number of queries, then terminate it and start all over with the multi-level Grover search. The expected number of queries for one such run is $`O(N^{1/2}(\mathrm{log}N)^{d-1})+10\cdot O(N^{1/2+1/2d})=O(N^{1/2+1/2d})`$. If $`f(X)=1`$, then the expected number of runs before success is $`O(1)`$ and $`A_1`$ will find a 1-certificate after a total expected number of $`O(N^{1/2+1/2d})`$ queries. If $`f(X)=0`$, then the subtree found by the multi-level Grover-search will have value 0, so then $`A_0^{\prime }`$ will never terminate by itself and $`A_1`$ will start over again and again but never terminates. We construct $`A_0`$ as follows. By the induction hypothesis there exists an algorithm $`A_1^{\prime }`$ with expected number of $`O((N^{(d-1)/d})^{1/2+1/2(d-1)})=O(N^{1/2})`$ queries that finds a 0-certificate for a subtree whose value is 0, and that runs forever if the subtree has value 1. $`A_0`$ first runs $`A_1^{\prime }`$ on the first subtree until it terminates, then on the second subtree, etc. If $`f(X)=0`$, then each run of $`A_1^{\prime }`$ will eventually terminate with a 0-certificate for a subtree, and the 0-certificates of the $`N^{1/d}`$ subtrees together form a 0-certificate for $`f`$. The total expected number of queries is the sum of the expectations over all $`N^{1/d}`$ subtrees, which is $`N^{1/d}\cdot O(N^{1/2})=O(N^{1/2+1/d})`$. If $`f(X)=1`$, then one of the subtrees has value 1 and the run of $`A_1^{\prime }`$ on that subtree will not terminate, so then $`A_0`$ will not terminate. $`\mathrm{\square }`$
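The exponent bookkeeping in this induction can be replayed mechanically. The sketch below (ours; exact rationals) encodes the two recurrences from the proof, with the trivial $`d=1`$ bounds as base cases, and confirms the closed forms $`1/2+1/2d`$ and $`1/2+1/d`$:

```python
from fractions import Fraction

def exponents(d):
    """(a1, a0): N-exponents of the expected queries of A_1 and A_0."""
    if d == 1:
        return Fraction(1), Fraction(3, 2)   # the trivial base-case bounds
    a1_prev, a0_prev = exponents(d - 1)
    # A_1: Grover part ~ N^{1/2} (up to logs) plus A_0' on one subtree
    a1 = max(Fraction(1, 2), Fraction(d - 1, d) * a0_prev)
    # A_0: run A_1' once on each of the N^{1/d} subtrees
    a0 = Fraction(1, d) + Fraction(d - 1, d) * a1_prev
    return a1, a0

for d in range(1, 10):
    a1, a0 = exponents(d)
    assert a1 == Fraction(1, 2) + Fraction(1, 2 * d)
    assert a0 == Fraction(1, 2) + Fraction(1, d)
```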
## 1 Introduction There is now strong evidence for atmospheric neutrino oscillations, which confirms the earlier indications of the effect. The most recent analyses of Super-Kamiokande involve the hypothesis of $`\nu _\mu \rightarrow \nu _\tau `$ oscillations with maximal mixing $`\mathrm{sin}^22\theta _{23}=1`$ and a mass splitting of $`\mathrm{\Delta }m_{23}^2=2.2\times 10^{-3}eV^2`$. Using all their data sets analysed in different ways they quote $`\mathrm{sin}^22\theta _{23}>0.82`$ and a mass splitting of $`5\times 10^{-4}eV^2<\mathrm{\Delta }m_{23}^2<6\times 10^{-3}eV^2`$ at 90% confidence level. The evidence for solar neutrino oscillations is almost as strong. There is a panoply of experiments looking at different energy ranges, and the best fit to all of them has been narrowed down to two basic scenarios corresponding to either resonant oscillations $`\nu _e\rightarrow \nu _0`$ (where for example $`\nu _0`$ may be a linear combination of $`\nu _\mu ,\nu _\tau `$) inside the Sun (MSW) or “just-so” oscillations in the vacuum between the Sun and the Earth. There are three MSW fits and one vacuum oscillation fit: (i) the small angle MSW solution is $`\mathrm{sin}^22\theta _{12}\approx 5\times 10^{-3}`$ and $`\mathrm{\Delta }m_{12}^2\approx 5\times 10^{-6}eV^2`$; (ii) the large angle MSW solution is $`\mathrm{sin}^22\theta _{12}\approx 0.76`$ and $`\mathrm{\Delta }m_{12}^2\approx 1.8\times 10^{-5}eV^2`$; (iii) an additional MSW large angle solution exists with a lower probability; (iv) The vacuum oscillation solution is $`\mathrm{sin}^22\theta _{12}\approx 0.75`$ and $`\mathrm{\Delta }m_{12}^2\approx 6.5\times 10^{-11}eV^2`$. The standard model has zero neutrino masses, so any indication of neutrino mass is very exciting since it represents new physics beyond the standard model. In this paper we shall assume the see-saw mechanism and no light sterile neutrinos. The see-saw mechanism implies that the three light neutrino masses arise from some heavy “right-handed neutrinos” $`N_R^p`$ (in general there can be $`Z`$ gauge singlets with $`p=1,\mathrm{\dots },Z`$) with a $`Z\times Z`$ Majorana mass matrix $`M_{RR}^{pq}`$ whose entries take values at or below the unification scale $`M_U\sim 10^{16}`$ GeV. The presence of electroweak scale Dirac mass terms $`m_{LR}^{ip}`$ (a $`3\times Z`$ matrix) connecting the left-handed neutrinos $`\nu _L^i`$ ($`i=1,\mathrm{\dots },3`$) to the right-handed neutrinos $`N_R^p`$ then results in a very light see-saw suppressed effective $`3\times 3`$ Majorana mass matrix $$m_{LL}=m_{LR}M_{RR}^{-1}m_{LR}^T$$ (1) for the left-handed neutrinos $`\nu _L^i`$, which are the light physical degrees of freedom observed by experiment. Not surprisingly, following the recent data, there has been a torrent of theoretical papers concerned with understanding how to extend the standard model in order to accommodate the atmospheric and solar neutrino data. Perhaps the minimal extension of the standard model capable of accounting for the atmospheric neutrino data involves the addition of a single right-handed neutrino $`N_R`$. 
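Numerically, Eq.1 is a one-liner; the toy sketch below (every number is an illustrative assumption of ours, not a fit to data) shows the see-saw suppression of Eq.1 at work:

```python
import numpy as np

# Toy see-saw, Eq.1: three left-handed neutrinos, Z = 2 heavy singlets.
v2 = 174.0                                    # GeV, electroweak-scale VEV (assumed)
m_LR = v2 * np.array([[1e-3, 1e-2],           # 3 x Z Dirac mass matrix
                      [1e-2, 1.0],
                      [1e-2, 1.0]])
M_RR = np.diag([1e14, 3e14])                  # Z x Z heavy Majorana matrix, GeV

m_LL = m_LR @ np.linalg.inv(M_RR) @ m_LR.T    # light effective 3 x 3 matrix
print(np.sort(np.abs(np.linalg.eigvalsh(m_LL))) * 1e9)   # eigenvalues in eV
```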
This is a special case of the general see-saw model with $`Z=1`$, so that $`M_{RR}`$ is a trivial $`1\times 1`$ matrix and $`m_{LR}`$ is a $`3\times 1`$ column matrix where $`m_{LR}^T=(\lambda _{\nu _e},\lambda _{\nu _\mu },\lambda _{\nu _\tau })v_2`$ with $`v_2`$ the vacuum expectation value of the Higgs field $`H_2`$ which is responsible for the neutrino Dirac masses, and the notation for the Yukawa couplings $`\lambda _i`$ indicates that we are in the charged lepton mass eigenstate basis $`e_L,\mu _L,\tau _L`$ with corresponding neutrinos $`\nu _{e_L},\nu _{\mu _L},\nu _{\tau _L}`$. Since $`M_{RR}`$ is trivially invertible the light effective mass matrix in Eq.1 in the $`\nu _{e_L},\nu _{\mu _L},\nu _{\tau _L}`$ basis is simply given by $$m_{LL}=\frac{m_{LR}m_{LR}^T}{M_{RR}}=\left(\begin{array}{ccc}\lambda _{\nu _e}^2\hfill & \lambda _{\nu _e}\lambda _{\nu _\mu }\hfill & \lambda _{\nu _e}\lambda _{\nu _\tau }\hfill \\ \lambda _{\nu _e}\lambda _{\nu _\mu }\hfill & \lambda _{\nu _\mu }^2\hfill & \lambda _{\nu _\mu }\lambda _{\nu _\tau }\hfill \\ \lambda _{\nu _e}\lambda _{\nu _\tau }\hfill & \lambda _{\nu _\mu }\lambda _{\nu _\tau }\hfill & \lambda _{\nu _\tau }^2\hfill \end{array}\right)\frac{v_2^2}{M_{RR}}.$$ (2) The matrix in Eq.2 has vanishing determinant which implies a zero eigenvalue. Furthermore the submatrix in the 23 sector has zero determinant which implies a second zero eigenvalue associated with this sector. In order to account for the Super-Kamiokande data we assumed: $$\lambda _{\nu _e}\ll \lambda _{\nu _\mu }\approx \lambda _{\nu _\tau }.$$ (3) In the $`\lambda _{\nu _e}=0`$ limit the matrix in Eq.2 has zeros along the first row and column, and so clearly $`\nu _e`$ is massless, and the other two eigenvectors are simply $$\left(\begin{array}{c}\nu _0\hfill \\ \nu _3\hfill \end{array}\right)=\left(\begin{array}{cc}c_{23}\hfill & -s_{23}\hfill \\ s_{23}\hfill & c_{23}\hfill \end{array}\right)\left(\begin{array}{c}\nu _\mu \hfill \\ \nu _\tau \hfill \end{array}\right)$$ (4) where $`t_{23}=\lambda _{\nu _\mu }/\lambda _{\nu _\tau }`$, with $`\nu _0`$ being massless, due to the vanishing of the determinant of the 23 submatrix and $`\nu _3`$ having a mass $`m_{\nu _3}=(\lambda _{\nu _\mu }^2+\lambda _{\nu _\tau }^2)v_2^2/M_{RR}`$. The Super-Kamiokande data is accounted for by choosing the parameters such that $`t_{23}\approx 1`$ and $`m_{\nu _3}\approx 5\times 10^{-2}`$ eV. In this approximation the atmospheric neutrino data is then consistent with $`\nu _\mu \rightarrow \nu _\tau `$ oscillations via two state mixing, between $`\nu _3`$ and $`\nu _0`$. Note how the single right-handed neutrino coupling to the 23 sector implies vanishing determinant of the 23 submatrix. This provides a natural explanation of both large 23 mixing angles and a hierarchy of neutrino masses in the 23 sector at the same time. In order to account for the solar neutrino data a small mass perturbation is required to lift the massless degeneracy of the two neutrinos $`\nu _0,\nu _e`$. In our original approach <sup>1</sup><sup>1</sup>1 Another approach which does not rely on additional right-handed neutrinos is to use SUSY radiative corrections so that the one-loop corrected neutrino masses are not zero but of order $`10^{-5}`$ eV suitable for the vacuum oscillation solution. we introduced additional right-handed neutrinos in order to provide a subdominant contribution to the effective mass matrix in Eq.2. 
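The rank-one structure of Eq.2 is worth seeing explicitly; in the sketch below (ours, with illustrative couplings obeying Eq.3) two eigenvalues vanish identically and the 23 mixing angle comes out maximal:

```python
import numpy as np

lam = np.array([0.01, 0.7, 0.7])    # (lambda_nu_e, lambda_nu_mu, lambda_nu_tau), assumed
m = np.outer(lam, lam)              # Eq.2 in units of v2^2 / M_RR; rank one
vals = np.linalg.eigvalsh(m)
print(vals)                                     # two exactly zero eigenvalues, one O(1)
print(np.degrees(np.arctan(lam[1] / lam[2])))   # t_23 = 1 gives theta_23 = 45 degrees
```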
To be precise we assumed a single dominant right-handed neutrino below the unification scale, with additional right-handed neutrinos at the unification scale which lead to subdominant contributions to the effective neutrino mass matrix. By appealing to quark and lepton mass hierarchy we assumed that the additional subdominant right-handed neutrinos generate a contribution $`m_{\nu _\tau }\sim m_t^2/M_U\sim 2\times 10^{-3}`$ eV, where $`m_t`$ is the top quark mass. The effect of this is to give a mass perturbation to the 33 component of the mass matrix in Eq.2, which results in $`\nu _0`$ picking up a small mass, through its $`\nu _\tau `$ component, while $`\nu _e`$ remains massless. Solar neutrino oscillations then arise from $`\nu _e\rightarrow \nu _0`$ with the mass splitting in the right range for the small angle MSW solution, controlled by a small mixing angle $`\theta _{12}\approx \lambda _{\nu _e}/\sqrt{\lambda _{\nu _\mu }^2+\lambda _{\nu _\tau }^2}`$. The main prediction of this scheme is of the neutrino oscillation $`\nu _e\rightarrow \nu _3`$ with a mass difference $`\mathrm{\Delta }m_{13}^2\approx \mathrm{\Delta }m_{23}^2`$ determined by the Super-Kamiokande data and a mixing angle $`\theta _{13}\approx \theta _{12}`$ determined by the small angle MSW solution. Such oscillations may be observable at the proposed long baseline experiments via $`\nu _3\rightarrow \nu _e`$ which implies $`\nu _\mu \rightarrow \nu _e`$ oscillations with $`\mathrm{sin}^22\theta \approx 5\times 10^{-3}`$ (the small MSW angle) and $`\mathrm{\Delta }m^2\approx 2.2\times 10^{-3}eV^2`$ (the Super-Kamiokande square mass difference). It should be clear from the foregoing discussion that the motivation for single right-handed neutrino dominance (SRHND) is that the determinant of the 23 submatrix of Eq.2 approximately vanishes, leading to a natural explanation of both large neutrino mixing angles and hierarchical neutrino masses in the 23 sector at the same time. Although the explicit example of SRHND above was based on one of the right-handed neutrinos being lighter than the others, it is clear that the idea of SRHND is more general than this. In the present paper we shall define SRHND more generally as the requirement that a single right-handed neutrino gives the dominant contribution to the 23 submatrix of the light effective neutrino mass matrix. We shall propose SRHND as a general requirement and address the following two questions: 1. What are the general conditions under which SRHND in the 23 block can arise and how can we quantify the contribution of the sub-dominant right-handed neutrinos which are responsible for breaking the massless degeneracy, and allowing the small angle MSW solution? 2. How can we understand the pattern of neutrino Yukawa couplings in Eq.3 where the assumed equality $`\lambda _{\nu _\mu }\approx \lambda _{\nu _\tau }`$ is apparently at odds with the hierarchical Yukawa couplings in the quark and charged lepton sector? In order to address the two questions above we shall discuss SRHND in the context of a $`U(1)`$ family symmetry. In fact neutrino masses and mixing angles have already been studied in the context of $`U(1)`$ family symmetry models but in the models that exist to date either SRHND is not present at all, or where it is present its presence has apparently gone unnoticed. <sup>2</sup><sup>2</sup>2We should point out that the condition of the approximately vanishing subdeterminant was first clearly stated in ref.. 
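The scale of this perturbation is simple arithmetic; as a rough order-of-magnitude check (with an electroweak-scale top mass, our round numbers):

```python
m_t, M_U = 174.0, 1.0e16        # GeV; illustrative round numbers
m_pert = m_t**2 / M_U * 1e9     # convert GeV to eV
print(m_pert)                   # ~3e-3 eV, the ~2e-3 eV scale quoted above
print(m_pert**2)                # ~1e-5 eV^2, the right ballpark for the MSW splitting
```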
However all the actual examples presented there correspond to a single right-handed neutrino giving the dominant contribution to the 23 block of the effective neutrino mass matrix, which is essentially the mechanism first proposed in ref.. Also note that SRHND has very recently been applied to an $`SU(2)`$ family symmetry model. Where there is no SRHND, either the contribution to the 23 mixing angles coming from the neutrino matrix is small, or the 23 neutrino mass hierarchy is not described by the Wolfenstein expansion parameter. Where the 23 neutrino mass hierarchies are described by the Wolfenstein expansion parameter and large 23 mixing angles naturally arise, we shall show that the physical reason why these models are successful is that a single right-handed neutrino is giving the dominant contribution to the 23 submatrix of $`m_{LL}`$. We shall give general conditions that theories with $`U(1)`$ family symmetry must satisfy in order to have SRHND and show that the models in satisfy these conditions. ## 2 MSSM with $`Z`$ Right-handed Neutrinos To fix the notation, we assume the Yukawa terms of the minimal supersymmetric standard model (MSSM) augmented by $`Z`$ right-handed neutrinos, $`\mathcal{L}_{yuk}=ϵ_{ab}\left[Y_{ij}^uH_u^aQ_i^bU_j^c+Y_{ij}^dH_d^aQ_i^bD_j^c+Y_{ij}^eH_d^aL_i^bE_j^c-Y_{ip}^\nu H_u^aL_i^bN_p^c+{\displaystyle \frac{1}{2}}M_{RR}^{pq}N_p^cN_q^c\right]`$ $`+H.c.`$ (5) where $`ϵ_{ab}=-ϵ_{ba}`$, $`ϵ_{12}=1`$, and the remaining notation is standard except that the $`Z`$ right-handed neutrinos $`N_R^p`$ have been replaced by their CP conjugates $`N_p^c`$ with $`p,q=1,\mathrm{\dots },Z`$. When the two Higgs doublets get their vacuum expectation values (VEVS) $`<H_u^2>=v_2`$, $`<H_d^1>=v_1`$ with $`\mathrm{tan}\beta \equiv v_2/v_1`$ we find the terms $$\mathcal{L}_{yuk}=v_2Y_{ij}^uU_iU_j^c+v_1Y_{ij}^dD_iD_j^c+v_1Y_{ij}^eE_iE_j^c+v_2Y_{ip}^\nu N_iN_p^c+\frac{1}{2}M_{RR}^{pq}N_p^cN_q^c+H.c.$$ (6) Replacing CP conjugate fields we can write in a matrix notation $$\mathcal{L}_{yuk}=\overline{U}_Lv_2Y^uU_R+\overline{D}_Lv_1Y^dD_R+\overline{E}_Lv_1Y^eE_R+\overline{N}_Lv_2Y^\nu N_R+\frac{1}{2}N_R^TM_{RR}N_R+H.c.$$ (7) where we have assumed that all the masses and Yukawa couplings are real and written $`Y^{*}=Y`$. The diagonal mass matrices are given by the following unitary transformations $`v_2Y_{diag}^u=V_{uL}v_2Y^uV_{uR}^{\dagger }=\mathrm{diag}(\mathrm{m}_\mathrm{u},\mathrm{m}_\mathrm{c},\mathrm{m}_\mathrm{t}),`$ $`v_1Y_{diag}^d=V_{dL}v_1Y^dV_{dR}^{\dagger }=\mathrm{diag}(\mathrm{m}_\mathrm{d},\mathrm{m}_\mathrm{s},\mathrm{m}_\mathrm{b}),`$ $`v_1Y_{diag}^e=V_{eL}v_1Y^eV_{eR}^{\dagger }=\mathrm{diag}(\mathrm{m}_\mathrm{e},\mathrm{m}_\mu ,\mathrm{m}_\tau ),`$ $`M_{RR}^{diag}=\mathrm{\Omega }_{RR}M_{RR}\mathrm{\Omega }_{RR}^T=\mathrm{diag}(\mathrm{M}_{\mathrm{R1}},\mathrm{\dots },\mathrm{M}_{\mathrm{RZ}}),`$ (8) where the unitary transformations are also orthogonal. 
From Eq.1 the light effective left-handed Majorana neutrino mass matrix is $$m_{LL}=v_2^2Y_\nu M_{RR}^{-1}Y_\nu ^T$$ (9) Having constructed the light Majorana mass matrix it must then be diagonalised by unitary transformations, $$m_{LL}^{diag}=V_{\nu L}m_{LL}V_{\nu L}^T=\mathrm{diag}(\mathrm{m}_{\nu _1},\mathrm{m}_{\nu _2},\mathrm{m}_{\nu _3}).$$ (10) The CKM matrix is given by $$V_{CKM}=V_{uL}V_{dL}^{\dagger }$$ (11) and its leptonic analogue is the MNS matrix $$V_{MNS}=V_{\nu L}V_{eL}^{\dagger }.$$ (12) ## 3 Wolfenstein Expansions The Wolfenstein parametrisation of the CKM matrix yields the approximate form: $$V_{CKM}\approx \left(\begin{array}{ccc}1\hfill & \lambda \hfill & \lambda ^3\hfill \\ \lambda \hfill & 1\hfill & \lambda ^2\hfill \\ \lambda ^3\hfill & \lambda ^2\hfill & 1\hfill \end{array}\right)$$ (13) where $`\lambda \equiv V_{us}\approx 0.22`$. The horizontal quark and lepton mass ratios may similarly be expanded in terms of the Wolfenstein parameter: <sup>3</sup><sup>3</sup>3We follow the expansions in ref. even though $`\frac{m_e}{m_\tau }\approx \lambda ^5`$ is a better fit. $$\frac{m_u}{m_t}\approx \lambda ^8,\frac{m_c}{m_t}\approx \lambda ^4,\frac{m_d}{m_b}\approx \lambda ^4,\frac{m_s}{m_b}\approx \lambda ^2,\frac{m_e}{m_\tau }\approx \lambda ^4,\frac{m_\mu }{m_\tau }\approx \lambda ^2.$$ (14) Assuming the MSSM the vertical quark and lepton mass ratios at $`M_U`$ are $$\frac{m_b}{m_t}\approx \lambda ^3,\frac{m_b}{m_\tau }\approx 1.$$ (15) Assuming that $`V_{CKM}\sim V_{uL}\sim V_{dL}`$, and the diagonal elements of the Yukawa matrices are of the same order as the eigenvalues:<sup>4</sup><sup>4</sup>4Again this is similar to ref. except that we allow a more general $`\mathrm{tan}\beta `$ dependence $$Y^u\sim \left(\begin{array}{ccc}\lambda ^8\hfill & \lambda ^5\hfill & \lambda ^3\hfill \\ \hfill & \lambda ^4\hfill & \lambda ^2\hfill \\ \hfill & \hfill & 1\hfill \end{array}\right),Y^d\sim \left(\begin{array}{ccc}\lambda ^4\hfill & \lambda ^3\hfill & \lambda ^3\hfill \\ \hfill & \lambda ^2\hfill & \lambda ^2\hfill \\ \hfill & \hfill & 1\hfill \end{array}\right)\lambda ^n$$ (16) where $`\mathrm{tan}\beta \sim \lambda ^{n-3}`$. Note that the CKM matrix only gives information about the upper triangular parts of the quark Yukawa matrices. The MNS matrix is less well determined, but Super-Kamiokande tells us that $`\theta _{23}\sim 1`$ and the small angle MSW solution implies $`\theta _{12}\sim \lambda ^2`$. In addition for $`\mathrm{\Delta }m^2>9\times 10^{-4}eV^2`$ (i.e. over most of the atmospheric range) CHOOZ fails to observe $`\nu _e\rightarrow \nu _3`$ and excludes $`\mathrm{sin}^22\theta _{13}>0.18`$ or $`\theta _{13}>0.22`$. Hence CHOOZ allows $`\theta _{13}\sim \lambda ^2`$. If we assume for the sake of argument that $`\theta _{13}\sim \lambda ^2`$ (recall that this is a prediction of SRHND which follows from Eqs.2 and 3) then $`V_{MNS}`$ is given by: $$V_{MNS}\sim \left(\begin{array}{ccc}1\hfill & \lambda ^2\hfill & \lambda ^2\hfill \\ \lambda ^2\hfill & 1\hfill & 1\hfill \\ \lambda ^2\hfill & 1\hfill & 1\hfill \end{array}\right)$$ (17) Then, in a similar way to the quarks, assuming that $`V_{MNS}\sim V_{\nu L}\sim V_{eL}`$, and the diagonal elements of the charged lepton matrix are of the same order as the eigenvalues we deduce $$Y^e\sim \left(\begin{array}{ccc}\lambda ^4\hfill & \lambda ^4\hfill & \lambda ^2\hfill \\ \hfill & \lambda ^2\hfill & 1\hfill \\ \hfill & \hfill & 1\hfill \end{array}\right)\lambda ^n.$$ (18) The same argument applied to $`m_{LL}`$ runs into trouble because the hierarchy between the second and third eigenvalues is apparently not consistent with $`\theta _{23}\sim 1`$. 
To be precise Super-Kamiokande tells us that $`m_{\nu _3}\approx 5\times 10^{-2}`$ eV, and small angle MSW tells us that $`m_{\nu _2}\approx 2\times 10^{-3}`$ eV, hence $$\frac{m_{\nu _2}}{m_{\nu _3}}\approx \lambda ^2.$$ (19) The problem is how to generate such a hierarchy in the presence of large neutrino mixing angles. Note that this problem can be avoided for the charged lepton matrix in Eq.18 due to the undetermined 32 element which can be small, but for the symmetric neutrino matrix it is a problem. Fortunately the solution is provided by SRHND which implies that $`m_{LL}`$ is given from Eqs.2 and 3 as: $$m_{LL}\sim \left(\begin{array}{ccc}\lambda ^4\hfill & \lambda ^2\hfill & \lambda ^2\hfill \\ \lambda ^2\hfill & 1\hfill & 1\hfill \\ \lambda ^2\hfill & 1\hfill & 1\hfill \end{array}\right)m_{\nu _3}$$ (20) It is clear that SRHND leads to the prediction $$\frac{m_{\nu _1}}{m_{\nu _3}}\sim \lambda ^4,$$ (21) in addition to the previously mentioned prediction $`\theta _{13}\sim \lambda ^2`$. The key to obtaining the hierarchy in Eq.19 from Eq.20 is the requirement that the determinant of the 23 submatrix must vanish to order $`\lambda ^2`$. Since this subdeterminant naturally vanishes for a single right-handed neutrino coupling to the 23 sector, as in Eq.2, all that is required is for the subdominant right-handed neutrino to generate a perturbation to the masses in the 23 sector which is of order $`\lambda ^2`$ smaller than the leading contribution. We shall now discuss how this can come about in the framework of theories with broken $`U(1)`$ family symmetry. ## 4 U(1) Family Symmetry The idea of accounting for the fermion mass spectrum via a broken family symmetry has a long history. For definiteness we shall focus on a particular class of model based on a single pseudo-anomalous $`U(1)_X`$ gauged family symmetry. We assume that the $`U(1)_X`$ is broken by the equal VEVs of two MSSM singlets $`\theta ,\overline{\theta }`$ which have vector-like charges $`\pm 1`$. Theories in which the $`U(1)_X`$ is broken by a chiral MSSM singlet $`\chi `$ which has charge of one sign only, say $`+1`$, have also been proposed. In all these cases the $`U(1)_X`$ has anomalies in the effective low energy theory below $`M_U`$ but these are compensated by string theory effects at $`M_U`$ and the Green-Schwartz mechanism provides a dimension-five interaction term, whose structure demands a specific pattern among the anomaly coefficients: $$A(SU(3)_c^2U(1)_X):A(SU(2)_L^2U(1)_X):A(U(1)_Y^2U(1)_X)=1:1:5/3$$ (22) The $`U(1)_X`$ breaking scale is set by $`<\theta >=<\overline{\theta }>`$ where the VEVs arise from a Green-Schwartz computable Fayet-Illiopoulos $`D`$-term which determines these VEVs to be one or two orders of magnitude below $`M_U`$. Additional exotic matter which exists in vector-like pairs with opposite charges $`\pm X_i`$ at a heavy mass scale $`M_V`$ (generated by the VEVs of yet more singlets) allows the Wolfenstein parameter to be generated by the ratio $$\frac{<\theta >}{M_V}=\frac{<\overline{\theta }>}{M_V}=\lambda \approx 0.22$$ (23) The idea is that at tree-level the $`U(1)_X`$ family symmetry only permits third family Yukawa couplings (e.g. the top quark Yukawa coupling). Smaller Yukawa couplings are generated effectively from higher dimension non-renormalisable operators corresponding to insertions of $`\theta `$ and $`\overline{\theta }`$ fields and hence to powers of the expansion parameter in Eq.23, which we have identified with the Wolfenstein parameter. 
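It is a useful sanity check that these powers of $`\lambda `$ really do track the measured ratios. The rough comparison below uses approximate low-scale mass values of our own choosing; since the text's relations hold at $`M_U`$, only orders of magnitude are meaningful here:

```python
lam = 0.22
checks = {
    "m_mu/m_tau  vs lam^2": (0.106 / 1.78, lam**2),
    "m_s/m_b     vs lam^2": (0.10 / 4.2, lam**2),     # rough running masses
    "m_c/m_t     vs lam^4": (1.3 / 174.0, lam**4),    # ratio runs toward lam^4 at M_U
    "m_nu2/m_nu3 vs lam^2": (2e-3 / 5e-2, lam**2),    # Eq.19
}
for name, (ratio, power) in checks.items():
    print(f"{name}: {ratio:.4f} ~ {power:.4f}")
```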
The number of powers of the expansion parameter is controlled by the $`U(1)_X`$ charge of the particular MSSM operator. <sup>5</sup><sup>5</sup>5Of course this simple picture may in reality be more complicated if several different vector mass scales are assumed, and taking into account the order one dimensionless couplings involving different $`\theta `$ and $`\overline{\theta }`$ fields coupling the MSSM fields to the heavy vector matter. By making various dynamical assumptions it is possible to generate several different expansion parameters which may appear in non-integer powers. It is also possible to introduce several $`U(1)`$ symmetries, such as a model recently proposed based on a family-independent pseudo-anomalous $`U(1)_X`$ symmetry together with two further anomaly-free but family-dependent $`U(1)`$ symmetries. For our purposes here it is sufficient to assume a single $`U(1)_X`$ family symmetry with the single Wolfenstein expansion parameter in Eq.23 raised to integer powers. The MSSM fields $`Q_i`$, $`U_j^c`$, $`D_j^c`$, $`L_i`$, $`E_j^c`$, $`H_u`$, $`H_d`$ are assigned $`U(1)_X`$ charges $`q_i`$, $`u_j`$, $`d_j`$, $`l_i`$, $`e_j`$, $`h_u`$, $`h_d`$ consistent with Eq.22. This restricts the physical values of the charges which we are permitted to assign. <sup>6</sup><sup>6</sup>6This restriction may be relaxed by assuming that the heavy vector matter has $`X_i`$ charges chosen to cancel the anomalies, but we prefer instead to regard this as a welcome constraint on the charges. We shall, however, allow heavy MSSM singlets with arbitrary charges to cancel $`U(1)_X^3`$ anomalies. We do not impose any restriction on the $`Z`$ right-handed MSSM singlet neutrinos $`N_p^c`$ which therefore have unconstrained charges $`n_p`$. We shall suppose that the right-handed neutrino Majorana mass matrix $`M_{RR}`$ arises from the VEV of another MSSM singlet $`\mathrm{\Sigma }`$ with charge $`\sigma `$. The anomaly restriction means that there must exist a physical basis where the Higgs charges are equal and opposite, $`h_u=-h_d`$ in order to cancel their contributions to the anomalies, and gives a zero charge to the $`\mu H_uH_d`$ term. The other operators in Eq.5 will in general have non-zero charges and from Eqs.23, the associated Yukawa couplings and Majorana mass terms may then be expanded in powers of the Wolfenstein parameter, $`Y_{ij}^u\sim \lambda ^{|q_i+u_j+h_u|},Y_{ij}^d\sim \lambda ^{|q_i+d_j+h_d|},Y_{ij}^e\sim \lambda ^{|l_i+e_j+h_d|},`$ (24) $`Y_{ip}^\nu \sim \lambda ^{|l_i+n_p+h_u|},M_{RR}^{pq}\sim \lambda ^{|n_p+n_q+\sigma |}<\mathrm{\Sigma }>.`$ (25) In the physical basis of charges discussed so far the quarks and leptons must contribute to the anomalies in the ratios in Eq.22. A corollary of this is that the physical charges are related to traceless charges (denoted by primes) by two flavour-independent $`SU(5)`$ shifts $`\mathrm{\Delta }t\equiv \mathrm{\Delta }q=\mathrm{\Delta }u=\mathrm{\Delta }e`$ and $`\mathrm{\Delta }f\equiv \mathrm{\Delta }l=\mathrm{\Delta }d`$: $$q_i^{\prime }=q_i+\mathrm{\Delta }t,u_i^{\prime }=u_i+\mathrm{\Delta }t,e_i^{\prime }=e_i+\mathrm{\Delta }t,l_i^{\prime }=l_i+\mathrm{\Delta }f,d_i^{\prime }=d_i+\mathrm{\Delta }f,$$ (26) It is possible to absorb the $`SU(5)`$ shifts into the Higgs charges by defining $$h_u^{\prime }\equiv h_u-2\mathrm{\Delta }t,h_d^{\prime }\equiv h_d-\mathrm{\Delta }t-\mathrm{\Delta }f,$$ (27) so that $$q_i+u_j+h_u=q_i^{\prime }+u_j^{\prime }+h_u^{\prime },q_i+d_j+h_d=q_i^{\prime }+d_j^{\prime }+h_d^{\prime },l_i+e_j+h_d=l_i^{\prime }+e_j^{\prime }+h_d^{\prime }.$$ (28) The couplings in Eq.24 may then be equivalently expanded in terms of primed charges. 
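Eqs.24 and 25 amount to a texture generator: each entry is the expansion parameter raised to the modulus of the total charge of the operator. A minimal sketch (ours; the example charges are one consistent choice, anticipating the $`Y^u`$ powers of Eq.34 below):

```python
import numpy as np

def texture(left, right, h, lam=0.22):
    """Entries of order lam^{|q_i + u_j + h|}, the pattern of Eq.24."""
    return np.array([[lam ** abs(li + rj + h) for rj in right] for li in left])

# one consistent (assumed) charge assignment; reproduces the Y^u powers of Eq.34
q, u, h_u = (3, 2, 0), (5, 2, 0), 0
print(np.round(texture(q, u, h_u), 6))   # only the 33 entry is O(1)
```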
Tracelessness implies that the first family charges may be eliminated $$q_1^{\prime }=-q_2^{\prime }-q_3^{\prime },u_1^{\prime }=-u_2^{\prime }-u_3^{\prime },d_1^{\prime }=-d_2^{\prime }-d_3^{\prime },l_1^{\prime }=-l_2^{\prime }-l_3^{\prime },e_1^{\prime }=-e_2^{\prime }-e_3^{\prime }.$$ (29) Since the 33 component of the Yukawa matrices are either renormalisable or related to $`\mathrm{tan}\beta `$ dependent integers we can eliminate the primed Higgs charges using $$h_u^{\prime }=-q_3^{\prime }-u_3^{\prime },h_d^{\prime }=n_d-q_3^{\prime }-d_3^{\prime }=n_e-l_3^{\prime }-e_3^{\prime }.$$ (30) Using Eqs.28, 29, 30 the Yukawa matrices in 24 may then be expressed as $$Y^u\sim \left(\begin{array}{ccc}\lambda ^{|\gamma _u+\delta _u|}\hfill & \lambda ^{|\gamma _u+\beta _u|}\hfill & \lambda ^{|\gamma _u|}\hfill \\ \lambda ^{|\alpha _u+\delta _u|}\hfill & \lambda ^{|\alpha _u+\beta _u|}\hfill & \lambda ^{|\alpha _u|}\hfill \\ \lambda ^{|\delta _u|}\hfill & \lambda ^{|\beta _u|}\hfill & 1\hfill \end{array}\right),Y^d\sim \left(\begin{array}{ccc}\lambda ^{|\gamma _d+\delta _d+n_d|}\hfill & \lambda ^{|\gamma _d+\beta _d+n_d|}\hfill & \lambda ^{|\gamma _d+n_d|}\hfill \\ \lambda ^{|\alpha _d+\delta _d+n_d|}\hfill & \lambda ^{|\alpha _d+\beta _d+n_d|}\hfill & \lambda ^{|\alpha _d+n_d|}\hfill \\ \lambda ^{|\delta _d+n_d|}\hfill & \lambda ^{|\beta _d+n_d|}\hfill & \lambda ^{|n_d|}\hfill \end{array}\right),$$ $$Y^e\sim \left(\begin{array}{ccc}\lambda ^{|\gamma _e+\delta _e+n_e|}\hfill & \lambda ^{|\gamma _e+\beta _e+n_e|}\hfill & \lambda ^{|\gamma _e+n_e|}\hfill \\ \lambda ^{|\alpha _e+\delta _e+n_e|}\hfill & \lambda ^{|\alpha _e+\beta _e+n_e|}\hfill & \lambda ^{|\alpha _e+n_e|}\hfill \\ \lambda ^{|\delta _e+n_e|}\hfill & \lambda ^{|\beta _e+n_e|}\hfill & \lambda ^{|n_e|}\hfill \end{array}\right),$$ (31) where $`\alpha _u=\alpha _d=q_2^{\prime }-q_3^{\prime },\alpha _e=l_2^{\prime }-l_3^{\prime },`$ $`\beta _u=u_2^{\prime }-u_3^{\prime },\beta _d=d_2^{\prime }-d_3^{\prime },\beta _e=e_2^{\prime }-e_3^{\prime },`$ $`\gamma _u=\gamma _d=-q_2^{\prime }-2q_3^{\prime },\gamma _e=-l_2^{\prime }-2l_3^{\prime },`$ $`\delta _u=-u_2^{\prime }-2u_3^{\prime },\delta _d=-d_2^{\prime }-2d_3^{\prime },\delta _e=-e_2^{\prime }-2e_3^{\prime }.`$ (32) The above analysis applies quite generally to any theory based on a single pseudo-anomalous $`U(1)_X`$ gauged family symmetry. However the quark and lepton charges may be constrained by imposing unification constraints on the theory. For example: * $`SU(5)`$ unification implies <sup>7</sup><sup>7</sup>7Note that $`SU(5)`$ automatically guarantees Green-Schwartz anomaly cancellation for any choice of charges. $`l_i=d_i`$, $`q_i=u_i=e_i`$ but allows $`Z`$ arbitrary right-handed neutrino charges $`n_p`$. * $`SU(2)_R`$ gauge symmetry implies that $`Z=3`$ with $`n_i=e_i`$ and $`d_i=u_i`$. * Left-right symmetry is stronger than $`SU(2)_R`$ and implies $`n_i=e_i=l_i`$ and $`d_i=u_i=q_i`$. * Pati-Salam $`SU(4)\times SU(2)_L\times SU(2)_R`$ implies $`l_i=q_i`$ and $`u_i=d_i=e_i=n_i`$. * $`SO(10)`$ unification implies $`l_i=d_i=q_i=u_i=e_i=n_i`$ * Trinification $`SU(3)^3`$ implies $`u_i=d_i`$, $`l_i=e_i=n_i`$ and unconstrained $`q_i`$ * Flipped $`SU(5)\times U(1)`$ implies $`q_i=d_i=n_i`$, $`u_i=l_i`$ and unconstrained $`e_i`$ As discussed in ref. these examples are difficult to reconcile with the data without either appealing to group theoretical Clebsch relations or carefully chosen dynamical assumptions. <sup>8</sup><sup>8</sup>8The $`SU(3)^3`$ model discussed there looks the most natural. We shall therefore not impose such gauge unification constraints here but instead consider the general case in Eq.31. By comparing Eq.31 to Eqs.16, 18 suitable choices of the integers $`\alpha _a,\beta _a,\gamma _a,\delta _a`$, (where $`a=u,d,e`$) can readily be deduced. 
Note especially that $`m_b/m_\tau \approx 1`$ implies $$n=|n_e|=|n_d|,\mathrm{tan}\beta \sim \lambda ^{n-3}.$$ (33) It is straightforward to scan over all the possible positive and negative integers $`\alpha _a,\beta _a,\gamma _a,\delta _a,n_a`$ to find acceptable Yukawa matrices from Eq.31. For example a special case is when $`\alpha _a,\beta _a,\gamma _a,\delta _a,n_a`$ are all positive definite integers. In this case from Eqs.16, 18, 31 we find $`\alpha _u=\alpha _d=2`$, $`\alpha _e=0`$, $`\beta _u=2`$, $`\beta _d=0`$, $`\beta _e=2`$, $`\gamma _u=\gamma _d=3`$, $`\gamma _e=2`$, $`\delta _u=5`$, $`\delta _d=1`$, $`\delta _e=2`$, $`n_e=n_d=n`$. <sup>9</sup><sup>9</sup>9 Note that $`n_e=n_d=n`$ imposes the non-trivial constraint that $`\alpha _e+\beta _e+\gamma _e+\delta _e=\alpha _d+\beta _d+\gamma _d+\delta _d`$ which is satisfied here. If for example we had taken $`\frac{m_e}{m_\tau }\approx \lambda ^5`$ it would not be satisfied. The Yukawa matrices are then fully specified in this example, up to a $`\mathrm{tan}\beta `$ dependence: $$Y^u\sim \left(\begin{array}{ccc}\lambda ^8\hfill & \lambda ^5\hfill & \lambda ^3\hfill \\ \lambda ^7\hfill & \lambda ^4\hfill & \lambda ^2\hfill \\ \lambda ^5\hfill & \lambda ^2\hfill & 1\hfill \end{array}\right),Y^d\sim \left(\begin{array}{ccc}\lambda ^4\hfill & \lambda ^3\hfill & \lambda ^3\hfill \\ \lambda ^3\hfill & \lambda ^2\hfill & \lambda ^2\hfill \\ \lambda \hfill & 1\hfill & 1\hfill \end{array}\right)\lambda ^n,Y^e\sim \left(\begin{array}{ccc}\lambda ^4\hfill & \lambda ^4\hfill & \lambda ^2\hfill \\ \lambda ^2\hfill & \lambda ^2\hfill & 1\hfill \\ \lambda ^2\hfill & \lambda ^2\hfill & 1\hfill \end{array}\right)\lambda ^n.$$ (34) Given $`\alpha _a,\beta _a,\gamma _a,\delta _a,n_a`$ above and using Eqs.29, 30, 32 we find the following traceless: $`q_i^{\prime }={\displaystyle \frac{1}{3}}(4,1,-5),u_i^{\prime }={\displaystyle \frac{1}{3}}(8,-1,-7),d_i^{\prime }={\displaystyle \frac{1}{3}}(2,-1,-1)`$ $`l_i^{\prime }={\displaystyle \frac{1}{3}}(4,-2,-2),e_i^{\prime }={\displaystyle \frac{1}{3}}(2,2,-4),h_u^{\prime }=4,h_d^{\prime }=2+n`$ (35) The physical (unprimed) charges are by definition those which lead to the Higgs charges satisfying $`h_u=-h_d`$. Eq.27 shows that there remains an ambiguity in the choice of Higgs charges and hence in $`\mathrm{\Delta }t,\mathrm{\Delta }f`$ which are two unknowns constrained by only one relation, namely $`3\mathrm{\Delta }t=-h_d^{\prime }-h_u^{\prime }-\mathrm{\Delta }f`$. We can regard $`\mathrm{\Delta }f`$ as being a completely free parameter whose choice specifies all the physical (unprimed) charges uniquely. For example we may set the Higgs charges to be zero by taking <sup>10</sup><sup>10</sup>10Note that in general both $`\mathrm{\Delta }t`$ and $`\mathrm{\Delta }f`$ are non-zero and so the family symmetry $`U(1)_X`$ cannot be anomaly-free and is instead pseudo-anomalous . $`\mathrm{\Delta }t=-2`$, $`\mathrm{\Delta }f=-n`$ which enables the physical (unprimed) charges to be deduced from Eq.26. Other choices of $`\mathrm{\Delta }f`$ will lead to different choices of physical charges. ## 5 SRHND and U(1) Family Symmetry We now turn our attention to the neutrino sector, which is the main focus of this paper. Since the $`Z`$ right-handed neutrinos are not constrained by anomaly cancellation it is most convenient to work with physical (unprimed) charges as in Eq.25. $`Y^\nu `$ clearly depends on the combination of lepton and Higgs charges $$l_i+h_u=l_i^{\prime }+h_u^{\prime }/3-2h_d^{\prime }/3-5\mathrm{\Delta }f/3$$ which is not fixed by the primed charges due to the remaining freedom in $`\mathrm{\Delta }f`$. 
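The charge bookkeeping of Eqs.29, 30 and 35 can be verified mechanically; the sketch below (ours, exact rationals) confirms tracelessness, the agreement of the two Eq.30 expressions for $`h_d^{\prime }`$, and that the charges of Eq.35 reproduce the exponents of Eq.34:

```python
from fractions import Fraction as F

qp = [F(4,3), F(1,3), F(-5,3)]; up = [F(8,3), F(-1,3), F(-7,3)]
dp = [F(2,3), F(-1,3), F(-1,3)]; lp = [F(4,3), F(-2,3), F(-2,3)]
ep = [F(2,3), F(2,3), F(-4,3)]
n = 1                                  # any n works; tan(beta) fixes it physically
assert sum(qp) == sum(up) == sum(dp) == sum(lp) == sum(ep) == 0   # traceless
hu = -qp[2] - up[2]                    # Eq.30: h_u' = 4
hd = n - qp[2] - dp[2]                 # Eq.30: h_d' = 2 + n
assert hd == n - lp[2] - ep[2]         # the two Eq.30 expressions agree

Yu = [[int(abs(a + b + hu)) for b in up] for a in qp]
Yd = [[int(abs(a + b + hd)) for b in dp] for a in qp]
Ye = [[int(abs(a + b + hd)) for b in ep] for a in lp]
print(Yu)   # [[8, 5, 3], [7, 4, 2], [5, 2, 0]]  -> Y^u powers of Eq.34
print(Yd)   # with n = 1: [[5, 4, 4], [4, 3, 3], [2, 1, 1]] = lam^n times Eq.34
print(Ye)   # with n = 1: [[5, 5, 3], [3, 3, 1], [3, 3, 1]]
```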
In dealing with the neutrino sector it is convenient to absorb the Higgs charge $`h_u`$ into the definition of the lepton charges $`l_i`$ so that Eq.25 becomes $$Y_{ip}^\nu \sim \lambda ^{|l_i+n_p|},M_{RR}^{pq}\sim \lambda ^{|n_p+n_q+\sigma |}<\mathrm{\Sigma }>$$ (36) where the redefined $`l_i`$ are related to the traceless charges $`l_i^{\prime }`$ by arbitrary family-independent shifts, and using Eq.35 may be written as: $$l_i=(2+l_3,l_3,l_3)$$ (37) where the numerical value of $`l_3`$ remains a free choice. The light Majorana matrix may then be constructed from Eq.9 which we repeat below $$m_{LL}=v_2^2Y_\nu M_{RR}^{-1}Y_\nu ^T$$ (38) If we were to assume positive definite values for $`l_i+n_p`$ and $`n_p+n_q+\sigma `$ then the modulus signs could be dropped and the right-handed neutrino charges $`n_p`$ would cancel when $`m_{LL}`$ is constructed from Eqs.38 and 36. The argument relies on the observation that if the modulus signs are dropped from Eq.36 one can always write $`Y^\nu =diag(\lambda ^{l_1},\lambda ^{l_2},\lambda ^{l_3})Y_Ddiag(\lambda ^{n_1},\mathrm{\dots },\lambda ^{n_Z}),`$ $`M_{RR}=diag(\lambda ^{n_1},\mathrm{\dots },\lambda ^{n_Z})M_Mdiag(\lambda ^{n_1},\mathrm{\dots },\lambda ^{n_Z})`$ (39) where $`Y_D`$ and $`M_M`$ are democratic matrices. Inserting Eq.39 into Eq.9 the right-handed neutrino charges are seen to cancel. Such a cancellation would imply that every right-handed neutrino would contribute equally to every entry in $`m_{LL}`$ regardless of the right-handed neutrino charges. From the point of view of SRHND it is therefore important that such a cancellation does not take place, and so we shall require that at least some of the combinations $`l_i+n_p`$ and $`n_p+n_q+\sigma `$ take negative values. In such a case the choice of right-handed neutrino charges will play an important role in determining $`m_{LL}`$, and each particular choice of $`n_p`$ must be analysed separately. At first sight the general case of $`Z`$ right-handed neutrinos with unconstrained charges $`n_p`$ leading to non-positive definite exponents in Eq.36 seems to make the determination of $`m_{LL}`$ an intractable problem. However we have already argued that the atmospheric neutrino data suggests SRHND in the 23 sector and this will lead to $`m_{LL}`$ of the form given in Eq.20. We shall now formulate the general conditions which will lead to SRHND in the 23 sector. ### 5.1 One Right-handed Neutrino Let us first consider the case $`Z=1`$ where there is just a single right-handed neutrino, which for later convenience we shall refer to as $`N_3^c`$ with charge $`n_3`$. In this case Eq.36 becomes $$Y_{i3}^\nu \sim \lambda ^{|l_i+n_3|},M_{RR}^{33}\sim \lambda ^{|2n_3+\sigma |}<\mathrm{\Sigma }>.$$ (40) Being a $`1\times 1`$ matrix $`M_{RR}^{33}`$ is trivially inverted and we obtain from Eqs.38, $$m_{LL}^{ij}\sim \lambda ^{|l_i+n_3|}\lambda ^{|l_j+n_3|}\frac{v_2^2}{M_{RR}^{33}}$$ (41) which should be compared to Eq.2, where we identify <sup>11</sup><sup>11</sup>11Even though the couplings in Eq.2 were defined in the diagonal charged lepton basis, the identification is still valid to a consistent order of the expansion parameter. $$Y_{i3}^\nu \sim \lambda ^{|l_i+n_3|}\sim (\lambda _{\nu _e},\lambda _{\nu _\mu },\lambda _{\nu _\tau })$$ (42) Then Eq.20 requires that $$|l_2+n_3|=|l_3+n_3|,|l_1+n_3|-|l_3+n_3|=2$$ (43) If both $`l_2+n_3`$ and $`l_3+n_3`$ have the same sign (SS) then $`l_2=l_3`$, whereas if they have opposite signs (OS) then $`l_2+l_3=-2n_3`$. 
Similarly if both $`l_1+n_3`$ and $`l_3+n_3`$ have the SS then $`l_1-l_3=2`$, whereas if they have OS then $`l_1+l_3=-2-2n_3`$. Interestingly the SS cases $`l_2=l_3`$, $`l_1-l_3=2`$ have already arisen in the example in Eq.35, which corresponds to $`l_i`$ charges in Eq.37. This is no surprise since it originates from the charged lepton Yukawa matrix in Eq.18 which follows from the assumption $`V_{MNS}\sim V_{\nu L}\sim V_{eL}`$ and the Super-Kamiokande data and the MSW solution. To summarise, from Eqs.40, 41 and imposing Eq.43 the single right-handed neutrino included so far leads to $$m_{LL}\sim \left(\begin{array}{ccc}\lambda ^4\hfill & \lambda ^2\hfill & \lambda ^2\hfill \\ \lambda ^2\hfill & 1\hfill & 1\hfill \\ \lambda ^2\hfill & 1\hfill & 1\hfill \end{array}\right)m_{\nu _3}$$ (44) where the atmospheric neutrino mass is given $$m_{\nu _3}\sim \lambda ^{2|l_3+n_3|-|2n_3+\sigma |}\frac{v_2^2}{<\mathrm{\Sigma }>}$$ (45) With only a single right-handed neutrino $`m_{LL}`$ in Eq.44 has two zero eigenvalues, and a vanishing determinant of the 23 submatrix, as in Eq.2. In order to implement the small angle MSW solution we need to include the effect of subdominant right-handed neutrinos which break the massless degeneracy. SRHND requires that the elements in the 23 sector of Eq.44 must receive corrections of order $`\lambda ^2`$ from the subdominant neutrinos so that the determinant of the 23 submatrix only approximately vanishes to this order leading to a small eigenvalue of order $`\lambda ^2`$ and the desired mass hierarchy in Eq.19. ### 5.2 Two Right-handed Neutrinos We now include a second right-handed neutrino $`N_2^c`$ with charge $`n_2`$, in addition to $`N_3^c`$ with charge $`n_3`$. With two right-handed neutrinos, $`Z=2`$, the heavy Majorana mass matrix from Eq.36 is $$M_{RR}\sim \left(\begin{array}{cc}\lambda ^{|2n_2+\sigma |}\hfill & \lambda ^{|n_2+n_3+\sigma |}\hfill \\ \lambda ^{|n_2+n_3+\sigma |}\hfill & \lambda ^{|2n_3+\sigma |}\hfill \end{array}\right)<\mathrm{\Sigma }>$$ (46) For SRHND we clearly require $`n_2\ne n_3`$ to avoid the two right-handed neutrinos contributing democratically. More generally for SRHND we need to avoid large right-handed neutrino mixing angles. If we assume without loss of generality that $`\lambda ^{|2n_2+\sigma |}>\lambda ^{|2n_3+\sigma |}`$, so that $`N_2^c`$ is heavier than $`N_3^c`$, then this implies $$|2n_2+\sigma |<|2n_3+\sigma |$$ (47) Then the small mixing angle requirement is $$|2n_2+\sigma |<|n_2+n_3+\sigma |$$ (48) The lightest eigenvalue is of order the diagonal element provided $$|2n_2+\sigma |\le 2|n_2+n_3+\sigma |-|2n_3+\sigma |$$ (49) Assuming all these conditions are met then $`M_{RR}`$ will be diagonalised by small angle rotations and have hierarchical eigenvalues set by the diagonal elements. As a first approximation we may drop the off-diagonal elements and write $$M_{RR}\approx diag(M_{R2},M_{R3})$$ (50) where $$M_{R2}\sim \lambda ^{|2n_2+\sigma |}<\mathrm{\Sigma }>,M_{R3}\sim \lambda ^{|2n_3+\sigma |}<\mathrm{\Sigma }>$$ (51) Then the light Majorana matrix is given by adding the separate contribution from each of the two right-handed neutrinos $$m_{LL}^{ij}=v_2^2\left(\frac{Y_\nu ^{i2}Y_\nu ^{j2}}{M_{R2}}+\frac{Y_\nu ^{i3}Y_\nu ^{j3}}{M_{R3}}\right)$$ (52) It is clear that the dominant contribution to a particular element of $`m_{LL}`$ will come from the right-handed neutrino which is at the same time the lightest, and couples the most strongly to left-handed neutrinos. 
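Returning briefly to the cancellation noted below Eq.38, it is easy to confirm numerically that with all exponents positive definite $`m_{LL}`$ is completely insensitive to the right-handed charges. A short sketch (ours; arbitrary order-one coefficient matrices standing in for the democratic $`Y_D`$ and $`M_M`$):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, l = 0.22, np.array([2.0, 0.0, 0.0])
YD = rng.uniform(0.5, 2.0, (3, 3))               # democratic O(1) Dirac coefficients
MM = rng.uniform(0.5, 2.0, (3, 3)); MM += MM.T   # democratic symmetric O(1) matrix

def m_LL(n):
    Y = np.diag(lam**l) @ YD @ np.diag(lam**n)   # Eq.39 with moduli dropped
    M = np.diag(lam**n) @ MM @ np.diag(lam**n)
    return Y @ np.linalg.inv(M) @ Y.T

print(np.allclose(m_LL(np.array([0.0, 1.0, 2.0])),
                  m_LL(np.array([3.0, 0.0, 1.0]))))   # True: the n_p drop out
```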
Without loss of generality we have taken $`N_3^c`$ to be the lighter right-handed neutrino and to give the dominant contribution to the 23 block of $`m_{LL}`$ in Eq.41. We therefore write the subdominant contribution coming from the second right-handed neutrino $`N_2^c`$ as $$\delta m_{LL}^{ij}=\lambda ^{|l_i+n_2|}\lambda ^{|l_j+n_2|}\frac{v_2^2}{M_{R2}}$$ (53) As discussed below Eq.44 we require: $$\frac{\delta m_{LL}^{33}}{m_{LL}^{33}}=\frac{\lambda ^{2|l_3+n_2|}}{\lambda ^{2|l_3+n_3|}}\frac{M_{R3}}{M_{R2}}\approx \lambda ^2$$ (54) From Eqs.51, $$\frac{M_{R3}}{M_{R2}}\approx \lambda ^{|2n_3+\sigma |-|2n_2+\sigma |}$$ (55) so Eq.54 implies the condition $$2|l_3+n_2|-2|l_3+n_3|+|2n_3+\sigma |-|2n_2+\sigma |=2$$ (56) We already observed that the required MSW perturbation is $$\delta m_{LL}^{33}\approx \frac{v_2^2}{M_U}$$ (57) so we deduce $$\frac{M_{R2}}{M_U}\approx \lambda ^{2|l_3+n_2|},\frac{<\mathrm{\Sigma }>}{M_U}\approx \lambda ^{2|l_3+n_2|-|2n_2+\sigma |}$$ (58) There is the further requirement that the powers of $`\lambda `$ occurring in $`M_{RR}`$ and $`Y_\nu `$ be either integer or half-integer. <sup>12</sup><sup>12</sup>12In the case of half-integer powers this implies that the $`\theta `$, $`\overline{\theta }`$ fields which break the $`U(1)_X`$ symmetry must have charges $`\pm 1/2`$ and the expansion parameter in Eq.23 must be redefined so that $`\frac{<\theta >}{M_V}=\frac{<\overline{\theta }>}{M_V}=\lambda ^{1/2}`$, as in ref.. By scanning over half-integer and integer values of $`l_3,n_2,n_3,\sigma `$ we find that there are no solutions which satisfy all the above constraints for integer powers of $`\lambda `$ in $`M_{RR}`$ and $`Y_\nu `$.<sup>13</sup><sup>13</sup>13I am grateful to Y. Nir (private communication) for pointing this out. However there are a large number of solutions involving half-integer powers of $`\lambda `$ in $`M_{RR}`$ and $`Y_\nu `$ (of course $`m_{LL}`$ in Eq.20 always involves integer powers of $`\lambda `$.) The condition in Eq.56 may be achieved in various ways with $`N_3^c`$ being lighter than $`N_2^c`$ by a factor, $`M_{R3}/M_{R2}\approx \lambda ^{|2n_3+\sigma |-|2n_2+\sigma |}\approx \lambda ^a`$, and the ratio of the Dirac couplings of $`N_2^c`$, $`N_3^c`$ to $`L_3`$ given by $`\lambda ^{2|l_3+n_2|-2|l_3+n_3|}\approx \lambda ^{2-a}`$, where $`a>0`$ is a positive integer. For example $`l_3=-1/2,n_2=0,n_3=1,\sigma =0`$ satisfies all the conditions with $`a=2`$ and $`Y_\nu `$ involving half-integer exponents. Further examples are listed in Table 1. ### 5.3 Three Right-handed Neutrinos We now wish to extend the discussion to include three right-handed neutrinos $`Z=3`$, by introducing a third right-handed neutrino $`N_1^c`$ with charge $`n_1`$ in addition to the two already introduced above. Again we shall suppose that $`N_3^c`$ gives the dominant contribution to the 23 sector masses. As for the $`Z=2`$ case we require $`n_3\ne n_2,n_1`$, and we need to ensure that $`N_3^c`$ does not have large mixing angles in $`M_{RR}`$ in order to isolate it from the other right-handed neutrinos. This can be ensured by a sequence of conditions similar to Eqs.47, 48, 49. Then, after small angle rotations, $`M_{RR}`$ can be written in block diagonal form. $$M_{RR}\approx \left(\begin{array}{ccc}\lambda ^{|2n_1+\sigma |}\hfill & \lambda ^{|n_1+n_2+\sigma |}\hfill & 0\hfill \\ \lambda ^{|n_2+n_1+\sigma |}\hfill & \lambda ^{|2n_2+\sigma |}\hfill & 0\hfill \\ 0\hfill & 0\hfill & \lambda ^{|2n_3+\sigma |}\hfill \end{array}\right)<\mathrm{\Sigma }>$$ (59) which is the analogue of Eq.50. 
The new feature of the $`Z=3`$ case compared to the $`Z=2`$ case is that there are now several possibilities for the structure of the upper $`2\times 2`$ block in Eq.59 which are all consistent with SRHND, which are listed below. “Diagonal dominated” corresponding to $`|n_1+n_2+\sigma |>min(|2n_1+\sigma |,|2n_2+\sigma |)`$: $$M_{RR}^{upper}\approx \left(\begin{array}{cc}\lambda ^{|2n_1+\sigma |}\hfill & 0\hfill \\ 0\hfill & \lambda ^{|2n_2+\sigma |}\hfill \end{array}\right)<\mathrm{\Sigma }>$$ (60) “Off-diagonal dominated” corresponding to $`|n_1+n_2+\sigma |<|2n_1+\sigma |,|2n_2+\sigma |`$: $$M_{RR}^{upper}\approx \left(\begin{array}{cc}0\hfill & \lambda ^{|n_1+n_2+\sigma |}\hfill \\ \lambda ^{|n_2+n_1+\sigma |}\hfill & 0\hfill \end{array}\right)<\mathrm{\Sigma }>$$ (61) “Democratic” corresponding to $`|n_1+n_2+\sigma |=|2n_1+\sigma |=|2n_2+\sigma |`$: $$M_{RR}^{upper}\approx \left(\begin{array}{cc}\lambda ^{|2n_1+\sigma |}\hfill & \lambda ^{|n_1+n_2+\sigma |}\hfill \\ \lambda ^{|n_2+n_1+\sigma |}\hfill & \lambda ^{|2n_2+\sigma |}\hfill \end{array}\right)<\mathrm{\Sigma }>$$ (62) In the “diagonal dominated” case after small angle rotations the light effective Majorana mass matrix in Eq.38 may be calculated in the diagonal right-handed neutrino basis $$m_{LL}=v_2^2Y_\nu M_{RR}^{-1}Y_\nu ^T=v_2^2Y_\nu \mathrm{\Omega }_{RR}^T(M_{RR}^{diag})^{-1}\mathrm{\Omega }_{RR}Y_\nu ^T$$ (63) The advantage of working in a diagonal right-handed neutrino mass basis is that $`(M_{RR}^{diag})^{-1}=\mathrm{diag}(\mathrm{M}_{\mathrm{R1}}^{-1},\mathrm{M}_{\mathrm{R2}}^{-1},\mathrm{M}_{\mathrm{R3}}^{-1})`$ so if we define $`\stackrel{~}{Y}_\nu \equiv Y_\nu \mathrm{\Omega }_{RR}^T`$ as the neutrino Yukawa matrix in the diagonal right-handed neutrino basis, then the effective light mass matrix elements are given from Eq.63 by $$m_{LL}^{ij}=\sum _{p=1}^{3}v_2^2\frac{\stackrel{~}{Y}_\nu ^{ip}\stackrel{~}{Y}_\nu ^{jp}}{M_{Rp}}$$ (64) In this case $`\mathrm{\Omega }_{RR}`$ involves small angle rotations and so $`\stackrel{~}{Y}_\nu \approx Y_\nu `$, and the contributions to $`m_{LL}`$ from the neutrinos $`N_1^c,N_2^c`$ are: $$\delta m_{LL}^{ij}=v_2^2\left(\frac{Y_\nu ^{i1}Y_\nu ^{j1}}{M_{R1}}+\frac{Y_\nu ^{i2}Y_\nu ^{j2}}{M_{R2}}\right)$$ (65) where $$M_{R1}\sim \lambda ^{|2n_1+\sigma |}<\mathrm{\Sigma }>,M_{R2}\sim \lambda ^{|2n_2+\sigma |}<\mathrm{\Sigma }>$$ (66) and from Eq.36 $`Y_\nu ^{ip}=\lambda ^{|l_i+n_p|}`$. Similar to Eq.54 in this case we require $$\frac{\delta m_{LL}^{33}}{m_{LL}^{33}}\approx \frac{\lambda ^{2|l_3+n_1|}}{\lambda ^{2|l_3+n_3|}}\frac{M_{R3}}{M_{R1}}+\frac{\lambda ^{2|l_3+n_2|}}{\lambda ^{2|l_3+n_3|}}\frac{M_{R3}}{M_{R2}}\approx \lambda ^2$$ (67) Thus the conditions for the “diagonal dominated case” are: $`2|l_3+n_1|-2|l_3+n_3|+|2n_3+\sigma |-|2n_1+\sigma |\ge 2,`$ $`2|l_3+n_2|-2|l_3+n_3|+|2n_3+\sigma |-|2n_2+\sigma |\ge 2`$ (68) where at least one of the inequalities must be saturated. 
In the “off-diagonal dominated” case $`M_{RR}`$ can again be simply inverted leading to $$\delta m_{LL}^{ij}=v_2^2\left(\frac{Y_\nu ^{i1}Y_\nu ^{j2}}{M_{R12}}+\frac{Y_\nu ^{i2}Y_\nu ^{j1}}{M_{R12}}\right)$$ (69) where $$M_{R12}\sim \lambda ^{|n_1+n_2+\sigma |}<\mathrm{\Sigma }>.$$ (70) Again similar to Eq.54 we require $$\frac{\delta m_{LL}^{33}}{m_{LL}^{33}}\approx \frac{\lambda ^{|l_3+n_1|+|l_3+n_2|}}{\lambda ^{2|l_3+n_3|}}\frac{M_{R3}}{M_{R12}}\approx \lambda ^2$$ (71) Thus the condition for the “off-diagonal dominated case” is: $$|l_3+n_1|+|l_3+n_2|-2|l_3+n_3|+|2n_3+\sigma |-|n_1+n_2+\sigma |=2.$$ (72) In the “democratic” case $`M_{RR}`$ can be readily inverted leading to a result of order $$\delta m_{LL}^{ij}\sim v_2^2\left(\frac{Y_\nu ^{i1}Y_\nu ^{j1}}{M}\right)$$ (73) where the right-handed neutrino masses in the upper block, $`M`$, are all equal by the democratic assumption and we have specialised to $`n_1=n_2`$ which implies from Eq.36 that $`Y_\nu ^{i1}\approx Y_\nu ^{i2}`$. Once again similar to Eq.54 we require $$\frac{\delta m_{LL}^{33}}{m_{LL}^{33}}\approx \frac{\lambda ^{2|l_3+n_1|}}{\lambda ^{2|l_3+n_3|}}\frac{M_{R3}}{M}\approx \lambda ^2$$ (74) Thus the condition for the “democratic case” is: $$2|l_3+n_1|-2|l_3+n_3|+|2n_3+\sigma |-|2n_1+\sigma |=2.$$ (75) In practice examples of all three kinds can easily be constructed along the same lines as the explicit $`Z=2`$ case. The “democratic” case with $`n_1=n_2`$ is isomorphic to the $`Z=2`$ case. The $`Z=2`$ results trivially generalise in this case to $`n_p=(n_2,n_2,n_3)`$ where some examples of charges were listed in Table 1. For example $`l_3=-1/2,n_p=(0,0,1),\sigma =0`$ satisfies all the “democratic” conditions with $`a=2`$ and $`Y_\nu `$ involving half-integer exponents. Clearly in the “democratic” case the $`Z=2`$ results can immediately be generalised to any number of right-handed neutrinos $`Z`$ with $`n_p=(n_2,\mathrm{\dots },n_2,n_3)`$, where $`(n_2,n_3)`$ are the $`Z=2`$ charges. The “diagonal dominated” case also follows a similar pattern to the $`Z=2`$ case with the lighter of $`N_1^c`$, $`N_2^c`$ playing the role of the subdominant right-handed neutrino in the $`Z=2`$ case. It is straightforward to scan over all the half-integer and integer charges which satisfy the “diagonal dominated” conditions and generate a list of charges for this case, analagous to Table 1. A single example will suffice: $`l_3=-3/2,n_p=(0,1,2),\sigma =0`$ satisfies all the “diagonal dominated” conditions and Eq.68 is saturated by $`N_2^c`$ which plays the role of the subdominant right-handed neutrino of the $`Z=2`$ case, with $`N_1^c`$ being both heavier and having more suppressed Dirac couplings. Again the “diagonal dominated” case can immediately be generalised to any number $`Z`$ of right-handed neutrinos $`n_p=(n_q,n_2,n_3)`$, where $`(n_2,n_3)`$ are the $`Z=2`$ charges with $`N_2^c`$ playing the role of the subdominant right-handed neutrino and $`N_q^c`$ giving subsubdominant contributions to the 23 block of $`m_{LL}`$. Examples of the “off-diagonal dominated” kind have already been proposed in the literature, although they were not interpreted as being due to SRHND. To show that the models in ref. are examples of SRHND of the “off-diagonal dominated” kind it suffices to consider a specific example: $$l_i=(2,0,0),n_p=(1,-1,0),\sigma =0$$ (76) It is immediately clear that the charges in Eq.76 satisfy the conditions for SRHND in general Eq.43 and in particular the “off-diagonal dominated” conditions $`|n_1+n_2+\sigma |<|2n_1+\sigma |,|2n_2+\sigma |`$ and Eq.72. 
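All of the quoted examples can be checked against the conditions above with a few lines of exact arithmetic; the sketch below (ours) encodes Eqs.43, 47, 48, 49, 56 and 72 and confirms the $`Z=2`$ example and the charges of Eq.76:

```python
from fractions import Fraction as F

def z2_ok(l3, n2, n3, s):
    """Eqs.47, 48, 49 and the MSW condition Eq.56 (N_3^c dominant)."""
    l3, n2, n3, s = map(F, (l3, n2, n3, s))
    ok = abs(2*n2 + s) < abs(2*n3 + s)                           # Eq.47
    ok &= abs(2*n2 + s) < abs(n2 + n3 + s)                       # Eq.48
    ok &= abs(2*n2 + s) <= 2*abs(n2 + n3 + s) - abs(2*n3 + s)    # Eq.49
    ok &= (2*abs(l3 + n2) - 2*abs(l3 + n3)
           + abs(2*n3 + s) - abs(2*n2 + s)) == 2                 # Eq.56
    return bool(ok)

def offdiag_ok(l, n, s):
    """Eq.43 plus the off-diagonal dominated conditions and Eq.72."""
    (l1, l2, l3), (n1, n2, n3), s = map(F, l), map(F, n), F(s)
    ok = abs(l2 + n3) == abs(l3 + n3) and abs(l1 + n3) - abs(l3 + n3) == 2
    ok &= abs(n1 + n2 + s) < min(abs(2*n1 + s), abs(2*n2 + s))
    ok &= (abs(l3 + n1) + abs(l3 + n2) - 2*abs(l3 + n3)
           + abs(2*n3 + s) - abs(n1 + n2 + s)) == 2              # Eq.72
    return bool(ok)

assert z2_ok(F(-1, 2), 0, 1, 0)                # the Z = 2 example, with a = 2
assert offdiag_ok((2, 0, 0), (1, -1, 0), 0)    # the charges of Eq.76
```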
This immediately substantiates our claim that these models correspond to SRHND of the “off-diagonal dominated” kind. Note that $`Y^\nu `$ involves integer exponents. In view of the interest in this example in the literature we develop it in a little more detail below. The charges in Eq.76 lead to the following neutrino Yukawa and heavy Majorana matrices $$Y^\nu \sim \left(\begin{array}{ccc}\lambda ^3\hfill & \lambda \hfill & \lambda ^2\hfill \\ \lambda \hfill & \lambda \hfill & 1\hfill \\ \lambda \hfill & \lambda \hfill & 1\hfill \end{array}\right),M_{RR}\sim \left(\begin{array}{ccc}\lambda ^2\hfill & 1\hfill & \lambda \hfill \\ 1\hfill & \lambda ^2\hfill & \lambda \hfill \\ \lambda \hfill & \lambda \hfill & 1\hfill \end{array}\right)<\mathrm{\Sigma }>$$ (77) Due to the assumed charges, the heavy Majorana matrix is dominated by three equal mass terms $`<\mathrm{\Sigma }>N_1^cN_2^c`$, $`<\mathrm{\Sigma }>N_2^cN_1^c`$ and $`<\mathrm{\Sigma }>N_3^cN_3^c`$, leading to three roughly degenerate right-handed neutrinos. However of the three right-handed neutrinos it is $`N_3^c`$ which couples dominantly to the left-handed neutrinos of the second and third family, due to the assumed choice of $`X`$ charges, and hence dominates the 23 sector of $`m_{LL}`$. To see this we evaluate $`m_{LL}`$ in the basis in which $$M_{RR}\approx \left(\begin{array}{ccc}0\hfill & 1\hfill & 0\hfill \\ 1\hfill & 0\hfill & 0\hfill \\ 0\hfill & 0\hfill & 1\hfill \end{array}\right)<\mathrm{\Sigma }>,M_{RR}^{-1}\approx \left(\begin{array}{ccc}0\hfill & 1\hfill & 0\hfill \\ 1\hfill & 0\hfill & 0\hfill \\ 0\hfill & 0\hfill & 1\hfill \end{array}\right)<\mathrm{\Sigma }>^{-1}$$ (78) In this basis we define $`\stackrel{~}{Y}_\nu \equiv Y_\nu \mathrm{\Omega }_{RR}^T`$ where $$\mathrm{\Omega }_{RR}\approx \left(\begin{array}{ccc}1\hfill & \lambda ^2\hfill & \lambda \hfill \\ \lambda ^2\hfill & 1\hfill & \lambda \hfill \\ \lambda \hfill & \lambda \hfill & 1\hfill \end{array}\right)$$ (79) Evaluating $`m_{LL}`$ in this basis we find from Eqs.9 and 78 $$m_{LL}^{ij}=\frac{v_2^2}{<\mathrm{\Sigma }>}(\stackrel{~}{Y}_\nu ^{i1}\stackrel{~}{Y}_\nu ^{j2}+\stackrel{~}{Y}_\nu ^{i2}\stackrel{~}{Y}_\nu ^{j1}+\stackrel{~}{Y}_\nu ^{i3}\stackrel{~}{Y}_\nu ^{j3})$$ (80) corresponding to the contributions from the inverse mass terms $`<\mathrm{\Sigma }>N_1^cN_2^c`$, $`<\mathrm{\Sigma }>N_2^cN_1^c`$ and $`<\mathrm{\Sigma }>N_3^cN_3^c`$, respectively. Since $`\stackrel{~}{Y}_\nu \approx Y_\nu `$ with the order one contributions to $`\stackrel{~}{Y}_\nu `$ coming exclusively from $`N_3^c`$, it is clear (by explicit evaluation of Eq.80) that $`N_3^c`$ dominates the contributions to the 23 block of $`m_{LL}`$, with corrections of order $`\lambda ^2`$ coming from the other contributions. The remaining parts of $`m_{LL}`$ receive contributions at the same order as the $`N_3^c`$ contributions coming from $`N_1^c,N_2^c`$. Thus the resulting light effective neutrino matrix is as in Eq.20, with SRHND in the 23 sector due to $`N_3^c`$ dominance with $`O(\lambda ^2)`$ corrections from other right-handed neutrinos. Finally we note that for $`Z>3`$ the above three categories “diagonal dominated”, “off-diagonal dominated” and “democratic” may be combined in all possible ways. ## 6 Conclusion We have suggested a natural explanation of both neutrino mass hierarchies and large neutrino mixing angles, as required by the atmospheric neutrino data, in terms of a single right-handed neutrino giving the dominant contribution to the 23 block of the light effective neutrino matrix. 
We illustrated this mechanism in the framework of models with a single pseudo-anomalous $`U(1)_X`$ family symmetry, expanding all masses and mixing angles in terms of the Wolfenstein parameter $`\lambda `$. Subdominant contributions to the 23 sector from other right-handed neutrinos, suppressed by a factor of $`\lambda ^2`$, are required to give small mass splittings appropriate to the small angle MSW solution to the solar neutrino problem. We gave general conditions for achieving this in the framework of $`U(1)_X`$ family symmetry models containing arbitrary numbers of right-handed neutrinos $`Z`$. We classified the $`Z=3`$ cases into three categories: “diagonal dominated”, “off-diagonal dominated” and “democratic”, and discussed examples of each kind. Although the approach of the models cited above is based on the formal condition that the subdeterminant vanishes to order $`\lambda ^2`$, we have shown that explicit examples of this kind of model may be classified within our framework as SRHND of the “off-diagonal dominated” kind. Although we discussed a particular family symmetry, it is clear that the idea of SRHND is more general; it has recently been used, for example, in a model with $`SU(2)`$ family symmetry.
# Jet-Induced Explosions of Core Collapse Supernovae ## 1 Introduction Recent observations of core collapse supernovae provide increasing evidence that the core collapse process is intrinsically asymmetric: 1) The spectra of these supernovae are significantly polarized, indicating asymmetric envelopes (Méndez et al. 1988; Höflich 1991, 1995; Jeffrey 1991; Trammel et al. 1993; Tran et al. 1997). The degree of polarization tends to vary inversely with the mass of the hydrogen envelope, being maximum for Type Ib/c events with no hydrogen (Wang et al. 1996; Wang, Wheeler & Höflich 1999; Wheeler, Höflich & Wang 1999). 2) After the explosion, neutron stars are observed with high velocities, up to 1000 km s<sup>-1</sup> (Strom et al. 1995). 3) Observations of SN 1987A showed that radioactive material was brought to the hydrogen-rich layers of the ejecta very quickly during the explosion (Lucy 1988; Sunyaev et al. 1987; Tueller et al. 1991). 4) The remnant of the Cas A supernova shows rapidly moving oxygen-rich matter outside the nominal boundary of the remnant (Fesen & Gunderson 1996) and evidence for two oppositely directed jets of high-velocity material (Fesen 1999; Reed, Hester, & Winkler 1999). 5) High-velocity “bullets” of matter have been observed in the Vela supernova remnant (Taylor et al. 1993). Understanding the mechanism by which core collapse produces supernova explosions is a physics problem that has challenged researchers for decades (Hoyle & Fowler 1960; Colgate & White 1966). The current most sophisticated calculations based on the neutrino energy deposition mechanism are multidimensional and involve the convection of the newly formed neutron star. These, however, have failed to produce robust explosions (Herant et al. 1994; Burrows, Hayes & Fryxell 1995; Janka & Müller 1996; Mezzacappa et al. 1997; Lichtenstadt, Khokhlov & Wheeler 1999). Even when successful, these models do not explain why SN 1998bw produced the strongest radio source ever associated with a supernova, probably requiring a relativistic blast wave (Kulkarni et al. 1998), or account for a probable link between SN 1998bw and the $`\gamma `$-ray burst GRB980425 observed in the same general location in the same general time frame (Galama et al. 1998). The discovery of pulsars led to early considerations of the role of rotating magnetized neutron stars in the explosion mechanism (LeBlanc and Wilson 1970; Ostriker & Gunn 1971; Bisnovatyi-Kogan 1971). LeBlanc and Wilson studied the magneto-rotational core collapse of a $`7M_{\odot }`$ star. They numerically solved the two-dimensional MHD equations coupled to the equation for neutrino transport. Their simulations showed the formation of two oppositely directed, high-density, supersonic jets of material emanating from the collapsed core. They estimated that at a surface located $`4\times 10^8`$ cm from the center, the jet carried away $`10^{32}`$ g with $`1`$–$`2\times 10^{51}`$ ergs in $`1`$ s. The magnetic field generated in this calculation was $`10^{15}`$ Gauss. Evidence now exists for strongly magnetized neutron stars, “magnetars” (Duncan & Thompson 1992; Kouveliotou et al. 1998). The LeBlanc-Wilson mechanism is extremely asymmetric and contains jets. Their calculations only followed the jet to a distance of $`10^8`$ cm, whereas a stellar core has a radius of $`10^{10}`$ cm or more. The issues that arise are: how can this asymmetry propagate to much larger distances inside the star?
Can these jets induce asymmetry at distances comparable to the stellar radius, or even push through the entire star and exit? In this paper, we model the explosion of a core collapse supernova assuming that the LeBlanc-Wilson mechanism has operated in the center. We take a $`15M_{\odot }`$ main-sequence star evolved to the point of the explosion (Straniero, Chieffi & Limongi 1999) and assume that the star has lost all of its hydrogen envelope before the explosion. The resulting $`4.1M_{\odot }`$ model of a helium star corresponds to the explosion of a Type Ib or Ic supernova. The simulations show that the jets cause a very asymmetric explosion of the star. Most of the observations of asymmetries listed above can be explained by this process. ## 2 Numerical Simulations Figure 1 presents a schematic of the setup of the computation. The computational domain is a cube of size $`L=1.5\times 10^{11}`$ cm with a spherical helium star of radius $`R_{\mathrm{star}}=1.88\times 10^{10}`$ cm and mass $`M_{\mathrm{star}}\simeq 4.1M_{\odot }`$ placed in the center. The distribution of physical parameters inside the star is shown in Figure 2. The innermost part with mass $`M_{\mathrm{core}}\simeq 1.6M_{\odot }`$ and radius $`R_{\mathrm{core}}=3.82\times 10^8`$ cm, consisting of Fe and Si, is assumed to have collapsed on a timescale much faster than the outer, lower-density material. It is removed and replaced by a point gravitational source with mass $`M_{\mathrm{core}}`$ representing the newly formed neutron star. The remaining mass, from $`1.6`$ to $`4.1M_{\odot }`$, consists of an O-Ne-Mg inner layer surrounded by the C-O and He envelopes. This structure is mapped onto the computational domain from $`R_{\mathrm{core}}`$ to $`R_{\mathrm{star}}`$. At $`R_{\mathrm{core}}`$ and the outer boundary of the computational domain, we impose an outflow boundary condition assuming zero pressure, velocity, and density gradients. At the two polar locations where the jets are initiated at $`R_{\mathrm{core}}`$, we impose an inflow with velocity $`v_j`$, density $`\rho _j`$ and pressure $`P_j`$. The jet parameters are chosen to represent the results of LeBlanc & Wilson (1970). At $`R_{\mathrm{core}}`$, the jet density and pressure are the same as those of the background material, $`\rho _j=6.5\times 10^5`$ g cm<sup>-3</sup> and $`P_j=1.0\times 10^{23}`$ ergs cm<sup>-3</sup>, respectively. The radii of the cylindrical jets entering the computational domain are approximately $`r_j=1.2\times 10^8`$ cm. For the first 0.5 s, the jet velocity at $`R_{\mathrm{core}}`$ is kept constant at $`v_j=3.22\times 10^9`$ cm s<sup>-1</sup>. This results in a mass flux rate of $`9.5\times 10^{31}`$ g s<sup>-1</sup> with an energy deposition rate $`dE/dt=5\times 10^{50}`$ ergs/s for each jet. After 0.5 s, the velocity of the jets at $`R_{\mathrm{core}}`$ was gradually decreased to zero at approximately 2 s. The total energy deposited by the jets is $`E_j\simeq 9\times 10^{50}`$ ergs and the total mass ejected is $`M_j\simeq 2\times 10^{32}`$ grams, or $`\simeq 0.1M_{\odot }`$. These parameters are consistent with, but somewhat less than, those of the LeBlanc-Wilson model. The amount of material ejected is less than that which falls through the inner boundary during the jet operation, $`\simeq 4\times 10^{32}`$ g. This amounts to an implicit assumption that $`\sim 1/2`$ of the matter accreted is channeled back out into the jets. More accurate jet parameters can only be determined by self-consistently modeling the formation of the jets in the vicinity of a neutron star.
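As a consistency check, the quoted mass flux and energy deposition rate follow directly from the inflow parameters; the short sketch below (ours, in cgs units) reproduces both numbers.

```python
import math

# Jet inflow parameters quoted above (cgs units)
rho_j = 6.5e5     # jet density, g cm^-3
v_j   = 3.22e9    # jet inflow velocity, cm s^-1
r_j   = 1.2e8     # jet radius, cm

area  = math.pi * r_j**2        # cross-section of one cylindrical jet
mdot  = rho_j * v_j * area      # mass flux through the inlet
dE_dt = 0.5 * mdot * v_j**2     # kinetic-energy deposition rate per jet

print(f"mass flux   = {mdot:.2e} g/s")     # ~9.5e31 g/s, as quoted
print(f"energy rate = {dE_dt:.2e} erg/s")  # ~5e50 erg/s per jet, as quoted
```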
The stellar material was described by the time-dependent, compressible Euler equations for inviscid flow with an ideal gas equation of state $`P=E(\gamma -1)`$ with constant $`\gamma =5/3`$. The Euler equations were integrated using an explicit, second-order accurate, Godunov-type, adaptive-mesh-refinement, massively parallel, Fully-Threaded Tree (FTT) program, ALLA (Khokhlov 1998; Khokhlov & Chtchelkanova 1999). Euler fluxes were evaluated by solving a Riemann problem at cell interfaces. FTT discretization of the computational domain allowed the mesh to be refined or coarsened at the level of individual cells. Physical scales involved in the simulation range from the size of the computational domain ($`1.5\times 10^{11}`$ cm) to the jet diameter ($`10^8`$ cm) and span at least three orders of magnitude. We used a cartesian, nonuniformly refined FTT mesh with fine cells $`\mathrm{\Delta }_{\mathrm{min}}\simeq 3.7\times 10^7`$ cm near $`R_{\mathrm{core}}`$ to resolve the jets, and with cell size increasing towards the outer boundary of the computational domain, where the cell size was $`\mathrm{\Delta }_{\mathrm{max}}=2.3\times 10^9`$ cm. This mesh was fixed from the initial time $`0`$ to $`\simeq 6`$ s of physical time. After that, the inner parts were coarsened near the center by a factor of four, and the central hole was eliminated. At this moment, the jets have exited the star and the details of the flow near $`R_{\mathrm{core}}`$ do not affect the essential features of the explosion. In this first, demonstration calculation, we did not use the time-adaptive mesh refinement capability of ALLA. It will be used to follow shocks and mixing processes with higher resolution in future simulations. We computed the entire configuration including both jets and assuming no symmetries. The total number of computational cells used in the simulation was $`\simeq 2\times 10^6`$, whereas a uniform resolution $`\mathrm{\Delta }_{\mathrm{min}}`$ would have required $`\simeq 7\times 10^{10}`$ cells. ## 3 Results and Discussion Figure 3 shows the propagation of the jet inside the star. As the jets move outwards, they remain collimated and do not develop much internal structure. A bow shock forms at the head of the jet and spreads in all directions, roughly cylindrically around each jet. The sound crossing time $`\tau (r)=r/a_s(r)`$ is shown as a function of stellar radius $`r`$ in Figure 2, where $`a_s(r)`$ is the sound speed at a given radius for the initial stellar model. It might be expected that if energy were released at the center of a star on a timescale much shorter than $`\tau (r)`$, the effect of energy deposition at $`r`$ would resemble that of a strong point explosion. In particular, the jet characteristic time $`\tau _j\sim 1`$ s is much shorter than the sound crossing time of the star, $`\tau (R_{\mathrm{star}})\sim 10^3`$ s (Figure 2). Nonetheless, these jets stay collimated enough to reach the surface as strong jets. It is known that supersonic jets stay collimated for a long distance. For example, Norman et al. (1983) simulated supersonic jets with densities $`\rho _j`$ both less than and greater than a uniform background, $`\rho _b`$. Jets with $`\rho _j/\rho _b\gtrsim 1`$ developed a bow shock and little internal structure. Our jets resemble those with $`\rho _j/\rho _b\gtrsim 1`$. The stellar matter is shocked by the bow shock, and then flows out and acts as a high-pressure confining medium by forming a cocoon around the jet.
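The quoted saving from the nonuniform mesh can be checked with one line of arithmetic; the sketch below (ours) compares a uniform mesh at the finest resolution with the roughly $`2\times 10^6`$ cells actually used.

```python
# Rough check (ours) of the quoted cell counts
L_box = 1.5e11   # cm, edge of the cubic computational domain
d_min = 3.7e7    # cm, finest cell size near R_core

uniform_cells = (L_box / d_min) ** 3
print(f"uniform mesh: {uniform_cells:.1e} cells")   # ~7e10, as quoted
print(f"saving factor: {uniform_cells / 2e6:.0e}")  # vs ~2e6 FTT cells
```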
The sound crossing time of the dense O-Ne-Mg mantle, $`\tau (R\sim 10^9\mathrm{cm})\sim 10`$ s, is only ten times longer than $`\tau _j`$, and the jets are capable of penetrating this dense inner part of the star in $`\sim 2`$ s. By the time the jets penetrate into the less dense C-O and He layers, the inflow of material into the jets has been turned off. By this time, however, the jets have become long bullets of high-density material moving through the background low-density material almost ballistically. The higher pressures in these jets cause them to spread laterally. This spreading is limited by a secondary shock that forms around each jet between the jet and the material already shocked by the bow shock. The radius of the jets, $`\sim 3\times 10^9`$ cm as they emerge from the star, is larger than the initial radius, $`10^8`$ cm, but it is still significantly less than the radius of the star. After about 5.9 s, the bow shock reaches the edge of the star and breaks through. Figure 4 shows the subsequent evolution of the star after the breakthrough. By $`\sim 20`$ s, most of the material in the jets has left the star and propagates into the interstellar medium ballistically. We estimate the total mass in these two jets as $`M_j\simeq 0.05M_{\odot }`$ and the total kinetic energy as $`E_j\simeq 2.5\times 10^{50}`$ ergs. The average velocity of the jet is about 25,000 km s<sup>-1</sup>. The laterally expanding bow shocks generated by the jets (Figure 3) move towards the equator, where they collide with each other. The collision of the shocks first produces a regular reflection that then becomes a Mach reflection. The Mach stem moves outwards along the equatorial plane. The result is that the material in the equatorial plane is compressed and accelerated more than material in other directions (excluding the jet material). At $`t\simeq 29`$ s, the Mach stem reaches the outer edge of the star, and the star begins to settle into the free expansion regime. The computation was terminated at $`35`$ s, before free expansion was attained. The stellar ejecta at this time is highly asymmetric. The density contour of $`50\mathrm{g}\mathrm{cm}^{-3}`$, which is the average density of the ejecta at this time, forms an oblate configuration with an equator-to-polar velocity ratio $`\simeq 2/1`$. Complex shock and rarefaction interactions inside the expanding envelope will continue to change the distribution of the parameters inside the ejecta. Nonetheless, we expect that the resulting configuration will resemble an oblate ellipsoid with a very high degree of asymmetry, axis ratios $`\sim 2`$. ## 4 Conclusions We have numerically studied the explosion of a supernova caused by supersonic jets generated in the center of the supernova as a result of the core collapse into a neutron star. We simulated the process of the jet propagation through the star, jet breakthrough, and the ejection of the supernova envelope by the lateral shocks generated during jet propagation. The end result of the interaction is a highly nonspherical supernova explosion with two high-velocity jets of material moving in polar directions ahead of an oblate, highly distorted ejecta containing most of the supernova material. Below we argue that such a model explains many of the observations that are difficult or impossible to explain by the neutrino deposition explosion mechanisms. We have assumed that the jets were generated by a magneto-rotational mechanism during core collapse and neutron star formation (LeBlanc & Wilson 1970).
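The jet figures quoted above are mutually consistent: treating the two jets as a single cold, ballistic lump (our simplification), the mean speed implied by the quoted mass and kinetic energy comes out close to the quoted 25,000 km s<sup>-1</sup>.

```python
import math

M_sun = 1.989e33       # g
M_j   = 0.05 * M_sun   # total mass in the two jets (quoted above)
E_j   = 2.5e50         # erg, total kinetic energy of the jets (quoted above)

v_avg = math.sqrt(2.0 * E_j / M_j)  # cold, ballistic estimate: E = (1/2) M v^2
print(f"average jet velocity ~ {v_avg / 1e5:.0f} km/s")  # ~22,000 km/s
```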
That collimated jets could be a common phenomenon in core collapse supernovae and be associated with $`\gamma `$-ray bursts was raised recently by Wang & Wheeler (1998). A different mechanism of jet generation, involving neutrino radiation during the collapse of a very massive star into a black hole, has recently been discussed by MacFadyen & Woosley (1999) in the context of a “failed” supernova, also to explain $`\gamma `$-ray bursts. Low-density relativistic jets may also be produced by the intense radiation of the newly born pulsar, as discussed by Blackman & Yi (1998) and Yi et al. (1999). We found in our preliminary simulations (not presented here) that jets of lower density and higher velocity than the one considered in this paper may produce similar hydrodynamical effects. The asymmetric explosion generated in this calculation provides ejection velocities that are comparable to those observed in supernovae. For this particular calculation, an energy of $`2.5\times 10^{50}`$ ergs is invested in the jet, and the star of $`2.5M_{\odot }`$ is ejected with a kinetic energy of $`6.5\times 10^{50}`$ ergs and an average velocity of $`3,000`$–$`4,000`$ km s<sup>-1</sup>. Increasing the jet opening angle, jet duration, or jet velocity would result in a more powerful explosion. The density and velocity profiles of the main ejecta (excluding jets) are oblate, with equator-to-polar ratios greater than 2/1. This structure will produce significant polarization, of order 1% or more, as observed in bare-core supernovae (Höflich, Wheeler & Wang 1999). The two polar jets move outward from the star with a speed $`\simeq 25,000`$ km s<sup>-1</sup>, much greater than that of the ejecta itself. They may be detected in supernova remnants and might account for the evidence of jets in Cas A (Fesen & Gunderson 1996; Reed, Hester & Winkler 1999). The composition of the jets must reflect the composition of the innermost parts of the star, and should contain heavy and intermediate-mass elements. During the explosion, the jets would bring heavy and intermediate-mass elements into the outer layers. This will influence the spectral and polarization properties of a supernova. Here we considered a bare helium core, but if the core were inside a hydrogen envelope, the explosion would remain very inhomogeneous. Radioactive elements could potentially be carried into the hydrogen envelope. This could explain the early appearance of X-rays, as in SN 1987A. It is plausible that a sufficiently powerful jet could even penetrate a hydrogen envelope. We assumed that the jets are identical, which is not the general case. Any momentum imbalance might impart a kick to the neutron star. From momentum conservation, we estimate the required difference between the inflow velocities of the jets, $`\mathrm{\Delta }v_j`$, to be of the order of $$\frac{\mathrm{\Delta }v_j}{v_j}\approx \frac{M_{NS}}{M_j}\frac{v_{NS}}{v_j}\approx 1.0\left(\frac{v_{NS}}{1,000\mathrm{k}\mathrm{m}/\mathrm{s}}\right)\left(\frac{30,000\mathrm{k}\mathrm{m}/\mathrm{s}}{v_j}\right),$$ where $`v_{NS}`$ is the kick velocity, and we have taken the neutron star mass $`M_{NS}=1.5M_{\odot }`$ and the jet mass $`M_j=10^{32}`$ g. Although the required jet asymmetry, $`\frac{\mathrm{\Delta }v_j}{v_j}=1`$, to produce a 1,000 km/s kick may seem extreme, the parameters of the jets selected for this calculation are mild. If the duration of the jets is increased by a factor of two, an asymmetry of only 0.5 would be required. When the jets break through the stellar photosphere, a small amount of mass will be accelerated through the density gradient to very high velocities.
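The displayed momentum-conservation estimate is easy to evaluate for other parameter choices; the helper below (ours) makes the scaling with jet mass explicit, including the factor-of-two case mentioned in the text.

```python
M_sun = 1.989e33   # g
M_NS  = 1.5 * M_sun

def required_asymmetry(v_NS_kms, v_j_kms, M_j=1e32):
    """Fractional jet velocity mismatch needed for a given neutron star kick,
    from momentum conservation: M_NS * v_NS ~ M_j * dv_j."""
    return (M_NS / M_j) * (v_NS_kms / v_j_kms)

print(required_asymmetry(1000, 30000))            # ~1.0, as in the estimate above
print(required_asymmetry(1000, 30000, M_j=2e32))  # ~0.5: jet duration doubled
```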
Our resolution was not sufficient, and the code does not incorporate a relativistic Riemann solver, so quantitative predictions cannot be made; however, a small fraction of the material at the stellar surface was observed to move with a velocity of up to $`\sim `$ 90,000 km s<sup>-1</sup>. This may, in principle, lead to a $`\gamma `$-ray burst and a radio outburst similar to those associated with SN 1998bw/GRB980425. The jet-induced explosion of a supernova computed in this paper is entirely due to the action of the jet on the surrounding star. The mechanism that determines the energy of such an explosion must be related to the shut-off of the accretion onto the neutron star by the lateral shocks that accelerate the material outwards. The explosion thus does not depend on neutrino transport or re-acceleration of the stalled shock. This work opens many issues that require further investigation. A study must be made of different input parameters, including the properties of the jets and of the initial star, and the jet engine mechanism must be studied as well. These studies are currently underway. Computations were performed on the Origin 2000 at the Naval Research Laboratory. The authors are grateful to Rob Duncan and Insu Yi for helpful discussions, and to Almadena Chtchelkanova for the development of the massively parallel software used in the simulations. This research was supported in part by NSF Grant 95-28110, NASA Grant NAG 5-2888, NASA Grant LSTA-98-022 and a grant from the Texas Advanced Research Program. The Laboratory for Computational Physics and Fluid Dynamics at the Naval Research Laboratory thanks the NASA Astrophysics Theory Program for support. Figure Captions Figure 1: Schematic of the setup of the computation. Figure 2: Distribution of physical parameters inside the star.
# Quantum Mechanics and Elements of Reality Ulrich Mohrhoff (e-mail: ujm@auroville.org.in), Sri Aurobindo Ashram, Pondicherry 605002, India Abstract It is widely accepted that a Born probability of 1 is sufficient for the existence of a corresponding element of reality. Recently Vaidman has extended this idea to the ABL probabilities of the time-symmetrized version of quantum mechanics originated by Aharonov, Bergmann, and Lebowitz. Several authors have objected to Vaidman’s time-symmetrized elements of reality without casting doubt on the widely accepted sufficiency condition for ‘ordinary’ elements of reality. In this paper I show that while the proper truth condition for a quantum counterfactual is an ABL probability of 1, neither a Born probability of 1 nor an ABL probability of 1 is sufficient for the existence of an element of reality. The reason this is so is that the contingent properties of quantum-mechanical systems are extrinsic. To obtain this result, I need to discuss objective probabilities, retroactive causality, and the objectivity or otherwise of the psychological arrow of time. One consequence of the extrinsic nature of quantum-mechanical properties is that quantum mechanics presupposes property-defining actual events (or states of affairs) and therefore cannot be called upon to account for their occurrence (existence). Neither these events nor the correlations between them are capable of explanation, the former because they are causal primaries, the latter because they are fundamental: there are no underlying causal processes. Causal connections are something we project onto the statistical correlations, and this works only to the extent that statistical variations can be ignored. There are nevertheless important conclusions to be drawn from the quantum-mechanical correlations, such as the spatial nonseparability of the world. 1. Introduction Recently the concept of time-symmetric elements of reality, introduced by Vaidman (1996a, 1997), stirred up a lively controversy which culminated in the joint publication of two papers in this journal (Kastner, 1999; Vaidman, 1999). Using the standard formalism of standard quantum mechanics, one calculates the Born probability $$P_B(a_i)=\langle \mathrm{\Psi }|𝐏_{A=a_i}|\mathrm{\Psi }\rangle ,$$ (1) where the operator $`𝐏_{A=a_i}`$ projects on the subspace corresponding to the eigenvalue $`a_i`$ of the observable $`A`$. $`P_B(a_i)`$ is generally regarded as the probability with which a measurement of $`A`$ performed after the ‘preparation’ of a system $`S`$ in the ‘state’ $`|\mathrm{\Psi }\rangle `$ yields the result $`a_i`$. But $`P_B(a_i)`$ is not the only such probability. Using a nonstandard formulation of standard quantum theory called time-symmetrized quantum theory (Aharonov and Vaidman, 1991; Vaidman, 1998), one calculates the ABL probability $$P_{ABL}(a_i)=\frac{\left|\langle \mathrm{\Psi }_2|𝐏_{A=a_i}|\mathrm{\Psi }_1\rangle \right|^2}{\sum _j\left|\langle \mathrm{\Psi }_2|𝐏_{A=a_j}|\mathrm{\Psi }_1\rangle \right|^2}.$$ (2) ABL probabilities were first introduced in a seminal paper by Aharonov, Bergmann, and Lebowitz (1964). In this paper it was shown that $`P_B(a_i)`$ can also be thought of as the probability with which a measurement of $`A`$, performed before what may be called the ‘retroparation’ of $`S`$ in the ‘state’ $`|\mathrm{\Psi }\rangle `$, yields the result $`a_i`$.
Further, it was shown that if a system is ‘prepared’ at the time $`t_1`$ and ‘retropared’ at the time $`t_2`$ in the respective ‘states’ $`|\mathrm{\Psi }_1\rangle `$ and $`|\mathrm{\Psi }_2\rangle `$, the probability with which a measurement of $`A`$ performed at an intermediate time $`t_m`$ yields (or would have yielded) the result $`a_i`$ is given by $`P_{ABL}(a_i)`$. (The $`\mathrm{\Psi }`$’s in (2) are related to the ‘pre-/retropared’ $`\mathrm{\Psi }`$’s via unitary transformations $`U(t_m-t_1)`$ and $`U(t_m-t_2)`$.) Born probabilities can be measured (as relative frequencies) using preselected ensembles (that is, ensembles of identically ‘prepared’ systems). ABL probabilities can be measured using pre- and postselected ensembles (that is, ensembles of systems that are both identically ‘prepared’ and identically ‘retropared’). If the Born probability $`P_B(a_i,t)`$ of obtaining the result $`a_i`$ at time $`t`$ is equal to 1, one feels justified in regarding the value $`a_i`$ of the observable $`A`$ as a property that is actually possessed at the time $`t`$; that is, one feels justified in assuming that at the time $`t`$ there is an element of reality corresponding to the value $`a_i`$ of the observable $`A`$ irrespective of whether $`A`$ is actually measured. Redhead (1987) has expressed this feeling as the following ‘sufficiency condition’: (ER1) If we can predict with certainty, or at any rate with \[Born\] probability one, the result of measuring a physical quantity at time $`t`$, then at the time $`t`$ there exists an element of reality corresponding to the physical quantity and having a value equal to the predicted measurement result. The controversy about time-symmetric elements of reality arose because it appeared that Vaidman (1993) made the same claim with regard to ABL probabilities: (ER2) If we can infer with certainty \[that is, with ABL probability one\] that the result of measuring at time $`t`$ an observable $`A`$ is $`a`$, then at the time $`t`$ there exists an element of reality $`A=a`$. In response to criticism by Kastner and others (Kastner, 1999; and references therein), Vaidman (1999) clarified that he intended the term ‘element of reality’ in a ‘technical’ rather than ‘ontological’ sense: saying that there is an element of reality $`A=a`$ is the same as saying that if $`A`$ is measured, the result is certain to be $`a`$. In other words, (ER2) is a tautology: if $`A=a`$ is certain to be found, then $`A=a`$ is certain to be found. In formulating (ER2), Vaidman does not affirm the existence of an element of reality irrespective of whether $`A`$ is actually measured. (ER2) defines what it means to affirm the existence of an element of reality corresponding to $`A=a`$. To say that there is such an element of reality is to affirm the truth of a conditional, not the existence of an actual situation or state of affairs. Since ordinarily the locution ‘element of reality’ refers to an actual state of affairs, Vaidman’s terminological choice was unfortunate and has misled many readers. But beyond that, his reading of (ER2) is unobjectionable. I shall, however, stick to the ordinary, ontological meaning of ‘element of reality.’ In what follows, (ER2) is to be understood accordingly, that is, as affirming an actual state of affairs (the existence of an ‘ordinary’ element of reality) just in case the corresponding ABL probability is one. Hence my showing that (ER2), thus understood, is false, has no bearing on Vaidman’s reading of (ER2). A definition cannot be false.
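To make the two formulas concrete, here is a small numerical sketch (ours, not from the original papers) that evaluates Eq. (1) and Eq. (2) for finite-dimensional ‘states’ and projectors; the spin-1/2 example at the end is an arbitrary illustration.

```python
import numpy as np

def born(psi, P):
    """Eq. (1): Born probability <psi|P|psi> for a normalized |psi>."""
    return np.real(np.vdot(psi, P @ psi))

def abl(psi1, psi2, projectors, i):
    """Eq. (2): ABL probability of outcome i for a system pre-selected in
    |psi1> and post-selected in |psi2>; `projectors` spans all outcomes."""
    amps = [abs(np.vdot(psi2, P @ psi1))**2 for P in projectors]
    return amps[i] / sum(amps)

# Spin-1/2 illustration: sigma_z outcomes, pre-selected along +x, post-selected along +y
up, down = np.array([1, 0], complex), np.array([0, 1], complex)
Pz = [np.outer(up, up.conj()), np.outer(down, down.conj())]
x_plus = (up + down) / np.sqrt(2)
y_plus = (up + 1j * down) / np.sqrt(2)
print(born(x_plus, Pz[0]))         # 0.5
print(abl(x_plus, y_plus, Pz, 0))  # 0.5 for this choice of pre- and postselection
```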
The aim of this paper is to show that not only (ER2) but also (ER1) is false. In Sec. 2 I discuss the three-box gedanken experiment due to Vaidman (1996b). By calculating the ABL probabilities associated with different versions of this experiment I show that positions are extrinsic, and that, consequently, (ER1) and (ER2) are both false. Since the validity of arguments based on time-symmetric quantum counterfactuals is open to debate, in Sec. 3 I show without making use of ABL probabilities that positions are extrinsic and that (ER1) is false. This leads to the conclusion that the measurement problem is a pseudoproblem, and that all that ever gets objectively entangled is counterfactuals. In Sec. 4 I establish the cogency of the argument of Sec. 2 by showing that the proper condition for the truth of a quantum counterfactual is $`P_{ABL}=1`$. This necessitates a discussion of objective probabilities, retroactive causality, and the objectivity or otherwise of the psychological arrow of time. In Sec. 5 I argue that since quantum mechanics presupposes the occurrence/existence of actual events and/or states of affairs, it cannot be called upon to account for the emergence of ‘classicality.’ What is more, if quantum mechanics is as fundamental as its mathematical simplicity and empirical success suggest, the property-defining events or states of affairs presupposed by quantum mechanics are causal primaries – nothing accounts for their occurrence or existence. If this is correct, the remaining interpretative task consists not in explaining the quantum-mechanical correlations and/or correlata but in understanding what they are trying to tell us about the world. I confine myself to pointing out, in Sec. 6, the most notable implications of the diachronic correlations, viz., the existence of entities of limited transtemporal identity, objective indefiniteness, and the spatial nonseparability of the world. The extrinsic nature of positions, finally, appears to involve a twofold vicious regress. Its resolution involves macroscopic objects, which are defined and discussed in Sec. 7. Section 8 concludes with a remark on the tension of contrast between objective indefiniteness and the inherent definiteness of language. 2. The Lesson of the Three-Box Experiment In the following I present a somewhat different but conceptually equivalent version of Vaidman’s (1996b) three-box experiment. Consider a wall in which there are three holes $`A`$, $`B`$ and $`C`$. In front of the wall there is a particle source $`Q`$. Behind the wall there is a particle detector $`D`$. Both $`Q`$ and $`D`$ are equidistant from the three holes. Behind $`C`$ there is one other device; its purpose is to cause a phase shift by $`\pi `$. Particles emerging from the wall are thus preselected in a ‘state’ $`|\mathrm{\Psi }_1\rangle `$ proportional to $`|a\rangle +|b\rangle +|c\rangle `$, where $`|a\rangle `$, $`|b\rangle `$ and $`|c\rangle `$ represent the respective alternatives ‘particle goes through $`A`$,’ ‘particle goes through $`B`$,’ and ‘particle goes through $`C`$,’ while detected particles are postselected in a ‘state’ $`|\mathrm{\Psi }_2\rangle `$ proportional to $`|a\rangle +|b\rangle -|c\rangle `$. We will consider two possible intermediate measurements. First we place near $`A`$ a device $`F_a`$ that beeps whenever a particle passes through $`A`$.
With the help of the ABL formula one finds that every particle of this particular pre- and postselected ensemble $`ℰ`$ causes $`F_a`$ to beep with probability 1, as one may verify by calculating the probability with which a particle would be found passing through the union $`BC`$ of $`B`$ and $`C`$: $$P_{ABL}(BC)\propto \left|\langle \mathrm{\Psi }_2|𝐏_{BC}|\mathrm{\Psi }_1\rangle \right|^2=0,$$ where $`𝐏_{BC}=|b\rangle \langle b|+|c\rangle \langle c|`$ projects on the subspace corresponding to the alternative ‘particle goes through $`BC`$.’ We obtain the same result by considering what would happen if $`A`$ were closed, or if all particles that make $`F_a`$ beep were removed from $`ℰ`$. The remaining particles are pre- and postselected in ‘states’ proportional to $`|b\rangle +|c\rangle `$ and $`|b\rangle -|c\rangle `$, respectively, and these ‘states’ are orthogonal. The result is an empty ensemble: if $`A`$ were closed, no particle would arrive at $`D`$. Does this warrant the conclusion that all particles belonging to $`ℰ`$ pass through $`A`$? Let us instead place near $`B`$ a device $`F_b`$ that beeps whenever a particle passes through $`B`$. Considering the invariance of $`|\mathrm{\Psi }_1\rangle `$ and $`|\mathrm{\Psi }_2\rangle `$ under interchange of $`|a\rangle `$ and $`|b\rangle `$, one is not surprised to find that the ensemble $`ℰ`$ would be empty if the particles causing $`F_b`$ to beep were removed. Hence if the conclusion that all particles belonging to $`ℰ`$ pass through $`A`$ is warranted, so is the conclusion that the same particles also pass through $`B`$. If these ‘conclusions’ were legitimate, they would make nonsense of the very concept of localization. Therefore we are forced to conclude instead that an ABL probability equal to 1 does not warrant the existence of a corresponding element of reality (in the straightforward, ontological sense). Taken in this sense, (ER2) is false. It pays to investigate further. We are in fact dealing with four different experimental arrangements: (i) there is no beeper, (ii) $`F_a`$ is the only beeper in place, (iii) $`F_b`$ is the only beeper in place, (iv) both $`F_a`$ and $`F_b`$ are in place. The first arrangement permits no legitimate inference concerning the hole taken by a particle. Assuming that $`F_a`$ is 100% efficient, the second arrangement guarantees that one of two inferences is warranted: ‘the particle goes through $`A`$’ (in case $`F_a`$ beeps) or ‘the particle goes through $`BC`$’ (in case $`F_a`$ fails to beep). Assuming that $`F_b`$ is equally efficient, the third arrangement likewise guarantees that one of two inferences is warranted: ‘the particle goes through $`B`$’ or ‘the particle goes through $`AC`$.’ The fourth arrangement, finally, guarantees that one of three inferences is warranted: ‘the particle goes through $`A`$’ (in case $`F_a`$ beeps), ‘the particle goes through $`B`$’ (in case $`F_b`$ beeps), and ‘the particle goes through $`C`$’ (in case neither $`F_a`$ nor $`F_b`$ beeps). The following counterfactuals are therefore true: (i) If $`F_a`$ were in place, either it would beep and the particle would go through $`A`$, or it would fail to beep and the particle would go through $`BC`$. (ii) If $`F_b`$ were in place, either it would beep and the particle would go through $`B`$, or it would fail to beep and the particle would go through $`AC`$. (iii) If both $`F_a`$ and $`F_b`$ were in place, one of the following three conjunctions would be true: $`F_a`$ beeps and the particle goes through $`A`$; $`F_b`$ beeps and the particle goes through $`B`$; neither beeper beeps and the particle goes through $`C`$.
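The certainty claims just derived can be verified numerically; the sketch below (ours) evaluates the ABL formula for the three experimental arrangements, with $`|\mathrm{\Psi }_1\rangle `$ proportional to $`|a\rangle +|b\rangle +|c\rangle `$ and $`|\mathrm{\Psi }_2\rangle `$ proportional to $`|a\rangle +|b\rangle -|c\rangle `$.

```python
import numpy as np

a, b, c = np.eye(3)                # |a>, |b>, |c>
psi1 = (a + b + c) / np.sqrt(3)    # preselected 'state'
psi2 = (a + b - c) / np.sqrt(3)    # postselected 'state'

def abl(projectors):
    """Normalized ABL probabilities over a set of alternative outcomes."""
    amps = np.array([abs(psi2 @ P @ psi1)**2 for P in projectors])
    return amps / amps.sum()

P_a, P_b, P_c = (np.outer(v, v) for v in (a, b, c))
print(abl([P_a, P_b + P_c]))  # [1, 0]: with F_a alone, the particle is certain to be found at A
print(abl([P_b, P_a + P_c]))  # [1, 0]: with F_b alone, it is certain to be found at B
print(abl([P_a, P_b, P_c]))   # [1/3, 1/3, 1/3]: with both beepers in place
```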
If we confine the discussion to particles that are emitted by $`Q`$ and detected by $`D`$, then the following counterfactuals are true: If $`F_a`$ but not $`F_b`$ were present, the alternatives represented by $`|b\rangle `$ and $`|c\rangle `$ would interfere with each other but not with the alternative represented by $`|a\rangle `$; as a consequence, they would interfere destructively; therefore $`F_a`$ would beep and the particle would go through $`A`$. By the same token, if $`F_b`$ were the only beeper present, the alternatives represented by $`|a\rangle `$ and $`|c\rangle `$ would interfere destructively, $`F_b`$ would beep, and the particle would go through $`B`$. Finally, if both beepers were present, no interference would take place; each particle would go through a particular hole, but not all particles would go through the same hole. To my mind, these counterfactuals are unobjectionable. One does not have to delve into the general philosophy of counterfactuals to see that they are true. I concur with Vaidman (1999) that quantum counterfactuals are unambiguous. Quantum counterfactuals are statements about possible worlds in which the outcomes of all measurements but one are the same as in the actual world. The remaining measurement is performed in a number of possible worlds (the number depends on the range of possible values) but not in the actual world. The three-box (or three-hole) experiment demonstrates that position probabilities cannot be assigned independently of experimental arrangements. More specifically, they cannot be assigned without specifying a set of experimentally distinguishable alternatives. A position probability of 1 depends not only on the way the particle is ‘prepared’ and ‘retropared’ but also on the set $`L`$ of alternative locations that can be experimentally distinguished. If $`L=\{A,BC\}`$, the particle is certain to be found in (or going through) $`A`$, but the inference of an element of reality ‘the particle went through $`A`$’ is warranted only if the members of $`L`$ are actually distinguished (that is, only if the corresponding experiment is actually performed). It follows that the position of a particle is an extrinsic property. By an extrinsic property $`p`$ of $`S`$ I mean a property of $`S`$ that is undefined unless either the truth or the falsity of the proposition $`𝐩=`$ ‘$`S`$ is $`p`$’ can be inferred from what happens or is the case in the ‘rest of the world’ $`𝒲-S`$. The position of a particle is undefined unless there is a specific set $`\{R_i\}`$ of alternative locations, and unless there is a matter of fact about the particular location $`R_j`$ at which the particle is, or has been, present. (By ‘a matter of fact about the particular location $`R_j`$’ I mean an actual event or an actual state of affairs from which that location can be inferred. Examples of actual events are the click of a Geiger counter or the deflection of a pointer needle. An actual state of affairs is expressed, for instance, by the statement ‘The needle points to the left.’ Can such events and states of affairs be defined in quantum-mechanical terms? See below.) Positions are defined in terms of position-indicating matters of fact. They ‘dangle’ from actual events or actual states of affairs. And if it is true that ‘\[t\]here is nothing in quantum theory making it applicable to three atoms and inapplicable to $`10^{23}`$’ (Peres and Zurek, 1982), this must be as true of footballs and cats as it is of particles and atoms.
The positions of things are what matters of fact imply concerning the positions of things. If this is correct, then (ER1) is as false as (ER2). In particular, the ‘sufficiency condition’ (ER1) is not sufficient for the presence of a material object $`O`$ in a region of space $`R`$. The condition that is both necessary and sufficient for the presence of $`O`$ in $`R`$ is the existence of a matter of fact that indicates $`O`$’s presence in $`R`$. If there isn’t any such matter of fact (now or anytime past or future), and if there also isn’t any matter of fact that indicates $`O`$’s absence from $`R`$, then the sentence ‘$`O`$ is in $`R`$’ is neither true nor false but meaningless, and $`O`$’s position with respect to $`R`$ (inside or outside) is undefined. 3. Probabilities, Conditionals, Elements of Reality, and the Measurement Problem In the previous section I made use of the ABL probabilities associated with different versions of Vaidman’s three-box experiment to show that positions are extrinsic, and that, consequently, both (ER1) and (ER2) are false. The validity of arguments based on time-symmetric quantum counterfactuals might be challenged. In the present section I therefore show without recourse to ABL probabilities that positions are extrinsic and that (ER1) is false. In the following section I shall establish the cogency of the argument of the previous section by showing that the proper condition for the truth of a quantum counterfactual is $`P_{ABL}=1`$. (It is readily verified that $`P_B=1`$ is sufficient but not necessary for this condition to be met.) Consider two perfect detectors $`D_1`$ and $`D_2`$ whose respective (disjoint) sensitive regions are $`R_1`$ and $`R_2`$. If the support of the (normalized) wave function associated with the (center-of-mass) position of an object $`O`$ is neither wholly inside $`R_1`$ nor wholly inside $`R_2`$, nothing necessitates the detection of $`O`$ by $`D_1`$, and nothing necessitates the detection of $`O`$ by $`D_2`$. But if the wave function vanishes outside $`R_1\cup R_2`$, the probabilities for either of the detectors to click add up to 1, so one of the two detectors is certain to click. Two perfect detectors with sensitive regions $`R_1`$ and $`R_2`$ constitute one perfect detector $`D`$ with sensitive region $`R_1\cup R_2`$. But how can it be certain that one detector will click when individually neither detector is certain to click? What could cause $`D`$ to click while causing neither $`D_1`$ nor $`D_2`$ to click? That two perfect detectors with disjoint sensitive regions constitute one perfect detector for the union of the two regions forms part of the definition of what we mean by a perfect detector. By definition, a perfect detector clicks when the quantum-mechanical probability for it to click is 1. $`D`$ is certain to click because the probabilities for either of the two detectors to click add up to 1. Hence the question of what causes $`D`$ to click does not arise. Perfect detectors are theoretical constructs that by definition behave in a certain way. If real detectors behaved in the same way, it would be proper to inquire why they behave in this way. But real detectors are not perfect and do not behave in this way. A real detector is not certain to click when the corresponding quantum-mechanical probability is 1. Hence the question of what causes a real detector to click does not arise. Nothing causes a real detector to click.
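The additivity claim is just the Born rule applied to a wave function supported on $`R_1\cup R_2`$; the toy computation below (ours, with an arbitrary packet) shows the two regional weights summing to 1 even though neither is 1 by itself.

```python
import numpy as np

# Toy 1-d illustration: a normalized wave function supported only on
# R1 = [0, 1) and R2 = [1, 2]
dx = 0.001
x = np.arange(0.0, 2.0, dx)
psi = np.sin(np.pi * x)                       # vanishes outside [0, 2]
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

p1 = np.sum(np.abs(psi[x < 1])**2) * dx       # Born weight of R1
p2 = np.sum(np.abs(psi[x >= 1])**2) * dx      # Born weight of R2
print(p1, p2, p1 + p2)   # each < 1, the sum = 1 (up to discretization error)
```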
What I aim at in this paper is an interpretation of quantum mechanics that takes standard quantum mechanics to be fundamental and complete. My claim that nothing causes a real detector to click is based (i) on this assumption and (ii) on the observation that the efficiency of a real detector cannot be accounted for in quantum-mechanical terms. All quantum-mechanical probability assignments are relative to perfect detectors. If quantum mechanics predicts that $`D_1`$ will click with a probability of $`1/2`$, it does not predict that a real detector will click in 50% of all runs of the actual experiment. What it predicts is that $`D_1`$ will click in 50% of those runs of the experiment in which either $`D_1`$ or $`D_2`$ clicks. Quantum mechanics has nothing to say about the percentage of runs in which no counter clicks (that is, it tells us nothing about the efficiency of $`D`$, or of any other real detector for that matter). If quantum mechanics predicts that $`D_1`$ will click with probability 1, it accounts for the fact that whenever either $`D_1`$ or $`D_2`$ clicks, it is $`D_1`$ that clicks. It does not account for the clicking of either $`D_1`$ or $`D_2`$. Where quantum mechanics is concerned, nothing causes the clicking. And if quantum mechanics is as fundamental and complete as is here assumed, then this is true without qualification: nothing causes the clicking. It is well known that all actually existing detectors are less than perfect. On the other hand, there is no (obvious) theoretical limit to the efficiency of a real detector. It might one day be possible to build a detector with an efficiency arbitrarily close to 100%. However, unless the efficiency of detectors is exactly 100%, it remains impossible to interpret a ‘preparation’ that warrants assigning probability 1 as causing a detector to click. If the preparation is to be a sufficient reason for the click, the detector must always click (that is, it must be perfect). What if it were possible to build perfect detectors? We could then speak of the preparation as the cause of the click, but if quantum mechanics is fundamental and complete, it would still be impossible to explain how the preparation causes the click: ex hypothesi, no underlying mechanism exists. The perfect correlation between preparation and click would have to be accepted as a brute fact. So would the fact that either $`D_1`$ or $`D_2`$ clicks when neither of them is certain to click. Causality would be just another name for such correlations, not an explanation. Quantum mechanics assigns probabilities (whether Born or ABL) to alternative events (e.g., deflection of the pointer needle to the left or to the right) or to alternative states of affairs (e.g., the needle’s pointing left or right). Implicit in every normalized distribution of probabilities over a specified set of alternative events or states of affairs is the assumption that exactly one of the specified alternatives happens or obtains. If we assign normalized probabilities to a set of counterfactuals, we still assume (counterfactually) that exactly one of the counterfactuals is true. In other words, if we assign probabilities to the possible results of an unperformed measurement, we still assume that the measurement, if it had been performed, would have yielded a definite result.
Like all (normalized) probabilities, the probabilities assigned by quantum mechanics are assigned to mutually exclusive and jointly exhaustive possibilities, and they are assigned on the supposition that exactly one possibility is, or would have been, a fact. Even the predictions of the standard version of standard quantum mechanics therefore are conditionals. Everything this version tells us conforms to the following pattern: If there is going to be a matter of fact about the alternative taken (from a specific range of alternatives), then such and such are the Born probabilities with which that matter of fact will indicate this or that alternative. It is important to understand that quantum mechanics never allows us to predict that there will be such a matter of fact, unconditionally. If the Born probability of a particular event $`F`$ is 1, we are not entitled to predict that $`F`$ will happen. What we are entitled to infer is only this: Given that one of a specified set of events will happen, and given that $`F`$ is an element of this set, the event that will happen is $`F`$. In order to get from a true conditional to an element of reality, a condition has to be met: a measurement must be successfully performed, there must be a matter of fact about the value of an observable, one of a specific set of alternative property-indicating events or states of affairs must happen or obtain. Quantum mechanics does not predict that a measurement will take place, nor the time at which one will take place, nor does it specify the conditions in which one will take place. It requires us to assume that one will take place, for it is on this assumption that its probability assignments are founded. It follows that (ER1) is false. A Born probability equal to 1 is equivalent to a conditional $`c`$. The inference of a corresponding element of reality is warranted only if the condition laid down by $`c`$ is actually met. It also follows that positions are extrinsic. The condition laid down by $`c`$ is the existence of a matter of fact about the value taken by some observable. If this observable has for its spectrum a set $`\{R_i\}`$ of mutually disjoint regions of space, if $`R`$ is an element of $`\{R_i\}`$, and if the Born probability associated with $`R`$ is 1, then $`O`$ is inside $`R`$ just in case there is a matter of fact about the particular element of $`\{R_i\}`$ that contains $`O`$. I conclude this section with a few remarks concerning the so-called measurement problem. First some basic facts. Quantum mechanics represents the possible values $`q_i^k`$ of all observables $`Q^k`$ as projection operators $`𝐏_{Q^k=q_i^k}`$ on some Hilbert space $`ℋ`$. The projection operators that jointly represent the range of possible values of a given observable are mutually orthogonal. If one defines the ‘state’ of a system as a probability measure on the projection operators on $`ℋ`$ (Cassinello and Sánchez-Gómez, 1996; Jauch, 1968, p. 94) resulting from a preparation of the system (Jauch, 1968, p. 92), one finds (Cassinello and Sánchez-Gómez, 1996; Jauch, 1968, p. 132) that every such probability measure has the form $`P(𝐏)=\text{Tr}(\mathrm{𝐖𝐏})`$, where $`𝐖`$ is a unique density operator (that is, a unique self-adjoint, positive operator satisfying $`\text{Tr}(𝐖)=1`$ and $`𝐖^2\le 𝐖`$).
\[The trace $`\text{Tr}(𝐗)`$ is the sum $`\sum _i\langle i|𝐗|i\rangle `$, where $`\{|i\rangle \}`$ is any orthonormal basis in $`ℋ`$.\] If $`𝐖^2(t)=𝐖(t)`$, $`𝐖(t)`$ projects on a one-dimensional subspace of $`ℋ`$ and thus is equivalent – apart from an irrelevant phase factor – to a ‘state’ vector $`|\mathrm{\Psi }(t)\rangle `$ or a wave function $`\mathrm{\Psi }(x,t)`$, $`x`$ being any point in the system’s configuration space. In this case one retrieves the Born formula (1). The quantum-mechanical ‘state’ vector (or the wave function, or the density operator) thus is essentially a probability measure on the projection operators on $`ℋ`$, specifying probability distributions over all sets of mutually orthogonal subspaces of $`ℋ`$. Hence the $`t`$ that appears in the ‘states’ $`𝐖(t)`$, $`|\mathrm{\Psi }(t)\rangle `$, and $`\mathrm{\Psi }(x,t)`$ has the same significance as the $`t`$ that appears in the time-dependent probabilities $`P_B(q_i^k,t)`$. Now recall that quantum mechanics predicts neither that a measurement will take place nor the time at which one will take place. It requires us to assume that a measurement will take place at a specified time. The time-dependence of the ‘state’ vector therefore is a dependence on the specified time at which a specified observable (with a specified range of values) is measured either in the actual world or in a set of possible worlds. It is not the time-dependence of a state of affairs that evolves in time. On the supposition that $`|\mathrm{\Psi }(t)\rangle `$ represents a state of affairs that evolves in time (so that at every time $`t`$ a state of affairs $`|\mathrm{\Psi }(t)\rangle `$ obtains), one needs to explain what brings about the real or apparent discontinuous transition from a state of affairs represented by a ket of the form $`|\mathrm{\Psi }(t)\rangle =\sum _ia_i(t)|a_i\rangle `$ to the state of affairs represented by one of the kets $`|a_i\rangle `$. This is the measurement problem. It is a pseudoproblem because a collection of time-dependent probabilities is not a state of affairs that evolves in time. The probability for something to happen at the time $`t`$ does not exist at $`t`$, any more than the probability for something to be located in $`R`$ exists in $`R`$. The probabilities $`P_B(q_i^k,t)`$ are determined by the relevant matters of fact about the properties possessed by a physical system at or before a certain time $`t_0`$. In the special case of a complete measurement performed at $`t_0`$, they are given by the Born formula $$P_B(q_i^k,t)=\langle \mathrm{\Psi }(t)|𝐏_{Q^k=q_i^k}|\mathrm{\Psi }(t)\rangle \quad \text{for }t\ge t_0,$$ (3) where $`|\mathrm{\Psi }(t)\rangle =U(t-t_0)|\mathrm{\Psi }(t_0)\rangle `$. $`|\mathrm{\Psi }(t_0)\rangle `$ is the ‘state’ ‘prepared’ by the measurement at $`t_0`$ (that is, it represents the properties possessed by the system at $`t_0`$). $`U(t-t_0)`$ is the unitary operator that governs the time-dependence of quantum-mechanical probabilities (often misleadingly referred to as the ‘time evolution operator’). And $`t`$ is the stipulated time at which the next measurement is performed, either actually or counterfactually. Thus all that a superposition of the form $`|\mathrm{\Psi }(t)\rangle =\sum _ia_i(t)|a_i\rangle `$ tells us, is this: If there is a matter of fact from which one can infer the particular property (from the set of properties represented by the kets $`|a_i\rangle `$) that is actually possessed by the system at the stipulated time $`t`$, then the prior probability that this matter of fact indicates the property represented by $`|a_i\rangle `$ is $`\left|a_i\right|^2`$.
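The two technical points just made (the ‘state’ as a probability measure $`P(𝐏)=\text{Tr}(\mathrm{𝐖𝐏})`$, and the $`t`$ in $`|\mathrm{\Psi }(t)\rangle `$ as the stipulated measurement time) can both be illustrated numerically. The following sketch (ours; the two-level basis and toy Hamiltonian are arbitrary assumptions) checks the density-operator conditions and then tabulates the time-indexed Born probabilities of Eq. (3).

```python
import numpy as np

# A density operator W on a 2-dimensional Hilbert space
up, down = np.array([1, 0], complex), np.array([0, 1], complex)
W = 0.7 * np.outer(up, up.conj()) + 0.3 * np.outer(down, down.conj())
print(np.trace(W).real)                                # Tr(W) = 1
print(np.all(np.linalg.eigvalsh(W @ W - W) <= 1e-12))  # W^2 <= W
P_up = np.outer(up, up.conj())
print(np.trace(W @ P_up).real)                         # P(P_up) = Tr(W P_up) = 0.7

# Eq. (3): Born probabilities indexed by the stipulated measurement time t.
# For H = sigma_x (hbar = 1), exp(-iHt) = cos(t) I - i sin(t) H exactly.
H = np.array([[0, 1], [1, 0]], complex)
psi0 = up                                              # 'state' prepared at t0 = 0
for t in (0.0, 0.5, 1.0):
    psi_t = np.cos(t) * psi0 - 1j * np.sin(t) * (H @ psi0)  # |Psi(t)> = U(t - t0)|Psi(t0)>
    print(t, np.vdot(psi_t, P_up @ psi_t).real)        # P_B(up, t) = cos^2(t)
```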
It is self-evident that if there is such a matter of fact, and if this matter of fact is taken into account, the correct basis for further conditional inferences is not $`|\mathrm{\Psi }(t)\rangle `$ but one of the kets $`|a_i\rangle `$. This obvious truism is the entire content of the so-called projection postulate (Lüders, 1951; von Neumann, 1955). By the same token, all that an entangled ‘state’ of the form $`\sum _ia_i(t)|b_i\rangle |a_i\rangle `$ tells us, is this: If there are two matters of fact, one indicating which of the properties represented by the kets $`|a_i\rangle `$ is possessed by the first system, and another indicating which of the properties represented by the kets $`|b_i\rangle `$ is possessed by the second system, then the two matters of fact together indicate $`|a_i\rangle `$ and $`|b_i\rangle `$ with probability $`\left|a_i\right|^2`$, and they indicate $`|a_i\rangle `$ and $`|b_j\rangle `$ ($`j\ne i`$) with probability 0. But if there is any such matter of fact, these entangled probabilities are based on an incomplete set of facts and are therefore subjective (that is, they reflect our ignorance of some relevant fact). All that ever gets objectively entangled is counterfactuals. 4. Objective Probabilities, Retrocausation, and the Arrow of Time What most strikingly distinguishes quantum physics from classical physics is the existence of objective probabilities. In a classical world there are no (nontrivial) objective probabilities: the probability of dealing an ace is not $`1/13`$ but either $`1`$ or $`0`$, depending on whether or not an ace is top card. Objective probabilities have nothing to do with ignorance; there is nothing (that is, no actual state of affairs, no actually possessed property, no actually obtained measurement result) for us to be ignorant of. Then what is it that has an objective probability? What are objective probabilities distributed over? The obvious answer is: counterfactuals. Only a contrary-to-fact conditional can be assigned an objective probability. Objective probabilities are distributed over the possible results of unperformed measurements. Objective probabilities are objective in the sense that they are not subjective, and they are not subjective because they would be so only if the corresponding measurements were performed. In short, objective probabilities are probabilities that are counterfactually subjective. Probabilities can be objective only if they are based on a complete set of facts. Otherwise they are subjective: they reflect our ignorance of some of the relevant facts. Born probabilities are in general calculated on the basis of an incomplete set of facts; they take into account the relevant past matters of fact but ignore the relevant future matters of fact. Born probabilities are objective only if there are no relevant future matters of fact. Thus they cannot be objective if any one of the measurements to the possible results of which they are assigned is actually performed. This is equally true of ABL probabilities: if one of the measurements to the possible results of which they are assigned is actually performed, they too are calculated on the basis of an incomplete set of facts. They take into account all relevant matters of fact except the result of the actually performed measurement. On the other hand, if none of these measurements is actually performed, ABL probabilities take into account all relevant matters of fact and are therefore objective.
Thus probabilities are objective only if they are distributed over alternative properties or values none of which are actually possessed, and only if they are based on all relevant events or states of affairs, including those that are yet to occur or obtain. In general the objective probabilities associated with contrary-to-fact conditionals depend also on events that haven’t yet happened or states of affairs that are yet to obtain. Hence some kind of retroactive causality appears to be at work. This necessitates a few remarks concerning causality and the apparent ‘flow’ of time. But first let us note that nothing entails the existence of time-reversed causal connections between actual events and/or states of affairs. To take a concrete example, suppose that at $`t_1`$ the $`x`$ component $`\sigma _x`$ of the spin of an electron is measured, that at $`t_2>t_1`$ $`\sigma _y`$ is measured, and that the respective results are $`\uparrow _x`$ and $`\uparrow _y`$. Then a measurement of $`\sigma _x`$ would have yielded $`\uparrow _x`$ if it had been performed at an intermediate time $`t_m`$, and a measurement of $`\sigma _y`$ would have yielded $`\uparrow _y`$ if it had been performed instead. What would have happened at $`t_m`$ depends not only on what happens at $`t_1`$ but also on what happens at $`t_2`$. But if either $`\sigma _x`$ or $`\sigma _y`$ is actually measured at $`t_m`$ (other things being equal), nothing compels us to take the view that $`\uparrow _y`$ was obtained at $`t_m`$ because the same result was obtained at $`t_2`$. We can stick to the idea that causes precede their effects, according to which $`\uparrow _y`$ was obtained at $`t_2`$ because the same result was obtained at $`t_m`$. The point, however, is that nothing in the physics prevents us from taking the opposite view. The distinction we make between a cause and its effect is based on the apparent ‘motion’ of our location in time – the present moment – toward the future. This special location and its apparent ‘motion’ are as extraneous to physics as are our location and motion in space (Price, 1996). Equally extraneous, therefore, is the distinction between causes and effects. Physics deals with correlations between actual events or states of affairs, classical physics with deterministic correlations, quantum physics with statistical ones. Classical physics allows us to explain the deterministic correlations (abstracted from what appear to be universal regularities) in terms of causal links between individual events. And for some reason to be explained presently, we identify the earlier of two diachronically correlated events as the cause and the later as the effect. The time symmetry of the classical laws of motion, however, makes it equally possible to take the opposite view, according to which the later event is the cause and the earlier event the effect. In a deterministic world, the state of affairs at any time $`t`$ determines the state of affairs at any other time $`t^{\prime }`$, irrespective of the temporal order of $`t`$ and $`t^{\prime }`$. The belief in a time-asymmetric physical causality is nothing but an animistic projection of the perspective of a conscious agent into the inanimate world, as I proceed to show. I conceive of myself as a causal agent with a certain freedom of choice. But I cannot conceive of my choice as exerting a causal influence on anything that I knew, or could have known, at the time $`t_c`$ of my choice. I can conceive of my choice as causally determining only such events or states of affairs as are unknowable to me at $`t_c`$.
On the simplest account, what I knew or could have known at $`t_c`$ is everything that happened before $`t_c`$. And what is unknowable to me at $`t_c`$ is everything that will happen thereafter. This is the reason why we tend to believe that we can causally influence the future but not the past. And this constraint on our (real or imagined) causal efficacy is what we impose, without justification, both on the deterministic world of classical physics and on the indeterministic world of quantum physics. In my goal-directed activities I exploit the time-symmetric laws of physics. When I kick a football with the intention of scoring a goal, I make (implicit or explicit) use of my knowledge of the time-symmetric law that governs the ball’s trajectory. But my thinking of the kick as the cause and of the scored goal as its effect has nothing to do with the underlying physics. It has everything to do with my self-perception as an agent and my successive experience of the world. The time-asymmetric causality of a conscious agent in a successively experienced world rides piggyback on the symmetric determinisms of the physical world, and in general it rides into the future because in general the future is what is unknowable to us. But it may also ride into the past. Three factors account for this possibility. First, as I said, the underlying physics is time-symmetric. If we ignore the strange case of the neutral kaon (which doesn’t appear to be relevant to the interpretation of quantum mechanics), this is as true of quantum physics as it is of classical physics. If the standard formulation of quantum physics is asymmetric with respect to time, it is because we think (again without justification) that a measurement does more than yield a particular result. We tend to think that it also prepares a state of affairs which evolves toward the future. But if this is a consistent way of thinking – it is not (Mohrhoff, 1999) – then it is equally consistent to think that a measurement ‘retropares’ a state of affairs that evolves toward the past, as Aharonov, Bergmann, and Lebowitz (1964) have shown. Second, what matters is what can be known. If I could know the future, I could not conceive of it as causally dependent on my present choice. In fact, if I could (in principle) know both the past and the future, I could not conceive of myself as an agent. I can conceive of my choice as causally determining the future precisely because I cannot know the future. This has nothing to do with the truism that the future does not (yet) exist. Even if the future in some way ‘already’ exists, it can in part be determined by my present choice, provided I cannot know it at the time of my choice. By the same token, a past state of affairs can be determined by my present choice, provided I cannot know that state of affairs before the choice is made. There are two possible reasons why a state of affairs $`F`$ cannot be known to me at a given time $`t`$: (i) $`F`$ may obtain only after $`t`$; (ii) at $`t`$ there may as yet exist no matter of fact from which $`F`$ can be inferred. This takes us to the last of the three factors which account for the possibility of retrocausation: the contingent properties of physical systems are extrinsic. By a contingent property I mean a property that may or may not be possessed by a given system at a given time. For example, being inside a given region of space and having a spin component of $`+\hbar /2`$ along a given axis are contingent properties of electrons.
Properties that can be retrocausally determined by the choice of an experimenter cannot be intrinsic. [A property $`p`$ of a physical system $`S`$ is intrinsic iff the proposition $`𝐩=`$ ‘$`S`$ is $`p`$’ is ‘of itself’ (that is, unconditionally) either true or false at any time.] If $`p`$ is an extrinsic property of $`S`$, the respective criteria for the truth and the falsity of the proposition $`𝐩=`$ ‘$`S`$ is $`p`$’ are to be sought in the ‘rest of the world’ $`𝒲-S`$, and it is possible that neither criterion is satisfied, in which case $`𝐩`$ is neither true nor false but meaningless. It is also possible that each criterion consists in an event that may occur only after the time to which $`𝐩`$ refers. If this event is to some extent determined by an experimenter’s choice, retrocausation is at work. On the other hand, if $`p`$ is an intrinsic property of $`S`$, $`𝐩`$ has a truth value (‘true’ or ‘false’) independently of what happens in $`𝒲-S`$, so a fortiori it has a truth value independently of what happens there after the time $`t`$ to which $`𝐩`$ refers. There is then no reason why the truth value of $`𝐩`$ should be unknowable until some time $`t^{\prime }>t`$. In principle it is knowable at $`t`$, and therefore we cannot (or at any rate, need not) conceive of it as being to some extent determined by the experimenter’s choice at $`t^{\prime }`$. A paradigm case of retrocausation at work (Mohrhoff, 1999) is the experiment of Englert, Scully, and Walther (1994; Scully, Englert, and Walther, 1991). This experiment permits the experimenters to choose between (i) measuring the phase relation with which a given atom emerges coherently from (the union of) two slits and (ii) determining the particular slit from which the atom emerges. The experimenters can exercise this choice after the atom has emerged from the slit plate and even after it has hit the screen. By choosing to create a matter of fact about the slit taken by the atom, they retroactively cause the atom to have passed through a particular slit. By choosing instead to create a matter of fact about the atom’s phase relation, they retroactively cause the atom to have emerged with a definite phase relation. The retrocausal efficacy of their choice rests on the three factors listed above (in different order): (i) The four propositions $`𝐚_1=`$ “the atom went through the first slit,” $`𝐚_2=`$ “the atom went through the second slit,” $`𝐚_+=`$ “the atom emerged from the slits in phase,” and $`𝐚_{-}=`$ “the atom emerged from the slits out of phase” affirm extrinsic properties. (ii) There exist time-symmetric correlations between the atom’s possible properties at the time of its passing the slit plate and the possible results of two mutually exclusive experiments that can be performed at a later time. (iii) The result of the actually performed experiment is the first (earliest) matter of fact about either the particular slit taken by the atom or the phase relation with which the atom emerged from the slits. Before they made their choice, the experimenters could not possibly have known the slit from which, or the phase relation with which, the atom emerged. Probabilities, I said, can be objective only if they are based on all relevant matters of fact, including those still in the future. We are now in a position to see clearly why this should be so. Our distinction between the past, the present, and the future has nothing to do with physics.
Physics knows nothing of the experiential now (the special moment at which the world has the technicolor reality it has in consciousness), nor does it know anything of the difference between what happened before now (the past) and what will happen after now (the future).

5. The World According to Quantum Mechanics: Fundamentally Inexplicable Correlations Between Fundamentally Inexplicable Events

It is commonly believed that it is the business of quantum mechanics to account for the occurrence/existence of actual events or states of affairs. Environment-induced superselection (Joos and Zeh, 1985; Zurek, 1981, 1982), decoherent histories (Gell-Mann and Hartle, 1990; Griffiths, 1984; Omnès, 1992), quantum state diffusion (Gisin and Percival, 1992; Percival, 1994), and spontaneous collapse (Ghirardi, Rimini, and Weber, 1986; Pearle, 1989) are just some of the strategies that have been adopted with a view to explaining the emergence of ‘classicality.’ Whatever is achieved by these interesting endeavors, they miss this crucial point: quantum mechanics only takes us from facts to probabilities of possible facts. The question of how it is that exactly one possibility is realized must not be asked of a formalism that serves to assign probabilities on the implicit assumption that exactly one of a specified set of possibilities is realized. Even the step from probability 1 to factuality crosses a gulf that quantum mechanics cannot bridge. Quantum mechanics can tell us that $`O`$ is certain to be found in $`R`$ given that there is a matter of fact about its presence or otherwise in $`R`$, but only the actual matter of fact warrants the inference that $`O`$ is in $`R`$. Quantum mechanics does not predict that a measurement will take place, nor the time at which one will take place, nor does it specify the conditions in which one will take place. And if quantum mechanics is as fundamental as I presume it is, nothing allows us to predict that or when a measurement takes place, or to specify conditions in which one is certain to take place, for there is nothing that causes a measurement to take place. In other words, a matter of fact about the value of an observable is a causal primary. A causal primary is an event or state of affairs the occurrence or existence of which is not necessitated by any cause, antecedent or otherwise. I do not mean to say that in general nothing causes a measurement to yield this rather than that particular value. Unless one postulates hidden variables, this is a triviality. What I mean to say is that nothing ever causes a measurement to take place. Measurements (that is, detection events) are causal primaries. No detector is 100% efficient. Using similar detectors in series, it is easy enough to experimentally establish a detector’s (approximate) likelihood to click when the corresponding Born probability is 1, but of this likelihood no theoretical account is possible.4 There are two kinds of probability, the probability that a detector will respond (rather than not respond), and the probability that this (rather than any other) detector will respond given that exactly one detector will respond. The former probability cannot be calculated using the quantum formalism (nor, if quantum mechanics is fundamental and complete, any other formalism).
One can of course analyze the efficiency of, say, a Geiger counter into the efficiencies of its ‘component detectors’ (the ionization cross sections of the ionizable targets it contains), but the efficiencies of the ‘elementary detectors’ cannot be analyzed any further. This entails that a fundamental coupling constant such as the fine structure constant cannot be calculated from ‘first principles;’ it can only be gleaned from the experimental data. A fortiori, no theoretical account is possible of why or when a detector is certain to click. It never is. Quantum physics thus is concerned with correlations between events or states of affairs that are uncaused and therefore fundamentally inexplicable. As physicists we are not likely to take kindly to this conclusion, which may account for the blind spot behind which its inevitability has been hidden for so long. But we certainly are at a loss when it comes to accounting for the world of definite occurrences. Recently Mermin (1998) advocated an interpretation of the formalism of standard quantum mechanics according to which “[c]orrelations have physical reality; that which they correlate, does not.” He does not claim that there are no correlata, only that they are not part of physical reality. The correlated events belong to a larger reality which includes consciousness and which lies outside the scope of physics. Thus Mermin agrees that, where physics is concerned, the correlata are fundamentally inexplicable. The idea that the correlata are conscious perceptions (Lockwood, 1989; Page, 1996), or beliefs (Albert, 1992), or knowings (Stapp, 1993) has a respectable pedigree (London and Bauer, 1939; von Neumann, 1955). If one thinks of the state vector as representing a state of affairs that evolves in time, one needs something that is ‘more actual’ than the state vector – something that bestows ‘a higher degree of actuality’ than does the state vector – to explain why every successful measurement has exactly one result, or why measurements are possible at all. This is the spurious measurement problem all over again. It is spurious because the state vector does not represent an evolving state of affairs. If we were to relinquish this unwarranted notion, we would not need two kinds of reality to make sense of quantum mechanics, such as a physical reality and a reality that includes consciousness (Mermin, 1998), or a potential reality and an actual reality (Heisenberg, 1958; Popper, 1982; Shimony, 1978, 1989), or a mind-constructed ‘empirical’ reality and a mind-independent ‘veiled’ reality (d’Espagnat, 1995), or an unrecorded ‘smoky dragon’ reality and an irreversibly recorded reality (Wheeler, 1983). We could confine ourselves to talking about events that are causal primaries, the inferences that are warranted by such events, the correlations between such events or such inferences, and the further inferences that are warranted by these correlations. I do not deny that there is a larger reality that includes consciousness and that lies outside the scope of physics. What I maintain is that the interpretative problems concerning quantum mechanics can be solved without appealing to any larger reality, and that such an appeal does not help solve those problems because it is neither necessary nor possible to account for the occurrence of a causal primary. Theoretical physics is partly mathematics and partly semantics.
The semantic task is to name the fundamental epistemological and/or ontological entities and/or relations represented by the symbols of the formalism. I cannot think of a more satisfactory choice of a basic (and therefore not further explicable) ontological entity for a physical theory than a causal primary – something that is inexplicable by definition. Ever since the seminal paper by Einstein, Podolsky, and Rosen (1935), it has been argued that quantum mechanics is incomplete (Bell, 1966; Ford and Mantica, 1992; Lockwood, 1989; Primas, 1990). In point of fact, no theory can be more complete (with regard to its subject matter) than one that accounts for everything (within its subject matter) but what is inexplicable by definition. If there is anything that is incomplete, it is reality itself (that is, reality is incomplete relative to our description of it, which is ‘overcomplete’) – but I’m getting ahead of myself. Because the occurrence/existence of actual events or states of affairs is presupposed by the formalism, locutions such as ‘actual event,’ ‘actual state of affairs,’ ‘matter of fact’ cannot even be defined within the formalism. This conclusion too is unlikely to be popular with physicists, who naturally prefer to define their concepts in terms of the mathematical formalism they use. Einstein spent the last thirty years of his life trying (in vain) to get rid of field sources – those entities that have the insolence to be real by themselves rather than by courtesy of some equation (Pais, 1982). Small wonder if he resisted Bohr’s insight that not even the properties of things can be defined in purely mathematical terms. But Bohr was right. If Bohr (1934, 1963) insisted on the necessity of describing quantum phenomena in terms of experimental arrangements, it was because he held that the properties of quantum systems are defined by the experimental arrangements in which they are displayed (d’Espagnat, 1976). For ‘experimental arrangement’ read: what matters of fact permit us to infer concerning the properties of a given system at a given time. The contingent properties of physical systems are defined in terms of the actual events or states of affairs from which they can be inferred. They ‘dangle’ from what happens or is the case in the rest of the world. They cannot be defined in purely mathematical terms, for only intrinsic properties can be so defined. The scope of physics is not restricted to laboratory experiments. Any matter of fact that has a bearing on the properties of a physical system qualifies as a ‘measurement result.’ What is relevant is the occurrence or existence of an event or state of affairs warranting the assertability of a statement of the form ‘$`S`$ is $`p`$ (at the time $`t`$),’ irrespective of whether anyone is around to assert, or take cognizance of, that event or state of affairs, and irrespective of whether it has been anyone’s intention to learn something about $`S`$. The following picture emerges. The world is a mass of events that are causal primaries. Without any correlations between these events, it would be total chaos. As it turns out, the uncaused events are strongly correlated. If we don’t look too closely, they fall into neat patterns that admit of being thought of as persistent objects with definite and continuously evolving positions. Projecting our time-asymmetric agent-causality into the time-symmetric world of physics, we think of the positions possessed at later times as causally determined by the positions possessed at earlier times.
If we look more closely, we find that positions aren’t always attributable, and that those that are attributable aren’t always predictable on the basis of past events. We discover that positions do not ‘dangle’ from earlier positions by causal strings but instead ‘dangle’ from position-defining events that are statistically correlated but (being causal primaries) are not causally connected. Quantum mechanics describes the correlations but does nothing to explain them. Not only the correlata but also the correlations are incapable of (causal) explanation. Causal explanations are confined to the familiar macroworld of deterministic processes and things that evolve in time. This macroworld with its causal links is something we project onto the correlations and their uncaused correlata, but the projection works only to the extent that the correlations are not manifestly probabilistic.5 This is discussed in the last two sections. There are no causal processes more fundamental than the correlations and their correlata, processes that could in any manner account for the correlations or the correlata.

6. Spatial Nonseparability

The remaining interpretative task thus consists not in explaining the correlations but in understanding what they are trying to tell us about the world. Here I will confine myself to discussing some of the implications of the diachronic correlations (the correlations between results of measurements performed on the same system at different times).6 The implications of the synchronic (EPR) correlations have been discussed elsewhere (Mohrhoff, submitted). Perhaps the first insight one gleans from the correlations is the existence of persisting entities. If the correlations did not permit us to speak of such entities, we could not think of the correlata as possessed properties, extrinsic or otherwise. Suppose that we perform a series of position measurements. And suppose that every position measurement yields exactly one result (that is, each time exactly one detector clicks). Then we are entitled to infer the existence of an entity $`O`$ which persists through time (if not for all time), to think of the clicks given off by the detectors as matters of fact about the successive positions of this entity, to think of the behavior of the detectors as position measurements, and to think of the detectors as detectors. (The lack of transtemporal identity among particles of the same type of course forbids us to extend to such particles the individuality of a fully ‘classical’ entity.) The successive positions of $`O`$, however, are extrinsic: they are what can be inferred from the pattern of clicks. All that can be inferred concerning $`O`$’s positions at times at which no detector clicks, is counterfactual and probabilistic.7 The detectors of the present scenario are assumed to be time-specific: a click not only indicates a position but also the time at which it is possessed. There is a persistent entity all right, but there is then no actually possessed position to go with it. The next lesson to be learned from the correlations is that the positions of things are objectively indefinite or ‘fuzzy.’ This does not mean that $`O`$ has a fuzzy position. It means that statements of the form ‘$`O`$ is in $`R`$ at $`t`$’ are sometimes neither true nor false but meaningless. This possibility stands or falls with the extrinsic nature of positions and the existence of objective probabilities.
Take the counterfactual ‘If there were a matter of fact about the slit taken by the atom, the atom would have taken the first slit.’ We can assign to this counterfactual an objective probability iff the proposition ‘The atom went through the first slit’ is neither true nor false but meaningless. The reason why this proposition can be meaningless is that positions are extrinsic. It is meaningless just in case there isn’t any matter of fact about the slit taken by the atom. If it is true that the atom went through the union of the slits (that is, if the atom was emitted on one side of the slit plate and detected on the other side), and if it is meaningless to say that the atom went through the first slit (in which case it is also meaningless to say that it went through the second slit), then the conceptual distinction we make between the two slits has no reality for the atom. If that distinction were real for the atom (that is, if the atom behaved as if the two slits were distinct), the atom could not behave as if it went – as a whole, without being divided into distinct parts – simultaneously through both slits. But (if quantum mechanics is fundamental and complete) this is what the atom does when interference fringes are observed. Thus there are objects for which our conceptual distinction between mutually disjoint regions of space does not exist. It follows that the distinction between such regions cannot be real per se (that is, it cannot be an intrinsic property of the world). If it were real per se, the following would be true: at any one time, for every finite region $`R`$, the world can be divided into things or parts that are situated inside $`R`$, and things or parts that are situated inside the complement $`R^{\prime }`$ of $`R`$. The boundary of $`R`$ would demarcate an intrinsically distinct part of the world. But if this were the case, exactly one of the following three propositions would be true of every object $`O`$ at any given time: (i) $`O`$ is situated wholly inside $`R`$; (ii) $`O`$ is situated wholly inside $`R^{\prime }`$; (iii) $`O`$ has two parts, one situated wholly inside $`R`$ and one situated wholly inside $`R^{\prime }`$. If there is anything that (standard) quantum mechanics is trying to tell us about the world, it is that for at least some objects all of these propositions are sometimes false. It follows that the multiplicity and the distinctions inherent in our mathematical concept of space – a transfinite set of triplets of real numbers – are not intrinsic features of physical space. The notion that these features of our mathematical concept of space are intrinsic to physical space – in other words, the notion that the world is spatially separable – is a delusion. This notion is as inconsistent with quantum mechanics as the notion of absolute simultaneity is with special relativity. ‘Here’ and ‘there’ are not per se distinct. Reality is fundamentally nonseparable. Like the positions of things, spatial distinctions ‘dangle’ from actual events or states of affairs. Reality is not built on a space that is differentiated the way our mathematical concept of space is differentiated. A description of the world that incorporates such a space – and a fortiori every description that identifies ‘the points of space (or space-time)’ as the carriers of physical properties – is ‘overcomplete.’ Reality is built on matters of fact, and the actually existing differences between ‘here’ and ‘there’ are the differences that can be inferred from matters of fact.
In and of itself, physical space – or the reality underlying it – is undifferentiated, one.

7. Macroscopic Objects

The extrinsic nature of positions appears to involve a twofold vicious regress. To adequately deal with it, I need to talk about macroscopic objects. A macroscopic object $`M`$ is an object that satisfies the following criterion: any factually warranted inference concerning the position of $`M`$ at any time $`t`$ is predictable (with certainty) on the basis of factually warranted inferences about the positions of $`M`$ at earlier times. (A factually warranted inference is an inference that is warranted by some matter of fact.) Thus, to the extent that they can be inferred from actual events, the successive positions of a macroscopic object evolve deterministically. This makes it possible to ignore the fact that the positions of macroscopic objects, like all actually possessed positions, depend for their existence on position-indicating events. We can treat the positions of macroscopic objects as intrinsic properties and assume that they follow definite and causally determined trajectories, without ever risking contradiction by an actual event. I do not mean to say that the position of $`M`$ really is definite. Even the positions of macroscopic objects are fuzzy, albeit not manifestly so: the positional indefiniteness of $`M`$ does not evince itself through unpredictable position-indicating events. Nor do I mean to say that the positions of macroscopic objects really are intrinsic. They too ‘dangle’ from actual events. But they do so in a way that is predictable, that does not reveal any fuzziness. We may think of macroscopic objects as following definite trajectories, or we may think of them as following fuzzy trajectories. Since all matters of fact about their positions are predictable, it makes no difference: the fuzziness has no factual consequences. Classical behavior results when the factually warranted positions fuse into a not manifestly fuzzy trajectory. It has little to do with the ‘classical limit’ in which the wave packet shrinks to a continuously moving point, for the wave packet (of whatever size) is a bundle of probabilities associated with time-dependent counterfactuals, not the actual trajectory of an object.8 Good examples of how not to get from quantum to classical are the unsuccessful attempts to obtain the exponential decay law, which pertains to factually warranted inferences and is consistent with all experimental data, from the Schrödinger equation, which tells us how the probabilities associated with counterfactuals depend on time (Onley and Kumar, 1992; Singh and Whitaker, 1982). By saying that matters of fact about the positions of macroscopic objects are predictable I do not mean that the existence of such a matter of fact is predictable. Once again, a Born probability equal to 1 does not warrant the prediction that an event will happen or that a state of affairs will obtain. Only if it is taken for granted that exactly one of a range of possible events or states of affairs will happen or obtain, does a Born probability equal to 1 allow us to predict which event or state of affairs will happen or obtain.
What I mean by saying that matters of fact about the (successive) positions of a macroscopic object are predictable, is this: what an actual event or state of affairs implies regarding the position of a macroscopic object is consistent with what can be predicted with the help of some classical dynamical law on the basis of earlier position-defining events. Everything a macroscopic object does (that is, every matter of fact about its present properties) follows via the pertinent classical laws from what it did (that is, from matters of fact about its past properties).9 The above definition of ‘macroscopic’ does not stipulate that events indicating departures from the classically predicted behavior occur with zero probability. An object is entitled to the label ‘macroscopic’ if no such event actually occurs during its lifetime. What matters is not whether such an event may occur (with whatever probability) but whether it ever does occur. When I speak of the existence of a matter of fact, I mean the occurrence of an actual event or the existence of an actual state of affairs. It is worth emphasizing that this is something that cannot be undone or ‘erased’ (Englert, Scully, and Walther, 1999; Mohrhoff, 1999). According to Wheeler’s interpretation of the Copenhagen interpretation, ‘no elementary quantum phenomenon is a phenomenon until it is registered, recorded, “brought to a close” by an “irreversible act of amplification,” such as the blackening of a grain of photographic emulsion or the triggering of a counter’ (Wheeler, 1983). In point of fact, there is no such thing as an ‘irreversible act of amplification.’ As long as what is ‘amplified’ is counterfactuals, the ‘act of amplification’ is reversible. No amount of amplification succeeds in turning a counterfactual into a fact. No matter how many counterfactuals get entangled, they remain counterfactuals. On the other hand, once a matter of fact exists, it is logically impossible to erase it. For the relevant matter of fact is not that the needle deflects to the left (in which case one could ‘erase’ it by returning the needle to the neutral position). The relevant matter of fact is that at a time $`t`$ the needle deflects (or points) to the left. This is a timeless truth. If at the time $`t`$ the needle deflects to the left, then it always has been and always will be true that at the time $`t`$ the needle deflects to the left. Note that an apparatus pointer is not a macroscopic object according to the above definition. In general there is nothing that allows one to predict which way the needle will deflect (given that it will deflect). Only before and after the deflection event does the needle behave as a macroscopic object. Is not such a definition self-defeating? It would be so if it were designed to explain why the needle deflects left or right (rather than both left and right). But such an explanation is neither required nor possible. If past events allow us to infer a superposition of the form $`a|\text{left}|a+b|\text{right}|b`$, they allow us to infer the following: if there is a matter of fact about the direction in which the needle deflects, it warrants the inference ‘left’ with probability $`\left|a\right|^2`$, and it warrants the inference ‘right’ with probability $`\left|b\right|^2`$. Nothing allows us to predict the existence of such a matter of fact. 
The deflection event is a causal primary, notwithstanding that it happens with a measurable probability, and that by a suitable choice of apparatus this probability can be made reasonably large. As I have stressed elsewhere (Mohrhoff, 1999), what is true of particles in double-slit experiments is equally true of cats in double-door experiments. Were it not for the myriads of matters of fact about the door taken by the cat, ‘the door taken by the cat’ would be objectively undefined. This seems to entail a vicious regress. We infer the positions of particles from the positions of the detectors that click. But the positions of detectors are extrinsic, too. They are what they are because of the matters of fact from which one can (in principle) infer what they are. Thus there are detector detectors from which the positions of particle detectors are inferred, and then there are detectors from which the positions of detector detectors are inferred, and so on ad infinitum. However, as we regress from particle detectors to detector detectors and so on, we sooner or later (sooner rather than later) encounter a macroscopic detector whose position is not manifestly fuzzy. There the buck stops. The positions of things are defined in terms of the not manifestly fuzzy positions of macroscopic objects. It is therefore consistent to think of the deflection of the pointer needle as one of those uncaused actual events on which the (contingent) properties of things depend. Prima facie we have another vicious regress: Like all contingent properties, the initial and final positions of the needle are what they are because of what happens or is the case in the rest of the world. They thus presuppose other ‘deflection events,’ which presuppose yet other ‘deflection events,’ and so on ad infinitum. But since before and after its deflection the needle behaves as a macroscopic object, its initial and final positions are quantitatively defined independently of what happens elsewhere. They are positions of the kind that are used to define positions. Hence the deflection event – the transition from the initial to the final position – is also independent of what happens elsewhere.

8. Language and the Indefinite

My chief conclusion in this paper is that (ER1) and (ER2) are both false. The necessary and sufficient condition for the existence of an element of reality $`A=a`$ is the existence of an actual state of affairs, or the occurrence of an actual event, from which $`A=a`$ can be inferred. The contingent properties of all quantum systems – that is, the positions of all material objects and whatever other properties can be inferred from them – are extrinsic. They are defined in terms of the goings-on in the ‘rest of the world.’ The reason why this does not send us chasing the ultimate property-defining facts in never-ending circles is the existence of a special class of objects the positions of which are not manifestly indefinite. Everything a macroscopic object does (that is, every matter of fact about its present properties) follows via the pertinent classical laws from what it did (that is, from matters of fact about its past properties). This makes it possible to ignore the fact that the properties of a macroscopic object, like all contingent properties, ‘dangle’ from external events and/or states of affairs.
Instead of having to conceive of the successive states of a macroscopic object as a bundle of statistically correlated inferences warranted by a multitude of causal primaries external to the object, we are free to think of the object’s successive states as an evolving collection of intrinsic properties fastened only to each other by causal links. The familiar macroworld with its causal links and deterministic processes is something we project onto the fundamental statistical correlations and their uncaused correlata. This projection works where the correlations are not manifestly probabilistic (that is, where the statistical correlations evince no statistical variations). Diachronic correlations that are not manifestly probabilistic can be passed off as causal links. We can impose on them our agent-causality with some measure of consistency, even though this results in the application of a wrong criterion: temporal precedence takes the place of causal independence as the criterion which distinguishes a cause from its effect. Quantum mechanics presupposes the macroworld: it assigns probabilities to conditionals that refer to events or states of affairs either in the actual macroworld or in a possible macroworld. This is the reason why Bohr (1934, 1958) insisted not only on the necessity of describing quantum phenomena in terms of the experimental arrangements in which they are displayed, but also on the necessity of employing classical language in describing these experimental arrangements. Classical language is the language of causal processes, of definite states that evolve deterministically, of definite objects and of definite events – in short, the language of the macroworld. Thus in one sense the microworld is fundamental (macroscopic objects are made of particles and atoms), and in another sense the macroworld is fundamental (the contingent properties of particles and atoms are defined in terms of the goings-on in the macroworld). The mutual dependence of the quantum and classical ‘domains’ has often been remarked upon (e.g., Landau and Lifshitz, 1977), but I’m not sure it has been adequately appreciated. It seems to me that what is ultimately responsible for this mutual dependence is the conflict between a real, objective indefiniteness and the intrinsic definiteness of language. Language is inherently ‘classical.’ Discourse is of things – the discrete carriers of significance that appear as the subjects of predicative sentences. Things fall into mutually disjoint classes according to the properties they possess or lack. For any two different classes $`C_1`$ and $`C_2`$ there exists a property $`p`$ such that ‘$`x`$ has $`p`$’ is true of all members of $`C_1`$ and ‘$`x`$ lacks $`p`$’ is true of all members of $`C_2`$. This seems to warrant the following Principle of Completeness (Wolterstorff, 1980): for every thing $`x`$ and every property $`p`$, $`x`$ either has $`p`$ or lacks $`p`$. Reality, however, doesn’t play along with this linguistic requirement. Sometimes ‘$`x`$ has $`p`$’ is neither true nor false but meaningless. There are situations in which nothing in the real world corresponds to the linguistic (or conceptual) distinction between ‘$`x`$ has $`p`$’ and ‘$`x`$ lacks $`p`$’.
In such situations it is nevertheless meaningful to consider what would have happened if one had found out whether $`x`$ has $`p`$ or lacks $`p`$, and to assign objective probabilities to the alternatives ‘$`x`$ has $`p`$’ and ‘$`x`$ lacks $`p`$.’ Given the intrinsic definiteness of language, the natural way to express an objective indefiniteness is to use counterfactuals. One then has one counterfactual for each alternative (‘if $`Q`$ were measured, the result would be $`q_k`$’), and at least one of them comes with a nontrivial objective probability (that is, an objective probability other than 0 or 1). The linguistic requirement of definiteness is met by the use of counterfactuals the respective consequents of which conform to the Principle of Completeness: each consequent explicitly affirms the truth of one alternative and implicitly denies the truth of the other alternatives. The objective indefiniteness finds expression in the fact that the counterfactuals are assigned nontrivial objective probabilities rather than truth values. Objective indefiniteness thus leads to the use of counterfactuals with nontrivial objective probabilities, and nontrivial objective probabilities, as we have seen, entail that the properties affirmed by the counterfactuals’ consequents are extrinsic: they are defined in terms of the goings-on in the macroworld, notwithstanding that the objects of the macroworld are made up of – or shall we say, manifested by means of – nonmacroscopic objects. This mutual dependence of the two ‘domains’ would amount to a vicious circle if the properties of the macroworld were in their turn defined in terms of the microworld. But this is not the case. Since the contingent properties of things are defined in terms of events or states of affairs in the macroworld, quantum mechanics presupposes the macroworld. In particular, it presupposes such matters of fact as ‘the needle deflects to the left’ or ‘the needle is pointing left.’ By itself this does not guarantee that quantum mechanics is consistent with the existence of the macroworld – quantum mechanics (or the interpretation of quantum mechanics put forward in this paper) could lack self-consistency. But self-consistency only requires that the needle’s position too is fuzzy, and that it ‘dangles ontologically’ from the goings-on in the rest of the world. Quantum mechanics permits it to ‘dangle’ from them in such a way that, before and after the deflection, it is not manifestly fuzzy. If the needle’s position is not manifestly fuzzy, the needle behaves as a macroscopic object, and we can consistently conceive of its successive positions as ‘dangling causally’ from each other – except for one gap in the causal chain, the deflection event. But being a (probabilistic) transition between states embedded in the causal nexus of the macroworld, this too forms part of the macroworld. Quantum mechanics not only presupposes and admits of the existence of macroscopic objects, it also entails it. The existence of an unpredictable matter of fact about the position of $`O`$ entails the existence of detectors with ‘sharper’ positions; the existence of an unpredictable matter of fact about the position of one of those detectors entails the existence of detectors with yet ‘sharper’ positions; and so on. It stands to reason that one sooner or later runs out of detectors with ‘sharper’ positions. There are ‘ultimate’ detectors the positions of which are not manifestly fuzzy, and which therefore are macroscopic.
References

Aharonov, Y., Bergmann, P.G., and Lebowitz, J.L. (1964) ‘Time Symmetry in the Quantum Process of Measurement,’ Physical Review 134, B1410-B1416.
Aharonov, Y., and Vaidman, L. (1991) ‘Complete Description of a Quantum System at a Given Time,’ Journal of Physics A 24, 2315-2328.
Albert, D.Z. (1992) Quantum Mechanics and Experience, Cambridge, MA: Harvard University Press.
Bartley III, W.W. (1982) Quantum Theory and the Schism in Physics, Totowa, NJ: Rowman & Littlefield.
Bell, J.S. (1987) Speakable and Unspeakable in Quantum Mechanics, Cambridge: Cambridge University Press.
Bell, J.S., and Nauenberg, M. (1966) ‘The Moral Aspect of Quantum Mechanics,’ in De Shalit, Feshbach, and Van Hove (1966), pp. 279-286. Reprinted in Bell (1987), pp. 22-28.
Bohm, D. (1951) Quantum Theory, Englewood Cliffs, NJ: Prentice Hall.
Bohr, N. (1934) Atomic Theory and the Description of Nature, Cambridge: Cambridge University Press.
Bohr, N. (1958) Atomic Physics and Human Knowledge, New York: Wiley, p. 72.
Bohr, N. (1963) Essays 1958-62 on Atomic Physics and Human Knowledge, New York: Wiley, p. 3.
Cassinello, A., and Sánchez-Gómez, J.L. (1996) ‘On the Probabilistic Postulate of Quantum Mechanics,’ Foundations of Physics 26, 1357-1374.
Davies, P. (1989) The New Physics, Cambridge: Cambridge University Press.
De Shalit, A., Feshbach, H., and Van Hove, L. (1966) Preludes in Theoretical Physics, Amsterdam: North Holland.
d’Espagnat, B. (1976) Conceptual Foundations of Quantum Mechanics, 2nd edition, Reading, MA: Benjamin, p. 251.
d’Espagnat, B. (1995) Veiled Reality, Reading, MA: Addison-Wesley.
Einstein, A., Podolsky, B., and Rosen, N. (1935) ‘Can Quantum-Mechanical Description of Physical Reality be Considered Complete?,’ Physical Review 47, 777-780.
Englert, B.-G., Scully, M.O., and Walther, H. (1994) ‘The Duality in Matter and Light,’ Scientific American 271, No. 6 (December), 56-61.
Englert, B.-G., Scully, M.O., and Walther, H. (1999) ‘Quantum Erasure in Double-Slit Interferometers with Which-Way Detectors,’ American Journal of Physics 67, 325-329.
Ford, J., and Mantica, G. (1992) ‘Does Quantum Mechanics Obey the Correspondence Principle? Is it Complete?,’ American Journal of Physics 60, 1068-1097.
Gell-Mann, M., and Hartle, J.B. (1990) ‘Quantum Mechanics in the Light of Quantum Cosmology,’ in Zurek (1990), pp. 425-458.
Ghirardi, G.C., Rimini, A., and Weber, T. (1986) ‘Unified Dynamics for Microscopic and Macroscopic Systems,’ Physical Review D 34, 470-491.
Gisin, N., and Percival, I.C. (1992) ‘The Quantum-State Diffusion Model Applied to Open Systems,’ Journal of Physics A 25, 5677-5691.
Griffiths, R.B. (1984) ‘Consistent Histories and the Interpretation of Quantum Mechanics,’ Journal of Statistical Physics 36, 219-272.
Heisenberg, W. (1958) Physics and Philosophy, New York: Harper and Row, Chapter 3.
Hilgevoord, J. (1998) ‘The Uncertainty Principle for Energy and Time. II,’ American Journal of Physics 66, 396-402.
Jauch, J.M. (1968) Foundations of Quantum Mechanics, Reading, MA: Addison-Wesley.
Joos, E., and Zeh, H.D. (1985) ‘The Emergence of Classical Properties Through Interaction With the Environment,’ Zeitschrift für Physik B 59, 223-243.
Kastner, R.E. (1999) ‘Time-Symmetrized Quantum Theory, Counterfactuals, and “Advanced Action”,’ to be published in Studies in History and Philosophy of Science.
Landau, L.D., and Lifshitz, E.M. (1977) Quantum Mechanics, Oxford: Pergamon Press, pp. 2-3.
Langevin, P. (1939) Actualités scientifiques et industrielles: Exposés de physique générale, Paris: Hermann.
Lockwood, M. (1989) Mind, Brain and the Quantum: The Compound ‘I’, Oxford: Basil Blackwell.
London, F., and Bauer, E. (1939) ‘La théorie de l’observation en mécanique quantique,’ in Langevin (1939), No. 775. English translation ‘The Theory of Observation in Quantum Mechanics’ in Wheeler and Zurek (1983), pp. 217-259.
Lüders, G. (1951) ‘Über die Zustandsänderung durch den Messprozess,’ Annalen der Physik (Leipzig) 8, 322-328.
Mermin, N.D. (1998) ‘What is Quantum Mechanics Trying to Tell Us?,’ American Journal of Physics 66, 753-767.
Miller, A.I. (1990) 62 Years of Uncertainty, New York: Plenum Press.
Mohrhoff, U. (1999) ‘Objectivity, Retrocausation, and the Experiment of Englert, Scully and Walther,’ American Journal of Physics 67, 330-335.
Mohrhoff, U. (submitted) ‘What Quantum Mechanics is Trying to Tell Us.’
Omnès, R. (1992) ‘Consistent Interpretations of Quantum Mechanics,’ Reviews of Modern Physics 64, 339-382.
Onley, D., and Kumar, A. (1992) ‘Time Dependence in Quantum Mechanics – Study of a Simple Decaying System,’ American Journal of Physics 60, 432-439.
Page, D.N. (1996) ‘Sensible Quantum Mechanics: Are Probabilities Only in the Mind?,’ International Journal of Modern Physics D 5, 583-596.
Page, D.N., and Wootters, W.K. (1983) ‘Evolution Without Evolution: Dynamics Described by Stationary Observables,’ Physical Review D 27, 2885-2891.
Pais, A. (1982) ‘Subtle is the Lord…’: The Science and the Life of Albert Einstein, Oxford: Clarendon Press.
Pearle, P. (1989) ‘Combining Stochastic Dynamical State-Vector Reduction with Spontaneous Localization,’ Physical Review A 39, 2277-2289.
Percival, I.C. (1994) ‘Primary State Diffusion,’ Proceedings of the Royal Society of London A 447, 189-209.
Peres, A., and Zurek, W.H. (1982) ‘Is Quantum Theory Universally Valid?,’ American Journal of Physics 50, 807-810.
Popper, K.R. (1982) in Bartley III (1982).
Price, H. (1996) Time’s Arrow & Archimedes’ Point, New York: Oxford University Press.
Primas, H. (1990) ‘Mathematical and Philosophical Questions in the Theory of Open and Macroscopic Quantum Systems,’ in Miller (1990), pp. 233-257.
Redhead, M. (1987) Incompleteness, Nonlocality and Realism, Oxford: Clarendon, p. 72.
Shimony, A. (1978) ‘Metaphysical Problems in the Foundations of Quantum Mechanics,’ International Philosophical Quarterly 18, 3-17.
Shimony, A. (1989) ‘Conceptual Foundations of Quantum Mechanics,’ in Davies (1989), pp. 373-395.
Scully, M.O., Englert, B.-G., and Walther, H. (1991) ‘Quantum Optical Tests of Complementarity,’ Nature 351, No. 6322, 111-116.
Singh, I., and Whitaker, M.A.B. (1982) ‘Role of the Observer in Quantum Mechanics and the Zeno Paradox,’ American Journal of Physics 50, 882-887.
Stapp, H.P. (1993) Mind, Matter, and Quantum Mechanics, Berlin: Springer.
Vaidman, L. (1993) ‘Lorentz-Invariant “Elements of Reality” and the Joint Measurability of Commuting Observables,’ Physical Review Letters 70, 3369-3372.
Vaidman, L. (1996a) ‘Defending Time-Symmetrized Quantum Theory,’ e-print archive quant-ph 9609007.
Vaidman, L. (1996b) ‘Weak-Measurement Elements of Reality,’ Foundations of Physics 26, 895-906.
Vaidman, L. (1997) ‘Time-Symmetrized Counterfactuals in Quantum Theory,’ e-print archive quant-ph 9807075, Tel-Aviv University Preprint TAUP-2459-97.
Vaidman, L. (1998) ‘Time-Symmetrized Quantum Theory,’ Fortschritte der Physik 46, 729-739.
Vaidman, L. (1999) ‘Defending Time-Symmetrized Quantum Counterfactuals,’ to be published in Studies in History and Philosophy of Science.
von Neumann, J. (1955) Mathematical Foundations of Quantum Mechanics, Princeton: Princeton University Press.
Wheeler, J.A. (1983) ‘On Recognizing “Law Without Law” (Oersted Medal Response at the Joint APS-AAPT Meeting, New York, 25 January 1983),’ American Journal of Physics 51, 398-404.
Wheeler, J.A., and Zurek, W.H. (1983) Quantum Theory and Measurement, Princeton, NJ: Princeton University Press.
Wolterstorff, N. (1980) Works and Worlds of Art, Oxford: Clarendon Press.
Zurek, W.H. (1981) ‘Pointer Basis of Quantum Apparatus: Into What Mixture Does the Wave Packet Collapse?,’ Physical Review D 24, 1516-1525.
Zurek, W.H. (1982) ‘Environment-Induced Superselection Rules,’ Physical Review D 26, 1862-1880.
Zurek, W.H. (1990) Complexity, Entropy, and the Physics of Information, Reading, MA: Addison-Wesley.
# Discovering Supersymmetry at the Tevatron in Wino LSP Scenarios

## Abstract

In supersymmetric models, Winos, partners of the SU(2) gauge bosons, may be the lightest supersymmetric particles (LSPs). For generic parameters, charged and neutral Winos are highly degenerate. Charged Winos travel macroscopic distances, but can decay to neutral Winos and extremely soft leptons or pions before reaching the muon chambers, thereby circumventing conventional trigger requirements based on energetic decay products or muon chamber hits. However, these charginos are detectable, and can be triggered on when produced in association with jets. In addition, we propose a new trigger for events with a high $`p_T`$ track and low hadronic activity. For Tevatron Run II with luminosity 2 fb<sup>-1</sup>, the proposed searches can discover Winos with masses up to 300 GeV and explore a substantial portion of the parameter space in sequestered sector models.

preprint: April 1999 IASSNS–HEP–99–19 MIT–CTP–2830 PUPT–1857 hep-ph/9904250

The discovery of supersymmetry (SUSY) is much anticipated at high energy colliders. If SUSY is to retain its motivation of stabilizing the electroweak scale against large radiative corrections, at least some supersymmetric particles must have masses of order the electroweak scale. In the most widely studied models, the lightest supersymmetric particle (LSP) is assumed to be stable and the partner of the U(1)<sub>Y</sub> gauge boson. SUSY signals are then characterized by missing transverse energy ($`\overline{)E_T}`$) and are unlikely to escape detection when the Large Hadron Collider (LHC) at CERN begins operation around 2005 with center of mass energy $`\sqrt{s}=14\text{ TeV}`$. Recently, however, it has been realized that many other SUSY signatures are possible. While these signatures vary widely, a number of them are in fact even more striking than the classic $`\overline{)E_T}`$ signature and give new life to the hope that the discovery of SUSY need not wait for the LHC. In this letter, we study scenarios in which the LSP, while still the lightest neutralino, is not the U(1)<sub>Y</sub> gaugino, but the neutral SU(2) gaugino, the Wino $`\stackrel{~}{W}^0`$. We will see that this simple modification leads to drastic differences in phenomenology. In earlier studies, these differences were argued to make detection with conventional triggers difficult, but also to provide a novel identifiable signal. In this paper, we elaborate on this observation. As in more conventional scenarios, the neutral LSP interacts very weakly and escapes detection. The new element is that the next-to-lightest superpartner, the charged Wino $`\stackrel{~}{W}^\pm `$, is generically extremely degenerate with the LSP and decays after centimeters or meters to an LSP and an extremely soft lepton or pion. Such charged Winos are therefore missed by conventional triggers and avoid detection in traditional searches. However, if care is taken to preserve such events at the trigger level, we will see that large and spectacular signals may appear at the upcoming run of the Fermilab Tevatron with $`\sqrt{s}=2\text{ TeV}`$. At tree-level, the masses of the charginos and neutralinos depend on the U(1)<sub>Y</sub> gaugino mass $`M_1`$, the SU(2) gaugino mass $`M_2`$, the Higgsino mass $`\mu `$, and $`\mathrm{tan}\beta `$, the ratio of Higgs vacuum expectation values. Without loss of generality, we choose $`M_2`$ real and positive. Phases in the parameters $`M_1`$ and $`\mu `$ are then physical.
We will consider the case $`M_2<|M_1|,|\mu |`$, so that the lightest charginos and neutralinos, $`\stackrel{~}{\chi }_1^\pm `$ and $`\stackrel{~}{\chi }_1^0`$, are Wino-like with masses $`M_2`$. We assume that all other superparticles are (much) heavier than the Winos. With this assumption we may neglect corrections to charged Wino decay from virtual supersymmetric particles. We will consider two Wino LSP scenarios. In the first, we consider the well-motivated sequestered sector models, in which there is an anomaly-mediated spectrum of gauginos and a consistent scenario involving light scalars. In these models, the gaugino mass parameters are given by $$M_i=b_ig_i^2M_{\text{SUSY}},$$ (1) where $`M_{\text{SUSY}}`$ determines the overall SUSY-breaking scale, $`i=1,2,3`$ identifies the gauge group, $`g_i`$ are gauge coupling constants, and $`b_i`$ are the 1-loop $`\beta `$-function coefficients of the (full supersymmetric) theory. Substituting the weak scale values of $`g_i`$, we find $`M_1:M_2:M_3=3.3:1:10`$. It should be borne in mind that sequestered sector models predict a large hierarchy between Wino and squark masses. Naturalness bounds therefore suggest $`M_2\lesssim 200`$–300 GeV, and we will see that a large portion of the parameter space in these scenarios may be explored at the Tevatron. More generally, the Wino LSP scenario may be realized for a large region of SUSY parameter space if the assumption of gaugino mass unification is relaxed. We will therefore also consider an alternative set of parameters with $`M_1=1.5M_2`$. As will be seen, this choice leads to significant differences from the anomaly-mediated case, and so serves as an illustrative alternative. Since these parameters are not motivated by any model, the Wino mass $`M_2`$ is less constrained in this case. The SUSY signal depends strongly on $`\mathrm{\Delta }M\equiv m_{\stackrel{~}{\chi }_1^\pm }-m_{\stackrel{~}{\chi }_1^0}`$. At tree level, the chargino mass matrix is $$\left(\begin{array}{cc}M_2& \sqrt{2}m_Ws_\beta \\ \sqrt{2}m_Wc_\beta & \mu \end{array}\right)$$ (2) in the basis $`(-i\stackrel{~}{W}^\pm ,\stackrel{~}{H}^\pm )`$, and the neutralino mass matrix is $$(\begin{array}{cccc}M_1& 0& -m_Zc_\beta s_W& m_Zs_\beta s_W\\ 0& M_2& m_Zc_\beta c_W& -m_Zs_\beta c_W\\ -m_Zc_\beta s_W& m_Zc_\beta c_W& 0& -\mu \\ m_Zs_\beta s_W& -m_Zs_\beta c_W& -\mu & 0\end{array})$$ (3) in the basis $`(-i\stackrel{~}{B},-i\stackrel{~}{W}^3,\stackrel{~}{H}_1^0,\stackrel{~}{H}_2^0)`$. Here $`s_W\equiv \mathrm{sin}\theta _W`$, $`c_W\equiv \mathrm{cos}\theta _W`$, $`s_\beta =\mathrm{sin}\beta `$, and $`c_\beta =\mathrm{cos}\beta `$. The mass matrices may be diagonalized exactly, but it is enlightening to consider a perturbation series in $`1/\mu `$ for large $`|\mu |`$. The lightest chargino and neutralino are degenerate at zeroth order with $`m_{\stackrel{~}{\chi }_1^\pm }^{(0)}=m_{\stackrel{~}{\chi }_1^0}^{(0)}=M_2`$. At the next order in $`1/\mu `$, they receive corrections from mixing with the Higgsinos. However, both masses are corrected by $`m_{\stackrel{~}{\chi }_1^\pm }^{(1)}=m_{\stackrel{~}{\chi }_1^0}^{(1)}=-m_W^2\mathrm{sin}2\beta /\mu `$, so the degeneracy remains.
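Before turning to the order at which the degeneracy is lifted, two of the quantitative statements above can be checked numerically: the anomaly-mediated ratio following from Eq. (1), and the near-degeneracy obtained from Eqs. (2) and (3). The Python sketch below is a minimal illustration; the couplings and the parameter point are invented sample values (not the analysis of this paper), and the small residual splitting it exposes is precisely the effect derived next.

```python
import numpy as np

# Anomaly-mediated ratios from Eq. (1): |M_i| ~ |b_i| g_i^2, using MSSM beta
# coefficients b = (33/5, 1, -3) and sample weak-scale couplings alpha_i.
b = np.array([33/5, 1.0, -3.0])
alpha = np.array([0.0169, 0.0338, 0.118])   # alpha_1 (GUT norm.), alpha_2, alpha_3
M = np.abs(b) * 4 * np.pi * alpha           # g_i^2 = 4 pi alpha_i
print("M1:M2:M3 =", np.round(M / M[1], 1))  # -> roughly [ 3.3  1.  10.5]

# Eqs. (2)-(3) at an illustrative point (GeV) with M2 < |M1|, |mu|, tan(beta) = 3:
M1, M2, mu, tb = 330.0, 100.0, 500.0, 3.0
mW, mZ, sW2 = 80.4, 91.2, 0.231
sW, cW = np.sqrt(sW2), np.sqrt(1 - sW2)
sb, cb = tb / np.sqrt(1 + tb**2), 1 / np.sqrt(1 + tb**2)

X = np.array([[M2, np.sqrt(2)*mW*sb],
              [np.sqrt(2)*mW*cb, mu]])                 # chargino matrix, Eq. (2)
Y = np.array([[M1, 0, -mZ*cb*sW, mZ*sb*sW],            # neutralino matrix, Eq. (3)
              [0, M2, mZ*cb*cW, -mZ*sb*cW],
              [-mZ*cb*sW, mZ*cb*cW, 0, -mu],
              [mZ*sb*sW, -mZ*sb*cW, -mu, 0]])

m_ch = np.linalg.svd(X, compute_uv=False).min()   # chargino masses = singular values
m_ne = np.abs(np.linalg.eigvalsh(Y)).min()        # neutralino masses = |eigenvalues|
print("Delta M = %.0f MeV" % ((m_ch - m_ne) * 1e3))   # well below 1 GeV
```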
It is only at the next order, where the neutralino mass receives contributions from U(1)<sub>Y</sub> gaugino mixing which have no counterpart in the chargino sector, that the degeneracy is broken: $$\mathrm{\Delta }M_{\mathrm{tree}}\equiv m_{\stackrel{~}{\chi }_1^\pm }^{(2)}-m_{\stackrel{~}{\chi }_1^0}^{(2)}=\frac{m_W^4\mathrm{tan}^2\theta _W}{(M_1-M_2)\mu ^2}\mathrm{sin}^22\beta .$$ (4) Note that for large $`\mathrm{tan}\beta `$, even this contribution is suppressed. In fact, for $`\mathrm{tan}\beta \to \mathrm{\infty }`$, $`\mathrm{\Delta }M_{\mathrm{tree}}\propto 1/\mu ^4`$. (A $`1/\mu ^3`$ contribution vanishes because, in this limit, the bilinear Higgs scalar coupling $`B`$ vanishes, and so an exact Peccei-Quinn symmetry relates $`\mu \to -\mu `$.) For all of these reasons, the mass splitting is highly suppressed, even for moderate values of $`|\mu |`$. Given the large suppression of $`\mathrm{\Delta }M_{\mathrm{tree}}`$, 1-loop contributions may be important. The leading contribution to the mass splitting from loop effects is from custodial SU(2)-breaking in the gauge boson sector. (Loop contributions from sleptons and squarks are insignificant for heavy top and bottom squarks.) The loop contribution is positive, and, in the pure Wino limit, it has the simple form, letting $`r_i=m_i/M_2`$, $$\mathrm{\Delta }M_{1\mathrm{loop}}=\frac{\alpha _2M_2}{4\pi }\left[f(r_W)-c_W^2f(r_Z)-s_W^2f(r_\gamma )\right],$$ (5) where $`f(a)=\int _0^1𝑑x(2+2x)\mathrm{log}[x^2+(1-x)a^2]`$. In Fig. 1a, we plot the total mass splitting $`\mathrm{\Delta }M`$ for the anomaly-mediated value of $`M_1/M_2`$ and a moderate value of $`\mathrm{tan}\beta `$, where the tree-level mass matrices have been corrected by 1-loop gauge boson contributions including chargino and neutralino mixing and have been diagonalized numerically. We show the region (for $`\mu <0`$) of parameter space which is consistent with naturalness constraints. Typical mass splittings are of order 150 MeV to 1 GeV. In Fig. 1b we do the same for a model with $`M_1=1.5M_2`$, in which $`\mathrm{\Delta }M`$ may be even smaller. Note that the near-degeneracy of the Wino-like chargino and neutralino is generic. Generally, this degeneracy is not of great phenomenological importance, as the Wino-like chargino and neutralino both decay quickly to other particles. However, when one of them is the LSP, the other must decay into it, and the near-degeneracy results in macroscopic decay lengths with important implications. For mass splittings in the range of a few hundred MeV, the dominant chargino decays are the three-body decays $`\stackrel{~}{\chi }_1^+\to \stackrel{~}{\chi }_1^0(e^+\nu _e,\mu ^+\nu _\mu )`$, and the two-body decay $`\stackrel{~}{\chi }_1^+\to \stackrel{~}{\chi }_1^0\pi ^+`$. For $`\mathrm{\Delta }M<m_{\pi ^\pm }\simeq 140\text{ MeV}`$, the decay rate is dominated by the electron mode, with $`\mathrm{\Gamma }(\stackrel{~}{\chi }_1^+\to \stackrel{~}{\chi }_1^0e^+\nu )\simeq \frac{G_F^2}{(2\pi )^3}\frac{16}{15}(\mathrm{\Delta }M)^5`$, corresponding to a decay length of $`c\tau |_{e\text{ mode}}=34\text{ meters}\times \left(100\text{ MeV}/\mathrm{\Delta }M\right)^5`$. However, once the pion mode becomes available, it quickly dominates, and $`c\tau `$ becomes of order 10 cm or less. In Fig. 1, the contours are labeled also with decay lengths $`c\tau `$, where all final states are included. We find macroscopic decay lengths on the order of centimeters to meters in much of the parameter space.
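To put numbers to these formulas, the one-loop splitting (5) and the electron-mode decay length can be evaluated directly. The sketch below is again illustrative (the couplings and the sample Wino mass are assumed values, not taken from the paper); it reproduces the 34 meter figure quoted above for $`\mathrm{\Delta }M=100`$ MeV.

```python
import numpy as np
from scipy.integrate import quad

def f(a):
    # f(a) = int_0^1 dx (2 + 2x) log[x^2 + (1 - x) a^2]; the endpoint
    # singularity at x = 0 for a = 0 is integrable and handled by quad.
    return quad(lambda x: (2 + 2*x) * np.log(x**2 + (1 - x)*a**2), 0, 1)[0]

alpha2, mW, mZ, sW2 = 0.0338, 80.4, 91.2, 0.231   # sample weak-scale inputs
M2 = 200.0                                        # sample Wino mass (GeV)
dM = alpha2 * M2 / (4*np.pi) * (f(mW/M2) - (1 - sW2)*f(mZ/M2) - sW2*f(0.0))
print("1-loop Delta M ~ %.0f MeV" % (dM * 1e3))   # sub-GeV, within the quoted range

# Electron-mode width Gamma = (G_F^2/(2 pi)^3)(16/15)(Delta M)^5, and c*tau:
GF, hbar_c = 1.166e-5, 1.973e-16                  # G_F in GeV^-2, hbar*c in m GeV
dM_e = 0.100                                      # GeV
Gamma = GF**2 / (2*np.pi)**3 * (16/15) * dM_e**5
print("c*tau (e mode) ~ %.0f m" % (hbar_c / Gamma))   # -> ~34 m, as quoted
```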
Amazingly, the Wino LSP scenario automatically guarantees a mass splitting such that the chargino could decay in any of the detector components. With conventional triggering, such Winos generally evade detection. For some range of parameters, the splitting is such that Winos decay before reaching the muon chambers (although for long lifetimes those that do reach the muon chamber will be important). Furthermore, the decay products are soft, and will generally neither meet the calorimeter trigger threshold nor provide an observable kink. For short-lived tracks with sufficiently hard decay products and for long-lived tracks, the current bound from LEP II is about 90 GeV; otherwise it is 45–63 GeV. Of course, if chargino events are accepted, the signal of a high $`p_T`$ track that disappears, leaving only a low momentum charged lepton or pion, is spectacular, and could hardly escape off-line analysis. The essential difficulty then is the acceptance of chargino events into the data sample. In the following, we propose a number of solutions to this difficulty and consider the prospects for probing the Wino LSP scenario at the Fermilab detectors CDF II (Collider Detector at Fermilab) and DØ (DZero) in the next Tevatron run. We discuss several possible triggers. (I) For sufficiently long-lived Winos, one can apply the usual search for heavy particles that trigger in the muon chambers. (II) For shorter-lived charginos which do not reach the muon chamber, events in which a high $`p_T`$ jet accompanies the Winos can be used by triggering on the jet and the associated missing $`E_T`$. Distinguishing these events from background in the off-line analysis will require identifying the Wino track itself. Finally, as a supplement to these two triggers, we propose to search for Winos too short-lived to reach the muon chamber by using the fact that they leave stiff tracks in the tracking chamber in events that are hadronically quiet. This can be done by (III) triggering on events with a single stiff track and no localized energy (in the form of jets) in the calorimeter. The addition of this trigger will extend the Tevatron reach for the light Wino search and furthermore should considerably enhance statistics. A more conservative but less powerful approach (III’) would be to trigger instead on events containing two stiff tracks with balancing $`p_T`$. If $`\mathrm{\Delta }M`$ is significantly above $`m_\pi `$, as for sequestered sector models, only trigger II is useful, but in more general Wino LSP models all three triggers can be important. Trigger I is useful for detecting the processes $$q\overline{q}\to \stackrel{~}{\chi }_1^\pm \stackrel{~}{\chi }_1^0,\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{}$$ (6) when the Wino tracks have lengths of order meters or more. Of course, for the muon chamber trigger to be useful we must distinguish Wino tracks from those produced by muons. Fortunately, Winos tend to have low velocities and associated high ionization energy loss rates $`dE/dx`$ in the vertex detector and tracking chambers. We will require the Wino tracks to have $`\beta \gamma <0.85`$, which corresponds to a $`dE/dx`$ approximately twice that of a minimally-ionizing particle. In Fig. 2, we present the combined cross section for processes (6), using the following technique. Let $`L`$ be the minimum radial distance a charged track must travel in order to be detected by a given trigger (here, the distance to the muon chambers).
We require that each event have a charged track of length $`L`$ or greater, with pseudorapidity $`|\eta |<1.2`$. The cross section for such events depends on $`\mathrm{\Delta }M`$ through the combination $`c\tau /L`$. We present curves for several values of $`c\tau /L`$, with and without the cut on $`\beta \gamma `$. The figure shows that a cut on $`\beta \gamma `$ retains a large signal, allowing Winos to be discovered in searches for massive long-lived charged particles. The relative sensitivity of this search depends, of course, on the chargino decay length $`c\tau `$. For example, from Fig. 2, we find that for muon chambers with $`L\approx 4.5`$ m, assuming $`2\text{ fb}^{-1}`$ integrated luminosity and demanding 5 events for discovery, the mass reach for Winos with $`c\tau \gtrsim 6`$ m is at least 260 GeV. Additional information from time-of-flight may also be useful for distinguishing Winos from muons. Next we consider trigger II, sensitive to the production of Winos plus a jet. Such topologies may be produced through the parton level processes $$q\overline{q}\to \stackrel{~}{\chi }_1^\pm \stackrel{~}{\chi }_1^0g,\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{}g\text{ and }qg\to \stackrel{~}{\chi }_1^\pm \stackrel{~}{\chi }_1^0q,\stackrel{~}{\chi }_1^+\stackrel{~}{\chi }_1^{}q.$$ (7) When the jet is hard, these events are characterized by large $`\overline{)E_T}`$ resulting from a single high $`p_T`$ jet, and one or two charginos that decay in the detector. In our analysis, we require an event with $`\overline{)E_T}>30`$ GeV, and a jet with $`p_T>30`$ GeV and $`|\eta |<1.2`$. For the signal to be distinguishable in the off-line analysis from backgrounds, such as monojets resulting from $`q\overline{q}\to gZ\to g\nu \overline{\nu }`$, the charginos, or their decay products, must be visible. The most obvious possibility is that the charginos leave tracks in detector components before decaying. We assume the off-line analysis will require at least one isolated high $`p_T`$ track, with $`|\eta |<2`$, that travels a radial distance greater than some minimum detection length $`L`$. These tracks will not deposit much energy in the calorimeters or (if short) hit the muon chambers, and should therefore leave a spectacular, background-free signal. (Note that in events with long tracks that also hit the muon chambers, a cut on $`\beta \gamma `$ will distinguish charginos from muons, as discussed below.) In Fig. 3, we plot cross sections, combining the four relevant processes of (7) for various values of $`c\tau /L`$. The cross sections are clearly strongly dependent on the length $`L`$. For both CDF II and DØ, a chargino traveling a radial length $`L=10`$ cm or greater should be easily identified, as such charginos will travel through essentially all layers of the silicon vertex detector. With the same discovery criterion as above, we find a discovery reach of $`M_2\approx 140`$, $`210`$, and $`240\text{ GeV}`$ for decay lengths $`c\tau =3,10,\text{ and }30`$ cm, respectively. Winos with $`c\tau <10`$ cm decay predominantly through the pion mode. If these pions can be identified, they could conceivably extend the reach of this search for $`\mathrm{\Delta }M>m_\pi `$. However, this requires careful study outside the scope of this paper. If the chargino track lengths are $`𝒪(10\text{ cm})`$ or longer, trigger III could be applied to processes (6). The rate for chargino events accepted by such a trigger may be determined from the solid curves in Fig. 2 for various $`c\tau `$.
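The strong dependence of these rates on $`c\tau /L`$ can be understood from the exponential survival probability of a track: a chargino with boost $`\beta \gamma `$ traverses a radial distance $`L`$ before decaying with probability $`\mathrm{exp}[-L/(\beta \gamma c\tau )]`$. A minimal sketch (our illustration, with an assumed typical boost):

```python
import numpy as np

def track_survival(L_m, ctau_m, beta_gamma):
    """Probability that a chargino with proper decay length ctau and boost
    beta*gamma travels at least a radial distance L before decaying."""
    return np.exp(-L_m / (beta_gamma * ctau_m))

# Heavy Winos are produced slowly; beta*gamma ~ 0.5 is an assumed typical value.
for ctau in (0.03, 0.10, 0.30, 6.0):                 # decay lengths [m]
    p = track_survival(0.10, ctau, 0.5)              # L = 10 cm (silicon layers)
    print(f"ctau = {ctau:5.2f} m -> P(track > 10 cm) = {p:.3f}")
```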
As in the previous case, the cross sections depend strongly on the required $`L`$. For the CDF II (DØ) detector, tracking information is available at the trigger level if $`L\gtrsim 1`$ m (50 cm). Once such events are accepted, the lack of calorimeter activity makes them striking; physics backgrounds are negligible, and the leading backgrounds are expected to be instrumental. (Long tracks hitting the muon chamber will be discussed below.) With the same discovery criterion as above and $`c\tau =6`$ m, both detectors have a mass reach of roughly $`320\text{ GeV}`$. Furthermore, as can be seen from Fig. 2, for $`c\tau \approx 6`$ m the signal passing trigger III ($`c\tau /L\approx 10`$) is several times larger than that passing trigger I ($`c\tau /L\approx 1`$). Trigger III’ accepts only the second process in (6), and requires that both chargino tracks travel through a substantial portion of the tracking chamber. Though fewer signal events pass this trigger, the ratio of signal to trigger background may be better than for trigger III. The utility of trigger III’ is less than that of trigger III, but is comparable to that of trigger I for $`\mathrm{\Delta }M<m_\pi `$. If, contrary to our assumptions, the sleptons are not much heavier than $`M_2`$, then the chargino lifetime would be smaller, and the power of trigger I reduced, making trigger III’ potentially more important. In our discussion of the discovery region for $`\mathrm{\Delta }M<m_\pi `$, we have neglected the fact that some fraction of the events passing triggers II, III, and III’ will contain Wino tracks that also pass trigger I. As before, these charginos must be distinguished from muons using a $`\beta \gamma `$ cut. However, most charginos are produced slowly, so the impact of the $`\beta \gamma `$ cut is small, reducing the discovery reach by at most 5–10 GeV. In order to summarize the discovery reach, we show in Fig. 1 the 5 event discovery contours for triggers I, II and III with $`L`$ = 4.5 m, 10 cm and 50 cm, respectively. In (a) we have taken $`M_1/M_2`$ as suggested by anomaly-mediated supersymmetry breaking. Since $`\mathrm{\Delta }M>m_\pi `$, only trigger II plays a role, but fortunately it can cover a large fraction of the parameter space of the sequestered sector models. In (b) we consider a more general Wino LSP model in which the discovery reach is markedly enhanced using triggers I and III. In particular, triggers I and III, which require small $`\mathrm{\Delta }M`$ so that chargino tracks are sufficiently long, are useful at large Wino masses where Wino production is too rare for trigger II to find a signal. Note that the discovery reaches depend significantly on $`M_1/M_2`$, $`\mathrm{tan}\beta `$ and $`\mathrm{sign}(\mu )`$; these particular cases are for illustration only. If candidate events are discovered, a number of important checks can be made on the Wino LSP interpretation. These include comparing the number of events with one and two charged tracks, and determining the fraction of events with anomalously large $`dE/dx`$ as mentioned above. In addition, in order to distinguish this scenario from gauge-mediated scenarios with long-lived sleptons, where macroscopic decay lengths result not from degeneracy, but from highly suppressed couplings, correlations between particle masses and cross sections may be used. Finally, as the signals discussed above are essentially background-free, the discovery potential is highly sensitive to integrated luminosity.
For example, if the total luminosity is increased to $`30\text{ fb}^{-1}`$, the various Wino mass discovery reaches estimated above increase by up to 100 GeV. It is exciting that Wino LSP searches will explore a large fraction of the parameter space of the sequestered sector scenario even before the LHC, giving the Tevatron the possibility of finding the first evidence for extra space-time dimensions. Acknowledgments — It is a great pleasure to thank John Conway and Darien Wood for many enlightening discussions. SS thanks the Institute for Advanced Study for hospitality. This work was supported in part by the Department of Energy under contracts DE–FG02–90ER40542, DE–FG–02–91ER40671 and cooperative agreement DF–FC02–94ER40818, the National Science Foundation under contract NSF PHY–9513835, a Frank and Peggy Taplin Membership (JLF), a Marvin L. Goldberger Membership (TM), and the W. M. Keck Foundation (MS).
# HI and OH absorption at 𝑧=0.89 ## 1 Introduction Neutral gas at high redshifts is most easily observed through the Lyman-$`\alpha `$ transition of the hydrogen atom, which with current technology can be detected in absorption against the UV continuum of QSOs even at column densities as low as $`10^{13}`$ atoms cm<sup>-2</sup>. The bulk (by mass) of the neutral gas however is found in the few very high column density systems (Rao and Briggs, 1993), where one could in principle expect a non trivial molecular fraction. However, quantitative predictions of the molecular fraction are difficult to make since the conversion of gas from atomic to molecular form depends on a variety of environmental factors like the UV background, the metallicity and the dust content, all of which are poorly constrained at high redshift. On the observational front, despite searches of a large sample, mm molecular lines have been detected in absorption at high redshifts from only four sources (of which two are gravitational lenses and two appear to arise from gas associated with the AGN itself) (Wiklind & Combes 1998). Here we discuss the case of PKS 1830-21, which is the brightest known radio lens. PKS 1830-21 was identified as a candidate gravitational lens on the basis of its peculiar radio spectrum and morphology (Rao & Subrahmanyan 1988, Subrahmanyan et al. 1990, Jauncey et al. 1991). The radio structure (see Figure 3) consists of two compact flat spectrum components separated by $`1^{^{\prime \prime }}`$ (henceforth called the northeast (NE) and southwest (SW) components respectively), joined by a steep spectrum ring. At a frequency of 1.7 GHz roughly one third of the observed flux comes from the ring and each of the two compact components. At the redshifted frequencies of HI (753 MHz) and OH (884 MHz) the ring is expected to be even more dominant. The lack of simultaneous multi-frequency flux density measurements of sufficient angular resolution (in view of the strong variability of 1830-21) makes a more accurate assessment of the ring flux and the relative component fluxes at the low frequencies impossible at present. For a long time no optical counterpart was found for 1830-21 (Djorgovski et al. 1992), largely because of confusion arising from its low galactic latitude, although there is now some evidence for one (Courbin et al. 1998). Two independent gravitational lensing models have been proposed for PKS 1830-21 (Nair et al. 1993, Kochanek & Narayan 1992). At the time that these models were made no redshift was available either for the source or the lens. The redshift of the lens is now known to be $`0.89`$ from molecular line observations (Wiklind & Combes 1996). The absorption spectra against the NE and the SW image are very different (Frye et al. 1996, Wiklind & Combes 1998), ruling out the possibility that the molecules at $`0.89`$ are associated with the background quasar itself. The bulk of the molecular absorption occurs against the SW component, although much weaker absorption is also seen in some molecules against the NE component. The velocity separation between the absorption seen against the NE image and the SW image is 147 km/s. In addition to the molecules seen at $`z=0.88582`$, HI absorption has also been seen towards PKS 1830-21, but at a lower redshift of $`0.19`$ (Lovell et al. 1996). The velocity width of this HI line is $`30`$ km/s and it has been interpreted as arising from absorption in a dense spiral arm of a low redshift spiral galaxy.
No molecular absorption has been detected from this lower redshift system (Wiklind & Combes 1997). In what follows we report on WSRT observations of the HI and OH absorption arising from the system at $`z=0.89`$. At mm wavelengths only the extremely compact, flat spectrum components of the background source have sizeable flux. Consequently the spectra sample a region of order only a few tens of parsecs across. At the HI and OH frequencies, however, the background source is considerably more extended. These lines are thus better suited to probe the large scale kinematics of the absorbing system as well as to determine the averaged physical properties on a kpc scale. ## 2 Observations and data reduction The observations were done with the broad band UHF receivers installed at the WSRT as part of the ongoing WSRT upgrade. The HI observations are summarized in Table 1 and the OH observations in Table 2. The OH observations were made using the standard interferometric mode and the data were reduced using NEWSTAR, the WSRT data reduction package. 1830-21 is spatially unresolved at the WSRT baselines. The data from the two observing runs were added together (after applying the appropriate heliocentric Doppler correction) and are shown in Figure 2. In addition, a lower resolution but larger total bandwidth spectrum was also obtained. This spectrum (which is not included here) is substantially the same as that shown in Figure 2. No broader absorption features were detected. The high resolution HI spectrum, Figure 1c, was obtained using the WSRT as a compound interferometer (CI), where the telescope was divided into two phased arrays and the output of these phased arrays was fed into the correlator. This mode achieves high spectral resolution at the expense of losing spatial information. However, since PKS 1830-21 is not resolved at the WSRT, there is no loss of spatial information in the CI mode. The CI data were reduced using the WASP package (Chengalur 1996). The spectrum agrees well with that of Carilli et al. (1997), apart from the region near $`v\sim 0`$, where their spectrum is badly affected by interference. The line is fully resolved and reaches a peak optical depth of 5.5%. The lower resolution HI spectra, Figure 1a&b, were obtained in the standard interferometric mode and reduced using NEWSTAR. The observation on 15/Nov/96 used a much larger bandwidth; however, again no new broad absorption feature was detected. As in the case of OH (but this time with better sensitivity and a longer time baseline), there is no measurable difference between the spectra obtained over a period of $`2`$ months. The flux densities were calibrated via reference to 3C48, for which we adopt a flux of 25.5 Jy at 753 MHz and 22.7 Jy at 884 MHz, based on the Baars et al. (1977) scale. ## 3 Discussion With peak optical depths of only 0.007 and 0.005 in the two OH absorption lines, the profile shape is not as well defined as that of the HI line. However, it is clear that the OH spectrum and the HI spectrum have similar overall velocity widths. Since the separation of the two OH lines is 350 km/s we conclude that they do not overlap, consistent with the height of the continuum in between the two absorption features. The 1667 MHz transition has an integrated optical depth that is larger than that of the 1665 MHz transition. Within the measurement errors the ratio of the optical depths is consistent with the 9:5 ratio expected in thermal equilibrium.
There is evidence that at zero velocity the 1665 MHz line is deeper than the 1667 MHz line, suggesting variations in the opacity of the 1665/1667 transitions. This could be related to the much larger molecular line optical depth at zero velocity than at −147 km/s. We hope to address this issue with future, more sensitive observations of the OH lines. The optical depth ($`\sim 10^{-2}`$), the velocity width, the overall optical depth ratio, and the ratio of the OH column density to the excitation temperature ($`N_{OH}/T_{ex}\sim 4\times 10^{14}`$) are all within the range of OH absorption seen towards the centers of low redshift galaxies (Schmeltz et al. 1986). Under the assumption that the absorption arises from a rotating galaxy disk, the large OH velocity width implies that the covering factor of the OH absorbing gas is $`\sim 1`$. In the Nair et al. model, the distances of the SW and NE images from the lens center are 1.8 kpc and 3.8 kpc respectively (for $`H_0=75`$ km/s/Mpc and $`q_0=0.5`$), and hence OH would have to be widespread in the central 5–6 kpc of the galaxy. The HI spectrum is highly asymmetric, with a peak at $`-148`$ km/s, the same velocity where weak molecular absorption is seen against the NE core. The HI peak is then presumably gas seen in absorption against the very compact NE component. If one assumes that this component has $`1/3`$ of the total flux, then for the gas lying in front of the NE component $`N_H/T_{sys}\sim 10^{19}`$, compatible with galactic numbers of $`N_H\sim 10^{21}`$ cm<sup>-2</sup> and $`T_{sys}\sim 100`$ K. The red wing of the HI absorption profile shows a weak but resolved feature at zero velocity, corresponding to the deep molecular absorption. The contrast in the ratio of HI optical depth at the two velocities, compared to that of the OH molecules, is striking. One possibility is that the gas in front of the SW component is primarily molecular (i.e. similar to what is seen in many early type spirals). Because the size of the radio source at 753 MHz is estimated to be at least 200 milliarcseconds (cf. Patnaik and Porcas, 1995), corresponding to about 1.5 kpc, several orders of magnitude larger than at mm wavelengths, this lack of HI must indicate a genuine lack of HI in a substantial part of the inner galaxy. This, in conjunction with the OH spectrum, then suggests that the $`z=0.89`$ system is an early type spiral with a large central molecular disk, at least 5–6 kpc in size. The broad component in the HI spectrum is presumably the result of HI seen in absorption primarily against the steep spectrum ring. Since at low frequencies the ring has no gaps and the center of the lensing galaxy must lie inside the ring, it follows, without recourse to any specific lensing model, that the ring must cut across both the receding and approaching sides of the major axis (Figure 3). For reasonable rotation curves the ring will cut across the major axis well beyond the rising part of the rotation curve. In principle then, the velocity width of the HI spectrum ($`\sim 260`$ km/s) corresponds to twice the rotation velocity of the galaxy (apart from an inclination correction). In practice, however, since the emission from the ring is weakest at the points where it cuts across the major axis, the rotation velocity could be somewhat underestimated. From the model in Nair et al. (1993) it is straightforward to compute the velocity that one should see in absorption against the SW and the NE cores. The observed velocities are indeed obtained provided one changes the position angle slightly.
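For reference, the column-density-to-temperature ratios quoted above follow from the standard optically thin relations for the 21 cm and OH 1667 MHz lines. The sketch below uses the usual textbook coefficients; the integrated optical depths are illustrative assumptions of the right order, not the measured spectra:

```python
def N_HI_over_Tspin(int_tau_dv):
    """N(HI)/T [cm^-2 K^-1] from the integrated 21 cm optical depth [km/s],
    using the standard coefficient 1.823e18."""
    return 1.823e18 * int_tau_dv

def N_OH_over_Tex(int_tau1667_dv):
    """N(OH)/T_ex [cm^-2 K^-1] from the integrated 1667 MHz optical depth
    [km/s], using the commonly adopted coefficient 2.24e14."""
    return 2.24e14 * int_tau1667_dv

# NE component: a 5.5% peak depth against ~1/3 of the flux gives tau ~ 0.17;
# an assumed ~30 km/s wide feature then yields N_H/T ~ 10^19.
print(N_HI_over_Tspin(0.17 * 30))
# OH: tau ~ 0.007 spread over ~250 km/s gives N_OH/T_ex ~ 4e14.
print(N_OH_over_Tex(0.007 * 250))
```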
The inclination angle in the Nair et al. model is $`40\mathrm{°}`$; however, this gives the mass inside the central 4 kpc as $`4\times 10^{10}M_{\odot }`$, which is somewhat low for a source redshift of 1.5–2. As suggested by Wiklind & Combes (1998), the inclination angle may be closer to $`20\mathrm{°}`$, and the true rotation velocity more like $`300`$ km/s, more typical of early type spirals. In summary then, the OH and HI spectra are consistent with the lens being an early type spiral at a redshift of $`z\approx 0.89`$.
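The enclosed mass quoted for the Nair et al. inclination follows from $`M(<R)=v_{rot}^2R/G`$ with $`v_{rot}`$ obtained from half the HI velocity width, corrected for inclination. A sketch of the arithmetic (our illustration):

```python
import numpy as np

G    = 6.674e-11          # m^3 kg^-1 s^-2
kpc  = 3.086e19           # m
Msun = 1.989e30           # kg

def enclosed_mass(half_width_kms, incl_deg, R_kpc):
    """M(<R) = v_rot^2 R / G with v_rot = (half the HI width) / sin(i)."""
    v = half_width_kms * 1e3 / np.sin(np.radians(incl_deg))   # m/s
    return v**2 * (R_kpc * kpc) / G / Msun

print(enclosed_mass(130.0, 40.0, 4.0))  # ~4e10 Msun for i = 40 deg, as quoted
print(enclosed_mass(130.0, 20.0, 4.0))  # ~1.3e11 Msun for i closer to 20 deg
```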
# Self-averaging of Random and Thermally disordered diluted Ising systems ## Abstract Self-averaging of singular thermodynamic quantities at criticality for randomly and thermally diluted three dimensional Ising systems has been studied by the Monte Carlo approach. Substantially improved self-averaging is obtained for critically clustered (critically thermally diluted) vacancy distributions in comparison with the observed self-averaging for purely randomly diluted distributions. Critical thermal dilution, leading to maximum relative self-averaging, corresponds to the case when the characteristic vacancy ordering temperature ($`\theta `$) is made equal to the magnetic critical temperature for the pure 3D Ising system ($`T_c^{3D}`$). For the case of a high ordering temperature ($`\theta >>T_c^{3D}`$), the self-averaging obtained is comparable to that in a randomly diluted system. PACS numbers: 05.50.+q, 75.10.Nr, 75.40.Mg, 75.50.Lk Systems with quenched randomness have been studied intensively for several decades. One of the first results was the establishment of the Harris criterion, which predicts that a weak dilution does not change the character of the critical behavior near second order phase transitions for systems of dimension $`d`$ with specific heat exponent lower than zero (so called P systems), $`\alpha _{pure}<0`$, i.e. $`\nu _{pure}>2/d`$, in the nonrandom case. This criterion has been supported by several renormalization group (RG) analyses, and by scaling analysis. It was shown to hold also with strong dilution by Chayes et al. For $`\alpha _{pure}>0`$ (called R systems), for example the 3D Ising case, the system fixed point flows from the pure (undiluted) fixed point towards a new stable fixed point at which $`\alpha _{random}<0`$. For a random hypercubic sample of linear dimension $`L`$ and a number of sites $`N=L^d`$, any observable singular property $`X`$ presents different values for the different realizations of randomness corresponding to the same dilution. This means that $`X`$ behaves as a stochastic variable with average $`[X]`$, variance $`(\mathrm{\Delta }X)^2`$ and a normalized square width $`R_X=(\mathrm{\Delta }X)^2/[X]^2`$. A system is said to exhibit self-averaging (SA) if $`R_X\to 0`$ as $`L\to \mathrm{\infty }`$. If the system is away from criticality, $`L>>\xi `$, where $`\xi `$ is the correlation length, the central limit theorem indicates that strong SA must be expected. However, the behavior of a ferromagnet at criticality, where $`\xi >>L`$, is not so obvious. This point has been studied recently. Wiseman and Domany (WD) investigated the self-averaging of diluted ferromagnets at criticality by means of finite-size scaling calculations, concluding weak SA for both the P and R cases. In contrast, Aharony and Harris (AH), using a renormalization group analysis in $`d=4-\epsilon `$ dimensions, predicted a rigorous absence of self-averaging in critically random ferromagnets. More recently, WD used Monte Carlo simulations to check the lack of self-averaging in critically disordered magnetic systems. The absence of self-averaging was confirmed. The source of discrepancy with their previous scaling analysis was attributed to the particular size dependence of the distribution of pseudocritical temperatures used in their work. Quenched randomness has been investigated basically under two different constraints: a grand-canonical constraint (average density of occupied spin sites $`(p)`$ fixed) and a canonical constraint (total number of occupied sites $`(c)`$ constant).
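Operationally, $`R_X`$ is estimated from a finite ensemble of disorder realizations. A minimal sketch (our illustration, with mock data):

```python
import numpy as np

def normalized_square_width(X):
    """R_X = (Delta X)^2 / [X]^2 over the disorder realizations X_i."""
    X = np.asarray(X, dtype=float)
    return X.var() / X.mean() ** 2

# Mock magnetization values for an ensemble of samples (illustrative numbers):
rng = np.random.default_rng(0)
M_i = rng.normal(loc=0.6, scale=0.05, size=1000)
print(normalized_square_width(M_i))   # ~ (0.05/0.6)^2 ~ 7e-3
# Self-averaging holds if this quantity tends to zero as L is increased.
```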
The applicability of the result obtained by Chayes et al. also to canonical ensembles, implying universality between both kinds of constraints, has been recently investigated. Monte Carlo simulations indicate universality for $`R_X`$ in grand-canonical ensembles with different concentrations $`(p)`$. Universality between results obtained in the canonical and the grand-canonical constraints at very large values of $`L`$ is implied by the work of Aharony et al. In all cases previously investigated, frozen disorder was always produced in a random way, that is, vacancies were distributed throughout the lattice randomly. Real systems, however, can be realized with other kinds of vacancy distributions. In particular, dilution in a ferromagnetic lattice can be produced by an equilibrium thermal order-disorder distribution of vacancies, governed by a characteristic ordering temperature ($`\theta `$). If this ordering temperature $`\theta `$ is high enough, the equilibrium thermal disorder will be similar to the random disorder of previous investigations. However, if $`\theta `$ happens to coincide with the characteristic magnetic critical temperature ($`T_c=4.511617`$) of the undiluted system, new possibilities open up. In the present work we study the effects of thermal vacancy distributions on the magnetization self-averaging of diluted three dimensional Ising systems, using the Monte Carlo approach. We study the self-averaging, or lack of it, in these systems and compare the results with those obtained with random vacancy distributions for the same concentration. In order to actually produce these so called thermally diluted systems we take the following steps: first we thermalize the pure system at a given temperature $`\theta `$, finding a number $`N_+`$ of (+) sites and a number $`N_{}`$ of (−) sites in thermal equilibrium. Then we consider the minority kind, either (+) or (−), and we label those sites as vacancies, thereafter frozen into the disordered system. Only sites of the majority kind will be occupied by spins. Once the spin system has been prepared in this way, we can study any singular magnetic physical property at a given temperature ($`T`$). The spin system (i) so constructed will have a magnetic site concentration $`c_i=\left|N_s\right|/N`$, where $`N_s`$ is the number of majority spins from the pure system and $`N=N_++N_{}`$ is the total number of sites. For a large enough value of $`L`$ and $`\theta \ge T_c^{3D}`$, $`c_i`$ will approach 0.5 in most cases. We perform Monte Carlo calculations of the magnetization per spin at criticality for systems thermally and randomly diluted, using in both cases the Wolff single cluster algorithm with periodic boundary conditions, on lattices of different sizes $`L`$. Determinations of the magnetic critical temperature for different dilutions ($`T_c(p)`$) were in good agreement with those previously given with high accuracy by Heuer and by Wiseman et al. We consider thermally diluted samples at two different values of $`\theta `$ ($`\theta =4.5115=T_c^{3D}`$, and $`\theta =1000>>T_c^{3D}`$) for the same value of $`L=40`$. We can compare these thermal distributions of vacancies with the equivalent random distribution with probability $`p=0.5`$. We will consider the magnetization per spin for each sample at criticality, that is, the magnetization $`M_i[T_c(p)]`$ for each realization “i” within the full set of samples.
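The preparation of a thermally diluted sample can be summarised in a few lines of code. The sketch below uses single-spin Metropolis updates for the thermalization step (the production runs described here used the Wolff cluster algorithm); it illustrates the recipe and is not the actual implementation.

```python
import numpy as np

def thermally_diluted_lattice(L, theta, sweeps=200, seed=0):
    """Thermalize a pure 3D Ising lattice at temperature theta (J = k_B = 1),
    then freeze the minority-sign sites as vacancies; the majority sites
    are the ones subsequently occupied by spins."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L, L))
    for _ in range(sweeps):
        for _ in range(L ** 3):                    # one Metropolis sweep
            i, j, k = rng.integers(0, L, size=3)
            nn = (s[(i + 1) % L, j, k] + s[(i - 1) % L, j, k] +
                  s[i, (j + 1) % L, k] + s[i, (j - 1) % L, k] +
                  s[i, j, (k + 1) % L] + s[i, j, (k - 1) % L])
            dE = 2.0 * s[i, j, k] * nn
            if dE <= 0.0 or rng.random() < np.exp(-dE / theta):
                s[i, j, k] *= -1
    majority = 1 if s.sum() >= 0 else -1
    occupied = (s == majority)                     # vacancies = minority sites
    return occupied, occupied.mean()               # site mask and c_i

mask, c_i = thermally_diluted_lattice(8, 4.5115, sweeps=50)
print(c_i)   # approaches 0.5 for theta >= T_c of the pure 3D Ising model
```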
In order to show the resulting dispersion in $`c`$ and in $`M`$ at a glance, we use a scatter plot of $`c`$ vs. $`M`$, where each point on the plane represents an actual realization with a given density of spins $`c_i`$ and a given magnetization per spin $`M_i`$. This plot contains more information than the usual histograms. First we consider what we may call the hypercritical case, where $`\theta =1000`$. Results are shown in Fig. 1 for $`L=40`$. As expected, the diluted sets realized by thermal and random dilution look almost the same, showing the equivalent effects of a high ordering temperature and random dilution. Histograms for the magnetization are shown for the “grand-canonical” case with $`p=0.5`$ and $`c>0.5`$ (in our realizations, there are never more vacancies than spins). Similar normalized square widths are found, $`R_M^{random}\approx R_M^{thermal}\approx 0.06`$, in both cases. They are smaller than the ones in Ref. 13 due to the fact that for us $`c>0.5`$ always. Next we consider the critical case, $`\theta =T_c^{3D}`$. Results are shown in Fig. 2, again for $`L=40`$. The situation changes completely. Now the critically thermally ordered set behaves differently, with an increase in the dispersion in $`c`$. The magnetization has been calculated at the critical temperature for a dilution equal to $`p=0.5`$, $`T_c(p=0.5)`$, in order to compare the normalized square width in both cases. In this case we find extremely low values of the normalized square width for the thermally diluted set of samples in the “grand-canonical” case, $`R_M^{thermal}\sim 10^{-3}`$, which is around $`60`$ times smaller than the width found for the random case (see the width differences in the histograms in Fig. 2). To study the self-averaging evolution we must consider the behavior of the normalized square width $`R_M`$ with the length $`L`$ of the system. Consequently we have diluted the systems thermally at criticality ($`\theta =T_c^{3D}`$) for different values of $`L`$. Of course, in all these realizations for critical thermal cases, the corresponding value of $`p`$ depends on the value of $`L`$, but it always tends to $`0.5`$ as $`L`$ increases, as it should. We have then calculated the magnetization at $`T=T_c(p=0.5)`$ in order to compare the evolution of the normalized square widths of the thermally diluted cases under the “grand-canonical” constraint with the respective widths corresponding to random dilution with $`p=0.5`$. The evolution with $`L`$ of the full scatter plot ($`M`$ vs. $`c`$) for the critical thermal case is presented in Fig. 3. Note how the evolution with increasing $`L`$ of the “cloud” of points reduces the dispersion in the values of $`c`$, and the “cloud” tends to that for the canonical case, as it should. This behavior is also appreciable for random dilution, even when the decrease in the dispersion in $`c`$ is not so easily seen, due to the restriction to $`p=0.5`$ only (see inset in Fig. 3). A more accurate statistical study will be needed to investigate quantitatively the behavior of $`R_M`$ with the dimension $`L`$ of our system, and, in agreement with Ref. 13, higher values of $`L`$ will be needed to find total universality of behavior between canonical and grand-canonical constraints.
The normalized squared width $`R_M`$ of the magnetization is always around one order of magnitude bigger for the random case than for the critically thermally diluted disposition of vacancies (see Fig.4). Intuitively, an equilibrium thermal distribution of vacancies at criticality implies a clustered distribution of both spins and vacancies for a given distribution level ($`p,c`$). This implies an increase in the relative number of bonds between neighboring spins, and therefore an increase of the ordering and a subsequent decrease of the normalized square width for the magnetization. We can summarize our results as follows: using the Monte Carlo approach we have investigated diluted ferromagnets using a new method to specify the vacancy distribution which allows the ”thermal” clustering of both the magnetic atoms and magnetic vacancies under specific ”thermal” conditions. This kind of thermal dilution gives results totally equivalent to those obtained with the usual random dilution method when the order-disorder distribution of the vacancies is obtained away from criticality (hipercritical conditions $`\theta >>T_c^{3D}`$) but it gives strongly different results at critical conditions ($`\theta =T_c^{3D}`$). In conclusion, our Monte Carlo results in critically thermally diluted (as opposed to randomly diluted) ”grand-canonical” samples with L increasing, show substantial self-averaging improvement of the magnetization in three dimensional highly diluted ($`p=0.5`$) Ising ferromagnets. We gratefully acknowledge financial support from DGCyT through grant PB96- 0037.
# BeppoSAX monitoring of the “anomalous” X–ray pulsar 4U 0142+61 ## 1 Introduction The properties of 4U 0142+61 (White et al. 1987) remained puzzling for a long time, owing to confusion with the nearby pulsating and transient Be/neutron star system RX J0146.9+6121 (Motch et al. 1991; Mereghetti et al. 1993). The 1–10 keV spectrum is extremely soft (power law photon index of $`\sim 4`$; White et al. 1987) and led to the initial classification of 4U 0142+61 as a possible black hole candidate. ASCA observations provide evidence for a $`\sim 0.4`$ keV blackbody component contributing $`\sim 40`$% of the 0.5–10 keV band X–ray flux (White et al. 1996). The X–ray luminosity of 4U 0142+61 has not shown substantial secular variations around an average value of $`6\times 10^{34}`$ erg s<sup>-1</sup> (assuming a distance of 1 kpc). Despite the small error box (5″ radius), no optical or IR counterpart has yet been identified, down to $`V<24`$, $`R<22.5`$, $`J<20`$ and $`K<17`$ (Steinle et al. 1987; White et al. 1987; Coe & Pightling 1998). These limits rule out the presence of a massive companion. Using data from the EXOSAT archive, Israel et al. (1994) discovered pulsations at 8.7 s, which were later confirmed with ROSAT (Hellier 1994). No delays in the pulse arrival times caused by orbital motion were found, with upper limits on $`a_\mathrm{x}`$ sin $`i`$ of about $`0.37`$ lt–s for orbital periods $`P_{\mathrm{orb}}`$ between 7 min and 12 hr (Israel et al. 1994). Tighter upper limits on $`a_\mathrm{x}`$ sin $`i`$ ($`0.26`$ lt–s for 70 s $`<P_{\mathrm{orb}}<`$ 2.5 days) have recently been obtained with a RXTE observation (Wilson et al. 1998). This yielded strong constraints on the orbital inclination and the mass of the possible companion star in the case of normal or helium main sequence stars and giants with a helium core. A white dwarf companion would be compatible with both the current optical photometric and pulse arrival time limits. The EXOSAT and ROSAT period measurements, obtained in 1984 and 1993, provide a spin–down rate of $`2.1\times 10^{-12}`$ s s<sup>-1</sup>. The properties of 4U 0142+61 are similar to those of a small group of “anomalous” X–ray pulsars (AXPs), with spin periods within a narrow range (6–12 s; Mereghetti & Stella 1995). Among these are 1E 2259+586, 1E 1048.1–5937, 1E 1841–045 (Vasisht & Gotthelf 1997), 1RXS 170849–400910 (Sugizaki et al. 1997) and AX J1845–045 (Torii et al. 1998; Gotthelf & Vasisht 1998). We present here a detailed analysis of the BeppoSAX Narrow Field Instruments (NFIs) observations of the AXP 4U 0142+61. We confirm the presence of a blackbody spectral component in the soft X–ray spectrum, as seen in the ASCA data (White et al. 1996) and in two other “anomalous” X–ray pulsars: 1E 2259+586 (Corbet et al. 1995; Parmar et al. 1998) and 1E 1048.1–5937 (Oosterbroek et al. 1998). We also present the results of pulse phase spectroscopy and the pulse period history of 4U 0142+61, together with the timing analysis from a serendipitous Rossi X–ray Timing Explorer (RXTE) observation. ## 2 BeppoSAX Observations Results from the Low–Energy Concentrator Spectrometer (LECS; 0.1–10 keV; Parmar et al. 1997) and Medium–Energy Concentrator Spectrometer (MECS; 1.3–10 keV; Boella et al. 1997) on–board BeppoSAX are presented. The MECS consists of three identical grazing incidence telescopes with imaging gas scintillation proportional counters in their focal planes.
The LECS uses an identical concentrator system to the MECS, but utilizes an ultra-thin (1.25 $`\mu `$m) detector entrance window and a driftless configuration to extend the low-energy response to 0.1 keV. The fields of view (FOV) of the LECS and MECS are circular with diameters of 37′ and 56′ respectively. The energy resolution of both instruments is $`\sim 8.5\sqrt{(6\mathrm{keV}/E)}`$ % full–width half maximum (FWHM), where $`E`$ is the energy. 4U 0142+61 was observed by BeppoSAX four times between January 1997 and February 1998 (see Table 1). One of the goals of the program was to monitor possible X–ray flux variations on a time scale of months with two 40 ks pointings. However, between the first and the subsequent observations one of the three MECS units failed (1997 May 9), and the data of observations B, C and D were obtained with the remaining two MECS units. Moreover, during the second and third observations BeppoSAX experienced pointing failures resulting in shorter effective exposure times. ### 2.1 Spectral analysis Spectra were obtained centered on the position of 4U 0142+61 using an extraction radius of 8′ for both the LECS and MECS. Background subtraction was performed using standard blank field exposures. The average background subtracted source count rates are reported in Table 1 for the four observations, together with the off–axis angle and number of working MECS units. The PHA spectra were rebinned so as to have $`>`$40 counts in each energy bin, allowing the reliable adoption of a minimum $`\chi ^2`$ technique for model fitting. All the bins which were consistent with zero after background subtraction were rejected. Moreover, the MECS spectra were restricted to the 1.8–10 keV range. Data from observations B and C were not used for spectral analysis purposes owing to poor statistics. A constant factor free to vary within a predetermined range was applied in the fitting to allow for known normalization differences between the LECS and MECS. In order to compare our results with previous observations, the spectra were first fit with two models: (i) an absorbed power–law, and (ii) an absorbed power–law plus blackbody (see Table 2). The power–law model gave an unsatisfactory description of the spectra with $`\chi _\nu ^2`$ = $`\chi ^2`$/degrees of freedom (hereafter dof) of 1.6 (347 dof) and 1.9 (396 dof) for observations A and D, respectively. We note that among simple single component spectral models, the power–law gives the lowest reduced $`\chi ^2`$; this is for a photon index $`\mathrm{\Gamma }`$ = 4.37$`\pm `$0.03 (A) and 4.55$`\pm `$0.03 (D; 90% confidence level uncertainties are used throughout the paper). The power–law plus blackbody model gave $`\chi _\nu ^2`$ = 1.28 (344 dof) and 1.07 (290 dof) for observations A and D, respectively (see Table 2 for details). An F–test shows that the inclusion of the blackbody component is highly significant (probability of $`\sim `$10<sup>-26</sup> and $`\sim `$10<sup>-32</sup> for obs. A and D, respectively). The best fit two–component spectra are plotted in Fig. 1. In Fig. 2 the unfolded energy spectrum for observation A is shown together with the contributions of the two spectral components, the power–law and the blackbody. We also fit the LECS and MECS spectra of observation A with other two–component spectral models, such as a power–law with a cut–off ($`\chi ^2`$/dof = 453.2/344), two blackbodies ($`\chi ^2`$/dof = 503.1/344) and a broken power–law ($`\chi ^2`$/dof = 400.6/344). All these models gave substantially worse fits than the power–law plus blackbody model.
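The significance of the additional blackbody component can be assessed with the usual F-test on the change in $`\chi ^2`$. A minimal sketch (our illustration; the input values are of the same order as those quoted above):

```python
from scipy.stats import f as f_dist

def ftest_extra_component(chi2_simple, dof_simple, chi2_full, dof_full):
    """F-test for an additional (nested) spectral component."""
    extra = dof_simple - dof_full                  # parameters added
    F = ((chi2_simple - chi2_full) / extra) / (chi2_full / dof_full)
    return F, f_dist.sf(F, extra, dof_full)

# Observation A: power law (chi2_nu = 1.6, 347 dof) vs power law + blackbody
# (chi2_nu = 1.28, 344 dof):
F, p = ftest_extra_component(1.6 * 347, 347, 1.28 * 344, 344)
print(F, p)   # p is vanishingly small: the blackbody is highly significant
```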
Similar results were obtained for the spectrum of observation D. Since the blackbody peaks close to the lower end of the MECS energy range, we also checked these results by analysing the LECS data only (which cover a wider spectral range than the MECS; see Table 2 and the Discussion). We also checked the stability of the results by rebinning the PHA channels in the LECS and MECS spectra by a factor of $`\sim `$3. Again, all the bins which were consistent with zero after background subtraction were rejected from the analysis. No significant variation in the spectral parameters and uncertainties was found for either model. The data from the High Pressure Gas Scintillation Proportional Counter (HPGSPC) and the Phoswich Detector System (PDS) did not hold any useful information on 4U 0142+61. In fact, due to the large FOVs of these instruments and the steep spectrum of 4U 0142+61, the counts above 10 keV were likely dominated by the nearby source RX J0146.9+6121. ### 2.2 Pulse timing, folded light curves and phase resolved spectroscopy The arrival times of the 0.5–10 keV photons from 4U 0142+61 were corrected to the barycenter of the solar system and 1 s binned light curves were accumulated for each observation. The average count rates are reported in Table 1. The MECS counts were used to determine the 4U 0142+61 pulse period. The data from observations A, B and D were divided into 6, 4, and 5 time intervals, respectively, and for each interval the relative phase of the pulsations was determined. These phases were then fit with a linear function giving a best–fit period of $`8.68804\pm 0.00007`$ s for observation A (see Table 3). For observation B a period of $`8.6882\pm 0.0002`$ s was obtained, while during observation C pulsations were not detected owing to statistics too poor ($`\sim `$ 11000 photons) to detect such a weak ($`\sim `$6% pulsed fraction) signal. Finally, for observation D we determined a period of 8.6883$`\pm `$0.0001 s. The BeppoSAX pulse period values are plotted in Fig. 4 together with the previous measurements (see Sect. 3 for the RXTE measurement). The background subtracted light curves from observation A, folded at the best period in different energy ranges (Fig. 3; first three panels), show a double–peaked profile (see also White et al. 1996). The pulsed fraction (semiamplitude of modulation divided by the mean source count rate) was 7.1$`\pm `$1.5%, 7.5$`\pm `$0.5% and 13$`\pm `$2% in the 0.5–1.5 keV, 1.5–4.0 keV and 4.0–10 keV energy bands, respectively. Similar results were obtained for observations B and D. Flux variations on several timescales are usually displayed by high accretion rate X–ray pulsars; we found no short or long–term variations in any of the BeppoSAX accumulated light curves. The upper two panels of Fig. 5 show the pulsed fraction versus energy during observation A for the first two harmonics. These values were obtained by fitting the corresponding light curves with two sinusoidal functions. The first harmonic shows a nearly constant value ($`\sim `$ 6%) up to 4 keV, while at higher energies it increases to about 15%. A constant value of 5–7% is inferred for the second harmonic. We also calculated the root mean square variability (rms; defined as $`\sqrt{V_{obs}-V_{exp}}`$ divided by the mean source count rate, where $`V_{obs}`$ and $`V_{exp}`$ are the observed and expected variance, respectively) of the folded light curve at different energies (third panel).
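The folded profiles, pulsed fractions and rms variability shown in Fig. 5 are straightforward to compute from a barycentred event list. A minimal sketch (our illustration, with mock events; $`V_{exp}`$ is taken to be the Poisson variance):

```python
import numpy as np

def fold_events(times_s, period_s, nbins=20):
    """Histogram barycentred arrival times into pulse-phase bins."""
    phases = (times_s / period_s) % 1.0
    counts, _ = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    return counts

def pulsed_fraction(counts):
    """Semiamplitude of the fundamental divided by the mean count rate."""
    ph = 2.0 * np.pi * (np.arange(counts.size) + 0.5) / counts.size
    a = 2.0 * np.mean(counts * np.cos(ph))
    b = 2.0 * np.mean(counts * np.sin(ph))
    return np.hypot(a, b) / np.mean(counts)

def rms_variability(counts):
    """sqrt(V_obs - V_exp) / mean, with V_exp the Poisson variance."""
    v_obs, v_exp = counts.var(), counts.mean()
    return np.sqrt(max(v_obs - v_exp, 0.0)) / counts.mean()

# Mock event list: 8.688 s pulsations with a ~7% sinusoidal modulation
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 4.0e4, size=400000)
keep = rng.random(t.size) < 0.5 * (1.0 + 0.07 * np.sin(2 * np.pi * t / 8.688))
counts = fold_events(t[keep], 8.688)
print(pulsed_fraction(counts), rms_variability(counts))  # ~0.07 and ~0.05
```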
Finally, the ratio between the power–law and the total flux for the blackbody plus power–law spectral model is shown (lowest panel). From the comparison of these quantities we can infer that: (i) since the rms variability of the folded light curves is $`\pi ^{1/2}`$ times the geometric sum of the amplitudes in all available harmonics (mainly the 1<sup>st</sup> and 2<sup>nd</sup> in the present case), it is apparent that the behaviour is consistent with the derived amplitudes of the 1<sup>st</sup> and 2<sup>nd</sup> harmonics; (ii) there is evidence for an increase of the pulsed fraction (at the $`\sim `$2$`\sigma `$ confidence level) at energies above 5 keV; (iii) there is not a simple relationship between the pulsed fraction and the flux in either of the two spectral components adopted in our analysis. A set of four phase–resolved spectra (phase boundaries 0.06, 0.26, 0.5, 0.78) was accumulated for observation A (i.e. the observation with the highest number of source counts). These were then fit with the power–law plus blackbody model described in Sect. 2.1, with N<sub>H</sub> fixed at the phase–averaged best–fit value. Initially, the blackbody temperature was fixed at its phase–averaged best–fit value and only the power–law parameters and blackbody normalisation were allowed to vary. The fits were then repeated with $`\mathrm{\Gamma }`$ fixed and the blackbody parameters and power–law normalisation free (see Fig. 3). No significant changes were detected for $`\mathrm{\Gamma }`$ and $`kT`$ as a function of the pulse phase. Similar results were obtained for the fluxes of the two spectral components. ## 3 RXTE Observation The 4U 0142+61 position was included in a RXTE observation pointed at the nearby high–mass X–ray binary pulsar RX J0146.9+6121 (from 1996 March 28 11:16:48 to 22:09:20; $`\sim `$20 ks of effective exposure time). The results presented here are based on data collected in the so called “Good Xenon” operating mode with the Proportional Counter Array (PCA, Jahoda et al. 1996). The PCA consists of 5 proportional counters operating in the 2–60 keV range, with a total effective area of approximately 7000 cm<sup>2</sup> and a field of view, defined by passive collimators, of $`1\mathrm{deg}`$ FWHM. The data consist of the time of arrival and the pulse height of each count. In order to minimise the contamination from RX J0146.9+6121 (a much harder spectrum X–ray pulsar), we considered only photons in the 2–4 keV energy interval. In order to reduce the background, only the counts detected in the first Xenon layer of each counter were used. We obtained for 4U 0142+61 a best pulse period of 8.6881$`\pm `$0.0002 s (see Table 3 and Fig. 4). Figure 6 shows the corresponding folded light curve. Despite the uncertainties in the background subtraction (the contribution from RX J0146.9+6121 is difficult to estimate), it is apparent that the RXTE/PCA pulse profile of 4U 0142+61 is similar to that observed with BeppoSAX. The RXTE value we obtained is consistent to within the statistical uncertainties with that of Wilson et al. (1998; energy range 3.7–9.2 keV). The somewhat different pulse shape is likely due to the different energy band used. ## 4 Discussion Several models have been proposed in order to explain the nature of the “anomalous” X–ray pulsars.
Mereghetti & Stella (1995) proposed that these sources form a homogeneous subclass of accreting neutron stars, perhaps members of low mass X–ray binaries (LMXBs), which are characterized by lower luminosities ($`10^{35}`$–$`10^{36}`$ erg s<sup>-1</sup>) and higher magnetic fields ($`B\sim 10^{11}`$ G) than classical LMXBs. However, the lack of evidence for a binary nature for any of these systems (Mereghetti et al. 1998; Wilson et al. 1998) argues in favor of models in which the X–ray emission originates from a compact object that is not in an interacting binary system. Van Paradijs et al. (1995) proposed that AXPs are young population I objects, which originate from the evolution of short orbital period High Mass X–ray Binaries (HMXBs), following the expansion of the massive star and the onset of unstable Roche lobe overflow, before central hydrogen is exhausted (see also Cannon et al. 1992). The resulting common–envelope phase should cause the neutron star to spiral in and disrupt the companion star after the so–called Thorne–Żytkow stage. Therefore, these sources should consist of an isolated neutron star accreting matter from a residual disk with a mass in the $`10^{-3}`$–1 M<sub>⊙</sub> range. The blackbody component in 4U 0142+61 and, likewise, other AXPs has been interpreted as evidence for quasi–spherical accretion onto an isolated neutron star formed after common envelope and spiral–in of a massive short–period (P<sub>orb</sub> $`\lesssim `$ 1 yr) X–ray binary. In this case the remains of the envelope of the massive star might produce two different types of matter inflow: a spherical accretion component with low specific angular momentum giving rise to the blackbody component, and a high specific angular momentum component which forms an accretion disk and is likely responsible for the power–law emission (Ghosh et al. 1997). Although some problems remain open in this scenario, we note that a post common–envelope evolutionary scenario for 4U 0142+61 is also suggested by the inference that about half of the X–ray absorbing material is close to the source (White et al. 1996). In this context it is interesting to note that independent evidence supports the view that the binary X–ray pulsars 4U1626–67 and HD49798 were formed following a common envelope and spiral–in evolutionary phase (see Angelini et al. 1995 and Israel et al. 1997 and references therein). The different properties of the companion stars in these two cases may reflect the fact that unstable mass transfer set in at different evolutionary phases in the nuclear evolution of their progenitors (in turn, reflecting different initial masses and orbital periods; see also Ghosh et al. 1997). Thompson & Duncan (1993, 1996) proposed that the AXPs are “magnetars”, neutron stars with a superstrong magnetic field ($`10^{14}`$–$`10^{15}`$ G). This proposal is supported by the similarity of the pulse periods (8.05 s, 7.5 s and 5.16 s) and of the Ṗ values measured in the Soft $`\gamma `$–ray Repeaters SGR 0526–66, SGR 1806–20 and SGR 1900+14, respectively (Kouveliotou et al. 1998; Hurley et al. 1998). If this connection proved correct, AXPs would be quiescent soft $`\gamma `$–ray repeaters. Heyl & Hernquist (1997) argued that their emission may be powered by the cooling of the core through a strongly magnetised envelope of matter (made up mainly of hydrogen and helium). Pulsations would originate from a temperature gradient on the surface of the star.
Moreover, Heyl & Hernquist (1998) showed that in this scenario spin–down irregularities, observed in the period history of two “anomalous” X–ray pulsars (1E 2259+58 and 1E 1048–59), may be simply accounted for by glitches like those observed in young radio pulsars. The 0.5–10 keV spectrum of 4U 0142+61 is well modelled by the sum of an absorbed steep power–law and a low–energy blackbody. The latter component was introduced by White et al. (1996) based on the ASCA data; the corresponding blackbody radius and flux were $`2.4\pm 0.3`$ km (at 1 kpc) and $`\sim `$40% of the total, respectively. Neither the power–law photon index nor the blackbody temperature shows evidence of changes across the different BeppoSAX observations. We found marginal evidence (at about the 2$`\sigma `$ confidence level) for changes relative to previous observations. A comparison of the ASCA and BeppoSAX results shows that: (i) the power–law photon index increased by about 4% (observations A and D); (ii) the blackbody radius decreased by about 37% (A) and 12% (D); (iii) the blackbody flux decreased to $`\sim `$30% (A) and $`\sim `$35% (D) of the total flux, therefore showing a $`\sim `$10%–5% decrease with respect to that of the ASCA observation; (iv) the 0.5–10 keV total flux decreased by about 15–12%. By fixing the parameters N<sub>H</sub>, $`kT`$ and $`\mathrm{\Gamma }`$ to the values inferred by ASCA, the BeppoSAX observation A spectrum gives a $`\chi _\nu ^2`$/dof of 1.9/348. In this case the blackbody component accounts for $`\sim `$30% of the total flux ($`\sim `$10% lower than that of ASCA). As an additional test we also merged the data of observations A (MECS 2 and 3 only) and D and fit them with the power–law plus blackbody model (see Table 2). While a small change in the spectral parameters is found relative to observations A and D separately, the blackbody flux is $`\sim `$30% of the total. To remove possible effects introduced by the proximity of the blackbody peak to the lower end of the MECS energy range, we also fitted the spectra of observations A, D and A+D using only the data accumulated by the LECS, whose energy band covers the range 0.5–9.0 keV without interruption and allows a better fit in the energy interval (1–4 keV) where the power–law and the blackbody components overlap. Again we obtained results similar to the LECS+MECS case, but with larger uncertainties. All these results point to a marginal variation of the spectral parameters of 4U 0142+61, if any. The BeppoSAX data suggest that the pulsed fraction for energies above 4 keV decreased (from 25%$`\pm `$5% with ASCA to 13%$`\pm `$2% with BeppoSAX; 90% uncertainties). The pulse periods measured by BeppoSAX show that 4U 0142+61 has continued its secular spin–down during 1996–1998 (see Fig. 4). The period derivative inferred from the BeppoSAX observations alone ($`\sim 6.0_{-5.1}^{+7}`$ $`\times `$10<sup>-12</sup> s s<sup>-1</sup>) is consistent with the average spin–down rate ($`\sim `$2$`\times `$10<sup>-12</sup> s s<sup>-1</sup>) inferred over the 19 yr span of the historical dataset (note that in the period list we did not include the low significance detections inferred on 1985 November 11 and December 11 with EXOSAT ME, and on 1991 February 13 with ROSAT HRI; Israel et al. 1994). The inferred Ṗ for 4U 0142+61 is consistent within the uncertainties with all pulse period measurements except, perhaps, one (the 1979 Einstein SSS one; see Fig.
4), implying that no “glitches” have been observed so far. ###### Acknowledgements. We would like to thank Giancarlo Cusumano for providing the MECS off–axis matrices and the BeppoSAX Mission Planning team for their constant help. The authors also thank K. Long, whose comments helped to improve an earlier version of this paper. This work was partially supported through ASI grants.
# Nuclear spin driven resonant tunnelling of magnetisation in Mn12 acetate ## Abstract Current theories still fail to give a satisfactory explanation of the observed quantum phenomena in the relaxation of the magnetisation of the molecular cluster Mn<sub>12</sub> acetate. In the very low temperature regime, Prokof’ev and Stamp recently proposed that slowly changing dipolar fields and rapidly fluctuating hyperfine fields play a major role in the tunnelling process. By means of a faster relaxing minor species of Mn<sub>12</sub>ac and a new experimental ’hole digging’ method, we measured the intrinsic line width broadening due to local fluctuating fields, and found strong evidence for the influence of nuclear spins on resonant tunnelling at very low temperatures (0.04–0.3 K). At higher temperatures (1.5–4 K), we observed a homogeneous line width broadening of the resonance transitions, in agreement with a recent calculation of Leuenberger and Loss. Observation of mesoscopic quantum phenomena in magnetism has remained a challenging problem. The first striking demonstrations of quantum tunnelling and quantum phase interference were found in Mn<sub>12</sub> acetate and Fe<sub>8</sub>, molecular clusters having a spin ground state $`S=10`$. Several models and theories have been proposed to explain in detail the experimental results published during the last five years by several authors, but there is not yet satisfactory agreement between theory and experiment concerning mainly the relaxation rate and the resonance line width. This letter is intended to report more accurate measurements which should help to find a satisfactory explanation of the observed quantum phenomena. Several authors have pointed out that in the Mn<sub>12</sub> carboxylate family different isomeric forms give rise to different relaxation rates. This has also been observed in Mn<sub>12</sub> acetate. We found that a minor species Mn<sub>12</sub>ac(2), randomly distributed in crystals of the major species Mn<sub>12</sub>ac(1), exhibits a faster relaxation rate which becomes temperature independent below 0.3 K. Even though this second species has been only partially characterised, we can exploit it as a local probe providing unique information on the tunnelling process. We used a recently developed method for measuring the intrinsic line width broadening due to local fluctuating fields and found strong evidence for the influence of nuclear spins on resonant tunnelling. In the first part of this letter, we focus on the low temperature and low field limit, which is particularly interesting because phonon-mediated relaxation is astronomically slow and can therefore be neglected. In this limit, only the two lowest levels with quantum numbers $`M_z`$ =$`\pm `$10 are involved. They are coupled by a tunnel matrix element $`\mathrm{\Delta }/2`$, where $`\mathrm{\Delta }`$ is the tunnel splitting, which is estimated to be about 10<sup>-10</sup> K for Mn<sub>12</sub>. In an ideal system, resonant tunnelling requires that the magnetic field (local to the giant spin) is smaller than the field associated with the tunnel splitting $`\mathrm{\Delta }`$, which means a field smaller than 10<sup>-9</sup> T for the Mn<sub>12</sub>ac clusters. This fact would make it very difficult to observe the tunnelling of isolated giant spins at constant applied field. This dilemma is solved by invoking the dynamics of the dipolar interaction between molecules and the nuclear spins.
The tunnelling scenario can be summarised as follows: the rapidly fluctuating hyperfine field brings molecules into resonance. The dipolar field of tunnelled spins can lift the degeneracy and remove from resonance a large number of neighbouring spins. However, a gradual adjustment of the dipolar fields across the sample (up to 0.03 T in Mn<sub>12</sub>ac), caused by tunnelling relaxation, brings other molecules into resonance and allows continuous relaxation. Therefore, one expects a fast relaxation at short times, and slow logarithmic relaxation at long times. Recently, we developed a method for measuring the intrinsic line width broadening due to local fluctuating fields of the nuclear spins. It is based on the general idea that the short time relaxation rate is directly connected to the number of molecules which are in resonance at a given longitudinal applied field $`H`$. The Prokof’ev–Stamp theory predicts that the magnetisation should relax at short times with a square-root time dependence: $$M(H,t)=M_{\mathrm{in}}+(M_{\mathrm{eq}}(H)-M_{\mathrm{in}})\sqrt{\mathrm{\Gamma }_{\mathrm{sqrt}}(H)t}$$ (1) Here $`M_{\mathrm{in}}`$ is the initial magnetisation at time $`t`$ = 0 (i.e. after a rapid field change), and $`M_{\mathrm{eq}}(H)`$ is the equilibrium magnetisation. The rate function $`\mathrm{\Gamma }_{\mathrm{sqrt}}(H)`$ is proportional to the normalised distribution $`P(H)`$ of molecules which are in resonance at the applied field $`H`$: $$\mathrm{\Gamma }_{\mathrm{sqrt}}(H)\propto \frac{\mathrm{\Delta }^2}{\hbar }P(H)$$ (2) where $`\hbar `$ is the reduced Planck constant. Thus, measurements of $`\mathrm{\Gamma }_{\mathrm{sqrt}}(H)`$, as a function of $`H`$, should give direct access to the distribution $`P(H)`$. Our measuring procedure is as follows. Starting from a well defined magnetisation state, we apply a magnetic field $`H`$ in order to measure the short-time square root relaxation behaviour. By using eq. (1), we get the rate function $`\mathrm{\Gamma }_{\mathrm{sqrt}}(H)`$ at the field $`H`$. Then, starting again from the same well defined magnetisation state, we measure $`\mathrm{\Gamma }_{\mathrm{sqrt}}(H)`$ at another field $`H`$, yielding the field dependence of $`\mathrm{\Gamma }_{\mathrm{sqrt}}(H)`$, which is proportional to the dipolar distribution $`P(H)`$ (eq. (2)). This technique can be used for following the time evolution of molecular states in the sample during a tunnelling relaxation. Starting from a well defined magnetisation state, and after applying a field $`H_{dig}`$, we let the sample relax for a time $`t_{dig}`$, called the ’digging field’ and ’digging time’, respectively. During the digging time, a small fraction of the molecular spins tunnel and reverse the direction of their magnetisation. Finally, we apply a field $`H`$ to measure the short time relaxation in order to get $`\mathrm{\Gamma }_{\mathrm{sqrt}}(H)`$ (eq. (1)). The entire procedure is then repeated to probe the distribution at other fields $`H`$, yielding $`\mathrm{\Gamma }_{\mathrm{sqrt}}(H,H_{dig},t_{dig})`$, which is proportional to the number of spins which are still free for tunnelling. With this procedure one obtains the distribution $`P(H,H_{dig},t_{dig})`$, which we call the ’tunnelling distribution’.
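A minimal sketch of how $`\mathrm{\Gamma }_{\mathrm{sqrt}}(H)`$ could be extracted from a short-time relaxation curve via eq. (1); the magnetisation data below are synthetic and the assumed rate and noise level are placeholders, not the measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
M_in, M_eq = -1.0, 1.0          # assumed initial and equilibrium magnetisation
gamma_true = 3e-4               # s^-1, rate used to generate the fake data

t = np.linspace(0.0, 100.0, 50)                       # short-time window (s)
M = M_in + (M_eq - M_in) * np.sqrt(gamma_true * t)
M += rng.normal(0.0, 5e-3, t.size)                    # measurement noise

def sqrt_law(t, gamma):                               # eq. (1) at fixed H
    return M_in + (M_eq - M_in) * np.sqrt(gamma * t)

(gamma_fit,), _ = curve_fit(sqrt_law, t, M, p0=[1e-4], bounds=(0, np.inf))
print(f"fitted Gamma_sqrt = {gamma_fit:.2e} s^-1 (true {gamma_true:.2e})")
# Repeating this fit at many applied fields H maps out Gamma_sqrt(H), which
# by eq. (2) is proportional to the distribution P(H).
```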
We used this new technique, which we call the ’hole digging’ method, for studying Fe<sub>8</sub> molecular clusters and found that tunnelling causes rapid transitions of molecules near $`H_{dig}`$, thereby ”digging a hole” in $`P(H,H_{dig},t_{dig})`$ around $`H_{dig}`$, and also pushing other molecules away from resonance. The hole widens and moves with time, in a way depending on sample shape; the width dramatically depends on thermal annealing of the magnetisation of the sample. For small initial magnetisation, the hole width shows an intrinsic broadening which may be due to nuclear spins. For Mn<sub>12</sub>ac(1), T $`<`$ 1.5 K, and small applied fields, relaxation measurements are very time consuming because the pure quantum relaxation rate between $`M_z=\pm 10`$ is of the order of years or longer. However, we found that one can use the minor species Mn<sub>12</sub>ac(2), which was shown to have a much faster tunnelling rate. Furthermore, it has the advantage of being diluted over the entire crystal with a concentration of 1 to 2 percent, thus the internal dipolar fields hardly change during relaxation of Mn<sub>12</sub>ac(2). As Mn<sub>12</sub>ac(2) experiences the same environment (concerning mainly hyperfine fields) as the major species Mn<sub>12</sub>ac(1), we propose to use Mn<sub>12</sub>ac(2) as a local probe of any fluctuating field acting on the giant spins of Mn<sub>12</sub>ac molecular clusters. Below about 1.5 K, we found that the magnetisation of Mn<sub>12</sub>ac(2) can be reversed in an applied field smaller than 2 T whereas that of Mn<sub>12</sub>ac(1) hardly reverses because of the very small tunnelling rate. Fig. 1 presents typical hysteresis loop measurements of Mn<sub>12</sub>ac(2), which are almost temperature independent below 0.6 K. These loops are strongly field sweeping rate dependent and show quantum tunnelling resonances at about equidistant fields of $`\mathrm{\Delta }H\approx 0.39`$ T, in comparison to 0.45 T for Mn<sub>12</sub>ac(1). Fig. 2 presents typical relaxation measurements of the minor species Mn<sub>12</sub>ac(2). For each curve, the major species Mn<sub>12</sub>ac(1) was demagnetised and Mn<sub>12</sub>ac(2) was saturated in a field of -1.4 T. Approximate square root relaxation was found in the range from $`0.014>M/M_S>0.01`$, where $`M_S`$ is the saturation magnetisation of the entire crystal. The fact that the relaxation is not exactly proportional to $`\sqrt{t}`$ in the short time region is irrelevant for the discussion of this letter. Fig. 3a presents tunnelling distributions for Mn<sub>12</sub>ac(2) for digging times between t<sub>0</sub> = 0 and 128 s. Note the depletion (”hole digging”) around the digging field $`H_{dig}`$ = 0.39 T. This hole-digging arises because only spins in resonance can tunnel. Although the hole is narrow, it is still several orders of magnitude larger than the field associated with the tunnel splitting $`\mathrm{\Delta }`$. The hole could be fitted to a Gaussian function yielding the line width $`\sigma `$ (see Fig. 3b), which we studied as a function of temperature and digging time (Fig. 4). We defined an intrinsic line width $`\sigma _0`$ by a linear extrapolation of the curves to $`t_{dig}`$ = 0. For temperatures between 0.04 and 0.3 K, $`\sigma _0\approx 12`$ mT. For T $`>`$ 0.3 K, $`\sigma _0`$ increases rapidly. The physical origin of the line width $`\sigma _0`$ is tentatively assigned to the fluctuating hyperfine fields.
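The hole analysis itself is straightforward; a sketch with synthetic tunnelling distributions (the hole depth, widths and noise are invented placeholders chosen only so the procedure runs):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_hole(H, depth, sigma, H_dig, base):
    return base - depth * np.exp(-0.5 * ((H - H_dig) / sigma) ** 2)

rng = np.random.default_rng(1)
H = np.linspace(0.35, 0.43, 81)                # field scan (T)
t_dig = np.array([4.0, 16.0, 64.0, 128.0])     # digging times (s)
sigmas = []
for td in t_dig:
    sig_true = 0.012 + 2e-5 * td               # width grows with t_dig
    G = gaussian_hole(H, 0.4, sig_true, 0.39, 1.0)
    G += rng.normal(0.0, 0.01, H.size)
    popt, _ = curve_fit(gaussian_hole, H, G, p0=[0.3, 0.01, 0.39, 1.0])
    sigmas.append(abs(popt[1]))

# linear extrapolation sigma(t_dig) -> t_dig = 0 gives the intrinsic width
slope, sigma0 = np.polyfit(t_dig, sigmas, 1)
print(f"sigma_0 ~ {sigma0 * 1e3:.1f} mT (synthetic target: 12 mT)")
```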
A simple calculation of the random hyperfine field distribution was made by Hartmann-Boutron et al. , who evaluated the maximum nuclear field operating on the lowest $`M=\pm 10`$ levels of Mn<sub>12</sub>. Using the same approach it is possible to calculate the whole spectrum of hyperfine levels. Assuming for the hyperfine coupling constants the values a(MnIII) = 6.9 mT and a(MnIV) = 8.5 mT, in agreement with currently accepted values for these ions, we calculate a Gaussian distribution of fields with a width of ca. 16 mT, in good agreement with the experimental result reported above. A detailed calculation of the random hyperfine field distribution can be found in Ref. . We also applied the ’hole digging’ method at temperatures between 1.5 and 4 K. At these temperatures, the relaxation rates of the minor species are very fast and can therefore be neglected. As pointed out by several groups, the relaxation of the major species Mn<sub>12</sub>ac(1) is non-exponential at temperatures below 4 K but, nevertheless, we fitted an approximate exponential law to the short time relaxation regime (1–100 s) in order to yield a relaxation rate $`\mathrm{\Gamma }`$. We emphasise that the Prokof’ev–Stamp theory cannot be applied in the higher temperature regime because it neglects thermal activation to higher energy levels. However, the main idea of the ’hole digging’ method should still hold, i.e. it should answer the question of whether the line width is homogeneously or inhomogeneously broadened. A typical result of the ’hole digging’ experiment at 2 K is presented in Fig. 5, which shows that it is impossible to dig a hole in the $`\mathrm{\Gamma }(H)`$ dependence, suggesting that the line width is homogeneously broadened, as first suggested by Friedman et al. . This finding is also in good agreement with a recent calculation of Leuenberger and Loss, which is based on thermally assisted spin tunnelling induced by quadratic anisotropy and weak transverse magnetic fields. Their model is minimal in the sense that it is sufficient to explain the measurements without including hyperfine fields. Indeed, our measurements show that the inhomogeneous hyperfine broadening of about 12 mT is small compared to the homogeneous broadening of about 30 mT (see Fig. 5), which might be due to spin–phonon coupling. In conclusion, this letter is intended to report more accurate measurements which should help to find a satisfactory explanation of the observed quantum phenomena in molecular clusters. We used the recently developed ’hole digging’ method for measuring the intrinsic line width broadening due to local fluctuating fields and found strong evidence for the influence of nuclear spins on the tunnelling process at low temperature. At higher temperatures, spin-phonon coupling seems to dominate the resonant quantum tunnelling transitions, which leads to homogeneously broadened line widths.
###### Acknowledgements.
A. Caneschi is acknowledged for providing the Mn<sub>12</sub>ac samples. We are deeply indebted to P. Stamp for many fruitful discussions. We thank B. Barbara, A. Benoit, E. Bonet Orozco, D. Mailly, and P. Pannetier for their help in constructing our micro-SQUID magnetometer.
no-problem/9904/astro-ph9904025.html
ar5iv
text
# The Afterglow of GRB 990123 and a Dense Medium

## 1. Introduction

The gamma-ray burst (GRB) 990123 was an extraordinary event. It was the brightest burst yet detected with the Wide Field Camera on the BeppoSAX satellite (Feroci et al. 1999), and had a total gamma-ray fluence of $`5\times 10^{-4}\mathrm{erg}\mathrm{cm}^{-2}`$, which is in the top 0.3% of all bursts. It was the first burst to be simultaneously detected in the optical band. Optical emission with a peak magnitude of $`V\approx 9`$ was discovered by the Robotic Optical Transient Search Experiment (ROTSE) during the burst and was found to fade rapidly immediately after the gamma-ray emission (Akerlof et al. 1999). The detection of the redshift showed that the burst appears at $`z\approx 1.6`$ (Andersen et al. 1999; Kulkarni et al. 1999a). This implies that if the GRB emission was directed isotropically, the inferred energy release is $`1.6\times 10^{54}\mathrm{ergs}`$ (Kulkarni et al. 1999a; Briggs et al. 1999). The burst’s afterglow was detected and monitored at X-ray, optical and radio bands. It was the brightest of all GRB X-ray afterglows observed until now. The BeppoSAX detected the flux of the afterglow at 2-10 keV six hours after the gamma-ray trigger to be $`1.1\times 10^{-11}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ and the subsequent temporal decay index to be $`\alpha _X=-1.44\pm 0.07`$ (Heise et al. 1999a, b). The R-band optical afterglow about 3.5 hours after the burst showed a power-law decay with index $`\alpha _{1R}=-1.1\pm 0.03`$ (Kulkarni et al. 1999a; Castro-Tirado et al. 1999; Fruchter et al. 1999). This law continued until about $`2.04\pm 0.46`$ days after the burst. Then the optical emission began to decline following another power law with index $`\alpha _{2R}=-1.65\pm 0.06`$ (Kulkarni et al. 1999a) or $`-1.75\pm 0.11`$ (Castro-Tirado et al. 1999) or $`-1.8`$ (Fruchter et al. 1999). In addition, a radio flare was also detected about 1 day after the burst (Kulkarni et al. 1999b; Galama et al. 1999). A scenario has been proposed to explain these observations. If the burst is assumed to be produced from a jet, the steepening of the late optical afterglow decay is due to the possibility that this jet has undergone the transition from a spherical-like phase to a sideways-expansion phase (Rhoads 1997, 1999; Kulkarni et al. 1999a; Fruchter et al. 1999; Sari, Piran & Halpern 1999) or that we have observed the edge of the jet (Panaitescu & Mészáros 1998; Mészáros & Rees 1999). In this Letter we propose another possible scenario, in which the steepening of the late optical afterglow decay is due to the shock having evolved from a relativistic phase to a nonrelativistic phase in a dense medium. According to the standard afterglow shock model (for a review see Piran 1998), the afterglow is produced by synchrotron radiation or inverse Compton scattering in the external forward wave (blast wave) of the GRB fireball expanding in a homogeneous medium. The external reverse shock of the fireball may lead to a prompt optical flash (Sari & Piran 1999). As more and more ambient matter is swept up, the forward shock gradually decelerates and eventually enters a nonrelativistic phase. In the meantime, the emission from such a shock fades, dominating at the beginning in X-rays and progressively moving to the optical and radio energy bands. There are two limiting cases (adiabatic and highly radiative) for the hydrodynamical evolution of the shock.
These cases have been well studied both analytically (e.g., Mészáros & Rees 1997; Wijers, Rees & Mészáros 1997; Waxman 1997a, b; Reichart 1997; Sari 1997; Vietri 1997; Katz & Piran 1997; Mészáros, Rees & Wijers 1998; Dai & Lu 1998a; Sari, Piran & Narayan 1998; etc) and numerically (e.g., Panaitescu, Mészáros & Rees 1998; Huang et al. 1998; Huang, Dai & Lu 1998). A partially radiative (intermediate) case has been investigated (Chiang & Dermer 1998; Cohen, Piran & Sari 1998; Dai, Huang & Lu 1999). Here we only consider the limiting cases. In the highly radiative model, since all shock-heated electrons cool faster than the age of the shock, the optical afterglow should have the same temporal decay index as the X-ray afterglow (Sari et al. 1998), which is incompatible with the observations (Kulkarni et al. 1999a). In the adiabatic model, however, the difference in the decay index between optical and X-ray afterglows is likely to be $`1/4`$, which is consistent with the observational result $`\mathrm{\Delta }\alpha =\alpha _{1R}-\alpha _X\approx 0.3`$. This implies that the shock producing the afterglow of GRB 990123 has evolved adiabatically. This is the starting point of our analysis. For an adiabatic shock, the time at which it enters a nonrelativistic phase scales as $`\propto n^{-1/3}`$, where $`n`$ is the baryon number density of the medium. Therefore, this time for a shock expanding in a dense medium with density of $`n\sim 10^6\mathrm{cm}^{-3}`$ is two orders of magnitude smaller than that for a shock with the same energy in a thin medium with density of $`n\sim 1\mathrm{cm}^{-3}`$. Furthermore, as shown in Section 2, the afterglow in the nonrelativistic phase decays faster than in the relativistic phase. It is natural to expect that this effect can provide an explanation for the steepening feature of the afterglow from GRB 990123. Dense media have been discussed in the context of GRBs. First, Katz (1994) suggested collisions of relativistic nucleons with a dense cloud as an explanation of the delayed hard photons from GRB 940217. Second, to explain the radio flare of GRB 990123, Shi & Gyuk (1999) speculated that a relativistic shock may have ploughed into a dense medium off the line of sight. Third, Piro et al. (1999) and Yoshida et al. (1999) have reported an iron emission line in the X-ray afterglow spectrum of GRB 970508 and GRB 970828 respectively. The observed line intensity requires a dense medium with a large iron mass concentrated in the vicinity of the burst (Lazzati, Campana & Ghisellini 1999). Finally, dense media (e.g., clouds or ejecta) may appear in the context of some energy source models, e.g., failed supernovae (Woosley 1993), hypernovae (Paczyński 1998), supranovae (Vietri & Stella 1998), phase transition of neutron stars to strange stars (Dai & Lu 1998b), baryon decay of neutron stars (Pen & Loeb 1998), etc.

## 2. The Evolution of a Shock in a Dense Medium

### 2.1. Relativistic Phase

Now we consider an adiabatic relativistic shock expanding in a dense medium.
The Blandford-McKee (1976) self-similar solution gives the Lorentz factor of the shock, $$\gamma =\frac{1}{4}\left[\frac{17E(1+z)^3}{\pi nm_pc^5t^3}\right]^{1/8}=2E_{54}^{1/8}n_5^{-1/8}t_{\mathrm{day}}^{-3/8}[(1+z)/2.6]^{3/8},$$ (1) where $`E=E_{54}\times 10^{54}\mathrm{ergs}`$ is the total isotropic energy, $`n_5=n/(10^5\mathrm{cm}^{-3})`$, $`t=t_{\mathrm{day}}\times 1\mathrm{day}`$ is the observer’s time since the gamma-ray trigger, $`z`$ is the redshift of the source generating this shock, and $`m_p`$ is the proton mass. In analyzing the spectrum and light curve of synchrotron radiation from the shock, one needs to know two crucial frequencies: the synchrotron radiation peak frequency ($`\nu _m`$) and the cooling frequency ($`\nu _c`$). In the standard afterglow shock picture, the electrons heated by the shock are assumed to have a power-law distribution: $`dN_e/d\gamma _e\propto \gamma _e^{-p}`$ for $`\gamma _e\ge \gamma _{em}`$, where $`\gamma _e`$ is the electron Lorentz factor and the minimum Lorentz factor is $`\gamma _{em}=610ϵ_e\gamma `$. The power-law index is $`p\approx 2.56`$, obtained by fitting the spectrum and light curve of the observed afterglow of GRB 990123 (see below). We further assume that $`ϵ_e`$ and $`ϵ_B`$ are the ratios of the electron and magnetic energy densities to the thermal energy density of the shocked medium, respectively. Based on these assumptions, the synchrotron radiation peak frequency in the observer’s frame can be written as $$\nu _m=\frac{\gamma \gamma _{em}^2}{1+z}\frac{eB^{}}{2\pi m_ec}=8.0\times 10^{11}ϵ_e^2ϵ_{B,-6}^{1/2}E_{54}^{1/2}t_{\mathrm{day}}^{-3/2}[(1+z)/2.6]^{1/2}\mathrm{Hz},$$ (2) where $`ϵ_{B,-6}=ϵ_B/10^{-6}`$ and $`B^{}=(32\pi ϵ_B\gamma ^2nm_pc^2)^{1/2}`$ is the internal magnetic field strength of the shocked medium. According to Sari et al. (1998), the cooling frequency, the frequency of electrons with Lorentz factor $`\gamma _c`$ that cool on the dynamical time of the shock, is given by $$\nu _c=\frac{\gamma \gamma _c^2}{1+z}\frac{eB^{}}{2\pi m_ec}=\frac{18\pi em_ec(1+z)}{\sigma _T^2B^{\mathrm{\prime}3}\gamma t^2}=1.9\times 10^{16}ϵ_{B,-6}^{-3/2}E_{54}^{-1/2}n_5^{-1}t_{\mathrm{day}}^{-1/2}[(1+z)/2.6]^{-1/2}\mathrm{Hz},$$ (3) where $`\sigma _T`$ is the Thomson scattering cross section. From equations (2) and (3), Sari et al. (1998) have further defined two critical times, when the break frequencies $`\nu _m`$ and $`\nu _c`$ cross the observed frequency $`\nu =\nu _{15}\times 10^{15}\mathrm{Hz}`$: $`t_m=8.6\times 10^{-3}ϵ_e^{4/3}ϵ_{B,-6}^{1/3}E_{54}^{1/3}[(1+z)/2.6]^{1/3}\nu _{15}^{-2/3}\mathrm{days}`$, and $`t_c=380ϵ_{B,-6}^{-3}E_{54}^{-1}n_5^{-2}[(1+z)/2.6]^{-1}\nu _{15}^{-2}\mathrm{days}`$. Therefore we see that for $`E_{54}\approx 1.6`$, $`ϵ_e\approx 0.1`$, $`ϵ_{B,-6}\approx 0.02`$, and $`n_5\approx 30`$ inferred in the next section, the optical afterglow in the several days after the burst should result from slowly cooling electrons and the X-ray afterglow from rapidly cooling electrons.
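This last claim can be checked by direct evaluation of the scaling relations (2)-(3); a short numerical sketch (not in the original), using the parameter values quoted above:

```python
# Evaluate nu_m and nu_c at t ~ 1 day for E_54 ~ 1.6, n_5 ~ 30, eps_e ~ 0.1,
# eps_{B,-6} ~ 0.02 and z = 1.6, and verify that the optical band lies between
# nu_m and nu_c (slow cooling) while the X-ray band lies above nu_c.
E54, n5, eps_e, epsB6, z = 1.6, 30.0, 0.1, 0.02, 1.6
t_day = 1.0
zf = (1 + z) / 2.6

nu_m = 8.0e11 * eps_e**2 * epsB6**0.5 * E54**0.5 * t_day**-1.5 * zf**0.5
nu_c = 1.9e16 * epsB6**-1.5 * E54**-0.5 / n5 * t_day**-0.5 * zf**-0.5
print(f"nu_m ~ {nu_m:.2e} Hz, nu_c ~ {nu_c:.2e} Hz at t = 1 day")

nu_R = 4.3e14          # R band
nu_X = 2.4e17          # ~1 keV
print("optical: nu_m < nu_R < nu_c ->", nu_m < nu_R < nu_c)
print("X-ray  : nu_X > nu_c        ->", nu_X > nu_c)
```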
The observed synchrotron radiation peak flux can be obtained from $$F_{\nu _m}=\frac{N_e\gamma P_{\nu _m}^{}(1+z)}{4\pi D_L^2}=4.2ϵ_{B,-6}^{1/2}E_{54}n_5^{1/2}[(1+z)/2.6]D_{L,28}^{-2}\mathrm{Jy},$$ (4) where $`N_e`$ is the total number of swept-up electrons, $`P_{\nu _m}^{}=m_ec^2\sigma _TB^{}/(3e)`$ is the radiated power per electron per unit frequency in the frame comoving with the shocked medium, and $`D_L=D_{L,28}\times 10^{28}\mathrm{cm}`$ is the distance to the source. In the light of equations (2)-(4), one can easily find the spectrum and light curve of the afterglow, $$F_\nu =\{\begin{array}{cc}(\nu /\nu _m)^{-(p-1)/2}F_{\nu _m}\propto \nu ^{-(p-1)/2}t^{3(1-p)/4}\hfill & \mathrm{if}\nu _m<\nu <\nu _c;\hfill \\ (\nu _c/\nu _m)^{-(p-1)/2}(\nu /\nu _c)^{-p/2}F_{\nu _m}\propto \nu ^{-p/2}t^{(2-3p)/4}\hfill & \mathrm{if}\nu >\nu _c,\hfill \end{array}$$ (5) where the low-frequency radiation component has not been considered (Sari et al. 1998). In the GRB 990123 case, we require $`\nu _m<\nu <\nu _c`$ for the optical afterglow and $`\nu >\nu _c`$ for the X-ray afterglow. Thus, the R-band afterglow decay index is $`\alpha _R=3(1-p)/4`$ and the X-ray decay index is $`\alpha _X=(2-3p)/4`$, which are well consistent with the observational results $`\alpha _{1R}=-1.1\pm 0.03`$ and $`\alpha _X=-1.44\pm 0.07`$ if $`p\approx 2.56`$.

### 2.2. Nonrelativistic Phase

As it sweeps up sufficient ambient matter, the shock will eventually go into a nonrelativistic phase. During such a phase, the shock’s velocity $`v\propto t^{-3/5}`$, its radius $`r\propto t^{2/5}`$, the internal field strength $`B^{}\propto t^{-3/5}`$ and the typical electron Lorentz factor $`\gamma _{em}\propto t^{-6/5}`$. Thus, we obtain the synchrotron peak frequency $`\nu _m\propto \gamma _{em}^2B^{}\propto t^{-3}`$, the cooling frequency $`\nu _c\propto B^{\mathrm{\prime}-3}t^{-2}\propto t^{-1/5}`$ and the peak flux $`F_{\nu _m}\propto N_eP_{\nu _m}^{}\propto r^3B^{}\propto t^{3/5}`$. According to these scaling laws, we further derive the spectrum and light curve at the nonrelativistic stage: $$F_\nu =\{\begin{array}{cc}(\nu /\nu _m)^{-(p-1)/2}F_{\nu _m}\propto \nu ^{-(p-1)/2}t^{(21-15p)/10}\hfill & \mathrm{if}\nu _m<\nu <\nu _c;\hfill \\ (\nu _c/\nu _m)^{-(p-1)/2}(\nu /\nu _c)^{-p/2}F_{\nu _m}\propto \nu ^{-p/2}t^{(4-3p)/2}\hfill & \mathrm{if}\nu >\nu _c.\hfill \end{array}$$ (6) From equation (6), we can see the R-band decay index is $`\alpha _R=(21-15p)/10`$ for radiation from slowly-cooling electrons or $`\alpha _R=(4-3p)/2`$ for radiation from rapidly-cooling electrons. If $`p\approx 2.56`$, then $`\alpha _R\approx -1.74`$ or $`-1.84`$, in excellent agreement with the observations in the time interval of 2.5 days to 20 days after the burst (Kulkarni et al. 1999a; Fruchter et al. 1999; Castro-Tirado et al. 1999).

## 3. Constraints on Parameters

In the above section, we show that as an adiabatic shock expands in a dense medium from an ultrarelativistic phase to a nonrelativistic phase, the decay of the radiation from such a shock will steepen. This effect may fit the observed steepening better than the alternative interpretation — jet sideways expansion. In the latter interpretation, the temporal decay of a late afterglow is very likely to be $`\propto t^{-p}`$ (Rhoads 1997, 1999; Sari et al. 1999). We further analyze our effect and infer some parameters of the model. According to the analysis of the R-band light curve of the GRB 990123 afterglow (Kulkarni et al. 1999a; Fruchter et al. 1999; Castro-Tirado et al. 1999), the observed break occurred at $`t=2.04\pm 0.46`$ days.
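As a quick arithmetic check (not in the original), the decay indices implied by equations (5) and (6) on the two sides of this break can be evaluated directly for $`p=2.56`$:

```python
# Decay indices of eqs. (5) and (6) for p = 2.56; slow cooling applies to the
# optical band (nu_m < nu < nu_c), fast cooling to the X-ray band (nu > nu_c).
p = 2.56
print("relativistic,    slow cooling:", 3 * (1 - p) / 4)     # -> -1.17
print("relativistic,    fast cooling:", (2 - 3 * p) / 4)     # -> -1.42
print("nonrelativistic, slow cooling:", (21 - 15 * p) / 10)  # -> -1.74
print("nonrelativistic, fast cooling:", (4 - 3 * p) / 2)     # -> -1.84
```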
The observed break time implies $`\gamma \approx 1`$ at $`t_{\mathrm{day}}\approx 2.5`$. From equation (1), therefore, we find $`n_5\approx 16E_{54}`$, where the redshift $`z=1.6`$ has been used. We now continue to consider two observational results. First, on January 23.577 UT, the Palomar 60-inch telescope detected the R-band magnitude $`R=18.65\pm 0.04`$, corresponding to the flux $`F_R\approx 100\mu \mathrm{Jy}`$ at $`t_{\mathrm{day}}\approx 0.17`$ (Kulkarni et al. 1999a). Considering this result in equation (5) together with equations (2) and (4), we can derive $$ϵ_e^{p-1}ϵ_{B,-6}^{(p+1)/4}E_{54}^{(p+3)/4}n_5^{1/2}\approx 0.01,$$ (7) where the right-hand number has been obtained by taking $`p\approx 2.56`$ and $`D_{L,28}\approx 3.7`$. Second, on January 24.65 UT, the BeppoSAX observed the X-ray (2-10 keV) flux $`F_X\approx 5\times 10^{-2}\mu `$Jy (Heise et al. 1999a, b). Combining this result with equations (2)-(5), we can also derive $$ϵ_e^{p-1}ϵ_{B,-6}^{(p-2)/4}E_{54}^{(p+2)/4}\approx 0.03.$$ (8) Since $`E_{54}\approx 1.6`$ (Briggs et al. 1999; Kulkarni et al. 1999a), the medium density is $`n_5\approx 30`$ and the solution of equations (7) and (8) is $`ϵ_e\approx 0.1`$ and $`ϵ_{B,-6}\approx 0.02`$. Our inferred value of $`ϵ_e`$ is near the equipartition value, in agreement with the results of Wijers & Galama (1998) and Granot, Piran & Sari (1998), while our $`ϵ_B`$ is about six orders of magnitude smaller than the value inferred from the afterglow of GRB 970508. Of course, the field energy density for GRB 971214 has been estimated to be less than $`10^{-5}`$ times the equipartition value (Wijers & Galama 1998). As suggested by Galama et al. (1999), such differences in field strength may reflect differences in energy flow from the central engine.

## 4. Discussion and Conclusion

In the above section, we found the medium density $`n\approx 3\times 10^6\mathrm{cm}^{-3}`$ required for our model to fit the observed optical and X-ray afterglow of GRB 990123. Now we show that even in the presence of such a dense medium, the optical and X-ray radiation from the forward shock was neither self-absorbed in the shocked medium nor scattered in the unshocked medium. First, the self-absorption frequency of the shocked medium is (Wijers & Galama 1998; Granot et al. 1998) $`\nu _a\approx 10^3\mathrm{GHz}(ϵ_e/0.1)^{-1}(ϵ_{B,-6}/0.01)^{1/5}E_{54}^{1/5}(n_5/10)^{3/5}`$. This estimate should be an upper limit because of the presence of a possible low-energy electron population (Waxman 1997b). Clearly, $`\nu _a`$ is much less than the optical frequency, implying that self-absorption in the shocked medium did not affect the optical and X-ray afterglow. In fact, this estimate is valid only for $`\nu _a<\nu _m`$. When $`\nu _a>\nu _m`$, $`\nu _a`$ must have decayed. As a result, the flux at 8.46 GHz first increased as $`t^{1.25}`$ and then declined as $`t^{-1.74}`$ for $`\nu _a<8.46`$ GHz during the nonrelativistic phase. This might provide an explanation for the observed radio flare. Second, a photon emitted from the shock may be scattered by the electrons in the unshocked medium. The scattering optical depth is $`\tau \approx \sigma _TnR`$ (where $`R`$ is the typical radius of the medium). If the medium was distributed isotropically and homogeneously and its mass $`M\approx 10M_{\odot }`$ (the typical mass of a supernova ejecta), then $`\tau \approx 0.05(M/10M_{\odot })^{1/3}(n_5/10)^{2/3}\ll 1`$. This implies that the afterglow from the shock was hardly affected by the medium.
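Returning to the parameter constraints of Section 3: equations (7) and (8) are linear in the logarithms of $`ϵ_e`$ and $`ϵ_{B,-6}`$ once $`E_{54}`$ and $`n_5`$ are fixed, so the quoted solution can be reproduced with a two-line solve (a cross-check sketch, not part of the original):

```python
import numpy as np

# Solve eqs. (7)-(8) for log10(eps_e), log10(eps_{B,-6}) with p = 2.56,
# E_54 = 1.6 and n_5 = 16*E_54.
p, E54 = 2.56, 1.6
n5 = 16 * E54

A = np.array([[p - 1, (p + 1) / 4],      # exponents in eq. (7)
              [p - 1, (p - 2) / 4]])     # exponents in eq. (8)
b = np.array([np.log10(0.01) - (p + 3) / 4 * np.log10(E54)
                              - 0.5 * np.log10(n5),
              np.log10(0.03) - (p + 2) / 4 * np.log10(E54)])
x, y = np.linalg.solve(A, b)
print(f"eps_e ~ {10**x:.2f}, eps_B,-6 ~ {10**y:.3f}")   # ~0.10 and ~0.02
```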
For other well-studied afterglows, e.g., GRB 970228 and GRB 970508, the ambient densities must be very low for three reasons: (i) In these bursts there was no observed break in the optical light curve as long as the afterglow could be observed (Fruchter et al. 1998; Zharikov et al. 1998). (ii) The fluctuation appearing in the radio afterglow light curve of GRB 970508 requires that the shock had been relativistic for several weeks (Waxman, Kulkarni & Frail 1998). (iii) The analysis of the afterglow spectrum of GRB 970508 leads to a low ambient density $`n<10\mathrm{cm}^{-3}`$ (Wijers & Galama 1998; Granot et al. 1998). However, the observed iron emission line in the X-ray afterglow spectrum of GRB 970508 indeed requires a dense medium with density $`\sim 10^9\mathrm{cm}^{-3}`$ (Lazzati et al. 1999). The only way to reconcile a monthly lasting power-law afterglow with iron line emission is through a particular geometry, in which the line of sight is devoid of the dense medium. In contrast to this idea, we suggest that for GRB 990123 a dense medium of $`n\approx 3\times 10^6\mathrm{cm}^{-3}`$ appears at least along the line of sight or perhaps isotropically. How was the dense medium produced? One possibility is a cloud and another is an ejecta from the GRB site. There have been several source models (mentioned in the Introduction) in the literature which may lead to massive ejecta. Here we want to discuss one of them in detail. Timmes, Woosley & Weaver (1996) showed that Type II supernovae may produce a kind of neutron star with $`1.73M_{\odot }`$. If these massive neutron stars have very short periods at birth, they may subsequently convert into strange stars due to rapid loss of angular momentum (Cheng & Dai 1998), and perhaps the strange stars are differentially rotating (Dai & Lu 1998b). Even though this model is somewhat similar to the supranova model of Vietri & Stella (1998), the resultant compact objects are strange stars in our model and black holes in the supranova model. We further discuss implications of our model. First, the model leads to low-mass loading matter because of the thin baryonic crusts of the strange stars. Second, such stars result in GRBs with spiky light curves, consistent with the analytical result from the observed data of GRB 990123 (Fenimore, Ramirez-Ruiz & Wu 1999). The third advantage of this model is that it can explain well the properties of the early afterglow of GRB 970508 by considering energy injection from the central pulsar (Dai & Lu 1998b, c). Finally, a dense medium, the supernova ejecta, appears naturally. Our scenario proposed in this Letter requires a dense medium with density $`\approx 3\times 10^6\mathrm{cm}^{-3}`$ to explain the steepening in the temporal decay of the R-band afterglow about 2.5 days after GRB 990123. We also suggest that this medium could be a supernova/supranova/hypernova ejecta. Thus, if the mass of the medium is assumed to be $`M\approx 10M_{\odot }`$, its radius can be estimated to be $`R\approx 3\times 10^{17}\mathrm{cm}(M/10M_{\odot })^{1/3}(n_5/10)^{-1/3}`$. According to equation (1), we can integrate $`dr=2\gamma ^2cdt`$ and thus find that the postburst 2.5-day time in the observer’s frame corresponds to about 20 days in the unshocked medium’s frame. This implies that the radius at which the shock entered a nonrelativistic phase is about $`5\times 10^{16}`$ cm. This radius is much less than that of the medium. Therefore, the medium discussed here was so wide and dense that the ultrarelativistic shock must have become nonrelativistic about 2.5 days after the burst.
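The two numbers at the end of this argument follow directly from integrating $`dr=2\gamma ^2cdt`$ with $`\gamma (t)`$ from equation (1); a numerical sketch (a cross-check, not part of the original):

```python
import numpy as np

# Radius and medium-frame time for the shock at the observed break,
# using gamma(t) of eq. (1) with E_54 = 1.6, n_5 = 16*E_54, z = 1.6.
c, day = 3e10, 86400.0
E54, z = 1.6, 1.6
n5 = 16 * E54

def gamma(t_day):
    return (2.0 * E54**0.125 * n5**-0.125 * t_day**-0.375
            * ((1 + z) / 2.6)**0.375)

t = np.geomspace(1e-6, 2.5, 200_000)        # observer time, days
f = 2.0 * gamma(t)**2 * c                   # dr/dt in cm/s
x = t * day
r = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))   # trapezoidal integral
print(f"r(2.5 d) ~ {r:.1e} cm")             # ~5e16 cm, as quoted
print(f"r/c ~ {r / c / day:.0f} days in the unshocked medium frame")
```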
In summary, a simple explanation for the “steepening” observed in the temporal decay of the late R-band afterglow of GRB 990123 is that a shock expanding in a dense medium with density of $`\approx 3\times 10^6\mathrm{cm}^{-3}`$ has evolved from a relativistic phase to a nonrelativistic phase. We find that this scenario not only explains the optical afterglow well but also accounts for the observed X-ray afterglow quantitatively. We would like to thank J. I. Katz, S. R. Kulkarni, A. Mitra and the anonymous referee for invaluable suggestions, and Y. F. Huang and D. M. Wei for helpful discussions. This work was supported by the National Natural Science Foundation of China (grants 19825109 and 19773007).
no-problem/9904/cond-mat9904106.html
ar5iv
text
Figure 1 (caption): Plot of the local exponent of $`D_0`$ as a function of t for several values of p.

# The Directed Polymer — Directed Percolation Transition

Ehud Perlsman and Shlomo Havlin
The Resnick Building, Minerva Center and Department of Physics, Bar-Ilan University, 52900 Ramat-Gan, Israel.

ABSTRACT

We study the relation between the directed polymer and the directed percolation models, for the case of a disordered energy landscape where the energies are taken from a bimodal distribution. We find that at the critical concentration of the directed percolation, the directed polymer undergoes a transition from the directed polymer universality class to the directed percolation universality class. We also find that directed percolation clusters affect the characteristics of the directed polymer below the critical concentration. PACS numbers: 05.50.+q, 64.60.Ak, 64.60.Cn

Several models have been developed for directed paths in disordered media. Two well known models are directed polymer and directed percolation. While the directed polymer is based on global optimization, the directed percolation can be described as a local process. The directed polymer is considered as a model for physical processes such as tearing or cracks, while the directed percolation has been used to model, for example, the invasion of a low-viscosity fluid into a high-viscosity one, as well as the interface of liquids in materials such as paper. The relation between the two models is not yet clear and even controversial, as discussed below. The two models of directed polymer and directed percolation can be described in a similar way as follows: In a square lattice which was cut along its diagonal and oriented as a triangle whose apex is up and the diagonal is its base, we assign to each bond a random number. Of all the paths leading from the apex (the origin) to the base we refer only to those whose direction is *always* down to the base. For each one of these paths we calculate the sum of the random numbers along it. In the directed polymer model, the sum of the random numbers along the path defines its value. We focus our interest on the path of minimal value (which we call ”the optimal path”), and define the roughness exponent $`\nu `$ by $`D\propto t^\nu `$, where D is the mean distance of the endpoint of the optimal path from the center of the base and t is the size of the triangle. In this model the random numbers are usually taken from a continuous distribution, and thus the optimal path is uniquely defined. Huse and Henley, who introduced this model, found that $`\nu \approx 2/3`$, and the value 2/3 is considered to be exact. Another exponent which characterizes this model is $`\omega `$, defined by $`\sigma _E\propto t^\omega `$, where $`E`$ is the value of the optimal path and $`\sigma _E`$ is its standard deviation. These two exponents are related by the scaling relation $`\omega =2\nu -1`$. In the directed percolation model the random numbers are taken from a bimodal (0,1) distribution, and we can define a percolation cluster as the collection of lattice sites which are connected by zero sum paths to the origin. In this model we can also define a roughness exponent by $`W\propto L^\nu `$, where W is the mean width and L is the mean length of a percolation cluster. W and L depend on the probability to get 0 (denoted by p) by the relations $`W\propto (p_c-p)^{-\nu _{\perp }}`$, $`L\propto (p_c-p)^{-\nu _{\parallel }}`$, where $`p_c`$ is the critical probability $`\approx 0.6447`$, $`\nu _{\perp }`$ is the transverse exponent $`\approx 1.097`$, and $`\nu _{\parallel }`$ is the longitudinal exponent $`\approx 1.733`$.
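The directed polymer recursion just described is a standard dynamic-programming computation; a minimal sketch (our own illustration, with an arbitrary seed and lattice size) that returns the optimal path value and the set of optimal endpoints:

```python
import numpy as np

# Row t has t+1 sites; each site of row t+1 is reached from its upper-left or
# upper-right neighbour through a bond carrying a bimodal (0,1) random number
# (0 with probability p).  E[j] holds the minimal path value from the apex.
rng = np.random.default_rng(3)

def optimal_path_values(t_max, p):
    E = np.zeros(1)
    for t in range(t_max):
        left = rng.random(t + 1) > p       # bond value 1 with prob. 1 - p
        right = rng.random(t + 1) > p
        new = np.full(t + 2, np.inf)
        np.minimum.at(new, np.arange(t + 1), E + left)       # down-left step
        np.minimum.at(new, np.arange(1, t + 2), E + right)   # down-right step
        E = new
    return E

t_max = 1000
E = optimal_path_values(t_max, 0.55)
best = np.flatnonzero(E == E.min())
print("optimal path value E:", E.min())
print("optimal endpoint offsets from the centre:", best - t_max // 2)
```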
Since $`\nu =\nu _{\perp }/\nu _{\parallel }\approx 0.633`$, there are two independent exponents in this model, compared to only one in the directed polymer model. Huse and Henley refer to the possibility that in the directed polymer model the random numbers are taken from a bimodal probability distribution, and stated that the roughness exponent has the same value as in the continuous case, i.e. $`\nu =2/3`$. On the other hand, more recently, Lebedev and Zhang claimed that in the bimodal case, for all $`p\le p_c`$ the exponent is actually the same as in the directed percolation case, i.e. $`\nu \approx 0.633`$. In this Letter we address this controversial issue by studying the directed polymer model with a bimodal (0,1) distribution. The numerical simulations lead to the following main conclusions: 1. For $`p<p_c`$ and for $`p>p_c`$, for large t the value of the roughness exponent is $`2/3`$, the directed polymer value. 2. For $`p=p_c`$, the value of the roughness exponent is $`0.633`$, the directed percolation value. 3. For $`p<p_c`$ and $`t\gg L`$, the sum of the random numbers along the optimal path is inversely proportional to the mean directed percolation cluster length, i.e. $`E\propto t/L`$. It should be emphasized that a study of the directed polymer model yields dependence on both the critical probability and the mean percolation cluster length, L, of directed percolation. So the general conclusion is that the two models are closely related. In this study the random numbers are taken from a bimodal (0,1) distribution, and thus there is usually more than one optimal path and more than one optimal endpoint. If there exists a real percolation cluster spanning from the origin to the base, all the optimal endpoints (and no other point of the base) belong to this percolation cluster, but otherwise, the optimal paths and endpoints are not necessarily connected in any special manner. The set of optimal endpoints can be characterized by two variables: 1. $`D_0`$ - the distance of the center of the optimal endpoint set from the center of the base. The center of the set is computed by $`(X_l+X_r)/2`$, where $`X_l`$ and $`X_r`$ are the leftmost and rightmost points of the set. 2. $`W_0`$ - the width of the optimal endpoint set, which is computed by $`W_0=X_r-X_l`$. It is expected that for large t, $`D_0\propto t^{\nu _D}`$ and $`W_0\propto t^{\nu _W}`$, where $`\nu _D`$ and $`\nu _W`$ are in the range . In order to test this hypothesis and estimate $`\nu _D`$ and $`\nu _W`$, we define the local roughness exponent of a variable V at the point t as the slope of the curve relating the logarithm of V to the logarithm of t. The numerical simulations provide estimates for the local exponents of $`D_0`$ and $`W_0`$ by computing $`(\mathrm{log}V(2t)-\mathrm{log}V(t/2))/\mathrm{log}4`$, where V is either $`D_0`$ or $`W_0`$. These estimates are presented in Figures 1 and 2 for 5 probabilities: $`p=0.25`$, $`p=0.5`$, $`p=0.62`$, $`p=0.64`$ and $`p=0.6447(p_c)`$. In Figure 1 it is seen that the slope of the $`p=p_c`$ curve is stable at a value $`\approx 0.63`$, while as p is further below $`p_c`$, the slope seems to cross over from $`\approx 0.63`$ to a value close to 2/3. In Figure 2 it is seen that the slope of the $`p=p_c`$ curve converges to a value $`\approx 0.63`$, while for $`p<p_c`$ the slope declines and approaches (for the probabilities further from $`p_c`$) a value close to 1/3. These crossovers from directed percolation exponents to directed polymer exponents can be explained by the relation between the triangle size t and the longitudinal correlation length $`\xi _{\parallel }`$.
For $`t<\xi _{\parallel }`$ we expect properties of directed percolation, while for $`t>\xi _{\parallel }`$ we expect properties of directed polymer. The results presented in Figure 2 deserve some explanation, as one might expect that $`W_0`$ should be proportional to W - the width of a typical directed percolation cluster (at the relevant probability). If this were the case, we would get in Figure 2 an initial slope of 0.63, and then the slope would decline to zero. Evidently, the picture is different, as for each probability $`p<p_c`$ the slope will eventually converge to a value close to 1/3. The reason for this lies in the ultrametric tree structure of the directed polymer problem. It was shown that in the continuous distribution case, the region of endpoints of paths whose value difference from the value of the optimal path is smaller than a (small) constant increases as $`t^{1/3}`$. In our case of discrete values, the same should hold for zero difference between the values of the optimal paths. The main conclusion from Figures 1,2 is that for $`p<p_c`$ and sufficiently large t, the optimal endpoint set of width $`\propto t^{1/3}`$ is located around a point whose distance from the center is $`\propto t^{2/3}`$. As for $`t\gg 1`$, $`t^{1/3}`$ is negligible compared to $`t^{2/3}`$, the situation is similar to the continuous distribution case, where instead of one endpoint there is a ”cloud” of endpoints whose distance from the center is $`\propto t^{2/3}`$. For $`p>p_c`$ there is a finite probability for points of the base to belong to a real directed percolation cluster, so that $`W_0\propto t`$. On the other hand, we find numerically that $`D_0\propto t^{1/2}`$. As $`W_0\gg D_0`$, it is certain that members of the optimal endpoint set are found on both sides of the center, and another definition of the distance is needed. A definition that takes into account the fact that the directed polymer is equally likely to choose any one of the optimal paths is $`D_r=\sum _in_i|x_i|/\sum _in_i`$, where $`n_i`$ is the number of optimal paths whose endpoint is $`x_i`$. The local exponents of $`D_r`$ are presented in Figure 3 for 3 probabilities: $`p=0.5`$, $`p=0.6447(p_c)`$, and $`p=0.75`$. As might be expected, the results for $`p<p_c`$ and $`p=p_c`$ approach the values 2/3 and 0.63 respectively, further supporting that for $`p<p_c`$, $`\nu =2/3`$, while for $`p=p_c`$, $`\nu \approx 0.63`$. The results for $`p>p_c`$ are inconclusive, as the local exponent does not yet converge at this triangle size. But, for $`p>p_c`$ it is expected that the situation is governed by one big percolation cluster. If this is the case, it is possible (and much less computer power demanding) to ”grow” percolation clusters and to find the dependence of $`D_r`$ on t in that way. The local exponents of $`D_r`$ for directed polymer at $`p=0.75`$ and for directed percolation at the same probability are shown in Figure 4. As can be seen in Figure 4, the results of directed polymer and directed percolation are statistically indistinguishable, and it is quite safe to assume that this situation will not change for larger t. The local exponent for directed percolation approaches a value $`\approx 2/3`$, a result which was obtained earlier by Balents and Kardar. Thus we conclude that for $`p>p_c`$, $`\nu `$ of directed polymer has the same value as for $`p<p_c`$, i.e. $`\nu \approx 2/3`$. It was mentioned above that the results of Figures 1,2 can be explained in terms of the longitudinal correlation length $`\xi _{\parallel }`$.
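Computing $`D_r`$ requires propagating, alongside the minimal path value, the number of degenerate optimal paths reaching each site; a small sketch of this bookkeeping (our own illustration, with arbitrary seed and sizes):

```python
import numpy as np

rng = np.random.default_rng(4)

def d_r(t_max, p):
    E = np.zeros(1)        # minimal path value to each site of the row
    n = np.ones(1)         # number of optimal paths reaching that site
    for t in range(t_max):
        bl = (rng.random(t + 1) > p).astype(float)   # left-going bonds
        br = (rng.random(t + 1) > p).astype(float)   # right-going bonds
        newE = np.full(t + 2, np.inf)
        newn = np.zeros(t + 2)
        for j in range(t + 1):                       # two outgoing bonds
            for k, e in ((j, E[j] + bl[j]), (j + 1, E[j] + br[j])):
                if e < newE[k] - 1e-9:               # strictly better path
                    newE[k], newn[k] = e, n[j]
                elif abs(e - newE[k]) < 1e-9:        # degenerate path
                    newn[k] += n[j]
        E, n = newE, newn
    x = np.abs(np.arange(t_max + 1) - t_max / 2.0)   # distance from centre
    best = np.abs(E - E.min()) < 1e-9
    return (n[best] * x[best]).sum() / n[best].sum()

print("D_r(t=400, p=0.5)  =", d_r(400, 0.5))
print("D_r(t=400, p=0.75) =", d_r(400, 0.75))
```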
A more direct result follows from a picture of the optimal path as a series of zero sum segments whose mean length is L, connected by single bonds of value 1. (Obviously, this picture holds only for $`t\gg L`$). According to this picture, we expect that $`E\propto t/L`$, and for fixed t, $`E\times L\approx \mathrm{const}`$. Figure 5 presents the results for $`E/E_0`$, $`L/L_0`$, and $`(E\times L)/(E_0\times L_0)`$ in the range of probabilities $`0.5<p<0.64`$, where $`E_0`$ is E at $`p=0.5`$ and $`L_0`$ is L at $`p=0.5`$. As can be seen in Figure 5, E and L form a mirror reflection of each other over two orders of magnitude, while $`E\times L`$ is a slightly decreasing function of p. So there is an almost one-to-one correspondence between results obtained from the directed polymer model (the values of E), and results obtained *independently* from the directed percolation model (the values of L). In conclusion, it was shown that the bimodal distribution directed polymer model can be characterized in terms of the directed percolation model, and that the percolation threshold probability $`p_c`$ plays a critical role in the directed polymer case.

REFERENCES
1. D.A. Huse and C.L. Henley, Phys. Rev. Lett. 54, 2708 (1985).
2. W. Kinzel in: Percolation Structures and Processes, ed. G. Deutscher, R. Zallen and J. Adler, 1983 (Hilger, Bristol).
3. J. Kertész, V.K. Horváth and F. Weber, Fractals, 1, 67 (1992).
4. Fractals and Disordered Systems, ed. A. Bunde and S. Havlin, 2nd Edition, 1995 (Springer, Berlin).
5. A.-L. Barabási and H.E. Stanley, Fractal Concepts in Surface Growth, 1995 (Cambridge University Press, Cambridge).
6. N.I. Lebedev and Y.-C. Zhang, J. Phys. A, 28, L1-L6, (1995).
7. E. Perlsman and M. Schwartz, Europhys. Lett. 17, 11, (1992).
8. L. Balents and M. Kardar, J. Stat. Phys. 67, 1, (1992).
no-problem/9904/cond-mat9904021.html
ar5iv
text
# Storage capabilities of a 4-junction single electron trap with an on-chip resistor

## Abstract

We report on the operation of a single electron trap comprising a chain of four Al/AlO<sub>x</sub>/Al tunnel junctions attached, at one side, to a memory island and, at the other side, to a miniature on-chip Cr resistor ($`R\approx 50`$ k$`\mathrm{\Omega }`$) which served to suppress cotunneling. At appropriate voltage bias the bi-stable states of the trap, with the charges differing by the elementary charge $`e`$, were realized. At low temperature, spontaneous switching between these states was found to be infrequent. For instance, at $`T=70`$ mK the system was capable of holding an electron for more than 2 hours, this time being limited by the time of the measurement. PACS numbers: 73.23.Hk, 73.40.Gk, 85.30.Wx

The quality and complexity of Single Electron Tunneling (SET) devices, i.e. the circuits in which the tunneling of electrons is governed by the Coulomb blockade effect (see, for example, the reviews), are steadily growing. The increase of the number of junctions in these circuits is often motivated by the necessity to increase the Coulomb energy of the system and to reduce spontaneous cotunneling events. These events, which are higher-order processes, are associated with electron tunneling occurring in several junctions simultaneously and quantum-coherently. They set a limit to the accuracy of SET devices such as turnstiles and pumps (in which an ac signal of frequency $`f`$ applied to the gates clocks the transfer of single electrons), and to the storing times of traps (capable of holding an electron on a memory node of the circuit for long periods). Because of the cotunneling, a 4-junction SET turnstile and a 3-junction pump realize the relation $`I=ef`$ with an accuracy of only about 1%, which is insufficient for their applications in metrology. The 4-junction SET traps made by Fulton et al. and Lafarge et al. had maximum trapping times as short as $`1`$ s. In order to make these devices more accurate and reliable (that is to say, to reduce the probability of one of the most serious sources of errors, namely the cotunneling), the number of series-connected junctions $`N`$ should be increased. As was recently demonstrated by Keller et al., the 7-junction SET pump driven by a 5 MHz signal had an error rate as low as 15 parts in $`10^9`$. In the static case, $`I=0`$, the time for which an electron can be kept in a trap also rises appreciably with $`N`$. For instance, the 7-junction trap of Dresselhaus et al. allowed electrons to be stored for several hours, this period being limited by the observation time. A similarly high storage capability was found for the 9-junction trap by Krupenin et al. On the other hand, increasing $`N`$ is not the only way to suppress the cotunneling. As was theoretically shown by Odintsov et al. for the SET transistor and by Golubev and Zaikin for the $`N`$-junction ($`N\ge 2`$) chain, a dissipative environment with $`|Z(\omega )|=R`$ comparable to or exceeding $`R_k=h/e^2\approx 26`$ k$`\mathrm{\Omega }`$ can do a good job of suppressing the cotunneling. (The mechanism of this suppression is qualitatively similar to that of the Coulomb blockade in a single tunnel junction arising due to a high serial resistance. This resistor hampers charge relaxation in the electric circuit and, hence, drastically influences the tunneling rates.)
They found the cotunneling contribution to the I–V curve of the chain at $`T=0`$ and at a small voltage $`V`$ to be $$I\propto V^{2(N+z)-1},$$ (1) where $`z=R/R_k`$. As can be seen from Eq. (1), the parameter $`z`$ can be regarded as a number of imaginary tunnel junctions $`\mathrm{\Delta }N`$ attached to the $`N`$-junction chain and ensuring a similar suppression of cotunneling as the resistance $`R`$. In this work we pioneered a dramatic reduction of the cotunneling in a multi-junction circuit (SET trap) by using a dissipative environment. We utilized an on-chip resistor of about 50 k$`\mathrm{\Omega }`$ (i.e. $`z\approx 2`$) to reduce cotunneling in a 4-junction chain with quite ordinary parameters. Since this resistor was roughly equivalent to two tunnel junctions, we expected the storage capability of this R-trap to be comparable to that of a 6-junction trap without resistor. The sample, comprising the trapping array itself and the readout SET electrometer positioned near the memory island (see Fig. 1), was fabricated by the well-established shadow deposition technique through a trilayer mask with ”hanging bridges” patterned by e-beam lithography and reactive-ion etching. Through the same mask, at three different angles ($`-23^{\mathrm{o}}`$, $`0^{\mathrm{o}}`$ and $`+12^{\mathrm{o}}`$), we successively deposited in situ three metal layers: Cr (8 nm thick), then Al (30 nm), and after oxidation again Al (35 nm). All tunnel junctions on the chip had the same nominal dimensions, 80 nm $`\times `$ 80 nm, and, as was found from measurements of the electrometer transistor, their intrinsic capacitance was $`C\approx 160`$ aF and their tunnel resistance $`R_t\approx 70`$ k$`\mathrm{\Omega }`$. The memory island of the trap was nominally 80 nm $`\times `$ 2.2 $`\mu `$m and had a self-capacitance $`C_m`$ in the range of 100 aF. Each of the three inner islands was about 80 nm $`\times `$ 300 nm in size. The Cr resistor was 8 $`\mu `$m long and 80 nm wide, and its resistance ($`R\approx 50`$ k$`\mathrm{\Omega }`$) was evaluated from measurements of similar single resistors fabricated separately on the same chip. The measurements were carried out in a dilution refrigerator within the temperature range $`T=70`$–$`170`$ mK. A magnetic field of 1 T was applied perpendicular to the chip in order to keep the Al parts of the circuit in the normal state. In the biasing lines we used $`\pi `$-filters against rf noise and Thermocoax filters against microwave-frequency noise. The latter filters were 1 m long pieces of coaxial cable (0.5 mm in diameter) having considerable losses ($`>`$ 100 dB at $`f>10`$ GHz). The cables were thermally anchored at the mixing chamber temperature and fed through into a shielding case containing the sample holder. In order to characterize the quality of filtering by an effective temperature $`T_e`$ referred to the sample, we measured a test sample comprising a SET electrometer coupled to an electron box. The value $`T_e=50`$–$`60`$ mK was found from the switching characteristics of the box at $`T=20`$ mK. The presence of the memory island near the electrometer clearly manifested itself as regular jumps of the output voltage $`V`$ in both the $`V`$–$`V_g`$ and the $`V`$–$`V_{tr}`$ characteristics of the electrometer. The change of the polarization charge $`\delta Q`$ on the electrometer island caused by charging of the memory island by an elementary charge was found to be $`(7.6\pm 0.5)\times 10^{-2}e`$. This value was large enough to reliably monitor integer jumps of the charge on the memory node against the background noise.
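Pulling together the numbers quoted above, the relevant energy and environment scales follow from elementary relations (a back-of-the-envelope sketch, not from the paper):

```python
# Charging energy scale e^2/2C of a junction and the environment parameter
# z = R/R_k entering Eq. (1), for C = 160 aF and R = 50 kOhm.
e, h, kB = 1.602e-19, 6.626e-34, 1.381e-23

C = 160e-18         # junction capacitance, F
R = 50e3            # on-chip Cr resistor, Ohm
Rk = h / e**2       # resistance quantum, ~25.8 kOhm

E_C = e**2 / (2 * C)
print(f"E_C/k_B ~ {E_C / kB:.1f} K (vs. T = 0.07-0.17 K)")
print(f"z = R/R_k ~ {R / Rk:.1f}")
print(f"cotunneling exponent 2(N+z)-1 ~ {2 * (4 + R / Rk) - 1:.0f}",
      "(vs. 7 for the bare 4-junction chain)")
```

The charging energy exceeds the bath temperature by well over an order of magnitude, and the resistor indeed acts like roughly two extra junctions in the exponent of Eq. (1).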
The modulation curves recorded for ramping $`V_g`$ or $`V_{tr}`$ in positive and negative directions formed typical hysteresis loops (the memory effect), indicating that an electron enters into, or leaves, the memory island at different values of the bias. Neighboring loops were well separated from each other, and their positions were rather stable in time (i.e. the drift of the background charge was reasonably small). This proved that our trap had exactly two charge states within each loop (see Fig. 2). Since the inner islands of the chain were not supplied with individual tuning gates (as was done in, e.g., Ref. ), we were not able to maximize the width of the loops by applying appropriate voltages which could compensate the random offset charges and, thereby, increase the energy barrier $`\mathrm{\Delta }U`$ separating the states. Instead, we simply chose the loops with larger widths, which presumably corresponded to favorable distributions of the offset charges ensuring larger $`\mathrm{\Delta }U`$. The holding time within some of the loops (for instance, those marked in Fig. 2 as loops A and B) was found to be indeed long. At $`T=70`$ mK, the hold time of the ”upper” and ”bottom” states for the bias corresponding to the centers of these loops was determined by the time of observation (more than 2 h). In another measurement, we ramped the voltage $`V_{tr}`$ over loop B, starting from 10.8 mV and rising up to 11.8 mV at an average rate of $`5\times 10^{-8}`$ V/s (i.e. 1 mV per 5.5 h). In response to the change of $`V_{tr}`$ the system switched from the initial ”bottom” state to the ”upper” state only once, i.e. the induced transition took place. (It occurred after a lapse of 2.5 h at $`V_{tr}\approx 11.25`$ mV, i.e. at a value inside the loop recorded at the normal sweep rate of $`1.5\times 10^{-4}`$ V/s.) After that the system remained in that state until the end of the ramp, i.e. for about 3 h. At $`T=100`$ mK, transitions occurred on a reasonable time scale of about $`10^3`$ s when $`V_{tr}`$ was adjusted to the centers of the loops. When the bias was changed such that the energy of the occupied state exceeded the energy of the unoccupied state, induced switching occurred faster. Such a case is illustrated by the time trace in Fig. 2. In order to evaluate the barrier height we elevated the temperature and thereby made thermally activated switching between the bi-stable states more frequent. We then found from the Arrhenius plot the activation energy $`\mathrm{\Delta }U=240\pm 20`$ $`\mu `$eV, which corresponds to $`2.8\pm 0.2`$ K and characterizes the pair of charge states for loop B. This value is plausible for the evaluated parameters of the trap if we assume non-zero offset charges on the inner islands of the chain. The rate $`\mathrm{\Gamma }_{th}`$ of thermally activated transitions at $`T\le 100`$ mK for such a barrier, extrapolated from higher-temperature measurements, was found to be below $`2\times 10^{-6}\mathrm{s}^{-1}`$. For a crude evaluation of the cotunneling rate $`\mathrm{\Gamma }_{cot}`$ we used formula (9b) obtained by Golubev and Zaikin, assuming the net energy change to be $`\mathrm{\Delta }E\approx k_BT`$. For $`T=70`$–$`100`$ mK we obtained $`\mathrm{\Gamma }_{cot}\approx 10^{-11}`$–$`10^{-9}\mathrm{s}^{-1}`$ for our sample. The maximum rate of leakage evaluated from our short-term measurements was $`\mathrm{\Gamma }\approx 10^{-4}\mathrm{s}^{-1}`$, i.e. several orders of magnitude higher than $`\mathrm{\Gamma }_{cot}`$.
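A sketch of the Arrhenius extraction described above; the switching rates below are synthetic, generated with $`\mathrm{\Delta }U=240`$ $`\mu `$eV and an arbitrary attempt rate, so only the fitting procedure mirrors the text:

```python
import numpy as np

kB_ueV = 86.17          # Boltzmann constant in ueV/K
dU = 240.0              # barrier used to generate the fake rates, ueV
rng = np.random.default_rng(5)

T = np.array([0.35, 0.40, 0.45, 0.50, 0.60])              # K
G = 1e4 * np.exp(-dU / (kB_ueV * T)) * rng.lognormal(0, 0.1, T.size)

# ln(Gamma) vs 1/T is linear with slope -dU/kB
slope, intercept = np.polyfit(1.0 / T, np.log(G), 1)
print(f"fitted Delta_U = {-slope * kB_ueV:.0f} ueV "
      f"= {-slope:.2f} K (input: 240 ueV = 2.8 K)")
```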
(Note that the experimental values of $`\mathrm{\Gamma }`$ are always much larger than the theoretical estimates of both thermal activation and cotunneling.) On the other hand, the value obtained for $`\mathrm{\Gamma }`$ was considerably lower than that evaluated for the case of a 4-junction trap with similar parameters but without a resistor, viz. $`\mathrm{\Gamma }_{cot}^{}\approx 10^{-3}`$–$`10^{-2}\mathrm{s}^{-1}`$. This fact points to the crucial role the resistor plays in the improvement of the trap’s storage capability. In particular, the characteristics of this R-trap are comparable to those of the well-characterized 7-junction SET pump in the hold mode of operation. That device had tunnel junctions with $`C\approx 220`$ aF and $`R_t\approx 470`$ k$`\mathrm{\Omega }`$, and 6 tuning gates allowed the leakage to be reduced down to $`(0.3`$–$`20)\times 10^{-4}\mathrm{s}^{-1}`$ at $`T=40`$ mK. In summary, we have fabricated and characterized a 4-junction electron trap with an on-chip resistor. We have demonstrated that the device is capable of storing a fixed number of electrons on its memory node on an appropriate time scale. We believe that an almost similar storage capability can be achieved for a trap consisting of only 2 tunnel junctions and equipped with a resistor of $`R\approx 100`$ k$`\mathrm{\Omega }`$, which yields $`z\approx 4`$. (Our estimate gives in that case $`\mathrm{\Gamma }_{cot}\approx 10^{-7}\mathrm{s}^{-1}`$.) In such a 2-junction trap, the electrostatic barrier (and thereby the storing time) could be effectively controlled by a gate polarizing the island between the junctions. The obtained result is extremely encouraging with respect to constructing a fewer-junction SET pump/turnstile device with resistors, which is our next goal. For instance, a 3-junction pump supplied with two 50 k$`\mathrm{\Omega }`$ resistors, yielding $`\mathrm{\Delta }N=z=2\times R/R_k\approx 4`$, could be as accurate as its 7-junction counterpart without resistors. The obvious advantage of such an R-pump is the minimum number of gates (two) and, hence, a much simpler rf-drive (two harmonic signals with a fixed phase shift). Finally, the total drift of the working point of the device caused by the fluctuations of the offset charges on 2 islands should apparently be weaker than in the case of 6 similar islands. The authors thank V. A. Krupenin for valuable discussions. The work is supported in part by the EU (MEL ARI Research Project $`22953`$ CHARGE) and the German BMBF (Grant No. 13N7168).
no-problem/9904/astro-ph9904264.html
ar5iv
text
# The gravitationally lensed quasar Q2237+0305 in X-rays: ROSAT/HRI detection of the “Einstein Cross”

## 1 Introduction

The quasar Q2237+0305 at a redshift of $`z=1.609`$ is lensed by a relatively nearby galaxy at $`z_L=0.039`$ (Huchra et al. 1985). It is a quadruply-imaged case, and one of the best investigated lens systems, both observationally and theoretically. For recent work, see Chae et al. (1998), Yonehara et al. (1998) and Blanton et al. (1998), and Mediavilla et al. (1998), Schmidt et al. (1998), respectively. Q2237+0305 was the first multiple quasar system in which microlensing was detected (see, e.g. Irwin et al. 1989, Corrigan et al. 1991, Lewis et al. 1998). The analysis of well covered microlensing light curves of a quasar can be used to uncover its size and structure (Wambsganss et al. 1990, Wambsganss & Paczyński 1992). A number of groups are optically monitoring this system to measure any microlensing effects. The expected time delay between the four images is only of order a day (Rix et al. 1992; Wambsganss & Paczyński 1994) and hence unlikely to be determined from optical light curves. Recently, HST observations in the UV allowed the determination of highly accurate relative positions of the four images (Blanton et al. 1998). With ground-based spectrophotometry, an extended arc comprising three of the four images was discovered (Mediavilla et al. 1998). Not very much is known yet about gravitationally lensed quasars in X-rays. The double quasar Q0957+561 was seen with HEAO-1 and with ROSAT, and dramatic differences in the flux of image B of up to a factor of five were observed (Chartas et al. 1995). There is an X-ray selected gravitationally lensed quasar, RX J0911.4+0551, which was found in the ROSAT All-Sky Survey (Bade et al. 1997). With $`z_Q=2.8`$ and an X-ray luminosity of $`L_X=4.1\times 10^{46}`$ ergs/s it is a very X-ray bright quasar. The two bright images are separated by 0.8 arcsec. High-resolution optical/infrared imaging revealed four images with a maximum distance of 3.1 arcsec (Burud et al. 1998; see also Munoz et al. 1999 or http://cfa-www.harvard.edu/castles for HST/NICMOS data obtained by the CASTLES collaboration). Here we present the first X-ray detection of Q2237+0305, an analysis of a ROSAT/HRI observation. The combined X-ray emission of the four quasar images is clearly detected, though at a relatively low count rate. Due to the coarse resolution of the ROSAT/HRI the individual images are not resolved.

## 2 Observations

We observed the quasar Q2237+0305 with the ROSAT/HRI (Trümper 1983) for a total exposure time of $`t_{\mathrm{ex}}=53869.7`$ seconds. The observations took place between November 20 and December 5, 1997. The Standard Analysis Software System (SASS) determined an average background rate of 0.0032 counts/sec/arcmin<sup>2</sup> which, multiplied by the exposure time $`t_{\mathrm{ex}}`$, yields an average of 171.7 background counts per square arcmin in total. To determine the count rate of Q2237+0305, we extracted the photons in circles of different sizes and subtracted the background, whose count rate was determined from empty regions of considerably larger size. A circle with a radius of 15 arcseconds centered on the pixel with the highest count (RA 22:40:30.21, Dec +03:21:28.7; J2000) resulted in a total number of 361 counts. The average number of background photons determined from ten “empty” circles nearby resulted in 39.8 background photons.
This leaves 321.2 source counts, which corresponds to a count rate of 6.0 counts per kilosecond. A similar determination with a much larger extraction radius of 50 arcseconds (100 pixels), centered on position RA 22:40:30.0, Dec +03:21:28.7, produced 800 counts. The average background for this size is 466.8 counts, which results in 333.2 source counts and a count rate of $`(6.2\pm 2.8)`$ counts/ksec. The complete HRI field of the exposure is shown in Figure 1. The central source is Q2237+0305 (labelled “1”). Table 1 contains the positions and count rates of all the sources in the field that are detected with an S/N of at least 4.0 in one of the ROSAT/HRI detection cells (squares with side lengths ranging from 12 to 120 arcsec), together with possible identifications of these X-ray detections. Cross-checking with the databases SIMBAD and NED, we could find one other identification of an X-ray detection in our field with a catalogued source (aside from the “target” Q2237+0305): source No. 2 coincides with the G0 star BD +02 4540 (V magnitude: 9.6). The G5 star HD 214787 (V magnitude: 8.3) is about 50 arcsec off the position of detection No. 5, but this is a very unlikely match, even considering the poor positional accuracy at large off-axis angles. Figure 2 depicts a higher-resolution map of Q2237+0305. The image appears slightly elliptical, but the same small ellipticity is seen in other images as well and hence seems to be an artifact of imperfect pointing (the separation of the four quasar images is only of order one arcsecond; this cannot explain the apparent extension). ## 3 Results and Discussion The average rate of $`(6.2\pm 2.8)`$ counts/ksec translates into an energy flux of $`(2.2\pm 1.0)\times 10^{-13}`$ erg/sec/cm<sup>2</sup> in the interval 0.1–2.4 keV, with the assumption of a hydrogen column density of $`n_H=5.5\times 10^{20}`$ cm<sup>-2</sup> (Dickey & Lockman 1990) and a power-law photon index of $`\alpha =1.5`$. This flux can be converted into an X-ray luminosity in the ROSAT energy window of $`4.2\times 10^{45}`$ erg/sec. The intrinsic X-ray luminosity of the quasar must be lower, since due to the gravitational lensing there is a magnification of at least a few, possibly even a few hundred (Kent & Falco 1988; Rix et al. 1992; Wambsganss & Paczyński 1994). As briefly mentioned in the introduction, it is of great interest to study any variability of this quasar. The standard ROSAT/SASS analysis did not find any indication of variability in Q2237+0305. With only about 330 source photons spread over two weeks in real time, it is difficult to determine any variability. In Figure 3, we display the observing intervals of this X-ray exposure for possible comparison with observations in other wave bands at the same time. The “zero” on the time axis corresponds to November 20, 1997, 01:49:58 UT (or JD 2450772.576). It is clear from this Figure that the “coverage factor” is less than 5%, so it makes no sense to present a continuous light curve of Q2237+0305 over the observing period. We can, however, “bin” the data to compare the Q2237+0305 “light curve” with the light curve of the “background” (or of the other seven detected sources, cf. Table 1). Such a binned artificial light curve is displayed in Figure 4 for bin widths of 1000 seconds<sup>1</sup><sup>1</sup>1For the binning we basically put together all observing intervals back-to-back, thus ignoring all the “dead times”.
The data in these 53 bins of 1000 sec each are in fact spread out over about two weeks (cf. Figure 3). The top panel is the X-ray light curve for Q2237+0305; the bottom panel is the “light curve” of a background field with the same average count rate (6.2 counts per ksec). In order to find out whether the “peaks” in the quasar light curve could be due to enhanced background radiation, we chose to compare the quasar light curve with a background light curve normalized to the same average count rate. Figure 4 shows that the fluctuations in the flux of Q2237+0305 are uncorrelated with the variations in the background flux. There is no obvious variability in the top panel of Figure 4 that exceeds the variability in the bottom panel (which sets the “noise” level). Nor is there any obvious “correlated” variability, which could be expected for a highly increased background at certain phases of the observation (since the extraction circle of the Q2237+0305 light curve is much smaller than that for the background, this would be surprising). We investigated the issue of variability more quantitatively. We performed a KS-test with the real arrival times of the source photons of Q2237+0305 (extraction radius 30 arcseconds), comparing them with the light curve of a background region containing the same total number of photons, and with that of the total background. Both were consistent with no variability. Furthermore, we performed a chi-square test with the binned data. Similarly, we found no indication of variability. Among the 53 bins, the bin with the highest count contains 15 photons, and one bin is completely empty. The Poissonian probability for finding 15 counts for an average of 6.2 is only about $`1.2\times 10^{-3}`$; the Poissonian probability for finding 0 counts with the same average is $`2.0\times 10^{-3}`$. Similarly, if one divides the bins into two sets, the first 26 bins contain 150 counts; the standard deviation for an average of 150 counts is $`\sigma _{26\mathrm{k}\mathrm{s}\mathrm{e}\mathrm{c}}=12.25`$ counts. The second 26 bins contain 194 counts, which is 2.4 $`\sigma _{26\mathrm{k}\mathrm{s}\mathrm{e}\mathrm{c}}`$ above the counts of the first half. These two tests leave open the possibility of source variability, but at very low significance. Another issue is a possible contamination of the X-ray counts by the (lensing) galaxy. We estimated the X-ray luminosity of this galaxy both by using the correlation of Dell’Antonio et al. (1994), $`L_X(spiral)=2.0\times 10^{29}L_B^{1.0}`$ (where $`L_X`$ is in erg s<sup>-1</sup> and $`L_B`$ is in solar luminosities), and by extrapolating from the known X-ray count rate of M31 (West et al. 1997). Both estimates result in a possible contribution of the lensing galaxy of less than one percent of our detection, which hence can be neglected. ## 4 Conclusions and Outlook The detection of the quadruply-imaged quasar Q2237+0305 in X-rays with ($`6.2\pm 2.8`$) counts per ksec opens up the possibility of monitoring this system with the next generation of X-ray telescopes (Chandra X-ray Observatory, XMM, ASTRO-E). It would then be feasible to study both the intrinsic variability of the quasar and microlens-induced fluctuations. The Chandra X-ray Observatory, with its on-axis resolution of 0.5 arcsec and its effective area almost twice as large as ROSAT’s, will be able to detect and resolve the four images of Q2237+0305. In addition to the possibility of determining microlens-induced fluctuations (see Yonehara et al.
1998), such observations could offer the opportunity of measuring relative time delays in this system. Intrinsic X-ray variations of the lensed quasar on time scales of less than a day would be required. On the other hand, if one could follow a “caustic crossing event” in X-rays (which should appear in only one of the four images, given the very high magnifications expected due to the small source size), one would also have the possibility of determining the size or even the source profile of the X-ray emission region of the quasar. ###### Acknowledgements. It is a pleasure to thank Ingo Lehmann for providing help with his MIDAS tools. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has also made use of the SIMBAD database, operated at CDS, Strasbourg, France.
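As a closing numerical note, the count-rate arithmetic of §2 and the Poisson probabilities of §3 can be reproduced in a few lines; a minimal sketch using only the numbers quoted above:

```python
from scipy.stats import poisson

t_ex = 53869.7                       # exposure time in seconds
# 15" extraction circle: 361 total counts, 39.8 expected background counts
print((361 - 39.8) / (t_ex / 1e3))   # -> 5.96 counts/ksec  (quoted: 6.0)
# 50" extraction circle: 800 total counts, 466.8 expected background counts
print((800 - 466.8) / (t_ex / 1e3))  # -> 6.19 counts/ksec  (quoted: 6.2)

mu = 6.2                             # mean counts per 1000-s bin
print(poisson.pmf(15, mu))           # -> 1.2e-3, the quoted P(15 | 6.2)
print(poisson.pmf(0, mu))            # -> 2.0e-3, the quoted P(0 | 6.2)
print(150 ** 0.5)                    # -> 12.25, the quoted sigma for 150 counts
```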
no-problem/9904/astro-ph9904036.html
ar5iv
text
# FAINT RADIO SOURCES AND STAR FORMATION HISTORY ## 1 Introduction Faint radio sources provide important information about the global star formation history. Sensitive radio observations of the Hubble Deep Field (HDF) (Richards et al. 1998) and other fields well-studied at optical wavelengths (Windhorst et al. 1995; Fomalont et al. 1991) have shown that sub-mJy radio sources are predominantly associated with star formation activity rather than active galactic nuclei (AGN). The radio luminosity of a galaxy is a reliable predictor of the star formation rate (SFR) for local galaxies (Condon 1992; Cram et al. 1998). Estimates of star formation based on radio observations also have the advantage of being independent of extinction by dust, which has caused much difficulty in the determination of the star formation history from optical data. In section 2, we make use of the tight correlation between radio and FIR luminosity for star forming galaxies to compare the FIR and radio backgrounds and to study the sources producing both. In section 3, we determine the evolving radio luminosity function from the observed redshift distribution of faint radio sources, and then estimate the history of star formation to a redshift of about 3. Throughout, we assume $`\mathrm{\Omega }_m=1`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, and $`H_0=50\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. We define the radio spectral index $`\alpha `$ such that $`S_\nu \propto \nu ^{-\alpha }`$. ## 2 FIR vs. Radio Backgrounds This section follows our recent paper (Haarsma & Partridge 1998). The far infrared (FIR) background was recently detected with DIRBE (Hauser et al. 1998; Dwek et al. 1998), and is most likely the collective emission of star forming galaxies. We use the radio-FIR correlation for individual galaxies (Helou, Soifer, & Rowan-Robinson 1985) to calculate the radio background associated with the FIR background, assuming that the bulk of the emission is from $`z\sim 1`$. We find the radio background associated with the FIR background has a brightness temperature of $`T_{40\mathrm{cm}}=0.31`$ K, or $`T_{170\mathrm{cm}}\simeq 15`$ K (scaled using a spectral index of $`\alpha =0.7`$). At 170 $`\mathrm{cm}`$ (178 MHz), the observed radio background is $`T_{170\mathrm{cm}}=30\pm 7`$ K (Bridle 1967). This allows us to draw several conclusions about the faint sources making up the FIR background: 1. The radio emission from these sources makes up about half of the observed extragalactic radio background. (The other half is the summed radio emission of AGN.) 2. Since (i) is in agreement with other radio observations (Condon 1989), the FIR-radio correlation appears to hold even for the very faint sources making up the FIR background. This confirms the assumption that the FIR background between about 140 and 240 $`\mu \mathrm{m}`$ is dominated by star formation, not AGN activity. 3. By quantitatively comparing the radio and FIR backgrounds, we find a relationship for the sources contributing to the background, $$A\left(\frac{1+z}{8.5}\right)^\alpha =0.20\pm 0.05,$$ (1) where $`\alpha `$ is their radio spectral index, $`A`$ is the fraction of the radio background they produce (from (i), $`A\approx 0.5`$), and $`z`$ is their mean redshift. This function is plotted in Figure 1. Note that the redshift $`z`$ is the mean redshift of the sources dominating the FIR and radio backgrounds, which is not necessarily the redshift of peak star formation activity (see §3). 4.
By extrapolating the 3.6 $`\mathrm{cm}`$ $`\mathrm{log}N\mathrm{log}S`$ curve to fainter flux densities, we estimate that most of the FIR background is produced by sources whose 3.6 $`\mathrm{cm}`$ flux density is greater than about 1 $`\mu \mathrm{Jy}`$. This lower limit is consistent with other work (Windhorst et al. 1993), but has more interesting observational consequences. An RMS sensitivity of 1.5 $`\mu \mathrm{Jy}`$ has already been reached in VLA observations (Partridge et al. 1997). The $`\mathrm{log}N\mathrm{log}S`$ curve indicates that the number density of $`S\sim 1`$ $`\mu \mathrm{Jy}`$ sources is about $`25/\mathrm{arcmin}^2`$, similar to some model predictions (Guiderdoni et al. 1998). At this density, these sources will cause SIRTF to encounter confusion problems at 160 $`\mu \mathrm{m}`$. ## 3 Radio Star Formation History In this section, we use the redshift distribution of faint radio sources to determine the evolution of the radio luminosity function, and the evolution of the star formation rate density. ### 3.1 Data Three fields have been observed to microJy sensitivity at centimeter wavelengths and also have extensive photometric and spectroscopic data: the Hubble Deep Field (HDF), the Medium Deep Survey (MDS), and the V15 field. Table 1 gives the details of the three fields and references. For the first time we have a sample of microJy radio sources with nearly complete optical identifications and about 50% complete redshift measurements. We assume that all sources detected at these flux levels are star-forming galaxies, since optical identifications indicate that $`\sim `$80% of these radio sources have spiral or irregular counterparts (Richards et al. 1998). The known quasars (two in the MDS sample, none in the HDF or V15 samples) were removed. In the flanking fields of the HDF, we have used the relationship between redshift and K-band magnitude (Lilly, Longair, & Allington-Smith 1985) to estimate redshifts for 10 sources without spectroscopic values. For the remaining sources without redshifts, we arbitrarily selected redshifts to fill in gaps in the redshift distribution, in order to illustrate the total number of sources that will ultimately appear on the plot. Photometric redshifts for these sources are currently being calculated (Waddington & Windhorst, in preparation), and will be included in future work. To compare these data to the model, we calculate $`n(z)`$, the average number of sources per arcmin<sup>2</sup> in each redshift bin. This requires a correction for the varying sensitivity across the primary beam of the radio observations (Katgert, Oort, & Windhorst 1988; Martin, Partridge, & Rood 1980). For example, a faint source which could only be detected at the center of the field contributes more to $`n(z)`$ than a strong source which could be detected over the entire primary beam area; a numerical sketch of such a weighting is given below. The resulting redshift distributions are plotted in Figures 4 and 8. It is interesting how different the MDS and HDF distributions are, even though the surveys were both performed at 8 GHz with similar flux limits. The average source density (including all sources) is 1.26 sources/arcmin<sup>2</sup> in the HDF, but 2.63 sources/arcmin<sup>2</sup> in the MDS field (the V15 field at 5 GHz has 0.736 sources/arcmin<sup>2</sup>). The density of sources in the MDS field is over twice that of the HDF field, possibly due to galaxy clustering. In the analysis below, we fit the model to the data in all three fields simultaneously.
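One simple way to implement this beam-corrected weighting, sketched under the assumption of a Gaussian primary beam (the function names and the Gaussian-beam form are our illustration, not the actual survey reduction code):

```python
import numpy as np

def detectable_area(s_peak, s_lim_center, fwhm_arcmin):
    """Solid angle (arcmin^2) over which a source of flux density s_peak is
    detectable, if the flux limit is s_lim_center at the beam center and
    degrades outward as a Gaussian primary beam."""
    if s_peak <= s_lim_center:
        return 0.0
    # attenuation A(r) = exp(-4 ln2 r^2 / fwhm^2); detectable while s_peak*A >= s_lim
    r2_max = fwhm_arcmin**2 / (4 * np.log(2)) * np.log(s_peak / s_lim_center)
    return np.pi * r2_max

def n_of_z(z, s_peak, z_edges, s_lim_center, fwhm_arcmin):
    """Beam-corrected n(z): each source enters with weight 1/area_i, so that
    faint sources (detectable only near the field center) count for more."""
    weights = np.array([1.0 / detectable_area(s, s_lim_center, fwhm_arcmin)
                        for s in s_peak])
    counts, _ = np.histogram(z, bins=z_edges, weights=weights)
    return counts  # sources per arcmin^2 per redshift bin

# usage with made-up numbers: 3 sources in an 8 GHz-like field
print(n_of_z(np.array([0.5, 1.1, 1.2]), np.array([9e-6, 40e-6, 12e-6]),
             z_edges=np.linspace(0, 3, 7), s_lim_center=8e-6, fwhm_arcmin=5.0))
```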
### 3.2 Calculations In order to determine the star formation history, we must first determine the evolving radio luminosity function. We used two versions of the local 1.4 GHz luminosity function for star-forming/spiral galaxies. We define the luminosity function $`\varphi (L_{e,1.4})`$ as the number per comoving Mpc<sup>3</sup> per $`d\mathrm{log}_{10}L`$ of star-forming radio sources with emitted luminosity $`L_{e,1.4}`$ (W/Hz) at 1.4 GHz. Condon (1989) uses the following form for the luminosity function (but in different notation), $$\mathrm{log}_{10}[\varphi (L_{e,1.4})]=28.43+Y-1.5\mathrm{log}_{10}L_{e,1.4}-\left[B^2+\frac{1}{W^2}(\mathrm{log}_{10}L_{e,1.4}-X)^2\right]^{1/2},$$ (2) with the fitted parameters for star-forming galaxies of $`Y=2.88`$, $`X=22.40`$, $`W=2/3`$, and $`B=1.5`$. Serjeant et al. (1998) use the standard Schechter form, $$\varphi (L_{e,1.4})d\mathrm{log}_{10}L=\varphi _{*}\mathrm{ln}10\left(\frac{L_{e,1.4}}{L_{*}}\right)^{(1+\alpha _l)}\mathrm{exp}\left(-\frac{L_{e,1.4}}{L_{*}}\right)d\mathrm{log}_{10}L$$ (3) where a factor of $`L\mathrm{ln}10`$ has been included to convert the function from $`dL`$ to $`d\mathrm{log}_{10}L`$. Serjeant et al. find fitted parameters of $`\varphi _{*}=4.9\times 10^{-4}\mathrm{Mpc}^{-3}`$, $`L_{*}=2.8\times 10^{22}\mathrm{W}/\mathrm{Hz}`$, and $`\alpha _l=-1.29`$. To describe the evolution of the luminosity function, we use the functional form suggested by Condon (1984a, eq. 24), a power law in $`(1+z)`$ with an exponential cut-off at high redshift. The luminosity evolves as $$f(z)=(1+z)^Q\mathrm{exp}\left[-\left(\frac{z}{z_q}\right)^q\right],$$ (4) and the number density evolves as $$g(z)=(1+z)^P\mathrm{exp}\left[-\left(\frac{z}{z_p}\right)^p\right].$$ (5) This gives six parameters $`\{Q,q,z_q,P,p,z_p\}`$ to use in describing the evolution. When fitting for the parameters, we constrained the functions $`g(z)`$ and $`f(z)`$ to the physically reasonable ranges $`1<g(z)<100`$ and $`1<f(z)<100`$ for $`0<z<3`$. The general expression for the evolving luminosity function is then (Condon 1984b) $$\varphi (L_{e,1.4},z)=g(z)\varphi \left(\frac{L_{e,1.4}}{f(z)},0\right).$$ (6) To use this expression at an arbitrary observing frequency $`\nu `$ and redshift $`z`$, we must convert the observed luminosity $`L_{o,\nu }`$ to 1.4 GHz and apply the K-correction, i.e. $$L_{e,1.4}=L_{o,\nu }\left(\frac{\nu }{1.4\mathrm{GHz}}\right)^\alpha (1+z)^\alpha $$ (7) where $`\alpha `$ is the radio spectral index, as defined in §1. We have assumed $`\alpha =0.4`$ for all calculations in §3 (Windhorst et al. 1993). The evolving luminosity function can be used to predict the observed redshift distribution. The number of sources per redshift bin $`\mathrm{\Delta }z`$ that could be detected in a survey of angular area $`\mathrm{\Delta }\mathrm{\Omega }`$ and flux limit $`S_{lim}`$ at frequency $`\nu `$ is $$n(z)=V_c(z,\mathrm{\Delta }z,\mathrm{\Delta }\mathrm{\Omega })\int _{L^{*}(z)}^{\mathrm{}}\varphi (L_{e,1.4},z)d\mathrm{log}_{10}L$$ (8) where the lower limit of the integral is $$L^{*}(z)=9.5\times 10^{12}\frac{\mathrm{W}}{\mathrm{Hz}}\left(\frac{S_{lim}}{\mu \mathrm{Jy}}\right)(1+z)^\alpha \left(\frac{\nu }{1.4\mathrm{GHz}}\right)^\alpha 4\pi \left(\frac{D_L(z)}{\mathrm{Mpc}}\right)^2$$ (9) and $`D_L`$ is the luminosity distance.
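A direct transcription of eqs. (2), (3), and (9) into code (a sketch; it uses the reconstructed signs above, and an Einstein–de Sitter luminosity distance is assumed only for illustration):

```python
import numpy as np

Y, X, W, B = 2.88, 22.40, 2.0/3.0, 1.5             # Condon (1989) fit, eq. (2)
phi_star, L_star, alpha_l = 4.9e-4, 2.8e22, -1.29  # Serjeant et al. fit, eq. (3)
alpha, H0, c = 0.4, 50.0, 3.0e5                    # spectral index; km/s/Mpc; km/s

def log_phi_condon(logL):                          # eq. (2): log10 of the LF
    return 28.43 + Y - 1.5*logL - np.sqrt(B**2 + (logL - X)**2 / W**2)

def phi_schechter(logL):                           # eq. (3), per dlog10 L
    x = 10.0**logL / L_star
    return phi_star * np.log(10.0) * x**(1.0 + alpha_l) * np.exp(-x)

def L_lower(z, S_lim_uJy, nu_GHz):                 # eq. (9), in W/Hz
    D_L = (1.0 + z) * (2.0*c/H0) * (1.0 - 1.0/np.sqrt(1.0 + z))  # EdS, Mpc
    return (9.5e12 * S_lim_uJy * (1.0 + z)**alpha
            * (nu_GHz / 1.4)**alpha * 4.0*np.pi * D_L**2)

print(log_phi_condon(22.0), phi_schechter(22.0), L_lower(1.0, 8.0, 8.4))
```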
The comoving volume in a shell from $`z`$ to $`z+\mathrm{\Delta }z`$ and of angular size $`\mathrm{\Delta }\mathrm{\Omega }`$ is $$V_c(z,\mathrm{\Delta }z,\mathrm{\Delta }\mathrm{\Omega })=\int d\mathrm{\Omega }\int r^2dr=\frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{ster}}\left(\frac{\mathrm{ster}}{1.18\times 10^7\mathrm{arcmin}^2}\right)\frac{[r^3(z+\mathrm{\Delta }z)-r^3(z)]}{3}$$ (10) where the comoving distance is $$r(z)=\frac{2c}{H_0}\left(1-\frac{1}{\sqrt{1+z}}\right)$$ (11) for our assumed cosmology (see §1). We have used $`\mathrm{\Delta }\mathrm{\Omega }=1`$ arcmin<sup>2</sup> for comparison to the data in Figures 4 and 8. The evolving luminosity function also allows us to calculate the star formation history. For an individual galaxy, the star formation rate is directly proportional to its radio luminosity (Condon 1992): $$\mathrm{SFR}=Q\left(\frac{L_\nu /\frac{\mathrm{W}}{\mathrm{Hz}}}{5.3\times 10^{21}\left(\frac{\nu }{\mathrm{GHz}}\right)^{-0.8}+5.5\times 10^{20}\left(\frac{\nu }{\mathrm{GHz}}\right)^{-0.1}}\right)\frac{\mathrm{M}_{\mathrm{}}}{\mathrm{yr}}$$ (12) The radio luminosity is primarily due to synchrotron emission from supernova remnants (the first term in the denominator) plus a small thermal component (the second term). Both components are proportional to the formation rate of the high-mass stars which produce supernovae ($`M>5M_{\mathrm{}}`$), so the factor $`Q`$ is included to account for the mass of all stars ($`0.1\text{–}100M_{\mathrm{}}`$), $$Q=\frac{\int _{0.1M_{\mathrm{}}}^{100M_{\mathrm{}}}M\psi (M)dM}{\int _{5M_{\mathrm{}}}^{100M_{\mathrm{}}}M\psi (M)dM},$$ (13) where $`\psi (M)\propto M^{-x}`$ is the initial mass function (IMF). We have assumed throughout a Salpeter IMF ($`x=2.35`$), for which $`Q=5.5`$. If an upper limit of 125 $`M_{\mathrm{}}`$ is used, then $`Q=5.9`$. In order to use eq. 12 at high redshift, both $`L_\nu `$ and $`\nu `$ in the equation must be K-corrected to the emitted luminosity at the emission frequency. Are there other ways in which this relation evolves? The thermal term is much smaller than the synchrotron term, so evolution in the thermal term will have little effect. In the synchrotron term, the dependence on the supernova environment is weak. One component that might cause significant evolution in eq. 12 is an evolving IMF, entering through the factor $`Q`$. In active starbursts, the IMF may be weighted toward high-mass stars (Elmegreen 1998), which would result in a smaller value of $`Q`$. However, the smallest $`Q`$ is unity (when virtually all mass occurs in high-mass stars), so the strongest decrease due to IMF evolution would be roughly a factor of five. To determine the star formation rate per comoving volume, we simply substitute the radio luminosity density for $`L_\nu `$ in eq. 12. The star formation rate depends on the emitted (rather than observed) luminosity density. The luminosity density emitted at 1.4 GHz can be easily found from the evolving luminosity function, $$\rho _{e,1.4}(z)=\int _{-\mathrm{}}^{\mathrm{}}L_{e,1.4}\varphi (L_{e,1.4},z)d\mathrm{log}_{10}L.$$ (14) Thus the predicted star formation history is $$\psi (z)=Q\left(\frac{\rho _{e,1.4}(z)}{4.6\times 10^{21}\frac{\mathrm{W}}{\mathrm{Hz}\mathrm{Mpc}^3}}\right)\frac{\mathrm{M}_{\mathrm{}}}{\mathrm{yr}\mathrm{Mpc}^3}$$ (15) where 1.4 $`\mathrm{GHz}`$ is used in the denominator of eq. 12 (no K-correction is needed because the luminosity density is the emitted value). ### 3.3 Results We use the formulation of §3.2 to determine the star formation history from the evolving luminosity function.
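A compact numerical sketch of eqs. (12), (13), and (15) (the closed-form Salpeter integral serves as a cross-check of the quoted $`Q`$ and of the 4.6×10<sup>21</sup> denominator):

```python
def Q_factor(x=2.35, M_lo=0.1, M_hi=100.0, M_sn=5.0):
    # eq. (13) for psi(M) proportional to M^(-x): integrand is M^(1-x)
    I = lambda a, b: (b**(2.0 - x) - a**(2.0 - x)) / (2.0 - x)
    return I(M_lo, M_hi) / I(M_sn, M_hi)

def sfr(L_nu, nu_GHz, Q):
    # eq. (12): SFR in Msun/yr for an emitted luminosity L_nu [W/Hz] at nu_GHz
    return Q * L_nu / (5.3e21 * nu_GHz**-0.8 + 5.5e20 * nu_GHz**-0.1)

Q = Q_factor()
print(round(Q, 1))                              # -> 5.5 for the Salpeter IMF
print(5.3e21 * 1.4**-0.8 + 5.5e20 * 1.4**-0.1)  # -> 4.6e21, as in eq. (15)
print(sfr(4.6e21, 1.4, Q))                      # -> ~5.5 Msun/yr for this L_nu
```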
To determine the evolution parameters, we compare the model to the observed $`n(z)`$ for the three surveys. We immediately found that pure luminosity evolution \[$`f(z)=(1+z)^3`$ and $`g(z)=1`$\], as often suggested in the literature, is a poor fit for the faint star-forming galaxy population (the predicted $`n(z)`$ is too small and has a very long high-redshift tail). The model fit of Condon (1984a), $`\{Q=3.5,P=1.75,p=1.8,z_p=1\}`$ with no exponential cut-off in the luminosity evolution, is much better (a more reasonable redshift dependence, but $`n(z)`$ is still too low). To improve on these models, we adjust the evolution parameters $`\{Q,q,z_q,P,p,z_p\}`$ to improve the model fit to the $`n(z)`$ data, using a downhill simplex algorithm (Press et al. 1992) to find the global $`\chi ^2`$ minimum. We performed this fit using both the Condon (1989) luminosity function (Model C, see eq. 2) and the Serjeant et al. (1998) luminosity function (Model S, see eq. 3). In Model C we use the luminosity function of Condon (1989). The fitted evolution parameters were $`\{Q=7.6,q=1.3,z_q=0.48,P=1.6,p=1.2,z_p=1.8\}`$. The resulting evolution factors $`f(z)`$ and $`g(z)`$ are plotted in Figure 2 and the resulting luminosity function is shown in Figure 3. Although the term $`(1+z)^{7.6}`$ seems extreme, when combined with the exponential cut-off the luminosity evolution $`f(z)`$ is reasonable. The fit to the redshift distribution is shown in Figure 4. The fit significantly underestimates the total number of sources in the MDS field, but only slightly underestimates the other two fields. The V15 survey has the largest total number of sources and thus has the most weight during fitting, so the result is a better fit for V15 than for the other fields. Finally, Figure 5 shows our predicted star formation history (heavy line) along with model predictions from several others (thin lines). The vertical lines indicate the $`1/\sqrt{N}`$ uncertainty, where $`N`$ is the sum of galaxies at that redshift from the three surveys. The Model C prediction is in good agreement with other models at low redshift (the curve closely follows the prediction of Pei & Fall 1995, as plotted in Dwek et al. 1998, figure 3), which is impressive given that no free parameters were adjusted to fit the $`z=0`$ value. The predicted star formation history peaks around a redshift of 1, and falls off more quickly than other models at high redshift. In Model S we use the luminosity function of Serjeant et al. (1998) (see eq. 3 above). The fitted evolution parameters were $`\{Q=4.3,q=2.1,z_q=1.3,P=1.3,p=1.7,z_p=2.3\}`$. The resulting evolution factors $`f(z)`$ and $`g(z)`$ are plotted in Figure 6 and the resulting luminosity function is shown in Figure 7. Despite very different individual parameters ($`Q=4.3`$ vs. $`Q=7.6`$), the two fits have similar functions $`f(z)`$ and $`g(z)`$. The predicted redshift distribution (Figure 8) is peaked at a slightly lower redshift and has a slightly longer tail than that of Model C. The predicted star formation history (Figure 9) has a larger local value than Model C, but still less than that predicted by Baugh et al. (1998) (thin solid line). The peak is around a redshift of 1.4, and the history falls off less rapidly than Model C at high redshift. ### 3.4 Discussion The star formation histories predicted by Model C and Model S both fall off more quickly at high redshift than the model predictions of others. However, we are considering several refinements to our model that might modify this result.
We are currently determining additional photometric redshifts (Waddington & Windhorst, in preparation), which will make the modeling more reliable, particularly at high redshift. The predicted shape of the star formation history is limited by the functional form we chose for the evolution (eqs. 4 and 5), and we plan to experiment with other functions. If the IMF is evolving, or is dependent on environment, this would also affect our results. The relationship between star formation rate and radio luminosity (eq. 12) might be evolving in addition to its dependence on an evolving IMF. Finally, we have not explored the dependence of our results on the cosmological parameters. This method has the potential to be an important indicator of star formation history. Radio luminosity is a reliable indicator of the star formation rate in local galaxies, and is not affected by dust extinction. While others are performing similar calculations (Cram et al. 1998; Cram 1998; Mobasher et al. 1999; Serjeant et al. 1998), the survey data used here are complete to a substantially lower flux limit, with nearly complete knowledge of optical counterparts and $`\sim `$50% completeness in redshifts. This allows us to place stronger constraints on the evolving radio luminosity function and to probe star formation activity to much higher redshifts. ## Acknowledgments We are grateful to Eric Richards for helpful discussions. D.H. thanks the National Science Foundation for travel support for this Symposium. D.H. and B.P. acknowledge the support of NSF AST 96-16971.
no-problem/9904/hep-ph9904397.html
ar5iv
text
# Search for R-parity Violating SUSY in Run 2 at DØ ## I Physics motivation Recent interest in R-parity violating (RPV) SUSY decay modes is motivated by the possible high-$`Q^2`$ event excess at HERA. When an interpretation of the excess through first-generation leptoquarks was excluded by the DØ and CDF experiments, it was suggested that such an effect could be explained via the $`s`$-channel production of a charm or top squark decaying into the $`e+jet`$ final state. Both the production and the decay vertices would thereby violate R-parity. Although more recent data have not confirmed the previous event excess, and despite a combined analysis showing that the anomalous events reported by the H1 and ZEUS experiments were unlikely to originate from the production of a single $`s`$-channel narrow resonance, interest in RPV signatures has not abated. The CDF and DØ Collaborations have recently performed searches for RPV SUSY, and have set new mass limits on the RPV SUSY particles. Both experiments focussed their searches on the $`\lambda ^{}`$ couplings, as motivated by the high-$`Q^2`$ HERA event excess. The results of the DØ searches are extended here to the Run 2 case, and the expected sensitivity to the RPV couplings is discussed. ## II DØ Search for RPV neutralino decays The DØ search for RPV SUSY considered the case of a neutralino LSP which decays into a lepton and two quarks due to a finite RPV $`\lambda ^{}`$ coupling (see Fig. 1). Both the electron and muon decay channels were considered, corresponding to what are commonly referred to as $`\lambda _{1ij}^{}`$ and $`\lambda _{2ij}^{}`$ couplings, respectively. The corresponding final states contain either $`2e`$ or $`2\mu `$ and at least four accompanying jets. Unlike at HERA, this search is not sensitive to the value of the RPV coupling, as long as it is large enough that the neutralino decays within the DØ detector. That corresponds to $`\lambda ^{}\gtrsim 10^{-3}`$, which gives a lot of room, given current indirect constraints. We assume that the neutralino (LSP) pairs are produced in cascade decays of other supersymmetric particles and use all SUSY pair production mechanisms when generating signal events. Signal events were generated within the SUGRA framework with the following values of the SUSY parameters: $`A_0=0`$, $`\mu <0`$ and $`\mathrm{tan}\beta =2`$ (the results are not sensitive to the value of $`A_0`$). The center of mass energy of the colliding beams was taken to be 2 TeV. ISAJET was used for event generation. The acceptance and resolution of the DØ detector were parametrized using the following resolutions: $`\delta E/E=2\%\oplus 15\%/\sqrt{E\text{[GeV]}}`$ (electrons), $`\delta (1/p)/(1/p)=0.018\oplus 0.008(1/p)`$ (muons), and $`\delta E/E=3\%\oplus 80\%/\sqrt{E\text{[GeV]}}`$ (jets), and were found consistent with the full detector simulation based on GEANT. Figure 2 shows the points in the $`(m_0,m_{1/2})`$ SUGRA parameter space where signal Monte Carlo events were generated for the electron channel. Similar points were studied for the muon-decay channel. ## III Selection criteria for the dielectron channel A multijet trigger was used for the analysis of Run 1 data. It was found to be nearly 100% efficient for the typical RPV signal. Since the Run 2 trigger list will include a similar trigger, we assume a trigger efficiency of 100% and do not perform any trigger simulations for Run 2.
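The $`\oplus `$ in these parametrizations denotes addition in quadrature; a minimal sketch of how such a smearing might be applied to generated particles (the Gaussian-smearing implementation is our illustration, not the actual DØ parametrization code):

```python
import numpy as np

rng = np.random.default_rng(0)

def smear_energy(E, a, b):
    """Smear an energy E [GeV] with fractional resolution a (+) b/sqrt(E),
    the two terms combined in quadrature."""
    sigma_rel = np.hypot(a, b / np.sqrt(E))
    return rng.normal(E, sigma_rel * E)

E_true = 40.0                              # GeV
print(smear_energy(E_true, 0.02, 0.15))    # electron: 2% (+) 15%/sqrt(E)
print(smear_energy(E_true, 0.03, 0.80))    # jet:      3% (+) 80%/sqrt(E)
```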
The following offline selections were used: * At least two good electrons, the leading one with $`E_T(e)>15`$ GeV and the other with $`E_T(e)>10`$ GeV; * Rapidity range $`|\eta |\le 1.1`$ (central calorimeter) or $`1.5\le |\eta |\le 2.5`$ (end calorimeters) for all electrons; * Energy isolation for the electrons: the EM energy in the $`R=0.2`$ cone about the center of gravity of the EM cluster, subtracted from the total energy in the $`R=0.4`$ cone, should not exceed 15% of the EM energy in the $`R=0.2`$ cone; * At least four jets with $`E_T(j)>15`$ GeV and $`|\eta |<2.5`$; * The dielectron invariant mass ($`M_{ee}`$) should not be in the $`Z`$-mass interval, i.e., $`|M_{ee}-M_Z|>15`$ GeV/$`c^2`$. In the present analysis we have dropped the requirement on $`H_T=\mathrm{\Sigma }E_T(e)+\mathrm{\Sigma }E_T(j)`$, but retained all other offline criteria that were used in the previous analysis of data from Run 1. ## IV Selection in the dimuon channel The following event selection requirements were used for the muon decay channel: * Two muons, the leading one with $`p_T>`$ 15 GeV and the other with $`p_T>`$ 10 GeV. * Rapidity range $`|\eta |<2.3`$ for both muons. * An energy isolation requirement for both muons, i.e. the calorimeter energy accompanying the muon in an ($`\eta `$, $`\varphi `$) cone of 0.4 should be consistent with that from a minimum ionizing particle. * At least four jets with $`E_T(j)>15`$ GeV and $`|\eta |<2.5`$. ## V Signal efficiencies The number of signal events expected can be written as $`N=\sigma \mathcal{L}ϵ`$, where $`N`$ is the expected number of events for luminosity $`\mathcal{L}`$, $`\sigma `$ is the cross-section, and $`ϵ`$ is the overall efficiency. The efficiency $`ϵ`$ can be split into three terms: $`ϵ=ϵ_{\mathrm{trig}}ϵ_{\mathrm{kin}}ϵ_{\mathrm{id}}`$. Here $`ϵ_{\mathrm{trig}}`$ is the trigger efficiency for the events that pass the offline cuts (assumed to be 100%), $`ϵ_{\mathrm{kin}}`$ is the efficiency of the offline criteria, which includes kinematic, fiducial and topological requirements, and $`ϵ_{\mathrm{id}}`$ is the electron/jet identification efficiency. The efficiency for identifying jets is very high ($`>95\%`$) and is expected to stay the same in Run 2. Electron identification efficiencies in Run 1 were $`80\pm 7\%`$ in the central ($`|\eta |<1.1`$) and $`71\pm 7\%`$ in the forward ($`1.5<|\eta |<2.5`$) regions. These efficiencies were calculated for electrons with $`E_T(e)>25`$ GeV; they drop by about 30% for electrons with $`E_T(e)=10`$ GeV. The muon identification efficiencies used in Run 1 were $`62\pm 2\%`$ in the central ($`|\eta |<1.0`$) and $`24\pm 4\%`$ in the forward ($`1.0<|\eta |<1.7`$) regions. These were calculated for muons with $`p_T>15`$ GeV. For muons with $`10\text{GeV}<p_T<15`$ GeV the efficiencies were about 80% smaller on average. In the present analysis we have taken the overall particle identification efficiency to be $`0.90\pm 0.09`$ in each channel, independent of lepton $`E_T`$, primarily owing to the expectation of a better tracker and muon spectrometer in the upgraded DØ experiment. ## VI Backgrounds The main backgrounds are expected to arise from Drell-Yan production in association with four or more jets, dilepton top-quark events, and QCD multijet events. The latter is the dominant background for the electron channel (followed by the Drell-Yan background). In the case of muons, the background is dominated by Drell-Yan and top pair production. We used Monte Carlo to calculate the background from the first two sources, and data to estimate the background from QCD jets.
Background for the Run 1 analysis was estimated to be $`1.8\pm 0.2\pm 0.3`$ events (with $`1.27\pm 0.24`$ from QCD and $`0.42\pm 0.15\pm 0.16`$ from the other processes) for $`100\mathrm{pb}^{-1}`$ of data. To extrapolate this number to the data set from Run 2, we have simply multiplied it by the ratio of luminosities to obtain $`36\pm 4\pm 6`$ events. However, it is expected that due to the central magnetic field in the upgraded DØ detector, the probability for jets to be misidentified as electrons will be reduced by a factor of $`\sim 2`$ in Run 2. We have therefore considered a second scenario with a smaller expected background of $`15\pm 1.5\pm 1.5`$ events. For the muon channel, the expected background has been scaled directly from the Run 1 analysis. We expect $`10\pm 1\pm 1`$ background events in Run 2. ## VII Results In order to obtain the sensitivity of Run 2 to RPV decays, we calculated the efficiency for signal at all the mass points shown in Fig. 2. Typical efficiencies, the signal cross section in the $`ee+4`$ jets channel, and the expected event yield in 2 $`\mathrm{fb}^{-1}`$ of data, for several representative $`(m_0,m_{1/2})`$ points, are given in Table 1. Similar numbers are obtained for the muon channel. We use these efficiencies to obtain exclusion limits in the $`(m_0,m_{1/2})`$ plane at 95% CL, assuming that no excess of events will be observed above the predicted background. The exclusion contours for the electron and muon channels are shown in Figs. 3 and 4, respectively. Numerical values of the limits are summarized in Table 2. It is worth mentioning that our analysis provides a conservative estimate of the sensitivity achievable in Run 2, since no formal optimization of the signal vs. background has been performed. We expect that a formal optimization could improve the sensitivity in the mass reach by 15–20%.
no-problem/9904/hep-th9904052.html
ar5iv
text
# Coordinate Realizations of Deformed Lie Algebras with Three Generators (UICHEP-TH/98-8) R. Dutt<sup>a,</sup><sup>1</sup><sup>1</sup>1rdutt@vbharat.ernet.in, A. Gangopadhyaya<sup>b,</sup><sup>2</sup><sup>2</sup>2agangop@luc.edu, asim@uic.edu, C. Rasinariu<sup>c,</sup><sup>3</sup><sup>3</sup>3costel@uic.edu and U. Sukhatme<sup>c,</sup><sup>4</sup><sup>4</sup>4sukhatme@uic.edu a) Department of Physics, Visva Bharati University, Santiniketan, India; b) Department of Physics, Loyola University Chicago, Chicago, USA; c) Department of Physics, University of Illinois at Chicago, Chicago, USA. ## Abstract Differential realizations in coordinate space for deformed Lie algebras with three generators are obtained using bosonic creation and annihilation operators satisfying Heisenberg commutation relations. The unified treatment presented here contains as special cases all previously given coordinate realizations of $`so(2,1)`$, $`so(3)`$ and their deformations. Applications to physical problems involving eigenvalue determination in nonrelativistic quantum mechanics are discussed. 1. Introduction: Lie groups and their associated algebras are extensively used in the analysis of the symmetry properties of physical systems. For example, realizations of $`so(2,1)`$ have been used to obtain the eigenvalues of many quantum mechanical problems. Recent studies show that coordinate realizations of nonlinear Lie algebras may also be interesting for determining the eigenspectra of certain physical problems in an algebraic approach. The main purpose of this paper is to set up a unified approach for obtaining differential realizations in one- and two-dimensional coordinate space for nonlinear Lie algebras with three generators. The deformed Lie algebras which we consider are described by $$[J_3,J_+]=J_+,[J_3,J_{-}]=-J_{-},[J_+,J_{-}]=f(J_3).$$ (1) $`J_\pm \equiv J_1\pm iJ_2`$ are the well known raising and lowering operators. $`f(J_3)`$ is an arbitrary analytic function of the operator $`J_3`$. Note that the special choice $`f(J_3)=2J_3`$ corresponds to $`so(3)`$ and $`f(J_3)=-2J_3`$ corresponds to $`so(2,1)`$. In terms of the Cartesian generators $`J_1,J_2,J_3`$, the commutation relations are $$[J_1,J_2]=\frac{i}{2}f(J_3),[J_2,J_3]=iJ_1,[J_3,J_1]=iJ_2.$$ (2) The plan of this paper is as follows. In Sec. 2, we review some simple general properties of Lie algebras. In Sec. 3, we describe how to obtain realizations of eq. (1) in terms of bosonic creation and annihilation operators ($`a^{\dagger }`$ and $`a`$) satisfying the Heisenberg commutation relation $`[a,a^{\dagger }]=1`$. Although we are using the conventional notation $`a`$ and $`a^{\dagger }`$ for these operators, they do not necessarily have to be Hermitian conjugates of each other. Appendix A contains a discussion of specific one-dimensional realizations of the Heisenberg algebra. In particular, it is shown that realizations involving derivatives higher than the first can all be reduced to first and zero order. Sec. 4 contains a description of one-dimensional coordinate realizations of the Lie algebra given in eq. (1). We show that our unified approach reproduces all previously known realizations in the literature. Two-dimensional coordinate realizations are described in Sec. 5, along with some applications involving eigenvalue determination for some nonrelativistic quantum mechanical potentials. 2. Some Properties of the Lie Algebra: For completeness and to establish notation, we describe some properties of Lie algebras. Some are well-known, but others are new.
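Relations such as eq. (1) are easy to verify numerically; a minimal sketch for the undeformed $`so(3)`$ case ($`f(J_3)=2J_3`$) using the standard spin-1 matrices:

```python
import numpy as np

s = np.sqrt(2.0)
Jp = np.array([[0, s, 0], [0, 0, s], [0, 0, 0]])   # spin-1 raising operator
Jm = Jp.T.conj()                                    # lowering operator
J3 = np.diag([1.0, 0.0, -1.0])

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(J3, Jp),  Jp))       # [J3, J+] = +J+
print(np.allclose(comm(J3, Jm), -Jm))       # [J3, J-] = -J-
print(np.allclose(comm(Jp, Jm), 2 * J3))    # [J+, J-] = f(J3) = 2 J3
```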
(i) The function $`f(J_3)`$ characterizes the Lie algebra given in eq. (1). For subsequent work, it is convenient to define the function $`g(J_3)`$ as follows: $$f(J_3)\equiv g(J_3)-g(J_3-1).$$ (3) For example, for $`so(3)`$, $`f(J_3)=2J_3`$ and one gets $`g(J_3)=J_3(J_3+1)`$. It is easy to check that the function $`g(J_3)`$ is not unique - any periodic function of unit period can be added while maintaining eq. (3). Note that the Casimir operator for the Lie algebra of eq. (1) is given by $$C=J_{-}J_++g(J_3)=J_+J_{-}+g(J_3-1).$$ (4) This observation is useful for many physical applications. For instance, we use it in Sec. 5 for eigenvalue determination. (ii) The operators $`J_+`$ and $`J_{-}`$ satisfy the important property $$T(J_3)J_+=J_+T(J_3+1),T(J_3)J_{-}=J_{-}T(J_3-1)$$ (5) for any analytic function $`T(J_3)`$. This property is extensively used in obtaining realizations. (iii) If operators $`J_+,J_{-},J_3`$ satisfy the standard $`so(3)`$ Lie algebra, so do the operators $`\stackrel{~}{J}_+,\stackrel{~}{J}_{-},\stackrel{~}{J}_3`$ defined by $`\stackrel{~}{J}_m=\sum _nM_{mn}J_n`$ provided the matrix $`M`$ satisfies $`M^TM=1`$ and $`detM=+1`$. Note that the elements of the matrix $`M`$ do not have to be real, but if they are, the matrix is orthogonal. This property is very useful in relating all the $`so(3)`$ realizations currently available in the literature. (iv) Given operators $`J_1,J_2,J_3`$ which satisfy the $`so(3)`$ Lie algebra, one can find operators $`K_1,K_2,K_3`$ which satisfy the more general algebra $$[K_1,K_2]=iq_3K_3,[K_2,K_3]=iq_1K_1,[K_3,K_1]=iq_2K_2,$$ (6) by choosing $`K_1=\sqrt{q_2q_3}J_1,K_2=\sqrt{q_3q_1}J_2,K_3=\sqrt{q_1q_2}J_3`$. In particular $`K_1=iJ_1,K_2=iJ_2,K_3=J_3`$ is a realization of $`so(2,1)`$. (v) Given operators $`J_+,J_{-},J_3`$ which satisfy the standard $`so(3)`$ Lie algebra, one can find operators $`\stackrel{~}{J}_+,\stackrel{~}{J}_{-},\stackrel{~}{J}_3`$ which satisfy the more general deformed algebra of eq. (1). These operators are given by $$\stackrel{~}{J}_+=J_+A(J_3,C),\stackrel{~}{J}_{-}=B(J_3,C)J_{-},\stackrel{~}{J}_3=J_3,$$ (7) where $`C=J_{-}J_++J_3(J_3+1)`$ is the Casimir operator of $`so(3)`$. The form of the operators in eq. (7) was chosen so that the two conditions $`[\stackrel{~}{J}_3,\stackrel{~}{J}_\pm ]=\pm \stackrel{~}{J}_\pm `$ are trivially satisfied. In order to satisfy the third condition $`[\stackrel{~}{J}_+,\stackrel{~}{J}_{-}]=f(\stackrel{~}{J}_3)`$, one needs functions $`A(J_3,C),B(J_3,C)`$ which satisfy the following condition: $$A(J_3-1,C)B(J_3-1,C)[C-J_3(J_3-1)]-B(J_3,C)A(J_3,C)[C-(J_3+1)J_3]=f(J_3).$$ (8) If $`A(J_3,C)`$ and $`B(J_3,C)`$ commute, this condition reduces to $$H(J_3,C)[C-J_3(J_3+1)]=-g(J_3)+p(J_3);H(J_3,C)\equiv A(J_3,C)B(J_3,C),$$ (9) where $`p(J_3)`$ is an arbitrary periodic function of period unity. It is important to realize that only the product $`H(J_3,C)`$ is fixed by the above constraint equation, but not the individual functions $`A(J_3,C)`$ and $`B(J_3,C)`$. Given eq. (7), it is sufficient to restrict our attention to realizations of $`so(3)`$ in order to obtain realizations of any deformed Lie algebra with three generators. Note that for the special case of $`so(3)`$ itself, the choice $`p(J_3)=C`$ gives $`H(J_3,C)=1`$. The simplest choice of factors $`A(J_3,C)=B(J_3,C)=1`$ reproduces the initial $`so(3)`$ realization, whereas a more general choice $`B(J_3,C)=A^{-1}(J_3,C)`$ yields a new realization. Furthermore, other choices of $`p(J_3)`$ give additional new realizations of $`so(3)`$.
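The difference-equation content of eqs. (8) and (9) can be checked symbolically; a small sketch for the $`so(3)`$ example just quoted ($`g(J_3)=J_3(J_3+1)`$, $`p(J_3)=C`$, hence $`H=1`$):

```python
import sympy as sp

j, C = sp.symbols('j C')
g = lambda x: x*(x + 1)                 # g for so(3), where f = g(j) - g(j-1) = 2j
p = C                                   # the "gauge" choice p(J3) = C
H = lambda x: (-g(x) + p) / (C - x*(x + 1))   # eq. (9) solved for H

# eq. (8) with commuting A, B (i.e., AB = H): the left side must equal f(j) = 2j
lhs = H(j - 1)*(C - j*(j - 1)) - H(j)*(C - j*(j + 1))
print(sp.simplify(H(j)))        # -> 1
print(sp.simplify(lhs - 2*j))   # -> 0
```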
In particular, the choice $`p(J_3)=0`$ gives the realization $$\stackrel{~}{J}_+=-J_+\frac{J_3(J_3+1)}{C-J_3(J_3+1)},\stackrel{~}{J}_{-}=J_{-},\stackrel{~}{J}_3=J_3,$$ which differs from the original one only in the one generator $`J_+`$. This freedom in choosing the periodic function $`p(J_3)`$ is analogous to gauge fixing in field theories. An interesting nonlinear example using the above formalism comes from the choice $`g(J_3)=J_3^2(J_3+1)^2`$ and $`p(J_3)=C^2`$. This choice gives the realization $$\stackrel{~}{J}_+=J_+[C+J_3(J_3+1)],\stackrel{~}{J}_{-}=J_{-},\stackrel{~}{J}_3=J_3,$$ (10) for the deformed Lie algebra corresponding to $`f(J_3)=4J_3^3`$. 3. Realizations of the Deformed Lie Algebra in Terms of Bosonic Operators: In this section, we develop a procedure for obtaining realizations of the Lie algebra defined by eq. (1) in terms of bosonic creation and annihilation operators $`a^{\dagger }`$ and $`a`$ which obey the Heisenberg algebra commutator $`[a,a^{\dagger }]=1`$. The number operator is defined by $`N\equiv a^{\dagger }a`$. It follows that $`[N,a^{\dagger }]=a^{\dagger }`$ and $`[N,a]=-a`$. More generally, $$[N,(a^{\dagger })^m]=m(a^{\dagger })^m,[N,a^m]=-ma^m,(m=0,\pm 1,\pm 2,\mathrm{}).$$ (11) To generate realizations of a deformed Lie algebra using the operators $`a^{\dagger },a,N`$, we choose the following ansatz: $$J_+=PF(N),J_{-}=G(N)Q,J_3=N+c,$$ (12) where $`c`$ is a constant. $`P`$ and $`Q`$ are functions of $`a`$ and $`a^{\dagger }`$ chosen to satisfy the property $$[N,P]=P,[N,Q]=-Q.$$ (13) Clearly, from eq. (11) and eq. (13), it follows that two possible choices for $`P(a,a^{\dagger })`$ are $`a^{\dagger }`$ and $`1/a`$, and two possible choices for $`Q(a,a^{\dagger })`$ are $`a`$ and $`1/a^{\dagger }`$. In fact, one can choose the linear combinations $$P=\alpha _1(N)a^{\dagger }+\alpha _2(N)\frac{1}{a},Q=\beta _1(N)a+\beta _2(N)\frac{1}{a^{\dagger }}.$$ (14) Using eq. (13), it is easy to show that $`PN^m=(N-1)^mP`$ and $`N^mQ=Q(N-1)^m`$, so that one has the property $`PT(N)=T(N-1)P`$, $`T(N)Q=QT(N-1)`$ for any analytic function $`T(N)`$. Also, the dependence on $`a`$ and $`a^{\dagger }`$ of the products $`PQ`$ and $`QP`$ clearly comes only through the combination $`a^{\dagger }a=N`$. Our ansatz of eq. (12) will satisfy the conditions of eq. (1) provided $$F(N-1)G(N-1)PQ-G(N)F(N)QP=f(N+c).$$ (15) If $`F(N)`$ and $`G(N)`$ commute, the above condition becomes $$H(N-1)PQ-H(N)QP=f(N+c),H(N)\equiv F(N)G(N).$$ (16) It only remains to determine $`H(N)`$ from eq. (16). As in Sec. 2, note again that the functions $`F(N)`$ and $`G(N)`$ do not appear separately but only through their product $`H(N)`$. Also, note that in Sec. 5, we will discuss a situation where $`F(N)`$ and $`G(N)`$ do not commute. 4. One-Dimensional Coordinate Realizations: Here we consider one-dimensional coordinate realizations for $`a,a^{\dagger }`$ such that $`[a,a^{\dagger }]=1`$. Eqs. (12), (14) and (16) then immediately give a realization of the nonlinear algebra of eq. (1). As an example we consider the same deformed Lie algebra with $`f(J_3)=4J_3^3`$ as in Section 2. We make the simple choice $`P=a^{\dagger }=x`$, $`Q=a=d/dx`$, $`c=0`$, which gives $`PQ=N`$, $`QP=N+1`$, $`N=xd/dx`$. Eq. (16) now reads $`H(N-1)N-H(N)(N+1)=4N^3`$, whose solution is $`H(N)=-N^2(N+1)`$. Taking $`G(N)=1`$, our coordinate realization is $$J_+=-x\left(x\frac{d}{dx}\right)^2\left(x\frac{d}{dx}+1\right),J_{-}=\frac{d}{dx},J_3=x\frac{d}{dx}.$$ (17) General coordinate realizations of $`a,a^{\dagger }`$ are discussed in Appendix A. Any of these can be used to generate different one-dimensional realizations of deformed Lie algebras. Our formalism is very flexible, since there is freedom in choosing $`a,a^{\dagger }`$ (Appendix A) and the operators $`P,Q`$ in eq. (14).
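A one-line symbolic check of the difference equation just solved (and of the sign of $`H`$), a sketch:

```python
import sympy as sp

N = sp.symbols('N')
H = lambda n: -n**2 * (n + 1)           # proposed solution of eq. (16)

# with PQ = N and QP = N + 1, eq. (16) reads H(N-1) N - H(N)(N+1) = 4 N^3
lhs = H(N - 1)*N - H(N)*(N + 1)
print(sp.expand(lhs))                   # -> 4*N**3
```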
Furthermore, once $`H(N)`$ has been determined from eq. (16), one has various choices for factorizing it into the functions $`F(N),G(N)`$ which appear in the final realization given in eq. (12). Our formalism contains as special cases all the coordinate realizations published in the literature. We shall now illustrate this statement for two specific realizations from the literature. Filho and Vaidya have discussed physical applications based on the following representation of $`so(2,1)`$: $$J_+=2\frac{d^2}{dy^2}-\frac{2\alpha }{y^2},J_{-}=\frac{y^2}{8},J_3=-\frac{y}{2}\frac{d}{dy}-\frac{1}{4},$$ (18) where $`\alpha `$ is an arbitrary constant. In order to obtain this realization as a special case of our formalism, we choose $`a,a^{\dagger }`$ by taking $`\theta =0`$, $`h(y)=1/y^2`$, $`r(y)=-y^2/4`$ in eq. (25) in Appendix A. This gives $$a=-\frac{y^3}{2}\frac{d}{dy}-\frac{y^2}{4},a^{\dagger }=\frac{1}{y^2},N=-\frac{y}{2}\frac{d}{dy}-\frac{1}{4}.$$ Furthermore, choosing $`P=a^{\dagger }`$, $`Q=1/a^{\dagger }`$ in eq. (14) implies that the constraint (16) on $`H(N)`$ reads $$H(N-1)-H(N)=-2N.$$ The solution is $`H(N)=N(N+1)+\beta `$, where $`\beta `$ is an arbitrary constant. Choosing the factorization $`G(N)=1/8`$ and $`F(N)=8H(N)`$, eq. (12) with $`c=0`$ and $`\beta =(3-4\alpha )/16`$ after simplification gives the Filho-Vaidya realization of eq. (18). Another example of a differential realization of the $`so(2,1)`$ algebra was given by Barut and Bornzin. Their expressions for the generators are: $$T_1=\frac{1}{2}\left(\frac{y^{2-n}}{n^2}p_y^2+\frac{\xi }{y^n}-y^n\right),T_2=\frac{1}{n}\left(yp_y-i\frac{n-1}{2}\right),T_3=\frac{1}{2}\left(\frac{y^{2-n}}{n^2}p_y^2+\frac{\xi }{y^n}+y^n\right).$$ (19) Here $`p_y=-iy^{-1}\frac{d}{dy}y`$, $`n`$ is an arbitrary positive integer and $`\xi `$ is an arbitrary constant. To make contact with our formalism, using (iii) from Section 2, we first rotate $`T_1,T_2,T_3`$ to the new operators $`J_1=iT_3`$, $`J_2=-T_1`$, $`J_3=iT_2`$. This gives $$J_+=iy^n,J_{-}=-i\left(\frac{y^{2-n}}{n^2}\frac{d^2}{dy^2}+\frac{2y^{1-n}}{n^2}\frac{d}{dy}-\frac{\xi }{y^n}\right),J_3=\frac{y}{n}\frac{d}{dy}+\frac{n+1}{2n}.$$ Next, let us take $`\theta =0`$, $`h(y)=y^n`$ and $`r(y)=-[n(2c-1)-1]/(2ny^n)`$ in eq. (25) of Appendix A. This implies $$a=\frac{y^{1-n}}{n}\frac{d}{dy}-\frac{n(2c-1)-1}{2ny^n},a^{\dagger }=y^n,N=\frac{y}{n}\frac{d}{dy}+\frac{n+1}{2n}-c.$$ Further, choosing $`\alpha _1=i`$, $`\beta _2=1`$ and $`\alpha _2=\beta _1=0`$ in eq. (14), we find a solution of eq. (16) of the form $`H(N)=b_2N^2+b_1N+b_0`$ with $`b_2=\frac{i}{\beta _2}`$, $`b_1=i\frac{2c+1}{\beta _2}`$ and $`b_0=\frac{i}{\beta _2}\left[(2c+1)^2/4-\xi -1/(4n^2)\right]`$. Finally, the factorization $`H=FG`$ with $`F=1`$ concludes the proof that eqs. (19) are a particular case of our formalism. Note that the initial rotation of generators seems to be essential in getting the realizations of Barut and Bornzin. Similarly, our formalism also yields the one-dimensional realizations described in other earlier work. 5. Two-Dimensional Coordinate Realizations: In this section we introduce realizations of $`so(2,1)`$ using two coordinates. In contrast to the one-coordinate realizations, we now allow the functions $`F`$ and $`G`$ appearing in eq. (12) to be functions of $`N`$ as well as of an internal coordinate $`x`$ and its derivative $`\frac{d}{dx}`$. It is important to observe that due to this generalization the functions $`F`$ and $`G`$ no longer commute with each other, and as a result equation (15) must be used. To construct explicit realizations of $`so(2,1)`$, we choose $`P=a^{\dagger }=\mathrm{exp}(i\varphi )`$ and $`Q=\frac{1}{a^{\dagger }}=\mathrm{exp}(-i\varphi )`$, i.e.
$`\alpha _2=\beta _1=0`$ in eq. (14). The simplest choice of the operator $`a`$ which satisfies $`[a,a^{\dagger }]=1`$ is $`a=-i\mathrm{exp}(-i\varphi )\frac{\partial }{\partial \varphi }`$. This gives $`N=a^{\dagger }a=-i\frac{\partial }{\partial \varphi }`$. As a simple example, we consider $$F(N)=\left[-\frac{\partial }{\partial x}+W\left(x,-i\frac{\partial }{\partial \varphi }\right)\right],G(N)=\left[\frac{\partial }{\partial x}+W\left(x,-i\frac{\partial }{\partial \varphi }\right)\right],$$ (20) where $`W`$ is a function to be determined. Substitution in eq. (15) yields $$\left[W^2\left(x,-i\frac{\partial }{\partial \varphi }-1\right)-\frac{dW\left(x,-i\frac{\partial }{\partial \varphi }-1\right)}{dx}\right]-\left[W^2\left(x,-i\frac{\partial }{\partial \varphi }\right)+\frac{dW\left(x,-i\frac{\partial }{\partial \varphi }\right)}{dx}\right]=f\left(-i\frac{\partial }{\partial \varphi }+c\right).$$ (21) The left hand side of this equation depends on $`x`$ while the right hand side does not. In order to get a two dimensional realization one needs a solution of eq. (21). In supersymmetric quantum mechanics, this equation is well known as the shape invariance condition. Its solutions are shape invariant superpotentials. One solution is $$W=-i\frac{\partial }{\partial \varphi }\mathrm{tanh}x+B\mathrm{sech}x.$$ (22) In this case, an explicit calculation yields $`f\left(-i\frac{\partial }{\partial \varphi }+c\right)=-2\left(-i\frac{\partial }{\partial \varphi }\right)+1`$. This implies that we are dealing with a deformed Lie algebra with $`f(J_3)=-2J_3+2c+1`$. For the choice $`c=-1/2`$ this is the $`so(2,1)`$ algebra and its realization is: $$J_+=e^{i\varphi }\left[-\frac{\partial }{\partial x}-i\frac{\partial }{\partial \varphi }\mathrm{tanh}x+B\mathrm{sech}x\right],J_{-}=\left[\frac{\partial }{\partial x}-i\frac{\partial }{\partial \varphi }\mathrm{tanh}x+B\mathrm{sech}x\right]e^{-i\varphi },J_3=-i\frac{\partial }{\partial \varphi }-\frac{1}{2}.$$ There are several other solutions possible, and they can be derived analytically using a point canonical transformation described in the literature. The above realizations have interesting applications. The operator $`J_+J_{-}`$ is given by: $$J_+J_{-}=\left[-\frac{d^2}{dx^2}+W^2\left(x,-i\frac{\partial }{\partial \varphi }-1\right)-\frac{dW\left(x,-i\frac{\partial }{\partial \varphi }-1\right)}{dx}\right].$$ When acting on factorized basis functions $`e^{im\varphi }\psi (x)`$, one gets $$J_+J_{-}=\left[-\frac{d^2}{dx^2}+W^2(x,m-1)-\frac{dW(x,m-1)}{dx}\right],$$ which is recognized to be the standard Hamiltonian of supersymmetric quantum mechanics. For the choice of eq. (22) the result is $$J_+J_{-}=\left[-\frac{d^2}{dx^2}+(m-1)^2+\left(B^2-(m-1)^2-(m-1)\right)\mathrm{sech}^2x+B(2(m-1)+1)\mathrm{sech}x\mathrm{tanh}x\right],$$ which is just the Hamiltonian for the Scarf potential<sup>5</sup><sup>5</sup>5The Scarf Hamiltonian is described by a potential $$V_{-}(x,a_0,B)=\left[B^2-a_0(a_0+1)\right]\mathrm{sech}^2x+B(2a_0+1)\mathrm{sech}x\mathrm{tanh}x+a_0^2.$$ The eigenvalues of this system are given by $$E_n=a_0^2-\left(a_0-n\right)^2.$$ with $`m-1`$ being one of the parameters. The Scarf potential is well known to be shape invariant, hence exactly solvable. We can also determine these eigenvalues using the familiar algebraic methods of $`so(2,1)`$. The Casimir is $`C\equiv -J^2`$ and eq. (4) gives $`J_+J_{-}=J_3^2-J_3-J^2`$. Since the eigenvalues of $`J^2,J_3`$ are $`j(j+1)`$ and $`m-1/2`$ respectively, we find $$E=\left(m-\frac{1}{2}\right)^2-\left(m-\frac{1}{2}\right)-j(j+1).$$ Now substituting $`j=n-m+\frac{1}{2}`$, one gets $$E_n=(m-1)^2-(m-n-1)^2,n=0,1,2,\mathrm{}$$ (Note that $`E_0=0`$ as expected from unbroken supersymmetric quantum mechanics.) With a change of variable and appropriate similarity transformations of $`F(N)`$ and $`G(N)`$, we can relate all the standard solvable potentials to $`J_+J_{-}`$ of this algebra and hence derive information about their spectra algebraically.
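Both the shape-invariance computation quoted after eq. (22) and the closing eigenvalue formula can be verified symbolically; a sketch:

```python
import sympy as sp

x, a, B, m, n, j = sp.symbols('x a B m n j')
W = lambda p: p*sp.tanh(x) + B/sp.cosh(x)        # eq. (22) with N -> parameter p

# eq. (21): [W^2 - W'](a-1) - [W^2 + W'](a) should be x-independent, equal to 1 - 2a
lhs = (W(a - 1)**2 - sp.diff(W(a - 1), x)) - (W(a)**2 + sp.diff(W(a), x))
print(sp.simplify(lhs.rewrite(sp.exp)))          # -> 1 - 2*a

# spectrum: E = (m-1/2)^2 - (m-1/2) - j(j+1) with j = n - m + 1/2
E = (m - sp.Rational(1, 2))**2 - (m - sp.Rational(1, 2)) - j*(j + 1)
E_n = sp.expand(E.subs(j, n - m + sp.Rational(1, 2)))
print(sp.simplify(E_n - ((m - 1)**2 - (m - n - 1)**2)))   # -> 0
```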
In this paper, differential realizations in coordinate space for nonlinearly deformed Lie algebras with three generators were obtained using bosonic creation and annihilation operators. We have presented a unified formalism that contains as special cases all previously given coordinate realizations of $`so(2,1)`$, $`so(3)`$ and their deformations. Although we have focused on deformations of the type specified by equation (1), coordinate realizations for other types of deformations have also been studied recently. A.G. and R.D. would also like to thank the Physics Department of the University of Illinois at Chicago for warm hospitality. Partial financial support from the U.S. Department of Energy and the Department of Science and Technology, Govt. of India (Grant No. SP/S2/K-27/94) is gratefully acknowledged. Appendix A. Differential Realizations of $`a`$ and $`a^{\dagger }`$ In this Appendix, we discuss differential coordinate realizations of operators $`a`$ and $`a^{\dagger }`$ which satisfy the Heisenberg commutation relation $`[a,a^{\dagger }]=1`$. The simplest choice is $$a=\frac{d}{dx},a^{\dagger }=x.$$ (24) As we shall see shortly, these operators are the basic building blocks for all other realizations, including those with higher order derivatives. Note that although the notation $`a`$ and $`a^{\dagger }`$ is being used, we are not requiring the two operators to be Hermitian conjugates of each other. Given any two operators $`a(x,\frac{d}{dx})`$ and $`a^{\dagger }(x,\frac{d}{dx})`$ such that $`[a,a^{\dagger }]=1`$, several simple transformations can be used to generate new operators $`\stackrel{~}{a}`$ and $`\stackrel{~}{a}^{\dagger }`$ which satisfy $`[\stackrel{~}{a},\stackrel{~}{a}^{\dagger }]=1`$. These transformations are: (i) Rotations in the $`(a,a^{\dagger })`$ plane: $$\stackrel{~}{a}=a\mathrm{cos}\theta +a^{\dagger }\mathrm{sin}\theta ,\stackrel{~}{a}^{\dagger }=-a\mathrm{sin}\theta +a^{\dagger }\mathrm{cos}\theta ;$$ (ii) Change of variables $`x=h(y)`$: $$\stackrel{~}{a}(y,\frac{d}{dy})=a(h(y),\frac{1}{h^{\prime }(y)}\frac{d}{dy}),\stackrel{~}{a}^{\dagger }(y,\frac{d}{dy})=a^{\dagger }(h(y),\frac{1}{h^{\prime }(y)}\frac{d}{dy}),$$ where the prime denotes the derivative with respect to $`y`$; (iii) Similarity transformations: $$\stackrel{~}{a}=\varphi ^{-1}(x)a\varphi (x),\stackrel{~}{a}^{\dagger }=\varphi ^{-1}(x)a^{\dagger }\varphi (x);$$ (iv) Additions of arbitrary functions of the other operator: $$\stackrel{~}{a}=a+\lambda (a^{\dagger }),\stackrel{~}{a}^{\dagger }=a^{\dagger };\stackrel{~}{a}=a,\stackrel{~}{a}^{\dagger }=a^{\dagger }+\mu (a).$$ Successive use of the first three transformations applied to eq. (24) yields $$a=\frac{\mathrm{cos}\theta }{h^{\prime }(y)}\frac{d}{dy}+\left(h(y)\mathrm{sin}\theta +r(y)\mathrm{cos}\theta \right),a^{\dagger }=-\frac{\mathrm{sin}\theta }{h^{\prime }(y)}\frac{d}{dy}+\left(h(y)\mathrm{cos}\theta -r(y)\mathrm{sin}\theta \right),$$ (25) where $`h(y)`$ and $`r(y)`$ are arbitrary analytic functions of the coordinate $`y`$. It is easy to check that these are the most general operators linear in $`\frac{d}{dy}`$ which satisfy $`[a,a^{\dagger }]=1`$. A natural question to ask is whether one can construct differential coordinate realizations with second and higher order derivatives. This is in fact possible by starting with any first order realization \[say eq. (24) or eq. (25)\] and using transformation (iv) to generate higher order derivatives. For example, using eq.
(24) and taking $`\mu (a)=a^2`$ in transformation (iv) gives the realization $$\stackrel{~}{a}=\frac{d}{dx},\stackrel{~}{a}^{\dagger }=x+\frac{d^2}{dx^2}.$$ Although this procedure can be readily extended to get realizations of the Heisenberg algebra involving derivatives of any desired order, it must be kept in mind that only the realizations involving first order derivatives are fundamental.
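As a final illustrative check, this higher-derivative realization still obeys the Heisenberg commutator; a sketch:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

a_t  = lambda g: sp.diff(g, x)              # a~  = d/dx
ad_t = lambda g: x*g + sp.diff(g, x, 2)     # a~† = x + d^2/dx^2

commutator = a_t(ad_t(f)) - ad_t(a_t(f))
print(sp.simplify(commutator - f))          # -> 0, i.e. [a~, a~†] = 1
```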
# A CONTRACTING, TURBULENT, STARLESS CORE IN THE SERPENS CLUSTER
## 1 Introduction
Turbulence is a ubiquitous feature of the interstellar medium, although its precise nature is poorly understood and its role in the formation of stars is unclear. Theories of isolated star formation generally assert that gravitational collapse occurs onto a thermally supported core (e.g., Shu, Adams, & Lizano 1987) and motions are quasi-static until very late times (Basu & Mouschovias 1994; Ciolek & Mouschovias 1995; Li 1998). Observations of infall at small scales ($`0.01`$ pc) in the isolated starless core, L1544, in Taurus are not in contradiction with such theories if it is indeed sufficiently close to forming a star (Williams et al. 1999), although large scale motions ($`0.1`$ pc) do appear to require an alternative explanation (Tafalla et al. 1998), such as turbulent dissipation (Myers & Lazarian 1998). The Serpens molecular cloud (d=310 pc; de Lara, Chavarria-K, & Lopez-Molina 1991) has more embedded YSOs and is more turbulent than the Taurus cloud. In the northwest region of this cloud lies a cluster of Class 0 sources (Hurt & Barsony 1996) which has been the subject of numerous studies in the millimeter and sub-millimeter regime (Casali, Eiroa, & Duncan 1993; McMullin et al. 1994; White et al. 1995; Hurt, Barsony, & Wootten 1996; Wolf-Chase et al. 1998; Testi & Sargent 1998). Since the dense cores around Class 0 sources often show the spectral signature of inward motion (Mardones et al. 1997), we embarked on a study of the dense gas dynamics in this young cluster forming region in order to compare with more isolated star forming sites such as in Taurus. In this Letter, we present observations of a previously unrecognized core adjacent to the Class 0 source S68N that appears to be starless, contracting, and highly turbulent. We compare its properties with its neighbor, S68N, deduce a chemical timescale for its formation that suggests that it is very young, and determine an average infall speed by spectral line modeling. This core, which we designate S68NW, demonstrates that turbulent motions in the ISM cannot be ignored in the formation of individual stars in clusters.
## 2 Observations
A well-tested method of diagnosing inward motions onto a star forming region is to search for self-absorbed lines where emission at low velocities is brighter than at high velocities (Leung & Brown 1977; Walker et al. 1986; Zhou et al. 1993). We observed CS(2–1) and N<sub>2</sub>H<sup>+</sup>(1–0) toward the cluster of Class 0 sources since both lines are reasonably bright and hence quick to map, both are strongly excited by gas of density $`n_{\mathrm{H}_2}`$ near $`10^5`$ $`\mathrm{cm}^{-3}`$, and generally CS is optically thick and N<sub>2</sub>H<sup>+</sup> is optically thin at the resolution of these observations. Single-dish maps were made at the Five College Radio Astronomy Observatory<sup>2</sup><sup>2</sup>2FCRAO is supported in part by the National Science Foundation under grant AST9420159 and is operated with permission of the Metropolitan District Commission, Commonwealth of Massachusetts (FCRAO) 14 m telescope in December 1996 using the QUARRY 15-beam array receiver and the FAAS backend consisting of 15 autocorrelation spectrometers with 1024 channels set to an effective resolution of 24 kHz (0.06 km/s). The observations were taken in frequency-switching mode and, after folding, third-order baselines were subtracted. The pointing and focus were checked every 3 hours on nearby SiO maser sources.
The FWHM of the telescope beam is $`50^{\prime \prime }`$, and a map covering $`6^{}\times 8^{}`$ was made at Nyquist ($`25^{\prime \prime }`$) spacing. Observations were subsequently made with the 10-antenna Berkeley-Illinois-Maryland array<sup>3</sup><sup>3</sup>3Operated by the University of California at Berkeley, the University of Illinois, and the University of Maryland, with support from the National Science Foundation (BIMA) for two 8-hour tracks in each line during April 1997 (CS) and October/November 1997 (N<sub>2</sub>H<sup>+</sup>). A two-field mosaic was made with phase center $`\alpha (2000)=18^\mathrm{h}29^\mathrm{m}47.^\mathrm{s}5,\delta (2000)=01^{}15^{}51\stackrel{}{\mathrm{.}}4`$ and a second slightly overlapping pointing at $`\mathrm{\Delta }\alpha =33\stackrel{}{\mathrm{.}}0,\mathrm{\Delta }\delta =91\stackrel{}{\mathrm{.}}0`$. Amplitude and phase were calibrated using 4-minute observations of 1751+096 (4.4 Jy) interleaved with each 22-minute integration on source. The correlator was configured with two sets of 256 channels at a bandwidth of 12.5 MHz (0.15 $`\mathrm{km}\mathrm{s}^{-1}`$ per channel) in each sideband and a total continuum bandwidth of 800 MHz. The flexible correlator setup allowed us to observe CH<sub>3</sub>OH($`2_1`$–$`1_1`$) in addition to CS(2–1), and C<sup>34</sup>S(2–1) along with N<sub>2</sub>H<sup>+</sup>(1–0): the methanol line was found to map the outflow associated with S68N (Wolf-Chase et al. 1998) but the C<sup>34</sup>S line was detected only marginally. The data were calibrated and maps produced using standard procedures in the MIRIAD package. Since the emission is extended, analysis of the spectra must correct for the spatial filtering properties of the interferometer. To allow for this, we combined the FCRAO and BIMA data using the task IMMERGE. Maps were compared within the region of visibility overlap (6 m to 14 m) and small pointing corrections were made to the FCRAO data ($`<6^{\prime \prime }`$, about a tenth of the beam), which was then scaled using a gain of 43.7 Jy K<sup>-1</sup>.<sup>4</sup><sup>4</sup>4Information regarding aperture efficiency measurements on the FCRAO 14 m telescope can be found on the World Wide Web at http://donald.phast.umass.edu/$``$fcrao/library/techmemos/gain96.html The resolution of the resulting maps was $`10\stackrel{}{\mathrm{.}}0\times 7\stackrel{}{\mathrm{.}}8`$ at p.a. $`72^{}`$ for CS which was observed twice in the compact C configuration and $`8\stackrel{}{\mathrm{.}}5\times 4\stackrel{}{\mathrm{.}}6`$ at p.a. $`+2^{}`$ for N<sub>2</sub>H<sup>+</sup> which was observed once in C configuration and once in the wider B configuration.
## 3 Analysis
### 3.1 S68N and S68NW
Analysis of the large scale maps is deferred to a later paper. Here, we restrict attention to a remarkable region, $`1\stackrel{}{\mathrm{.}}5\times 2^{}`$, around the S68N protostar. This source was discovered from earlier 3-element BIMA CS observations by McMullin et al. (1994) but we re-observed it with the 10-element array to obtain greater sensitivity and resolution. It was originally undetected in the 3 mm continuum but is readily apparent in the new data at an integrated flux level of 12.3 mJy, in agreement with OVRO observations by Testi & Sargent (1998). S68N has also been detected at shorter wavelengths and its spectrum was fit by a modified blackbody with dust temperature 20 K and luminosity $`5L_{}`$ by Wolf-Chase et al. (1998).
Maps of the integrated intensity of CS(2–1) and N<sub>2</sub>H<sup>+</sup>(1–0) around S68N are displayed in Fig. 1. The position of the 3 mm continuum peak, indicated by the star, lies at the center of the N<sub>2</sub>H<sup>+</sup> emission but is offset by $`7^{\prime \prime }`$ from the CS core. However, there is both high velocity emission from the outflow and self-absorption present in the CS spectra, which may skew the map of integrated intensity relative to the distribution of dense gas around the star. Equally striking in the CS map, however, is the presence of a compact core, hereafter S68NW, that lies $`50^{\prime \prime }`$ west of S68N. It is also present in the map of N<sub>2</sub>H<sup>+</sup> integrated intensity but is not nearly so prominent. It was not detected in the continuum to a $`3\sigma `$ sensitivity of 3.3 mJy beam<sup>-1</sup>, nor is it apparent in the slightly more sensitive OVRO observations of Testi & Sargent. It is also undetectable in maps at 1 mm (Casali et al. 1993; Tafalla & Mardones, private communication), at 12, 25, 60, and 100 $`\mu `$m in the Hurt & Barsony IRAS HIRES maps, in the near-infrared ($`2\mu `$m; Eiroa & Casali 1992), or in the Digital Sky Survey. These observations constrain the luminosity of any embedded object in S68NW to be less than $`0.5L_{}`$.
### 3.2 Abundance differences
Fig. 1 suggests a difference in the chemistry between the star-forming core S68N and the starless core S68NW. We have estimated the abundance of N<sub>2</sub>H<sup>+</sup> in the two cores by comparing the mass of N<sub>2</sub>H<sup>+</sup> derived from the integrated emission with the virial mass derived from the size and linewidth. Given the compact appearance of the cores, the assumption of virialization is unlikely to be greatly in error. We define the boundaries of each core as the FWHM contour of the N<sub>2</sub>H<sup>+</sup> maps and calculate sizes, linewidths, and integrated emission within these limits. Core properties are listed in Table 1. The inferred virial N<sub>2</sub>H<sup>+</sup> abundance is much lower in S68NW than in S68N. This is due to a combination of a smaller size, greater linewidth, and lower integrated intensity in S68NW, but the relatively low emission is the dominant factor. In the following section, infall model fits do not find such an extreme abundance difference between the two but nevertheless confirm that the N<sub>2</sub>H<sup>+</sup> abundance in S68NW is unusually low, $`1\times 10^{-10}`$, compared to $`4\times 10^{-10}`$ in other dense cores (Womack, Ziurys, & Wyckoff 1992; Ungerechts et al. 1997). A potential explanation that suits the starless nature of S68NW is chemical evolution: Bergin et al. (1997) show that whereas CS forms very quickly in a dense core, it takes $`10^5`$ yr to form substantial amounts of N<sub>2</sub>H<sup>+</sup>. Observations of other time-sensitive molecular species such as HC<sub>3</sub>N offer a test of this hypothesis.
### 3.3 Spectral line modeling
The majority of the CS spectra in this region are double-peaked and the magnitude of the dips between the peaks tends to increase closer to the core centers. Average spectra within the FWHM contour of N<sub>2</sub>H<sup>+</sup> emission for each core are displayed in Fig. 2. Unlike the case of L1544 (Williams et al. 1999), the N<sub>2</sub>H<sup>+</sup> spectra are not self-absorbed and we use these data to determine the velocity and linewidth of the cores.
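The virial estimate used in § 3.2 above amounts to a few lines of arithmetic. The following sketch (Python; the uniform-density geometry factor of 5 in the virial mass and all of the input numbers are illustrative assumptions, not the Table 1 values) shows the form of the calculation.

```python
import numpy as np

G = 4.302e-3      # gravitational constant in pc (km/s)^2 / Msun
m_H = 1.674e-24   # g
Msun = 1.989e33   # g

def virial_mass(R_pc, dv_fwhm):
    """M_vir = 5 R sigma^2 / G for a uniform-density sphere, with sigma
    the one-dimensional velocity dispersion (FWHM / 2.355); other density
    profiles change the prefactor by factors of order unity."""
    sigma = dv_fwhm / np.sqrt(8.0 * np.log(2.0))
    return 5.0 * R_pc * sigma**2 / G

M_vir = virial_mass(0.02, 1.0)      # e.g. R = 0.02 pc, FWHM = 1 km/s
print(f"M_vir ~ {M_vir:.1f} Msun")

# Abundance: number of N2H+ molecules implied by the integrated emission,
# divided by the number of H2 molecules in M_vir (2.8 amu per H2, He included)
N_H2 = M_vir * Msun / (2.8 * m_H)
N_mol = 2.0e47                      # hypothetical count from the line flux
print(f"X(N2H+) ~ {N_mol / N_H2:.1e}")   # ~1e-10 for these inputs
```

A lower integrated intensity at fixed size and linewidth thus translates directly into a lower inferred abundance, which is the sense of the S68NW result.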
For each core, the central dip in the CS spectrum lines up with the N<sub>2</sub>H<sup>+</sup> velocity, indicating that the CS emission is self-absorbed. The S68N spectrum shows prominent outflow wings but it is quite symmetric, in marked contrast to S68NW, for which the lower velocity (blue) peak is much brighter than the higher velocity (red) peak. For a radially decreasing excitation gradient, such as would exist for a centrally condensed core at constant kinetic temperature, this indicates that the outer self-absorbing gas is red-shifted (i.e., infalling) relative to the inner emitting region. To estimate the speed of the infalling gas, we have fit the spectra using a simple two-layer model, with the near side (to the observer) at low density and the far side at high density. This model resembles those discussed by Myers et al. (1996) and Williams et al. (1999): emission from the rear layer is absorbed by the lower-excitation front layer with the location of the absorption dependent on the relative velocity between the front and rear layers (i.e., the infall speed). Observations are used to constrain the models as much as possible: gaussian fits to the isolated N<sub>2</sub>H<sup>+</sup> hyperfine component are used to set the systemic velocity and linewidth of the core, the line-of-sight width of each layer is set equal to the measured radius of the cores, and the N<sub>2</sub>H<sup>+</sup> abundances are constrained to vary only within a factor of two of the virial estimates derived in the previous section. The free parameters are the densities of each layer, their common kinetic temperature, the molecular abundances, and the infall speed of the front layer onto the rear layer. In addition, a low optical depth component between the two layers was added to the S68N model to allow for the outflow, and a gaussian component about 2 $`\mathrm{km}\mathrm{s}^{-1}`$ from line center was used in the S68NW model to fit excess emission at low velocities. The model spectra are shown in relation to the observations in Fig. 2 and the model parameters are listed in Table 2. The rear layer densities are very similar, approximately equal to the critical density of the two transitions, and the foreground layer densities are comparable to each other and similar to the density of <sup>13</sup>CO emitting gas. The kinetic temperatures are the same for both cores and equal to the dust temperature of S68N as determined from the spectral energy distribution by Wolf-Chase et al. (1998). The CS abundances are also the same and consistent with observations of Orion by Ungerechts et al. (1997) but the fits require a smaller difference in N<sub>2</sub>H<sup>+</sup> abundance than the virial estimates derived in the previous section (note, however, that the N<sub>2</sub>H<sup>+</sup> abundance of S68NW is still very low). The parameter that is most different between the two cores is the infall speed. The low infall speed in S68N is implied by the near symmetry of the line profile and is little affected by the addition of the outflow component which is also quite symmetrical. The inferred infall speed for S68NW, however, is high because of the large blue-red asymmetry but its precise value is very sensitive to the strength of an additional gaussian component added at low velocities. This extra component appears to be physically unrelated to S68NW; it lacks a red counterpart and peaks in emission about 2 $`\mathrm{km}\mathrm{s}^{-1}`$ from the N<sub>2</sub>H<sup>+</sup> line.
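A minimal numerical version of this two-layer profile is easy to write down. In the sketch below (Python/numpy; the temperatures, optical depths, linewidth, and infall speed are illustrative placeholders rather than the fitted values of Table 2, and the cosmic background terms are dropped for clarity), a cool red-shifted front layer absorbs against a warmer rear layer, producing the characteristic blue-asymmetric, self-absorbed profile.

```python
import numpy as np

def J(T, nu=97.981e9):
    """Radiation temperature of a blackbody at temperature T (K)."""
    h, k = 6.626e-27, 1.381e-16          # cgs
    T0 = h * nu / k
    return T0 / (np.exp(T0 / T) - 1.0)

def two_layer(v, v_in=0.3, sigma=0.6, T_rear=9.0, T_front=4.0,
              tau_rear=3.0, tau_front=2.0):
    """Two isothermal layers: the rear (emitting) layer approaches the
    observer at v_in, the front (absorbing) layer recedes at v_in, as for
    a contracting core.  Velocities in km/s; CMB terms neglected."""
    t_r = tau_rear * np.exp(-0.5 * ((v + v_in) / sigma) ** 2)
    t_f = tau_front * np.exp(-0.5 * ((v - v_in) / sigma) ** 2)
    # rear emission attenuated by the front layer, plus front emission
    return J(T_rear) * (1 - np.exp(-t_r)) * np.exp(-t_f) \
         + J(T_front) * (1 - np.exp(-t_f))

v = np.linspace(-3.0, 3.0, 301)
T_B = two_layer(v)
print("blue/red peak ratio:", T_B[v < 0].max() / T_B[v > 0].max())  # > 1
```

Setting v_in = 0 makes the profile symmetric, which is essentially the S68N case; increasing v_in drives the blue-red asymmetry that the S68NW fit requires.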
Channel maps suggest that it is a second CS core, slightly offset along the line of sight. Its contribution to the integrated CS intensity within one linewidth of the central velocity of N<sub>2</sub>H<sup>+</sup> is less than 20% at its peak and is generally much less in other spectra. The addition of this extra component reduces the blue-red ratio required in the infall model resulting in a smaller infall speed: in the absence of this component, the inferred infall speed is about 0.5 $`\mathrm{km}\mathrm{s}^{-1}`$. Therefore, we believe that the value listed in Table 2, $`0.34`$ $`\mathrm{km}\mathrm{s}^{-1}`$, is a lower limit which implies that the S68NW core is contracting supersonically ($`v_{\mathrm{in}}/\sigma _{\mathrm{thermal}}(\mathrm{H}_2)>1`$) although not necessarily super-Alfvénically ($`v_{\mathrm{in}}/\sigma _{\mathrm{non}\mathrm{thermal}}`$ near 0.5). The implied mass infall rate of the front layer onto the rear layer is $`1\times 10^{-7}M_{}`$ yr<sup>-1</sup>.
## 4 Discussion
The data presented here indicate that S68NW is a turbulent core in the process of contraction and increasing its mass substantially. Furthermore, its proximity to the S68N core and embedded protostar suggests that S68NW may soon form a low-mass star, as the next part of the sequence of low-mass star formation events which have occurred in the Serpens complex over the last Myr. If so, the formation of stars and cores in Serpens may substantially overlap in time, in contrast to the idea that star formation in clusters is coeval, such as in response to a single triggering event (e.g. Zinnecker, McCaughrean & Wilking 1993). Instead, these observations imply that the cloud is forming a core while its already formed cores are still forming stars. If so, core formation and star formation may have relatively similar timescales, each shorter than the overall cluster formation timescale. The instantaneous collapse timescale, $`t_{\mathrm{coll}}=6200\mathrm{AU}/0.34\mathrm{km}\mathrm{s}^{-1}10^5\mathrm{yr}`$, is approximately equal to the free-fall time for gas of density $`n(\mathrm{H}_2)`$ near $`10^5`$ cm<sup>-3</sup>. Such dynamic motions are achieved in models of ambipolar diffusion only at very late times, $`10^7`$ yr (Basu & Mouschovias 1994; Ciolek & Mouschovias 1995; Li 1998), which appears to be inconsistent with the low N<sub>2</sub>H<sup>+</sup> abundance. In addition, the supersonic infall speed requires either low ionization levels, $`x_e<10^{-8}`$ at $`n(\mathrm{H}_2)=3\times 10^4`$ cm<sup>-3</sup> (Basu & Mouschovias 1995a), significantly less than measured in cores in Taurus (Williams et al. 1998) and Orion (Bergin et al. 1999), or a weak magnetic field, $`B`$ of order 10 $`\mu `$G at $`n(\mathrm{H}_2)=5\times 10^3`$ cm<sup>-3</sup> (Basu & Mouschovias 1995b) which would imply a smaller Alfvén speed, about 0.2 $`\mathrm{km}\mathrm{s}^{-1}`$, than the observed linewidth. Such a weak field at these relatively high densities may also conflict with HI Zeeman measurements of similar size fields at much lower gas densities in Ophiuchus (Goodman & Heiles 1994). Finally, ambipolar diffusion models do not predict the large size scales over which infall occurs: asymmetric, self-absorbed CS line profiles extend well beyond the FWHM N<sub>2</sub>H<sup>+</sup> contour, indicating detectable inward motions over about 0.1 pc. Similarly large infall zones have also been observed in the isolated core L1544 (Tafalla et al. 1998) and in the cluster forming regions, L1251B and NGC1333–IRAS4 (Mardones 1998).
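As a quick check of the numbers quoted above, the collapse and free-fall timescales can be recomputed in a few lines (Python; the mean gas mass of 2.8 amu per H<sub>2</sub> molecule, helium included, is our assumption):

```python
import numpy as np

AU, yr = 1.496e13, 3.156e7        # cm, s
G, m_H = 6.674e-8, 1.674e-24      # cgs

# Instantaneous collapse timescale: infall radius / infall speed
t_coll = 6200 * AU / 0.34e5 / yr
print(f"t_coll ~ {t_coll:.1e} yr")              # ~ 1e5 yr

# Free-fall time at n(H2) = 1e5 cm^-3
rho = 1.0e5 * 2.8 * m_H
t_ff = np.sqrt(3.0 * np.pi / (32.0 * G * rho)) / yr
print(f"t_ff   ~ {t_ff:.1e} yr")                # ~ 1e5 yr, comparable
```

The two timescales indeed agree to within a factor of order unity.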
A possible explanation is that the collapse front propagates outward at the non-thermal, rather than the thermal, sound speed (e.g. Myers & Fuller 1993), in which case the ratio of infall speed to effective sound speed remains comfortably within the bounds of ambipolar diffusion models. However, this requires that the non-thermal motions be maintained in the core over the ambipolar diffusion timescale, $`t_{\mathrm{AD}}`$ of order $`10^6`$ yr, but Nakano (1998) shows that the timescale for turbulent dissipation is approximately the same as the free-fall time and much less than $`t_{\mathrm{AD}}`$ in the absence of any internal driving sources. Indeed, it may be the decay of the non-thermal motions, with a corresponding loss of pressure support, that drives the fast inward motions in S68NW (Myers & Lazarian 1998) over the large observed size scales. Although we do not observe a decrease in the N<sub>2</sub>H<sup>+</sup> velocity dispersion toward S68NW, this does not preclude the existence of a small, less turbulent central core. Observing such an object remains a challenge for the future. Note that in this scenario, the timescale of the next stage of star formation would be that of ambipolar diffusion if the core were magnetically subcritical, or would be dynamical if the core were magnetically supercritical. It will be useful to determine the incidence of cores like S68NW in other star-forming regions, and to study such cores in lines sensitive to a wide range of gas density. This research was partially supported by the NASA Origins of Solar Systems Program, grant NAGW-3401. JPW thanks the Radio Astronomy Laboratory at the University of California at Berkeley for support during the writing of the manuscript and Chris McKee and Frank Shu for informative discussions. Conversations with Shantanu Basu, Glenn Ciolek, and Zhi-Yun Li are also gratefully acknowledged.
# Ground state wavefunction of the quantum Frenkel-Kontorova model
## 1 Figures
# Memory interference effects in spin glasses
## 1 Introduction
The non-equilibrium character of the dynamics of 3d spin glasses below the zero field phase transition temperature has been extensively studied by both experimentalists and theorists sgrf . Two different main tracks have been used to describe the aging and non-equilibrium dynamics that is characteristic of spin glasses. On the one hand there are phase space pictures hierarki , which originate from mean field theory mpv and prescribe a hierarchical arrangement of metastable states, and on the other hand real space droplet scaling models, which have been developed from renormalisation group arguments FH ; Henk . Independently, a theoretical description (which we shall not discuss here) of aging effects CuKu1 and temperature variation effects CuKu2 has been given by the direct solution of the dynamical equations of mean-field-like models. When a spin glass is quenched from a high temperature (above $`T_g`$) to a temperature $`T_1`$ below $`T_g`$, a wait time dependence of the dynamic magnetic response is observed. This aging behaviour aging ; Sitges corresponds to a slow evolution of the spin configuration towards equilibrium. The ‘magnetic aging’ observed in spin glasses resembles the ‘physical aging’ observed in the mechanical properties of glassy polymers polymer ; cavaille , or the ‘dielectric’ aging found in supercooled liquids nagel and dielectric crystals KTNKLT . However, a more detailed comparison of aging in these various systems would show interesting differences nagel , particularly with respect to the effect of the cooling rate uppsalaSaclay . Remarkable influences of slight temperature variations on the aging process in spin glasses have been evidenced in a wide set of earlier experiments hierarki ; Sitges ; UppsDT ; DjurCuMn . These influences were further elucidated through the memory phenomena recently reported in uppsalaSaclay . The results have been interpreted both from ‘phase space’ and ‘real space’ points of view. In phase space models, aging is pictured as a random walk among the metastable states. At a given temperature $`T_1`$, the system samples the valleys of a fixed free-energy landscape. On the basis of the experimental observations, it has been proposed hierarki that the landscape at $`T_1`$ corresponds to a specific level of a hierarchical tree. When lowering the temperature to $`T_2<T_1`$, the observed restart of aging is explained as a subdivision of the free energy valleys into new ones at a lower level of the tree. The system now has to search for equilibrium in a new, unexplored landscape, and therefore acts at $`T_2`$ as if it had been quenched from a high temperature. On the other hand, the experiments show that when heating back from $`T_2`$ to $`T_1`$ the memory of the previous aging at $`T_1`$ is recovered. In the hierarchical picture, this is produced by the $`T_2`$-valleys merging back to re-build the $`T_1`$-landscape. In real space droplet pictures FH ; Henk , the aging behaviour at constant temperature is associated with a growth of spin glass ordered regions of two types (related by time-reversal symmetry). This is combined with a chaotic behaviour as a function of temperature braymoore , i.e. the equilibrium spin configuration at one temperature is different from the equilibrium configuration at another temperature.
However, there is also an overlap between the equilibrium spin structures at two different temperatures, $`T`$ and $`T\pm \mathrm{\Delta }T`$, on length scales shorter than the overlap length, $`L_{\mathrm{\Delta }T}`$. In this picture, chaos implies that if the spin glass has been allowed to age a time, $`t_w`$, at a certain temperature, the aging process is re-initialized after a large enough temperature change. Intuitively, a growth of compact domains may not allow a memory of a high temperature spin configuration to remain imprinted in the system while the system ages at lower temperatures. However, as suggested in uppsalaSaclay and developed in jonssonetal , a phenomenology based upon fractal domains and droplet excitations may be able to incorporate the observed memory behaviour in a real space droplet picture. The possibility of a fractal (non-compact) geometry of the domains has been evoked in the past in various theoretical contexts fractaldiv , and also in close connection with the aging phenomena fractclust ; jpbdean ; mfnoneq . In this paper, we report new results on the memory phenomenon observed in low frequency ac-susceptibility of spin glasses. We first recall and demonstrate an undisturbed memory phenomenon, and then show that such a memory can be erased not only by heating the sample to a temperature above the temperature where the memory is imprinted, but also by waiting a long enough time below this temperature.
## 2 Experimental
The experiments were performed on the insulating spin glass $`CdCr_{1.7}In_{0.3}S_4`$ mtrl ($`T_g=16.7`$ K), in a Cryogenic Ltd S600 SQUID magnetometer at Saclay. The ac field used in the experiments had a peak magnitude of 0.3 Oe and frequency $`\omega /2\pi `$=0.04 Hz. This low frequency makes the relaxation of the susceptibility at a constant temperature in the spin glass phase clearly visible (at the laboratory time scale of $`10^1`$ to $`10^5`$ s). The basic experimental procedure is illustrated in Fig. 1 and is as follows: (i) Cooling: The experiments are always started at 20 K, a temperature well above the spin glass temperature $`T_g`$=16.7 K. The ac-susceptibility is first recorded as a function of decreasing temperature. The sample is continuously cooled, but is additionally kept at constant temperature at two intermittent temperatures $`T_1`$ and $`T_2`$ for wait times $`t_{w1}`$ and $`t_{w2}`$, respectively ($`T_2<T_1<T_g`$). (ii) Heating: When the lowest temperature has been reached, the system is immediately continuously re-heated and the ac-susceptibility is recorded as a function of increasing temperature. Except at $`T_1`$ and $`T_2`$ when decreasing the temperature, the cooling and heating rates are constant ($`0.1`$ K/min.). At constant temperature, both components of the ac-susceptibility relax downward by about the same absolute amount. However, the relative decay of the out-of-phase component is much larger than the relative decay of the in-phase component, and in the following we mainly focus on results from the out-of-phase component of the susceptibility.
## 3 Results
### 3.1 Double memory
The results of a double memory experiment are presented in Fig. 2. The initial data is recorded on continuously cooling the sample, including a first halt at the temperature $`T_1`$=12 K (0.72$`T_g`$) for $`t_{w1}`$=7 $`hrs`$ and a second halt at $`T_2`$=9 K (0.54$`T_g`$) for $`t_{w2}`$=40 $`hrs`$.
The cooling is then continued to $`T`$=5 K, from where a new set of data is taken on increasing the temperature at a constant heating rate without halts. A reference curve, measured on continuous heating after cooling the sample without intermittent halts, is included in the figure. A first important feature can be noted on the curve recorded on cooling with intermittent halts. After aging 7 $`hrs`$ at 12 K, $`\chi ^{\prime \prime }`$ has relaxed downward due to aging. But when cooling resumes, the curve rises and merges with the reference curve, as if the aging at 12 K had no influence on the state of the system at lower temperatures. This chaos-like effect (in reference to the notion of chaos in temperature introduced in braymoore ) points out an important difference from a simpler description of glassy systems, in which there are equivalent equilibrium states at all low temperatures and aging at any temperature implies that this equilibrium state is further approached. Here, as pictured in more detail in other experiments uppsalaSaclay , only the last temperature interval of the cooling procedure contributes to the approach of the equilibrium state at the final temperature. Note that this notion of ‘last temperature interval’ depends on the observation time scale of the measurement ($`\chi ^{\prime \prime }`$ is only sensitive to dynamical processes with a characteristic response time of order $`1/\omega `$, which in these experiments corresponds to about 4 s). In magnetisation relaxation experiments, the observation time corresponds to the time elapsed after the field change, and effects of aging at a higher temperature can be seen in the long-time part ($`10^3`$–$`10^4`$ s) of the relaxation curves marcos ; DjurCuMn in a correspondingly enlarged ‘last temperature interval’ compared to that of our current ac-susceptibility experiments. The curve recorded on re-heating in Fig. 2 clearly displays the memory effect: the dips at $`T_1`$ and $`T_2`$ are recovered. The long wait time (40 $`hrs`$) at $`T_2`$=9 K has no apparent influence on the memory dip associated with $`T_1`$=12 K. This experiment displays a double memory where no interference effects are present, i.e. the two dips at $`T_1`$ and $`T_2`$ are, within our experimental accuracy, fully recovered when reheating the sample. A similar result has been obtained on a metallic Cu:Mn spin-glass sample uppsalaSaclay ; DjurCuMn , confirming the universality of aging dynamics in very different spin-glass realizations. This experimental procedure has been recently reproduced in extensive simulations of the 3d Edwards-Anderson model. Although weaker and more spread out in temperature, similar effects of a restart of aging upon cooling and of a memory effect upon heating have been found Takayama . The memory phenomenon is also observable in the in-phase susceptibility, $`\chi `$’. In the inset of Fig. 2, $`\chi `$’ is plotted in the region around $`T_1`$=12 K. A relaxation due to aging is visible, and it is also clear that when cooling resumes after aging, the $`\chi `$’ curve rather rapidly merges with the reference curve (chaos-like effect). Upon re-heating, the memory effect can be distinguished: the memory curve clearly departs from the reference in the 11–13 K range. The relative weakness of the deviation compared to the large $`\chi `$’ value can be understood in the following way.
Aging in spin glasses mainly affects processes with relaxation times of the order of the age of the system; processes with shorter relaxation times are already equilibrated, and processes with longer relaxation times are not active. $`\chi `$’ measures the integrated response of all short-time processes up to the observation time $`1/\omega `$, whereas $`\chi `$” only probes processes with relaxation times of order $`1/\omega `$. The relative influence of aging is thus smaller in $`\chi `$’ than in $`\chi `$”.
### 3.2 Memory erasing by heating
The memory of aging at $`T_1`$ remains imprinted in the system during additional aging stages at sufficiently lower temperatures, and is recovered when heating back to $`T_1`$. We have investigated what remains of this $`T_1`$-memory after heating up to $`T^{}>T_1`$ (keeping of course $`T^{}<T_g`$). The experiments were performed using only one intermittent stop at $`T_1=12`$ K for $`t_{w1}`$=3 $`hrs`$, and continuing the cooling to about 10 K. Then $`\chi ^{\prime \prime }`$ was recorded upon heating the sample to $`T^{}`$ and immediately re-cooling it to 10 K. The results are displayed in Fig. 3. For higher and higher $`T^{}`$, the memory dip at $`T_1`$ becomes weaker and weaker, finally fading out at $`T^{}`$ near 13 K. The additional shallow dips observed in a limited temperature region just below $`T^{}`$ and just above 10 K, the two temperatures where the temperature change is reversed, are due to the finite heating/cooling rate and to the overlap within this temperature range between the state created on heating (cooling) and the desirable state on re-cooling (re-heating) the sample jonssonetal .
### 3.3 Memory interference
The memory is also affected by aging at a lower temperature, provided this temperature is close enough to $`T_1`$ or the time spent there is long enough. In order to systematically investigate this interference effect, we have performed double memory experiments in which we have varied the parameters $`T_2`$ and $`t_{w2}`$ of the aging stage at the lower temperature, but kept the initial aging temperature $`T_1`$=12 K and wait time $`t_{w1}`$=3 $`hrs`$ fixed. Fig. 4 shows the results using a fixed value of $`t_{w2}`$=6 $`hrs`$ but two different values of $`T_2`$ (9.5 and 10.5 K). Two reference heating curves are added for comparison; one is recorded after a cooling procedure where no halts are made, and the other after cooling with only a single halt at $`T_1`$=12 K for 3 $`hrs`$ (‘single memory’). This latter curve is a reference for a pure memory effect at 12 K. The obtained 12 K dip is about the same for the pure memory and the double memory curve with $`T_2`$=9.5 K. However, in the experiment performed with $`T_2`$=10.5 K, the memory of the dip achieved at $`T_1`$=12 K has become more shallow. Thus, for a temperature difference $`\mathrm{\Delta }T`$=1.5 K, the memory of aging gets partly re-initialised, while for $`\mathrm{\Delta }T`$=2.5 K no re-initialisation is observed. Fig. 5 shows the results using a fixed value of $`T_2`$=10.5 K but two different values of the wait time at $`T_2`$: $`t_{w2}`$=6 $`hrs`$ and $`t_{w2}`$=12 $`hrs`$. The reference curves are the same as in Fig. 4. The longer the time spent at $`T_2`$, the larger is the part of the memory dip at $`T_1`$ that has been erased.
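The explanation given at the beginning of this section — that aging removes spectral weight near $`\tau =1/\omega `$, to which $`\chi `$” is maximally sensitive while $`\chi `$’ integrates over all faster processes — can be made concrete with a toy spectrum of relaxation times. In the sketch below (Python; the log-flat spectrum and the gaussian ‘hole’ carved out by aging are our assumptions, not a fit to the data), the absolute decreases of $`\chi `$’ and $`\chi `$” are comparable but the relative decay of $`\chi `$” is roughly an order of magnitude larger, as observed.

```python
import numpy as np

omega = 2 * np.pi * 0.04            # measurement frequency (rad/s)
tau = np.logspace(-6, 8, 4000)      # relaxation times (s)
dlntau = np.log(tau[1] / tau[0])

def chi(g):
    """chi' and chi'' for a superposition of Debye relaxators g(tau)."""
    wt = omega * tau
    return (np.sum(g / (1 + wt**2)) * dlntau,
            np.sum(g * wt / (1 + wt**2)) * dlntau)

g0 = np.ones_like(tau)              # log-flat spectrum before aging

# aging: deplete weight around tau ~ t_age = 1/omega
t_age = 1.0 / omega
g1 = g0 * (1 - 0.5 * np.exp(-0.5 * np.log(tau / t_age) ** 2))

for name, g in [("before", g0), ("after ", g1)]:
    c1, c2 = chi(g)
    print(f"{name} aging: chi' = {c1:6.2f}, chi'' = {c2:5.2f}")
# comparable absolute drops, but ~30% relative in chi'' vs ~4% in chi'
```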
## 4 Discussion
### 4.1 Memory and chaos effects in phase space pictures
Some effects related to these memory and memory interference phenomena have been explored in the past through various experimental procedures, in both ac and dc (magnetisation relaxation following a field change) measurements hierarki ; Sitges ; UppsDT . The hierarchical phase space picture hierarki has been developed as a guideline that accounts for the various results of the experiments. Although this hierarchical picture deals with metastable states as a function of temperature, it is obviously reminiscent of the hierarchical organization of the pure states as a function of their overlap in the Parisi solution of the mean field spin glass mpv . We want to recall that some more quantitative analyses of the experiments sacorbach have shown that the growth of barriers with decreasing temperature should be associated with a divergence of some barriers at any temperature below $`T_g`$. Thus, what can be observed of the organization of the metastable states might well be applicable to the pure states themselves. From a different point of view, another link between the hierarchical picture and mean field results has been proposed in a tree version of Bouchaud’s trap model jpbdean . The restart of aging when the temperature is again decreased after a halt at some value indicates that the free-energy landscape has been strongly perturbed. The metastable states have been reshuffled, but not in any random manner, because the memory effect implies a return to the previously formed landscape (with the initial population distribution) when the temperature is raised back. The restart of aging then corresponds to the growth of the barriers and the appearance of new ones, which subdivide the previous valleys into smaller ones where the system again starts some ab initio aging. This hierarchical ramification is easily reversed to produce the memory effect when the temperature is increased back. The full memory effect seen in the experiment of Fig. 2 requires a large enough temperature separation $`\mathrm{\Delta }T=T_1-T_2`$, as is shown by the memory interference effects displayed in Figs. 4 and 5. If one ignores the restart of aging at the lower temperature, the memory effect can be given a simple explanation: the slowing down related to thermal activation freezes all further evolution of the system. However, the memory effect takes place while important relaxations occur at lower temperatures, and it is clear that there must be some lower limit of $`\mathrm{\Delta }T`$ below which the evolution at $`T_2`$ influences the $`T_1`$ memory. From other measurements hierarki ; Sitges , it has been shown that in the limit of small enough $`\mathrm{\Delta }T`$’s (of order 0.1-0.5 K), the time spent at $`T_2`$ contributes essentially additively to the aging at $`T_1`$, as an effective supplementary aging time. In that situation of small $`\mathrm{\Delta }T`$, the landscape at $`T_2`$ is not very different from that at $`T_1`$ (large ‘overlap’); the same barriers are relevant to the aging processes, although they are crossed more slowly at $`T_2`$. But this is not the case in Figs. 4 and 5, where intermediate values of $`\mathrm{\Delta }T`$ have been chosen. The memory interference effect demonstrated in Figs. 4 and 5 is in agreement with earlier ac and dc experiments which used negative temperature cycling procedures hierarki ; UppsDT and intermediate magnitudes of $`\mathrm{\Delta }T`$.
Such experiments were performed so that the sample was first aged at $`T_1`$ for a wait time $`t_{w1}`$, and thereafter cooled to $`T_2`$ and kept there for a substantial wait time $`t_{w2}`$, after which it was re-heated to $`T_1`$, where the relaxation of the ac or dc signal was recorded. The results of these experiments are that a partial re-initialisation of the system has occurred, but that simultaneously a memory of the original aging at $`T_1`$ remains. In Figs. 4 and 5, the partial loss of the $`T_1`$ memory dip corresponds to such a partial reinitialisation. In this case of an intermediate $`\mathrm{\Delta }T`$ of order 1 K, there are indeed differences between the landscapes at both temperatures. Still, they are hierarchically related, since a memory effect is found. But the memory loss of Figs. 4 and 5 suggests that the free-energies of the bottom of the valleys are different at $`T_1`$ and $`T_2`$, meaning that the thermodynamic equilibrium phase is different from one temperature to another. As discussed previously uppsalaSaclay , the restart of aging when the temperature is lowered is suggestive of chaos between the equilibrium correlations at different temperatures. The conclusion from the current results is thus that the free-energies of the metastable states vary chaotically with temperature, which reinforces the idea of a ‘chaotic nature of the spin glass phase’ braymoore .
### 4.2 Towards an understanding of memory and chaos effects in real space
While these phase space pictures allow a good description of many aspects of the experimental results, a correct real space picture would form the basis for a microscopic understanding of the physics behind the phenomena. As aging proceeds, $`\chi `$” decreases, which means a decrease of the number of dynamical processes that have a time scale of order $`1/\omega `$. Aging corresponds to an overall shift of a maximum in the spectrum of relaxation times towards longer times, as was understood from the early observations of aging in magnetisation relaxation experiments lundgren83 . Thinking of the dynamics in terms of groups of spins which are simultaneously flipped, longer response times are naturally associated with larger groups of spins. In such ‘droplet’ FH and ‘domain’ Henk pictures, aging corresponds to the progressive increase of a typical size of spin glass domains. Difficulties are encountered in this real space description with ‘memory and chaos’ effects uppsalaSaclay . On the one hand, the restart of aging processes when the temperature is lowered indicates the growth of domains of different types at different temperatures. On the other hand, the memory of previous aging at a higher temperature can be retrieved; thus, the low temperature growth of domains of a given type does not irreversibly destroy the spin structures that have developed at a higher temperature. However, a heuristic interpretation of aging in spin glasses in terms of droplet excitations and growth of spin glass equilibrium domains, along the lines suggested in uppsalaSaclay and developed in jonssonetal , is perhaps able to include the memory phenomena discussed in this paper. The central idea of this phenomenology is that at each temperature there exists an equilibrium spin glass configuration that is twofold degenerate by spin-reversal symmetry. The simple picture of Fisher-Huse FH , where only compact domains are considered, is hard to reconcile with the memory effect reported here uppsalaSaclay .
As suggested in various theoretical work fractaldiv ; fractclust ; jpbdean ; mfnoneq , we assume that the initial spin configuration results in an interpenetrating network of fractal ‘up’ and ‘down’ domains of all sizes separated by rough domain walls. We furthermore propose to modify somewhat the original interpretation of the ‘overlap length’ $`L_{\mathrm{\Delta }T}`$. The standard picture states that the equilibrium configurations corresponding to two nearby temperatures $`T`$ and $`T\pm \mathrm{\Delta }T`$ are completely different as soon as one looks at a scale larger than $`L_{\mathrm{\Delta }T}`$. However, in a non-equilibrium situation, we believe that some fractal large scale (larger than $`L_{\mathrm{\Delta }T}`$) ‘skeletons’, carrying robust correlations, can survive the change of temperature, and are responsible for the memory effects. An assumption of this sort is, we think, needed to account for the existence of domains of all sizes within the initial condition. If the initial spin configuration were purely random as compared to the equilibrium one, then the problem would be tantamount to that of percolation far from the critical point, where only small domains exist. The allowed excitations in this system are droplets of correlated ‘up’ or ‘down’ regions of spins of all sizes. Within this model, the magnetisation of the sample in response to a weak magnetic field is caused by polarisation of droplets, and the out-of-phase component of the susceptibility directly reflects the number of droplet excitations in the sample with a relaxation time equal to $`1/\omega `$. In the following, the size of spin glass domains is denoted $`R`$ and the size of a droplet excitation $`L`$. As a function of time, the size of the excited droplets grows as $`L(T,t_a)`$. The effect of a droplet excitation on the spin configuration is different depending on the size and position of the droplet and the age of the spin glass system. A small droplet excitation $`L\ll R`$ is most probably just an excitation within an equilibrium spin glass configuration, yielding no measurable change of the spin system. An excitation of size $`L\gtrsim R`$ may (i) remove the surrounding domain wall separating an up domain from a down domain, (ii) slightly displace an existing domain wall, or (iii) just occur within an equilibrium spin configuration. After a few decades in time, the result of the numerous dispersed droplet excitations of sizes $`L\lesssim L(T,t_a)`$ is that most domains of size $`R\lesssim L(T,t_a)`$ are removed, whereas most larger domains remain essentially unaffected, only having experienced numerous slight domain wall displacements. In other words, in this picture, the structure of the large domains is unaffected by the dynamics. If the temperature is now changed to a temperature where the overlap length, $`L_{\mathrm{\Delta }T}`$, is smaller than the typical droplet size $`L(T_1,t_a)`$ active at the original temperature $`T_1`$, the new initial condition still leads to an interpenetrating network of up and down domains of all sizes relevant to the new temperature (chaos implies that the domain spin structure is different from the equilibrium configuration at $`T_1`$). A process similar to that discussed above now creates a spin configuration of equilibrium structure on small length scales, where only small domains (up to a size $`L(T_2,t_a)`$) are erased, leaving large ones essentially unaffected.
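The growth law $`L(T,t_a)`$ invoked here can be given a schematic quantitative form. Assuming thermally activated droplet dynamics with barriers growing as a power of $`L`$ (a Fisher–Huse-type ansatz; the exponent $`\psi `$, the attempt time $`\tau _0`$, and the overall normalisation below are illustrative assumptions, not fitted values), the length scale equilibrated after a time $`t`$ at temperature $`T`$ grows only logarithmically with time but linearly with temperature, which is what makes the memory dips so robust:

```python
import numpy as np

tau0 = 1e-12    # microscopic attempt time, s (assumed)
psi = 1.0       # barrier exponent (assumed)
Tg = 16.7       # K

def L_rel(T, t):
    """Activated growth, L(T,t) ~ [(T/Tg) ln(t/tau0)]^(1/psi), in units
    of an arbitrary microscopic length."""
    return ((T / Tg) * np.log(t / tau0)) ** (1.0 / psi)

hr = 3600.0
for T, t in [(12.0, 3*hr), (10.5, 6*hr), (10.5, 12*hr), (9.0, 40*hr)]:
    print(f"T = {T:4.1f} K, t = {t/hr:4.0f} hrs: L = {L_rel(T, t):.1f}")
# 9 K stays well below the 12 K / 3 hrs scale even after 40 hrs, while
# 10.5 K approaches it within hours -- the same ordering as in Figs. 4, 5
```

Within this reading, memory interference sets in once the lower-temperature excitations reach the length scales equilibrated during the first halt.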
On returning to the original temperature, a new domain pattern will exist on small length scales, corresponding to droplets with short relaxation times, and there will additionally remain an essentially unperturbed domain pattern on large length scales originating from the initial wait time at this temperature. The small length scale domains are rapidly washed out and a domain pattern equivalent to the original one is recovered. These dynamic features of the spin structure are reflected in the susceptibility experiments discussed above. The out-of-phase component gives a measure of the number of droplets that have a relaxation time equal to the observation time of the experiment, $`1/\omega `$. The decrease of the magnitude of the susceptibility with time implies that the number of droplets of relaxation time $`1/\omega `$ decays toward an equilibrium value obtained when ‘all’ domains of this size are extinguished and all subsequent droplet excitations of this size occur within spin glass ordered regions. The fact that a dip occurs when the temperature is recovered mirrors the fact that the long length scale spin configuration is maintained during an aging period at lower temperatures, where only droplet excitations on much smaller length scales are active. The memory interference effects are within this picture immediate consequences of the fact that the droplet excitations at the nearby temperature $`T_2`$ are allowed to reach the length scales of the original domain growth at $`T_1`$, leading to an enhanced number of droplet excitations of the size of this reconstructed domain pattern. When heating above the aging temperature (Fig. 3), outside the region of overlapping states, the equilibration processes at the high temperature reach longer length scales than at the aging temperature, and the memory of the equilibration at $`T_1`$ is rapidly washed out. On the other hand, in the experiments where the sample is cooled (Figs. 4 and 5) below $`T_1`$, the processes require a longer aging time to reach the length scales of the aging at $`T_1`$, and the interference effects become larger with increased time and higher temperature.
## 5 Conclusions
When cooling a spin glass to a low temperature in the spin glass phase, a memory of the specific cooling sequence is imprinted in the spin configuration and this memory can be recalled when the system is continuously re-heated at a constant heating rate uppsalaSaclay . E.g., in a ‘double memory experiment’, two intermittent halts, one at $`T_1`$ for a time $`t_{w1}`$ and another at $`T_2`$ for a time $`t_{w2}`$, are made while cooling the sample. Depending on the parameters $`T_2`$ and $`t_{w2}`$, it is possible to partly reinitialise (erase) or fully keep the memory of the halt at the higher temperature $`T_1`$. These memory and memory interference effects can on the one hand be incorporated in hierarchical models for the configurational energies at different temperatures, including a chaotic nature of the spin glass phase. On the other hand, a preliminary phenomenological real space picture has been proposed to account for the observed phenomena. To our mind, much remains to be done on the theoretical side to put this ‘fractal’ droplet picture on a firmer footing.
## 6 Acknowledgments
Financial support from the Swedish Natural Science Research Council (NFR) is acknowledged. We are grateful to L.F. Cugliandolo, T. Garel, S. Miyashita, M. Ocio and H. Takayama for valuable discussions, and to L. Le Pape for his technical support.
# 350 Micron Dust Emission from High Redshift Objects
## 1 Introduction
The study of molecular gas and dust is a significant observational tool for probing the physical conditions and star formation activity in local galaxies. Recent advances in observational techniques at submillimeter and millimeter wavelengths now permit such studies to be made at cosmological distances. As a result of the IRAS survey, many local objects are recognized as containing a large mass of dust and gas, such that the objects may be more luminous in the far-infrared (FIR) than in the optical. The question as to how many such objects there may be at cosmological distances, and whether they can account for the recently discovered FIR cosmic background (\[Puget et al. 1996\]; \[Fixsen et al. 1998\]; \[Hauser et al. 1998\]), is attracting much interest and spawning new instrument construction and new surveys (\[Hughes et al. 1998\]; \[Ivison et al. 1998\]; \[Kawara et al. 1998\]; \[Puget et al. 1999\]; \[Barger et al. 1998\]; \[Lilly et al. 1999\]). Existing submillimeter cameras on ground-based telescopes are not yet sensitive and large enough to detect distant objects at 350$`\mu `$m in arbitrary blank fields, e.g., the initial Caltech Submillimeter Observatory (CSO) SHARC survey, which achieved 100 mJy ($`1\sigma `$) over about 10 square arcminutes (\[Phillips 1997\]). However, such cameras can measure the 350$`\mu `$m flux of objects of known position. A step forward in the field was the recognition of IRAS F10214+4724 as a high redshift object ($`z=2.286`$) by Rowan-Robinson et al. (1991). However, it has proved difficult to find many such objects to study. On the other hand, quasars have sometimes proved to exist in the environs of dusty galaxies (\[Haas et al. 1998\]; \[Lewis et al. 1998\]; \[Downes & Solomon 1998\]). Omont et al. (1996b) have shown by means of a $`1300\mu `$m survey of radio-quiet quasars that the dust emission at high redshifts can be detected in a substantial fraction of their survey sources. In this letter we present measurements at 350$`\mu `$m towards a sample of 20 sources with redshifts $`1.8<z<4.7`$ which were selected from different surveys and studies (references are given in Tables 1 & 2). The sources had previously been detected at longer wavelengths, and are predominantly from the work of Omont et al. (1996b) and Hughes, Dunlop & Rawlings (1997). A wavelength of 350$`\mu `$m roughly corresponds to the peak flux density of highly redshifted dust emission (for $`z`$ near 3) from objects with temperatures of 40 to 60 K. Together with measurements at longer wavelengths, it strongly constrains the dust temperature and, hence, the dust mass and, especially, the luminosity of the object. Some of these results were first presented by Benford et al. (1998). In this paper, we assume $`\mathrm{H}_0=h_{100}\times 100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ and $`\mathrm{\Omega }_0=1`$.
## 2 Observations and Results
The measurements were made during a series of observing runs in 1997 February and October and 1998 January and April with the 10.4 m Leighton telescope of the CSO on the summit of Mauna Kea, Hawaii, during excellent weather conditions, with 225 GHz atmospheric opacities of $`<0.05`$ (corresponding to an opacity of $`<1.5`$ at 350$`\mu `$m). We used the CSO bolometer camera, SHARC, described by Wang et al. (1996) and Hunter, Benford, & Serabyn (1996). It consists of a linear 24-element close-packed monolithic silicon bolometer array operating at 300 mK.
During the observations, only 20 channels were operational. The pixel size is $`5^{\prime \prime }`$ in the direction of the array and $`10^{\prime \prime }`$ in the cross direction. The weak continuum sources were observed using the pointed observing mode with the telescope secondary chopping in azimuth by 30<sup>′′</sup> at a rate of 4 Hz. The telescope was also nodded between the on and the off beams at a rate of $``$0.1 Hz. The point-source sensitivity of SHARC at 350$`\mu `$m is $`1\mathrm{Jy}/\sqrt{\mathrm{Hz}}`$ and the beam size is $`9^{\prime \prime }`$ FWHM. All measurements were made at $`350\mu `$m with the exception of H 1413+117 which was also observed at $`450\mu `$m. Pointing was checked regularly on nearby strong galactic sources which also served as secondary calibrators, and was found to be stable with a typical accuracy of $`<3^{\prime \prime }`$. The planets Mars, Saturn and Uranus served as primary flux calibrators. The absolute calibration was found to be accurate to within 20%. Repeated observations of H$`\mathrm{\hspace{0.17em}1413}+117`$ and F$`\mathrm{\hspace{0.17em}10214}+4724`$ confirmed a relative flux accuracy of $`20\%`$. The data were reduced using the CSO BADRS software package. Typical sensitivities ($`1\sigma `$) of $``$20 mJy were achieved, after $`2500`$ s of on-source integration time. Nine sources were detected at levels of $`4\sigma `$ and above, as outlined in Table 1. Included are the $`z>\mathrm{\hspace{0.17em}4}`$ quasars BR 1202$``$0725, BRI 1335$``$0417 and HM 0000$``$263. Except for the Cloverleaf (H 1413+117; \[Barvainis, Antonucci, & Coleman 1992\]), the present measurements are the first reported detections for high redshift quasars at 350$`\mu `$m. Many of the sources were measured three or more times providing both consistency checks and improvements in the accuracy of the flux densities. The two strongest sources, H 1413+117 and IRAS F10214+4724, were often measured before starting the long ($``$2–3 hours) integrations on the weaker sources. As an illustration of the data quality, Figure 1 shows the 350$`\mu `$m CSO–SHARC measurement towards BR 1202$``$0725 at $`z=\mathrm{\hspace{0.17em}4.69}`$. This measurement corresponds to a total of 4 hours integration on source. The source is centered at offset zero. The other channels provide a measure of the neighboring blank sky emission and a reference for the quality of the detection. Eleven sources with redshifts between 1.8 and 4.5 were not detected at 350$`\mu `$m, with flux density upper limits at the 3$`\sigma `$ level of 30–125 mJy. Table 2 lists their names, redshifts, 350$`\mu `$m flux density measurements with $`\pm 1\sigma `$ errors, and a 3$`\sigma `$ upper limit to their luminosities (see below).

Fig. 1 – The 350$`\mu `$m flux density measured in the bolometers of the SHARC array towards BR1202$``$0725 at $`z`$=4.69. Offsets are given in arcsec with respect to the reference channel number 10. The source is centered at offset zero. Emission is also seen in the two neighboring pixels because the bolometers sample the diffraction pattern of the telescope with a Nyquist sampling.

TABLE 2
Sources with upper limits at 350$`\mu `$m

| Source | $`z`$ | S<sub>350</sub> (mJy, $`\pm 1\sigma `$) | L<sub>FIR</sub> ($`3\sigma `$, $`10^{12}h_{100}^{-2}L_{}`$) | Ref. |
| --- | --- | --- | --- | --- |
| BR2237$``$0607 | 4.56 | $``$$`5\pm 15`$ | $`<6`$ | 1 |
| BRI0952$``$0115 | 4.43 | $``$$`8\pm 22`$ | $`<8`$ | 1,2 |
| PSS0248+1802 | 4.43 | $`75\pm 22`$ | $`<9`$ | 3 |
| BR1117$``$1329 | 3.96 | $``$$`27\pm 13`$ | $`<4`$ | 1 |
| Q0302$``$0019 | 3.28 | $``$$`34\pm 21`$ | $`<5`$ | 4 |
| Q0636$`+`$680 | 3.18 | $`123\pm 38`$ | $`<9`$ | 5 |
| Q2231$``$0015 | 3.01 | $`65\pm 24`$ | $`<6`$ | 4 |
| MG0414$`+`$0534 | 2.64 | $`24\pm 35`$ | $`<8`$ | 6 |
| Q0050$``$2523 | 2.16 | $``$$`69\pm 42`$ | $`<9`$ | 4 |
| Q0842$`+`$3431 | 2.13 | $``$$`16\pm 10`$ | $`<2`$ | 1 |
| Q0838$`+`$3555 | 1.78 | $``$$`39\pm 19`$ | $`<4`$ | 1 |

References. — 1. Omont et al. (1996b); 2. Guilloteau et al. (1999); 3. Kennefick et al. (1995); 4. Hewett, Foltz & Chaffee (1995); 5. Sargent, Steidel & Boksenberg (1989); 6. Barvainis et al. (1998)
## 3 Discussion
Figure 2 displays the spectral energy distributions of the six sources detected at 350$`\mu `$m for which fluxes at two or more other wavelengths are available from the literature. In the following, we will first comment on individual sources and then discuss the physical properties of the objects. The two radio-quiet, $`z>4`$ quasars, BR 1202$``$0725 and BRI 1335$``$0417, are exceptional objects with large masses of gas (of order $`10^{11}\mathrm{M}_{}`$) which have been detected in CO by Omont et al. (1996a) and Guilloteau et al. (1997). Both are clearly detected at 350$`\mu `$m with flux densities of $`106\pm 7`$ and $`52\pm 8`$ mJy, respectively. However, BRI 0952$``$0115, which is the third $`z>4`$ quasar in which CO has been measured (\[Guilloteau et al. 1999\]), is not detected at 350$`\mu `$m at a 3$`\sigma `$ level of 65 mJy. This upper limit is consistent with the weak flux density of BRI 0952$``$0115 at 1.3 mm, $`2.8\pm 0.6\mathrm{mJy}`$ (\[Omont et al. 1996b\]; \[Guilloteau et al. 1999\]), and a temperature of about 50 K (see below). The radio-quiet quasar HM 0000$``$263 at $`z=4.11`$, which, due to its low declination, was not detected at $`1.25\mathrm{mm}`$ using the 30 m telescope (\[Omont et al. 1996b\]), shows a large flux density at 350$`\mu `$m (134$`\pm `$29 mJy). Measurements at other wavelengths would be useful to further constrain the properties of this object. The detection of the $`z=3.8`$ radiogalaxy 4C41.17 with a flux density of $`37\pm 9`$ mJy at 350$`\mu `$m is one of the most sensitive measurements of this study. This sensitivity was reached after only 3/4 of an hour of on-source time and defines the limits which can be achieved with SHARC in the pointed observing mode under excellent weather conditions. A marginal detection ($`4\sigma `$) of PC 2047+0123, a $`z=3.80`$ quasar studied by \[Ivison (1995)\], was achieved at 350$`\mu `$m. Finally, the 350$`\mu `$m flux density of the Cloverleaf (H 1413+117) is significantly higher than the value published by Barvainis, Antonucci, & Coleman (1992), i.e. 293$`\pm `$14 mJy as compared to 189$`\pm `$56 mJy. We have also obtained for the Cloverleaf a 450$`\mu `$m flux density of $`226\pm 34\mathrm{mJy}`$ in excellent agreement with the measurement of $`224\pm 38\mathrm{mJy}`$ at $`438\mu `$m of Barvainis, Antonucci, & Coleman (1992), as shown in Figure 2.
A greybody was fit to the data points $`\mathrm{S}_\nu `$, for wavelengths of 350 $`\mu `$m and longward, as a function of the rest frequency $`\nu =\nu _{\mathrm{obs}}(1+z)`$, of the form
$$\mathrm{S}_\nu =\mathrm{B}_\nu \mathrm{\Omega }[1-\mathrm{exp}(-\tau )]\mathrm{with}\tau =(\nu /\nu _0)^\beta ,$$ (1)
where $`\nu _0=2.4\mathrm{THz}`$ is the critical frequency at which the source becomes optically thin and $`\mathrm{\Omega }`$ is the solid angle of emission. The shape of the fitted greybody is very weakly dependent on the value of $`\nu _0`$ (Hughes et al. 1993). The data were each weighted by their statistical errors in the $`\chi ^2`$ minimization. This yields the dust temperature, dust mass (following Hildebrand (1983), using a dust mass emission coefficient at $`\nu _0`$ of $`1.9\mathrm{m}^2\mathrm{kg}^{-1}`$), and luminosity of the sources. When $`\beta `$ is treated as a free parameter, we find an average value of $`\beta =1.5\pm 0.2`$ for the detected sources. The fits shown in Figure 2 assume an emissivity index of $`\beta =1.5`$. We estimated the $`1\sigma `$ uncertainty in the temperature by examining the $`\chi _\nu ^2`$ hypersurface in the range $`1\le \beta \le 2`$, similarly to the method of Hughes et al. (1993). To evaluate the uncertainties associated with the mass and luminosity, derived from the fitted temperature and $`\beta =1.5`$, we used the maximum and minimum values of the mass and luminosity which are compatible with the data plus or minus the statistical error. No lensing amplification was taken into account. The temperature, dust mass and luminosities derived under these assumptions are given in Table 1.

Two of the sources with upper limits have 1.25 mm detections (Omont et al. 1996b), which, together with the 350 $`\mu `$m data, yield an upper limit to their temperature. For Q 0842+3431, we find that $`\mathrm{T}_{\mathrm{dust}}<40`$ K, while for BR 1117$`-`$1329 a limit of $`\mathrm{T}_{\mathrm{dust}}<60\mathrm{K}`$ is found. If the dust is at the temperature limit, these quasars have dust masses $`<10^8M_{\odot }`$. For the other sources, an estimate of the maximum luminosity has been given under the assumption that each object has a temperature of 50 K and an emissivity index of $`\beta =1.5`$.

The total luminosity is probably underestimated, since a large luminosity contribution from higher temperature dust cannot be ruled out for most sources. However, in the case of H 1413+117 and IRAS F10214+4724, the available IRAS data allow us to fit an additional warm component. For IRAS F10214+4724, the cold component model carries roughly 60% of the total luminosity; in the case of H 1413+117, which has a hotter mid-IR spectrum, the total luminosity is underestimated by a factor of 3. Under the assumption that the majority of the luminosity is carried by the cold component (Table 1), the median luminosity-to-mass ratio is around 100 $`L_{\odot }/M_{\odot }`$, assuming a gas-to-dust ratio of $`\sim 500`$, similar to that of IRAS F10214+4724 and H 1413+117 (Downes et al. 1992; Barvainis et al. 1995) or of ultraluminous infrared galaxies (ULIRGs), i.e. $`540\pm 290`$ (Sanders, Scoville, & Soifer 1991).

The peak emission in the rest frame is found to be in the wavelength range $`\lambda _{\mathrm{peak}}\sim 60`$–$`80\mu `$m (Figure 2), implying dust temperatures of 40–60 K (Table 1). These temperatures are nearly a factor of two lower than previously estimated for ultraluminous sources (e.g. Chini & Krügel 1994).
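
As a numerical illustration, the consistency argument quoted above for BRI 0952$`-`$0115 can be reproduced by scaling its 1.3 mm flux density up to 350 $`\mu `$m with the greybody of Equation (1); the solid angle $`\mathrm{\Omega }`$ cancels in the ratio. This is our own sketch, using only values stated in the text ($`T=50`$ K, $`\beta =1.5`$, $`\nu _0=2.4`$ THz, $`z=4.43`$, $`S_{1.3\mathrm{mm}}=2.8`$ mJy):

```python
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8   # SI units

def greybody(nu_rest, T, beta=1.5, nu0=2.4e12):
    """Equation (1) up to the constant solid angle Omega:
    B_nu(T) * (1 - exp(-tau)), with tau = (nu/nu0)**beta."""
    tau = (nu_rest / nu0) ** beta
    planck = 2 * h * nu_rest**3 / c**2 / np.expm1(h * nu_rest / (k * T))
    return planck * -np.expm1(-tau)        # -expm1(-tau) = 1 - exp(-tau)

z, T = 4.43, 50.0                          # BRI 0952-0115
nu_350 = (c / 350e-6) * (1 + z)            # observed 350 um, rest frame
nu_1300 = (c / 1.3e-3) * (1 + z)           # observed 1.3 mm, rest frame
s_350 = 2.8 * greybody(nu_350, T) / greybody(nu_1300, T)
print(f"predicted S(350 um) ~ {s_350:.0f} mJy (3-sigma limit: 65 mJy)")
```

The predicted flux density, about 12 mJy, lies comfortably below the 65 mJy limit, in line with the statement that the non-detection is consistent with a $`\sim `$50 K temperature.
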
If the temperature range we find is typical for the cold component of highly redshifted objects, multi-band photometric studies in the submillimeter/FIR, such as those planned with FIRST, will provide reasonably accurate redshift estimates for the sources detected in deep field surveys.

The global star formation rate in each of the detected sources can be estimated using the relation of Thronson & Telesco (1986): $`\mathrm{SFR}\simeq \mathrm{\Psi }\times 10^{-10}(L_{\mathrm{FIR}}/L_{\odot })h_{100}^{-2}M_{\odot }\mathrm{yr}^{-1}`$ with $`\mathrm{\Psi }\approx `$ 0.8–2.1. For our mean luminosity of $`1.7\times 10^{13}h_{100}^{-2}L_{\odot }`$, this yields an SFR of $`\sim 2000M_{\odot }h_{100}^{-2}\mathrm{yr}^{-1}`$ (uncorrected for lensing) if all the submillimeter flux is from a starburst component. If we assume a final stellar mass of $`2\times 10^{12}M_{\odot }`$, a value appropriate to a giant elliptical like M87 (Okazaki & Inagaki 1984), then the timescale for formation in a single massive starburst is $`\sim 10^9h_{100}^2`$ yr. Given the large mass of dust already present in these quasars, a substantial amount of this star formation must already have occurred. For the most distant quasars, where the age of the universe is similar to the derived formation timescale, this implies a very high redshift ($`z>5`$) for the era of initial star formation, in agreement with models of high redshift Lyman-alpha emitters (Haiman & Spaans 1999).

Acknowledgments

The CSO is funded by the NSF under contract AST96-15025. We thank T.R. Hunter for help with the fitting/derivation programming and D. Downes for helpful comments. One of us (P.C.) acknowledges financial support from INSU (Programmes Grands Télescopes Etrangers) and PCMI.
## 1 Introduction

Let $`M`$ denote a compact oriented three-manifold. A plane field $`\eta `$ on $`M`$ is a subbundle of the tangent bundle $`TM`$ which associates smoothly to each point $`p\in M`$ a two-dimensional subspace $`\eta (p)\subset T_pM`$. Unlike line fields, a plane field cannot always be integrated to yield a two-dimensional foliation $`\mathcal{F}`$. A plane field is said to be integrable if it can be “patched together” to yield a foliation whose leaves are tangent to the plane field at each point. Certainly, such plane fields have strong topological and geometric properties. On the other hand, the case where the plane field $`\eta `$ is nowhere integrable can be equally important. A maximally nonintegrable (in the sense of Frobenius — see §5) plane field on an odd-dimensional manifold is a contact structure. Seen as an “anti-foliation”, contact structures are rich in geometric and topological properties which of late have become quite important in understanding the topology of three-manifolds and the symplectic geometry of four-manifolds.

Let $`X`$ be a vector field on $`M`$. The dynamics of $`X`$ are often related to global properties of $`M`$. If we further specify that $`X`$ is tangent to a plane field $`\eta `$ — that is, $`X(p)\in \eta (p)`$ for all $`p\in M`$ — then we might expect stronger relationships. We will consider the ways in which the topology and geometry of a plane field $`\eta `$ are coupled to the dynamics of vector fields contained in $`\eta `$. The general principle at work here as elsewhere is that simple dynamics implicate simple topological objects in dimension three. We will reassert this by examining the gradient flows within plane fields. The examination and classification of gradient flows has been ubiquitous in the study of manifolds: e.g., the h-cobordism theorem and the resolution of the high-dimensional Poincaré Conjecture. This paper will add to the typical scenario the constraint of lying within a plane field. Atypical restrictions on the dynamics and on the underlying manifold are born out of this.

We note that the problem of understanding gradient fields constrained to lie within plane fields is by no means unnatural. The study of mechanical systems with nonholonomic constraints is precisely the study of flows constrained to lie within a nowhere integrable distribution (i.e., in odd dimensions, a contact structure). For example, gradient flows for mechanical systems have been used successfully in the control of robotic systems (see, e.g., ): to maneuver a robot from point $`A`$ to point $`B`$ through a physical space replete with obstacles, one establishes a gradient flow on a suitable configuration space with $`B`$ as a sink, with $`A`$ in the basin of attraction for $`B`$, and with infinite walls along the obstacles. In this paper, we show that the nonholonomic version of this procedure possesses potentially difficult topological obstructions.

The paper is organized as follows: the remainder of this section provides a brief sketch of the requisite theory from the dynamical systems approach to flows. In §2, we commence our investigation of plane field flows by examining local and global properties of fixed points: fixed points will not be isolated, but must (on an open dense subset of $`C^r`$ vector fields tangent to $`\eta `$, $`r\ge 1`$) rather appear in links, or embedded closed curves. This culminates in a classification, given in §3, of gradient flows on three-manifolds which can lie within a plane field.
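
To make the notion of a flow constrained to a distribution concrete, the following small symbolic computation (our own illustration; the potential and the plane field are arbitrary choices, and the projected field is merely gradient-like, not one of the nondegenerate gradient fields studied below) projects an ambient gradient onto a plane field, which is the naive construction in the nonholonomic navigation problem just mentioned:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# plane field eta = ker(alpha) for alpha = dz + x dy (the standard
# contact form on R^3); as a conormal vector, alpha is a = (0, x, 1)
a = sp.Matrix([0, x, 1])
Psi = (x - 1)**2 + y**2 + z**2            # potential with a sink at (1,0,0)
grad = sp.Matrix([sp.diff(Psi, v) for v in (x, y, z)])

# orthogonal projection of grad(Psi) onto the plane field
X = grad - (a.dot(grad) / a.dot(a)) * a

print(sp.simplify(a.dot(X)))              # -> 0: X is tangent to eta
# grad(Psi).X = |grad|^2 - alpha(grad)^2/|a|^2 >= 0 (Cauchy-Schwarz),
# so Psi decreases along -X: a descent direction inside the constraint
print(sp.simplify(grad.dot(X)))
```

Whether such a constrained descent can be arranged globally, with prescribed sinks and no spurious recurrence, is the kind of topological question taken up below for genuine gradient fields lying within plane fields.
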
The existence of such flows is equivalent to the existence of a certain type of round handle decomposition for the manifold (see Definition 3.2). Surprisingly, this same restriction appears when considering energy surfaces for (Bott-) integrable Hamiltonian flows .

Theorem: Let $`M`$ be a compact 3-manifold outfitted with a plane field $`\eta `$. If $`X`$ is a nondegenerate (see Definition 2.9) gradient field tangent to $`\eta `$, then $`X`$ lies in the boundary of the space of nonsingular Morse-Smale flows on $`M`$. Furthermore, the set of fixed points for $`X`$ forms the cores of an essential round handle decomposition for $`M`$.

This leads to the corollary (a stronger form of which is proved in §3):

Corollary: Non-gradient dynamics is a generic (residual) property in the class of $`C^r`$ ($`r\ge 4`$) vector fields tangent to a fixed $`C^r`$ plane field on a closed hyperbolic three-manifold.

In §4, we consider the manifestation of these restrictions on a knot-theoretic level for the particular case of the 3-sphere.

Theorem: For $`X`$ a nondegenerate gradient plane field flow on $`S^3`$, each connected component of the fixed point set of $`X`$ is a knot whose knot type is among the class generated from the unknot by the operations of iterated cabling and connected sum.

We proceed with remarks on two cases in which the plane field carries additional geometric structure: first, the case of an everywhere integrable plane field, i.e., a foliation; and second, the case of a maximally nonintegrable plane field, i.e., a contact structure. The property of carrying a gradient flow in a foliation forces the foliation to be taut; hence, there are no (nondegenerate) gradient flows within a foliation on $`S^3`$. More generally, we have the following restrictions on the underlying three-manifold:

Theorem: A closed orientable three-manifold containing a nondegenerate gradient field within a $`C^r`$ ($`r\ge 2`$) codimension-one foliation must be a surface bundle over $`S^1`$ with periodic (or reducibly periodic) monodromy map.

The corresponding restrictions do not hold for the contact case. We demonstrate that gradient fields can always reside within the analogue of a non-taut foliation: an overtwisted contact structure. We close with two questions on the higher dimensional versions of the results of this paper.

### 1.1 The dynamics of flows

Ostensibly, flows within a plane field would appear to be a relatively restricted class of objects. However, the dynamics of such flows can exhibit behaviors which range from strictly two-dimensional dynamics (as when the plane field yields a foliation by compact leaves) to fully three-dimensional phenomena (e.g., an Anosov flow, which is tangent to a pair of transverse integrable plane fields). In §2, we show that near a fixed point of a plane field flow, the dynamics are locally “stacked” planar dynamics. In contrast, it is a simple exercise in homotopy theory that every nonsingular flow on $`S^3`$ (or any integral homology 3-sphere) lies within a plane field.

A few definitions are important for the dynamical systems theory used in this paper. The most important aspect of a flow with respect to its geometry and dynamics is the notion of hyperbolicity.
Recall that an invariant set $`\mathrm{\Lambda }\subset M`$ of a flow $`\varphi ^t`$ is hyperbolic if the tangent bundle $`TM|_\mathrm{\Lambda }`$ has a continuous $`\varphi ^t`$-invariant splitting into $`E^\varphi \oplus E^s\oplus E^u`$, where $`E^\varphi `$ is tangent to the flow direction, and $`D\varphi ^t`$ uniformly contracts and expands along $`E^s`$ and $`E^u`$ respectively: i.e.,
$$\begin{array}{cc}\|D\varphi ^t(\text{v}^s)\|\le Ce^{-\lambda t}\|\text{v}^s\|\hfill & \text{for }\text{v}^s\in E^s\hfill \\ \|D\varphi ^{-t}(\text{v}^u)\|\le Ce^{-\lambda t}\|\text{v}^u\|\hfill & \text{for }\text{v}^u\in E^u\hfill \end{array},t>0,$$ (1)
for some $`C\ge 1`$ and $`\lambda >0`$. A flow $`\varphi ^t`$ which is hyperbolic on all of $`M`$ is called an Anosov flow. The existence of hyperbolic invariant sets greatly simplifies the analysis of the dynamics. The principal tool available is the Stable Manifold Theorem , which states that for a hyperbolic invariant set, the distributions $`E^s`$ and $`E^u`$ are in fact tangent to global stable and unstable manifolds: manifolds, all of whose points have the same backwards and forwards (resp.) asymptotic behavior. See any of the standard texts (e.g., ) for further information and examples.

## 2 Fixed points

In analyzing the dynamics and topology of a flow, one examines dynamical $`n`$-skeleta of increasing dimension: first the fixed points, then periodic and connecting orbits, lastly higher-dimensional invariant manifolds and attractors. This section concerns the typical distribution of fixed points for plane field flows.

###### Lemma 2.1

Given $`\eta `$ a $`C^r`$ plane field on $`M^3`$ and $`p\in M`$ there exists a neighborhood $`U\cong \mathbb{R}^3`$ of $`p`$ along with local coordinates $`(x,y,z)`$ on $`U`$ such that $`\eta =\text{ker}(\alpha )`$, where $`\alpha `$ is a one-form given by
$$\alpha =dz+g(x,y,z)dy,$$ (2)
for some function $`g`$ which vanishes at the origin. The space $`\mathrm{\Gamma }^r(\eta |_U)`$ of $`C^r`$ sections of $`\eta `$ on $`U`$ is isomorphic to $`C^r(\mathbb{R},C^r(\mathbb{R}^2,\mathbb{R}^2))`$, the space of $`C^r`$ arcs of $`C^r`$ planar vector fields.

Proof: That $`\alpha `$ exists is easy to derive (and is stated in ): choose coordinates $`(x,y,z)`$ so that $`\partial /\partial z`$ is transverse to $`\eta `$ on $`U`$. Then, after rescaling, $`\eta `$ is the kernel of $`dz+f(x,y,z)dx+g(x,y,z)dy`$. By a change of variables, one can eliminate $`f`$ and remove constant terms in $`g`$.

Parameterize $`U`$ as $`\{\mathbb{R}^2\times \{z\}:z\in \mathbb{R}\}`$. Given any 1-parameter family of functions $`F_z:\mathbb{R}^2\to \mathbb{R}^2`$, there is a well-defined vector field on $`U`$ given by
$$\begin{array}{c}\dot{x}=f_1(x,y,z)\hfill \\ \dot{y}=f_2(x,y,z)\hfill \\ \dot{z}=-g(x,y,z)f_2(x,y,z)\hfill \end{array}\text{ where }F_z(x,y)=(f_1(x,y,z),f_2(x,y,z)),$$ (3)
which lies within $`\eta `$ by Equation 2. Similarly, any vector field on $`U`$ contained in $`\eta `$ induces a 1-parameter family of planar vector fields $`F_z:\mathbb{R}^2\to \mathbb{R}^2`$ by inverting the above procedure. Since $`\partial /\partial z`$ is always transverse to $`\eta `$, zeros of $`F_z`$ correspond precisely with zeros of the induced vector field in $`\eta `$. Note finally that the correspondence is natural with respect to the $`C^r`$-topology (nearby families of planar vector fields induce nearby plane field flows and vice versa). ∎

###### Proposition 2.2

Let $`\eta `$ be a $`C^r`$ ($`r\ge 1`$) plane distribution on $`M`$ a compact 3-manifold, and let $`\mathrm{\Gamma }(\eta )`$ denote the space of $`C^r`$ sections of $`\eta `$.
Then on an open dense subset of $`\mathrm{\Gamma }(\eta )`$, the fixed point set is a smooth finite link of embedded circles.

Proof: From the Transversality Theorem (see [13, p. 74]) we know there is an open dense subset of sections of $`\eta `$ which are transverse to the zero section. The proposition clearly follows. ∎

###### Corollary 2.3

Let $`X`$ be any vector field on $`M^3`$ contained in the distribution $`\eta `$. Then any fixed point of $`X`$ is nonhyperbolic.

Proof: Hyperbolic fixed points are isolated and persist in $`C^1`$-neighborhoods of vector fields; hence, they cannot be perturbed to yield circles of fixed points. ∎

To analyze the dynamics near a curve of singularities, we show that for all but finitely many points, the dynamics are transversally hyperbolic; i.e., after ignoring the nonhyperbolic direction along the curve, the flow is hyperbolic along the tangent plane transverse to the curve. We then turn to classify the (codimension-1) bifurcations in the transverse behavior along a curve of singularities.

###### Proposition 2.4

Let $`X\in \eta `$ be a $`C^r`$ ($`r\ge 2`$) section of a $`C^r`$ plane field $`\eta `$. Then on a residual set of such vector fields, $`\text{Fix}(X)`$ is a link $`L`$ which is transversally hyperbolic with respect to all but finitely many $`p\in L`$.

Proof: By a standard argument (see [13, p. 74]) it suffices to show that there is an open cover $`\{U_i\}`$ of $`M`$ for which there is a residual set of sections of $`\eta |_{U_i}`$ with the desired property. Cover each $`p\in M`$ by a chart as in Lemma 2.1. On each chart, consider the map from $`\mathbb{R}^3`$ to $`\mathbb{R}^2`$ induced by a section of $`\eta `$. Extend this to a map into the 1-jet space $`J^1(\mathbb{R}^3,\mathbb{R}^2)`$ to capture information about the linearization of the flow. One may easily find a codimension three stratified subset $`S`$ of $`J^1(\mathbb{R}^3,\mathbb{R}^2)`$ on which a section will both vanish and be transversally nonhyperbolic. Thus by the Jet Transversality Theorem for $`C^2`$ maps we obtain a residual subset of sections of $`\eta `$ whose 1-jets transversally intersect $`S`$ at isolated points (which clearly must lie on $`L`$). ∎

###### Corollary 2.5

Under the hypotheses of Proposition 2.4, the singular link $`L`$ is transverse to $`\eta `$ at all but a finite number of points.

Proof: If the curve of singularities $`S`$ is tangent to the plane field $`\eta `$ at a point $`p`$, then $`p`$ is not transversally hyperbolic since the eigenvalue whose eigenvector points in the direction transverse to $`\eta `$ is zero (the vector field can have no component in the direction transverse to $`\eta `$). ∎

It is now a simple matter to classify the points at which the vector field is not transversally hyperbolic to the equilibria. Thanks to Lemma 2.1, this analysis reduces simply to bifurcation theory of fixed points in planar vector fields. In particular, there are precisely two ways in which a (generic) $`X\in \eta `$ can fail to be transversally hyperbolic at a point. Given any singular point $`p\in S`$, the transverse dynamics is characterized by the pair of transverse eigenvalues for the linearized flow: $`\lambda ^x`$ and $`\lambda ^y`$. Transverse hyperbolicity fails if and only if one or both of these eigenvalues has zero real part. Generically, this can occur in two distinct ways. First, $`\lambda ^x`$ and $`\lambda ^y`$ may be both real, and one of them goes transversally through zero: this is a saddle-node bifurcation.
Second, $`\lambda ^x`$ and $`\lambda ^y`$ may be a complex conjugate pair of eigenvalues which together pass through the imaginary axis transversally: this is a Hopf bifurcation. Again, these names correspond with analogous bifurcations of fixed points in planar vector fields.

###### Proposition 2.6

In the unfolding of a $`C^r`$-generic ($`r\ge 2`$) saddle-node bifurcation on a curve of fixed points in a plane field flow, there is a quadratic tangency between the plane field and the fixed point curve, along with a one-parameter family of heteroclinic connections between fixed points limiting onto the bifurcation point, as in Figure 1.

Proof: As per Lemma 2.1, choose a coordinate system $`(x,y,z)`$ on a neighborhood of the bifurcation point $`p`$ so that $`\partial /\partial z`$ is everywhere transverse to the plane field $`\eta `$. It is also clearly possible (via the Stable Manifold Theorem) to choose coordinates so that the $`x`$-direction corresponds to the eigenvector for the transversally hyperbolic eigenvalue $`\lambda ^x`$. By Lemma 2.1, the unfolding of this codimension-1 fixed point in a plane field flow corresponds to the codimension-1 unfolding of a generic fixed point in a planar vector field having one hyperbolic eigenvalue and one eigenvalue with zero real part. The unfolding of the planar saddle-node is conjugate to the system
$$F_z:\begin{array}{c}\dot{x}=\lambda ^xx\hfill \\ \dot{y}=z-ay^2\hfill \end{array},$$ (4)
for some $`a\ne 0`$, which, under Equation 3, corresponds to the vector field within $`\eta `$
$$\begin{array}{c}\dot{x}=\lambda ^xx\hfill \\ \dot{y}=z-ay^2\hfill \\ \dot{z}=-g(x,y,z)(z-ay^2)\hfill \end{array}.$$ (5)
The curve of fixed points is thus a parabola tangent to $`\eta `$ at the bifurcation point. To show the existence of a family of heteroclinic curves from one branch of the parabola to the next, note that the planar vector fields $`F_z`$ have precisely this 1-parameter family of orbits. Upon “suspending” to obtain a vector field within $`\eta `$, the orbits remain, since the expression for $`\dot{z}=-g(x,y,z)(z-ay^2)`$ vanishes at $`(0,0,0)`$; hence, $`\dot{z}`$ is bounded near zero in a neighborhood of the bifurcation value and the integral curves within the invariant plane $`\dot{x}=0`$ must connect. ∎

###### Proposition 2.7

In the unfolding of a $`C^r`$-generic ($`r\ge 4`$) codimension-one Hopf bifurcation on a curve of fixed points in a plane field flow, there is an invariant attracting or repelling paraboloid which opens along the curve of fixed points as in Figure 2.

Proof: Since only the real portion of the transverse eigenvalues vanish, the curve of fixed points is transverse to the plane field in a neighborhood of the bifurcation point $`p`$. Hence, choose coordinates as per Lemma 2.1 such that the curve of fixed points is the $`z`$-axis and the bifurcation point is at $`(0,0,0)`$. Again, by Lemma 2.1, this bifurcation in a plane field flow corresponds precisely to the codimension-one Hopf bifurcation of planar vector fields, conjugate to the truncated normal form
$$F_z:\begin{array}{c}\dot{r}=zr+ar^3\hfill \\ \dot{\theta }=\omega \hfill \end{array},$$ (6)
where we have transformed $`(x,y)`$ to polar coordinates and the constant $`a`$ is (in the codimension-1 scenario) nonzero. Solving this equation for $`\dot{r}=0`$ yields the paraboloid $`r=\sqrt{-z/a}`$, which is either attracting or repelling, depending on the sign of the coefficient $`a`$.
By translating Lemma 2.1 into polar coordinates, it follows that $`\dot{z}`$ is of order $`r^2`$, which is less than $`\dot{r}`$; hence, adding the dynamics in the $`z`$-component affects neither the existence nor the attracting/repelling nature of the invariant paraboloid; however, unlike the planar case, the paraboloid is not necessarily fibered with closed curves. In general, orbits will spiral about the paraboloid. ∎

###### Remark 2.8

We note that saddle-node or Hopf bifurcations must occur in pairs, since the fixed point curves are circles and the index at a bifurcation changes. However, in the case where there are no saddle-node or Hopf bifurcations along the singular curve, the flow is everywhere transversally hyperbolic, and the index of the fixed points (source, saddle, or sink) is constant along the curve.

We conclude with the definition of a nondegenerate vector field tangent to a plane field, and prove that such vector fields are generic.

###### Definition 2.9

A nondegenerate section of a plane field $`\eta `$ is a vector field $`X\in \eta `$ whose fixed point set is a link having transversally hyperbolic dynamics at all but a finite number of points, at which the degeneracies are codimension one.

###### Proposition 2.10

Nondegenerate fields are generic (residual in the $`C^r`$ topology, $`r\ge 4`$) within the space of sections of a $`C^r`$ plane field $`\eta `$.

Proof: We simply repeat the argument in the proof of Proposition 2.4 using Propositions 2.6 and 2.7. ∎

## 3 Round handles and gradients

Let $`X`$ be a nondegenerate vector field contained in the plane field $`\eta `$. The goal of the remaining sections is to understand restrictions on the topology of 3-manifolds supporting plane field flows which are forced by prescribed dynamics. A well-known example of this occurs in the case of Anosov flows: certain three-manifolds are prohibited from carrying Anosov dynamics. In contrast, we examine obstructions associated to the simplest kinds of dynamics: gradient plane field flows. We show that only certain topologically “simple” manifolds support such dynamics. This will lead us to further knot-theoretic obstructions based on the singular links in a plane field flow. An old theme is played out: when the dynamics of $`X`$ are simple, the links associated to it are simple.

###### Lemma 3.1

Let $`M`$ denote an oriented Riemannian 3-manifold and $`X=\nabla \mathrm{\Psi }`$ a $`C^r`$ ($`r\ge 2`$) gradient vector field which lies within a $`C^r`$ plane field $`\eta `$ on $`M`$. Then $`\mathrm{\Psi }`$ is constant on each connected component of $`\text{Fix}(X)`$, the fixed point set of $`X`$. Furthermore, if $`c`$ is a regular value of $`\mathrm{\Psi }`$, then $`\mathrm{\Psi }^{-1}(c)`$ is a disjoint union of tori transverse to both $`X`$ and $`\eta `$.

Proof: Each component of $`\text{Fix}(X)`$ is a compact connected set of critical points for $`\mathrm{\Psi }`$, whose image under $`\mathrm{\Psi }`$ is a compact connected subset of $`\mathbb{R}`$ having measure zero, by the Morse-Sard Theorem. For $`c`$ regular, $`\mathrm{\Psi }^{-1}(c)`$ is a disjoint union of smooth surfaces, and $`X`$ is transverse to each component since $`X`$ is a gradient field. Hence, the plane field $`\eta `$ is everywhere transverse to $`\mathrm{\Psi }^{-1}(c)`$ and the resulting line field given by the intersection of $`\eta `$ and the tangent planes to $`\mathrm{\Psi }^{-1}(c)`$ in $`TM|_{\mathrm{\Psi }^{-1}(c)}`$ is nonsingular. Thus, the Euler characteristic of each component of $`\mathrm{\Psi }^{-1}(c)`$ is zero.
The transverse vector field $`X`$ gives an orientation to the surface, which excludes from consideration the Klein bottle. ∎

Grayson and Pugh give examples of $`C^\infty `$ functions on $`\mathbb{R}^3`$ whose critical points consist of a smooth link, yet for which the level sets are usually not tori: see Remark 4.5. The above mentioned restrictions on gradient plane fields translate into very precise conditions on the topology of the underlying three-manifold. The fact that the manifold consists of a finite number of thick tori $`T^2\times [0,1]`$ glued together in ways prescribed by $`\mathrm{\Psi }`$ implies that the manifold can be decomposed into solid tori in a canonical fashion: this phenomenon was identified and analyzed by Asimov and Morgan in the 1970’s in a completely different context.

###### Definition 3.2

A round handle (or RH) in dimension three is a solid torus $`H=D^2\times S^1`$ with a specified index and exit set $`E\subset T^2=\partial (D^2\times S^1)`$ as follows: for index 0, $`E=\emptyset `$; for index 1, $`E`$ is either (1) a pair of disjoint annuli on the boundary torus, each of which wraps once longitudinally, or (2) a single annulus which wraps twice longitudinally; for index 2, $`E=T^2`$.

###### Definition 3.3

A round handle decomposition (or RHD) for a manifold $`M`$ is a finite sequence of submanifolds
$$\emptyset =M_0\subset M_1\subset \cdots \subset M_n=M,$$ (7)
where $`M_{i+1}`$ is formed by adjoining a round handle to $`M_i`$ along the exit set $`E_{i+1}`$ of the round handle. The handles are added in order of increasing index.

Asimov and Morgan used round handles to classify nonsingular Morse-Smale vector fields: that is, vector fields whose recurrent sets consist entirely of a finite number of hyperbolic closed orbits with transversally intersecting invariant manifolds.

###### Theorem 3.4

Let $`\eta `$ denote a $`C^r`$ ($`r\ge 2`$) plane field on $`M^3`$ (compact) with $`X\in \eta `$ a $`C^r`$ nondegenerate gradient vector field. Then the set of fixed points for $`X`$ forms the cores of a round handle decomposition for $`M`$. Furthermore, the indices of the fixed points correspond to the indices of the round handles, and $`X`$ is transverse to $`\partial M_i`$ for all $`i`$.

Proof: Let $`L`$ denote the set of fixed points for $`X=\nabla \mathrm{\Psi }`$: this is an embedded link. We first show that every fixed point is transversally hyperbolic. From Remark 2.8, the only non-transversally hyperbolic points must occur as Hopf bifurcations or saddle-node bifurcations. Hopf bifurcations are associated to complex transverse eigenvalues, which cannot exist in a gradient flow. Similarly, a saddle-node bifurcation introduces a one-parameter family of heteroclinic connections as in Figure 1. This also cannot occur in a gradient flow, since by Lemma 3.1 we have the function $`\mathrm{\Psi }`$ constant on the curve of fixed points. The orbits of the flow which necessarily connect one side to the other cannot be obtained by flowing down a gradient. Hence, each singular curve is transversally hyperbolic with constant index.

Choose $`N`$ a small tubular neighborhood of $`L`$ in $`M`$ and let $`f`$ denote a bump function in $`N`$ which evaluates to 1 on $`L`$ and is zero outside of $`N`$. Orient the link $`L`$ and perturb $`X`$ to the new vector field $`X+ϵf\frac{\partial }{\partial z}`$, where $`\frac{\partial }{\partial z}`$ denotes the unit tangent vector along $`L`$. This yields a nonsingular flow which has $`L`$ as a set of hyperbolic closed orbits and no other recurrence.
After a slight perturbation to remove any nontransverse intersections of stable and unstable manifolds to $`L`$, this vector field is a nonsingular Morse-Smale field with periodic orbit link $`L`$. The work of Morgan then implies that $`L`$ forms the cores of a round handle decomposition for $`M`$, where the index of each handle corresponds to the transverse index of the curve of fixed points (source, saddle, or sink). In , it is moreover shown that the nonsingular Morse-Smale vector field is transverse to each $`\partial M_i`$; since the neighborhood $`N`$ is very small, this transversality remains in effect for $`X`$. ∎

###### Corollary 3.5

Gradient flows on plane fields in three-manifolds lie on the boundary of the space of nonsingular Morse-Smale fields.

Proof: In the proof of Theorem 3.4, let $`ϵ\to 0`$. This gives a one-parameter family of nonsingular Morse-Smale flows which converges to the gradient plane-field flow. ∎

###### Corollary 3.6

Non-gradient dynamics is a generic condition in the space of plane field flows on an irreducible non-graph three-manifold (e.g., a hyperbolic 3-manifold).

Proof: By the work of Morgan , round handle decompositions of irreducible three-manifolds exist only for the class of graph-manifolds. ∎

Recall that a graph manifold is a three-manifold given by gluing together Seifert-fibered spaces along essential torus boundaries. Examples include $`S^3`$, lens spaces, and manifolds with many $`S^2\times S^1`$ connected summands. The property of being composed of Seifert-fibered pieces (i.e., a graph manifold) is relatively rare among three-manifolds, the “typical” irreducible three-manifold being composed of hyperbolic pieces.

###### Remark 3.7

We may push Theorem 3.4 a bit further. Let $`\varphi ^t`$ be a plane field flow whose chain-recurrent set consists entirely of transversally hyperbolic curves of fixed points and a finite set of hyperbolic periodic orbits (note that hyperbolic periodic orbits can easily live within plane fields, even within nowhere integrable plane fields). This situation is, after the class of gradient flows, the next simplest scenario dynamically. Then, by the same proof, the connected components of the entire chain-recurrent set must form the cores of a round-handle decomposition. Hence, the additional dynamics forced upon plane field flows in a non-graph manifold is something other than hyperbolic periodic orbits.

## 4 The link of singularities

We have shown that fixed points of plane field flows appear in links. The natural question is which links can arise as the singular points, and what dependence is there upon the dynamics of the plane field flow. For nondegenerate gradient fields, it is an immediate corollary of Theorem 3.4 that the singular link is a collection of fibers in the Seifert-fibered portions of a graph manifold. We can be more specific, however, in the special case of $`S^3`$. We recall two standard operations for transforming simple knots into more complex knots: see Figure 3 for an illustration.

###### Definition 4.1

Let $`K`$ be a knot in $`S^3`$. Then the knot $`K^{\prime }`$ is said to be a $`(p,q)`$-cable of $`K`$ if $`K^{\prime }`$ lives on the boundary of a tubular neighborhood of $`K`$, wrapping about the longitude (along $`K`$) $`p`$-times and about the meridian (around $`K`$) $`q`$-times. Let $`K`$ and $`J`$ be a pair of knots in $`S^3`$.
Then the connected sum, denoted $`K\mathrm{\#}J`$, is defined to be the knot obtained by removing from each a small arc and identifying the endpoints along a band as in Figure 3.

###### Definition 4.2

The zero-entropy knots are the collection of knots generated from the unknot by the operations of cabling and connected sum; i.e., it is the minimal class of knots closed under these operations and containing the unknot.

Zero-entropy knots are relatively rare among all knots: e.g., none of the hyperbolic knots (such as the figure-eight knot, whose complement has a hyperbolic structure) are zero-entropy. The title stems from the often-discovered fact (see for history) that such knots are associated to three-dimensional flows with topological entropy zero.

###### Corollary 4.3

Given a nondegenerate gradient plane field flow on $`S^3`$, every component of the fixed point link is a zero-entropy knot.

Proof: Wada classifies the knot types for cores of all round handle decompositions on $`S^3`$. Each component is a zero-entropy knot. ∎

###### Remark 4.4

Much more can be said: Wada in fact classifies all possible links which arise as round handle cores on all graph manifolds. This class of zero-entropy links is an extremely restricted class, which lends credence to the motto that simple dynamics implicate simple links in dimension three. We note this same class of links appears independently in the study of nonsingular Morse-Smale flows, suspensions of zero-entropy disc maps, and in Bott-integrable Hamiltonian flows with two degrees of freedom.

###### Remark 4.5

It is possible to construct gradient flows on $`S^3`$ (for example) in which the fixed point set is an embedded link which is not a zero-entropy link. Let $`L_1`$ and $`L_2`$ denote any pair of links in $`S^3`$ which each have at least three components. Grayson and Pugh prove the existence of $`C^\infty `$ functions $`\mathrm{\Psi }_1,\mathrm{\Psi }_2:\mathbb{R}^3\to \mathbb{R}`$ which have $`L_1`$ and $`L_2`$ as the (respective) sets of critical points. Moreover, these functions are proper and, for large enough $`c\in \mathbb{R}`$, the inverse image of $`c`$ is a smooth 2-sphere near infinity. Hence, we may consider the balls $`B_i`$ bounded by $`\mathrm{\Psi }_i^{-1}(c)`$ and glue them together along the boundaries, obtaining $`S^3`$. The resulting function $`\mathrm{\Psi }`$, given by $`\mathrm{\Psi }_1`$ on $`B_1`$ and $`-\mathrm{\Psi }_2+2c`$ on $`B_2`$, has a gradient flow with the split (unlinked) sum of $`L_1`$ and $`L_2`$ as its fixed point set. Thus, this flow cannot live within a plane field.

It is not ostensibly clear that every zero-entropy link in $`S^3`$ is realized as the zero set of a gradient flow within a plane field. We close this section with a realization theorem for such flows which shows that, in fact, a particular subclass of round-handle decompositions (and, hence, zero-entropy links) is realized.

###### Lemma 4.6

If $`X`$ is a nondegenerate gradient field on $`M`$ contained in the plane field $`\eta `$, then each index-1 round-handle $`H`$ in the decomposition must be attached to $`M_i`$ along annuli which are essential (homotopically nontrivial) in $`\partial M_i`$.

Proof: Assume that $`M_i`$ is the $`i`$th stage in a round handle decomposition, and that $`H`$ is an index-1 round handle with an exit annulus $`E`$ which is essential in $`\partial H`$ by definition. By Theorem 3.4, the intersection of $`\eta `$ with $`\partial M_i`$ is always transverse.
Thus, if $`H`$ is attached to $`\partial M_i`$ along an annulus $`A\subset \partial M_i`$, then the foliations given by the intersections of $`\eta `$ with the tangent planes to $`A`$ and $`E`$ respectively must match under the attachment. We claim this is impossible when $`A`$ is homotopically trivial in $`\partial M_i`$.

Define the index of a smooth (oriented) curve $`\gamma `$ in an orientable surface with a (nonsingular, oriented) foliation $`\mathcal{F}`$ to be the degree of the map which associates to each point $`p\in \gamma `$ the angle between the tangent vectors to $`\gamma `$ and $`\mathcal{F}`$ at $`p`$. This index is independent of the metric chosen and also invariant under homotopy of $`\gamma `$ or of $`\mathcal{F}`$; hence, we can speak of the index of an annulus in a surface with foliation. When $`A`$ is homotopically trivial, the index must be equal to $`\pm 1`$, since a foliation is locally a product. However, the index of the exit annulus $`E\subset \partial H`$ must be zero, as follows. Under the gradient field $`X`$, the core of the 1-handle is a curve $`\kappa `$ of fixed points with transverse index 1 whose unstable manifold $`W^u(\kappa )`$ intersects $`\partial H`$ transversally along the core of the exit set $`E`$. Deformation retract $`E`$ to a small neighborhood of $`W^u(\kappa )\cap \partial H`$ — here, the intersections with $`\eta `$ are always transverse. Next, homotope the annulus to a neighborhood of $`\kappa `$ by integrating the gradient field $`X`$ backwards in time. This has the effect of taking the annulus transverse to $`W^u(\kappa )`$ and sliding along $`W^u(\kappa )`$ back to $`\kappa `$. Since $`X`$ points outwards along $`W^u(\kappa )`$, the image of the annulus $`E`$ under the homotopy is always transverse to $`X`$, and hence to $`\eta `$. The fact that $`\kappa `$ is transverse to $`\eta `$ then implies that the foliation on $`E`$ induced by $`\eta `$ must be homotopic through nonsingular foliations to a product foliation by intervals on the annulus, which implies that the longitudinal annulus $`E`$ has index zero. Note that this works for exit sets $`E`$ which wrap any number of times about the longitude of $`H`$ (to cover both types of index-1 round handles). ∎

Hence, any round-handle decomposition which is realizable as a gradient plane field flow must have all 1-handles attached along essential annuli. We call such a round-handle decomposition essential.

###### Theorem 4.7

Let $`M`$ be a compact 3-manifold with $`L`$ an indexed link. Then $`L`$ is realized as the indexed set of zeros for some nondegenerate gradient plane field flow on $`M`$ if and only if $`L`$ is the indexed set of cores for an essential RHD on $`M`$.

Proof: The necessity is the content of Lemma 4.6. Given any essential RHD, we construct a corresponding plane field gradient flow. One may begin with the fact proved by Fomenko that any essential round-handle decomposition can be generated by a vector field $`X`$ integrable via a Bott-Morse function $`\mathrm{\Psi }:M\to \mathbb{R}`$ with all critical sets being circles (see for a detailed exposition). After choosing a metric on $`M`$ we claim that $`\nabla \mathrm{\Psi }`$ lives within the plane field $`\eta `$ orthogonal to $`X`$. Indeed, away from $`L`$ the plane field $`\eta `$ will be spanned by $`\nabla \mathrm{\Psi }`$ and $`\nabla \mathrm{\Psi }\times X`$, since these are linearly independent vectors orthogonal to $`X`$ (recall $`X`$ is tangent to the level sets of $`\mathrm{\Psi }`$). Thus $`\nabla \mathrm{\Psi }`$ clearly lies in $`\eta `$ on the complement of $`L`$. Along $`L`$ the gradient $`\nabla \mathrm{\Psi }`$ lies in $`\eta `$ since it is zero. ∎
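
The linear algebra behind the preceding proof is elementary and can be checked symbolically. Below is a minimal verification (our own, for a model pair of our choosing: $`\mathrm{\Psi }`$ a height function and $`X`$ a rotation tangent to its level sets) that $`\nabla \mathrm{\Psi }`$ and $`\nabla \mathrm{\Psi }\times X`$ are both orthogonal to $`X`$, so that away from the singular set they span the plane field $`\eta `$ orthogonal to $`X`$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

Psi = z                                    # model Bott-Morse function
X = sp.Matrix([-y, x, 0])                  # tangent to the level sets of Psi
gradPsi = sp.Matrix([sp.diff(Psi, v) for v in (x, y, z)])

# X is tangent to the level sets of Psi: X . grad(Psi) = 0
print(sp.simplify(X.dot(gradPsi)))             # -> 0
# grad(Psi) x X is orthogonal to X (a triple-product identity), so
# {grad(Psi), grad(Psi) x X} spans the plane field orthogonal to X
print(sp.simplify(gradPsi.cross(X).dot(X)))    # -> 0
```
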
The above construction may be modified so as to force the plane field to twist monotonically along orbits of the gradient field, by a careful choice of the vector field $`X`$. This implies that all of the permissible round handle decompositions may be realized by a totally nonintegrable plane field (a contact structure). Totally integrable plane fields are not so flexible, as will be illustrated in the next section.

## 5 Flows on foliations and contact structures

### 5.1 Foliations

In the case where our given plane field has some geometrical property, we may further restrict the types of round-handle decompositions which may contain a gradient flow. For example, if the plane field $`\eta `$ is integrable, it determines a foliation on the manifold. In this subsection, we note that, in this case, $`S^3`$ cannot support such a gradient flow. This result, which is an obvious corollary of Novikov’s Theorem on foliations, generalizes to other three-manifolds.

Recall from the theory of foliations on three-manifolds (see, e.g., ) that a Reeb component is a foliation of the solid torus $`D^2\times S^1`$ that consists of the boundary $`T^2`$ leaf along with a one-parameter family of leaves, each homeomorphic to $`\mathbb{R}^2`$ and limiting onto the boundary with nontrivial holonomy, as in Figure 4. A codimension-one foliation of a three-manifold is taut if there do not exist Reeb components or “generalized” Reeb components (see, e.g., for definitions). An equivalent definition of taut is that given any leaf $`\mathcal{L}`$ there exists a closed curve through $`\mathcal{L}`$ transverse to the foliation. It is straightforward to show that gradient fields must lie within taut foliations:

###### Theorem 5.1

Let $`\mathcal{F}`$ denote a codimension-1 foliation on a compact three-manifold $`M`$ which contains a nondegenerate gradient vector field $`X`$. Then $`\mathcal{F}`$ is taut.

Proof: Assume that $`X=\nabla \mathrm{\Psi }`$ is a nondegenerate gradient field on a foliation $`\mathcal{F}`$. For $`\mathcal{L}`$ a leaf of $`\mathcal{F}`$, the restriction of $`X`$ to $`\mathcal{L}`$ must also be a gradient flow. In the case where $`\mathcal{L}`$ is compact, there must be a nondegenerate fixed point of $`X`$ on $`\mathcal{L}`$ which lies on a circle of fixed points transverse to $`\mathcal{L}`$ (note that in the case of a boundary torus in a Reeb component, this is an immediate contradiction). In the case where $`\mathcal{L}`$ is not a compact leaf, choose some nontrivial path $`\gamma `$ whose endpoints are directly above one another in a local product chart. Then, by perturbing $`\gamma `$ to be transverse to $`\mathcal{F}`$, we may close it up to a transverse loop through $`\mathcal{L}`$. ∎

This result can be greatly improved by considering the holonomy of the foliation. Recall that the holonomy of any closed curve $`\gamma :S^1\to \mathcal{L}`$ in a leaf $`\mathcal{L}`$ of a codimension-one foliation $`\mathcal{F}`$ is the germ of the Poincaré map associated to the characteristic foliation on an annulus transverse to $`\mathcal{F}`$ along $`\gamma `$. The holonomy of a curve is an invariant of its homotopy class within the leaf. A foliation has vanishing holonomy if the holonomy of every curve $`\gamma `$ is trivial (the identity).

###### Theorem 5.2

Any closed orientable three-manifold $`M`$ containing a nondegenerate gradient field within a ($`C^r`$ for $`r>1`$) codimension-one foliation is a surface bundle over $`S^1`$.

Proof: Suppose that $`M`$ admits a foliation $`\mathcal{F}`$ which supports a nondegenerate gradient field. Then, by Theorem 3.4, $`M`$ has a round handle decomposition where all the regular tori are transverse to the foliation.
The foliation on each round handle is equivalent to the product foliation by discs on $`D^2\times S^1`$, since these solid tori are filled with leaves transverse to the boundary, each having a gradient flow with a single fixed point. We show in subsequent steps that the foliation $`\mathcal{F}`$ may be modified within the round handle structure so that the new foliation $`\mathcal{F}^{\prime }`$ has no holonomy. Once we show this, the celebrated theorem of Sacksteder implies that this foliation must be topologically conjugate to the kernel of a closed nondegenerate 1-form on $`M`$ . The existence of this 1-form implies, via the theorem of Tischler , that $`M`$ must be a surface bundle over $`S^1`$. We illustrate in Figure 5 below that it is possible to have gradient fields within a foliation having holonomy, so it is truly necessary to develop the following modification procedure, which makes use of “shearing” the foliation along 1-handles.

Denote by $`M_0`$ the (disjoint) union of all the 0-handles and by $`M_i`$ ($`1\le i\le N`$) the subsequent stages in the decomposition:
$$M_i=\left(M_{i-1}\sqcup H_i\right)/\varphi _i,$$
where $`H_i`$ is the $`i`$th 1-handle and $`\varphi _i:E_i\to \partial M_{i-1}`$ is the attaching map on the exit set $`E_i\subset \partial H_i`$. Recall that each exit set $`E_i`$ is either one or two annuli and that the boundary of each $`M_i`$, $`\partial M_i`$, is the disjoint union of a collection of tori. For each 1-handle $`H_i`$, let $`W_i^u`$ denote the (2-dimensional) unstable manifold to the core of $`H_i`$. Modify the round handle structure so that each $`H_i`$ is very “thin” – that is, each $`H_i`$ is restricted to a small neighborhood of $`W_i^u`$, appending the “leftover” portion to the neighboring 2-handles. Denote by
$$\text{Bd}_i=\partial M_0\cup \underset{j=1}{\overset{i}{\bigcup }}W_j^u,$$
the 2-complex given by the union of all the 0-handle boundaries and unstable manifolds of the 1-handles in $`M_i`$.

Claim 1: $`\mathcal{F}`$ has vanishing holonomy if the restriction of $`\mathcal{F}`$ to $`\text{Bd}_N`$ has vanishing holonomy.

Proof 1: Let $`\gamma `$ denote a loop within a leaf $`\mathcal{L}`$ of $`\mathcal{F}`$. Then the restriction of $`\mathcal{F}`$ to each $`k`$-handle is a collection of disjoint discs whose boundaries lie in the union of 0- and 1-handles. Push $`\gamma `$ to these boundaries and, since $`H_i`$ is very close to $`W_i^u`$, perturb $`\gamma `$ to lie within $`W_i^u`$ for each $`H_i`$ it intersects. Since $`\text{Bd}_N`$ is transverse to $`\mathcal{F}`$, we may choose the transverse annulus $`A`$ containing $`\gamma `$ to lie within this set. ∎₁

In what follows, we consider holonomy on the 2-complex $`\text{Bd}_N`$, keeping in mind that the 1-handles are actually thin neighborhoods of the 2-cells $`W_i^u`$. The holonomy on each component of $`\partial M_i`$ is equivalent to that on the corresponding piece of $`\text{Bd}_N`$ since each $`H_i`$ has a product foliation.

Claim 2: The maps $`\{\varphi _i\}_1^N`$ may be isotoped so that the induced foliation $`\mathcal{F}^{\prime }`$ on $`\text{Bd}_N`$ is without holonomy.

Proof 2: It suffices to show that the foliation restricted to each $`\partial M_i`$ is without holonomy (a product foliation): we proceed by induction on $`i`$. On the boundary of $`M_0`$ the foliation $`\mathcal{F}`$ restricts to a product foliation by circles. Assume as an induction hypothesis a lack of holonomy on $`\partial M_{i-1}`$. There are three cases to consider: (1) $`H_i`$ is an orientable handle with attaching circles in the same component of $`\partial M_{i-1}`$; (2) $`H_i`$ is orientable with attaching circles in two distinct components of $`\partial M_{i-1}`$; and (3) $`H_i`$ is nonorientable.
Case (1): Let $`C_\pm `$ denote the circles in the selected component $`T`$ of $`\partial M_{i-1}`$ along which $`W_i^u`$ is attached. Note $`C_\pm `$ divides $`T`$ into two annuli $`A_0`$ and $`A_1`$. After fixing a diffeomorphism from $`C_+`$ to $`C_{-}`$ there is a “handle holonomy map” $`f_H:C_+\to C_{-}`$ which is the diffeomorphism given by sliding along leaves on $`\varphi _i(H_i)`$. There are corresponding “boundary holonomy maps” $`f_j:C_+\to C_{-}`$ given by sliding along leaves on $`A_j`$. Isotope $`\varphi _i`$ on $`C_+`$ so that $`f_H`$ equals $`f_0`$ up to a rigid rotation (which is necessary in order to add subsequent handles along curves transverse to $`\mathcal{F}`$ — see Claim 3). The holonomy on the two new components of $`\partial M_i`$ is determined by taking the transverse curve $`C_+`$ (actually one must take a parallel copy of $`C_+`$ that sits in $`\partial M_i`$) as a section. These holonomy maps factor as $`f_1^{-1}f_H`$ and $`f_H^{-1}f_0`$; however, the holonomy along $`C_+`$ within $`\partial M_{i-1}`$ is a map of the form $`f_1^{-1}f_0`$, which, by induction, is a rigid rotation. Hence, up to rotations, $`f_0=f_1`$. Since we chose $`f_H=f_0`$ up to rotations, the holonomy on $`\partial M_i`$ vanishes.

Case (2): If $`H_i`$ connects two disconnected boundary components of $`\partial M_{i-1}`$, then the holonomy along $`H_i`$ will always cancel with itself, as follows. Denote by $`f_H:C_+\to C_{-}`$ the handle holonomy map as before. Then the global holonomy map along $`\partial M_i`$ is of the form $`g_+f_Hg_{-}f_H^{-1}`$, where $`g_+:C_+\to C_+`$ and $`g_{-}:C_{-}\to C_{-}`$ are holonomy self-maps along loops in the two components of $`\partial M_{i-1}`$, and hence, by induction, identity maps.

Case (3): If $`H_i`$ has connected exit set, the proof follows as in Case (1), since the handle must connect a single component of $`\partial M_{i-1}`$ to itself: isotope $`\varphi _i`$ so that the handle holonomy map equals the holonomy map along the boundary up to a rigid rotation. ∎₂

Claim 3: This “linearization” of $`\mathcal{F}`$ does not affect the topology of $`M`$.

Proof 3: Throughout the addition of the 1-handles, nothing about the topology of $`M`$ has changed, since the handle structure is identical — we modify only the foliation. However, after attaching the last 1-handle, the characteristic foliation on the boundary tori must be linear and rational, in order to glue in the 2-handles respecting the product foliation on their boundaries. The slopes of $`\mathcal{F}`$ restricted to $`\partial M_N`$ completely determine the topology of $`M`$ after adding the 2-handles (these are Dehn filling coefficients). Hence, we must be able to linearize all of the attaching maps for the 1-handles without changing the boundary slopes at the end of the sequence. To do so, we preserve at every stage the rotation number of the holonomy maps $`h_i`$ which slide the attaching curves of $`H_i`$ along $`\partial M_i`$. Recall that to every diffeomorphism $`f:S^1\to S^1`$ is associated a rotation number $`\rho _f\in \mathbb{R}/\mathbb{Z}`$ which measures the average displacement of orbits of $`f`$ (see, e.g., ). When modifying $`\varphi _i`$ to $`\tilde{\varphi }_i`$ in the above procedure, we may compose $`\tilde{\varphi }_i`$ with a rigid rotation by the angle necessary to preserve the rotation number of the holonomy map $`h_i`$ acting on the attaching curves in $`\partial M_{i-1}`$ (without adding further Dehn twists). This shearing maintains the average slope of the boundary foliation at each stage without adding holonomy.
Hence, at the end of the 1-handle additions, when the original foliation had all boundary components with linear foliations of a particular fixed slope, the modified foliation also has linear boundary foliations with the same slope. Thus, adding the 2-handles is done using the same surgery coefficients, yielding the original manifold $`M`$ with a foliation having trivial holonomy. ∎₃

Claims 1-3 complete the proof of Theorem 5.2. ∎

###### Remark 5.3

Of course, not every surface bundle over $`S^1`$ may support a gradient field within a foliation: there is still the restriction that $`M`$ be a graph-manifold. This translates precisely into a condition on the monodromy map of the fibration — the monodromy must be of periodic (or reducibly periodic) type with respect to the Nielsen-Thurston classification of surface homeomorphisms. Any pseudo-Anosov piece in the monodromy forces hyperbolicity, contradicting the graph condition. It is not hard to see that any such bundle can be given a gradient field lying within each fiber $`F`$ of the bundle by choosing a Morse function $`\varphi :F\to \mathbb{R}`$ which is equivariant with respect to the monodromy map.

###### Remark 5.4

All of the results of this section apply not only to gradient flows, but also to gradient-like flows, or flows for which there exists a function which decreases strictly along non-constant flowlines. The reason why nondegenerate gradient-like flows in foliations determine round-handle decompositions whereas for general plane fields they do not lies in the fact that the Hopf bifurcation of Proposition 2.7 cannot take place among gradient-like flows in the integrable case, while it can in the nonintegrable.

### 5.2 Contact structures

In contrast to the case of an integrable plane field, one may consider the class of contact structures, which has attracted interest in the fields of symplectic geometry and topology, knot theory, mechanics, and hydrodynamics.

###### Definition 5.5

A contact form on a three-manifold $`M`$ is a one-form $`\alpha \in \mathrm{\Omega }^1(M)`$ such that the Frobenius integrability condition fails everywhere: that is,
$$\alpha \wedge d\alpha \ne 0.$$ (8)
A contact structure on $`M`$ is a plane field $`\xi `$ which is the kernel of a locally defined contact form: that is,
$$\xi _p=\{\text{v}\in T_pM:\alpha (\text{v})=0\},$$ (9)
for each $`p\in M`$.

Contact structures are thus maximally nonintegrable: the plane field is locally twisted everywhere. One may think of a contact structure as being an anti-foliation, which leads one to suspect that the topology of the manifold may be connected to the geometry of the structure, as is often the case with foliations. Indeed, the contrast between foliations with Reeb components and those without Reeb components is reflected in the tight / overtwisted dichotomy in contact geometry (due primarily to Eliashberg and Bennequin ).

###### Definition 5.6

Given a contact structure $`\xi `$ on $`M`$ and an embedded surface $`F\subset M`$, the characteristic foliation $`F_\xi `$ is the (singular) foliation induced by the (singular) line field $`\{T_pF\cap \xi _p:p\in F\}`$. A contact structure $`\xi `$ is overtwisted if there exists an embedded disc $`D\subset M`$ such that $`D_\xi `$ has a limit cycle, as in Figure 4(right). A contact structure is tight if it is not overtwisted.

The classification of contact structures follows along lines similar to that of codimension one foliations with or without Reeb components.
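
The nonintegrability condition (8) is easy to experiment with symbolically. For a one-form $`\alpha =f\,dx+g\,dy+h\,dz`$ on $`\mathbb{R}^3`$, the coefficient of $`\alpha \wedge d\alpha `$ against $`dx\wedge dy\wedge dz`$ is the scalar triple product of $`(f,g,h)`$ with its curl. The check below is our own illustration, using textbook forms rather than examples from this paper, and distinguishes a contact form from an integrable one:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def alpha_wedge_dalpha(f, g, h):
    """Coefficient of dx^dy^dz in alpha ^ d(alpha), where
    alpha = f dx + g dy + h dz; equals (f,g,h) . curl(f,g,h)."""
    curl = (sp.diff(h, y) - sp.diff(g, z),
            sp.diff(f, z) - sp.diff(h, x),
            sp.diff(g, x) - sp.diff(f, y))
    return sp.simplify(f*curl[0] + g*curl[1] + h*curl[2])

print(alpha_wedge_dalpha(0, x, 1))   # dz + x dy -> 1 != 0: contact
print(alpha_wedge_dalpha(0, 0, 1))   # dz        -> 0: integrable (foliation)
```
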
An infinite number of homotopically distinct overtwisted contact structures exist on every closed orientable three-manifold and are algebraically classified up to homotopy . Tight structures, on the other hand, are quite mysterious: e.g., it is unknown whether they exist on all three-manifolds. Several examples of the similarity between tight contact structures and Reebless foliations are provided by the recent work of Eliashberg and Thurston . For example, Reebless foliations can be perturbed into tight contact structures and foliations with Reeb components can be perturbed into overtwisted structures (cf. Figure 4). Also, both Reebless foliations and tight structures satisfy a strong inequality restricting Euler classes. Tight structures are somewhat more general than Reebless foliations since the former can exist on $`S^3`$ while the latter cannot . Likewise, overtwisted structures are slightly more general than their foliation counterparts via the following observation, to be contrasted with Theorem 5.1:

###### Proposition 5.7

Any nondegenerate gradient field $`X`$ which lies within a tight contact structure $`\xi `$ on $`M^3`$ also lies within an overtwisted contact structure $`\xi ^{\prime }`$ on $`M^3`$.

Proof: The canonical way to turn a tight structure into an overtwisted structure is by performing a Lutz twist on a simple closed curve $`\gamma `$ transverse to $`\xi `$. We execute a version of this twisting which respects a gradient field. Given a gradient field $`X\in \xi `$, choose a curve $`\gamma `$ of fixed points of index zero (sinks). Translate the function $`\mathrm{\Psi }`$ whose gradient defines $`X`$ so that $`\mathrm{\Psi }|_\gamma \equiv 0`$. Since $`\gamma `$ is an index zero curve, $`\mathrm{\Psi }`$ increases as one moves radially away from $`\gamma `$. Let $`N`$ denote a tubular neighborhood of $`\gamma `$ whose boundary is a connected component of $`\mathrm{\Psi }^{-1}(ϵ)`$ for some $`ϵ>0`$. Place upon $`N`$ the natural cylindrical coordinates $`(\mathrm{\Psi },\theta ,z)`$. In analogy with Lemma 2.1, we may choose $`\theta `$ and $`z`$ so that $`\xi |_N`$ is the kernel of the locally defined 1-form
$$\alpha =g(\mathrm{\Psi },\theta ,z)d\theta +dz,$$ (10)
for some function $`g`$ with $`g(0,\theta ,z)=0`$. The contact condition implies that
$$\frac{\partial g}{\partial \mathrm{\Psi }}>0.$$ (11)
Replacing this structure locally with the kernel of the form
$$\alpha ^{\prime }=\mathrm{sin}\left(\frac{\pi }{4}+\frac{2\pi g}{g(ϵ,\theta ,z)}\right)gd\theta +\mathrm{cos}\left(\frac{\pi }{4}+\frac{2\pi g}{g(ϵ,\theta ,z)}\right)dz,$$ (12)
yields a contact structure since
$$\alpha ^{\prime }\wedge d\alpha ^{\prime }=\left[\mathrm{cos}\left(\frac{\pi }{4}+\frac{2\pi g}{g(ϵ,\theta ,z)}\right)\mathrm{sin}\left(\frac{\pi }{4}+\frac{2\pi g}{g(ϵ,\theta ,z)}\right)+\frac{2\pi g}{g(ϵ,\theta ,z)}\right]\frac{\partial g}{\partial \mathrm{\Psi }}d\mathrm{\Psi }\wedge d\theta \wedge dz,$$ (13)
and this coefficient is positive by Equation 11. This contact structure agrees with that defined by $`\alpha `$ along the torus $`\mathrm{\Psi }=ϵ`$ since
$$\alpha ^{\prime }|_{\mathrm{\Psi }=ϵ}=\mathrm{sin}\left(\frac{9\pi }{4}\right)g(ϵ,\theta ,z)d\theta +\mathrm{cos}\left(\frac{9\pi }{4}\right)dz=\frac{\sqrt{2}}{2}\alpha |_{\mathrm{\Psi }=ϵ},$$ (14)
and these have the same kernel. Furthermore, this modified structure contains the vector field $`X=\nabla \mathrm{\Psi }`$, since $`X`$ points in the $`d/d\mathrm{\Psi }`$ direction. Finally, one can easily show that a perturbation of a constant-$`z`$ disc has a limit cycle in the characteristic foliation near $`c=ϵ/2`$;
); hence, this defines an overtwisted structure containing $`X`$. This construction can obviously be done in the $`C^{\mathrm{\infty }}`$ category using bump functions. $`\mathrm{}`$ ###### Example 5.8 Consider the flow on $`S^3`$ (considered as the unit sphere in $`\mathbb{R}^4`$ with the induced metric) given by the gradient of the function $$\mathrm{\Psi }=\frac{1}{2}(x_1^2+x_2^2)-\frac{1}{2}(x_3^2+x_4^2),$$ (15) the gradient being taken in $`S^3`$. One can check that the fixed point set consists of a pair of unknots linked once in a Hopf link, as in Figure 6. The standard tight contact form on $`S^3`$ is $$\alpha =\frac{1}{2}\left(x_1dx_2-x_2dx_1+x_3dx_4-x_4dx_3\right).$$ (16) A simple calculation shows that $`\alpha `$ is a contact form on $`S^3`$ with $`\nabla \mathrm{\Psi }\in \mathrm{ker}\alpha `$. However, we may Lutz twist this structure in a neighborhood of the fixed point links: a family of such overtwisted forms ($`n\in \mathbb{Z}^+`$) is given by $$\alpha _n=\mathrm{cos}\left(\frac{\pi }{4}+n\pi (x_3^2+x_4^2)\right)(x_1dx_2-x_2dx_1)+\mathrm{sin}\left(\frac{\pi }{4}+n\pi (x_3^2+x_4^2)\right)(x_3dx_4-x_4dx_3),$$ (17) from which it can be shown that $`\nabla \mathrm{\Psi }\in \mathrm{ker}\alpha _n`$. Here, the integer $`n`$ denotes the number of twists that the plane field undergoes as an orbit travels from source to sink in Figure 6. ## 6 Two questions This work has focused on the case of gradient flows in plane fields in dimension three, as the round-handle theory is most interesting here. However, there are natural questions about gradient flows in arbitrary distributions for manifolds of dimension greater than three. We do not present any results in this area, but rather note that many of the tools remain valid: the fixed point sets of a vector field constrained to a codimension-$`k`$ distribution consist of a finite collection of embedded $`k`$-dimensional submanifolds. Two problems emerge. In the case of a codimension-one distribution, nondegenerate gradient fields induce round-handle decompositions. However, every manifold of dimension greater than three whose Euler characteristic is zero possesses an RHD. Are there any such manifolds of dimension greater than three which do not possess a nondegenerate gradient field tangent to a codimension-one distribution? Secondly, in the case of higher codimension distributions, what restrictions exist on the topology of the fixed point sets? The case of a plane field on a four-manifold is particularly interesting with respect to the genera of the (two-dimensional) fixed point sets. ACKNOWLEDGMENTS This work has been supported in part by the National Science Foundation \[JE: grant DMS-9705949; RG: grant DMS-9508846\]. The authors wish to thank Mark Brittenham, John Franks, Will Kazez, Alec Norton, and Todd Young for their input. Special thanks are due the referee for constructive remarks.
# Extragalactic Absorption of High Energy Gamma-Rays ## I Introduction Very high energy $`\gamma `$-ray beams from blazars can be used to measure the intergalactic infrared radiation field, since pair-production interactions of $`\gamma `$-rays with intergalactic IR photons will attenuate the high-energy ends of blazar spectra . In recent years, this concept has been used successfully to place upper limits on the intergalactic IR field (IIRF) . Determining the IIRF, in turn, allows us to model the evolution of the galaxies which produce it. As energy thresholds are lowered in both existing and planned ground-based air Cherenkov light detectors , cutoffs in the $`\gamma `$-ray spectra of more distant blazars are expected, owing to extinction by the IIRF. These can be used to explore the redshift dependence of the IIRF , . There are now 66 “grazars” ($`\gamma `$-ray blazars) which have been detected by the EGRET team . These sources, optically violent variable quasars and BL Lac objects, have been detected out to a redshift greater than 2. Of all of the blazars detected by EGRET, only the low-redshift BL Lac, Mrk 421 ($`z=0.031`$), has been seen by the Whipple telescope . The fact that the Whipple team did not detect the much brighter EGRET source, 3C279, at TeV energies , is consistent with the predictions of a cutoff for a source at its much higher redshift of 0.54 . So too are the further detections of three other close BL Lacs ($`z<0.12`$), viz., Mrk 501 ($`z=0.034`$) , 1ES2344+514 ($`z=0.044`$), and PKS 2155-304 ($`z=0.117`$) which were too faint at GeV energies to be seen by EGRET<sup>*</sup><sup>*</sup>*PKS 2155-304 was seen in one observing period by EGRET as reported in the Third EGRET Catalogue . The formulae relevant to absorption calculations involving pair-production are given and discussed in Ref. . For $`\gamma `$-rays in the TeV energy range, the pair-production cross section is maximized when the soft photon energy is in the infrared range: $$\lambda (E_\gamma )\simeq \lambda _e\frac{E_\gamma }{2m_ec^2}=2.4E_{\gamma ,TeV}\mu \mathrm{m}$$ (1) where $`\lambda _e=h/(m_ec)`$ is the Compton wavelength of the electron. For a 1 TeV $`\gamma `$-ray, this corresponds to a soft photon having a wavelength near the K-band (2.2$`\mu `$m). (Pair-production interactions actually take place with photons over a range of wavelengths around the optimal value as determined by the energy dependence of the cross section; see eq. (3).) If the emission spectrum of an extragalactic source extends beyond 20 TeV, then the extragalactic infrared field should cut off the observed spectrum between ∼20 GeV and ∼20 TeV, depending on the redshift of the source , . ## II Absorption of Gamma-Rays at Low Redshifts Stecker and De Jager (hereafter SD98) have recalculated the absorption coefficient of intergalactic space using a new, empirically based calculation of the spectral energy distribution (SED) of intergalactic low energy photons by Malkan and Stecker (hereafter MS98) obtained by integrating luminosity dependent infrared spectra of galaxies over their luminosity and redshift distributions. After giving their results on the $`\gamma `$-ray optical depth as a function of energy and redshift out to a redshift of 0.3, SD98 applied their calculations by comparing their results with the spectral data on Mrk 421 and spectral data on Mrk 501 . SD98 make the reasonable simplifying assumption that the IIRF is basically in place at redshifts $`<`$ 0.3, having been produced primarily at higher redshifts .
Therefore SD98 limited their calculations to $`z<0.3`$. (The calculation of $`\gamma `$-ray opacity at higher redshifts , will be discussed in the next section.) SD98 assumed for the IIRF two of the SEDs given in MS98 . Their upper curve now appears to be in better agreement with lower limits from galaxy counts, with Keck telescope, HST NICMOS, ISO, and SCUBA studies of galaxies at high redshifts (Ref. and references therein), and with COBE data (see Figure 1). The results of MS98 are also in agreement with upper limits obtained from TeV $`\gamma `$-ray studies . This agreement is illustrated in Figure 1, which shows the upper SED curve from MS98 in comparison with various data and limits. The SD98 results for the absorption coefficient as a function of energy do not differ dramatically from those obtained previously , ; however, they are more reliable because they are based on the empirically derived IIRF given by MS98, whereas all previous calculations of TeV $`\gamma `$-ray absorption were based on theoretical modeling of the IIRF. The MS98 calculation was based on data from nearly 3000 IRAS galaxies. These data included (1) the luminosity dependent infrared SEDs of galaxies, (2) the 60$`\mu `$m luminosity function of galaxies, and (3) the redshift distribution of galaxies. The advantage of using empirical data to construct the SED of the IIRF, as done in MS98, is particularly indicated in the mid-IR range, where galaxy observations indicate more flux from warm dust in galaxies than that taken account of in more theoretically oriented models. As a consequence, the mid-IR “valley” between the cold dust peak in the far IR and the cool star peak in the near IR is partially filled in (see Figure 1). For a source at low redshift, it follows from eq. (1) that $`\gamma `$-rays of energy ∼20 TeV will be absorbed preferentially by photons in the wavelength range of this “valley”, i.e., near 50 $`\mu `$m. In this range, significant lower limits now exist which are near the predicted IIRF flux (see Figure 1). In fact, the observed flaring spectrum of Mrk 501 has been newly extended to an energy of 24 TeV by observations of the HEGRA group . The new HEGRA data are well fitted by an $`E^{-2}`$ source spectrum steepened at energies above a few TeV by intergalactic absorption with the optical depth calculated by SD98 . Figure 3, taken from Ref. , clearly shows this. The philosophy behind Ref. is that the existing lower limits on the mid-IR background flux predict a minimum expected absorption. The derived unabsorbed source spectrum then tells us (1) that there is negligible intrinsic absorption in the source, and (2) that the physics of the emission mechanism should give a power-law spectrum with a spectral index of ∼2 up to an energy of at least ∼20 TeV. Consider the source PKS 2155-304, an XBL located at a moderate redshift of 0.117, which has been reported by the Durham group to have a flux above 0.3 TeV of $`4\times 10^{-11}`$ cm<sup>-2</sup> s<sup>-1</sup> . We predict that this source should have its spectrum steepened by ∼1 in its spectral index between $`0.3`$ and $`3`$ TeV and should show an absorption turnover above $`6`$ TeV, as shown in Figure 4. Observations of the spectrum of this source should provide a further test for intergalactic absorption.
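As a numerical aside (ours, not a calculation from SD98 or MS98), eq. (1) directly reproduces both the K-band value quoted in the Introduction and the connection between ∼20 TeV photons and the ∼50 $`\mu `$m “valley”:

```python
# Sketch (ours): soft-photon wavelength maximizing pair production, eq. (1).
H_PLANCK = 6.626e-34   # Planck constant [J s]
M_E = 9.109e-31        # electron mass [kg]
C = 2.998e8            # speed of light [m/s]
MEC2_EV = 0.511e6      # m_e c^2 [eV]

def optimal_wavelength_um(E_TeV):
    lambda_e = H_PLANCK / (M_E * C)        # Compton wavelength, ~2.43e-12 m
    return lambda_e * (E_TeV * 1e12) / (2.0 * MEC2_EV) * 1e6  # m -> micron

print(optimal_wavelength_um(1.0))    # ~2.4 micron (near the K band)
print(optimal_wavelength_um(20.0))   # ~47 micron, near the mid-IR "valley"
```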
## III Absorption of Gamma-Rays at High Redshifts In order to calculate high-redshift absorption properly, it is necessary to determine the spectral distribution of the intergalactic low energy photon background radiation as a function of redshift as realistically as possible out to frequencies beyond the Lyman limit. This calculation, in turn, requires observationally based information on the evolution of the spectral energy distributions (SEDs) of IR through UV starlight from galaxies, particularly at high redshifts. Salamon and Stecker (hereafter SS98) have calculated the $`\gamma `$-ray opacity as a function of both energy and redshift for redshifts as high as 3 by taking account of the evolution of both the stellar population spectra and the emissivity of galaxies with redshift. In order to accomplish this, they adopted the recent analysis of Fall et al. and also included the effects of metallicity evolution on galactic stellar population spectra. They then gave predicted $`\gamma `$-ray spectra for selected blazars and extended their calculations of the extragalactic $`\gamma `$-ray background from blazars to an energy of 500 GeV with absorption effects included. Fall et al. have devised a method for calculating stellar emissivity which bypasses the uncertainties associated with estimates of poorly defined luminosity distributions of evolving galaxies. The core idea of their approach is to relate the star formation rate directly to the evolution of the neutral gas density in damped Ly $`\alpha `$ systems, and then to use stellar population synthesis models to estimate the mean co-moving stellar emissivity $`\mathcal{E}_\nu (z)`$ of the universe as a function of frequency $`\nu `$ and redshift $`z`$. The SS98 calculation of stellar emissivity closely follows this elegant analysis, with minor modifications. SS98 also obtained metallicity correction factors for stellar radiation at various wavelengths. Decreased metallicity at high redshifts gives a bluer stellar population spectrum , . The stellar emissivity in the universe is found to peak at $`1\lesssim z\lesssim 2`$, dropping off steeply at lower redshifts and more slowly at higher redshifts. Indeed, observational data from the Hubble Deep Field show that metal production has a similar redshift distribution, such production being a direct measure of the star formation rate (see, e.g., Ref.). With the co-moving energy density $`u_\nu (z)`$ evaluated (SS98), the optical depth for $`\gamma `$-rays owing to electron-positron pair production interactions with photons of the stellar radiation background can be determined from the expression $$\tau (E_0,z_e)=c\int _0^{z_e}dz\frac{dt}{dz}\int _0^2dx\frac{x}{2}\int _0^{\mathrm{\infty }}d\nu (1+z)^3\left[\frac{u_\nu (z)}{h\nu }\right]\sigma _{\gamma \gamma }(s)$$ (2) where $`s=2E_0h\nu x(1+z)`$, $`E_0`$ is the observed $`\gamma `$-ray energy at redshift zero, $`\nu `$ is the frequency at redshift $`z`$, $`z_e`$ is the redshift of the $`\gamma `$-ray source, $`x=(1-\mathrm{cos}\theta )`$, and the pair production cross section $`\sigma _{\gamma \gamma }`$ is zero for center-of-mass energy $`\sqrt{s}<2m_ec^2`$, $`m_e`$ being the electron mass. Above this threshold, $$\sigma _{\gamma \gamma }(s)=\frac{3}{16}\sigma _\mathrm{T}(1-\beta ^2)\left[2\beta (\beta ^2-2)+(3-\beta ^4)\mathrm{ln}\left(\frac{1+\beta }{1-\beta }\right)\right],$$ (3) where $`\beta =(1-4m_e^2c^4/s)^{1/2}`$. Figure 5 shows the opacity $`\tau (E_0,z)`$ for the energy range 10 to 500 GeV, calculated by SS98 both with and without a metallicity correction.
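For concreteness, here is a minimal encoding (ours) of the threshold condition and the cross section of eq. (3); the value of $`\sigma _\mathrm{T}`$ is the standard Thomson cross section, and the example numbers are purely illustrative. Folding this cross section through eq. (2) with the evolving photon density $`u_\nu (z)`$ is what produces the opacities of Figure 5.

```python
import math

SIGMA_T = 6.6524e-25   # Thomson cross section [cm^2]
MEC2_EV = 0.511e6      # m_e c^2 [eV]

def sigma_gg(s_eV2):
    """Pair-production cross section of eq. (3) for CM energy squared s [eV^2]."""
    if s_eV2 <= (2.0 * MEC2_EV) ** 2:
        return 0.0     # below the threshold sqrt(s) = 2 m_e c^2
    beta = math.sqrt(1.0 - 4.0 * MEC2_EV ** 2 / s_eV2)
    return (3.0 / 16.0) * SIGMA_T * (1.0 - beta ** 2) * (
        2.0 * beta * (beta ** 2 - 2.0)
        + (3.0 - beta ** 4) * math.log((1.0 + beta) / (1.0 - beta)))

# Head-on collision (x = 2) of a 1 TeV gamma-ray with a 1 eV (near-IR) photon:
s = 2.0 * 1e12 * 1.0 * 2.0    # s = 2 E eps x
print(sigma_gg(s))            # ~1.4e-25 cm^2, a sizable fraction of sigma_T
```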
Extinction of $`\gamma `$-rays is negligible below 10 GeV. The weak redshift dependence of the opacity at the higher redshifts as shown in Figure 5 indicates that the opacity is not very sensitive to the initial epoch of galaxy formation, $`z_{max}`$. In fact, the uncertainty in the metallicity correction (see Figure 5) would obscure any dependence on $`z_{max}`$ even further. ## IV The Effect of Absorption on the Spectra of Blazars and Gamma-ray Bursts With the $`\gamma `$-ray opacity $`\tau (E_0,z)`$ calculated out to $`z=3`$, the cutoffs in blazar $`\gamma `$-ray spectra caused by extragalactic pair production interactions with stellar photons can be predicted. Figure 6, based on the results given in Ref. (SS98), shows the expected effect of the intergalactic radiation on grazar and $`\gamma `$-ray burst spectra. This figure plots the critical energy for absorption (i.e., for $`\tau =1`$) versus redshift. For energies much above the critical energy, the optical depth is greater than 1, leading to a predicted cutoff in the spectrum of the extragalactic source. The discovery of optical and X-ray afterglows of $`\gamma `$-ray bursts and the identification of host galaxies with measured redshifts (see, e.g., Refs. and ) has led to the accumulation of evidence that these bursts are highly relativistic fireballs originating at cosmological distances and may be associated primarily with early star formation . As indicated in Figure 6, $`\gamma `$-rays above an energy of ∼15 GeV will be attenuated if they are emitted at a redshift of ∼3. On 17 February 1994, the EGRET telescope observed a $`\gamma `$-ray burst which contained a photon of energy ∼20 GeV . As an example, if one adopts the opacity results which include the metallicity correction, the highest energy photon in this burst would probably be constrained to have originated at a redshift less than ∼2. Future detectors, such as GLAST , may be able to place better redshift constraints on bursts observed at higher energies. Such constraints may further help to identify the host galaxies of $`\gamma `$-ray bursts. Observed cutoffs in grazar spectra may be intrinsic cutoffs in $`\gamma `$-ray production in the source, or may be caused by intrinsic $`\gamma `$-ray absorption within the source itself. In fact, models of quasar emission can predict natural cutoffs in quasar emission spectra in the relevant energy range above ∼10 GeV. Whether or not cutoffs in grazar spectra are primarily caused by intergalactic absorption can be determined by observing whether the grazar cutoff energies have the type of redshift dependence predicted here. ## V Acknowledgment The work presented here was a result of extensive collaboration with O.C. De Jager, M.A. Malkan, and M.H. Salamon, as indicated in the references cited.
# CCS Imaging of the Starless Core L1544: An Envelope with Infall and Rotation ## 1 Introduction Radio and infrared observations have revealed that stars are formed in dense cloud cores (e.g., Beichman et al. (1986); Myers et al. (1987)). From these observations as well as theoretical works, models of protostellar evolution for low-mass stars, from embedded protostars to visible T Tauri stars, have been proposed (e.g., Shu, Adams, & Lizano 1987). Recently, even kinematic evidence for the infall of dense cores around embedded sources has been observed (e.g., Zhou et al. 1993; Hayashi, Ohashi, & Miyama 1993). Although our understanding of the embedded and T Tauri phases has made great progress, the earliest stage of star formation, or even the stage prior to star formation, is still poorly understood. Dense cores without any detectable young stellar objects, i.e., starless dense cores, are most probably sites where star formation has just started or will soon start. Although some dense cores, such as B 335 (Zhou et al. 1990), have been found to be in a very early collapse phase, even in this case significant material has already collapsed onto the central star. Starless cores are therefore good targets to study the earliest stage of star formation. L1544 is a well-studied starless core in Taurus, observed in several line emissions such as NH<sub>3</sub>, CS, and N<sub>2</sub>H<sup>+</sup> (e.g., Benson & Myers 1989; Tafalla et al. 1998, hereafter T98; Williams et al. 1999, hereafter W99) and in the submillimeter continuum (Ward-Thompson et al. 1994). These observations were, however, made mainly using single-dish telescopes, which prevented detailed study of the geometrical and kinematical structures of L1544. In addition, most of the emission lines used were optically thick, resulting in difficulty in investigating the whole velocity structure without suffering self-absorption. In this Letter, we report results of interferometric observations of the L1544 starless core in CCS ($`J_N=3_2-2_1`$). This transition of CCS will be excited in cold cores such as starless cores because of its low energy level ($`E_u\simeq 3.2`$ K; Yamamoto et al. (1990)). Interferometric observations enable us to study geometrical and kinematical structures of L1544 in detail. In addition, CCS in L1544 is optically thin (see §2). Moreover, CCS is one of the carbon chain molecules that are more abundant in starless cores (Suzuki et al. (1992)), and it has no hyperfine structure, making it a good probe to study geometrical and kinematical structures of starless cores in detail (Langer et al. (1995), Kuiper, Langer, & Velusamy 1996, Wolkovitch et al. (1997)). ## 2 Observations Observations were made using the nine-antenna Berkeley-Illinois-Maryland Association (BIMA) array <sup>1</sup><sup>1</sup>1Operated by the University of California at Berkeley, the University of Illinois, and the University of Maryland, with support from the National Science Foundation. in September 1998. We observed CCS ($`J_N=3_2-2_1`$), which has a rest frequency of 33.751374 GHz (Yamamoto et al. (1990)), with low-noise receivers utilizing cooled HEMT amplifiers, developed for observations of the Sunyaev-Zeldovich effect (Carlstrom, Marshall, & Grego (1996)). The field of view of the array is ∼5′ at this frequency. Such a wide field of view is of great advantage for the imaging of L1544, which is moderately extended (see T98). The typical system temperature during the observations was 40–60 K in single sideband.
Spectral information was obtained using a digital correlator with 1024 channels at a bandwidth of 6.25 MHz, providing a velocity resolution of ∼0.054 km s<sup>-1</sup> at the observed frequency. Two different configurations of the array were used. Projected baselines ranged from 6 m to 240 m, so that the observations were not sensitive to structures extended more than ∼5′, corresponding to 42000 AU at the distance of Taurus ($`d=140`$ pc; Elias (1978)) <sup>2</sup><sup>2</sup>2Note that the observations are less sensitive to the structures close to 5′ in extent (see Wilner & Welch 1994). The obtained channel data were reduced using the MIRIAD package. The phase was calibrated by observing 0530+135, and the complex passband of each baseline was determined from observations of 3C84. When CCS maps were made and cleaned, a Gaussian taper was applied to the visibility data to improve sensitivity to extended low brightness emission, so that the resultant beam size was 20″$`\times `$13″ with a position angle of $`5`$°. The resultant 1 $`\sigma `$ rms noise level for channel maps was typically 0.165 Jy beam<sup>-1</sup>, equivalent to ∼0.7 K in brightness. In order to measure the optical depth of CCS in L1544, we also observed CCS and CC<sup>34</sup>S ($`J_N=3_2-2_1`$) simultaneously using the Nobeyama 45 m telescope ($`\mathrm{\Delta }\theta \simeq 52`$″) in April 1999. $`T_\mathrm{A}^{\ast }`$ of CCS was measured to be 1.1 K, while no CC<sup>34</sup>S emission was detected down to a 3 $`\sigma `$ level of 0.072 K in $`T_\mathrm{A}^{\ast }`$, indicating that the 3 $`\sigma `$ upper limit to the optical depth of CCS is 0.93 when the sulfur isotope ratio is the terrestrial value, 23. In this Letter, we will discuss the results of the BIMA observations in detail. ## 3 Results CCS was detected at LSR velocities ranging from 6.88 to 7.48 km s<sup>-1</sup> with high S/N ($`\geq 3\sigma `$). Figure 1 shows the CCS total intensity map, integrated over this velocity range. The map shows a structure elongated in the northwest-southeast direction (PA ∼144°), with a size of ∼210″$`\times `$64″ at the 3 $`\sigma `$ level, corresponding to 0.15 $`\times `$ 0.045 pc at the distance of Taurus. The ratio between the major and minor axes is ∼3.3. The condensation consists of two blobs, one at the northwest and the other at the southeast, and each blob contains several peaks, suggesting that the condensation has clumpy structures. A similar elongated structure was also observed in N<sub>2</sub>H<sup>+</sup> (W99), although it shows a smaller size (7000 AU $`\times `$ 3000 AU) and a slightly larger position angle (PA$`=`$155°). More interestingly, the CCS map does not show a prominent peak at the central part of the condensation (marked by the cross in Fig. 1), where the other maps taken in C<sup>34</sup>S (2-1), N<sub>2</sub>H<sup>+</sup> (1-0), and 800 $`\mu `$m show prominent peaks (T98, W99, Ward-Thompson et al. (1994)). This difference suggests that the CCS in L1544 traces the outer regions of the core while the other molecules and dust trace the inner part. The clumpy structures of the CCS condensation are more obvious in the channel maps, as shown in the left panels of Fig. 2. A remarkable characteristic of the clumps is their narrow line width: each clump is detected in only a few velocity channels. This is shown in the right panels of Fig. 2, where line profiles of two representative clumps are presented.
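As a numerical footnote (ours) to the §2 optical-depth measurement: solving $`(1-e^{-\tau })/(1-e^{-\tau /23})=T_\mathrm{A}^{\ast }(\mathrm{CCS})/T_\mathrm{A}^{\ast }(\mathrm{CC}^{34}\mathrm{S})`$ for $`\tau `$ reproduces the quoted 3 $`\sigma `$ upper limit, so the lines discussed below can indeed be treated as optically thin.

```python
import math

ISOTOPE_RATIO = 23.0          # terrestrial 32S/34S abundance ratio
ratio_limit = 1.1 / 0.072     # T_A*(CCS) over the 3-sigma CC34S limit

def line_ratio(tau):
    # Brightness ratio of the two lines for a common excitation temperature
    return (1.0 - math.exp(-tau)) / (1.0 - math.exp(-tau / ISOTOPE_RATIO))

# line_ratio decreases monotonically with tau, so solve by bisection:
lo, hi = 1e-6, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if line_ratio(mid) > ratio_limit else (lo, mid)

print(round(0.5 * (lo + hi), 2))   # ~0.93, the quoted upper limit on tau(CCS)
```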
The line widths of the clumps, deconvolved with the instrumental velocity resolution (0.054 km s<sup>-1</sup>), were measured to be (0.06$`\pm `$0.01)–(0.13$`\pm `$0.01) km s<sup>-1</sup> in full width at half-maximum, which is almost equivalent to the thermal line width of the CCS molecule at 10 K, 0.09 km s<sup>-1</sup> (see Langer et al. 1995), suggesting that thermal motions are dominant in the clumps. Note that the line profiles of the clumps often show double peaks. This is because two clumps with different peak LSR velocities partially coincide with each other. The multi-peak-velocity structure of L1544 was also observed in C<sup>34</sup>S (T98). Physical properties of the clumps will be discussed in detail in a forthcoming paper. We discern the global velocity field of the CCS condensation from position-velocity diagrams (Fig. 3). Two velocity components are visible along the projected minor axis of the condensation (Fig. 3a): one at $`V_{LSR}`$∼7.1 km s<sup>-1</sup> and the other at $`V_{LSR}`$∼7.4 km s<sup>-1</sup>. Both components persist all along the minor axis except at the southwestern edge, where only the 7.4 km s<sup>-1</sup> component is detected, and at the northeastern edge, where only the 7.1 km s<sup>-1</sup> component is detected. This suggests that the two velocity components are mostly coincident along the line of sight. Neither velocity component shows any significant gradient along the minor axis. The PV diagram along the projected major axis (Fig. 3b) shows more remarkable velocity structures. Similar to the PV diagram along the minor axis, there are two velocity components in the inner parts of the condensation (−40″ $`\lesssim \mathrm{\Delta }x\lesssim `$ +40″ in Fig. 3b), whereas these two components merge into a single velocity component at each of the southeastern and northwestern edges of the condensation. The velocity difference between the two velocity components at $`\mathrm{\Delta }x=0`$″ is ∼0.25 km s<sup>-1</sup>, similar to that in Fig. 3a. In addition, a global velocity gradient, ∼0.08 km s<sup>-1</sup> per 14000 AU, is observed from the southeast to the northwest. The N<sub>2</sub>H<sup>+</sup> observations (W99) also showed a velocity gradient in the inner parts of L1544 along almost the same direction (PA$`=`$155°), although their measurements showed a ∼3 times larger gradient. Thus, the whole velocity structure of the CCS condensation can be represented as a “tilted ellipse”, as shown by the dashed curve in Fig. 3b. The elliptical velocity structure is roughly symmetrical with respect to $`V_{LSR}`$∼7.25 km s<sup>-1</sup>, suggesting that this is the systemic velocity of the system. This suggests that the two velocity components seen in Fig. 3a are actually blueshifted and redshifted parts of a single kinematical system. The global velocity gradient seen along the projected major axis suggests rotation of the CCS condensation, while the blueshifted and redshifted components overlapping with each other along the line of sight can be explained by inward motions in the condensation (see §4.2). ## 4 Discussion ### 4.1 Geometrical Structures of the CCS Condensation The elongated structure seen in the CCS total intensity map suggests that the CCS condensation has either a filamentary structure or a flattened one with an almost edge-on configuration.
The latter case is more likely in view of the high column densities needed to produce the self-absorption features observed in optically thick molecular tracers such as CS (T98) or N<sub>2</sub>H<sup>+</sup> (W99). The envelopes around young stellar objects (YSOs) often show flattened structures similar to the CCS condensation here, except that their sizes are smaller (e.g., Ohashi et al. (1997)). This fact suggests that flattened envelopes are present even before the YSOs form. The flattened geometry of starless cores is predicted by some theoretical simulations (e.g., Nakamura, Hanawa, & Nakano (1995); Matsumoto, Hanawa, & Nakamura (1997)), in which magnetic fields or rotation play an important role in producing flattened structures. Note that magnetic fields may be more important in explaining the flattened geometry of the CCS condensation because of its slow rotation (see §4.2; see also W99). For the L1544 CCS condensation, the ratio between the projected major and minor axes implies an inclination angle of ∼73° (0° for the face-on case) if it is spatially thin. If the flattened condensation is spatially thick like the envelopes around YSOs (e.g., L1527; Ohashi et al. (1997)), then this estimate is a lower limit to the true inclination. As pointed out in §3, the CCS total intensity is stronger at the northwestern and southeastern edges of the condensation, and weaker in the center. A ringlike geometry for the condensation, observed almost edge-on, is consistent with this pattern, since, for optically thin emission, the total intensity is proportional to the column density as long as there is no significant gradient in the excitation. As we will show in the next section, the velocity field also suggests a ringlike geometry for the condensation. Note that the CCS ring is not a physical structure but rather probably results from a lower abundance of CCS towards the center of the L1544 core. Most other tracers, including C<sup>34</sup>S, N<sub>2</sub>H<sup>+</sup>, and the submillimeter dust continuum, show prominent peaks close to the center of the L1544 core. A similar situation has been observed in L1498, another starless core in Taurus, where CCS is weak toward the dust continuum peak (Kuiper et al. (1996); Willacy, Langer, & Velusamy (1998)). The apparent structure may be due to chemical evolution. As a dense core collapses and the density increases, the photoionization and photodissociation processes become gradually less effective in the central region with the highest density. The abundance of molecules sensitive to the photochemistry changes, and the CCS abundance decreases substantially (Suzuki et al. (1992)). In addition, CCS depletes onto grains in a collapsing gas of increasing density (Bergin & Langer (1997)), which also explains the lower abundance of CCS toward the center of L1544. These ideas of chemical evolution are consistent with the result that L1544 is undergoing infall, as evidenced by kinematic tracers (see §4.2). ### 4.2 Kinematical Structures of the CCS Condensation As shown in §3, the kinematics of the CCS condensation are characterized by two major features: (1) blueshifted and redshifted components that coincide with each other along the line of sight, and (2) a velocity gradient along the major axis. Taking into account the flattened edge-on structure of the CCS condensation, the blueshifted and redshifted components may be explained by radial motion in the plane of the condensation, while the velocity gradient may represent rotation of the condensation.
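Before turning to the kinematic model, a quick check (ours) of the §4.1 inclination estimate: for a spatially thin disk the apparent axis ratio obeys cos i = (minor axis)/(major axis), and the measured ∼3.3 ratio gives the quoted value.

```python
import math

axis_ratio = 3.3   # projected major/minor axis ratio of the CCS condensation
i_deg = math.degrees(math.acos(1.0 / axis_ratio))
print(round(i_deg, 1))   # ~72.4 deg, i.e. ~73 deg (0 deg would be face-on)
```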
One possible radial motion in the plane of the condensation is infall, which we favor over expansion, as discussed below. Here we consider the kinematics of the condensation using a simple model that consists of a spatially thin ring with both infall and rotation. A simple model ring was obtained by modifying the model disk used in the study of the flattened envelope of L1527, which exhibits both infall and rotation (Ohashi et al. (1997)). For L1544, we have modified the surface density distribution and velocity field in two ways: (1) The surface density is constant at $`R_{out}>R>R_{in}`$, and is null at $`R\leq R_{in}`$, where $`R_{out}`$ and $`R_{in}`$ are the outer and inner radii of the ring, respectively; (2) The infall velocity, $`V_{infall}`$, is constant throughout, while the rotation velocity, $`V_{rotation}`$, is proportional to the radius, i.e., rigid rotation such that $`V_{rotation}=V_{rotation}^0`$($`R`$/$`R_{out}`$), where $`V_{rotation}^0`$ is the rotation velocity at $`R_{out}`$. The first modification gives the model a ring structure. Under the assumption that the model ring has an edge-on configuration with respect to the observer, PV diagrams along the projected major axis of the model ring are calculated to compare with the observed PV diagram (Fig. 3b). For the model calculations, $`R_{out}`$ was fixed at 15000 AU according to the CCS observations, while the three parameters $`R_{in}`$, $`V_{infall}`$, and $`V_{rotation}^0`$ remained adjustable. As shown in Fig. 4a, when $`R_{in}=7500`$ AU, $`V_{infall}=0.12`$ km s<sup>-1</sup>, and $`V_{rotation}^0=0.09`$ km s<sup>-1</sup>, the observed PV diagram (Fig. 3b) is well reproduced by the model: the calculated PV diagram shows a velocity structure, represented as a tilted ellipse, with a velocity difference at $`\mathrm{\Delta }x=0`$ of 0.24 km s<sup>-1</sup> and a global velocity gradient of 0.09 km s<sup>-1</sup> per 15000 AU. Note that $`2V_{infall}`$ corresponds to the velocity difference at $`\mathrm{\Delta }x=0`$ in the calculated diagram, while $`V_{rotation}^0`$ is equivalent to the velocity gradient per 15000 AU in the diagram. Hence, it is easily understood that a much smaller or larger $`V_{infall}`$ and/or $`V_{rotation}^0`$ cannot reproduce the observed velocity structures. On the other hand, a ring structure with a larger $`R_{in}`$ is essential for the model to reproduce the observed PV diagram, because when a much smaller $`R_{in}`$, including the case of $`R_{in}=0`$ (equivalent to a model with a disk structure), is used, prominent peaks emerge close to $`\mathrm{\Delta }x=0`$ in the calculated diagram (see Fig. 4b). This is because when $`R_{in}`$ decreases, the total column density through the plane of the ring drastically increases close to $`\mathrm{\Delta }x=0`$. One might argue that expansion instead of infall can explain the observed PV diagram.
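A minimal sketch of this model's PV locus (ours, with simplified geometry and arbitrary normalization): for an edge-on thin ring, azimuth $`\varphi `$ in the ring plane maps to projected offset $`x=R\mathrm{sin}\varphi `$, and the line-of-sight velocity is the sum of the infall and rotation projections; superposing radii from $`R_{in}`$ to $`R_{out}`$ traces out a tilted elliptical locus like that of Fig. 3b.

```python
import numpy as np

R_OUT, R_IN = 15000.0, 7500.0    # AU, as in the best-fit model
V_INF, V_ROT0 = 0.12, 0.09       # km/s: constant infall; rigid rotation at R_out

phi = np.linspace(0.0, 2.0 * np.pi, 361)   # azimuth in the ring plane
for R in np.linspace(R_IN, R_OUT, 5):
    x = R * np.sin(phi)                               # offset along the major axis [AU]
    v_los = (-V_INF * np.cos(phi)                     # infall projected on the line of sight
             - V_ROT0 * (R / R_OUT) * np.sin(phi))    # rigid-body rotation
    print(int(R), round(float(v_los.min()), 3), round(float(v_los.max()), 3))
# At x = 0 the two branches differ by 2*V_INF = 0.24 km/s, and the locus is
# tilted by V_ROT0 per R_OUT, as in the observed diagram.
```

Note that flipping the sign of the radial term (expansion rather than infall) simply mirrors the locus in velocity, which is why the PV diagram alone cannot decide between the two motions.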
However, infall is more likely to explain the kinematics of the CCS condensation because the mass of L1544 derived from the virial theorem is comparable to or much smaller than that estimated from the 800 $`\mu `$m dust emission: a virial mass is estimated to be ∼1.7 $`M_{\odot }`$ using the radius of the CCS condensation (15000 AU) and the mean line width of the CCS (0.4 km s<sup>-1</sup>), under the assumption of a spherical cloud with a constant density, while a mass of 2–6 $`M_{\odot }`$ was derived from the 800 $`\mu `$m dust emission <sup>3</sup><sup>3</sup>3Note that the current CCS data are not adequate to derive a mass because CCS may be less abundant in the inner parts of L1544 (see §4.1). (T98). Inward motions in L1544 were also suggested by a spectroscopic method (T98, W99). Our result is remarkable in that the observations resolve the velocity field and show direct evidence for infall motions in the starless core L1544. The kinematics of L1544 revealed by the CCS imaging show differences from the envelopes of YSOs. The estimated infall and rotation velocities are comparable to each other at a scale of 10000 AU, unlike YSOs, where infall velocities are inferred to be 2 to 6 times larger than rotation velocities in the outer regions. In addition, the estimated infall velocity is much smaller in magnitude than those inferred around YSOs (e.g., Hayashi et al. (1993); Ohashi et al. (1997); Momose et al. (1998)). Although our model ring with a constant infall velocity and rigid rotation explains the kinematics of the CCS condensation very well, the current data do not place strong constraints on the radial dependences of the infall and rotation velocities because the CCS molecule traces only the outer part of the L1544 core. W99 have estimated infall and rotation velocities in the inner regions of L1544, where CCS emission is weak or absent, based on the self-absorption of N<sub>2</sub>H<sup>+</sup>. Their estimates of ∼0.08 km s<sup>-1</sup> and ∼0.14 km s<sup>-1</sup> for infall and rotation, respectively, suggest nearly constant infall and rotation throughout the L1544 core, from scales of 10000 to 1000 AU. The optical depth of the N<sub>2</sub>H<sup>+</sup> lines may yet mask the innermost regions of L1544, and direct measurements of the kinematics using an optically thin tracer may be valuable for revealing the radial dependences of the systematic motions in this starless core. We are grateful to P. T. P. Ho, H. Masunaga, F. Nakamura, K. Saigo, S. Takano, and S. Yamamoto for fruitful discussions. We also thank H. Maezawa for his help during observations with the Nobeyama 45 m telescope.
# A Characterization of the Brightness Oscillations During Thermonuclear Bursts From 4U 1636–536 ## 1 INTRODUCTION Shortly after the launch of the Rossi X-ray Timing Explorer (RXTE) in December 1995, observation with RXTE of neutron-star low-mass X-ray binaries (LMXBs) revealed that several sources had a single, highly coherent, high-amplitude brightness oscillation during at least one thermonuclear X-ray burst (for reviews see Strohmayer, Zhang, & Swank 1997; Strohmayer, Swank, & Zhang 1998a). The asymptotic frequency of these oscillations in the tails of bursts is so similar in different bursts from a single source, and the oscillation is so coherent in the tail (see, e.g., Strohmayer & Markwardt 1999), that it is almost certain that this asymptotic frequency is the stellar spin frequency or its first overtone. These burst oscillations therefore provided the first direct evidence for the value of the spin frequencies of these LMXBs, and they corroborate strongly the proposed evolutionary link between LMXBs and millisecond rotation-powered pulsars. In addition, the stability of the frequency in the tails of the bursts has led to their application as promising probes of the binary systems themselves (Strohmayer et al. 1998b). The existence of burst oscillations indicates that the emission from the surface, and hence the thermonuclear burning, is not uniform over the entire star. This is in accord with theoretical expectations (Joss 1978; Ruderman 1981; Shara 1982; Livio & Bath 1982; Fryxell & Woosley 1982; Nozakura, Ikeuchi, & Fujimoto 1984; Bildsten 1995), and it suggests that the properties of the burst oscillations, such as the evolution of their frequency or amplitude, may contain valuable information about the propagation of thermonuclear burning over the surface of the neutron star. The lessons learned from study of the thermonuclear propagation in bursts may ultimately further our understanding of thermonuclear propagation in other astrophysical contexts, such as classical novae and Type Ia supernovae. Unlike in novae or Type Ia supernovae, burning in thermonuclear X-ray bursts occurs near the surface and occurs often for a single source, and is therefore relatively easy to observe. The detailed study of burst brightness oscillations therefore has broad importance. Here we describe in detail the frequency behavior of the burst oscillations in five bursts from 4U 1636–536, which is an LMXB with an orbital period of 3.8 hours (see, e.g., van Paradijs et al. 1990). This source is of special interest because it produces detectable signals at both the fundamental and the first overtone of the stellar spin frequency (Miller 1999), and because near the beginning of one burst the brightness oscillations reached the highest amplitude —50% rms— so far recorded for oscillations during a thermonuclear burst (Strohmayer et al. 1998c). In § 2 we analyze the light curves of the bursts, and the frequency and amplitude of the brightness oscillations in the four of those five bursts that have strong brightness oscillations for most of the duration of the burst. We find that, despite apparent similarities in the light curves of three of those four bursts, the amplitude and frequency behavior of their brightness oscillations are very different from each other. We also find compelling evidence in one burst, and strong evidence in another burst, for an interval in which the burst oscillation frequency decreases after the peak in the light curve. In § 3 we focus on the initial portions of the bursts. 
Analyses of bursts from many sources have shown that the oscillation frequency often changes by a few hertz over the first few seconds of a burst. The change is often a monotonic rise, but there are indications of more complicated behavior in some bursts. It has been pointed out that the magnitude of the frequency change could be explained by a 20–50 meter expansion of the burning layers followed by a slow settling, if the layers conserve angular momentum (see, e.g., Strohmayer et al. 1998a), but details have not been worked out. For example, it is not clear how the layers would maintain their coherence throughout the 5–10 complete circuits relative to the body of the star that are implied by the observations. Bildsten (1998) has suggested that the layers may be stabilized by thermal buoyancy or mean molecular weight stratification, but again the details have not been worked out. In § 3 we examine in detail the first 0.75 seconds of all five bursts, which was the interval used to construct the candidate waveform for the ∼290 Hz oscillation in 4U 1636–536 (Miller 1999). We examine models of the frequency behavior that have increasing complexity: a constant-frequency model; one with a frequency and frequency derivative; a four-parameter model with an initial frequency and frequency derivative followed by a different frequency derivative after a break time; and a five-parameter model with two different frequencies and frequency derivatives separated by a break time. We find that if the same type of frequency model applies to all five of the bursts then the data do not require a model more complicated than the constant-frequency model or, possibly, the model with a single frequency and frequency derivative. Note, however, that this is not inconsistent with the use of the five-parameter model to construct a waveform used in the search for the expected ∼290 Hz oscillation (Miller 1999); in such a search, the only goal is to find the best fit to the ∼580 Hz oscillations, and the extra parameters need not be justified by a significantly better fit. Finally, in § 4 we discuss the implications of these results for the current picture of the frequency changes, in which the frequency change occurs because the burning layer is lifted by 20–50 meters from the surface by the radiation flux. We find that the simplest version of this picture has difficulty explaining the observations. ## 2 OVERVIEW OF THE BURSTS We used public-domain data from the High Energy Astrophysics Science Archive Research Center. The data were taken in Single Bit Mode, which does not record the energy of photons. We give the starting times of the bursts in Table 1 and the light curves in Figure 1. In burst (d), the data dropouts are caused by telemetry saturation. Figure 2 shows the peaks of the power spectra of the first four bursts, as a function of time. The burst on 23 February 1997 does not have a strong brightness oscillation for most of its duration, and we therefore do not analyze it in the rest of this section. For each burst, the frequency of maximum power in successive nonoverlapping one-second intervals is shown by the solid triangles, and the Leahy et al. (1983)-normalized power at this frequency is shown by the solid line. Here we plot only those points with Leahy powers in excess of 10 (chance probability for a single trial less than $`6\times 10^{-3}`$).
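For reference (our note; this is standard practice for RXTE power spectra): a Leahy-normalized power $`P`$ produced by pure Poisson noise is $`\chi ^2`$-distributed with two degrees of freedom, so the single-trial chance probability is $`e^{-P/2}`$. This reproduces, to rounding, the significances quoted in this section.

```python
import math

def single_trial_prob(leahy_power):
    # P(chi^2 with 2 d.o.f. > P) = exp(-P / 2) for pure Poisson noise
    return math.exp(-leahy_power / 2.0)

for p in (10.0, 21.2, 44.0, 13.5, 12.8):
    print(p, "%.1e" % single_trial_prob(p))
# 10.0 -> ~7e-3, 21.2 -> ~2.5e-5, 44.0 -> ~2.8e-10,
# 13.5 -> ~1.2e-3, 12.8 -> ~1.7e-3
```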
The horizontal bars on the frequency points indicate the extent of the interval for which the power density spectrum was calculated. In a few cases, more than one peak exceeds this threshold in a given power density spectrum. We then represent the lower-power peak by an open circle. In burst (a) the secondary peak has a Leahy power of 21.2 (single-trial significance $`2.5\times 10^{-5}`$); in burst (b) the secondary peaks have Leahy powers of 44.0 (first interval; significance $`2.8\times 10^{-10}`$) and 13.5 (second interval; significance $`1.2\times 10^{-3}`$); and in burst (c) the secondary peak has a Leahy power of 12.8 (significance $`1.7\times 10^{-3}`$). Finally, Figure 3 shows the rms amplitude of each oscillation, computed for one-second intervals spaced 1/8 second apart. It is evident from these figures that the frequency behavior can be very complex and can differ greatly from burst to burst. The light curves for bursts (a), (c), and (d) appear similar to each other, although burst (d) has a slightly longer decay time than the other two. However, the frequency and amplitude of the brightness oscillations evolve very differently in the three bursts. In burst (a), there is a strong oscillation near the beginning which disappears for approximately one second; then the oscillation reappears after the peak. The frequency increases continuously, although there is some evidence that in the initial ∼0.5 second of the burst the frequency drops (this might help explain the presence of a higher-frequency secondary peak in the power density spectrum). In burst (c), the brightness oscillation is present for almost the entire time examined. The frequency increases rapidly in the first two to three seconds, then appears to decrease to an asymptotic value. A power density spectrum of a two-second interval starting 1.75 seconds after the beginning of the burst reveals a peak at 581.62$`\pm `$0.04 Hz. A power density spectrum of a six-second interval starting four seconds after the beginning of the burst has a peak at 581.47$`\pm `$0.01 Hz. If the latter frequency is the asymptotic frequency of the oscillation, then at a 3$`\sigma `$ level of certainty it is less than the maximum frequency attained during the burst. The amplitude of the oscillation in the burst tail is high and significant, and there is an abrupt increase in the amplitude 6–8 seconds after the beginning of the burst that is not accompanied by any apparent change in the light curve. In burst (d) there is a clear decrease in the frequency of the burst oscillation in the tail of the burst. We explored this further by taking a power density spectrum of a longer interval: five seconds, starting three seconds after the beginning of the burst. We found that, at the 99.99% confidence level, the frequency change per second during this interval is $`-0.54\pm 0.08`$ Hz s<sup>-1</sup>. The best-fit frequency at the beginning of this five-second interval depends on the frequency derivative, and is approximately $`\nu _0=581.39\mathrm{Hz}-2(\dot{\nu }+0.62\mathrm{Hz}\mathrm{s}^{-1})\mathrm{s}`$. This means that, relative to a brightness oscillation with a constant frequency equal to the frequency at the beginning of this five-second interval, the observed brightness oscillation has a total phase lag of between $`12\pi `$ and $`16\pi `$ radians. The total phase lag is comparable to what is seen in many bursts, except that here the frequency inferred in the tail of the burst is significantly less than the spin frequency inferred from other bursts in this source.
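The quoted 12$`\pi `$–16$`\pi `$ phase lag for this burst follows directly from the measured frequency derivative over the five-second interval; a quick check (ours):

```python
import math

T = 5.0   # duration of the analyzed interval [s]
for nudot in (-0.46, -0.54, -0.62):   # best fit +/- 1 sigma [Hz/s]
    # Phase lag relative to a constant-frequency oscillation starting at the
    # same frequency: |Delta phi| = 2 pi * (1/2) * |nudot| * T^2
    lag = math.pi * abs(nudot) * T ** 2
    print(nudot, round(lag / math.pi, 1), "pi rad")
# -> ~11.5 pi to ~15.5 pi, i.e. between roughly 12 pi and 16 pi, as quoted
```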
There is no sign in this burst that the frequency has reached an asymptotic value. Burst (b) is the only one with a clearly different light curve. This is a weak burst. The frequency of the brightness oscillation is consistent with what is observed in, at least, bursts (a) and (c): a rise in the frequency near the beginning of the burst, followed by an approximate leveling off. We note, however, that within the uncertainties the frequency could also reach a maximum and then decline, as appears to be the case for bursts (c) and (d). ## 3 BRIGHTNESS OSCILLATIONS AT THE BEGINNING OF THE BURST Previous analyses have shown that the brightness oscillations in the initial ∼1 second of the bursts are often of particular interest. This is where the highest amplitudes (rms ∼50%; Strohmayer et al. 1998c) are reported, and where subharmonics of the strong oscillation have been detected in 4U 1636–536 (Miller 1999) and possibly in the Rapid Burster (Fox & Lewin 1999). It is therefore important to examine the initial portion more closely to see what hints about the brightness oscillation mechanism can be derived. Before doing so, we need to emphasize an important distinction. If the purpose of the analysis is to characterize the frequency variations of the ∼580 Hz oscillation, then extra parameters can only be added if the fit to the data is improved sufficiently to justify the additional complexity. The situation is different when the goal is to produce a matched filter for a search for a harmonically related frequency, as in the search for a signal at half of the ∼580 Hz dominant brightness oscillation in 4U 1636–536 (Miller 1999). For that purpose, it is not necessary to justify the extra parameters used in the construction of the filter, if no reference is made to the signal for which one is searching. In the case of the search for the ∼290 Hz oscillation, a five-parameter matched filter was used for each burst; matched filters with fewer parameters also give a clear signal at ∼290 Hz, although with lower significance because the filter does not fit the data as well. A general method to find the best-fit values of parameters and their confidence regions employs a likelihood function. In this approach, we suppose that we have a model in which the countrate as a function of time is predicted to be $`s(t)`$, from which we can predict the number of counts $`s_i`$ in one particular bin $`i`$ of the data, which in this case is 1/8192 s in duration. In general, $`s_i`$ is not an integer. The actual number of counts observed in bin $`i`$ is $`c_i`$, which is an integer. With these definitions, the Poisson likelihood of the full data set given the model $`s(t)`$ is $$\mathcal{L}=\mathrm{\Pi }_i\frac{s_i^{c_i}}{c_i!}e^{-s_i},$$ (1) where the product is over all of the bins of the data. Note that in normal applications of the point likelihood the bin sizes would be so small that a given bin would have either zero or one count, but the fixed bin size of 1/8192 s combined with the high count rates during the bursts (up to ∼30,000 c/s; see Figure 1) means that many of the bins have multiple counts. The likelihood is maximized to determine the best values of the parameters of the model waveform $`s(t)`$, and approximate confidence contours can be estimated using contours of constant log likelihood: $`2\mathrm{\Delta }\mathrm{log}\mathcal{L}=\mathrm{\Delta }\chi ^2`$ (Eadie et al. 1971, § 9.4.3, p. 207).
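In log form, eq. (1) is straightforward to evaluate on binned counts; a minimal sketch (ours, with illustrative names and toy data) follows.

```python
import math
import numpy as np

def poisson_log_likelihood(counts, model):
    """log of eq. (1): sum_i [ c_i log s_i - s_i - log(c_i!) ]."""
    return sum(c * math.log(s) - s - math.lgamma(c + 1.0)
               for c, s in zip(counts, model))

# Toy example: one second of 1/8192-s bins with a mean of 3 counts per bin;
# maximizing this quantity over the waveform parameters gives the best fit,
# and contours of constant log likelihood give approximate confidence regions.
rng = np.random.default_rng(0)
model = np.full(8192, 3.0)
counts = rng.poisson(model)
print(poisson_log_likelihood(counts, model))
```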
The model waveform $`s(t)`$ will in general include components related to the relatively slow change in the brightness of the source as well as components related to the high-frequency brightness oscillation. However, the frequency scales are different enough (∼1–5 Hz for the slowly rising component versus ∼580 Hz for the fast oscillations) that the fitting of the two components are nearly independent of each other. This means that, when we analyze the behavior of the brightness oscillations, we can simplify by assuming that the burst has a constant average brightness. With this in mind, the model we consider is $$s(t)=c_{\mathrm{av}}\left(1+A\mathrm{cos}2\pi [\nu (t)t+\varphi _0]\right),$$ (2) where $`c_{\mathrm{av}}`$ is the average countrate, $`A`$ is the amplitude of the signal (which we assume to be time-independent), $`\nu (t)`$ is the frequency as a function of time, and $`\varphi _0`$ is the phase of the oscillation at the beginning of the data interval analyzed. In this section we explore four different models for the frequency behavior in the initial 0.75 seconds of each of the five bursts (this time was chosen to conform to the analysis of Miller , which was performed to look for a weaker ∼290 Hz oscillation). The four models are: $$\nu _1(t)=\nu _0$$ (3) $$\nu _2(t)=\nu _0+\dot{\nu }t$$ (4) $$\begin{array}{cc}\hfill \nu _3(t)& =\nu _1+\dot{\nu }_1t,\quad t<t_{\mathrm{break}}\hfill \\ & =\nu _2+\dot{\nu }_2t,\quad t>t_{\mathrm{break}}\hfill \end{array}$$ (5) where continuity of the frequency is imposed, so that there are four independent parameters, and finally $$\begin{array}{cc}\hfill \nu _4(t)& =\nu _1+\dot{\nu }_1t,\quad t<t_{\mathrm{break}}\hfill \\ & =\nu _2+\dot{\nu }_2t,\quad t>t_{\mathrm{break}}\hfill \end{array}$$ (6) where continuity of the frequency is not imposed, so that there are five independent parameters. For the purposes of this section the most interesting of the parameters of the model waveform $`s(t)`$ is the frequency, as opposed to the amplitude or the initial phase of the brightness oscillation. If the amplitude $`A\ll 1`$, as it is in this case, then a tremendous speed-up in the search procedure is possible with the use of the cross-correlation (see, e.g., Helstrom 1960 or Wainstein & Zubakov 1962 for details of cross-correlation and matched filtering techniques) $$H=C\left|\int _{t_0}^{t_0+T}c(t)e^{2\pi i\nu (t)t}dt\right|^2,$$ (7) where $`t_0`$ is the start time of the burst, $`T`$=0.75 s is the duration of the burst, and $`C`$ is a normalization constant. In practice this integral is actually calculated as a sum over all of the bins of the data, and $`dt`$=1/8192 s is the duration of a bin. If $`C=2/N_{\mathrm{tot}}`$, where $`N_{\mathrm{tot}}`$ is the total number of counts in the data set, then $`H`$ has the same statistical properties as the Leahy power; $`H`$ is also related to the $`Z^2`$ statistic used in pulsar period searches (Buccheri et al. 1983; see Strohmayer & Markwardt 1999 for a recent use in the characterization of brightness oscillations during thermonuclear X-ray bursts). To lowest order in the oscillation amplitude $`A`$ this description is mathematically identical to the likelihood description, but it is much faster to apply because no search need be performed over the amplitude or oscillation phase. It is therefore preferable for low-amplitude oscillations. With this formalism, we can estimate the best values and uncertainty regions for the different frequency models above.
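A sketch (ours; the toy numbers are illustrative) of the statistic of eq. (7), evaluated as a discrete sum over the 1/8192-s bins. With $`C=2/N_{\mathrm{tot}}`$, a well-matched frequency model concentrates the counts' phases and yields a large $`H`$, while a mismatched model lets the phases drift and suppresses it.

```python
import numpy as np

DT = 1.0 / 8192.0   # bin width [s]

def H_statistic(counts, nu_of_t):
    """Eq. (7) with C = 2 / N_tot and phases 2*pi*nu(t)*t, as in eq. (2)."""
    counts = np.asarray(counts, dtype=float)
    t = (np.arange(counts.size) + 0.5) * DT
    phase = 2.0 * np.pi * nu_of_t(t) * t
    return 2.0 / counts.sum() * np.abs(np.sum(counts * np.exp(1j * phase))) ** 2

# Toy burst start: 0.75 s at 20000 c/s with a 5% oscillation, nu(t) = 581 + 2 t.
rng = np.random.default_rng(1)
t = (np.arange(int(0.75 / DT)) + 0.5) * DT
rate = 20000.0 * (1.0 + 0.05 * np.cos(2.0 * np.pi * (581.0 + 2.0 * t) * t))
counts = rng.poisson(rate * DT)

print(H_statistic(counts, lambda tt: 581.0 + 2.0 * tt))  # matched model: H ~ 20
print(H_statistic(counts, lambda tt: 581.0 + 0.0 * tt))  # constant nu: much smaller
```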
The figures in the previous section, which were constructed using a constant-frequency waveform, give this information for the one-parameter, constant-frequency model. ### 3.1 Two-Parameter Frequency Model The best-fit values for the two-parameter frequency model are given in Table 2. To estimate uncertainties on these parameters, we performed a Monte Carlo analysis in which we selected $`10^6`$ random values per burst of $`\nu _0`$ and $`\dot{\nu }`$, uniformly sampled from, respectively, 576 Hz to 585 Hz and -12 Hz s<sup>-1</sup> to 12 Hz s<sup>-1</sup>. The quoted uncertainties for single parameters were computed using a Bayesian viewpoint, in which the posterior probability density was calculated throughout the interval and then integrated over the other parameter to produce a marginalized probability distribution. We have assumed a uniform prior probability density over the whole space searched. This means that the posterior probability density is simply proportional to the likelihood. These confidence regions, which are the smallest regions that encompass 68% of the probability, are given in Table 3. In some cases the maximum likelihood value of a parameter obtained by extremization in the full two-dimensional parameter space is outside the marginalized 68% confidence region. This is symptomatic of the fact that the parameters are constrained only weakly by the data. ### 3.2 Four-Parameter Frequency Model The best-fit values for the four-parameter frequency model are given in Table 4. As for the two-parameter model, the uncertainties were estimated by marginalizing over all but the parameter of interest; the confidence regions containing 68% of the probability are given in Table 5. ### 3.3 Five-Parameter Frequency Model The best-fit values for the five-parameter frequency model are given in Table 6. As for the two-parameter model, the uncertainties were estimated by marginalizing over all but the parameter of interest; the confidence regions containing 68% of the probability are given in Table 7. ### 3.4 Summary of Frequency Models The best-fit parameters and relative log likelihoods are listed in Table 8; as indicated above, $`2\mathrm{\Delta }\mathrm{log}\mathcal{L}=\mathrm{\Delta }\chi ^2`$. From this table, it is clear that for all but burst four it is not necessary to use the five-parameter fit, and for bursts 2, 3, and 4 it is not necessary to use a model more complicated than the two-parameter model in which the frequency and frequency derivative are constant throughout the first 0.75 seconds. For burst (d) by itself the five-parameter model is preferred at only the $`2\sigma `$ level compared to the four-parameter model, and for all five bursts combined the five-parameter model is preferred at less than the $`1\sigma `$ level relative to the four-parameter model. For all five bursts combined, the four-parameter model is preferred at less than the $`1\sigma `$ level compared to the two-parameter model, and the two-parameter model is preferred at less than the $`2\sigma `$ level compared to the one-parameter model. ## 4 DISCUSSION AND SUMMARY What can be learned from this detailed characterization of the burst brightness oscillations in 4U 1636–536? The clearest impression left is that there are no simple statements about the frequency behavior that are true for all of the bursts.
In two of the bursts, one can make an argument that the oscillation frequency is initially 1–2 Hz below the asymptotic frequency, and then rises. In this interpretation, the asymptotic frequency is extremely close to the spin frequency of the neutron star. This picture can be qualitatively explained by the idea that the burning layer lifts 20–50 meters during the burst and settles down gradually. However, the burst on 31 December 1996 does not follow this pattern. The frequency in the initial second is indeed lower than the maximum value attained, but the significance of this initial signal is low (Leahy power of 10). The maximum is followed by a clear decrease in the frequency over several seconds, with a total phase change equivalent to more than five complete circuits around the star. This happens during a time when the countrate decreases from approximately 2/3 of the maximum to approximately 1/3 of the maximum. The burst on 29 December 1996 has a very strong and significant brightness oscillation in its tail, which appears to level out to a constant frequency. However, near the peak of the light curve for this burst the oscillation frequency is higher than this asymptotic frequency, at a 3$`\sigma `$ significance level. Such a drop in frequency is not expected in the simplest version of the hypothesis that the frequency changes are caused by the rise of the burning layers. In this model, the highest frequency should be observed when the layers are fully coupled to the core of the star, which is expected to occur when the frequency has reached its asymptotic limit. Another constraint on the hypothesis that the asymptotic frequency equals the spin frequency (after correcting for orbital Doppler shifts) is that the variation in the observed asymptotic frequency must be consistent with the possible modulation due to the binary motion of the neutron star. From binary evolution theory (see, e.g., Lamb & Melia 1987; Verbunt & van den Heuvel 1995), an LMXB such as 4U 1636–536 with a 3.8 hr orbital period (van Paradijs et al. 1990) that contains a neutron star of mass $`M_{\mathrm{NS}}`$=1.4$`M_{\odot }`$ to 2.0$`M_{\odot }`$ has a companion star of mass $`M_c\simeq 0.4M_{\odot }`$. Assuming that the orbit is approximately circular, the orbital velocity of the neutron star is therefore 90–130 km s<sup>-1</sup>, implying a maximum frequency modulation of $`\mathrm{\Delta }\nu /\nu =4.3\times 10^{-4}`$, or approximately 0.25 Hz if $`\nu `$=580 Hz. Therefore, the observed asymptotic frequency cannot be different by more than 0.5 Hz for two different bursts. The analysis of the 31 December 1996 burst reported in § 2 indicates that eight seconds after the start of the burst the frequency is less than 579.0 Hz. The asymptotic frequency in the burst on 29 December 1996 is 581.43 Hz, so the frequency in the 31 December 1996 burst must rise by 2 Hz to reach a plausible spin frequency. It is difficult to reconcile this frequency behavior with what is expected in the simplest version of the rising burning layer hypothesis. One possibility is that the observed frequency changes are not simply indicative of the spin frequency of the burning layer, but also include a time-dependent change in the phase at which the photons emerge relative to the phase of the burning layer. This would be observationally indistinguishable from a pure frequency change, and would add an extra degree of freedom to the model. Even this, however, is subject to significant observational restrictions.
To see how these restrictions arise, consider the following observational trends, which have been observed in many bursts from several sources (see, e.g., Strohmayer et al. 1998 for a summary). In the remainder of this section we assume that all quantities (e.g., frequencies, times, and phases) are measured at infinity. (1) There are several bursts in which burst oscillations are seen for the entire burst, and do not disappear during the time of peak countrate. (2) Aside from an early phase in which there may be a frequency decrease, the frequency increases smoothly as the burst progresses. (3) The total phase lag of the oscillations compared with a hypothetical oscillation that has a constant frequency equal to the frequency in the burst tail is as much as $`10\pi `$. The total amount of energy in a burst is $`\sim 10^{39}`$ ergs. If expansion of a layer and angular momentum conservation are to explain the $`\sim `$0.3%–1% change in the observed angular frequency, then the layer must rise by a distance that is a fraction $`\sim `$0.2%–0.5% of the radius of the star, or 20 to 50 meters. The surface gravity of a neutron star is $`\sim 2\times 10^{14}`$ cm s<sup>-2</sup>, so the largest amount of mass that can be lifted to the required 20–50 meter height above the surface is $`\sim `$1–2$`\times 10^{21}`$ g. If most of the $`\sim 10^{13}`$ cm<sup>2</sup> surface area of the star is involved, this implies that the greatest column depth which could be lifted to the required height is roughly $`10^8`$ g cm<sup>-2</sup>, which is comparable to the expected $`10^6`$–$`10^8`$ g cm<sup>-2</sup> depth of ignition (see, e.g., Fushiki & Lamb 1987; Brown & Bildsten 1998). One may therefore distinguish two scenarios: (1) the burning layer rotates with the core of the star at a constant spin frequency and the observed frequency shifts are caused by phase shifts induced by radiation transport through more slowly rotating layers, and (2) the burning layer itself is lifted and rotates more slowly than the core of the star. We now treat these in order. Suppose for simplicity that the burning layer has an infinitesimal vertical extent, that it has some restricted azimuthal extent, and that it all rotates with the same angular frequency $`\omega _{\mathrm{burn}}(t)`$. The energy from this layer propagates upwards through the atmosphere, which in general may be composed of layers with different angular frequencies. Therefore, the phase of emergence of the radiation may differ from the phase of the burning layer at the time of the emission of the radiation. Under the rising burning layer hypothesis, it is expected that the angular frequency of higher layers is less than the angular frequency of lower layers ($`d\omega /dh<0`$). Hence, there is expected to be a lag $`\varphi _{\mathrm{lag}}>0`$ between the phase of emergence and the phase of emission. This phase lag will, in general, have a time-dependence, as the scale height of the atmosphere and the angular frequency of different layers in the atmosphere change throughout the burst. An observer at infinity will therefore see a net angular frequency of a hot spot that is equal to $`\omega _{\mathrm{burn}}(t)-\dot{\varphi }_{\mathrm{lag}}(t)`$. Consider first a burning layer that rotates with the stellar core throughout the burst. Then $`\omega _{\mathrm{burn}}(t)=\mathrm{const}=\omega _{\mathrm{spin}}`$. If neither $`\omega (h)`$ nor the density or height of the envelope changes with time, then $`\varphi _{\mathrm{lag}}`$ is a constant and the observed frequency is just $`\omega _{\mathrm{spin}}`$.
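A rough version of the energy budget just described can be verified directly; the sketch below uses only the round numbers quoted above (burst energy, surface gravity, surface area, and a 20–50 m rise).

```python
# Energetics of the rising-layer picture, using the round numbers in the text.
E_burst = 1.0e39     # erg, total burst energy
g = 2.0e14           # cm/s^2, neutron-star surface gravity
area = 1.0e13        # cm^2, stellar surface area
for h_cm in (2.0e3, 5.0e3):        # 20-50 m rise, in cm
    m_max = E_burst / (g * h_cm)   # largest mass liftable to height h
    sigma = m_max / area           # corresponding column depth
    print("h = %2.0f m:  m_max ~ %.1e g,  column depth ~ %.1e g/cm^2"
          % (h_cm / 100.0, m_max, sigma))
```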
In order to have an apparent frequency shift in this situation, therefore, the structure or angular velocity of the envelope must change with time. Now consider an envelope that does change with time. For us to observe a frequency less than $`\omega _{\mathrm{spin}}`$, it is necessary that $`\dot{\varphi }_{\mathrm{lag}}(t)>0`$, so the characteristic phase of emergence of the radiation must lag the phase of the source of heat by a greater and greater amount with increasing time (the increase of this phase lag with time must itself decrease with time to produce the observed increase in frequency). But how is this possible? As the envelope settles down, the phase lag should decrease, because $`d\omega /dh<0`$. But if the phase lag decreases, the observed frequency should be higher than the spin frequency. This is not seen in most bursts, and even in the burst on 29 December 1996 where there does appear to be a short period of spindown, the total phase lead implied by the spindown is much smaller than the total phase lag implied by the spinup near the beginning of the burst. Thus, the preceding set of assumptions is inconsistent with the data. This demonstrates that the observed frequency behavior is inconsistent with the source of heat (i.e., the burning layer) rotating at a constant frequency equal to $`\omega _{\mathrm{spin}}`$. Instead, the source of heat must change its frequency during the burst. To analyze this situation, let us now consider a burning layer with a finite thickness, so that the observed photons are a superposition of the photons from many infinitesimal layers as discussed above. The observed frequency of oscillation is then a superposition of the frequencies due to the infinitesimal layers. Consider two of these infinitesimal slices, labeled 1 and 2, where slice 1 is higher than slice 2. Suppose that these slices are not coupled to each other. Then, by assumption, the angular frequency $`\omega _{\mathrm{burn},1}`$ of slice 1 is less than the angular frequency $`\omega _{\mathrm{burn},2}`$ of slice 2. In addition, because the photons from slice 2 must travel through the layers between slices 2 and 1 as well as through the same atmospheric layers as the photons from slice 1, the phase lag $`\varphi _{\mathrm{lag},1}`$ of photons from slice 1 is expected to be less than the phase lag $`\varphi _{\mathrm{lag},2}`$ of photons from slice 2. Hence, as the atmospheric scale height decreases, it is expected that $`\varphi _{\mathrm{lag},2}`$ will decrease more rapidly than $`\varphi _{\mathrm{lag},1}`$ does, so that $$\dot{\varphi }_{\mathrm{lag},2}<\dot{\varphi }_{\mathrm{lag},1}<0.$$ (8) Therefore, the difference between the angular frequency of the photons from slice 2 and the angular frequency of the photons from slice 1 is $$\omega _{\mathrm{burn},2}-\omega _{\mathrm{burn},1}+\dot{\varphi }_{\mathrm{lag},1}-\dot{\varphi }_{\mathrm{lag},2}>\omega _{\mathrm{burn},2}-\omega _{\mathrm{burn},1}.$$ (9) This means that the phases of emergence of radiation diverge rapidly from each other, which leads quickly to a low amplitude unless the heat source has a small vertical extent. The requirement that the amplitude be significant means that the total azimuthal phase subtended by the emergent radiation has to be much less than $`2\pi `$.
The integrated phase lag relative to the stellar core is often $`10\pi `$ or larger, hence the average vertical extent of the heat source must be much less than 1/5 of the vertical distance from the original location of the heat source to its location during the burst. An alternative to having the vertical extent of the layer be small is that the burning layer may be tightly coupled to itself, so that its angular frequency is approximately constant over a significant vertical distance. To summarize, several conclusions may be drawn about the standard model for frequency changes during burst oscillations, which we take to be the picture that at least part of the burning layer is lifted and then settles gradually to the surface as the flux drops, producing an observed asymptotic frequency equal to the spin frequency of the neutron star Doppler-shifted by the orbital motion of the neutron star. (1) The burning region itself (and not just overlying optically thick layers) must be lifted by 20–50 meters from the surface, (2) this region must remain decoupled from the rest of the star, presumed to be rotating at the original spin frequency, for several seconds, (3) to produce the observed coherence of the brightness oscillations during the rise in frequency, the burning layer must either have a vertical extent much smaller than its height above the surface or be strongly coupled to itself to prevent relative azimuthal motion, and (4) the existence of a frequency greater than the asymptotic frequency (as in the 29 December 1996 burst) implies that something other than differential rotation (e.g., variation in the phase lag) must account for at least part of the observed frequency change. The prolonged decrease in frequency in the tail of the 31 December 1996 burst does not fit straightforwardly into this picture. Despite these difficulties, the high stability (Strohmayer et al. 1998) and coherence (Markwardt & Strohmayer 1999) of the brightness oscillations in the tails of bursts from sources such as 4U 1728–34 argue persuasively that the frequency in the tail of the bursts is close to either the fundamental or the first overtone of the neutron star spin frequency. Moreover, the general picture in which frequency changes are attributed to changes in the height of the emitting layer accounts approximately for the magnitude of the frequency change and explains why the frequency tends to rise near the beginning of the burst. However, in its current form it suffers from apparently serious problems. It is extremely important that there be a detailed investigation of, e.g., the coupling between differentially rotating layers, and that other ideas be explored so that the strengths and weaknesses of the rising layer model are put into sharper focus. We thank Don Lamb and Fred Lamb for discussions about models of the frequency change, and Don Lamb, Dimitrios Psaltis, and Carlo Graziani for comments on a previous version of this paper. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. This work was supported in part by NASA grant NAG 5-2868, NASA AXAF contract SV 464006, and NASA ATP grant number NRA-98-03-ATP-028.
# Off equilibrium dynamics of the Frustrated Ising Lattice Gas ## Abstract We study by means of Monte Carlo simulations the off equilibrium properties of a model glass, the Frustrated Ising Lattice Gas (FILG) in three dimensions. We have computed typical two times quantities, like density-density autocorrelations and the autocorrelation of internal degrees of freedom. We find an aging scenario particularly interesting in the case of the density autocorrelations in real space, which is very reminiscent of spin glass phenomenology. While this model captures the essential features of structural glass dynamics, its analogy with spin glasses may bring the possibility of its complete description using the tools developed in spin glass theory. Much effort is currently being devoted to reaching a reasonable theoretical understanding of structural glass physics. While the huge amount of experimental work available shows an extremely rich phenomenology, a successful description in terms of microscopic models is still lacking. The theoretical models available are mainly phenomenological, basically focusing on the non-Arrhenius relaxation in the so-called fragile glasses. Among the most successful ones are the free volume model and the entropy model, in which the relevant variables for the description of a glass transition from a supercooled liquid phase are, respectively, volume and entropy. While the latter model predicts the existence of a thermodynamic second order transition at a well defined temperature, it is extremely difficult to obtain evidence of it, as it is in practice experimentally inaccessible. Another successful approach has been the Mode Coupling Theory of Götze and Sjögren. This dynamical approach is qualitatively correct in predicting the behavior of time correlations and responses in supercooled liquids. Recently it has been shown to correspond to the high temperature limit of a dynamical theory of spin glasses. This raised the interesting possibility of describing structural glass physics by exploiting the analogy with some spin glasses in which the transition is discontinuous, as originally suggested by Kirkpatrick et al. Moreover, some finite dimensional models have been proposed whose mean field limit would provide such a transition. Nevertheless, although spin and structural glasses have some basic features in common, like the characteristic slow dynamics, an essential difference is the absence (presence) of quenched disorder in structural glasses (spin glasses). Unlike in spin glasses, the nature of the glass transition may be purely dynamical in origin, without an underlying thermodynamic transition. The existence of a growing correlation length, for example, has not been considered until very recently in structural glass models. The absence of a simple microscopic model makes these questions very difficult to answer, and much of the present knowledge comes from computer simulations of Lennard-Jones systems or from simple kinetically constrained models. These, in particular, have dynamically self-induced frustration effects obtained by restricting the possible Monte Carlo movements, and reproduce the glassy phenomenology reasonably well. However, although they have the advantage of being lattice models, it is not obvious how to relate the ad hoc kinetic rules to the underlying physics.
A possible candidate to fill in this gap is the Frustrated Ising Lattice Gas model (FILG), a Hamiltonian lattice gas in which the presence of internal degrees of freedom subjected to quenched disorder mimics the geometric frustration that slows down the motion of the molecules as the system is cooled or compressed. The introduction of internal degrees of freedom is responsible for the slowing down of the diffusional dynamics of the particles, and the necessity of considering them in order to describe the essence of glass physics has recently been addressed by Tanaka. The thermodynamic properties of the FILG in 3D as well as its equilibrium dynamics have been studied by Nicodemi and Coniglio. The model shows many glass properties, including dependence on the cooling (or compression) rate, stretched exponential behavior in correlation functions, a dynamical singularity in which the diffusion constant goes to zero, and the breakdown of the Stokes-Einstein relation along with anomalous diffusion at intermediate times. As in many glasses, no singularities in the linear susceptibilities are observed; in particular, the compressibility is continuous everywhere. Scarpetta et al. studied an equivalent, non-local version of the FILG (the Site Frustrated Percolation, SFP) in 2D, finding a behavior similar to the 3D FILG, the main difference being the (Arrhenius) dynamical singularity occurring at zero temperature. This model also seems to relate the glass transition to a percolation-type transition, as evidenced by the onset of several precursor phenomena. Non-equilibrium phenomena, aging being one example, are widespread in a great variety of systems, including polymers, granular materials and ferromagnetic coarsening, and appear in the relaxation dynamics of both spin and structural glasses, both theoretically and experimentally. In this letter we report results for the off equilibrium dynamics of the FILG in three dimensions, showing a characteristic aging dynamics present in some two times quantities and some possible scaling scenarios. Of particular interest are the results for the density autocorrelations, which suggest the definition of a non-linear compressibility that may bring information on a possible thermodynamic transition analogous to the magnetic transition in spin glasses. The FILG is defined by the Hamiltonian: $$H=-J\sum _{ij}(\epsilon _{ij}\sigma _i\sigma _j-1)n_in_j-\mu \sum _in_i.$$ (1) There are two kinds of dynamical variables: the site occupation $`n_i=0,1`$ ($`i=1,\mathrm{},N`$) and the particles' internal degrees of freedom, $`\sigma _i=\pm 1`$. The usually complex spatial structure of the molecules of glass forming liquids, which can assume several spatial orientations, is in part responsible for the geometric constraints on their mobility. Here we take the simplest case of two possible orientations, and the steric effects imposed on a particle by its neighbors are felt as restrictions on its orientation due to the quenched random variables $`\epsilon _{ij}=\pm 1`$. The key role of the first term of the Hamiltonian is that when $`J\to \mathrm{}`$ (recovering the SFP) no frustrated link can be fully occupied, implying that any frustrated loop in the lattice will have a hole, and then $`\rho <1`$, preventing the system from reaching the close packed configuration. Finally $`\mu `$ represents a chemical potential ruling the system density (at fixed volume) and, by taking $`\mu \to \mathrm{}`$, we recover the Edwards-Anderson spin glass model.
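For readers who wish to experiment with the model, a minimal sketch of the FILG energy of Eq. (1) on a simple-cubic lattice with periodic boundaries is given below; the lattice size, couplings and random-number choices are illustrative assumptions, not parameters of this work.

```python
import numpy as np

# Minimal sketch of the FILG energy, Eq. (1): n is 0/1 occupation, s is
# +/-1 internal spin, and eps[d] holds the quenched +/-1 bonds in
# lattice direction d (bond between a site and its neighbor along +d).
def filg_energy(n, s, eps, J=1.0, mu=3.0):
    E = 0.0
    for d in range(3):                       # three lattice directions
        n_shift = np.roll(n, -1, axis=d)
        s_shift = np.roll(s, -1, axis=d)
        E += -J * np.sum((eps[d] * s * s_shift - 1.0) * n * n_shift)
    E += -mu * np.sum(n)
    return E

L = 8
rng = np.random.default_rng(0)
n = rng.integers(0, 2, size=(L, L, L))           # occupations
s = rng.choice([-1, 1], size=(L, L, L))          # internal spins
eps = rng.choice([-1, 1], size=(3, L, L, L))     # quenched disorder
print("E =", filg_energy(n, s, eps))
```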
It should also be mentioned that, after including gravity, the same model successfully describes granular materials under vibration, another class of systems where geometric frustration rules the behavior. By increasing $`\mu `$ the model presents two characteristic points. For $`\mu \lesssim 0.75`$ (low density) it shows liquid-like behavior: time correlation functions decay exponentially, equilibration is quickly achieved and the particles' mean squared displacement grows linearly with time, a simple diffusion scenario. At $`\mu \approx 0.75`$ there is a percolation transition (the corresponding density being $`\rho \approx 0.38`$). Dynamically it manifests itself in the onset of two different relaxation regimes in the correlation functions, a fast exponential relaxation at short times and a slow relaxation at longer times characterized by stretched exponentials. The diffusion is still linear for long times but the diffusion coefficient becomes smaller as the density grows. Also, for a fixed $`\mu `$, the equilibrium density depends on the cooling rate. The dynamics becomes slower as the chemical potential grows (or equivalently, the temperature is lowered) and a second transition is reached for $`\mu \approx 6`$. This is a spin glass transition associated with the freezing-in of the internal degrees of freedom. Interestingly, at this point a dynamical singularity is also present, manifested by the vanishing of the diffusion constant. Besides the qualitative analogies with the physical processes typical of glass forming liquids, there are a few points of more fundamental character. The presence of two characteristic temperatures (or chemical potentials) separating different dynamical regimes is common with a class of mean field theories of spin glasses and also with what is observed in real glass formers. The percolation transition corresponds to the dynamical transition in mean field $`p`$-spin or Potts glasses, or to the glass transition in the mode coupling theory. In the FILG the dynamical signature of the transition is a consequence of the appearance of percolating clusters, a geometrical feature which may be responsible for the slowing down of the dynamics in structural glasses. The second transition corresponds to the ideal glass transition at which the relaxation times diverge and the diffusion constant goes to zero. In the FILG the structural manifestation of this transition is the presence of a frozen percolating cluster. The density of the system approaches a critical value $`\rho _c\approx 0.7`$, attainable only by an infinitely slow cooling. These facts make the FILG an important model for studying different aspects of the glass transition. Different protocols can be envisaged for studying, for example, correlations and response functions. We have prepared the model in a non-equilibrium state, setting the parameters as in earlier work, with an initial density lower than the critical one. Then the system is quenched to a supercritical chemical potential and dynamic correlations are recorded. We have used a Monte Carlo dynamics which alternates flipping of the internal degrees of freedom, creation and destruction of particles in a plane surface (which mimics a compression experiment), and particle diffusion. The creation-destruction of particles is fast enough to destroy all particles at long times if it is allowed in the bulk. Consider the connected two point correlation $$c(t,t_w)=\frac{1}{N}\sum _in_i(t+t_w)n_i(t_w)-\rho (t+t_w)\rho (t_w)$$ (2) where the global density at time $`t`$ is given by $`\rho (t)=N^{-1}\sum _in_i(t)`$.
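As a sketch, the connected correlation of Eq. (2), and the normalized autocorrelation $`C_n`$ defined just below, can be computed from a stored trajectory of occupation configurations as follows (array layout and names are illustrative assumptions):

```python
import numpy as np

# configs[t] is the flattened occupation array n_i at Monte Carlo time t.
def c_density(configs, t, t_w):
    n_tw = configs[t_w].astype(float)
    n_t = configs[t + t_w].astype(float)
    rho_tw, rho_t = n_tw.mean(), n_t.mean()
    return (n_t * n_tw).mean() - rho_t * rho_tw     # Eq. (2)

def C_n(configs, t, t_w):
    # normalized density autocorrelation used in the figures
    return c_density(configs, t, t_w) / c_density(configs, 0, t_w)
```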
We now define the density autocorrelations as $`C_n(t,t_w)=c(t,t_w)/c(0,t_w)`$. In figure (1) the behavior of $`C_n(t,t_w)`$ is shown as a function of $`t`$ in a semi-log plot after a quench in chemical potential to a value $`\mu =10`$, for waiting times between $`2^5`$ and $`2^{17}`$. A typical aging scenario is present, signalling the slowing down of the dynamics as the waiting time grows. For the longest waiting times the correlation presents a rather fast relaxation to a plateau in which the system is in quasi-equilibrium: the dynamics is stationary and the fluctuation-dissipation relations hold. The plateau separates two time scales typical of glassy systems: a $`\beta `$ (fast) relaxation at short times and an $`\alpha `$ (slow) relaxation at longer times, corresponding respectively to the fast movements of the particles inside the dynamical cages and the large scale, cooperative process that takes much more time in order to rearrange the cages. Moreover, in this very long time regime ($`t\gg t_w`$), the system falls out of equilibrium, the correlations decay to zero asymptotically and time translational invariance (TTI) no longer holds, with the corresponding violation of the fluctuation-dissipation theorem (FDT). As the $`C_n(t,t_w)`$ curves have a complex behavior it is possible to scale the curves only after a careful analysis of the different time scales present. It is clear that only for the largest waiting times different relaxation scales can be observed. We restrict our analysis to the five upper curves $`(t_w=2^{13},\mathrm{},2^{17})`$. In these curves, a fast initial decay is observed up to times $`t\approx 100`$ MCS, corresponding to the already mentioned $`\beta `$ decay which we will not analyze in any more detail. Then a plateau characteristic of quasi-equilibrium dynamics develops. Its asymptotic value as $`t_w\to \mathrm{}`$ defines the ergodicity breaking (or Edwards-Anderson) parameter. For a fixed, large $`t_w`$ and $`t\gg t_w`$, the system begins to fall out of equilibrium and the correlations decay to zero. But only the very early epochs of this decay can be observed within the times of our simulations, although we can observe the crossover between equilibrium and non-equilibrium dynamics. Consequently, in order to obtain a good scaling for this last regime we have subtracted from $`C_n(t,t_w)`$ a stationary contribution of the form: $$C_n(t,t_w)=C_{\mathrm{}}+At^{-\alpha },t_w\text{ fixed},t\ll t_w.$$ (3) with $`C_{\mathrm{}},A`$ and $`\alpha `$ as fit parameters. For the non-stationary regime we assumed a time dependence of the form $`h(t_w)/h(t+t_w)`$ with $`h(x)`$ given by $`h(x)=\mathrm{exp}\left[(1-\mu )^{-1}(x/\tau )^{1-\mu }\right]`$ with $`\mu <1`$ and $`\tau `$ a microscopic timescale. Note that this form is quite general as one recovers the cases of a simple $`t/t_w`$ dependence (full aging) when $`\mu =1`$ and stationary dynamics when $`\mu =0`$. The final scaling is shown in figure (2). The scaling obtained is much better than assuming only full aging or activated dynamics, even after including in these cases the contribution from the stationary decay. We have also measured, as shown in figure (3), the autocorrelations of the internal degrees of freedom: $$C_s(t,t_w)=\frac{1}{N}\sum _is_i(t+t_w)n_i(t+t_w)s_i(t_w)n_i(t_w),$$ (4) The internal degrees of freedom correspond to a diluted Edwards-Anderson (EA) spin glass and the curves should be compared to the ones of the 3D EA model.
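The scaling form used for the non-stationary regime is easy to encode; the following sketch evaluates the scaling variable $`h(t_w)/h(t+t_w)`$ for given fit parameters $`\tau `$ and $`\mu `$ (the parameter values in a real collapse would come from the fit, not from this sketch).

```python
import numpy as np

# h(x) = exp[(1 - mu)^(-1) (x / tau)^(1 - mu)], with mu < 1 and tau a
# microscopic timescale, as defined in the text.
def h(x, tau, mu):
    return np.exp((x / tau) ** (1.0 - mu) / (1.0 - mu))

def scaling_variable(t, t_w, tau, mu):
    # mu -> 1 gives a pure t_w/(t + t_w) dependence (full aging);
    # mu -> 0 gives stationary dynamics.
    return h(t_w, tau, mu) / h(t + t_w, tau, mu)
```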
From our knowledge of the 3D EA model one may expect the scaling of the autocorrelation in the aging regime to be of the form $`t^{-\alpha }\stackrel{~}{C}(t/t_w)`$. Nevertheless we verified that an activated dynamics scaling of the form $`C_s(t,t_w)\sim \stackrel{~}{C}(\mathrm{log}(t)/\mathrm{log}(t_w))`$ works better. Moreover, doing an analysis similar to the one for $`C_n(t,t_w)`$ the scaling can be slightly improved (see fig.(3), inset). To our knowledge this is the first Hamiltonian lattice model which presents the essentials of structural glass phenomenology. In particular the results for the density autocorrelation are very promising, suggesting that it may be possible to apply spin glass ideas and techniques to an analytical investigation of the model. The similarity between the density autocorrelation and the spin glass overlap function suggests the introduction of a non-linear compressibility in analogy with the spin glass susceptibility, which may be a good quantity for studying the possibility of an underlying phase transition with a growing correlation length in the model. Another issue that must be studied is the precise form of the violation of FDT through the so-called fluctuation dissipation ratio. Also other protocols may be implemented in order to probe the off equilibrium dynamics of the system. Work in progress, keeping fixed the global density and performing a quench in temperature, indicates that the phenomenology is similar to what we have presented here. Moreover, the evaluation of the root mean square deviation of the particles and the incoherent scattering function (which is related to the Fourier transform of the density correlations) also show evidence of slow dynamics and aging. The results of these investigations will be published in a future paper. We are also working on the $`2D`$ version of the model: here, some differences are expected with respect to $`3D`$ because the dynamical singularity only occurs for $`T=0`$ $`(\mu \to \mathrm{})`$ and the relaxation time diverges with an Arrhenius law. Still other issues can also be explored. For instance, to what extent, if any, the scenario for the $`3D`$ model presented here changes as one goes to infinite range connections. In the mean field version, where both first and second order transitions show up, it might be possible that different aging regimes are present, and it would be interesting to know if the FILG is a finite dimensional version of a model whose mean field limit has a discontinuous transition. This work was partly supported by Brazilian agencies CNPq and FAPEMIG. We acknowledge M. Sellitto for a careful reading of the manuscript and J.A.C. Gallas for providing time on his alpha station.
# (HYBRID) BARYONS: QUANTUM NUMBERS AND ADIABATIC POTENTIALS ## 1 Introduction Hybrid baryons are bound states of three quarks with an explicit excitation in the gluon field of QCD. The construction of (hybrid) baryons in a model motivated from and consistent with lattice gauge theory, the non–relativistic flux–tube model of Isgur and Paton, was detailed in ref. There we studied the detailed flux dynamics and built the flux hamiltonian. A minimal amount of quark motion is allowed in response to flux motion, in order to work in the centre of mass frame. Otherwise, we make the so–called “adiabatic” approximation, where the flux motion adjusts itself instantaneously to the motion of the quarks. The main result was that the lowest flux excitation can, to a high degree of accuracy (about 5%), be simulated by neglecting all flux–tube motions except the vibration of a junction. The junction acquires an effective mass from the motion of the remainder of the flux–tube and the quarks. The model is then simple: a junction is connected via a linear potential to the three quarks. The ground state of the junction motion corresponds to a conventional baryon and the various excited states to hybrid baryons. ## 2 Quantum numbers of (hybrid) baryons The junction can move in three directions, and correspondingly be excited in three ways, giving the hybrid baryons $`H_1,H_2`$ and $`H_3`$. For each junction excitation, it is found that the junction wave function can be realized to be either totally symmetric, or totally antisymmetric under quark label exchange, indicated by $`H^S`$ and $`H^A`$ respectively. The quantum numbers of the lowest–lying states that can be constructed on the adiabatic surfaces corresponding to each of the six hybrid baryons are indicated in Table 1. Since quarks are fermions, the wave function should be totally antisymmetric under quark label exchange (the Pauli principle). The colour structure of hybrid baryons is taken to be identical to that of conventional baryons, i.e. it is totally antisymmetric under label exchange. This imposes constraints on the combination of flavour and non–relativistic spin $`S`$ of the three quarks that is allowed. The combinations are indicated in Table 1. “Chirality” gives the behaviour of the junction wave function under reflection in the plane spanned by the quarks, when the positions of the quarks are fixed. Let $`L`$ be the orbital angular momentum of the quarks and the junction. It is possible to argue that $`L=1`$ for the ground state $`H_1`$ and $`H_2`$ hybrid baryons, while $`L=0`$ for the ground state conventional and $`H_3`$ hybrid baryons. The total angular momentum $`\mathbf{J}=\mathbf{L}+\mathbf{S}`$. Since $`L=0`$ for ground state conventional and $`H_3`$ hybrid baryons, $`J=S`$. Since $`L=1`$ for ground state $`H_1,H_2`$ hybrid baryons, $`J=\frac{1}{2},\frac{3}{2}`$ for $`S=\frac{1}{2}`$, and $`J=\frac{1}{2},\frac{3}{2},\frac{5}{2}`$ for $`S=\frac{3}{2}`$. These assignments are indicated in Table 1. ## 3 The potential in which the quarks move We shall now calculate the junction energy (“adiabatic potential”) for the ground and first excited states of the junction as a function of the quark positions. Define $$\mathbf{\rho }=\frac{\mathbf{r}_1-\mathbf{r}_2}{\sqrt{2}}\qquad \mathbf{\lambda }=\frac{\mathbf{r}_1+\mathbf{r}_2-2\mathbf{r}_3}{\sqrt{6}}\qquad \mathrm{cos}\theta _{\rho \lambda }=\frac{\mathbf{\rho }\cdot \mathbf{\lambda }}{\rho \lambda }$$ (1) where $`\mathbf{r}_i`$ denote the positions of the quarks. The energy is a function of $`\rho `$, $`\lambda `$ and $`\theta _{\rho \lambda }`$. The procedure for evaluating the conventional baryon potential is as follows.
We numerically evaluate $`V_B(l_1,l_2,l_3)`$ by solving the Schrödinger Equation for the junction hamiltonian, $`(\frac{1}{2}M_{\text{eff}}^{\mathrm{}}\dot{\mathbf{r}}^2+V_\text{J})\mathrm{\Psi }_B(\mathbf{r})=V_B(l_1,l_2,l_3)\mathrm{\Psi }_B(\mathbf{r})`$, variationally using an ansatz ground state simple harmonic oscillator wave function. $`M_{\text{eff}}^{\mathrm{}}`$ is the effective mass of the junction in the limit where the flux–tubes between the junction and quarks are continuous strings. $`V_\text{J}`$ is the linear potential between the junction and the three quarks. The hybrid baryon $`H_1`$ potential $`V_{H_1}(l_1,l_2,l_3)`$ is solved using $`(\frac{1}{2}M_{\text{eff}}^{\mathrm{}}\dot{\mathbf{r}}^2+V_\text{J})\mathrm{\Psi }_{H_1}(\mathbf{r})=V_{H_1}(l_1,l_2,l_3)\mathrm{\Psi }_{H_1}(\mathbf{r})`$ with a first excited state simple harmonic oscillator wave function as an ansatz. The difference between the hybrid and conventional baryon potentials is plotted in Figures 1–2. Since the potentials are functions of $`\rho ,\lambda `$ and $`\theta _{\rho \lambda }`$, one of the variables is held fixed at a typical value for clarity of presentation. The (hybrid) baryon potential can be seen to increase when $`\rho \lambda `$ is small. Numerically, the ratio of the hybrid to baryon potential is found to be $`1.44`$–$`1.6`$ for all $`\rho ,\lambda `$ and $`\theta _{\rho \lambda }`$. A preliminary estimate of the ground state $`H_1`$ hybrid baryon mass of $`\sim 2`$ GeV has also been made by adding the difference between the hybrid and conventional baryon potentials to the phenomenologically successful baryon potential used in ref. ## 4 Conclusions The spin and flavour structure of the six hybrid baryons have been specified. Exchange symmetry constrains the spin and flavour of the (hybrid) baryon wave function. The orbital angular momentum of the low–lying hybrid baryon is argued to be unity. The adiabatic potentials have been calculated numerically. The low–lying hybrid baryon mass has been estimated numerically.
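As an illustration of the variational procedure of § 3, the following sketch minimizes the energy of a junction of effective mass $`M`$ tied to three fixed quarks by linear potentials, with a Gaussian (harmonic-oscillator ground-state) trial wave function; all numerical values (mass, string tension, quark positions) are illustrative assumptions, not values from this work.

```python
import numpy as np

# Variational sketch: <T> for psi ~ exp(-r^2 / 2 a^2) is 3/(4 M a^2)
# (hbar = 1), and <V_J> = b * sum_i <|r - r_i|> is estimated by Monte
# Carlo sampling from the Gaussian trial density.
rng = np.random.default_rng(1)
M, b = 1.0, 0.18                                  # illustrative units
quarks = np.array([[1.0, 0.0, 0.0], [-0.5, 0.9, 0.0], [-0.5, -0.9, 0.0]])

def energy(width, center, nsamp=20000):
    kinetic = 3.0 / (4.0 * M * width**2)
    r = center + width * rng.standard_normal((nsamp, 3))
    v = b * np.linalg.norm(r[:, None, :] - quarks[None, :, :], axis=2).sum(axis=1)
    return kinetic + v.mean()

widths = np.linspace(0.4, 3.0, 27)
center = quarks.mean(axis=0)          # simplest choice of trial center
E = min(energy(w, center) for w in widths)
print("variational junction energy ~ %.3f (illustrative units)" % E)
```

A first excited state of the junction motion (a node along one direction, as for the $`H_1`$ and $`H_2`$ surfaces) would be treated the same way with an excited-oscillator trial function.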
# A New Theory of Geomagnetism ## 1 Introduction In 1905 Albert Einstein described Geomagnetism as one of the five unsolved problems of physics. After nearly a century, in spite of a tremendous amount of work which has culminated in the dynamo model of Geomagnetism, it cannot be said with confidence that the problem has been solved. From time to time, simulations are developed which improve upon earlier models, but several unexplained features persist. These include the problems of Geomagnetic reversals. In particular it may be mentioned (cf. ref. and ) that Muller and Morris have attributed the reversals to asteroid impacts, which in turn have been related to mass extinctions. We will point out in what follows that, in the light of recent work on semionic and anomalous behaviour of Fermions under special conditions, the solid core of the earth, which has hitherto not been considered, could contribute significantly to Geomagnetism, and could even facilitate an explanation for the magnetic reversals. ## 2 Magnetism of the Solid Core It is well known that the earth has a solid core with a radius of about 1200 kilometers, composed mostly of Iron $`(90\%)`$ and Nickel $`(10\%)`$, at a temperature of about 6000 degrees centigrade and with a relative density around 10. This in turn is surrounded by the liquid core which, it is believed, gives rise to the dynamo model of Geomagnetism. Given the above data, using the atomic weight of iron, we can easily calculate that the number of atoms in the solid core, $`N`$, is given by $$N\sim 10^{48}$$ (1) where the symbol $`\sim `$ denotes "of the order of". We next calculate the Fermi temperature of the conduction electrons in the solid core. This is given by, $$kT_F=\left\{6\pi ^2\left(\frac{N}{V}\right)\right\}^{2/3}\frac{\mathrm{}^2}{2m}$$ (2) where $`V`$ is the volume of the solid core, $`k`$ is the Boltzmann constant, $`\mathrm{}`$ is the reduced Planck constant and $`m`$ the electron mass. It follows from (2) that $$T_F\sim 10^{5}\text{ }^{}\text{C}$$ (3) Thus one can see that the temperature of the solid core is below the Fermi temperature of the conduction electrons. In recent years it has been realized that under special conditions like low dimensionality or low temperatures, conduction electrons do not strictly obey Fermi-Dirac statistics, but rather they are semionic, that is, they obey statistics between the Fermi-Dirac and Bose-Einstein statistics. In particular, this is true below the Fermi temperature. The implications are interesting: Given Fermi-Dirac statistics, at temperatures below the Fermi temperature we would have (cf. ref.) for the magnetisation $`M`$ per unit volume the formula $$M=\frac{\mu (2\overline{N}_+-N)}{V}$$ (4) where $`\mu `$ is the electron magnetic moment and $`\overline{N}_+`$ is the average number of electrons with spin up, say, where $`\overline{N}_+\approx \frac{N}{2}`$, so that $`M`$ in (4) is very small. However if the behaviour is not Fermionic, but rather Bosonic, then $$\overline{N}_+\approx N$$ In our case, $$\frac{N}{2}<\overline{N}_+<N$$ With this input (4) becomes, $$M\sim \frac{\mu N}{V}$$ (5) From (5) we can easily deduce that the terrestrial magnetic field $`H`$ is given by, $$H\sim \frac{MV}{r^3}\sim 1G$$ where $`r`$ is the radius of the earth. The above order-of-magnitude calculation thus gives the correct order of the terrestrial magnetism. Moreover, this would have the added advantage that it could explain geomagnetic reversals: The semionic behaviour of the electrons in the solid core is sensitive to external magnetic influences and could thus flip or reverse polarity.
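The estimates in Eqs. (1)–(3) can be reproduced with a few lines of arithmetic; the sketch below assumes one conduction electron per atom, which is an illustrative choice rather than a statement from the text.

```python
import numpy as np

# Back-of-envelope check of Eqs. (1)-(3), in cgs units.
hbar = 1.055e-27     # erg s
m_e = 9.109e-28      # g
k_B = 1.381e-16      # erg/K
N_A = 6.022e23       # 1/mol

r_core = 1.2e8                       # cm, solid-core radius (~1200 km)
V = 4.0 / 3.0 * np.pi * r_core**3    # cm^3
rho = 10.0                           # g/cm^3, relative density ~10
A_Fe = 56.0                          # g/mol, atomic weight of iron

N = rho * V / A_Fe * N_A             # number of atoms, Eq. (1)
n_e = N / V                          # electron density (one per atom assumed)
T_F = (6.0 * np.pi**2 * n_e) ** (2.0 / 3.0) * hbar**2 / (2.0 * m_e) / k_B

print("N   ~ %.1e atoms" % N)        # ~1e48, Eq. (1)
print("T_F ~ %.1e K" % T_F)          # ~1e5, Eq. (3)
```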
An explanation of such reversals within the convective dynamo model would, by comparison, be contrived.
# Calculation of ground states of four-dimensional ±𝐽 Ising spin glasses ## I Introduction Optimization methods have found widespread application in computational physics. Among these the investigation of the low-temperature behavior of spin glasses has attracted most of the attention within the statistical physics community. The reason is that despite its simple definition (see below) its behavior is far from being understood. From the computational point of view the calculation of spin-glass ground states is very demanding, because it belongs to the class of NP-hard problems. This means that only algorithms are available for which the running time on a computer increases exponentially with the system size. In this work a recently proposed method, the cluster-exact approximation (CEA), is applied to four-dimensional Ising spin glasses. The model under investigation here consists of $`N`$ spins $`\sigma _i=\pm 1`$, described by the Hamiltonian $$H\equiv -\sum _{i,j}J_{ij}\sigma _i\sigma _j$$ (1) where $`i,j`$ denotes a sum over pairs of nearest neighbors. In this report simple 4d lattices are considered, i.e. $`N=L^4`$. The nearest neighbor interactions (bonds) independently take $`J_{ij}=\pm 1`$ with equal probability. Periodic boundary conditions are applied to the systems. No kind of external magnetic field is present here. Four-dimensional Ising spin glasses have been investigated rather rarely. Most of the results were obtained via Monte-Carlo (MC) simulations at finite temperature, see e.g. . Here the $`T=0`$ behavior is investigated, i.e. ground states are calculated. This has the advantage that one does not encounter ergodicity problems or critical slowing down as in algorithms which are based on MC methods. Only one attempt to address the 4d spin-glass ground-state problem is known to the author. But, as we will see later, the former results suffer from the problem that the true global minima of the energy were not obtained. Furthermore, no analytic predictions of the ground-state energy have been noted by the author. The question whether finite-dimensional Ising spin glasses show an ordered phase below a non-zero transition temperature $`T_c`$ is of crucial interest. By MC simulations around the (expected) transition temperature this question is hard to solve. Another way to address this question is to calculate the stiffness or domain wall energy $`\mathrm{\Delta }=E^a-E^p`$ which is the difference between the ground-state energies $`E^a,E^p`$ for antiperiodic and periodic boundary conditions in one direction. Here the antiperiodic boundary conditions for calculating $`E^a`$ are realized by inverting one plane of bonds. For the other directions periodic boundary conditions are always applied. This treatment introduces a domain wall into the system. If a model exhibits an ordered low-temperature phase, the domain wall increases with growing system size, which becomes visible through the behavior of $`\mathrm{\Delta }`$: the disorder-averaged stiffness energy shows a finite size dependence $$|\mathrm{\Delta }|\sim L^{\mathrm{\Theta }_S}$$ (2) A positive value of the stiffness exponent $`\mathrm{\Theta }_S`$ indicates the existence of an ordered phase for non-zero temperature. For example a simple $`d=2`$ Ising ferromagnet has $`\mathrm{\Theta }_S=1`$. For spin glasses, the stiffness exponent additionally plays an important role within the droplet-scaling theory, where it describes the finite-size behavior of the basic excitations (the droplets).
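For concreteness, a sketch of the energy of Eq. (1) on an $`L^4`$ lattice is given below, including the plane of inverted bonds used to realize antiperiodic boundary conditions; note that the stiffness energy itself requires the *ground-state* energies, i.e. the optimization described in the next section, so this only evaluates a given configuration.

```python
import numpy as np

# Energy of a +/-J configuration, Eq. (1), with the option of antiperiodic
# boundary conditions in direction 0 (one plane of bonds inverted), as used
# to define the stiffness energy Delta = E^a - E^p.
def energy(spins, bonds, antiperiodic=False):
    J = bonds.copy()
    if antiperiodic:
        J[0][-1] *= -1       # invert the boundary plane of direction-0 bonds
    E = 0.0
    for d in range(4):
        E -= np.sum(J[d] * spins * np.roll(spins, -1, axis=d))
    return E

L = 4
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(L,) * 4)
bonds = rng.choice([-1, 1], size=(4,) + (L,) * 4)  # bonds[d][x]: x to x+e_d
print(energy(spins, bonds), energy(spins, bonds, antiperiodic=True))
```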
Using this kind of domain-wall analysis it was proven that the 2d spin glass exhibits no ordering for $`T>0`$. For the three-dimensional problem, in a recent calculation applying genetic CEA, a value of $`\mathrm{\Theta }_S=0.19(2)`$ was found, which shows that the $`d=3`$ model indeed has a spin-glass phase for nonzero temperature. For $`d=4`$ the existence of a finite $`T_c\approx 2.1`$ was proven rather early, even by MC simulations, but the value of the stiffness exponent $`\mathrm{\Theta }_S`$ is of interest in its own right. Recently a value of $`\mathrm{\Theta }_S=0.82(6)`$ was found by performing a MC simulation near $`T_c`$. In the work presented here the value is obtained via ground-state calculations. The paper is organized as follows: In the next section the algorithm applied here is briefly presented. The main section contains the results for the ground-state energy and the stiffness exponent. Finally a summary is given. ## II Algorithm The technique for the calculation is based on a special genetic algorithm and on cluster-exact approximation, which is an optimization method designed especially for spin glasses. Now a brief description of the method is given. Genetic algorithms are biologically motivated. An optimal solution is found by treating many instances of the problem in parallel, keeping only better instances and replacing bad ones by new ones (survival of the fittest). The genetic algorithm starts with an initial population of $`M_i`$ randomly initialized spin configurations (= individuals), which are linearly arranged in a ring. Then $`\nu \times M_i`$ times two neighbors from the population are taken (called parents) and two offspring are created using the so-called triadic crossover. Then a mutation with a rate of $`p_m`$ is applied to each offspring, i.e. a fraction $`p_m`$ of the spins is reversed. Next for both offspring the energy is reduced by applying CEA. The algorithm is based on the concept of frustration. The method iteratively and randomly constructs a non-frustrated cluster of spins, where spins with many unsatisfied bonds are more likely to be added to the cluster. The non-cluster spins act like local magnetic fields on the cluster spins. For the spins of the cluster an energetic minimum state can be calculated in polynomial time by using graph-theoretical methods: an equivalent network is constructed, the maximum flow is calculated and the spins of the cluster are set to the orientations leading to a minimum in energy. This minimization step is performed $`n_{\mathrm{min}}`$ times for each offspring. Afterwards each offspring is compared with one of its parents. The pairs are chosen in the way that the sum of the phenotypic differences between them is minimal. The phenotypic difference is defined here as the number of spins where the two configurations differ. Each parent is replaced if its energy is not lower (i.e. better) than that of the corresponding offspring. After this creation of offspring has been performed $`\nu \times M_i`$ times the population is halved: From each pair of neighbors the configuration which has the higher energy is eliminated. If not more than 4 individuals remain the process is stopped and the best individual is taken as the result of the calculation. The whole algorithm is performed $`n_R`$ times and all configurations which exhibit the lowest energy are stored, resulting in $`n_g`$ statistically independent ground state configurations. The method was already applied for the investigation of the ground-state landscape of 3d Ising spin glasses.
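A schematic, much-simplified skeleton of the genetic part of the algorithm is sketched below; the crossover and minimization steps shown are simple stand-ins for the triadic crossover and the cluster-exact approximation step, and the minimal-phenotypic-distance pairing of offspring and parents is omitted for brevity.

```python
import random

def crossover(p1, p2):              # stand-in, NOT the triadic crossover
    cut = random.randrange(len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(cfg, p_m):               # reverse a fraction p_m of the spins
    return [-s if random.random() < p_m else s for s in cfg]

def genetic_minimize(pop, nu, p_m, n_min, energy, minimize_step):
    # pop: ring of spin configurations (lists of +/-1)
    while len(pop) > 4:
        for _ in range(nu * len(pop)):
            i = random.randrange(len(pop))
            j = (i + 1) % len(pop)              # neighboring parent on ring
            o1, o2 = crossover(pop[i], pop[j])
            for k, o in ((i, o1), (j, o2)):
                o = mutate(o, p_m)
                for _ in range(n_min):
                    o = minimize_step(o)        # CEA would go here
                if energy(o) <= energy(pop[k]): # keep the better one
                    pop[k] = o
        # halve the population: drop the worse of each pair of neighbors
        pop = [min(pop[k], pop[k + 1], key=energy)
               for k in range(0, len(pop) - 1, 2)]
    return min(pop, key=energy)
```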
## III Results In this section first the values for the simulation parameters, which are defined above, are presented. Then the finite-size behavior of the ground-state energy is investigated. Finally results for the stiffness energy are discussed. The simulation parameters were determined in the following way: For the system sizes $`L=2,4,6,7`$ several different combinations of the parameters $`M_i,\nu ,n_{min},p_m`$ were tested. For the final parameter sets it is not possible to obtain lower energies, even by using parameters where the calculation consumes four times the computational effort. For $`L=3,5`$ the parameter sets for $`L+1`$ were used. Using parameter sets chosen this way, genetic CEA calculates true ground states, as shown in earlier work. It should be pointed out that it is relatively easy to obtain states which exhibit an energy slightly above the true ground-state energy. The hard task is to obtain really the global minimum of the energy. Here $`p_m=0.1`$ and $`n_R=5`$ were used for all system sizes. Table I summarizes the parameters. Also the typical computer time $`\tau `$ per ground-state computation on an 80 MHz PPC601 is given. Ground states were calculated for system sizes up to $`7\times 7\times 7\times 7`$ for $`N_L`$ independent realizations (see table I) of the random variables. For each realization the ground states with periodic and antiperiodic boundary conditions in one direction were calculated. The remaining three directions are always subjected to periodic boundary conditions. One can extract from the table that for small system sizes $`L\le 4`$ ground states are rather easy to obtain, while the $`L=7`$ systems alone required 6560 CPU-days. Using these parameters on average $`n_g>2.7`$ ground states were obtained for every system size $`L`$ using $`n_R=5`$ runs per realization. The average ground-state energy $`e_0`$ per spin is shown in Fig.1 as a function of the system size $`L`$. Using a fit to $`e_0(L)=e_0^{\mathrm{}}+aL^{-b}`$ the value for the infinite system is extrapolated, resulting in $`e_0^{\mathrm{}}=-2.095(1)`$ ($`a=7.1(7),b=4.2(1)`$). This value is compatible with the lower bound of $`e_0=-\sqrt{2d\mathrm{ln}2}\approx -2.35`$ given by the random energy model. The value calculated here is substantially smaller than the result $`e_0^{\mathrm{}}=-2.054(3)`$, which was obtained in an earlier study using a pure genetic algorithm. This shows that there the true global minima were not found, which can be concluded also from the fact that there $`e_0(L)`$ increases with growing system size. Because the periodic boundary conditions impose additional constraints on the systems, the opposite behavior is expected, as found for the results presented here. For further comparison, some additional calculations were performed by the author by simply rapidly quenching from randomly chosen spin configurations. By executing an analogous fit, a value of $`e_0^{\mathrm{}}=-2.04(2)`$ is obtained. This shows that the earlier result seems to be only slightly better than the data obtained by applying a very simple minimization method. The distribution of the stiffness energy, which is obtained from performing ground-state calculations for systems with either periodic or antiperiodic boundary conditions in one direction, is shown in Fig. 2 for $`L=5`$ and $`L=7`$. With increasing system size the distribution broadens. This means that larger domain walls become more and more likely.
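The extrapolation of $`e_0(L)`$ above and the power-law fit of $`|\mathrm{\Delta }(L)|`$ used just below are both simple least-squares fits; as a sketch (with placeholder numbers of roughly the right magnitude, not the data of this work):

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data illustrating the two fits (NOT the values of Table I).
L = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
e0 = np.array([-2.02, -2.06, -2.08, -2.088, -2.092])   # energy per spin
dE = np.array([2.0, 2.4, 2.8, 3.1, 3.4])               # <|Delta|>(L)

# ground-state energy:  e0(L) = e0_inf + a * L**(-b)
popt, pcov = curve_fit(lambda L, e, a, b: e + a * L**(-b), L, e0,
                       p0=(-2.1, 7.0, 4.0))
print("e0_inf  = %.4f +/- %.4f" % (popt[0], np.sqrt(pcov[0, 0])))

# stiffness energy:  |Delta|(L) ~ L**Theta_S, small L left out
popt2, pcov2 = curve_fit(lambda L, c, th: c * L**th, L[1:], dE[1:],
                         p0=(1.0, 0.6))
print("Theta_S = %.2f +/- %.2f" % (popt2[1], np.sqrt(pcov2[1, 1])))
```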
To study this effect more quantitatively, in Fig. 3 the disorder-averaged absolute value $`|\mathrm{\Delta }|`$ of the stiffness energy is plotted as a function of the system size $`L`$. Also shown is a fit $`|\mathrm{\Delta }(L)|\sim L^{\mathrm{\Theta }_S}`$ which results in $`\mathrm{\Theta }_S=0.64(5)`$. Here, the system sizes $`L=2,3`$ were left out of the analysis, since they are below the scaling regime. Because of the large sample sizes the error bars are small enough, so we can be pretty sure that $`\mathrm{\Theta }_S>0`$. It confirms earlier results from MC simulations that the 4d EA spin glass exhibits a non-zero transition temperature $`T_c`$. The value $`\mathrm{\Theta }_S=0.64(5)`$ is comparable to a recent result from MC simulations, $`\mathrm{\Theta }_S=0.82(6)`$, given that the system sizes are rather small and that the other result was obtained at finite temperature near the transition point $`T_c\approx 2.1`$. Additionally, the prediction from droplet-scaling theory $`\mathrm{\Theta }_S<(d-1)/2=1.5`$ is fulfilled. It should be pointed out, that the method described above does not guarantee finding exact ground states, although the method for choosing the parameters makes it very likely. If states with a slightly higher energy are obtained, the result for $`e_0^{\mathrm{}}`$ is not affected very much. For the stiffness energy, it has been shown that the result is very reliable as well, as long as the energies of the states are not too far away from the true ground-state energies. ## IV Conclusion Results have been presented from calculations of a large number of ground states of 4d Ising spin glasses. They were obtained using a combination of cluster-exact approximation and a genetic algorithm. Using a huge computational effort it was ensured that true ground states have been obtained with a high probability. The finite-size behavior of the ground-state energy and the stiffness energy have been investigated. By performing a $`L\to \mathrm{}`$ extrapolation, the ground-state energy per spin for the infinite system is estimated to be $`e_0^{\mathrm{}}=-2.095(1)`$. The absolute value of the stiffness energy increases with system size and shows a $`|\mathrm{\Delta }(L)|\sim L^{\mathrm{\Theta }_S}`$ behavior with $`\mathrm{\Theta }_S=0.64(5)`$. For systems with a Gaussian distribution of the bonds qualitatively similar results are expected, since the ordering behavior depends only on the sign of the interactions and not on their magnitudes. A more detailed study of the ground-state landscape of 4d systems, similar to the 3d case, requires more than the $`n_g\approx 3`$ ground states per realization obtained here. Since this requires a substantially higher computational effort, it remains to be done in the future. ## V Acknowledgements The author thanks A.P. Young for interesting discussions, critical reading of the manuscript and for the allocation of computer time on his workstation cluster at the University of California in Santa Cruz. This work was suggested by him during the “Monbusho Meeting” held at the Fondation Royaumont near Paris. The author was supported by the Graduiertenkolleg “Modellierung und Wissenschaftliches Rechnen in Mathematik und Naturwissenschaften” at the Interdisziplinäres Zentrum für Wissenschaftliches Rechnen in Heidelberg and the Paderborn Center for Parallel Computing by the allocation of computer time. Financial support was provided by the DFG (Deutsche Forschungsgemeinschaft) and the organizers of the “Monbusho Meeting”.
# Compressibility of Nuclear Matter from Shell Effects in Nuclei ## Introduction Shell effects manifest themselves in terms of the existence of magic numbers in nuclei. Inclusion of spin-orbit coupling in the shell model is known to account for the magic numbers<sup>1)</sup>. Shell effects are also found to appear more subtly, e.g. as an anomalous kink in the isotope shifts of Pb nuclei. It was shown that in the relativistic mean-field (RMF) theory, this kink appears naturally due to the inherent Dirac-Lorentz structure of nucleons<sup>2)</sup>. By including an appropriate isospin dependence of the spin-orbit potential in the density-dependent Skyrme approach, the anomalous kink in the isotope shifts in Pb nuclei could be produced<sup>3)</sup>. The shell gaps at the magic numbers are known to produce important effects. Consequences due to the shell gaps are broadly termed shell effects. The clearest manifestation of the shell effects can be seen in the pronounced dip in neutron and proton separation energies at the magic numbers in the figures by Wapstra and Audi<sup>4)</sup> on the experimental separation energies all over the periodic table. Recently, shell effects have become the focus of much attention<sup>5,6)</sup>. This is because shell effects in nuclei near the drip lines constitute an important ingredient to understand r-process abundances and heavy nucleosynthesis<sup>7,8)</sup>. In this talk, I focus upon 2-neutron separation energies to demonstrate that the shell effects show a significant dependence upon the compressibility of nuclear matter. Using empirical data on the 2-neutron separation energies, and thus implicitly the empirical shell effects, I will show that these data can be used to constrain the compressibility of nuclear matter. As the focus of this conference shows, the compression modulus K is an important fundamental property of nuclear matter. It represents a cardinal point in the behaviour of the equation of state (EOS) of nuclear matter. The compressibility of nuclear matter has received significant attention in the last decade and various approaches<sup>9-11)</sup> have been employed to extract the compression modulus. However, an unambiguous determination of the compression modulus of nuclear matter has remained difficult due to a lesser sensitivity of the giant monopole resonance data to the compressibility<sup>12,13)</sup>. ## The Density-Dependent Skyrme Theory Here I employ the density-dependent Skyrme theory<sup>14)</sup> to examine the shell effects. The Skyrme approach has generally been found to be very successful in providing ground-state properties of nuclei. The energy density functional used in the Skyrme approach is the standard one as given in Ref.<sup>15)</sup>. The corresponding energy per nucleon for symmetric nuclear matter (with the Coulomb force switched off) is given by $$(E/A)_{\mathrm{}}=k\rho ^{2/3}(1+\beta \rho )+\frac{3}{8}t_0\rho +\frac{1}{16}t_3\rho ^{1+\alpha }$$ (1) where $`k=75`$ MeV fm<sup>2</sup>.
The constant $`\beta `$ is given in terms of the constants $`t_1`$, $`t_2`$ and $`x_2`$ of the Skyrme force by $$\beta =\frac{2m}{\mathrm{}^2}\frac{1}{4}\left[\frac{1}{4}(3t_1+5t_2)+t_2x_2\right].$$ (2) The constant $`\beta `$ is also related to the effective mass $`m^{}`$ and the saturation density $`\rho _0`$ by $$m/m^{}=1+\beta \rho _0$$ (3) The incompressibility (or the compression modulus) of infinite nuclear matter is given as the curvature of the EOS curve and can be written as $$K=9\rho _0^2\frac{d^2(E/A)(\rho )}{d\rho ^2}|_{\rho _0}=-2k\rho _0^{2/3}+10k\beta \rho _0^{5/3}+\frac{9t_3}{16}\alpha (\alpha +1)\rho _0^{1+\alpha }$$ (4) The parameters $`t_0`$, $`t_3`$ and $`\alpha `$ are usually obtained from the nuclear matter properties. The other parameters $`t_1`$, $`t_2`$ and the various $`x`$ parameters are obtained from fits to properties of finite nuclei. The strength W<sub>0</sub> responsible for the spin-orbit interaction is obtained by reproducing the spin-orbit splittings in nuclei such as <sup>16</sup>O and <sup>40</sup>Ca. ## The Skyrme Forces In order to show the effect of the incompressibility of nuclear matter on shell effects, and consequently also on ground-state properties of nuclei, I have constructed a series of zero-range Skyrme forces. Experimental data on ground-state binding energies and charge radii of key nuclei such as <sup>16</sup>O, <sup>40</sup>Ca, <sup>90</sup>Zr, <sup>116</sup>Sn, <sup>124</sup>Sn and <sup>208</sup>Pb are taken into account in the least-squares minimization. The ground-state properties in the Skyrme theory are calculated using the Hartree-Fock method. With a view to varying K over a large range, the correlation between K and the saturation density is implicitly taken into account. This correlation has been summarized for Skyrme-type forces in Fig. 4 of Blaizot<sup>9)</sup>. Accordingly, there exists an inverse correlation between the saturation density (Fermi momentum) and the compression modulus. Keeping this in mind, I have varied the saturation density $`\rho _0`$ over the range $`0.140`$–$`0.170`$ fm<sup>-3</sup> in steps of $`0.005`$ fm<sup>-3</sup> as shown in Table 1. The effective mass is fixed at 0.79. This value is required for consistency with giant quadrupole resonance energies of heavy nuclei<sup>15)</sup>. Eq. (3) then provides the value of $`\beta `$ which can be used in eq. (2) to connect the coefficients $`t_1`$ and $`t_2`$. In order to do a systematic variation of the nuclear compressibility, I have kept the saturation binding energy fixed at a value of $`-16.0`$ MeV. This value is consistent with most nuclear mass models<sup>16)</sup> and is close to physically acceptable values. The coefficients $`\alpha `$, $`t_0`$ and $`t_3`$ are determined by eqs. (1), (4) and the saturation condition. However, we have allowed $`t_0`$ and $`t_3`$ to vary so that a fit to finite nuclei provides an incompressibility that is consistent with the inverse correlation. On the other hand, in an exhaustive computational exercise it is found that violation of the above correlation results in bad fits for ground-state binding energies of key nuclei. I have fixed $`x_1`$ and $`x_2`$ at zero for convenience. Given a value of $`\beta `$, the coefficients $`t_1`$ and $`t_2`$ in eq. (2) are obtained from the fits to the ground-state binding energies of finite nuclei. Thus, I have obtained various Skyrme parameter sets with incompressibility values K = 200, 220, 249, 270, 305, 327, 360 and 393 MeV, respectively.
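Equations (1) and (4) translate directly into code; the sketch below evaluates the nuclear-matter energy per nucleon and obtains K from a numerical second derivative (avoiding any transcription of the analytic form), with purely illustrative parameter values rather than a set from Table 1.

```python
import numpy as np

k = 75.0                       # MeV fm^2, kinetic coefficient of Eq. (1)

def e_per_a(rho, t0, t3, alpha, beta):
    # symmetric nuclear-matter energy per nucleon, Eq. (1)
    return (k * rho ** (2.0 / 3.0) * (1.0 + beta * rho)
            + 3.0 / 8.0 * t0 * rho + 1.0 / 16.0 * t3 * rho ** (1.0 + alpha))

def compression_modulus(rho0, t0, t3, alpha, beta):
    # K = 9 rho0^2 d^2(E/A)/drho^2 at saturation, evaluated numerically
    h = 1.0e-4
    d2 = (e_per_a(rho0 + h, t0, t3, alpha, beta)
          - 2.0 * e_per_a(rho0, t0, t3, alpha, beta)
          + e_per_a(rho0 - h, t0, t3, alpha, beta)) / h**2
    return 9.0 * rho0**2 * d2

# Illustrative parameter values only (not a force from Table 1).
rho0, t0, t3, alpha, beta = 0.16, -1800.0, 12000.0, 0.35, 0.5
print("E/A(rho0) = %.2f MeV,  K = %.0f MeV"
      % (e_per_a(rho0, t0, t3, alpha, beta),
         compression_modulus(rho0, t0, t3, alpha, beta)))
```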
The forces so obtained encompass a broad range of physically plausible values of K. The nuclear matter properties of these forces are shown in Table 1. It can be seen that the force with K = 270 MeV was obtained for the sake of interpolation. Table 2 shows the total binding energies of some key nuclei obtained with the HF+BCS approach using the various Skyrme forces. It can be seen that all of these forces reproduce the total binding energies of the key nuclei from <sup>16</sup>O to <sup>208</sup>Pb very well. ## Two-neutron Separation Energies and the Compressibility I have selected the chain of Ni isotopes in order to probe the shell effects. As most Ni isotopes are known to be spherical and the experimental binding energies are known over a large range of Ni isotopes, the chain of Ni isotopes serves as an ideal test bench to probe the shell effects. Even-mass Ni isotopes from A=52 to A=70 are considered. This includes the neutron magic number N=28 (A=56), where we intend to investigate the shell effects due to the major shell closure. With a view to obtaining the ground-state properties of nuclei, we have performed spherical Skyrme Hartree-Fock calculations in coordinate space. Herein pairing is included using the BCS formalism with constant pairing gaps. Since the nuclei under focus, such as <sup>56</sup>Ni and <sup>58</sup>Ni, are not far away from the stability line, the BCS scheme suffices as a suitable mechanism for the pairing. Ground-state binding energies and $`rms`$ charge and neutron radii for the Ni isotopes are calculated using the various Skyrme forces. The binding energies of nuclei are used to obtain the 2-neutron separation energies as $$S_{2n}(Z,N)=B(Z,N)-B(Z,N-2),$$ (5) where B represents the total binding energy of a nucleus. As mentioned earlier, effects due to the shell gaps affect various nuclear properties besides the particle separation energies. First, the calculated charge radii of Ni isotopes are shown in Fig. 1. The charge radii show an increase with the incompressibility. Evidently, forces with incompressibility of about 200-250 MeV show little dispersion in the values of the charge radii. However, above $`K\sim 300`$ MeV, there is a clear increase in the charge radii as a function of K, and forces with a large incompressibility yield large values of charge radii for any given isotope. Inevitably, a large value of K resists a diminished (compressed) radial extension, in contrast to a lower value of K. The empirical $`rms`$ charge radii of <sup>58</sup>Ni, <sup>60</sup>Ni, <sup>62</sup>Ni and <sup>64</sup>Ni are taken from the compilation of Ref.<sup>17)</sup> and shown by solid circles. The values are 3.776, 3.815, 3.846 and 3.868 fm, respectively, obtained by folding the proton density distributions with the finite size 0.80 fm of protons. These results derive from experiments on muonic atoms. The $`rms`$ charge radii deduced from the precision measurements in muonic atoms (taken from the recent compilation of Ref.<sup>18)</sup>) are 3.776, 3.813, 3.842 and 3.860 fm, respectively, which are almost identical to the previous compilation<sup>17)</sup>. The empirical charge radii shown in Fig. 1 encompass the theoretical curves between K=270–327 MeV. Whereas the charge radius for <sup>58</sup>Ni points towards K=270 MeV, that for the heavier Ni isotopes is in the vicinity of K=327 MeV.
Since the charge radii deduced from various methods have been obtained with significant precision, these data could, in principle, be used to constrain the incompressibility. However, as there is still some model dependence in the extraction of the charge radii, I withhold any conclusions from this figure. It can nevertheless be said that the general trend of the experimental data points to a higher value of the compression modulus. Figure 2 shows the corresponding $`rms`$ neutron radii. The neutron radii show a monotonic increase with the mass number, with the exception of a slight kink at the magic number N=28. The change in the $`r_n`$ values with K is similar to that for the charge radii, i.e. for lower values of K there is very little change in $`r_n`$ with K. A significant change in the neutron radii, however, can be seen for large K values. The empirical $`rms`$ neutron radii obtained from the 800-MeV polarized-proton scattering experiment<sup>17,19)</sup> for the isotopes <sup>58</sup>Ni and <sup>64</sup>Ni are shown in the figure by the solid points. The values lie between the curves for K=305 and K=327 MeV. The 2-neutron separation energies $`S_{2n}`$ for the Ni isotopes obtained with the various Skyrme forces are shown in Fig. 3. For the sake of clarity of the presentation, I have selected a subset of the Skyrme forces for the figure. The dramatic fall in the $`S_{2n}`$ curves is seen conspicuously for the nucleus just above the N=28 magic number. Such a kink in the $`S_{2n}`$ values signifies the presence of shell effects arising from the closure of a major shell. All the Skyrme forces produce such a kink in Fig. 3. However, the slope of the kink between A=56 (N=28) and A=58 (N=30) changes from one force to the other, as a function of the incompressibility $`K`$. The difference in the $`S_{2n}`$ values of <sup>56</sup>Ni and <sup>58</sup>Ni can be taken as a measure of the shell effects. The force with the lowest incompressibility ($`K=200`$ MeV) gives this difference as 6.59 MeV, with a slope which is the smallest amongst all the curves. This implies that the shell effects due to this force are the weakest. As the K value increases, the corresponding difference shows a smooth increase and the slope steepens gradually. For the force with the highest incompressibility ($`K=393`$ MeV), the difference amounts to 9.46 MeV, which is higher than the experimental difference<sup>20)</sup> of 8.37 MeV. This shows that the shell effects depend strongly on the compression modulus K. Given this correlation, the difference in the experimental values of $`S_{2n}`$ can serve as a calibration for K. The experimental difference and slope are closest to the curve for K=270 MeV. Thus, on the basis of the empirical $`S_{2n}`$ values, I infer that the incompressibility of nuclear matter lies in the neighbourhood of $`K\approx 270`$ MeV.

## Shell Effects at the Neutron Drip-Line

Shell effects near the drip lines have been a matter of discussion of late<sup>5,6)</sup>. It has been shown earlier that within non-relativistic approaches of the Skyrme type, the shell effects near the drip lines, and in particular near the neutron drip line, are quenched<sup>5)</sup>. This is contrary to what was shown in the RMF theory by Sharma et al.<sup>6)</sup>, where the shell effects were observed to remain strong. Before such a debate is resolved, it is important to settle the issue of the shell effects near the stability line.
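The two ingredients of this calibration, Eq. (5) and the trend of the S<sub>2n</sub> difference with K, can be illustrated with a few lines of code. The binding energies below are rough experimental values inserted only for illustration, and the linear interpolation between the two quoted end points is a crude stand-in for the full set of curves in Fig. 3 (which, as stated above, point to K ≈ 270 MeV rather than the linear estimate).

```python
def s2n(B, Z, N):
    """Two-neutron separation energy, Eq. (5): S_2n = B(Z,N) - B(Z,N-2)."""
    return B[(Z, N)] - B[(Z, N - 2)]

# Rough experimental binding energies (MeV) for Ni (Z = 28), illustration only:
B = {(28, 26): 453.2, (28, 28): 484.0, (28, 30): 506.5}
delta_exp = s2n(B, 28, 28) - s2n(B, 28, 30)     # ~8.4 MeV across N = 28

# Linear interpolation between the quoted end points (K, Delta S_2n):
K_lo, d_lo = 200.0, 6.59
K_hi, d_hi = 393.0, 9.46
K_est = K_lo + (delta_exp - d_lo) * (K_hi - K_lo) / (d_hi - d_lo)
print(round(K_est))   # ~315 MeV; the relation is evidently not linear
```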
Fig. 4 shows the 2-neutron separation energies for the Ni isotopes using the force SkP within the Hartree-Fock-Bogoliubov approach<sup>21)</sup>. The force SkP has been found to be successful in reproducing the ground-state properties of nuclei. The comparison of the SkP results with the experimental $`S_{2n}`$ values shows that empirically there exist stronger shell effects than predicted by the Skyrme force SkP. The compression modulus of the force SkP is about 200 MeV. Thus, the weak shell effects of the force SkP are consistent with the results from forces with a low value of the incompressibility in Fig. 3. This suggests again that the shell effects in Ni are commensurate with a higher value of the compression modulus of nuclear matter, as inferred above. How the shell effects behave along the neutron drip line as a function of the compression modulus can be visualized in Fig. 5. Here I have chosen the chain of Zr isotopes, which includes nuclei on both sides of the magic number N=82 (A=122). The total binding energies obtained in the Hartree-Fock calculations with the various Skyrme forces are shown. For nuclei from A=116 (N=76) to A=122 (N=82), the total binding energy increases steeply with all the forces. The forces with lower K show stronger binding in general than those with higher K, as expected. For nuclei above A=122, there is a striking difference in the way the binding energies progress with mass number. For the force with K = 200 MeV, the binding energy of nuclei heavier than <sup>122</sup>Zr increases with the mass number, implying that a further sequential addition of a pair of neutrons to the N=82 core does contribute to the binding energy. As the K value increases, the binding energy contribution from neutrons above N=82 starts diminishing. For K values of 300 MeV and above, a stagnation is observed in the binding energies. Such behaviour implies that the shell effects become stronger as the incompressibility increases. This is again an indication that the shell effects are strongly correlated with the incompressibility of nuclear matter. Consequently, the incompressibility inferred from the ground-state data along the stability line suggests that the shell effects near the neutron drip line are strong. This is consistent with the strong shell effects about the neutron drip line concluded in the RMF theory<sup>6)</sup>. For nuclei near the drip lines, the HFB approach is considered to be more suitable than the HF+BCS one. However, the effects of the continuum are expected to be significant mainly for nuclei very near the drip line, and the observables most affected by coupling to the continuum are the $`rms`$ radii. The total binding energies are known to show little difference between the HF+BCS and HFB approaches. The main character of the shell effects is not altered by the inclusion of the continuum, as has been pointed out in Ref.<sup>8)</sup> using the Skyrme force SIII. However, it will be interesting to see how a judicious choice of a force compatible with the experimental data, as discussed above, affects the r-process nucleosynthesis.

## Summary and Conclusions

In the present work, I have shown that there exists a correlation between the shell effects and the compressibility of nuclear matter. The latter is shown to influence the shell effects significantly.
Consequently, the 2-neutron separation energies show a strong dependence on the compression modulus. The ensuing correlation then provides a calibration for the compression modulus of nuclear matter. It is shown that, using this correlation, the empirical data on the 2-neutron separation energies, and thus implicitly the shell effects, can be used to constrain the compressibility of nuclear matter. This procedure allows me to conclude that the incompressibility K should lie within 270-300 MeV. This conclusion is also consistent with the experimental data on charge radii. A natural consequence of the present study is that the shell effects near the neutron drip line are predicted to be strong. This is in conformity with the strong shell effects observed at the neutron drip line in the RMF theory.

## References

1) M.G. Mayer and J.H.D. Jensen, Elementary Theory of Nuclear Shell Structure (Wiley, New York, 1955).
2) M.M. Sharma, G.A. Lalazissis and P. Ring, Phys. Lett. B317 9 (1993).
3) M.M. Sharma, G.A. Lalazissis, J. König and P. Ring, Phys. Rev. Lett. 74 3744 (1995).
4) C. Borcea, G. Audi, A.H. Wapstra and P. Favaron, Nucl. Phys. A565 158 (1993).
5) J. Dobaczewski et al., Phys. Rev. Lett. 72 981 (1994); ibid. 73 1869 (1994).
6) M.M. Sharma, G.A. Lalazissis, W. Hillebrandt and P. Ring, Phys. Rev. Lett. 72 1431 (1994); ibid. 73 1870 (1994).
7) K.-L. Kratz et al., Astrophys. J. 403 216 (1993).
8) B. Chen et al., Phys. Lett. B355 37 (1995).
9) J.P. Blaizot, Phys. Rep. 64 171 (1980).
10) M.M. Sharma et al., Phys. Rev. C38 2562 (1988).
11) M.V. Stoitsov, P. Ring and M.M. Sharma, Phys. Rev. C50 1445 (1994).
12) J.M. Pearson, Phys. Lett. B271 12 (1991).
13) S. Shlomo and D.H. Youngblood, Phys. Rev. C47 529 (1993).
14) D. Vautherin and D.M. Brink, Phys. Rev. C7 296 (1973).
15) M. Brack, C. Guet and H.B. Hakansson, Phys. Rep. 123 275 (1985).
16) P. Möller, J.R. Nix, W.D. Myers and W.J. Swiatecki, At. Data Nucl. Data Tables 59 185 (1995).
17) C.J. Batty, E. Friedmann, H.J. Gils and H. Rebel, Adv. Nucl. Phys. 19 1 (1989).
18) G. Fricke et al., At. Data Nucl. Data Tables 60 178 (1995).
19) L. Ray, Phys. Rev. C19 1855 (1979).
20) A.H. Wapstra and G. Audi, Nucl. Phys. A565 1 (1993).
21) J. Dobaczewski et al., Nucl. Phys. A422 103 (1984), and private communication (1994).

Figure Captions

Fig. 1. Charge radii for Ni isotopes with various Skyrme forces. The experimental values for a few nuclei from Refs.<sup>17,18)</sup> are shown by solid dots.

Fig. 2. Neutron radii for Ni isotopes with various Skyrme forces. The experimental values for <sup>58</sup>Ni and <sup>64</sup>Ni from Ref.<sup>19)</sup> are shown by solid dots.

Fig. 3. Two-neutron separation energy S<sub>2n</sub> for Ni isotopes obtained with various Skyrme forces. Comparison with the experimental data from Ref.<sup>20)</sup> is also made.

Fig. 4. Two-neutron separation energies for Ni isotopes obtained with the Skyrme force SkP in the HFB approach<sup>21)</sup>, compared with the experimental data.

Fig. 5. Total binding energy of Zr isotopes near the neutron drip line calculated with the various Skyrme forces.
# Scaling and percolation in the small-world network model

## I Introduction

Networks of social interactions between individuals, groups, or organizations have some unusual topological properties which set them apart from most of the networks with which physics deals. They appear to display simultaneously properties typical both of regular lattices and of random graphs. For instance, social networks have well-defined locales in the sense that if individual A knows individual B and individual B knows individual C, then it is likely that A also knows C—much more likely than if we were to pick two individuals at random from the population and ask whether they are acquainted. In this respect social networks are similar to regular lattices, which also have well-defined locales, but very different from random graphs, in which the probability of connection is the same for any pair of vertices on the graph. On the other hand, it is widely believed that one can get from almost any member of a social network to any other via only a small number of intermediate acquaintances, the exact number typically scaling as the logarithm of the total number of individuals comprising the network. Within the population of the world, for example, it has been suggested that there are only about “six degrees of separation” between any human being and any other. This behavior is not seen in regular lattices but is a well-known property of random graphs, where the average shortest path between two randomly-chosen vertices scales as $`\mathrm{log}N/\mathrm{log}z`$, where $`N`$ is the total number of vertices in the graph and $`z`$ is the average coordination number. Recently, Watts and Strogatz have proposed a model which attempts to mimic the properties of social networks. This “small-world” model consists of a network of vertices whose topology is that of a regular lattice, with the addition of a low density $`\varphi `$ of connections between randomly-chosen pairs of vertices. Watts and Strogatz showed that graphs of this type can indeed possess well-defined locales in the sense described above while at the same time possessing average vertex–vertex distances which are comparable with those found on true random graphs, even for quite small values of $`\varphi `$. In this paper we study in detail the behavior of the small-world model, concentrating particularly on its scaling properties. The outline of the paper is as follows. In Section II we define the model. In Section III we study the typical length-scales present in the model and argue that the model undergoes a continuous phase transition as the density of random connections tends to zero. We also examine the cross-over between large- and small-world behavior in the model, and the structure of “neighborhoods” of adjacent vertices. In Section IV we derive a scaling form for the average vertex–vertex distance on a small-world graph and demonstrate numerically that this form is followed over a wide range of the parameters of the model. In Section V we calculate the effective dimension of small-world graphs and show that this dimension depends on the length-scale on which we examine the graph. In Section VI we consider the properties of site percolation on these systems, as a model of the spread of information or disease through social networks. Finally, in Section VII we give our conclusions.

## II The small-world model

The original small-world model of Watts and Strogatz, in its simplest incarnation, is defined as follows.
We take a one-dimensional lattice of $`L`$ vertices with connections or bonds between nearest neighbors and periodic boundary conditions (the lattice is a ring). Then we go through each of the bonds in turn and independently with some probability $`\varphi `$ “rewire” it. Rewiring in this context means shifting one end of the bond to a new vertex chosen uniformly at random from the whole lattice, with the exception that no two vertices can have more than one bond running between them, and no vertex can be connected by a bond to itself. In this model the average coordination number $`z`$ remains constant ($`z=2`$) during the rewiring process, but the coordination number of any particular vertex may change. The total number of rewired bonds, which we will refer to as “shortcuts”, is $`\varphi L`$ on average. For the purposes of analytic treatment the Watts–Strogatz model has a number of problems. One problem is that the distribution of shortcuts is not completely uniform; not all choices of the positions of the rewired bonds are equally probable. For example, configurations with more than one bond between a particular pair of vertices are explicitly forbidden. This non-uniformity of the distribution makes an average over different realizations of the randomness hard to perform. A more serious problem is that one of the crucial quantities of interest in the model, the average distance between pairs of vertices on the graph, is poorly defined. The reason is that there is a finite probability of a portion of the lattice becoming detached from the rest in this model. Formally, we can represent this by saying that the distance from such a portion to a vertex elsewhere on the lattice is infinite. However, this means that the average vertex–vertex distance on the lattice is then itself infinite, and hence that the vertex–vertex distance averaged over all realizations is also infinite. For numerical studies such as those of Watts and Strogatz this does not present any substantial difficulties, but for analytic work it results in a number of quantities and expressions being poorly defined. Both of these problems can be circumvented by a slight modification of the model. In our version of the small-world model we again start with a regular one-dimensional lattice, but now instead of rewiring each bond with probability $`\varphi `$, we add shortcuts between pairs of vertices chosen uniformly at random but we do not remove any bonds from the regular lattice. We also explicitly allow there to be more than one bond between any two vertices, or a bond which connects a vertex to itself. In order to preserve compatibility with the results of Watts and Strogatz and others, we add with probability $`\varphi `$ one shortcut for each bond on the original lattice, so that there are again $`\varphi L`$ shortcuts on average. The average coordination number is $`z=2(1+\varphi )`$. This model is equivalent to the Watts–Strogatz model for small $`\varphi `$, whilst being better behaved when $`\varphi `$ becomes comparable to 1. Fig. 1(a) shows one realization of our model for $`L=24`$. Real social networks usually have average coordination numbers $`z`$ significantly higher than $`2`$, and we can arrange for higher $`z`$ in our model in a number of ways. Watts and Strogatz proposed adding bonds to next-nearest or further neighbors on the underlying one-dimensional lattice up to some fixed range which we will call $`k`$. In our variation on the model we can also start with such a lattice and then add shortcuts to it. 
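Before moving on, here is a minimal sketch of the $`k=1`$ variant just described, built as an adjacency list with multi-edges and self-loops explicitly permitted; the data structure is our own choice for illustration, not anything prescribed by the model.

```python
import random

def small_world_graph(L, phi, rng=random):
    """Ring of L vertices with nearest-neighbor bonds, plus one candidate
    shortcut per underlying bond with probability phi, so that there are
    phi*L shortcuts on average.  Multi-edges and self-loops are allowed."""
    adj = {v: [] for v in range(L)}
    for v in range(L):                     # the underlying ring (k = 1)
        w = (v + 1) % L
        adj[v].append(w)
        adj[w].append(v)
    for _ in range(L):                     # one candidate shortcut per bond
        if rng.random() < phi:
            a, b = rng.randrange(L), rng.randrange(L)
            adj[a].append(b)
            adj[b].append(a)
    return adj

g = small_world_graph(1000, 0.05)          # ~50 shortcuts on average
```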
The mean number of shortcuts is then $`\varphi kL`$ and the average coordination number is $`z=2k(1+\varphi )`$. Fig. 1(b) shows a realization of this model for $`k=3`$. Another way of increasing the coordination number, suggested first by Watts, is to use an underlying lattice for the model with dimension greater than one. In this paper we will consider networks based on square and (hyper)cubic lattices in $`d`$ dimensions. We take a lattice of linear dimension $`L`$, with $`L^d`$ vertices, nearest-neighbor bonds and periodic boundary conditions, and add shortcuts between randomly chosen pairs of vertices. Such a graph has $`\varphi dL^d`$ shortcuts and an average coordination number $`z=2d(1+\varphi )`$. An example is shown in Fig. 2(a) for $`d=2`$. We can also add bonds between next-nearest or further neighbors to such a lattice. The most straightforward generalization of the one-dimensional case is to add bonds along the principal axes of the lattice up to some fixed range $`k`$, as shown in Fig. 2(b) for $`k=3`$. Graphs of this type have $`\varphi kdL^d`$ shortcuts on average and a mean coordination number of $`z=2kd(1+\varphi )`$. Our main interest in this paper is with the properties of the small-world model for small values of the shortcut probability $`\varphi `$. Watts and Strogatz found that the model displays many of the characteristics of true random graphs even for $`\varphi \ll 1`$, and it seems to be in this regime that the model’s properties are most like those of real-world social networks.

## III Length-scales in small-world graphs

A fundamental observable property of interest on small-world lattices is the shortest path between two vertices—the number of degrees of separation—measured as the number of bonds traversed to get from one vertex to another, averaged over all pairs of vertices and over all realizations of the randomness in the model. We denote this quantity $`\mathrm{}`$. On ordinary regular lattices $`\mathrm{}`$ scales linearly with the lattice size $`L`$. On the underlying lattices used in the models described here for instance, it is equal to $`\frac{1}{4}dL/k`$. On true random graphs, in which the probability of connection between any two vertices is the same, $`\mathrm{}`$ is proportional to $`\mathrm{log}N/\mathrm{log}z`$, where $`N`$ is the number of vertices on the graph. The small-world model interpolates between these extremes, showing linear scaling $`\mathrm{}\sim L`$ for small $`\varphi `$, or on systems small enough that there are very few shortcuts, and logarithmic scaling $`\mathrm{}\sim \mathrm{log}N=d\mathrm{log}L`$ when $`\varphi `$ or $`L`$ is large enough. In this section and the following one we study the nature of the cross-over between these two regimes, which we refer to as “large-world” and “small-world” regimes respectively. For simplicity we will work mostly with the case $`k=1`$, although we will quote results for $`k>1`$ where they are of interest. When $`k=1`$ the small-world model has only one independent parameter—the probability $`\varphi `$—and hence can have only one non-trivial length-scale other than the lattice constant of the underlying lattice. This length-scale, which we will denote $`\xi `$, can be defined in a number of different ways, all definitions being necessarily proportional to one another. One simple way is to define $`\xi `$ to be the typical distance between the ends of shortcuts on the lattice.
In a one-dimensional system with $`k=1`$, for example, there are on average $`\varphi L`$ shortcuts and therefore $`2\varphi L`$ ends of shortcuts. Since the lattice has $`L`$ vertices, the average distance between ends of shortcuts is $`L/(2\varphi L)=1/(2\varphi )`$. In fact, it is more convenient for our purposes to define $`\xi `$ without the factor of $`2`$ in the denominator, so that $`\xi =1/\varphi `$, or for general $`d`$ $$\xi =\frac{1}{(\varphi d)^{1/d}}.$$ (1) For $`k>1`$ the appropriate generalization is $$\xi =\frac{1}{(\varphi kd)^{1/d}}.$$ (2) As we see, $`\xi `$ diverges as $`\varphi \to 0`$ according to $$\xi \sim \varphi ^{-\tau },$$ (3) where the exponent $`\tau `$ is $$\tau =\frac{1}{d}.$$ (4) A number of authors have previously considered a divergence of the kind described by Eq. (3) with $`\xi `$ defined not as the typical distance between the ends of shortcuts, but as the system size $`L`$ at which the cross-over from large- to small-world scaling occurs. We will shortly argue that in fact the length-scale $`\xi `$ defined here is precisely equal to this cross-over length, and hence that these two divergences are the same. The quantity $`\xi `$ plays a role similar to that of the correlation length in an interacting system in standard statistical physics. Its divergence as $`\varphi \to 0`$ leaves the system with no length-scale other than the lattice spacing, so that at long distances we expect all spatial distributions to be scale-free. This is precisely the behavior one sees in an interacting system undergoing a continuous phase transition, and it is reasonable to regard the small-world model as having a continuous phase transition at this point. Note that the transition is a one-sided one since $`\varphi `$ is a probability and cannot take values less than zero. In this respect the transition is similar to that seen in the one-dimensional Ising model, or in percolation on a one-dimensional lattice. The exponent $`\tau `$ plays the part of a critical exponent for the system, similar to the correlation length exponent $`\nu `$ for a thermal phase transition. De Menezes et al. have argued that the length-scale $`\xi `$ can only be defined in terms of the cross-over point between large- and small-world behavior, and that there is no definition of $`\xi `$ which can be made consistent in the limit of large system size. For this reason they argue that the transition at $`\varphi =0`$ should be regarded as first-order rather than continuous. In fact however, the arguments of de Menezes et al. show only that one particular definition of $`\xi `$ is inconsistent; they show that $`\xi `$ cannot be consistently defined in terms of the mean vertex–vertex distance between vertices in finite regions of infinite small-world graphs. This does not prove that no definition of $`\xi `$ is consistent in the $`L\to \infty `$ limit and, as we have demonstrated here, consistent definitions do exist. Thus it seems appropriate to consider the transition at $`\varphi =0`$ to be a continuous one. Barthélémy and Amaral have conjectured on the basis of numerical simulations that $`\tau =\frac{2}{3}`$ for $`d=1`$. As we have shown here, $`\tau `$ is in fact equal to $`1/d`$, and specifically $`\tau =1`$ in one dimension. We have also demonstrated this result previously using a renormalization group (RG) argument, and it has been confirmed by extensive numerical simulations. The length-scale $`\xi `$ governs a number of other properties of small-world graphs.
First, as mentioned above, it defines the point at which the average vertex–vertex distance $`\mathrm{}`$ crosses over from linear to logarithmic scaling with system size $`L`$. This statement is necessarily true, since $`\xi `$ is the only non-trivial length scale in the model, but we can demonstrate it explicitly by noting that the linear scaling regime is the one in which the average number of shortcuts on the lattice is small compared with unity and the logarithmic regime is the one in which it is large. The cross-over occurs in the region where the average number of shortcuts is about one, or in other words when $`\varphi kdL^d=1`$. Rearranging for $`L`$, the cross-over length is $$L=\frac{1}{(\varphi kd)^{1/d}}=\xi .$$ (5) The length-scale $`\xi `$ also governs the average number $`V(r)`$ of neighbors of a given vertex within a neighborhood of radius $`r`$. The number of vertices in such a neighborhood increases as $`r^d`$ for $`r\ll \xi `$, while for $`r\gg \xi `$ the graph behaves as a random graph and the size of the neighborhood must increase exponentially with some power of $`r/\xi `$. To derive the specific functional form of $`V(r)`$ we consider a small-world graph in the limit of infinite $`L`$. Let $`a(r)`$ be the surface area of a “sphere” of radius $`r`$ on the underlying lattice of the model, i.e., it is the number of points which are exactly $`r`$ steps away from any vertex. (For $`k=1`$, $`a(r)=2^dr^{d-1}/\mathrm{\Gamma }(d)`$ when $`r\ge 1`$.) The volume within a neighborhood of radius $`r`$ in an infinite system is the sum of $`a(r)`$ over $`r`$, plus a contribution of $`V(r-r^{\prime })`$ for every shortcut encountered at a distance $`r^{\prime }`$, of which there are on average $`2\xi ^{-d}a(r^{\prime })`$. Thus $`V(r)`$ is in general the solution of the equation $$V(r)=\sum _{r^{\prime }=0}^{r}a(r^{\prime })[1+2\xi ^{-d}V(r-r^{\prime })].$$ (6) In one dimension with $`k=1`$, for example, $`a(r)=2`$ for all $`r`$ and, approximating the sum with an integral and then differentiating with respect to $`r`$, we get $$\frac{\mathrm{d}V}{\mathrm{d}r}=2[1+2V(r)/\xi ],$$ (7) which has the solution $$V(r)=\frac{1}{2}\xi (\mathrm{e}^{4r/\xi }-1).$$ (8) Note that for $`r\ll \xi `$ this scales as $`r`$, independent of $`\xi `$, and for $`r\gg \xi `$ it grows exponentially, as expected. Eq. (8) also implies that the surface area of a sphere of radius $`r`$ on the graph, which is the derivative of $`V(r)`$, should be $$A(r)=2\mathrm{e}^{4r/\xi }.$$ (9) These results are easily checked numerically and give us a simple independent measurement of $`\xi `$ which we can use to confirm our earlier arguments. In Fig. 3 we show curves of $`A(r)`$ from computer simulations of systems with $`\varphi =0.01`$ for values of $`L`$ equal to powers of two from $`128`$ up to $`\mathrm{131\hspace{0.17em}072}`$ (solid lines). The dotted line is Eq. (9) with $`\xi `$ taken from Eq. (1). The convergence of the simulation results to the predicted exponential form as the system size grows confirms our contention that $`\xi `$ is well-defined in the limit of large $`L`$. Fig. 4 shows $`A(r)`$ for $`L=\mathrm{100\hspace{0.17em}000}`$ for various values of $`\varphi `$. Eq. (9) implies that the slope of the lines in the limit of small $`r`$ is $`4/\xi `$. In the inset we show the values of $`\xi `$ extracted from fits to the slope as a function of $`\varphi `$ on logarithmic scales, and a straight-line fit to these points gives us an estimate of $`\tau =0.99\pm 0.01`$ for the exponent governing the transition at $`\varphi =0`$ (Eq. (3)). This is in good agreement with our theoretical prediction that $`\tau =1`$.
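The closed forms (8) and (9) can be checked directly against the differential equation (7). A minimal sketch for $`d=1`$, $`k=1`$, where $`\xi =1/\varphi `$; the Euler integration is our own verification device, not part of the original calculation.

```python
import math

def V_closed(r, xi):
    """Eq. (8): V(r) = (xi / 2) * (exp(4 r / xi) - 1)."""
    return 0.5 * xi * (math.exp(4.0 * r / xi) - 1.0)

def A_closed(r, xi):
    """Eq. (9): A(r) = 2 * exp(4 r / xi), the derivative of V(r)."""
    return 2.0 * math.exp(4.0 * r / xi)

def V_euler(r, xi, dr=1e-3):
    """Integrate Eq. (7), dV/dr = 2 [1 + 2 V / xi], from V(0) = 0."""
    V, x = 0.0, 0.0
    while x < r:
        V += dr * 2.0 * (1.0 + 2.0 * V / xi)
        x += dr
    return V

xi = 1.0 / 0.01                            # xi = 1/phi for d = 1, k = 1
print(V_closed(50, xi), V_euler(50, xi))   # should nearly coincide
```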
## IV Scaling in small-world graphs

Given the existence of the single non-trivial length-scale $`\xi `$ for the small-world model, we can also say how the mean vertex–vertex distance $`\mathrm{}`$ should scale with system size and other parameters near the phase transition. In this regime the dimensionless quantity $`\mathrm{}/L`$ can be a function only of the dimensionless quantity $`L/\xi `$, since no other dimensionless combinations of variables exist. Thus we can write $$\mathrm{}=Lf(L/\xi ),$$ (10) where $`f(x)`$ is an unknown but universal scaling function. A scaling form similar to this was suggested previously by Barthélémy and Amaral on empirical grounds. Substituting from Eq. (1), we then get for the $`k=1`$ case $$\mathrm{}=Lf(\varphi ^{1/d}L).$$ (11) (We have absorbed a factor of $`d^{1/d}`$ into the definition of $`f(x)`$ here to make it consistent with the definition we used in Ref. .) The usefulness of this equation derives from the fact that the function $`f(x)`$ contains no dependence on $`\varphi `$ or $`L`$ other than the explicit dependence introduced through its argument. Its functional form can however change with dimension $`d`$ and indeed it does. In order to obey the known asymptotic forms of $`\mathrm{}`$ for large and small systems, the scaling function $`f(x)`$ must satisfy $$f(x)\to \frac{\mathrm{log}x}{x}\text{as }x\to \infty ,$$ (12) and $$f(x)\to \frac{1}{4}d\text{as }x\to 0.$$ (13) When $`k>1`$, $`\mathrm{}`$ tends to $`\frac{1}{4}dL/k`$ for small values of $`L`$ and $`\xi `$ is given by Eq. (2), so the appropriate generalization of the scaling form is $$\mathrm{}=\frac{L}{k}f\left((\varphi k)^{1/d}L\right),$$ (14) with $`f(x)`$ taking the same limiting forms (12) and (13). Previously we derived this scaling form in a more rigorous way using an RG argument. We can again test these results numerically by measuring $`\mathrm{}`$ on small-world graphs for various values of $`\varphi `$, $`k`$ and $`L`$. Eq. (14) implies that if we plot the results on a graph of $`\mathrm{}k/L`$ against $`(\varphi k)^{1/d}L`$, they should collapse onto a single curve for any given dimension $`d`$. In Fig. 5 we have done this for systems based on underlying lattices with $`d=1`$ for a range of values of $`\varphi `$ and $`L`$, for $`k=1`$ and $`5`$. As the figure shows, the collapse is excellent. In the inset we show results for $`d=2`$ with $`k=1`$, which also collapse nicely onto a single curve. The lower limits of the scaling functions in each case are in good agreement with our theoretical predictions of $`\frac{1}{4}`$ for $`d=1`$ and $`\frac{1}{2}`$ for $`d=2`$. We are not able to solve exactly for the form of the scaling function $`f(x)`$, but we can express it as a series expansion in powers of $`\varphi `$ as follows. Since the scaling function is universal and has no implicit dependence on $`k`$, it is adequate to calculate it for the case $`k=1`$; its form is the same for all other values of $`k`$. For $`k=1`$ the probability of having exactly $`m`$ shortcuts on the graph is $$P_m=\left(\genfrac{}{}{0pt}{}{dL^d}{m}\right)\varphi ^m(1-\varphi )^{dL^d-m}.$$ (15) Let $`\mathrm{}_m`$ be the mean vertex–vertex distance on a graph with $`m`$ shortcuts in the limit of large $`L`$, averaged over all such graphs.
Then the mean vertex–vertex distance averaged over all graphs regardless of the number of shortcuts is $$\mathrm{}=\sum _{m=0}^{dL^d}P_m\mathrm{}_m.$$ (16) Note that in order to calculate $`\mathrm{}`$ up to order $`\varphi ^m`$ we only need to know the behavior of the model when it has $`m`$ or fewer shortcuts. For the $`d=1`$ case the values of the $`\mathrm{}_m`$ have been calculated up to $`m=2`$ by Strang and Eriksson and are given in Table I. Substituting these into Eq. (16) and collecting terms in $`\varphi `$, we then find that $$\frac{\mathrm{}}{L}=\frac{1}{4}-\frac{1}{24}\varphi L+\frac{11}{1440}\varphi ^2L^2-\frac{11}{1440}\varphi ^2L+\mathrm{O}(\varphi ^3).$$ (17) The term in $`\varphi ^2L`$ can be dropped when $`L`$ is large or $`\varphi `$ small, since it is negligible by comparison with at least one of the terms before it. Thus the scaling function is $$f(x)=\frac{1}{4}-\frac{1}{24}x+\frac{11}{1440}x^2+\mathrm{O}(x^3).$$ (18) This form is shown as the dotted line in Fig. 5 and agrees well with the numerical calculations for small values of the scaling variable $`x`$, but deviates badly for large values. Calculating the exact values of the quantities $`\mathrm{}_m`$ for higher orders is an arduous task and probably does not justify the effort involved. However, we have calculated the values of the $`\mathrm{}_m`$ numerically up to $`m=5`$ by evaluating the average vertex–vertex distance $`\mathrm{}`$ on graphs which are constrained to have exactly 3, 4 or 5 shortcuts. Performing a Taylor expansion of $`\mathrm{}/L`$ about $`L=\infty `$, we get $$\frac{\mathrm{}}{L}=\frac{\mathrm{}_m}{L}\left[1+\frac{c}{L}+\mathrm{O}\left(L^{-2}\right)\right],$$ (19) where $`c`$ is a constant. Thus we can estimate $`\mathrm{}_m/L`$ from the vertical-axis intercept of a plot of $`\mathrm{}/L`$ against $`L^{-1}`$ for large $`L`$. The results are shown in Table I. Calculating higher orders still would be straightforward. Using these values we have evaluated the scaling function $`f(x)`$ up to fifth order in $`x`$; the result is shown as the dot–dashed line in Fig. 5. As we can see the range over which it matches the numerical results is greater than before, but not by much, indicating that the series expansion converges only slowly as extra terms are added. It appears therefore that series expansion would be a poor way of calculating $`f(x)`$ over the entire range of interest. A much better result can be obtained by using our series expansion coefficients to define a Padé approximant to $`f(x)`$. Since we know that $`f(x)`$ tends to a constant $`f(0)=\frac{1}{4}d`$ for small $`x`$ and falls off approximately as $`1/x`$ for large $`x`$, the appropriate Padé approximants to use are odd-order approximants where the approximant of order $`2n+1`$ ($`n`$ integer) has the form $$f(x)=f(0)\frac{A_n(x)}{B_{n+1}(x)},$$ (20) where $`A_n(x)`$ and $`B_n(x)`$ are polynomials in $`x`$ of degree $`n`$ with constant term equal to 1.
For example, to third order we should use the approximant $$f(x)=f(0)\frac{1+a_1x}{1+b_1x+b_2x^2}.$$ (21) Expanding about $`x=0`$ this gives $$\frac{f(x)}{f(0)}=1+(a_1-b_1)x+(b_1^2-a_1b_1-b_2)x^2+[(a_1-b_1)(b_1^2-b_2)+b_1b_2]x^3+\mathrm{O}(x^4).$$ (23) Equating coefficients order by order in $`x`$ and solving for the $`a`$’s and $`b`$’s, we find that $$a_1=1.825\pm 0.075,$$ (25) $$b_1=1.991\pm 0.075,$$ (26) $$b_2=0.301\pm 0.012.$$ (27) Substituting these back into (21) and using the known value of $`f(0)`$ then gives us our approximation to $`f(x)`$. This approximation is plotted as the solid line in Fig. 5 and, as the figure shows, is an excellent guide to the value of $`f(x)`$ over a large range of $`x`$. In theory it should be possible to calculate the fifth-order Padé approximant using the numerical results in Table I, although we have not done this here. Substituting $`f(x)`$ back into the scaling form, Eq. (14), we can also use the Padé approximant to predict the value of the mean vertex–vertex distance for any values of $`\varphi `$, $`k`$ and $`L`$ within the scaling regime. We will make use of this result in the next section to calculate the effective dimension of small-world graphs.

## V Effective dimension

The calculation of the volumes and surface areas of neighborhoods of vertices on small-world graphs in Section III leads us naturally to the consideration of the dimension of these systems. On a regular lattice of dimension $`D`$, the volume $`V(r)`$ of a neighborhood of radius $`r`$ increases in proportion to $`r^D`$, and hence one can calculate $`D`$ from $$D=\frac{\mathrm{d}\mathrm{log}V}{\mathrm{d}\mathrm{log}r}=\frac{rA(r)}{V(r)},$$ (28) where $`A(r)`$ is the surface area of the neighborhood, as previously. We can use the same expression to calculate the effective dimension of our small-world graphs. Thus in the case of an underlying lattice of dimension $`d=1`$, the effective dimension of the graph is $$D=\frac{4r}{\xi }\frac{\mathrm{e}^{4r/\xi }}{\mathrm{e}^{4r/\xi }-1},$$ (29) where we have made use of Eqs. (8) and (9). For $`r\ll \xi `$ this tends to one, as we would expect, and for $`r\gg \xi `$ it tends to $`4r/\xi `$, increasing linearly with the radius of the neighborhood. Thus the effective dimension of a small-world graph depends on the length-scale on which we look at it, in a way reminiscent of the behavior of multifractals. This result will become important in Section VI when we consider site percolation on small-world graphs. In Fig. 6 we show the effective dimension of neighborhoods on a large graph measured in numerical simulations (circles), along with the analytic result, Eq. (29) (solid line). As we can see from the figure, the numerical and analytic results are in good agreement for small radii $`r`$, but the numerical results fall off sharply for larger $`r`$. The reason for this is that Eq. (28) breaks down as $`V(r)`$ approaches the volume of the entire system; $`V(r)`$ must tend to $`L^d`$ in this limit and hence the derivative in (28) tends to zero. The same effect is also seen if one tries to use Eq. (28) on ordinary regular lattices of finite size.
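The fitted coefficients make the approximant directly usable. The sketch below evaluates the third-order Padé form (21) for the $`d=1`$ scaling function (so $`f(0)=1/4`$) with the central values from Eqs. (25)-(27), feeds it into the scaling form (14), and also evaluates the effective dimension of Eq. (29).

```python
import math

A1, B1, B2 = 1.825, 1.991, 0.301     # central fitted values, Eqs. (25)-(27)

def f_pade(x):
    """Third-order Pade approximant, Eq. (21), for d = 1 (f(0) = 1/4)."""
    return 0.25 * (1.0 + A1 * x) / (1.0 + B1 * x + B2 * x * x)

def mean_distance(L, phi, k=1):
    """Scaling form, Eq. (14), for d = 1: ell = (L/k) f(phi k L)."""
    return (L / k) * f_pade(phi * k * L)

def effective_dimension(r, xi):
    """Eq. (29): D = (4r/xi) e^(4r/xi) / (e^(4r/xi) - 1)."""
    y = 4.0 * r / xi
    return y * math.exp(y) / (math.exp(y) - 1.0)

print(mean_distance(10000, 0.01))    # ell within the scaling regime
print(effective_dimension(50, 100.0))
```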
To characterize the dimension of an entire system, therefore, we use another measure of $`D`$, as follows. On a regular lattice of finite linear size $`\mathrm{}`$, the number of vertices $`N`$ scales as $`\mathrm{}^D`$ and hence we can calculate the dimension from $$D=\frac{\mathrm{d}\mathrm{log}N}{\mathrm{d}\mathrm{log}\mathrm{}}.$$ (30) We can apply the same formula to the calculation of the effective dimension of small-world graphs putting $`N=L^d`$, although, since we don’t have an analytic solution for $`\mathrm{}`$, we cannot derive an analytic solution for $`D`$ in this case. On the other hand, if we are in the scaling regime described in Section IV—the regime in which $`\xi \gg 1`$—then Eq. (14) applies, along with the limiting forms, Eqs. (12) and (13). Substituting into (30), this gives us $$\frac{1}{D}=\frac{\mathrm{d}\mathrm{log}\mathrm{}}{\mathrm{d}\mathrm{log}L^d}=\frac{1}{d}\left[1+\frac{\mathrm{d}\mathrm{log}f(x)}{\mathrm{d}\mathrm{log}x}\right],$$ (31) where $`x=(\varphi k)^{1/d}L\sim L/\xi `$. In other words $`D`$ is a universal function of the scaling variable $`x`$. We know that $`f(x)`$ tends to a constant for small $`x`$ (i.e., $`\xi \gg L`$), so that $`D=d`$ in this limit, as we would expect. For large $`x`$ (i.e., $`\xi \ll L`$), Eq. (12) applies. Substituting into (31) this gives us $`D=d\mathrm{log}x`$. In the inset of Fig. 6 we show $`D`$ from numerical calculations as a function of $`x`$ in one-dimensional systems of a variety of sizes, along with the expected asymptotic forms, which it follows reasonably closely. In the main figure we also show this second measure of $`D`$ (squares with error bars) as a function of the system radius $`\mathrm{}`$ (with which it should scale linearly for large $`\mathrm{}`$, since $`\mathrm{}\sim \mathrm{log}x`$ for large $`x`$). As the figure shows, the two measures of effective dimension agree reasonably well. The numerical errors on the first measure, Eq. (28), are much smaller than those on the second, Eq. (30) (which is quite hard to calculate numerically), but the second measure is clearly preferable as a measure of the dimension of the entire system, since the first fails badly when $`r`$ approaches $`\mathrm{}`$. We also show the value of our second measure of dimension calculated using the Padé approximant to $`f(x)`$ derived in Section IV (dotted line in the main figure). This agrees well with the numerical evaluation for radii up to about $`1000`$ and has significantly smaller statistical error, but overestimates $`D`$ somewhat beyond this point because of inaccuracies in the approximation; the Padé approximant scales as $`1/x`$ for large values of $`x`$ rather than $`\mathrm{log}x/x`$, which means that $`D`$ will scale as $`x`$ rather than $`\mathrm{log}x`$ for large $`x`$.

## VI Percolation

In the previous sections of this paper we have examined statistical properties of small-world graphs such as typical length-scales, vertex–vertex distances, scaling of volumes and areas, and effective dimension of graphs. These are essentially static properties of the networks; to the extent that small-world graphs mimic social networks, these properties tell us about the static structure of those networks. However, social science also deals with dynamic processes going on within social networks, such as the spread of ideas, information, or diseases. This leads us to the consideration of dynamical models defined on small-world graphs. A small amount of research has already been conducted in this area.
Watts, for instance, has considered the properties of a number of simple dynamical systems defined on small-world graphs, such as networks of coupled oscillators and cellular automata. Barrat and Weigt have looked at the properties of the Ising model on small-world graphs and derived a solution for its partition function using the replica trick. Monasson looked at the spectral properties of the Laplacian operator on small-world graphs, which tells us about the time evolution of a diffusive field on the graph. There is also a moderate body of work in the mathematical and social sciences which, although not directly addressing the small-world model, deals with general issues of information propagation in networks, such as the adoption of innovations, human epidemiology, and the flow of data on the Internet. In this section we discuss the modeling of information or disease propagation specifically on small-world graphs. Suppose for example that the vertices of a small-world graph represent individuals and the bonds between them represent physical contact by which a disease can be spread. The spread of ideas can be similarly modeled; the bonds then represent information connections between individuals which could include letters, telephone calls, or email, as well as physical contacts. The simplest model for the spread of disease is to have the disease spread between neighbors on the graph at a uniform rate, starting from some initial carrier individual. From the results of Section IV we already know what this will look like. If for example we wish to know how many people in total have contracted a disease, that number is just equal to the number $`V(r)`$ within some radius $`r`$ of the initial carrier, where $`r`$ increases linearly with time. (We assume that no individual can catch the disease twice, which is the case with most common diseases.) Thus, Eq. (8) tells us that, for a $`d=1`$ small-world graph, the number of individuals who have had a particular disease increases exponentially, with a time-constant governed by the typical length-scale $`\xi `$ of the graph. Since all real-world social networks have a finite number of vertices $`N`$, this exponential growth is expected to saturate when $`V(r)`$ reaches $`N=L^d`$. This is not a particularly startling result; the usual model for the spread of epidemics is the logistic growth model, which shows initial exponential spread followed by saturation. For a disease like influenza, which spreads fast but is self-limiting, the number of people who are ill at any one time should be roughly proportional to the area $`A(r)`$ of the neighborhood surrounding the initial carrier, with $`r`$ again increasing linearly in time. This implies that the epidemic should have a single humped form with time, like the curves of $`A(r)`$ plotted in Fig. 4. Note that the vertical axis in this figure is logarithmic; on linear axes the curves are bell-shaped rather than quadratic. In the context of the spread of information or ideas, similar behavior might be seen in the development of fads. By a fad we mean an idea which is catchy and therefore spreads fast, but which people tire of quickly. Fashions, jokes, toys, or buzzwords might be expected to show popularity profiles over time similar to the curves in Fig. 4. However, for most real diseases (or fads) this is not a very good model of how they spread. For real diseases it is commonly the case that only a certain fraction $`p`$ of the population is susceptible to the disease. 
This can be mimicked in our model by placing a two-state variable on each vertex which denotes whether the individual at that vertex is susceptible. The disease then spreads only within the local “cluster” of connected susceptible vertices surrounding the initial carrier. One question which we can answer with such a model is how high the density $`p`$ of susceptible individuals can be before the largest connected cluster covers a significant fraction of the entire network and an epidemic ensues. Mathematically, this is precisely the problem of site percolation on a social network, at least in the case where the susceptible individuals are randomly distributed over the vertices. To the extent that small-world graphs mimic social networks, therefore, it is interesting to look at the percolation problem. The transition corresponds to the point on a regular lattice at which a percolating cluster forms, whose size increases with the size $`L`$ of the lattice for arbitrarily large $`L`$. On random graphs there is a similar transition, marked by the formation of a so-called “giant component” of connected vertices. On small-world graphs we can calculate approximately the percolation probability $`p=p_c`$ at which the transition takes place as follows. Consider a $`d=1`$ small-world graph of the kind pictured in Fig. 1. For the moment let us ignore the shortcut bonds and consider the percolation properties just of the underlying regular lattice. If we color in a fraction $`p`$ of the sites on this underlying lattice, the occupied sites will form a number of connected clusters. In order for two adjacent parts of the lattice not to be connected, we must have a series of at least $`k`$ consecutive unoccupied sites between them. The number $`n`$ of such series can be calculated as follows. The probability that we have a series of $`k`$ unoccupied sites starting at a particular site, followed by an occupied one, is $`p(1-p)^k`$. Once we have such a series, the states of the next $`k`$ sites are fixed and so it is not possible to have another such series for $`k`$ steps. Thus the number $`n`$ is given by $$n=p(1-p)^k(L-kn).$$ (32) Rearranging for $`n`$ we get $$n=L\frac{p(1-p)^k}{1+kp(1-p)^k}.$$ (33) For this one-dimensional system, the percolation transition occurs when we have just one break in the chain, i.e., when $`n=1`$. This gives us a $`k`$th order equation for $`p_c`$ which is in general not exactly soluble, but we can find its roots numerically if we wish. Now consider what happens when we introduce shortcuts into the graph. The number of breaks $`n`$, Eq. (33), is also the number of connected clusters of occupied sites on the underlying lattice. Let us for the moment suppose that the size of each cluster can be approximated by the average cluster size. A number $`\varphi kL`$ of shortcuts are now added to the graph between pairs of vertices chosen uniformly at random. A fraction $`p^2`$ of these will connect two occupied sites and therefore can connect together two clusters of occupied sites. The problem of when the percolation transition occurs is then precisely that of the formation of a giant component on an ordinary random graph with $`n`$ vertices. It is known that such a component forms when the mean coordination number of the random graph is one, or alternatively, when the number of bonds on the graph is half the number of vertices.
In other words, the transition probability $`p_c`$ must satisfy $$p_c^2\varphi kL=\frac{1}{2}L\frac{p_c(1-p_c)^k}{1+kp_c(1-p_c)^k},$$ (34) or $$\varphi =\frac{(1-p_c)^k}{2kp_c[1+kp_c(1-p_c)^k]}.$$ (35) We have checked this result against numerical calculations. In order to find the value of $`p_c`$ numerically, we employ a tree-based invasion algorithm similar to the invaded cluster algorithm used to find the percolation point in Ising systems. This algorithm can calculate the entire curve of average cluster size versus $`p`$ in time which scales as $`L\mathrm{log}L`$. We define $`p_c`$ to be the point at which the average cluster size divided by $`L`$ rises above a certain threshold. For systems of infinite size the transition is instantaneous and hence the choice of threshold makes no difference to $`p_c`$, except that $`p_c`$ can never take a value lower than the threshold itself, since even in a fully connected graph the average cluster size per vertex can be no greater than the fraction $`p`$ of occupied vertices. Thus it makes sense to choose the threshold as low as possible. In real calculations, however, we cannot use an infinitesimal threshold because of finite size effects. For the systems studied here we have found that a threshold of $`0.2`$ works well. Fig. 7 shows the critical probability $`p_c`$ for systems of size $`L=\mathrm{10\hspace{0.17em}000}`$ for a range of values of $`\varphi `$ for $`k=1`$, 2 and 5. The points are the numerical results and the solid lines are Eq. (35). As the figure shows, the agreement between simulation and theory is good, although there are some differences. As $`\varphi `$ approaches one and the value of $`p_c`$ drops, the two fail to agree because, as mentioned above, $`p_c`$ cannot take a value lower than the threshold used in its calculation, which was $`0.2`$ in this case. The results also fail to agree for very low values of $`\varphi `$, where $`p_c`$ becomes large. This is because Eq. (33) is not a correct expression for the number of clusters on the underlying lattice when $`n<1`$. This is clear since when there are no breaks in the sequence of connected vertices around the ring it is not also true that there are no connected clusters. In fact there is still one cluster; the equality between number of breaks and number of clusters breaks down at $`n=1`$. The value of $`p`$ at which this happens is given by putting $`n=1`$ in Eq. (32). Since $`p`$ is close to one at this point, its value is well approximated by $$p\simeq 1-L^{-1/k},$$ (36) and this is the value at which the curves in Fig. 7 should roll off at low $`\varphi `$. For $`k=5`$, for example, for which the roll-off is most pronounced, this expression gives a value of $`p\approx 0.8`$, which agrees reasonably well with what we see in the figure. There is also an overall tendency in Fig. 7 for our analytic expression to overestimate the value of $`p_c`$ slightly. This we put down to the approximation we made in the derivation of Eq. (35) that all clusters of vertices on the underlying lattice can be assumed to have the size of the average cluster. In actual fact, some clusters will be smaller than the average and some larger. Since the shortcuts will connect to clusters with probability proportional to the cluster size, we can expect percolation to set in within the subset of larger-than-average clusters before it would set in if all clusters had the average size. This makes the true value of $`p_c`$ slightly lower than that given by Eq. (35). In general, however, the equation gives a good guide to the behavior of the system.
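Since Eq. (35) gives $`\varphi `$ as a monotonically decreasing function of $`p_c`$, inverting it for a given shortcut density needs only a one-dimensional root search. A bisection sketch:

```python
def phi_of_pc(p, k):
    """Right-hand side of Eq. (35)."""
    q = (1.0 - p) ** k
    return q / (2.0 * k * p * (1.0 + k * p * q))

def pc_from_phi(phi, k, tol=1e-10):
    """Invert Eq. (35) by bisection; phi_of_pc decreases monotonically
    from infinity (p -> 0) to zero (p -> 1)."""
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi_of_pc(mid, k) > phi:
            lo = mid            # phi still too large: need larger p
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(pc_from_phi(0.1, 1))      # ~0.81 for phi = 0.1, k = 1
```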
We have also examined numerically the behavior of the mean cluster radius $`\rho `$ for percolation on small-world graphs. The radius of a cluster is defined as the average distance between vertices within the cluster, measured along the edges of the graph within the cluster. This quantity is small for small values of the percolation probability $`p`$ and increases with $`p`$ as the clusters grow larger. When we reach percolation and a giant component forms, it reaches a maximum value and then drops as $`p`$ increases further. The drop happens because the percolating cluster is most filamentary when percolation has only just set in, and so paths between vertices are at their longest. With further increases in $`p`$ the cluster becomes more highly connected and the average shortest path between two vertices decreases. By analogy with percolation on regular lattices we might expect the average cluster radius for a given value of $`\varphi `$ to satisfy the scaling form $$\rho =\mathrm{}^{\gamma /\nu }\stackrel{~}{\rho }\left((p-p_c)\mathrm{}^{1/\nu }\right),$$ (37) where $`\stackrel{~}{\rho }(x)`$ is a universal scaling function, $`\mathrm{}`$ is the radius of the entire system and $`\gamma `$ and $`\nu `$ are critical exponents. In fact this scaling form is not precisely obeyed by the current system because the exponents $`\nu `$ and $`\gamma `$ depend in general on the dimension of the lattice. As we showed in Section V, the dimension $`D`$ of a small-world graph depends on the length-scale on which you look at it. Thus the value of $`D`$ “felt” by a cluster of radius $`\rho `$ will vary with $`\rho `$, implying that $`\nu `$ and $`\gamma `$ will vary both with the percolation probability and with the system size. If we restrict ourselves to a region sufficiently close to the percolation threshold, and to a sufficiently small range of values of $`\mathrm{}`$, then Eq. (37) should be approximately correct. In Fig. 8 we show numerical data for $`\rho `$ for small-world graphs with $`k=1`$, $`\varphi =0.1`$ and $`L`$ equal to a power of two from 512 up to $`\mathrm{16\hspace{0.17em}384}`$. As we can see, the data show the expected peaked form, with the peak in the region of $`p=0.8`$, close to the expected position of the percolation transition. In order to perform a scaling collapse of these data we need first to extract a suitable value of $`p_c`$. We can do this by performing a fit to the positions of the peaks in $`\rho `$. Since the scaling function $`\stackrel{~}{\rho }(x)`$ is (approximately) universal, the positions of these peaks all occur at the same value of the scaling variable $`y=(p-p_c)\mathrm{}^{1/\nu }`$. Calling this value $`y_0`$ and the corresponding percolation probability $`p_0`$, we can rearrange for $`p_0`$ as a function of $`\mathrm{}`$ to get $$p_0=p_c+y_0\mathrm{}^{-1/\nu }.$$ (38) Thus if we plot the measured positions $`p_0`$ as a function of $`\mathrm{}^{-1/\nu }`$, the vertical-axis intercept should give us the corresponding value of $`p_c`$. We have done this for a single value of $`\nu `$ in the inset to Fig. 9, and in the main figure we show the resulting values of $`p_c`$ as a function of $`1/\nu `$. If we now perform our scaling collapse, with the restriction that the values of $`\nu `$ and $`p_c`$ fall on this line, then the best coincidence of the curves for $`\rho `$ is obtained when $`p_c=0.74`$ and $`\nu =0.59\pm 0.05`$—see the inset to Fig. 8.
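The extrapolation in Eq. (38) amounts to a straight-line fit of the measured peak positions against $`\mathrm{}^{-1/\nu }`$. A least-squares sketch, with hypothetical peak positions standing in for the measured ones:

```python
import numpy as np

def extrapolate_pc(ells, p0s, nu):
    """Fit Eq. (38), p0 = p_c + y0 * ell**(-1/nu): a straight line in
    x = ell**(-1/nu) whose vertical-axis intercept estimates p_c."""
    x = np.asarray(ells, dtype=float) ** (-1.0 / nu)
    slope, intercept = np.polyfit(x, np.asarray(p0s, dtype=float), 1)
    return intercept

# Hypothetical peak positions for increasing system radii (illustration):
ells = [60, 120, 240, 480]
p0s = [0.86, 0.83, 0.81, 0.79]
print(extrapolate_pc(ells, p0s, nu=0.59))
```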
The value of $`\gamma `$ can be found separately by requiring the heights of the peaks to match up, which gives $`\gamma =1.3\pm 0.1`$. The collapse is noticeably poorer when we include systems of size smaller than $`L=512`$, and we attribute this not merely to finite size corrections to the scaling form, but also to variation in the values of the exponents $`\gamma `$ and $`\nu `$ with the effective dimension of the percolating cluster. The value $`p_c=0.74`$ is in respectable agreement with the value of $`0.82`$ from our direct numerical measurements. We note that $`\nu `$ is expected to tend to $`\frac{1}{2}`$ in the limit of an infinite-dimensional system. The value $`\nu =0.59`$ found here therefore confirms our contention that small-world graphs have a high effective dimension even for quite moderate values of $`\varphi `$, and thus are in some sense close to being random graphs. (On a two-dimensional lattice by contrast $`\nu =\frac{4}{3}`$.)

## VII Conclusions

In this paper we have studied the small-world network model of Watts and Strogatz, which mimics the behavior of networks of social interactions. Small-world graphs consist of a set of vertices joined together in a regular lattice, plus a low density of “shortcuts” which link together pairs of vertices chosen at random. We have looked at the scaling properties of small-world graphs and argued that there is only one typical length-scale present other than the fundamental lattice constant, which we denote $`\xi `$ and which is roughly the typical distance between the ends of shortcuts. We have shown that this length-scale governs the transition of the average vertex–vertex distance on a graph from linear to logarithmic scaling with increasing system size, as well as the rate of growth of the number of vertices in a neighborhood of fixed radius about a given point. We have also shown that the value of $`\xi `$ diverges on an infinite lattice as the density of shortcuts tends to zero, and therefore that the system possesses a continuous phase transition in this limit. Close to the phase transition, where $`\xi `$ is large, we have shown that the average vertex–vertex distance on a finite graph obeys a simple scaling form and in any given dimension is a universal function of a single scaling variable which depends on the density of shortcuts, the system size and the average coordination number of the graph. We have calculated the form of the scaling function to fifth order in the shortcut density using a series expansion and to third order using a Padé approximant. We have defined two measures of the effective dimension $`D`$ of small-world graphs and find that the value of $`D`$ depends on the scale on which you look at the graph in a manner reminiscent of the behavior of multifractals. Specifically, at length-scales shorter than $`\xi `$ the dimension of the graph is simply that of the underlying lattice on which it is built, and for length-scales larger than $`\xi `$ it increases linearly, with a characteristic constant proportional to $`\xi `$. The value of $`D`$ increases logarithmically with the number of vertices in the graph. We have checked all of these results by extensive numerical simulation of the model and in all cases we find good agreement between the analytic predictions and the simulation results. In the last part of the paper we have looked at site percolation on small-world graphs as a model of the spread of information or disease in social networks.
We have derived an approximate analytic expression for the percolation probability $`p_c`$ at which a “giant component” of connected vertices forms on the graph and shown that this agrees well with numerical simulations. We have also performed extensive numerical measurements of the typical radius of connected clusters on the graph as a function of the percolation probability and shown by performing a scaling collapse that these obey, to a reasonable approximation, the expected scaling form in the vicinity of the percolation transition. The characteristic exponent $`\nu `$ takes a value close to $`\frac{1}{2}`$, indicating that, as far as percolation is concerned, the graph’s properties are close to those of a random graph. ## Acknowledgments We thank Luis Amaral, Alain Barrat, Marc Barthélémy, Roman Kotecký, Marcio de Menezes, Cris Moore, Cristian Moukarzel, Thadeu Penna, and Steve Strogatz for helpful comments and conversations, and Gilbert Strang and Henrik Eriksson for communicating to us some results from their forthcoming paper. This work was supported in part by the Santa Fe Institute and by funding from the NSF (grant number PHY–9600400), the DOE (grant number DE–FG03–94ER61951), and DARPA (grant number ONR N00014–95–1–0975).
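For readers who wish to reproduce the percolation measurements summarized above, a minimal site-percolation sketch for the $`k=1`$ small-world model follows. This is our own illustration, not code from the paper; in particular, the shortcut-placement convention (one candidate shortcut per underlying bond, added with probability $`\varphi `$) is one simple reading of the model:

```python
import random

def largest_cluster_fraction(L, phi, p, rng=random):
    """Site percolation on a k=1 small-world ring: occupy each vertex
    with probability p, join occupied neighbors with a union-find
    structure, and return the largest cluster as a fraction of L."""
    occupied = [rng.random() < p for _ in range(L)]
    edges = [(i, (i + 1) % L) for i in range(L)]        # underlying ring
    edges += [(rng.randrange(L), rng.randrange(L))      # random shortcuts
              for _ in range(L) if rng.random() < phi]
    parent = list(range(L))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for a, b in edges:
        if occupied[a] and occupied[b]:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
    sizes = {}
    for i in range(L):
        if occupied[i]:
            r = find(i)
            sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values(), default=0) / L

# Sweeping p at phi = 0.1 shows the giant component appearing in the
# region of p = 0.8, consistent with the figures described above:
for p in (0.5, 0.7, 0.8, 0.9):
    print(p, largest_cluster_fraction(16384, 0.1, p))
```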
# Shifted Quasi-Symmetric Functions and the Hopf algebra of peak functions ## 1. Introduction Schur $`Q`$ functions first arose in the study of projective representations of $`S_n`$. Since then they have appeared in a variety of contexts, including the representations of Lie superalgebras and cohomology classes dual to Schubert cycles in isotropic Grassmannians. While studying the duality between skew Schur $`P`$ and $`Q`$ functions and their connection to the Schubert calculus of isotropic flag manifolds, we were led to their quasi-symmetric analogues: the *peak functions* of Stembridge. We show that *the linear span of peak functions is a Hopf algebra* (Theorem 2.2). We also show that these peak functions are contained in the strictly larger set of *shifted quasi-symmetric functions* (Theorem 3.6) introduced by Billey and Haiman. We remark that the quasi-symmetric functions here are not any apparent specialization of the quasi-symmetric $`q`$-analogues of Hivert. From extensive calculations, we believe that the set of all shifted quasi-symmetric functions forms a Hopf algebra, but at present we can only show the following: *The set of all shifted quasi-symmetric functions forms a graded coalgebra whose $`n`$th graded component has rank $`\pi _n`$, where $`\pi _n`$ is given by the recurrence* $$\pi _n=\pi _{n-1}+\pi _{n-2}+\pi _{n-4},$$ *with initial conditions $`\pi _1=1`$, $`\pi _2=1`$, $`\pi _3=2`$, $`\pi _4=4`$.* We shall prove this result (Theorems 3.2 and 4.3) and in addition shall establish some other properties of these functions. A composition $`\alpha =[\alpha _1,\alpha _2,\ldots ,\alpha _k]`$ of a positive integer $`n`$ is an ordered list of positive integers whose sum is $`n`$. We denote this by $`\alpha \vDash n`$. We call the integers $`\alpha _i`$ the components of $`\alpha `$, and denote the number of components in $`\alpha `$ by $`k(\alpha )`$. There exists a natural one-to-one correspondence between compositions of $`n`$ and subsets of $`[n-1]`$. If $`A=\{a_1,a_2,\ldots ,a_{k-1}\}\subseteq [n-1]`$, where $`a_1<a_2<\cdots <a_{k-1}`$, then $`A`$ corresponds to the composition $`\alpha =[a_1-a_0,a_2-a_1,\ldots ,a_k-a_{k-1}]`$, where $`a_0=0`$ and $`a_k=n`$. For ease of notation, we shall denote the set corresponding to a given composition $`\alpha `$ by $`I(\alpha )`$. For compositions $`\alpha `$ and $`\beta `$ we say that $`\alpha `$ is a *refinement* of $`\beta `$ if $`I(\beta )\subseteq I(\alpha )`$, and denote this by $`\alpha \succeq \beta `$. For any composition $`\alpha =[\alpha _1,\alpha _2,\ldots ,\alpha _k]`$ we denote by $`M_\alpha `$ the *monomial quasi-symmetric function* $$M_\alpha =\sum _{i_1<i_2<\cdots <i_k}x_{i_1}^{\alpha _1}\cdots x_{i_k}^{\alpha _k}.$$ We define $`M_0=1`$, where $`0`$ denotes the unique empty composition of $`0`$. We denote by $`F_\alpha `$ the *fundamental quasi-symmetric function* $$F_\alpha =\sum _{\beta \succeq \alpha }M_\beta .$$ ###### Definition 1.1. For any subset $`A\subseteq [n-1]`$, let $`A+1`$ be the subset of $`\{2,\ldots ,n\}`$ formed from $`A`$ by adding $`1`$ to each element of $`A`$. Let $`\alpha \vDash n`$. Then we define $$\theta _\alpha =\sum _{\beta \vDash n,\;I(\alpha )\subseteq I(\beta )\cup (I(\beta )+1)}2^{k(\beta )}M_\beta .$$ This is the natural extension of the definition of peak functions given by Stembridge. ###### Example 1.2. We shall often omit the brackets that surround the components of a composition. If $`\alpha =21`$, then $`I(\alpha )=\{2\}`$, and $`I(\alpha )+1=\{3\}`$.
Hence $$\theta _{21}=4M_{21}+4M_{12}+8M_{111}.$$ Let $`\mathrm{\Sigma }^n`$ be the $`\mathbb{Q}`$-module of quasi-symmetric functions spanned by $`\{M_\alpha \}_{\alpha \vDash n}`$ and let $`\mathrm{\Sigma }=\bigoplus _{n\geq 0}\mathrm{\Sigma }^n`$ be the graded $`\mathbb{Q}`$-algebra of quasi-symmetric functions. This is a Hopf algebra with coproduct given by $$\mathrm{\Delta }(M_\alpha )=\sum _{\alpha =\beta \gamma }M_\beta \otimes M_\gamma ,$$ where $`\beta \gamma `$ is the concatenation of compositions $`\beta `$ and $`\gamma `$. ###### Example 1.3. $`\mathrm{\Delta }(M_{32})=1\otimes M_{32}+M_3\otimes M_2+M_{32}\otimes 1`$. We compute the coproduct of the functions $`\theta _\alpha `$. ###### Lemma 1.4. For any composition $`\alpha \vDash n`$ we have that $$\mathrm{\Delta }(\theta _\alpha )=\sum \theta _{\epsilon a}\otimes \theta _{\varphi (b\zeta )}\qquad (1)$$ where the sum is over all ways of writing $`\alpha `$ as $`\epsilon (a+b)\zeta `$, that is, the concatenation of compositions $`\epsilon `$ and $`\zeta `$, and a component of $`\alpha `$ written as the sum of numbers $`a,b\geq 0`$. Also $`\varphi (b\zeta )=[1+\zeta _1,\zeta _2,\ldots ]`$ if $`b=1`$ and $`b\zeta `$ otherwise. We shall use this result to show that certain subsets of functions $`\theta _\alpha `$ span coalgebras (Theorems 2.2 and 3.2). ###### Proof. Definition 1.1 is equivalent to $$\theta _\alpha =\sum _{\beta \vDash n,\;\beta ^{\prime }\succeq \alpha }2^{k(\beta )}M_\beta ,$$ where $`\beta ^{\prime }`$ is the refinement of $`\beta `$ obtained by replacing all components $`\beta _i>1`$, for $`i>1`$, by $`[1,\beta _i-1]`$. Thus the LHS of equation (1) is equal to $$\sum _{\beta \vDash n,\;\beta ^{\prime }\succeq \alpha ,\;\beta =\gamma \delta }2^{k(\beta )}M_\gamma \otimes M_\delta =\sum _{\gamma \delta \vDash n,\;(\gamma \delta )^{\prime }\succeq \alpha }2^{k(\gamma )}M_\gamma \otimes 2^{k(\delta )}M_\delta .\qquad (2)$$ Let $`2^{k(\gamma )}M_\gamma \otimes 2^{k(\delta )}M_\delta `$ be a term of this sum, with $`\gamma \vDash m`$. This term can only appear in one summand on the RHS of equation (1), namely $`\theta _{\epsilon a}\otimes \theta _{\varphi (b\zeta )}`$ with $`\epsilon a\vDash m`$. To show that it does indeed appear, we need to prove that $`\gamma ^{\prime }\succeq \epsilon a`$ and $`\delta ^{\prime }\succeq \varphi (b\zeta )`$. Let $`\delta ^{\prime \prime }`$ be the refinement of $`\delta ^{\prime }`$ obtained by replacing the part $`\delta _1`$ by $`[1,\delta _1-1]`$ if $`\delta _1>1`$. We have that $$\gamma ^{\prime }\delta ^{\prime \prime }=(\gamma \delta )^{\prime }\succeq \epsilon (a+b)\zeta ,$$ which implies that $`\gamma ^{\prime }\succeq \epsilon a`$, and $`\delta ^{\prime \prime }\succeq b\zeta \succeq \varphi (b\zeta )`$. If $`\delta _1=1`$ then $`\delta ^{\prime \prime }=\delta ^{\prime }\succeq \varphi (b\zeta )`$. However, if $`\delta _1>1`$ then there are two possible cases: either $`\delta _1\leq b`$, or $`b=1`$ and $`\delta _1-1\leq \zeta _1`$. In the former case $`\delta ^{\prime }\succeq b\zeta =\varphi (b\zeta )`$, while in the latter, $`\delta _1\leq 1+\zeta _1`$, whence $`\delta ^{\prime }\succeq [1+\zeta _1,\zeta _2,\ldots ]=\varphi (b\zeta )`$. Conversely, let $`2^{k(\gamma )}M_\gamma \otimes 2^{k(\delta )}M_\delta `$ be a term belonging to a tensor $`\theta _{\epsilon a}\otimes \theta _{\varphi (b\zeta )}`$ on the RHS of equation (1). To show that it appears in equation (2) we must prove that $`(\gamma \delta )^{\prime }\succeq \epsilon (a+b)\zeta `$.
We have that $`\gamma ^{\prime }\succeq \epsilon a`$ and $`\delta ^{\prime }\succeq \varphi (b\zeta )`$, which imply that $$(\gamma \delta )^{\prime }=\gamma ^{\prime }\delta ^{\prime \prime }\succeq \gamma ^{\prime }\delta ^{\prime }\succeq \epsilon a\,\varphi (b\zeta ).$$ If $`b>1`$ then $$(\gamma \delta )^{\prime }\succeq \epsilon a\,\varphi (b\zeta )=\epsilon ab\zeta \succeq \epsilon (a+b)\zeta .$$ If $`b=1`$ then $`\delta ^{\prime }\succeq \varphi (b\zeta )=[1+\zeta _1,\zeta _2,\ldots ]`$ implies that $$\delta ^{\prime \prime }=[1,\ldots ]\succeq [1,\zeta _1,\ldots ]=b\zeta .$$ Therefore, $$(\gamma \delta )^{\prime }=\gamma ^{\prime }\delta ^{\prime \prime }\succeq \epsilon ab\zeta \succeq \epsilon (a+b)\zeta $$ as desired. ∎ ## 2. The peak Hopf algebra ###### Definition 2.1. For any composition $`\alpha =[\alpha _1,\alpha _2,\ldots ,\alpha _k]`$ we say that $`\theta _\alpha `$ is a *peak function* if $`\alpha _i=1\Rightarrow i=k`$. Observe that if $`\theta _\alpha `$ is a peak function and $`\alpha \vDash n`$, then $`I(\alpha )\subseteq \{2,\ldots ,n-1\}`$ is such that no two $`i`$ in $`I(\alpha )`$ are consecutive. Let $`\mathrm{\Pi }^n`$ be the $`\mathbb{Q}`$-module spanned by all peak functions $`\theta _\alpha `$, $`\alpha \vDash n`$, and let $`\mathrm{\Pi }=\bigoplus _{n\geq 0}\mathrm{\Pi }^n`$. This was studied by Stembridge, who showed that the peak functions are $`F`$-positive, are closed under product, and form a basis for $`\mathrm{\Pi }`$, and so the rank of $`\mathrm{\Pi }^n`$ is the $`n`$th Fibonacci number. In addition we also know the following about the *algebra of peaks*, $`\mathrm{\Pi }`$. ###### Theorem 2.2. $`\mathrm{\Pi }`$ is closed under coproduct. ###### Proof. If all components of a composition $`\alpha `$, except perhaps the last, are greater than $`1`$, then the same is true for all compositions $`\epsilon a`$ and $`\varphi (b\zeta )`$ appearing in the RHS of equation (1). ∎ Let $`\mathrm{\Theta }`$ be the $`\mathbb{Q}`$-linear map from $`\mathrm{\Sigma }`$ to $`\mathrm{\Pi }`$ defined by $`\mathrm{\Theta }(F_\alpha )=\theta _{\mathrm{\Lambda }(\alpha )}`$, where $`\mathrm{\Lambda }(\alpha )`$ is the composition formed from $`\alpha =[\alpha _1,\alpha _2,\ldots ,\alpha _k]`$ by adding together adjacent components $`\alpha _i,\alpha _{i+1},\ldots ,\alpha _{i+j}`$ where $`\alpha _{i+l}=1`$ for $`l=0,\ldots ,j-1`$, and either $`\alpha _{i+j}\neq 1`$, or $`i+j=k`$. ###### Example 2.3. If $`\alpha =31125111`$ then $`\mathrm{\Lambda }(\alpha )=3453`$. Stembridge showed that $`\mathrm{\Theta }:\mathrm{\Sigma }\to \mathrm{\Pi }`$ is a graded surjective ring homomorphism, and is an analogue of the retraction from the algebra of symmetric functions to Schur $`Q`$ functions. It is clear from our proof above that this morphism is in fact a Hopf homomorphism. We can describe the kernel of $`\mathrm{\Theta }`$ as follows. ###### Lemma 2.4. The non-zero differences $`F_\alpha -F_{\mathrm{\Lambda }(\alpha )}`$ form a basis of the kernel of $`\mathrm{\Theta }`$. ###### Proof. Each difference $`F_\alpha -F_{\mathrm{\Lambda }(\alpha )}`$ is in the kernel of $`\mathrm{\Theta }`$ as $`\mathrm{\Theta }(F_\alpha -F_{\mathrm{\Lambda }(\alpha )})=0`$, since $`\mathrm{\Lambda }(\mathrm{\Lambda }(\alpha ))=\mathrm{\Lambda }(\alpha )`$. In addition, the non-zero differences are linearly independent as they have different leading terms. Letting $`f_n`$ denote the $`n`$th Fibonacci number, there are $`2^{n-1}-f_n`$ such differences, and since $$\mathrm{dim}\,\mathrm{ker}\,\mathrm{\Theta }=\mathrm{dim}\,\mathrm{\Sigma }^n-\mathrm{dim}\,\mathrm{\Pi }^n=2^{n-1}-f_n,$$ our result follows. ∎ ## 3. The coalgebra of shifted quasi-symmetric functions ###### Definition 3.1.
For any composition $`\alpha =[\alpha _1,\alpha _2,\ldots ,\alpha _k]\vDash n`$ we say that $`\theta _\alpha `$ is a *shifted quasi-symmetric function* (sqs-function) if $`n\leq 1`$ or $`\alpha _1>1`$. Observe that if $`\theta _\alpha `$ is an sqs-function and $`\alpha \vDash n`$, then $`I(\alpha )\subseteq \{2,\ldots ,n-1\}`$. For integers $`n\geq 0`$, let $`\mathrm{\Xi }^n`$ be the $`\mathbb{Q}`$-module spanned by all sqs-functions $`\theta _\alpha `$, $`\alpha \vDash n`$, and let $`\mathrm{\Xi }=\bigoplus _{n\geq 0}\mathrm{\Xi }^n`$. ###### Theorem 3.2. $`\mathrm{\Xi }`$ is closed under coproduct. ###### Proof. If the first component of a composition $`\alpha `$ is greater than $`1`$, then the same is true for all compositions $`\epsilon a`$ and $`\varphi (b\zeta )`$ appearing in the RHS of equation (1). ∎ Unlike peak functions, sqs-functions are not $`F`$-positive, since $$\theta _{211}=F_{22}+F_{112}+2F_{121}+F_{211}-F_{1111}.$$ ###### Definition 3.3. For any composition $`\alpha \vDash n`$, we define the complement $`\alpha ^c`$ of $`\alpha `$ to be the composition for which $`I(\alpha ^c)=(I(\alpha ))^c`$, the set complement of $`I(\alpha )`$ in $`[n-1]`$. We define the graph $`G(\alpha )`$ of $`\alpha `$ to be the graph obtained from the path on the vertices $`1,2,\ldots ,n`$ by removing the edge $`(i,i+1)`$ if and only if $`i\in I(\alpha )`$. Observe that $`G(\alpha ^c)`$ contains the edge $`(i,i+1)`$ if and only if this edge is not contained in $`G(\alpha )`$. These graphs will be used later to simplify the proof of Theorem 3.6. Let a *word* of length $`n`$ be any $`n`$-tuple $`w_1w_2\ldots w_n`$, and let a *binary word* of length $`n`$ be a word $`w_1w_2\ldots w_n`$ such that $`w_i\in \{0,1\}`$ for all $`i`$. For $`2\leq i\leq n-1`$, let us denote by $`3^{(i)}`$ the composition $`[1^{i-2},3,1^{n-i-1}]`$ of $`n`$. For some subset $`S\subseteq \{2,\ldots ,n-1\}`$, let us denote by $`\bigvee _{i\in S}3^{(i)}`$ the composition of $`n`$ for which $`G(\bigvee _{i\in S}3^{(i)})`$ has an edge between vertices $`i`$ and $`i+1`$ if and only if an edge exists between vertices $`i`$ and $`i+1`$ in $`G(3^{(j)})`$ for some $`j\in S`$. ###### Example 3.4. Let $`S=\{2,3\}\subseteq [3]`$. Then $`G(3^{(2)})`$ is the path on four vertices with the edge $`(3,4)`$ removed, and $`G(3^{(3)})`$ is the path with the edge $`(1,2)`$ removed; hence $`G(\bigvee _{i\in S}3^{(i)})`$ is the full path on four vertices, so $`\bigvee _{i\in S}3^{(i)}`$ is the composition $`4`$. ###### Definition 3.5. Let $`\alpha `$ be a composition of $`n`$. Let $`𝒜(I(\alpha ))`$ denote the set of all sequences $`j_1\leq j_2\leq \cdots \leq j_n`$ in $`\mathbb{N}`$ such that we do not have $`j_{i-1}=j_i=j_{i+1}`$ for any $`i\in I(\alpha )`$. The *shifted quasi-symmetric function* $`\theta _\alpha ^{BH}`$ is given by $$\theta _\alpha ^{BH}=\sum _{J=(j_1,\ldots ,j_n),\;j_1\leq \cdots \leq j_n,\;J\in 𝒜(I(\alpha ))}2^{|j|}x_{j_1}\cdots x_{j_n},$$ where $`|j|`$ denotes the number of distinct values $`j_i`$ in $`J`$. ###### Theorem 3.6. For any sqs-function $`\theta _\alpha `$ we have that $`\theta _\alpha =\theta _\alpha ^{BH}`$. ###### Proof. For each $`i\in I(\alpha )\subseteq [n-1]`$, $`j_{i-1}=j_i=j_{i+1}`$ is forbidden in any monomial $$x_{j_1}x_{j_2}\cdots x_{j_i}\cdots x_{j_n}$$ appearing as a summand of the function $`\theta _\alpha ^{BH}`$. This is equivalent to saying that $`M_\beta `$ is a summand of $`\theta _\alpha ^{BH}`$ if and only if $`G(3^{(i)})\nsubseteq G(\beta )`$ for all $`i\in I(\alpha )`$. Therefore at least one of $`i-1`$ or $`i`$ must be the largest label of a vertex in a connected component in $`G(\beta )`$. Now when going from compositions of $`n`$ to subsets of $`[n-1]`$ we can do so using our graphs $`G`$. All we have to do is list the label of the vertex that is the largest in each connected component, not listing $`n`$. We call these vertices the *end-points*.
We are now in a position to prove the equivalence of Definitions 1.1 and 3.5 for sqs-functions. The powers of $`2`$ agree, so we need only show that the indices of summation do too. To see this, take any sqs-function $`\theta _\alpha `$ and let $`i\in I(\alpha )`$. Then $`M_\beta `$ is a summand in $`\theta _\alpha ^{BH}`$ if at least one of $`i-1`$ or $`i`$ is an end-point in $`G(\beta )`$. Therefore $`i`$ or $`i-1`$ belongs to $`I(\beta )`$, and $`M_\beta `$ is a summand of $`\theta _\alpha `$. Conversely, if $`M_\beta `$ is a summand of $`\theta _\alpha `$, then this implies that for each $`i\in I(\alpha )`$ we have that $`i-1`$ or $`i`$ belongs to $`I(\beta )`$, so one of $`i-1`$ or $`i`$ is an end-point in $`G(\beta )`$, and so $`M_\beta `$ is a summand of $`\theta _\alpha ^{BH}`$. ∎ ## 4. A basis for $`\mathrm{\Xi }`$ ###### Definition 4.1. Let $`\theta _\alpha `$ be an sqs-function and $`\alpha \vDash n`$. We define an *internal peak* to be an $`i\in I(\alpha )`$ such that $`i-1,i+1\notin I(\alpha )`$ and $`i\in \{3,\ldots ,n-2\}`$. Remark. Observe that the occurrence of an internal peak in the $`i`$th position in $`I(\alpha )=\{w_1,w_2,\ldots \}`$, where $`w_1<w_2<\cdots `$, is equivalent to having two components of $`\alpha `$, say $`\alpha _i,\alpha _{i+1}`$, such that $`\alpha _{i+1}\geq 2`$, and $`\alpha _i\geq 2`$ if $`i\neq 1`$, or $`\alpha _i\geq 3`$ if $`i=1`$. We can now describe the basis of $`\mathrm{\Xi }`$ as follows. ###### Theorem 4.2. The coalgebra $`\mathrm{\Xi }`$ has a basis consisting of all sqs-functions $`\theta _\alpha `$ where $`I(\alpha )`$ contains no internal peak. We sketch the proof of Theorem 4.2 later. ###### Theorem 4.3. The rank of $`\mathrm{\Xi }^n`$ is given by the recurrence $$\pi _n=\pi _{n-1}+\pi _{n-2}+\pi _{n-4},$$ with initial conditions $`\pi _1=1`$, $`\pi _2=1`$, $`\pi _3=2`$, $`\pi _4=4`$. This recurrence was suggested by a superseeker query. ###### Proof. By direct calculation we obtain that $`\pi _1=1`$, $`\pi _2=1`$, $`\pi _3=2`$, and $`\pi _4=4`$. To obtain our recurrence, we observe that for each sqs-function $`\theta _\alpha `$, where $`\alpha \vDash n`$, we can encode $`I(\alpha )`$ as a binary word of length $`n-2`$, by placing a $`1`$ in position $`i-1`$ if $`i`$ is contained in $`I(\alpha )`$, and $`0`$ otherwise. By this one-to-one correspondence we see that $`I(\alpha )`$ contains no internal peak if and only if its corresponding binary word does not contain $`010`$ as a subword. We therefore count binary words that avoid the subword $`010`$. Appending either $`1`$ or $`0`$ to such a binary word of length $`n-1`$ gives one of length $`n`$, provided that we have not created the subword $`010`$ in the last three positions. Let $`a_n`$, $`b_n`$, $`c_n`$, and $`d_n`$ enumerate those binary words of length $`n-2`$ that avoid the subword $`010`$ and end in, respectively, $`00`$, $`01`$, $`10`$, and $`11`$. We then obtain the following four simultaneous recursions.
$$a_n=a_{n-1}+c_{n-1},\quad b_n=a_{n-1}+c_{n-1},\quad c_n=d_{n-1},\quad d_n=b_{n-1}+d_{n-1}.$$ Clearly the number of $`I(\alpha )`$'s in $`[n-1]`$ with no internal peaks is given by $$\pi _n=a_n+b_n+c_n+d_n.$$ However, by substituting in our recurrences we obtain $$\begin{array}{rl}\pi _n&=a_n+b_n+c_n+d_n\\ &=2a_{n-1}+b_{n-1}+2c_{n-1}+2d_{n-1}\\ &=\pi _{n-1}+a_{n-1}+c_{n-1}+d_{n-1}\\ &=\pi _{n-1}+a_{n-2}+b_{n-2}+c_{n-2}+2d_{n-2}\\ &=\pi _{n-1}+\pi _{n-2}+d_{n-2}\\ &=\pi _{n-1}+\pi _{n-2}+b_{n-3}+d_{n-3}\\ &=\pi _{n-1}+\pi _{n-2}+a_{n-4}+b_{n-4}+c_{n-4}+d_{n-4}\\ &=\pi _{n-1}+\pi _{n-2}+\pi _{n-4}.\end{array}$$ ∎ We say that $`M_\beta `$ is a maximal term of $`\theta _\alpha `$ if for any $`\gamma `$ higher in the partial order of compositions, $`M_\gamma `$ is not a summand of $`\theta _\alpha `$. The following lemma is stated without proof. ###### Lemma 4.4. Let $`\theta _\alpha `$ be an sqs-function. Consider the collection $`S`$ of all possible sets derived from $`I(\alpha )`$ by adding either $`i-1`$ or $`i+1`$ to $`I(\alpha )`$ for all internal peaks $`i\in I(\alpha )`$. If $`M_\beta `$ is a maximal term of $`\theta _\alpha `$, then $`\beta `$ is derived from $$\bigvee _{I(\tilde{\alpha })\in S,\;i\in (I(\tilde{\alpha }))^c}3^{(i)}$$ by adding adjacent components equal to $`1`$ together to give a component equal to $`2`$ as often as possible. ###### Lemma 4.5. Let $`\theta _\alpha `$ be an sqs-function, and let $`I(\alpha )`$ have an internal peak in the $`j`$th position; then we have the following linear relation: $$\theta _\alpha =\theta _{[\alpha _1,\ldots ,\alpha _j-1,1,\alpha _{j+1},\ldots ,\alpha _k]}+\theta _{[\alpha _1,\ldots ,\alpha _j,1,\alpha _{j+1}-1,\ldots ,\alpha _k]}-\theta _{[\alpha _1,\ldots ,\alpha _j-1,1,1,\alpha _{j+1}-1,\ldots ,\alpha _k]}.$$ ###### Proof. By Definition 3.5 we have that the leading terms of $`\theta _\alpha `$ determine the other summands that belong to $`\theta _\alpha `$. Hence by Lemma 4.4 it follows that the summands of $`\theta _\alpha `$ will be the union of the summands of $`\theta _{[\alpha _1,\ldots ,\alpha _j-1,1,\alpha _{j+1},\ldots ,\alpha _k]}`$ and $`\theta _{[\alpha _1,\ldots ,\alpha _j,1,\alpha _{j+1}-1,\ldots ,\alpha _k]}`$. However, those summands that appear in both will be duplicated. By definition these will be the summands of $`\theta _{[\alpha _1,\ldots ,\alpha _j-1,1,1,\alpha _{j+1}-1,\ldots ,\alpha _k]}`$, and the result follows. ∎ Sketch of proof of Theorem 4.2. From our relation in Lemma 4.5, it follows that any $`\theta _\alpha `$ can be rewritten as a linear combination of functions $`\theta _{\tilde{\alpha }}`$, where $`I(\tilde{\alpha })`$ contains no internal peaks. In addition, by Lemma 4.4 and Definition 3.5 we have that the set of all sqs-functions $`\theta _\alpha `$ where $`I(\alpha )`$ contains no internal peaks is linearly independent and thus forms a basis for $`\mathrm{\Xi }`$. ∎
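Both the expansion of Definition 1.1 and the counting argument above are easy to check by machine. The following Python sketch is ours, not part of the paper (brute force only, suitable for small $`n`$); it verifies Example 1.2 and tests the recurrence of Theorem 4.3 against a direct count of binary words avoiding $`010`$:

```python
from itertools import combinations

def I(alpha):
    """Subset of [n-1] encoding the composition alpha (partial sums)."""
    s, out = 0, set()
    for part in alpha[:-1]:
        s += part
        out.add(s)
    return out

def compositions(n):
    """All compositions of n, one per subset of {1, ..., n-1}."""
    for r in range(n):
        for cut in combinations(range(1, n), r):
            cuts = (0,) + cut + (n,)
            yield tuple(cuts[i + 1] - cuts[i] for i in range(len(cuts) - 1))

def theta(alpha):
    """Map beta -> 2^{k(beta)} over the summands of theta_alpha (Def. 1.1)."""
    ia = I(alpha)
    return {beta: 2 ** len(beta) for beta in compositions(sum(alpha))
            if ia <= I(beta) | {i + 1 for i in I(beta)}}

# Example 1.2: theta_21 = 4 M_21 + 4 M_12 + 8 M_111.
assert theta((2, 1)) == {(2, 1): 4, (1, 2): 4, (1, 1, 1): 8}

def pi_bruteforce(n):
    """Binary words of length n-2 avoiding the subword 010 (Theorem 4.3)."""
    if n <= 2:
        return 1
    return sum("010" not in format(w, "0{}b".format(n - 2))
               for w in range(2 ** (n - 2)))

pi = {1: 1, 2: 1, 3: 2, 4: 4}
for n in range(5, 16):
    pi[n] = pi[n - 1] + pi[n - 2] + pi[n - 4]
    assert pi[n] == pi_bruteforce(n)
```

The assertions pass for all $`n`$ tested, matching the ranks $`\pi _n`$ quoted above.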
# Nuclear Transport at Low Excitations ## Abstract Numerical computations of transport coefficients at low temperatures are presented for shapes typically encountered in nuclear fission. The influence of quantum effects of the nucleonic degrees of freedom is examined, with pair correlations included. Consequences for global collective motion are studied for the case of the decay rate. The range of temperatures is specified above which this motion may be described as a quantal diffusion process. PACS numbers: 05.60.Gg, 24.10.Pa, 24.75.+i, 25.70.Ji Phys. Rev. Lett. 82 (1999) 4603 In the past decade much progress has been made in the understanding of nuclear transport phenomena in the regime of not too low temperatures, say between 1 and 5 MeV (with $`k_B=1`$). Such a situation is reached if two heavy ions collide at an energy above the Coulomb barrier, but where the excess energy per particle still is small compared to the Fermi energy. In this regime the dynamics of the composite system may be parameterized in terms of shape variables. Of particular interest is the outgoing channel, which is dominated by fission and the emission of light particles and $`\gamma `$'s. It has been possible experimentally to deduce valid information on the time scale of collective motion, and, hence, on the size of nuclear dissipation. These experiments suggest collective motion to be over-damped, possibly providing an answer to the question raised by Kramers as early as 1940 in his seminal paper, namely whether nuclear friction is “abnormally small or abnormally large”. Nowadays such processes are described theoretically in terms of the Langevin equation, which is understood to be equivalent to Kramers' original equation (of Fokker-Planck type) for the density in collective phase space. On general grounds, it may be anticipated that the magnitude of nuclear dissipation will vary with excitation. Indeed, there are experimental indications for such a conjecture. At small thermal excitations the dynamics is governed by the (real) mean field, for which there is no room for damping of slow collective motion like fission. At larger $`T`$ coupling to more complicated configurations sets in, which causes transfer of energy from the collective degrees of freedom $`Q_\mu `$ to the nucleonic ones $`x_i`$. Within the linear response approach the effects of this coupling are accounted for by dressing the single particle energies by complex self-energies depending both on frequency and $`T`$. Approximating its imaginary part by a constant proportional to $`T^2`$, friction will again decrease with $`T`$, once the microscopic damping becomes so large that one may speak of “collision dominance”. At intermediate temperatures there might be the intricate contribution to friction from the “heat pole”, which has been seen to be large for non-ergodic systems, and which has a dramatic influence on the $`T`$-dependence. In the present Letter we want to focus on very low excitations, say in the range below $`T\approx 1\,\mathrm{MeV}`$. This regime not only is of great practical importance, as for the production of super-heavy elements, for instance; it is also of great theoretical interest. First of all, there is little doubt that in this domain quantum effects dominate nucleonic dynamics, and the transport coefficients will strongly be influenced by shell effects and pair correlations. One even must expect quantum features to be present for collective motion, for instance as corrections to Kramers' formula for the decay rate.
Often quantal approaches are based on the functional integral method applied to simplified Hamiltonians of the Caldeira-Leggett type (for a review see). There, the bath degrees of freedom $`x_i`$ are represented by a set of oscillators of fixed frequencies, with a bilinear coupling between the $`x_i`$ and the collective variable $`Q`$. The decay rate is calculated for imaginary time propagation. Both features hardly can be taken over to nuclear fission. First of all, the simplest constraint to warrant self-consistency between the mean field and the shape of the nuclear density requires the former to change with $`Q`$. This aspect alone makes it very difficult to work with a (pre-fixed) Hamiltonian for the total system of all degrees of freedom. Moreover, the temperature, which one may define for the fast degrees of freedom (supposedly given by the “nucleonic” ones), is subject to changes with $`Q`$ as well as with time. The latter feature occurs because of the evaporation of particles mentioned before. For these reasons a formulation with real time propagation is much more appropriate. This has been achieved by a suitable application of linear response theory on the basis of a locally harmonic approximation (LHA) (for a review see). One exploits the concept of propagators which move the system forward in collective phase space by small time steps. As the individual ones only cover small areas, they may be represented by (multi-dimensional) Gaussians. The latter satisfy an equation of motion whose structure is similar to that of Kramers, with only the diffusive terms being modified to account for quantum effects. The following study is based on numerical calculations of transport coefficients for average motion, namely friction $`\gamma `$, inertia $`M`$ and local stiffness $`C`$, more precisely of those ratios which determine transport in phase space, $$\mathrm{\Gamma }_\gamma =\frac{\gamma }{M},\qquad \varpi ^2=\frac{C}{M},\qquad \eta =\frac{\mathrm{\Gamma }_\gamma }{2\varpi }=\frac{\gamma }{2\sqrt{MC}}$$ (1) Their knowledge will allow us to examine the implications for the diffusion coefficients and, hence, for transport processes like fission. It would be most desirable that such information be used as input for computational codes that solve Fokker-Planck or Langevin equations. In this way one would be able to examine in more detail the role of shell effects, which are known to produce structure not only in the static energy but in the inertial and frictional forces as well. To simplify matters, for the present purpose we will look at the more schematic case where the system's energy exhibits just one minimum and one barrier, at $`Q_a`$ and $`Q_b`$, respectively. The stiffnesses and the barrier height are found from a Strutinsky calculation of the free energy. Finer details of shell effects are removed both from the potential as well as from the transport coefficients by applying a suitable smoothing over a small region of the collective variable around $`Q_a`$ and $`Q_b`$. Suppose we may at first discard any quantum effects in the collective degrees of freedom, which amounts to looking at the “high temperature limit”, for which Kramers' equation applies. The temperatures we have in mind are always small compared to the barrier height, $`T\ll E_b`$. For given $`T`$, but as a function of the $`\eta _b`$ (at the barrier), the $`R_K(\eta _b,T)`$ increases first, to decrease after it has reached a maximal value (see e.g.).
The decreasing branch is represented well by Kramers' “high viscosity limit” $$R_K=\frac{\omega _a}{2\pi }\left(\sqrt{1+\eta _b^2}-\eta _b\right)\mathrm{exp}(-E_b/T)$$ (2) which is valid for $`\eta _b\gtrsim T/(2E_b)`$ (see e.g.). If blindly extended down to $`\eta _b=0`$, this form of $`R_K`$ reaches a value typical for a simple transition state model, $`r_{TST}\equiv r(\eta _b=0)`$ (Bohr-Wheeler formula). Rather, for very small $`\eta _b`$, one ought to apply Kramers' “low viscosity limit”, given by $`R_K^{l.v.}=\mathrm{\Gamma }_\gamma ^b(E_b/T)\mathrm{exp}(-E_b/T)`$. For nuclear physics the latter has not played any role yet, as $`\eta `$ is believed to lie above the limit given below (2). This should be the case at temperatures above $`1\,\mathrm{MeV}`$. Moreover, the $`\varpi `$ does not change much with $`T`$. A value of about $`1\,\mathrm{MeV}/\hbar `$ was found both at the potential minimum as well as at the saddle. It so turns out that this feature is more or less recovered even at smaller $`T`$, say within an accuracy of the order of $`20\%`$, which may be good enough for the following discussion. More drastic modifications are expected, and indeed seen, for dissipation. To study this behavior, the $`\gamma ,\mathrm{\Gamma }_\gamma `$ and $`\eta `$ have been calculated on the basis of the same deformed shell model as before, but with pair correlations included. The transformation from independent particles to quasi-particles of BCS-type is standard. For our purpose the common procedure does not suffice, however. As mentioned previously, a decent and sensible description of nuclear dissipation needs to account for “collisions”. At low thermal excitations their effects, too, will strongly be influenced by pair correlations. Look at the extreme case of zero temperature and take an even-even system for the sake of simplicity. Then there will be no quasi-particle states within twice the gap energy $`2\mathrm{\Delta }`$. Hence, the imaginary part of the self-energy $`\mathrm{\Gamma }(\hbar \omega ,\mathrm{\Delta },T=0)`$ must be zero at least within such a range of frequencies $`\hbar \omega `$. Hence, at $`T=0`$ friction will strictly vanish for any collective motion whose frequency $`\omega =\varpi `$ lies in that range, i.e. $`\hbar \omega \leq 2\mathrm{\Delta }`$. Extrapolating from the case of $`T=0`$ to finite $`T`$, we should expect the function $`\gamma =\gamma (T)`$ to have a step-like behavior. This dependence then goes through to that of $`\mathrm{\Gamma }_\gamma `$ and $`\eta `$, albeit the inertia, too, is influenced by pairing. Let us demonstrate these features on the basis of numerical calculations performed for the example of $`{}^{224}\mathrm{Th}`$. For details we have to refer to the original work, but we may mention that the $`\mathrm{\Gamma }(\hbar \omega ,\mathrm{\Delta },T)`$ has been calculated along the lines suggested there. In Fig.1 we display the $`\eta (T)`$'s at the minimum and the barrier. They have been obtained for a $`\mathrm{\Delta }=\mathrm{\Delta }(T)`$ as determined by the gap equation. Unfortunately, so far it has not been possible to calculate the underlying response function $`\chi (\omega )`$ in full glory. Rather, when evaluating the necessary folding integrals over frequency, the correct width had to be approximated by a constant, calculated at the Fermi energy $`\mu `$: $`\mathrm{\Gamma }(\hbar \omega ,\mathrm{\Delta },T)=\mathrm{\Gamma }(\hbar \omega =\mu ,\mathrm{\Delta },T)`$.
Indeed, in this regime of small $`\omega `$, and for $`T\gtrsim T_{pair}`$ where pair correlations disappear, such an estimate may be considered to represent the correct width well enough to allow for a general analysis. Evidently, the values of $`\eta `$ obtained for $`\mathrm{\Delta }\neq 0`$ fall well below those of the unpaired case, shown here by the dashed lines. The most important features exhibited in Fig.1 may be summarized as follows, together with the consequences for Kramers' decay rate: (1) The step-like function mentioned before is clearly visible. (2) Below $`T\approx T_{pair}\approx 0.5\,\mathrm{MeV}`$ the effective damping rate $`\eta `$ is smaller than about $`0.1`$. (3) As seen in Fig.1, $`\eta `$ may fall below $`T/(2E_b)`$, such that formula (2) no longer applies. (4) Up to $`T\approx 1\,\mathrm{MeV}`$, $`\eta `$ stays below $`0.2`$ at the minimum and below $`0.3`$ at the barrier. The latter value implies that the rate may be approximated fairly well by the transition state value $`r_{TST}\equiv R_K(\eta =0)`$, see (2). (5) These values of $`\eta `$ are much smaller than those one gets within “macroscopic models”, say in terms of a combination of wall friction with the stiffness and inertia of the liquid drop model (with irrotational flow). In Fig.2 we plot the ratio of Kramers' pre-factor $`f_K=(\sqrt{1+\eta _b^2}-\eta _b)`$ obtained from our results to that of the macroscopic limit just described. It is seen that the latter underestimates the decay rate by about a factor of 10. Quantum corrections will increase this deviation further, indicated here by the dashed curve. Within the LHA, these quantum corrections come in through the diffusion coefficients, as given by the fluctuation dissipation theorem. It is only at temperatures above $`2\,\mathrm{MeV}`$ that one may safely assume the classic Einstein relation $`D_{pp}=\gamma T`$ to be valid. In the general case, in addition to the $`D_{pp}`$ there is a cross term $`D_{qp}`$, both of which depend in a non-linear way on combinations of $`M,\gamma `$ and $`C`$, or on the parameters introduced in (1). The diffusion coefficients behave very differently for stable and for unstable modes. To demonstrate this feature let us look at the limit of small dissipation $`\eta \ll 1`$. To lowest order in $`\gamma `$ one gets $$D_{qp}=0,\qquad D_{pp}=\gamma T^{*}\quad \mathrm{with}\quad T^{*}(\varpi )=\frac{\hbar \varpi }{2}\,\mathrm{coth}\left(\frac{\hbar \varpi }{2T}\right)$$ (3) with $`\varpi =\sqrt{C/M}=|\varpi |`$ for $`C>0`$ and $`\varpi =i|\varpi |`$ for $`C<0`$. The form (3) may be said to represent the correct behavior fairly well below $`\eta \approx 0.1`$ (see Fig. 3.4.2 of the review quoted above). From the results shown above for $`\eta (T)`$, one may thus argue the relation (3) to be acceptable for temperatures below $`T_{pair}`$, whereas deviations must be expected for $`T\gtrsim T_{pair}`$. For $`C<0`$ and weak friction the diffusion coefficient $`D_{pp}`$ falls below the values given by the Einstein relation. It quickly drops to zero at a critical temperature $`T_c`$, below which the $`D_{pp}`$ would become negative and the diffusion equation would lose its meaning. The value of this $`T_c`$ decreases with increasing $`\eta `$, such that the form given in (3) delivers an upper limit, and we may write $$T_c\leq T_c(\eta =0)=\hbar \varpi _b/\pi \qquad T_c<T_{pair}$$ (4) The statement on the right is reached assuming the $`\hbar \varpi _b`$ to be of the order of $`1\,\mathrm{MeV}`$ and taking the value for $`T_{pair}`$ as reported above, together with the fact that below $`T_{pair}`$ the damping rate $`\eta `$ falls below $`0.1`$.
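The behavior of $`D_{pp}`$ near $`T_c`$ is easy to make explicit numerically. The following sketch is our illustration (in units $`k_B=1`$, energies in MeV, and with an assumed $`\hbar \varpi =1\,\mathrm{MeV}`$); it evaluates the effective temperature $`T^{*}`$ of Eq. (3), which for an unstable mode vanishes, and with it $`D_{pp}`$, at the $`T_c=\hbar \varpi _b/\pi `$ of Eq. (4):

```python
import math

HBAR_OMEGA = 1.0  # MeV; assumed value of hbar*|varpi|, as quoted in the text

def T_star(T, stable=True, hw=HBAR_OMEGA):
    """Effective temperature of Eq. (3), so that D_pp = gamma * T_star.
    For C < 0 (varpi = i|varpi|) coth turns into cot; the result is
    meaningful only above T_c = hw/pi, below which it turns negative."""
    x = hw / (2.0 * T)
    return (hw / 2.0) / (math.tanh(x) if stable else math.tan(x))

T_c = HBAR_OMEGA / math.pi  # about 0.32 MeV for hbar*varpi_b = 1 MeV
for T in (2.0, 1.0, 0.6, 0.4, 1.001 * T_c):
    print("T = %.3f   T*_barrier = %.4f" % (T, T_star(T, stable=False)))
```

For the stable mode $`T^{*}\to T`$ at high temperature, recovering the Einstein relation, while for the unstable mode the printed values drop to zero as $`T\to T_c`$, illustrating why a diffusion description cannot be continued below this point.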
Commonly, the quantum corrections to Kramers' decay rate are expressed by a factor $`f_Q`$ appearing in the correct rate as $`R=f_QR_K`$ (see e.g.). As shown in the literature, this form may also be obtained within the LHA. This derivation is based on the assumption that in the neighborhood of the potential minimum friction is large enough to ensure sufficient relaxation inside the well. The same assumption is behind Kramers' “high viscosity limit”, which we have just convinced ourselves to be given in the range of temperatures at $`T_{pair}`$ and above. Moreover, this assumption turns out to be necessary also when the problem is formulated and solved with path integrals in real time propagation (see e.g.). The $`f_Q`$ can be expressed by a ratio of two partition sums: $`f_Q=|𝒵_b|/𝒵_a`$, where the one associated to the barrier has to be defined by analytic continuation. The $`𝒵`$ of a damped oscillator can be calculated from the equilibrium fluctuations of momentum and coordinate. Hence, within the LHA it might be expressed by the diffusion coefficients. Unfortunately, for $`\gamma \neq 0`$ a calculation of the momentum fluctuation requires regularization, for instance by introducing a frequency dependent friction coefficient (Drude regularization). To get a fairly simple estimate of $`f_Q`$ and its $`T`$-dependence we used the following formula (with $`\hbar \omega _n=2\pi nT`$) $$f_Q=\prod _{n=1}^{\mathrm{\infty }}\frac{\omega _n^2+\omega _n\overline{\mathrm{\Gamma }_\gamma }+\varpi _a^2}{\omega _n^2+\omega _n\overline{\mathrm{\Gamma }_\gamma }-\varpi _b^2}\quad \mathrm{with}\quad \overline{\mathrm{\Gamma }_\gamma }=\frac{\mathrm{\Gamma }_\gamma ^a+\mathrm{\Gamma }_\gamma ^b}{2}$$ (5) It may be noted in passing, (a) that without (Drude) regularization this formula would diverge for $`\mathrm{\Gamma }_\gamma ^a\neq \mathrm{\Gamma }_\gamma ^b`$, and (b) that problems of this type are absent for the Caldeira-Leggett approach, where the transport coefficients do not change with the collective variable; generalizations are possible, though, for instance by introducing variable coefficients phenomenologically (see e.g.). The result of a numerical evaluation of (5) within our theory is shown in Fig.2 by the dashed curve. This graph demonstrates several features valid in this range of temperatures: (i) The quantum effects in the collective motion may change the decay rate by about 30% or less. (ii) Already at $`T=1\,\mathrm{MeV}`$ they only amount to about 10%. (iii) More important are the quantum effects of nucleonic motion, which are responsible for the deviation of the transport coefficients from the macroscopic models. Unfortunately, it is not possible to carry the analysis further down to smaller temperatures. Below $`T_c`$ [recall (4)] we have seen the LHA break down. It may be said that this problem generally appears in real time formulations. Within the functional integral approach this has been demonstrated and traced back to the harmonic approximation to the barrier. The problem at stake here is a very severe one for any application of Langevin or Fokker-Planck equations to nuclear physics. Both methods allow one to account for various non-linear effects, as manifested by variable transport coefficients, for instance, but both of them rely on “real time propagation”. Moreover, practical computer programs exploit locally harmonic approximations, in one way or another, such that it is not possible to even define meaningful diffusion coefficients below $`T_c`$.
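A truncated evaluation of the Matsubara product (5), combined with the Kramers rate (2), is straightforward. The sketch below is ours, with purely illustrative parameter values (units $`\hbar =k_B=1`$, energies in MeV) and with $`\mathrm{\Gamma }_\gamma =2\eta \varpi `$ taken from the definitions in Eq. (1):

```python
import math

def f_Q(varpi_a, varpi_b, gamma_bar, T, nmax=20000):
    """Quantum correction factor of Eq. (5), with omega_n = 2*pi*n*T.
    The factors tend to 1 as 1/n**2, so truncating the product is safe."""
    out = 1.0
    for n in range(1, nmax + 1):
        w = 2.0 * math.pi * n * T
        out *= (w * w + w * gamma_bar + varpi_a ** 2) / \
               (w * w + w * gamma_bar - varpi_b ** 2)
    return out

def kramers_rate(omega_a, eta_b, E_b, T):
    """High-viscosity rate of Eq. (2)."""
    return (omega_a / (2 * math.pi)) * (math.hypot(1.0, eta_b) - eta_b) \
        * math.exp(-E_b / T)

# Illustrative numbers: varpi_a = varpi_b = 1 MeV, eta = 0.3, E_b = 8 MeV.
eta = 0.3
gamma_bar = 2.0 * eta * 1.0   # mean Gamma_gamma over minimum and barrier
for T in (2.0, 1.0, 0.6):
    print(T, f_Q(1.0, 1.0, gamma_bar, T), kramers_rate(1.0, eta, 8.0, T))
```

With these inputs $`f_Q`$ exceeds unity by only a few percent at $`T=1\,\mathrm{MeV}`$ and grows toward smaller $`T`$, in line with points (i) and (ii) above.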
In this context we should like to mention the possibility of calculating and exploiting a Feynman-Vernon functional for global motion on the basis of Random Matrix Theory. In principle, this might allow one to study quantum effects, but it is somewhat questionable whether this procedure will be applicable to nuclear physics at those temperatures where the quantum effects we just mentioned become important. This concern has essentially two reasons (discarding for the moment the very fact that this model, too, has difficulties with self-consistency): (i) Generally an application of RMT ceases to be valid at low excitations. (ii) So far practical applications have been possible only to leading order in an expansion in $`1/T`$, actually in that regime of $`T`$ where friction decreases with $`T`$. Summarizing our results, we hope to have been able to exhibit for low energy nuclear physics an exciting problem of quantum transport which still lacks a general solution. With respect to the application in nuclear physics itself, it appears to be very difficult to describe theoretically processes like the “cold” production of super-heavy elements without a decent understanding of transport at small temperatures and weak dissipation. Whereas low thermal excitations are dictated by experimental conditions, the fact of small friction is a consequence of the quantal nature of nucleonic dynamics in a mean field, in particular when pair correlations become important. Of course, to obtain more quantitative results, further studies on the microscopic level are needed. For instance, it is necessary to understand better the mechanism of “collisions” in the presence of pair correlations. Their role in the $`T`$-dependence of transport, as well as that of the “heat pole” in the larger range of excitations, requires further clarification. Likewise, it should be very interesting to allow for fluctuations in the gap parameter and to examine in which way they might modify fission dynamics. The authors would like to acknowledge fruitful discussions with J. Ankerhold and N.V. Antonenko. This work was supported by the Deutsche Forschungsgemeinschaft.
# Supersymmetry: A new organizing principle for the microworld?<sup>1</sup> <sup>1</sup>Presented at the Seminar on Philosophy of Science, IIT Bombay, February 1998. ## 1 Prologue This paper deals with a topic of current interest in Theoretical Physics and is presented to the forum of historians and philosophers of science. It assumes familiarity at the popular level with developments in High Energy Physics and with basic Quantum Physics. In rephrasing technical statements, an attempt is made to remain close to the truth, albeit selectively. Math is used sparingly, but some formulae are displayed in the hope that they will give the reader an opening into more detailed literature. Sections 2 and 3 deal with the paradigm of symmetry as it has come to be understood in this century. Supersymmetry is an elegant symmetry principle, but it seems not to operate in nature in its simplest version. Section 4 deals with Supersymmetry: the idea, its appeal and its failings. Section 5 presents two alternatives for the metaphysical status of Supersymmetry in case it is indeed discovered. ## 2 The intangible microworld The macroscopic world directly impinges on the senses and demands systematizing principles. Presenting itself in many different contexts, it also provides ample clues for arriving at such principles. The majority of the phenomena of common experience are correctly described by Newtonian principles, complemented by laws governing electromagnetism, hydrodynamics and so on. The microscopic world is nevertheless present. It is intangible except for a few tangible and powerful clues. The shape and solidity of the world relies on Fermi statistics. Several phenomenological constants contain Planck's constant or Avogadro's number. The scientist has had to progressively become a detective, relying on skimpy evidence to pursue the trail of an elusive and magnificent if unseeable reality. Consider an example of clues leading to detection. Valence was wrested from chemical phenomena, after much confusion and controversy. Mendeleev's periodic table, at first based on valence, systematized the elements and predicted new ones. The raison d'être of the table remained a mystery until the electronic structure of the atom could be understood. This in turn needed the Pauli exclusion principle for its explanation, in turn bringing us to the very heart of microscopic phenomena, the fundamental indistinguishability of quanta. The message of Quantum Mechanics is that it is only the possible quantum states of a collection of quanta that are distinct, not the quanta themselves. ### 2.1 The metaphysics of insight An important paradigm for theoretical progress in this century has been symmetry principles. This is in contrast to the development of Electromagnetism, which occurred over about two centuries, during which theory and experiment progressed step by step, aiding each other. The exploration of the microworld beginning with radioactivity did not enjoy such a luxury of wealth of data, nor of easily constructible and repeatable experiments. The developments starting during the 1880's and culminating in the 1930's therefore relied on deep insights guided by certain metaphysical assumptions. Here and in the following, by metaphysical we shall mean principles external to the discipline of physics itself, but nevertheless conceived and used by professionals. It is the implicit use of such principles that is the subject of this paper. Also, being external to the discipline itself, they appropriately form the subject matter of Philosophy of Science.
There are two simple but deep principles used universally in science. These are: (1) universal applicability of the concepts, and (2) consistency of the epistemy. By the latter we mean the expectation that existing technical frameworks or formalisms will apply also to a relatively new domain of phenomena. An example of the above principles at work is provided by the discovery of Bose statistics. Planck's explanation of Black Body radiation relied on the assumption of absorption and emission of radiation energy in quanta. Einstein made this fact into a new concept, that of a photon, and used it to explain the photoelectric effect. This put the photon on a more general footing. The next step, which took many years in coming, was taken by Bose, who assumed that the thermodynamics of photons must be deducible from a counting of states just as in classical Statistical Mechanics. This is what we mean by principle (2). There was one revolutionary new input required, however. The counting of states is based on strict indistinguishability among the photons. We cite these as examples of insight working in conjunction with the above guiding principles. In the Quantum domain, however, both proved unreliable. Neither the concepts could be universally applied, nor the epistemy. Mathematically precise entities and rules were in some sense the only infallible guide. Which concepts would remain robust, and what exactly these rules were, took a long time to understand. Barring possible new phenomena such as Hawking radiation, Quantum Mechanics as we know it today is consistent and complete, but does not cease to evoke disbelief even in eminent practitioners. There was, however, another metaphysical principle which was emerging as a means of guessing ahead. Very loosely it may be called the principle that the equations must be elegant and must incorporate a certain symmetry. It was based on this principle that Einstein's theory of Gravitation and Dirac's theory for the relativistic electron were accepted by the Physics community with awe and excitement even before they could be completely established. In its highly evolved form today, it has come to be further formalized as a demand for the existence of precise, mathematically implementable symmetry principles. The origins of this metaphysical principle go back to the nineteenth century, when Maxwell achieved an elegant unification of the laws of electromagnetism. He organized several laws and rules of thumb then known so that the Electric and Magnetic forces appeared on par with each other, displaying an uncanny similarity between the two. The laws were also stated in the form of mathematical equations that permitted easy geometrical visualization and were yet so far reaching in their import and applicability that an eminent colleague is reported to have exclaimed, quoting Goethe, “was it a god who wrote those lines?” With hindsight we know that in fact the symmetry they displayed went much farther than a nineteenth century esthete could have discerned, for they were the first equations to be known which were covariant under Special Relativistic transformations. What we are trying to identify as a principle is actually rather broad and perhaps contains more than one logically distinct position. We shall focus here on a much more restrictive and precise aspect of this principle. Specifically, we refer to the use of symmetry principles that tend to restrict theories and introduce economy of phenomenological parameters in them. We shall refer to it broadly as Gauge Symmetry.
It started with Einstein trying to formulate relativistically consistent laws of Gravitation, and later also helped to shape the laws of strong and weak interactions. The gauge symmetry underlying General Relativity on the one hand and the Gauge Field Theories of strong and weak interactions on the other have several technical differences. But as has been emphasized by Weinberg, they have an essential similarity that permits them to be viewed as manifestations of the same basic principle. This is the topic of the next section. ## 3 Gauge Symmetry The most common example of a mathematically implementable symmetry principle in Physics is the idea of rotations.<sup>2</sup> <sup>2</sup>This section is an abridged version of the author's contribution to the Seminar on Philosophy of Science, IIT Bombay, 1993. We do not expect outcomes of experiments to depend on the directional orientation of the apparatus. Considered as a set of operations, the rotations form a mathematical structure called a group. One representation of this group is in terms of matrices, acting on vectors such as the position vector or the electric field vector. The Special Relativity principle is a similar principle, in fact a generalized rotation involving time, such that the ordinary rotations form a subgroup of this bigger group. But this notion of symmetry was completely revolutionized by Einstein in his subsequent work, viz., General Relativity. This theory is actually a theory of Gravity, generalized from its Newtonian version and made consistent with Special Relativistic rotations. The prescription of General Relativity can be summarized in two parts: (1) The space-time should be treated as curved, like the surface of a ball. Thus gravitational influences are described by a set of space-time dependent functions that specify the distance and angle measurement prescriptions. In a curved space these replace the Pythagorean distance law from point to point. (2) In a curved space one does not choose a rigid, Cartesian system of coordinates, but any convenient curvilinear coordinates. So the laws of physics must be such as to remain invariant under arbitrary choices of curvilinear coordinates. This translates to invariance of the laws under rotations that can be different at different points. This two-part law was called the Principle of General Covariance. This theory was a great speculative triumph. At the time of its invention, there was no evidence for it. No one had suspected that the perihelion precession of Mercury had contributions from Relativistic effects, requiring a fundamental reformulation of Newtonian Gravity. No other experimental evidence existed that demanded such a generalization. In the 1930's the notion of rotational symmetry was extended by Heisenberg in a very profound way. It is known that the strong nuclear force does not depend separately on the physical state of the proton or that of the neutron. In Quantum Mechanics, the physical state of a system is described by a complex wavefunction denoted $`\psi `$. Heisenberg's proposal was that instead of using $`\psi _p`$ (for protons) and $`\psi _n`$ (for neutrons), if we used $$\tilde{\psi }_p=c_1\psi _p+c_2\psi _n,\qquad \tilde{\psi }_n=c_3\psi _n+c_4\psi _p\qquad (1\text{–}3)$$ the physics would remain unchanged. Here the $`c_1,c_2`$, $`c_3,c_4`$ are complex numbers satisfying some constraints. The relations above can be thought of as a complex rotation, an abstract generalization from the case of real vectors.
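The “constraints” on the $`c_i`$ are just those that make the transformation unitary, so that total probability is conserved. A minimal numerical illustration (ours; the mixing angle and the state are arbitrary choices made only for demonstration):

```python
import numpy as np

# An isospin rotation as in Eqs. (1)-(3): a 2x2 unitary matrix acting
# on the doublet (psi_p, psi_n).
theta = 0.4  # an arbitrary mixing angle
U = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]], dtype=complex)
psi = np.array([0.6 + 0.0j, 0.8j])  # a normalized (proton, neutron) state
psi_tilde = U @ psi

# The rotation preserves |psi_p|^2 + |psi_n|^2:
print(np.vdot(psi, psi).real, np.vdot(psi_tilde, psi_tilde).real)  # 1.0 1.0
```

Here $`c_1=c_3=\mathrm{cos}\,\theta `$ and $`c_2=-c_4=\mathrm{sin}\,\theta `$; any unitary choice of the $`c_i`$ works equally well.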
This rotation, called an isospin rotation, is a symmetry (although an approximate one) of the strong nuclear force. Several decades later, Yang and Mills proposed Gauge Field Theories. These were a generalization of isospin symmetry in much the same way as General Relativity generalized Special Relativity. Both prescribed the form of the interaction, although the requirement was stated as a geometrical law. To summarize, precise mathematical principles were used as a strategy to guess at a theory with insufficient experimental evidence. The theory could well have proved wrong. This too has happened many times, as for instance with the original Kaluza-Klein theory. But the success of the cases in which this approach has worked is spectacular. ### 3.1 Broken symmetry The curious fact about symmetries is that sometimes they may not be manifest in the data. This can happen for two different reasons. One reason is that the symmetry may be only approximate. That is, only by ignoring some of the data or by modifying their values does one see the symmetry principle at work. For this to be true, the contaminating effects should be small in a quantitative sense. But this case of non-manifest symmetry is not as interesting as the next one. It has been found that in some systems the governing equations possess a certain symmetry. However, the complexity of the interactions drives the system to solutions that do not reflect the symmetry. This case is called Spontaneous Breakdown of symmetry. In the case of weak nuclear interactions, one seeks a theory that obeys gauge invariance, somewhat similar to the two-part principle of General Covariance. However, gauge invariance implies masslessness of the mediating particles, whereas the mediators of the weak force are known to be massive. The resolution of this paradox lay in realizing that there could be additional particles, known as Higgs particles, whose complex dynamics leads to the ground state of the system not explicitly displaying the gauge symmetry. Under these circumstances, the interaction of the gauge particles with the Higgs particles makes the former massive. In the second type of broken symmetry, the symmetry is present all the time, being made invisible by the particular state in which the system is available to us. In this case, guessing the governing equations is difficult, but symmetry can be used as a guiding principle. ## 4 Supersymmetry This brief history prepares us for a description of the new proposal of Supersymmetry. The origins of the search for this rather bizarre symmetry lie in two unrelated motivations. One was a direct one, asking whether the photon and the neutrino, the only two particles known to be massless in the 1960's, had anything more in common; specifically, whether they were two manifestations of the same particle “species” masquerading as two. Secondly, it was also a search for the most general type of symmetries allowed by interactions that respect the basic rotational and Special Relativistic symmetries of space-time. There was also an enigma in the distinction between the gauge symmetry of Gravity, which involved space-time itself, and the gauge symmetry of Nuclear forces, which seemed to operate in an abstract space of wave functions. The General Covariance of Gravity came to be called an external gauge symmetry, and the Gauge symmetries of the nuclear forces internal symmetries. The possible kinds of internal symmetries were soon classified in terms of the mathematical theory of Lie Groups.
There is a rich variety of possible symmetries. The question was, what were the most general kinds of external symmetry, and whether there could be any mixing between external and internal. The above question was supposedly answered with some degree of finality by a so-called “No-go” theorem which appeared in the late sixties. It said that all the possible symmetries one could have, consistent with Quantum Mechanics, were the ones already known: viz., a variety of internal symmetries like isospin on the one hand, and the already known external symmetries, those of the Special Theory of Relativity (subsuming the old known symmetry of rotations), on the other hand. There was nothing new to be added to the category of external. There was a loophole, however. In order to understand it, let us look at how a mathematical statement of the internal consistency of several symmetry operations is formulated. It can be checked by some amount of careful experimentation that a small amount of $`x`$-axis rotation followed by a small amount of $`y`$-axis rotation is not the same as the $`y`$-rotation first, followed by the same $`x`$-rotation. This can actually be checked by holding up a pen. The results of the two operations differ by a small $`z`$-axis rotation! This was first put in the form of equations by Hamilton in the mid-nineteenth century. In modern notation one says $$L_xL_y-L_yL_x=L_z\qquad (4)$$ Here $`L_x`$ stands for the operation of a small $`x`$-axis rotation. The product $`L_xL_y`$ has to be read right to left for its factors. The left hand side is called the commutator of the two operators. The case when the commutator of two operators vanishes is the case when the two operations are really independent of each other. For example, small linear motion in the $`x`$ direction is completely independent of small linear motion in the $`y`$ direction. So they can be taken up in any order, giving the same result. This fact is expressed by the equation $$P_xP_y-P_yP_x=0\qquad (5)$$ In the Quantum Theory, formulated in terms of space-time dependent fields, there is a different kind of “commutator”. It was known since the 1930's that to obtain a consistent Quantum theory of particles of spin $`1/2`$, one must require an anti-commutation relation. The independence of quantum field operators at far away points $`x`$ and $`y`$ has to be expressed by $$\psi (x)\psi (y)+\psi (y)\psi (x)=0\qquad (\mathrm{Fermionic})\qquad (6)$$ What is unusual about this relation is the plus sign where a minus should be; and this indeed expresses independence. The correctness of this rule is amply borne out by the Pauli Exclusion Principle, whereby two spin $`1/2`$ particles can never occupy the same state. The other kind of particles, those with integer spin and called Bosons, obey a more familiar algebra, where independence is expressed by $$\varphi (x)\varphi (y)-\varphi (y)\varphi (x)=0\qquad (\mathrm{Bosonic})\qquad (7)$$ These algebraic relations were however considered special to the Quantum Fields representing real particles, not to be confused with operators representing symmetry operations. The breakthrough against the No-go theorem lay in realizing that perhaps one could allow “fermionic” algebra even between symmetry operations. Thus consider linear displacements along two directions which are independent, and require $$\theta _1\theta _2+\theta _2\theta _1=0\qquad (8)$$ Here $`1`$ and $`2`$ are some “directions” whose meaning is yet to be clarified, and $`\theta `$ denotes the corresponding operators.
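Both kinds of algebra are easy to realize concretely with matrices. The following check is our illustration only; the particular matrix realizations are chosen purely for demonstration and carry no physical content. It verifies Eq. (4) for small-rotation generators and exhibits two “directions” obeying Eq. (8):

```python
import numpy as np

# Generators of small rotations, (L_i)_{jk} = -epsilon_{ijk}:
Lx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], float)
Ly = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], float)
Lz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], float)
assert np.allclose(Lx @ Ly - Ly @ Lx, Lz)  # Eq. (4)

# Two independent fermionic "directions", built from 2x2 blocks:
s_minus = np.array([[0, 1], [0, 0]], float)
s_z = np.array([[1, 0], [0, -1]], float)
theta1 = np.kron(s_minus, np.eye(2))
theta2 = np.kron(s_z, s_minus)
assert np.allclose(theta1 @ theta2 + theta2 @ theta1, 0)  # Eq. (8)
assert np.allclose(theta1 @ theta1, 0)  # each direction squares to zero
```

The last assertion shows the characteristic Grassmann property $`\theta ^2=0`$, which is what makes such “directions” so unlike ordinary coordinates.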
In what way this can be visualized, and in what sense this is an independence, are questions not easy to answer. For the moment we take the symbols and their algebra as guides and check for internal consistency of the various operations. Miraculously, it turned out that one could indeed expand the algebra of the Special Relativistic generators in this way, provided all the new generators were fermionic rather than bosonic. This was a possibility not considered by the authors of the No-go theorem. ### 4.1 Superspace Here we elaborate a little on the technical idea of Supersymmetry. Supersymmetry was first formulated as a set of operations on Quantum Fields. An interpretation closer to that for the usual Special Relativistic symmetries was formulated later, pioneered among others by Abdus Salam. To each of the four dimensions $`(t,x,y,z)`$ there corresponds a superspace dimension, and these are labeled $`(\theta ^1,\overline{\theta }^1,\theta ^2,\overline{\theta }^2)`$. They are supposed to obey an anticommuting algebra. The mathematics of classical (non-Quantum) variables of this kind was known to mathematicians as Grassmann algebra. Just as rotations led to a mixing of the axes, a supersymmetric translation leads to a mixing of the ordinary and superspace axes. To give an example, if $`\theta ^1`$ is shifted to $`\theta ^1+\alpha `$, then the $`x`$ coordinate shifts as $$x\rightarrow x-i\alpha \overline{\theta }^2$$ (9) This does not mean anything to us according to our usual intuition. But this is how things proceed in gleaning the secrets of the microworld. From abstract operations on fields, we proceeded to further compatibility with the usual space-time picture, the metaphysical principle (2) of section 3. Perhaps future knowledge of new phenomena will help us visualize these operations better. ### 4.2 Predictions and extensions There are two main results that follow from assuming that there is supersymmetry in nature. The first is that for every fermion of a given mass there is a boson of identical mass, and vice versa. This means that corresponding to the observed photon there must exist a spin half particle, which has been named the photino. Similarly, corresponding to the electron there must exist a spin zero particle, which has been named the selectron (abbreviating ‘scalar electron’). This nomenclature pattern is followed for all the hypothetical supersymmetric partners of known particles. The problem is, we do not have a single known pair of species which may be considered superpartners of each other. It is worth recalling that the original motivation for searching for supersymmetry was to identify the almost massless neutrino as the superpartner of the photon. This however cannot be true, because other quantum numbers, required to match by the symmetry principle, do not match. The second and very powerful implication of supersymmetry is that it subsumes the usual gauge symmetry principle and predicts all the possible forms of interactions between the particles. This is very desirable and attractive. This was the main benefit of pursuing symmetry principles: they should help us to guess the form of the interaction. Since we do not see any superpartners yet, the confirmation or otherwise of this prediction lies in the future. There are many other attractive features of supersymmetry from the theoretical point of view, but they are more technical. And the simplest predictions seem to be unviable. Does this mean supersymmetry is of no use? 
The experience of searching for the gauge symmetry of the weak interactions tells us that we should keep open the possibility that this symmetry too is not realized in nature in its simple, uncomplicated version, but perhaps exists in a broken form. The problem of broken supersymmetry is technically more involved than the breaking of the known gauge symmetries. In some sense this is because the principle is really very strong. It is difficult to understand how the symmetry breaks. Developing an understanding of that is itself a theoretical challenge. Supersymmetry and its possible manifestations constitute a subject of extensive scientific investigation at present. There are several hypothetical models with mechanisms for breaking supersymmetry, and several giant accelerators are being constructed to check these models. In addition, as is true with all of Elementary Particle Physics, these models ought also to have left their imprints in the early Universe. Efforts are therefore also under way to validate or invalidate some of these models based on cosmological observations being carried out today. ## 5 Philosophical positions The esthetic appeal of the symmetry paradigm lies in the elegance of the mathematical structure. It seems to generalize the ordinary notion of the freedom to choose the frame of reference (see sec. 3). It also permits mixing or “rotating” distinct particle species into each other, thus entailing economy in the number of particle types or species. On the utilitarian side, the unification achieved requires fewer coupling constants, since several are dictated to be identical and others have to be simple multiples of a basic value. This success however has not been unqualified. In the Standard Model of elementary particles, for example, although the advertised benefits are present and the coupling constants are fewer, there does remain a large number of unknown parameters. These arise primarily in the form of the unknown masses of fermionic and scalar species. Supersymmetry has merited so much attention due to its elegance. But it does require introducing a large number of new species, or types, of particles. Since many of these are fermionic and scalar, their masses again require a large number of unknown parameters. The whole picture is further complicated by the need to have the symmetry broken. The mechanisms that explain this breakdown have to rely on more unknown physics, thus invoking whole new unknown sectors of the theory. Some of the new particles are supposed to be unobservable by themselves, and their influences on the observable world are felt only through the fact that supersymmetry appears broken. There is supposed to be very little additional observable evidence about them, even in principle. How do we view the situation from outside Physics? I submit that there are two possibilities. One is that Supersymmetry is indeed a deep new principle. The other is that it is an expedient, necessary to tide us over till further experiments provide more clues. The third, uninteresting, possibility of course remains, viz., that it shares the fate of several other profound speculations about nature, beautiful but irrelevant. ### 5.1 A principle … There are several technical reasons advanced to support why it must indeed be a fundamental principle of nature. For example, it is meant to rationalize some of the mystery surrounding the Electroweak symmetry breaking. We cannot enter into this discussion here. 
In the spirit of staying close to fundamental facts, we may yet advance a reason for the same position, along the following lines. There is no known classical analogue of fermions. For several decades in early Quantum Mechanics they were treated with awe and mystery. Their wavefunction can distinguish between $`360^{\circ }`$ and $`720^{\circ }`$ rotations. Pauli referred to this as “non-classical two-valuedness”. However, later developments have required and guided parallel treatments of bosons and fermions. In the path integral approach, fermions could be elegantly included by inventing rules for integration over fermionic variables. More importantly, a set of simple consistent rules is suggested by the formalism itself. This is the first time that one treats classical fermionic variables with impunity and gets the required answers. This begins to suggest that the bias towards bosons as more natural is perhaps purely classical. As Dirac emphasized, early Quantum Mechanics was developed only for those systems that have a classical analogue. It was not possible to “quantize” other systems. But such systems may nevertheless exist. Over the decades that have elapsed, the only other kind of system that seems to require quantization is the fermionic one (modulo phenomena we still have no reasonable explanation for). Thus the microworld dictates that we expand our classical notions to include the Grassmann numbers as well. The notion of Superspace brings further parity between bosonic and fermionic dimensions. In fact, if the fermionic dimensions did not exist, bosons would still get away with a special status. What more elegant framework could we have for understanding these new dimensions than to require that they are partners, in a very rigorous sense, to the bosonic dimensions of common experience? ### 5.2 … or an expedient? But if the symmetry is so fundamental, why do we not see it in its pure form? Should the fact that the ideal principle has already been disproved make us abandon the search for supersymmetry? This brings us to our second, inelegant but more pragmatic, view. A usual argument for assuming supersymmetry is as follows. The newer, higher energy accelerators have to be built with specific purposes in mind, i.e. to search specific energy ranges and to detect particles with expected decay products. We need guiding principles to channel our searches, and supersymmetry seems to be the only elegant guiding principle aside from gauge symmetry. But we shall go a little beyond this position. Suppose that supersymmetry is indeed discovered. It may manifest itself in an ugly and highly disproportionate form. Would it still be worth discovering? The answer is affirmative. The caveat is that its worth may not be due to the pristine principles we advanced. But supersymmetry may yet act as an “organizing principle”. By this we mean a rule such as the valence of the elements. This concept allows us to understand the possible compounds an element can form. Empirical study then also reveals for the same element several possible valence states. But once discovered in one context, a valence state can be sought in other contexts as well. Valence is not any kind of fundamental principle. But it does put strong restrictions on the kinds of molecules that can exist. An ugly manifestation of supersymmetry would be worth having for the same reason. It may still place a strong restriction on the kinds of fundamental particles that can exist and on the qualitative nature of their interactions. 
Valence, of course, was vindicated by the further developments which make it a direct descendant of a deep and beautiful principle indeed. Perhaps the same is true of supersymmetry. ## 6 Conclusion We argued that the need to understand the microworld has demanded greater ingenuity from the theoretical physicist in the twentieth century. The response to this challenge has been in the form of educated metaphysical principles evolved by the pioneers of the subject. A prime one of these, in retrospect, has been the principle of symmetry, with specific mathematical connotations. Its great bounty has been the ability to guess at a whole theory starting from very scarce evidence. In particular, gauge symmetry has shaped our understanding of the fundamental forces of nature, beginning with the General Theory of Relativity. Supersymmetry is another such principle, proposed in advance of any empirical data but with many compelling reasons for its correctness. However, even if discovered, the symmetry would have to be found in a badly broken form. This seems to detract from its original appeal. We have presented a very general reason why we may expect the symmetry to be fundamental. Yet there is the alternative possibility that it may be an organizing principle similar to valence. In either case it is worth stepping up the efforts to verify its presence, or otherwise, in nature.
no-problem/9904/astro-ph9904373.html
ar5iv
text
# The ARGO-YBJ Detector and high energy GRBs ## 1 Introduction The study of the GeV-TeV component of gamma-ray bursts is of great importance for understanding the acceleration mechanisms and the physical conditions of the sources. The detection of GeV gamma-rays by EGRET during some intense GRBs (Catelli et al., 1997) suggests the possibility that a high energy component could be present in all events. Furthermore, several models predict GeV and TeV emission, sometimes correlated with UHECR production (see Baring, 1997, for a review). Due to the low fluxes and the small sensitive areas of satellite experiments, gamma-rays of energy larger than a few tens of GeV must be detected by ground-based experiments located at mountain altitude, which measure the secondary particles generated by gamma-rays in the atmosphere. At energies E $`<`$ 10 TeV the number of particles reaching the ground is too small to reconstruct the shower parameters using a standard air shower array, made of several detectors spread over large areas. On the contrary, a detector consisting of a full coverage layer of counters, providing a high granularity sampling of all particle showers, can successfully measure the arrival direction and primary energy of small showers, allowing the study of the unexplored range of gamma energies between 20 GeV and 300 GeV (Abbrescia et al., 1996). ## 2 The ARGO-YBJ detector ARGO-YBJ is an air shower detector optimized to observe small size showers, to be constructed in the Yangbajing Laboratory (Tibet, China) at an altitude of 4300 m a.s.l. The experiment consists of a ∼71$`\times `$74 m<sup>2</sup> core detector realised with a single layer of RPCs ($`90\%`$ of active area), surrounded by an outer detector ($`30\%`$ of active area), for a total size of ∼100$`\times `$100 m<sup>2</sup>. A lead converter 0.5 cm thick will uniformly cover the RPC plane, in order to increase the number of charged particles by conversion of shower photons into $`e^\pm `$ and to reduce the time spread of the shower front (Bacci et al., 1998). ARGO-YBJ can image with high efficiency and sensitivity atmospheric showers initiated by primaries of energies in the range 10 GeV $`÷`$ 500 TeV. Its main physics goals are (Abbrescia et al., 1996): gamma-astronomy at a ∼100 GeV energy threshold, with the sensitivity to detect unidentified point sources of intensity as low as $`10\%`$ of the Crab Nebula; Gamma-Ray Burst physics, extending the satellite measurements to energies E $`>`$ 10 GeV; the $`\overline{p}/p`$ ratio in the TeV energy range; Sun and heliosphere physics. The detector assembly should start in 2000, and data taking with the first ∼750 $`m^2`$ of RPCs in 2001. ## 3 Sensitivity to high energy GRBs A high energy GRB is detectable if the number of air showers from the gamma-rays is significantly larger than the fluctuations of the background, due to showers from cosmic rays with arrival directions compatible with the burst position. A good angular resolution is of major importance in order to reduce the background and increase the detection sensitivity. The angular resolution and the effective area of ARGO-YBJ for the detection of gamma-rays as a function of energy have been obtained by means of simulations. For gamma-rays with energy as low as E ∼ 10-20 GeV, the opening angle around the source direction in which 70$`\%`$ of the signal showers are contained is ∼5<sup>∘</sup>. 
To evaluate the sensitivity of ARGO-YBJ to GRBs, we considered a burst with a power law energy spectrum $`dN/dE\propto E^{-\alpha }`$ extending over the range 1 GeV $`÷E_{max}`$, a duration $`\mathrm{\Delta }t`$=1 s, and a zenith angle $`\theta `$=20<sup>∘</sup>. The burst will give a signal with a significance larger than 4 standard deviations if the energy fluence in the range 1 GeV $`÷E_{max}`$ is larger than a minimum value F<sub>min</sub>. Fig.1 shows F<sub>min</sub> as a function of $`E_{max}`$ for 3 spectral slopes. For a generic duration $`\mathrm{\Delta }t`$, the minimum detectable fluences are given by $`F_{min}\propto \sqrt{\mathrm{\Delta }t}`$. In the energy range considered, the sensitivity is strongly dependent on the maximum energy of the spectrum $`E_{max}`$. ARGO-YBJ can observe GRBs with energy fluences of a few 10<sup>-6</sup> erg cm<sup>-2</sup> if the energy spectrum extends at least up to ∼200 GeV with a slope $`\alpha `$ ∼ 2; the minimum detectable fluence is ∼10<sup>-5</sup> erg cm<sup>-2</sup> if $`E_{max}`$ ∼ 30 GeV. This is of particular importance since, if GRB sources are located at cosmological distances, the high energy tail of the spectrum is affected by the $`\gamma \gamma \rightarrow e^+e^-`$ interaction of gamma-rays with low energy starlight photons in intergalactic space. According to Salomon and Stecker (1998), at a distance corresponding to a redshift $`z`$=0.1 the absorption is almost negligible, while at $`z`$=0.5 (1.0) the absorption becomes important for photons of energy E $`>`$ 100 (50) GeV. These values give an idea of the possible maximum energy of GRB spectra as a function of their distance, and from Fig.1 one can infer the corresponding sensitivity of ARGO-YBJ to cosmological GRBs. The minimum observable fluences can be compared with the fluences measured by EGRET in the 1 MeV-1 GeV energy range: F ∼ 10<sup>-5</sup>$`÷`$10<sup>-4</sup> erg cm<sup>-2</sup> (Catelli et al., 1997). Since the EGRET spectral slopes $`\alpha `$ are mostly ∼2, one could expect fluences of the same order of magnitude at energies above 1 GeV. From Fig.1 one can conclude that ARGO-YBJ could detect GRBs with the same intensity as those observed by EGRET, provided that the energy spectrum extends up to a few tens of GeV; the sensitivity increases by a factor ∼10 for spectra extending up to $`E_{max}`$ ∼ 200 GeV.
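The detection criterion used above can be illustrated with a back-of-the-envelope sketch: the significance is the gamma-ray excess divided by the Poisson fluctuation of the cosmic-ray background collected in the opening angle around the source, so that for a fixed fluence the significance degrades as $`\sqrt{\mathrm{\Delta }t}`$ and F<sub>min</sub> grows accordingly. The numbers below (signal counts, background rate) are placeholder values chosen by us, not outputs of the ARGO-YBJ simulations.

```python
# Toy estimate of the detection significance of a GRB for a ground-based
# array: the burst is detected when the excess of gamma-induced showers
# exceeds several standard deviations of the cosmic-ray background counted
# in the angular cone around the source. All numbers are placeholders.
import math

def significance(n_signal, bkg_rate, duration, cone_solid_angle):
    """Gaussian significance n_s / sqrt(n_b)."""
    n_bkg = bkg_rate * duration * cone_solid_angle
    return n_signal / math.sqrt(n_bkg)

n_signal = 50.0          # gamma showers from the burst (assumed)
bkg_rate = 200.0         # background showers s^-1 sr^-1 (assumed)
cone = 2 * math.pi * (1 - math.cos(math.radians(5.0)))  # ~5 degree cone, sr

s1 = significance(n_signal, bkg_rate, 1.0, cone)
# For fixed fluence the signal is fixed while the background grows like
# Delta_t, hence significance ~ 1/sqrt(Delta_t) and F_min ~ sqrt(Delta_t).
s10 = significance(n_signal, bkg_rate, 10.0, cone)
print(s1, s10, s1 / s10)   # the ratio equals sqrt(10)
```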
no-problem/9904/astro-ph9904239.html
ar5iv
text
# ABSTRACT The broad band 0.1-200 keV spectra of a sample of 5 Seyfert 2 galaxies (NGC 7172, Mkn 3, NGC 2110, NGC 4507 and NGC 7674) have been measured within the first year of the $`BeppoSAX`$ Core program. All sources have been detected up to ∼100 keV and their spectral characteristics derived with good accuracy. Although the results obtained from the detailed analysis of individual sources indicate some “source-by-source” differences, we show in the following that all spectra are consistent, at least qualitatively, with what is expected from a “0<sup>th</sup>-order” version of unified models. Indeed, a simple test on these data indicates that these Seyfert 2 galaxies are on average intrinsically very similar to Seyfert 1 galaxies (i.e., steep at E$`>`$ 10 keV) and that the main difference can be ascribed to a different amount of absorbing matter along the line of sight (i.e. different inclinations of our line of sight with respect to a putative molecular torus, or different thicknesses of the tori). ## 1 PREDICTIONS FROM UNIFIED MODELS The discovery of broad emission lines in the polarized optical spectra of several Seyfert 2 galaxies (Antonucci & Miller 1985, Tran et al. 1992) has provided the basis for a unified model of Seyfert galaxies in which the main discriminating parameter between Seyfert 1 and Seyfert 2 nuclei is the inclination with respect to our line of sight of a supposed obscuring torus surrounding the central source (see Antonucci 1993 for a review). In this scheme, Seyfert 1 galaxies are active nuclei observed nearly perpendicularly to the torus plane (unabsorbed), whereas Seyfert 2 galaxies represent those seen through the torus (absorbed); of course, the main prediction is that Seyfert 2 galaxies are intrinsically similar to Seyfert 1 galaxies, once the effects of the torus are properly accounted for. It is now widely recognized that the intrinsic high energy spectrum of Seyfert 1 galaxies consists, on average, of a steep power-law continuum ($`\mathrm{\Gamma }`$ ∼ 1.9–2.0, Nandra & Pounds 1994) with an exponential cut-off typically at energies larger than 150 keV (Zdziarski et al. 1996, Perola et al. 1998). Therefore, one would expect Seyfert 2 galaxies to exhibit similar high-energy properties. X-ray observations (mainly below ∼20 keV) of Seyfert 2 galaxies have shown a variety of spectral characteristics not always consistent with a “0<sup>th</sup>-order” version of unified models (Smith & Done 1996, Cappi et al. 1996, Turner et al. 1998). However, at high energies, where the effects of absorption and matter reprocessing are less evident, measurements are sparse for Seyfert 2 galaxies (but see Johnson et al. 1997). It has therefore been difficult to assess the Seyfert 1 nature of the primary spectrum of Seyfert 2 galaxies from these observations. Based on these general considerations, we have undertaken a program of observations with $`BeppoSAX`$, aimed at studying the X-ray spectral properties of bright Seyfert 2 galaxies over a broad energy band (up to about 200 keV) and at testing the validity of unified models. As a matter of fact, $`BeppoSAX`$ can provide crucial information on the intrinsic source properties, because high energy photons from a few to several keV can pass through the circumnuclear intervening material and can therefore be compared to the “typical” spectrum of Seyfert 1 galaxies. So far 5 objects have been observed within the first AO cycle, namely NGC 7674 (Malaguti et al. 
1998a), Mkn 3 (Cappi et al. 1998), NGC 2110 (Malaguti et al. 1998b), NGC 7172 and NGC 4507 (see also Bassani et al. 1998). ## 2 A SIMPLIFIED QUALITATIVE TEST A simple, qualitative test has been performed on our sources by fitting each source of the sample with the same model: a soft power-law component plus a hard, absorbed, power-law component with reflection and an associated Fe K line, as illustrated in Fig. 1. In the framework of unified models, the soft power-law is attributed to scattered soft X-ray emission from ionized material placed above the molecular torus, while the hard X-ray and reflection components are interpreted as the direct component absorbed by the torus and its reflection from the inner side of the torus, respectively. The intensity of the soft, scattered, component was assumed to be ∼2% that of the direct one. The intensity of the reflection component was fixed to R = 1 (corresponding to a 2$`\pi `$ coverage as viewed from the X-ray source), and the iron line was assumed to be produced through both the reflection and the absorption, with an equivalent width of ∼1 keV (with respect to the reflected continuum) and ∼500 $`\times `$ $`\frac{N_\mathrm{H}}{1.23\times 10^{24}}`$ eV (with respect to the direct, absorbed, component), as expected from theoretical models (George & Fabian 1991, Leahy & Creighton 1993). The only free parameters were the intensity and photon index of the direct continuum and the absorption column along the line of sight. The fit results and unfolded spectra obtained from this test are given in Figure 2. The most interesting result is that all sources are well described by a steep, Seyfert 1-like spectrum, with $`\mathrm{\Gamma }`$ ∼ 1.79–1.95. This is a newly discovered behaviour of Seyfert 2 galaxies for energies up to ∼100 keV, which supports unified models. It is stressed that this result is largely independent of the presence of the steep soft component and of the assumed intensity of the reflection component. This is demonstrated by the fact that the average Seyfert 2 spectrum, obtained by averaging all the PDS 20-200 keV data (except NGC 7674), can be well fitted ($`\chi ^2`$ ∼ 1.2) by a single power-law model with $`\mathrm{\Gamma }_{20-200\mathrm{keV}}`$ = 1.85 $`\pm `$ 0.05 and shows no deviation from the power-law up to ∼150 keV. The observed major differences in the quality of the fits ($`\chi _{red}^2`$ ranging from 0.9 to 1.8) are to be ascribed to two main factors. The first is source-by-source differences in the Fe K complex, which indicate the need for a more dedicated analysis. The second depends upon the different amount of absorbing matter along the line of sight. The least absorbed source is NGC 2110, which also shows the least indication of a reflection component; the most absorbed one is NGC 7674, where only the reflection component is observed, possibly because the direct component is completely blocked by a Compton thick molecular torus (with $`N_\mathrm{H}>10^{25}`$ cm<sup>-2</sup>, Malaguti et al. 1998a). Intermediate cases are NGC 7172, NGC 4507 and Mkn 3; in the latter, both the direct and reflected components are clearly resolved spectroscopically. Moreover, although a detailed measurement of the high-energy cutoff in individual sources is difficult, it appears clear in the data (see Figure 2) that there is no evidence of such a cutoff for energies up to at least ∼100–150 keV. REFERENCES * Antonucci, R.R.J., 1993, $`ARA\&A`$, 31, 473 * Antonucci, R.R.J. & Miller, J.S., 1985, ApJ, 297, 621 * Bassani, L., et al. 
1998, to appear in proceedings of “Dal nano- al tera -eV: tutti i colori degli AGN”, third Italian conference on AGNs, Roma, Memorie S.A.It, astro-ph/9809327 * Cappi, M., Mihara, T., Matsuoka, et al., 1996, ApJ, 456, 141 * Cappi, M., et al., 1998, $`A\&A`$, in press, astro-ph/9902022 * George, I.M. & Fabian, A.C., 1991, MNRAS, 249, 352 * Johnson, W.N., Zdziarski, A.A., Madejski, G.M., Paciesas, W.S., Steinle, H., & Lin, Y-C., 1997, in proceedings of the Fourth Compton Symposium, Ed. D. Dermer, M.S. Strickman and J.D. Kurfess, AIP, 283 * Leahy, D.A. & Creighton, J., 1993, MNRAS, 263, 314 * Malaguti, G., et al., 1998a, $`A\&A`$, 331, 519 * Malaguti, G., et al., 1998b, $`A\&A`$, in press, astro-ph/9901141 * Nandra, K. & Pounds, K.A., 1994, MNRAS, 268, 405 * Perola, C., et al., 1998, to appear in proceedings of “Dal nano- al tera -eV: tutti i colori degli AGN”, third Italian conference on AGNs, Roma, Memorie S.A.It * Smith, D.A., & Done, C., 1996, MNRAS, 280, 355 * Tran, H.D., Miller, J.S. & Kay, L.E., 1992, ApJ, 397, 452 * Turner, T.J., George, I.M., Nandra, K., & Mushotzky, R.F., 1998, ApJ, 493, 91 * Zdziarski, A.A., Johnson, W.N., Poutanen, J., Magdziarz, P., & Gierlinski, M., 1996, in “The Transparent Universe”, proceedings of the 2nd INTEGRAL Workshop, Ed. C. Winkler, T.J.-L. Courvoisier and P. Durouchoux, ESA SP-382, 373
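As a closing illustration of the two-component description tested in Section 2, its gross spectral shape can be sketched numerically as follows. This is only a toy parametrization added for the reader: the photon index, the normalization and the schematic $`E^{-3}`$ photoelectric cross-section are illustrative stand-ins; the actual fits employ proper cross-sections, a Compton reflection continuum and an Fe K line.

```python
# Toy sketch of the two-component Seyfert 2 model of Section 2: a direct
# power law absorbed by a column N_H, plus an unabsorbed scattered soft
# component at ~2% of the direct one. The E^-3 cross-section is a crude
# stand-in for a real photoelectric absorption model.
import numpy as np

GAMMA = 1.9                      # photon index, Seyfert-1-like (assumed)
SIGMA_1KEV = 2.0e-22             # schematic cross-section at 1 keV, cm^2 (assumed)

def model(E_keV, norm, n_h, scatter_frac=0.02):
    """Photon spectrum: scattered soft component + absorbed direct power law."""
    powerlaw = norm * E_keV ** (-GAMMA)
    tau = n_h * SIGMA_1KEV * E_keV ** (-3.0)   # crude photoelectric depth
    direct = powerlaw * np.exp(-tau)
    scattered = scatter_frac * powerlaw        # unabsorbed, ~2% of direct
    return scattered + direct

E = np.logspace(-1, 2, 200)                    # 0.1 - 100 keV grid
for n_h in (1e22, 1e23, 1e24):                 # columns toward the Compton-thin limit
    f = model(E, 1.0, n_h)
    # The absorption cutoff moves to higher energies as N_H grows, while the
    # soft band is dominated by the scattered component and the hard band
    # recovers the steep direct slope.
    print(n_h, f[0], f[-1])
```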
no-problem/9904/cond-mat9904223.html
ar5iv
text
# Untitled Document
no-problem/9904/math9904036.html
ar5iv
text
# Fano varieties with high degree A Fano variety is a smooth projective variety, defined over an algebraically closed field, whose anticanonical divisor $`-K_X`$ is ample. By Yau’s proof of Calabi’s conjecture ([Y1], [Y2]), complex Fano varieties are compact Kähler varieties with positive definite Ricci curvature ([Bo1], [Be], 11.16.ii)). In particular, compact complex varieties with a positive Kähler-Einstein metric are Fano varieties. Not all complex Fano varieties however have a Kähler-Einstein metric, because this forces the automorphism group to be reductive ([Be], cor. 11.54). This excludes for example the projective plane blown-up at one or two points. However, again, this condition is not sufficient to ensure the existence of a Kähler-Einstein metric (Tian gives in [T] an example of a Fano variety with no Kähler-Einstein metric and finite automorphism group). The existence of such a metric also implies that the tangent bundle is $`(-K_X)`$-stable. If $`X`$ is a Fano variety of dimension $`n`$, we denote by $`\delta (X)`$ the $`n`$th root of the intersection number $`(-K_X)^n`$. We let $`\rho _X`$ be the rank of the Néron-Severi group of $`X`$ (also called the Picard number of $`X`$), and $`\iota _X`$ the index of $`X`$, i.e. the greatest integer by which the canonical class is divisible; it satisfies $`\iota _X\le n+1`$, with equality if and only if $`X`$ is isomorphic to $`𝐏^n`$. Various upper bounds on $`\delta (X)`$ are known when the base field has characteristic zero. * For any Fano variety $`X`$ of dimension $`n`$, we have ([KMM2]) $$\delta (X)\le 3(2^n-1)(n+1)^{(n+1)(2^n-1)}.$$ * When $`\rho _X=1`$, we have ([C], [N], [KMM1], [R]) $$\delta (X)\le \mathrm{max}(n\iota _X,n+1)\le n(n+1).$$ * When $`\rho _X=1`$ and the tangent bundle of $`X`$ is $`(-K_X)`$-semi-stable, we have ([R]) $$\delta (X)\le 2n.$$ When $`X`$ has a Kähler-Einstein metric, classical methods of differential geometry give (see [D]) $$\delta (X)\le (2n-1)\left(\frac{2^{n+1}(n!)^2}{(2n)!}\right)^{1/n}\le 2n$$ which is asymptotically the same as Ran’s bound. Finally, Reid proved in [Re] that the tangent bundle of a Fano variety $`X`$ with $`\rho _X=\iota _X=1`$ is $`(-K_X)`$-stable. The purpose of this note is to construct, for all positive integers $`k`$ and $`n`$, Fano varieties of dimension $`n`$ and Picard number $`k`$ for which $`\delta (X)`$ grows essentially like $`n^k`$. Batyrev remarked in [B] that for the $`n`$-dimensional Fano variety<sup>1</sup><sup>1</sup>1We follow Grothendieck’s notation: for a vector bundle $`ℰ`$, the projectivization $`𝐏ℰ`$ is the space of hyperplanes in the fibers of $`ℰ`$. $$X=𝐏\left(𝒪_{𝐏^{n-1}}\oplus 𝒪_{𝐏^{n-1}}(n-1)\right),$$ we have $$\delta (X)=\left(\frac{(2n-1)^n-1}{n-1}\right)^{1/n}\sim 2n.$$ Consider more generally $$X=𝐏(𝒪_{𝐏^s}^r\oplus 𝒪_{𝐏^s}(a)),$$ where $`r`$, $`s`$ and $`a`$ are non-negative integers. We have $$-K_X\equiv (r+1)L+(s+1-a)H,$$ where $`L`$ is a divisor associated with the line bundle $`𝒪_X(1)`$ and $`H`$ is the pull-back on $`X`$ of a hyperplane in $`𝐏^s`$. It follows that $`X`$ is a Fano variety when $`a\le s`$ <sup>2</sup><sup>2</sup>2This can be seen by noting that $`𝒪_X(L+H)`$ is the line bundle $`𝒪(1)`$ on $`X`$ associated with the description of $`X`$ as $`𝐏(𝒪_{𝐏^s}(1)^r\oplus 𝒪_{𝐏^s}(a+1))`$. It is therefore ample. In the intersection ring of $`X`$, we have the relations $`L^{r+1}=aHL^r`$ and $`L^rH^s=1`$. 
Setting $`n=\mathrm{dim}(X)=r+s`$, we get $$(-K_X)^n=\sum _{i=0}^n\binom{n}{i}(r+1)^iL^iH^{n-i}=\sum _{i=r}^n\binom{n}{i}(r+1)^i(aH)^{i-r}L^rH^{n-i}=\sum _{i=r}^n\binom{n}{i}(r+1)^ia^{i-r}\ge (r+1)^na^{n-r}.$$ Take $`a=s=n-r`$; the function $`r\mapsto r^n(n-r)^{n-r}`$ reaches its maximum near $`\frac{n}{\mathrm{log}n}`$. Taking $`r=\left[\frac{n}{\mathrm{log}n}\right]`$, we get $$(-K_X)^n\ge \left(\frac{n}{\mathrm{log}n}\right)^n\left(n-\frac{n}{\mathrm{log}n}\right)^{n-\frac{n}{\mathrm{log}n}}\ge n^{2n-\frac{n}{\mathrm{log}n}}\frac{1}{(\mathrm{log}n)^n}\left(1-\frac{1}{\mathrm{log}n}\right)^n=n^{2n}e^{-n}\frac{1}{(\mathrm{log}n)^n}\left(1-\frac{1}{\mathrm{log}n}\right)^n\ge \left(\frac{3n^2}{10\mathrm{log}n}\right)^n$$ for $`\mathrm{log}n\ge \frac{10}{10-3e}`$, i.e. $`n\ge 226`$. This lower bound for $`(-K_X)^n`$ actually holds for all $`n\ge 3`$ by direct calculation. Furthermore, even when taking the value of $`r`$ which gives the highest degree, numerical calculations show that $`\delta (X)`$ is still equivalent to some (non-zero) multiple of $`\frac{n^2}{\mathrm{log}n}`$. ###### Proposition 1. For each $`n\ge 3`$, there is a Fano variety $`X`$ of dimension $`n`$, index $`1`$ and Picard number $`2`$ such that $$\delta (X)\ge \frac{3n^2}{10\mathrm{log}n}.$$ If we analyze the construction, we see that we need to start from a Fano variety with both high index and high degree. The variety $`X`$ constructed in the proposition has index $`1`$, hence cannot be used to iterate the process. However, if one takes instead $`a=s-r=n-2r`$ and the same $`r`$, the index of $`X`$ becomes $`r+1`$, and, although the degree becomes slightly smaller, it still satisfies $$(-K_X)^n\ge \left(\frac{n}{\mathrm{log}n}\right)^n\left(n-2\frac{n}{\mathrm{log}n}\right)^{n-\frac{n}{\mathrm{log}n}}\ge \left(\frac{n^2}{7\mathrm{log}n}\right)^n$$ for $`\mathrm{log}n\ge \frac{14}{7-e}`$; this lower bound actually holds for all $`n\ge 4`$. ###### Proposition 2. For all integers $`k\ge 2`$ and $`n\ge 4`$ such that $`\frac{n}{\mathrm{log}n}\ge 2^{k-2}`$, there exist a positive constant $`c(k)`$ <sup>3</sup><sup>3</sup>3One can take $`c(k)=\frac{1}{4^{k^2-k+2}}`$. and a Fano variety $`X`$ of dimension $`n`$ and Picard number $`k`$ such that $$\delta (X)\ge \frac{c(k)n^k}{(\mathrm{log}n)^{k-1}}.$$ ###### Proof. We proceed by induction on $`k`$, assuming in addition that the index of $`X`$ is $`\left[\frac{n}{2^{k-2}\mathrm{log}n}\right]+1`$. We just did it for $`k=2`$. Assume the construction is done for $`k\ge 2`$. Let $`n`$ be an integer as in the proposition, and set $$r=\left[\frac{n}{2^{k-1}\mathrm{log}n}\right]\qquad s=n-r.$$ Since $`\frac{n}{\mathrm{log}n}\ge 2^{k-1}`$, the integer $`r`$ is positive. Also, $`r\le \frac{n}{4}`$ because $`n\ge 4`$ and $`k\ge 2`$. 
It implies $$\frac{s}{\mathrm{log}s}\ge \frac{3n}{4\mathrm{log}n}>2^{k-2},$$ hence there exists by induction a Fano variety $`Y`$ of dimension $`s`$, index $`\iota _Y=\left[\frac{s}{2^{k-2}\mathrm{log}s}\right]+1`$ and Picard number $`k`$, such that $$\delta (Y)\ge \frac{c(k)s^k}{(\mathrm{log}s)^{k-1}}.$$ Write $`-K_Y=\iota _YH`$, with $`H`$ ample on $`Y`$, and let $$X=𝐏(𝒪_Y^r\oplus 𝒪_Y((\iota _Y-r-1)H)),$$ with projection $`\pi :X\rightarrow Y`$, so that $`-K_X=(r+1)(L+\pi ^{}H)`$. As above, it implies that $`X`$ is a Fano variety of dimension $`n=r+s`$ when $`r<\iota _Y`$; note also that $`\rho _X=k+1`$ when $`r>0`$, and $`\iota _X=r+1`$. We get again $$(-K_X)^n=(r+1)^n\sum _{i=0}^n\binom{n}{i}L^i\pi ^{}H^{n-i}=(r+1)^n\sum _{i=r}^n\binom{n}{i}((\iota _Y-r-1)H)^{i-r}L^r\pi ^{}H^{n-i}=(r+1)^n\sum _{i=r}^n\binom{n}{i}(\iota _Y-r-1)^{i-r}H^s$$ $$\ge (r+1)^n\left(\binom{n}{r}+(\iota _Y-r-1)^s\right)H^s\ge \left(\frac{n}{2^{k-1}\mathrm{log}n}\right)^n(1+(\iota _Y-r-1)^s)\left(\frac{c(k)s^k}{(\mathrm{log}s)^{k-1}}\right)^s\frac{1}{\iota _Y^s}$$ Note that $$\iota _Y\ge \frac{s}{2^{k-2}\mathrm{log}n}\ge \frac{3n}{2^k\mathrm{log}n}.$$ If $`\frac{n}{\mathrm{log}n}\ge 7\cdot 2^{k-2}`$, we obtain $$\frac{r+1}{\iota _Y}\le \frac{2}{3}+\frac{2^k\mathrm{log}n}{3n}\le \frac{6}{7};$$ if $`\frac{n}{\mathrm{log}n}<7\cdot 2^{k-2}`$, we get $$\frac{1}{\iota _Y}\ge \frac{1}{\frac{s}{2^{k-2}\mathrm{log}s}+1}\ge \frac{1}{\frac{n}{2^{k-2}\mathrm{log}n}+1}\ge \frac{1}{8}.$$ In all cases, $$(1+(\iota _Y-r-1)^s)\frac{1}{\iota _Y^s}\ge \frac{1}{8^n}.$$ It follows that $$\delta (X)\ge \frac{n}{2^{k-1}\mathrm{log}n}\left(\frac{c(k)(3n)^k}{4^k(\mathrm{log}n)^{k-1}}\right)^{1-\frac{r}{n}}\frac{1}{8}\ge n^{1+k-\frac{k}{2^{k-1}\mathrm{log}n}}\frac{1}{2^{k+2}\mathrm{log}n}\frac{c(k)3^k}{4^k(\mathrm{log}n)^{k-1}}\ge n^{k+1}\frac{1}{(\mathrm{log}n)^k}\frac{c(k)3^k}{e\,2^{k+2}4^k},$$ which proves the proposition. ∎
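As a numerical aside (an illustration added here, not part of the proof), the closed formula for the degree makes Proposition 1 easy to check by machine: for $`a=s=n-r`$ and $`r=\left[\frac{n}{\mathrm{log}n}\right]`$ one can evaluate $`(-K_X)^n=\sum _{i=r}^n\binom{n}{i}(r+1)^ia^{i-r}`$ exactly and compare $`\delta (X)`$ with the bound $`\frac{3n^2}{10\mathrm{log}n}`$.

```python
# Numerical check of Proposition 1 (illustrative, not part of the proof):
# for a = s = n - r and r = [n / log n], evaluate the exact degree
# (-K_X)^n = sum_{i=r}^{n} C(n, i) (r+1)^i a^(i-r) and compare its n-th
# root delta(X) with the claimed lower bound 3 n^2 / (10 log n).
import math

def degree(n, r, a):
    return sum(math.comb(n, i) * (r + 1) ** i * a ** (i - r)
               for i in range(r, n + 1))

for n in (3, 10, 50, 226, 500):
    r = int(n / math.log(n))
    a = n - r                               # a = s = n - r
    d = degree(n, r, a)
    delta = math.exp(math.log(d) / n)       # n-th root via logs (d is huge)
    bound = 3 * n ** 2 / (10 * math.log(n))
    print(n, r, round(delta, 1), round(bound, 1), delta >= bound)
```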
no-problem/9904/physics9904044.html
ar5iv
text
# Vorticity Statistics In The Two-Dimensional Enstrophy Cascade ## Abstract We report the first extensive experimental observation of the two-dimensional enstrophy cascade, along with the determination of the high order vorticity statistics. The energy spectra we obtain are remarkably close to the Kraichnan Batchelor expectation. The distributions of the vorticity increments, in the inertial range, deviate only little from gaussianity, and the corresponding structure function exponents are indistinguishable from zero. It is thus shown that there is no sizeable small scale intermittency in the enstrophy cascade, in agreement with recent theoretical analyses. The enstrophy cascade is one of the most important processes in two-dimensional turbulence, and its investigation, at a fundamental level, provides cornerstones for the analysis of atmosphere dynamics. The existence of this cascade was first conjectured by Kraichnan, and later by Batchelor. Both of them proposed that in two-dimensional turbulence, enstrophy injected at a prescribed scale is dissipated at smaller scales, undergoing a cascading process at constant enstrophy transfer rate $`\eta `$; this led to predicting a $`k^{-3}`$ spectrum for the energy, in a range of scales extending from the injection to the dissipative scale. Later, logarithmic corrections were incorporated in the analysis to ensure constancy of the enstrophy transfer rate. The advent of large computers revealed surprising deviations from the classical expectation, especially in decaying systems. It was soon realized that in two-dimensional systems, long-lived coherent structures inhibit the cascade locally, and therefore the self similarity of the process, assumed to fully apply in the classical approach, is broken. Expressions like ”laminar drops in a turbulent background” were coined to illustrate the role of coherent structures in the problem. Along with the observations of unexpected exponents, models emphasizing the role of particular vortical structures, or based on conformal theory, suggested non-classical values. In the recent period however, high resolution simulations underlined that, provided long-lived coherent structures are disrupted, classical behaviour holds; furthermore, theoretical studies suggested the absence of small scale intermittency, placing the direct enstrophy cascade in a position strikingly different from that of the three-dimensional energy cascade. The recent soap film experiments, developing single point measurements of the velocity field, obtained spectral exponents consistent with these views. Nonetheless, investigating small scale intermittency in this problem requires measuring the statistics of quantities such as the vorticity increments, which has not yet been done, either in physical or in numerical experiments. Efforts in this direction were made in the numerical study of Borue, but difficulties arose in obtaining converged results. An analysis of the enstrophy fluxes in the numerical experiment of Babiano et al led the authors to underline the presence of weak intermittency in the enstrophy cascade; thus, although the theory of the problem is at a well advanced stage (at least compared to the three-dimensional situation), it is not yet known, even in situations where self similarity fully holds, to what extent classical theory, based on mean field arguments ”à la Kolmogorov”, can be applied to describe the enstrophy cascade. 
In the physical experiment we present here, we have extensively measured the statistics of the vorticity increments, in a situation where coherent structures have been disrupted. We could show, for the first time, that in the enstrophy cascade the deviations from gaussianity of the small scale statistics of the vorticity field are moderate and, more importantly, scale independent; the corresponding structure function exponents are indistinguishable from zero, so that intermittency is absent from the process, in agreement with the theoretical analyses mentioned above. This observation, made on a physical system, perhaps brings the problem more firmly within the reach of a theoretical understanding, a situation rare in the field. The experimental set-up has been described in a series of papers. It appears that the system we use is a formidable tool for investigating fundamental issues of two-dimensional turbulence. It provides reliable data on quantities reputed hard to measure. We believe this is an interesting situation, since it would be unpleasant to elaborate a rationale for 2D turbulence solely on virtual inputs. Briefly speaking, the flow is generated in a square PVC cell, 15 cm $`\times `$ 15 cm. The bottom of the cell is made of a thin (1 mm thick) glass plate, below which permanent magnets, 5$`\times `$8$`\times `$4 mm in size and delivering a magnetic field of maximum strength 0.3 T, are placed. In order to ensure two-dimensionality, the cell is filled with two layers of NaCl solutions, 2.5 mm thick, with different densities, placed in a stable configuration, i.e. the heavier underlying the lighter. Under typical operating conditions, the stratification remains unaltered for periods of time extending up to 10 min. The interaction of an electrical current driven across the cell with the magnetic field produces local stirring forces. The flow is visualized by using clusters of latex particles, 2 $`\mu `$m in size, placed at the free surface, and the velocity fields $`𝐯(𝐱,t)`$ are determined using a particle image velocimetry technique, implemented on $`64\times 64`$ grids. In such experiments, the dissipative scale for the enstrophy cascade, defined by $`l_d=\eta ^{-1/6}\nu ^{1/2}`$ (where $`\nu `$ is the kinematic viscosity of the working fluid), is on the order of 1 mm; it is thus unresolved. Moreover, since $`l_d`$ lies below the layer thickness, it is reasonable to consider that the way enstrophy is dissipated in our system is not purely two-dimensional. Concerning measurement accuracy, we estimate, from the measurement of the local divergence, that the accuracy on the velocity is a few percent and that on the vorticity is 10%. In the experiments we describe here, the magnets are arranged into four triangular aggregates of roughly one hundred units, having the same magnetic orientation, as shown schematically in Fig 1. By doing so, the electromagnetic forcing is defined on a large scale, and its spatial structure does not favour any particular permanent pattern. The electrical current is unsteady: it is a non-periodic, zero-mean square waveform, of amplitude equal to 0.75 A (see Fig 1). The corresponding Reynolds number, defined as the square of the ratio of the forcing to the dissipative scale, is on the order of $`10^3`$; this estimate is one order of magnitude above the largest simulation performed on the subject using normal viscosity. In the statistically steady state, the instantaneous flow pattern consists of transient recirculations of sizes comparable to one fourth of the box size. 
The formation of permanent large scale structures, which might tend to break the self similarity of the process, seems disrupted by our particular forcing. The instantaneous vorticity field in the statistically stationary state is shown in Fig 2. We see elongated structures, in the form of filaments or ribbons, some of them extending across a large fraction of the cell. At variance with the decaying regimes, and consistently with the above discussion, we have not seen any long-lived vorticity concentration, i.e. one persisting more than a few seconds. This is further confirmed by a measurement of the flatness of the vorticity distribution, a diagnostic previously introduced in the literature, which is found slightly above the gaussian value in our case. The presence of coherent structures would have been associated with much larger values of this quantity. The isotropy of the vorticity field is not obvious from the inspection of a single realisation, such as the one of Fig 2; nonetheless, as will be shown later, the overall anisotropy level, obtained after statistical averaging, turns out to be reasonably small. The spectrum of the velocity field, averaged over 200 realisations in the statistically steady state, is shown in Fig 3. The forcing wave number $`k_f`$ ∼ 0.6 cm<sup>-1</sup> corresponds to the location of the maximum of the energy spectrum; it is associated with an injection scale $`l_f=\frac{2\pi }{k_f}`$ estimated at 10 cm, a value consistent with the size of our permanent magnet clusters. The wave-number associated with the stratified fluid layer may be defined as $`k_l=\frac{2\pi }{b}`$ ∼ 12 cm<sup>-1</sup> (where b is the fluid thickness). This wave-number, together with the sampling wave-number, which is 25 cm<sup>-1</sup>, is well outside the region of interest. Fig 3 shows that in the high wave number region, i.e. above 9 cm<sup>-1</sup>, the spectrum is flat. This region is dominated by white noise; it reflects a limitation of the PIV technique in resolving low velocity levels at small scales. The interesting feature is that there exists a spectral band, lying between $`k_f`$ and $`k_{max}`$ ∼ 7 cm<sup>-1</sup>, uncontaminated by a possible interaction with the layer wave-number, in which a power law behaviour is observed. The corresponding exponent is close to -3, as shown on the compensated spectrum. A direct measurement of the exponent, performed by using a least square fit in the scaling region, leads to proposing the following formula for the spectrum: $$E(k)\propto k^{-3.0\pm 0.2}$$ The exponent we find is thus close to the classical expectation. There is no steepening of the spectrum, which could be attributed, as in decaying systems, to the presence of coherent structures. Further analysis of the vorticity field shows homogeneity and stationarity of the process. Isotropy is also obtained, albeit only roughly, as shown in Fig 4: to estimate the anisotropy level, we follow circles, embedded in the inertial range, in the spectral plane of Fig 4, and determine by how much the spectral energy departs from a constant value along such circles. This leads to an anisotropy level on the order of 15% in the central region of the inertial range; this is acceptably low. Determining the Kraichnan Batchelor constant is a delicate issue, which relies entirely on the measurement of the enstrophy pumping rate $`\eta `$. 
The constant we discuss here, called C’, is defined by expressing the energy spectrum in the form: $$E(k)=C^{\prime }\eta ^{2/3}k^{-3}$$ To measure C’, we determine the spectral enstrophy transfer rate from below k to above k, $`\mathrm{\Lambda }(k)`$, and search for a plateau within the scaling range of the energy spectrum. $`\mathrm{\Lambda }(k)`$ is found positive throughout this range, which confirms that the cascade is forward. To determine $`\eta `$, we further average $`\mathrm{\Lambda }(k)`$ between $`k_f`$ and $`k_{max}`$. This procedure provides the following estimate for the Kraichnan Batchelor constant C’: $$C^{\prime }\simeq 1.4\pm 0.3$$ This estimate agrees with that found in the high resolution study quoted above, for which values ranging between $`1.5`$ and $`1.7`$ have been proposed. We provide here the first experimental measurement ever achieved for this constant. We now turn to the intermittency problem. Fig 5 shows a set of five distributions of the vorticity increments, obtained for different inertial scales ranging between 2 and 9 cm. As usual, in order to analyze shapes, the pdfs have been renormalized so as to impose that their variance be equal to unity. The shapes of the pdfs are not exactly the same, but there is no systematic trend with the scale across which the vorticity increment is determined. Within experimental error, the distributions roughly collapse onto a single curve; the tails of such an average distribution are broader than a gaussian curve, but here again the deviations have a moderate amplitude and are scale independent. It is thus difficult here, from the inspection of the distributions, to establish the presence of intermittency in the enstrophy cascade. The analysis of the structure functions of the vorticity, shown in Fig 6, confirms this statement. These structure functions are defined by: $$S_p(r)=<(\omega (𝐱+𝐫)-\omega (𝐱))^p>$$ in which $`𝐱`$ and $`𝐫`$ are vectors, and r is the modulus of $`𝐫`$. The brackets mean double averaging, both in space, throughout the plane domain, and in time, between 20 and 280 s. We use here $`10^5`$ data points to determine the structure functions; this allows us to determine them up to twelfth order, because of the near gaussianity of the pdfs. Fig 6 thus represents a series of vorticity structure functions $`S_p(r)`$, obtained in such conditions, emphasizing the inertial domain, i.e. with r varying between 1 and 10 cm. 
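For the reader’s convenience, the estimation of such structure functions from a gridded vorticity field can be sketched as follows; this is a minimal numerical illustration on synthetic data, not the processing chain actually applied to our PIV fields.

```python
# Minimal sketch of a vorticity structure function estimate on a periodic
# grid: S_p(r) = < (omega(x + r) - omega(x))^p >, averaged over positions
# and over the two axis directions. Synthetic random data stand in for the
# measured 64x64 PIV vorticity fields.
import numpy as np

def structure_functions(omega, p_orders, separations):
    """omega: 2D array; returns S_p(r) for integer pixel separations r."""
    out = {}
    for r in separations:
        # increments along x and y, using periodic wrapping of the grid
        dx = np.roll(omega, -r, axis=1) - omega
        dy = np.roll(omega, -r, axis=0) - omega
        incr = np.concatenate([dx.ravel(), dy.ravel()])
        out[r] = {p: np.mean(incr ** p) for p in p_orders}
    return out

rng = np.random.default_rng(0)
omega = rng.normal(size=(64, 64))      # stand-in vorticity field
S = structure_functions(omega, p_orders=(2, 4, 6), separations=(1, 2, 4, 8))
# In the enstrophy cascade the measured S_p(r) are expected to be
# essentially independent of r (exponents indistinguishable from zero).
print(S[1][2], S[8][2])
```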
We obtain that classical theory is strikingly successful ; there is no sizeable small scale intermittency, and the vorticity statistics departs only weakly from gausiannity, at all scales. Because of these particular features, one may perhaps hope this problem be brought to theoretical understanding. The role of coherent structures, long emphasized on, is indeed important and interesting, but should probably be considered as a separate issue. This work has been supported by Ecole Normale Supérieure, Universités Paris 6 et Paris 7, Centre National de la Recherche Scientifique, and by EEC Network Contract FMRX-CT98-0175. The authors wish to thank G Falkovitch, V Lebedev, R Benzi for enlightening discussions concerning this study.
no-problem/9904/astro-ph9904422.html
ar5iv
text
# Beyond the standard model for the cosmic X–ray background ## 1 Introduction The cosmic X–ray background (XRB) above ∼1 keV is the result of the integrated emission of discrete sources, since the contribution of any intergalactic hot medium must be negligible (Wright et al. 1994). In the soft X–ray band, from 0.5 to 2 keV, the largest fraction of the XRB has already been resolved into sources (Hasinger et al. 1998), most of which turned out to be broad line active nuclei (Schmidt et al. 1998), i.e. Quasi Stellar Objects (QSOs) and Seyfert galaxies of type 1. The spectra of these sources are however too steep to reproduce also the hard XRB at several tens of keV, where the bulk of the energy resides, and a population of objects with flatter spectra is therefore required. The most popular synthesis models of the XRB are based on the so–called unification schemes for Active Galactic Nuclei (AGNs), where the orientation of a molecular torus surrounding the nucleus determines the classification of the source. At a zeroth–order approximation level, sources observed along lines of sight free from the torus obscuration should have unabsorbed X–ray spectra and optical broad lines (type 1 AGNs), while sources seen through the torus should have absorbed X–ray spectra and appear as narrow line objects in the optical (type 2 AGNs, e.g. Seyfert 2 galaxies). In this framework, type 2 AGNs provide a natural class of sources with X–ray spectra flattened by absorption. The intrinsic X–ray luminosity function (XLF) of type 2 objects is unknown and has usually been assumed to be the same as the one derived for type 1s (e.g. Boyle et al. 1993), apart from a normalization factor. The cosmological evolution has also been taken to be identical for type 1s and type 2s. Under these assumptions it has been shown that the broad band 3–100 keV spectrum of the XRB can be reproduced by an appropriate mix of unabsorbed and absorbed AGNs (Matt & Fabian 1994; Madau, Ghisellini & Fabian 1994; Comastri et al. 1995, hereafter Co95). The number ratio $`R`$ of type 2 to type 1 objects, as well as the distribution of the absorbing column densities $`N_\mathrm{H}`$, are key parameters of the models; these have been assumed to be independent of redshift and of intrinsic source luminosity, and have been treated as free parameters in the fitting procedure. Since the overall parameter space of the models is quite large and a good fit to the XRB can be obtained with different sets of values, it is important to compare the model predictions with the largest number of observational constraints. Indeed, Co95 showed that the source counts in the 0.5–2 keV and 2–10 keV energy bands, as well as the redshift distributions, could successfully be reproduced by their model. Very recently an additional set of observational constraints has become available. Deep surveys from ROSAT have extended our knowledge to the low luminosity part of the AGN XLF (Miyaji, Hasinger & Schmidt 1999a, hereafter Mi99a). Contrary to previous results (Boyle et al. 1993; Page et al. 1996; Jones et al. 1997), a pure luminosity evolution (PLE) of AGNs with redshift is no longer consistent with the data, and a luminosity dependent density evolution (LDDE) is required. 
From the X–ray data of an optically selected sample of Seyfert galaxies, Risaliti, Maiolino & Salvati (1999) have determined the $`N_\mathrm{H}`$ distribution for local Seyfert 2 galaxies, pointing out that a significant fraction of sources have columns exceeding $`N_\mathrm{H}=10^{25}`$ cm<sup>-2</sup> and are therefore completely thick to Compton scattering. The $`R`$ ratio between type 2s and type 1s has been determined in the local Universe for low luminosity AGNs, i.e. Seyfert galaxies (Maiolino & Rieke 1995), while the existence of a relevant number of high luminosity absorbed sources, the so–called QSO 2s, which is a basic assumption of previous models, is still uncertain (Akiyama et al. 1998). An observational constraint on the QSO 2 number density can be obtained from the infrared source counts. Indeed, QSO 2s are expected to have strong infrared counterparts, since the dust present in the torus should re–emit in the IR band the nuclear radiation absorbed by the gas. The ultraluminous infrared galaxies (ULIRGs) discovered by IRAS are the only local objects with QSO–like bolometric luminosities (Soifer et al. 1986; Kim & Sanders 1998). Thus, even if all ULIRGs were powered by a hidden AGN, the local QSO 2s could not be more numerous than ULIRGs. Finally, source counts in the 5–10 keV band have been derived for the first time by the BeppoSAX satellite with the HELLAS survey (Giommi et al. 1998; Comastri et al. 1999). In the present paper we test the standard synthesis model to verify whether it remains compatible with the new data. These data still leave some latitude in important parameters of the model, and various choices are possible to fit the XRB equally well. However, in all cases we find moderate but consistent evidence that at least some of the standard assumptions have to be relaxed: extra hard spectrum AGNs are needed at intermediate or high redshifts, in addition to those expected in the usual scenario. The additional sources could be analogous to local Seyfert 2s, if they evolve faster than type 1s, or they could be other astrophysical sources not yet enlisted among the contributors to the XRB. We discuss the observations which could distinguish between the alternatives. Throughout this paper the deceleration parameter and the Hubble constant are given the values $`q_0=0.5`$ and $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. ## 2 AGN X–ray properties ### 2.1 The spectra After the observations of X–ray satellites like GINGA, ASCA and BeppoSAX, different components have been recognized in the X–ray spectra of AGNs. Starting with Sey 1 galaxies, the basic component is a power law with energy spectral index $`\alpha `$ ∼ 0.9 (Nandra et al. 1997a) and an exponential cut off at high energies. A mean value for the $`e`$–folding energy can probably be set at ∼300 keV, although the observed dispersion is very high (Matt 1998). Some of the primary radiation is reprocessed by an accretion disc and/or the torus around the nucleus, producing a flattening of the spectral slope above ∼10 keV, and a strong iron line at 6.4 keV (Nandra & Pounds 1994). Below 1–1.5 keV a radiation excess with respect to the power law emission is detected in a large fraction of Sey 1s (sometimes resulting from a misfit of the “warm absorber” component). The spectrum of QSO 1s is similar to that of Sey 1s, but there is no evidence for the iron line and the reflection hump to be as common (Lawson & Turner 1997). Assuming that the accretion disc produces most of the line and hump, Nandra et al. 
(1997b) ascribe these differences to a higher ionization state of the disc in higher accretion rate sources, so that in QSO 1s the spectral features due to photoelectric processes are quenched. Recently, Vignali et al. (1999) have derived a mean spectral slope of $`\alpha =0.67\pm 0.11`$ from a sample of 5 QSO 1s at redshifts above 2. Although the statistics are poor, this result seems to suggest that the spectra of high redshift QSOs are flatter than those of local ones. In Sey 2 galaxies the power law is cut off by photoelectric absorption at energies increasing with the column density of the intercepted torus. For highly absorbed objects the X–ray luminosity may be dominated by the fraction of the nuclear radiation which is reflected off the torus surface towards the observer. When $`N_\mathrm{H}>10^{25}`$ cm<sup>-2</sup> the obscuring medium is completely thick to Compton scattering and the spectrum is a pure reflection continuum as described by Lightman & White (1988), with a 2–10 keV luminosity about two orders of magnitude lower than that of Sey 1s (Maiolino et al. 1998). On the contrary, when $`N_\mathrm{H}<10^{24}`$ cm<sup>-2</sup> the medium is Compton–thin and the spectrum is dominated by the component transmitted through the torus. In the range $`10^{24}<N_\mathrm{H}<10^{25}`$ cm<sup>-2</sup> both a transmitted and a reflected component contribute to the observed luminosity, the Circinus galaxy (Matt et al. 1999) being a typical example. Also, Sey 2 galaxies often have soft emission in excess of the absorbed power law (Turner et al. 1997). These soft excesses are however two orders of magnitude weaker than those of Sey 1s of the same intrinsic luminosity, and their nature is still unclear (probably scattered or starburst radiation). ### 2.2 The XLF and cosmological evolution The most recent results about the AGN XLF and cosmological evolution have been obtained by Mi99a by combining data from several ROSAT surveys. Down to a limiting flux of $`10^{-15}`$ erg s<sup>-1</sup> cm<sup>-2</sup>, reached by the deep survey in the Lockman Hole, they collected a sample of about 670 sources, which is the largest X–ray selected sample of AGNs presently available. The local XLF is described with a smoothed double power law of the following form: $$\varphi (L_\mathrm{x})=\frac{\mathrm{d}\mathrm{\Phi }(L_\mathrm{x})}{\mathrm{d}\mathrm{log}L_\mathrm{x}}=A\left[(L_\mathrm{x}/L_{\ast })^{\gamma _1}+(L_\mathrm{x}/L_{\ast })^{\gamma _2}\right]^{-1},$$ where $`L_\mathrm{x}`$ is the observed 0.5–2 keV X–ray luminosity, ranging from $`10^{41.7}`$ to $`10^{47}`$ erg s<sup>-1</sup>. The best fit values for the cosmology adopted here are: $`A=(1.57\pm 0.11)\times 10^{-6}`$ Mpc<sup>-3</sup>, $`L_{\ast }=0.57_{-0.19}^{+0.33}\times 10^{44}`$ erg s<sup>-1</sup>, $`\gamma _1=0.68\pm 0.18`$ and $`\gamma _2=2.26\pm 0.95`$. The XLF has been found to evolve from redshift 0 up to $`z_{cut}=1.51\pm 0.15`$, with an evolution rate which drops at low luminosities according to the factor: $`e(z,L_\mathrm{x})=\{\begin{array}{cc}(1+z)^{\mathrm{max}(0,\,p1-\alpha (\mathrm{log}L_\mathrm{a}-\mathrm{log}L_\mathrm{x}))}\hfill & L_\mathrm{x}<L_\mathrm{a}\hfill \\ (1+z)^{p1}\hfill & L_\mathrm{x}\ge L_\mathrm{a};\hfill \end{array}`$ (3) here $`p1=5.4\pm 0.4`$, $`\alpha =2.3\pm 0.8`$ and $`\mathrm{log}L_\mathrm{a}=44.2`$ (fixed). 
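For illustration, the parametrization just quoted is straightforward to evaluate; the sketch below is our transcription of the Mi99a formulae at their best fit values, with the evolution assumed frozen above $`z_{cut}`$ (an interpretation on our part).

```python
# Evaluation of the Mi99a luminosity function and LDDE evolution factor
# quoted above (our transcription, for illustration only).
import math

A, L_STAR = 1.57e-6, 0.57e44          # Mpc^-3 and erg s^-1
G1, G2 = 0.68, 2.26                   # faint- and bright-end slopes
P1, ALPHA, LOG_LA, Z_CUT = 5.4, 2.3, 44.2, 1.51

def phi(lx):
    """Local XLF dPhi/dlogLx: smoothed double power law."""
    return A / ((lx / L_STAR) ** G1 + (lx / L_STAR) ** G2)

def evol(z, lx):
    """LDDE factor e(z, Lx); assumed constant beyond z_cut here."""
    zz = min(z, Z_CUT)
    if math.log10(lx) >= LOG_LA:
        exponent = P1
    else:
        exponent = max(0.0, P1 - ALPHA * (LOG_LA - math.log10(lx)))
    return (1.0 + zz) ** exponent

for lx in (1e43, 1e44, 1e45):         # erg s^-1, within the fitted range
    print(lx, phi(lx), evol(1.0, lx), phi(lx) * evol(1.0, lx))
```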
The X–ray AGNs have been observed at redshifts up to $`z=4.6`$ and there is no evidence for a decline in their space density beyond $`z\sim 3`$, unlike what is found in optical (Schmidt, Schneider & Gunn 1995) and radio surveys (Shaver et al. 1997). We note that the XLF parametrization of Mi99a is a preliminary result and is not a unique representation of the ROSAT data. The extrapolation of the high redshift XLF into the low luminosity range, where few data are available, is not well constrained. Indeed, the number of low luminosity, high redshift AGNs could be higher than in the Mi99a representation (Hasinger et al. 1999). Another cause of uncertainty is the possible presence of type 2 AGNs in the Mi99a sample: unlike previous works, where only (optical) type 1 AGNs were included, Mi99a do not discriminate between type 1s and type 2s; some of the latter could then appear in the ROSAT bandpass because of their soft excesses and, for sources at high redshifts, because of the $`K`$–correction. Objects with type 2 optical spectra are relatively rare in the ROSAT sample (Hasinger et al. 1999). As for the X–ray spectral type, within any given model the raw counts can be corrected for the contribution of the absorbed sources, and these can be subtracted from the XLF: in the following we also consider this approach, and investigate the robustness of our conclusions with respect to the correction. In general, the correspondence between optical and X–ray spectral classification is broadly verified in the local Universe, albeit with some blurring (see Section 2.3); at high redshifts the question is still unsettled. Previous works on the XLF of AGNs used a PLE model both in the soft X rays, by combining observations from ROSAT and Einstein (Boyle et al. 1993, 1994; Jones et al. 1997), and in the hard X rays, from ASCA data (Boyle et al. 1998a). In the Mi99a data the fit with a PLE model is rejected at high significance. A pure density evolution model is marginally rejected, and LDDE models are preferred, even if several variants are still being discussed. ### 2.3 The number and column densities of local type 2 AGNs In the local Universe 5–10% of the galaxies show Seyfert activity (Maiolino & Rieke 1995; Ho, Filippenko & Sargent 1997). From a sample of $`\sim 90`$ nearby Seyfert galaxies limited in the B magnitude of the host galaxy, Maiolino & Rieke (1995) derived an estimate for the local ratio $`R`$ of type 2 to type 1 Seyferts. From our point of view Seyfert types 1.8, 1.9 and 2, which have flat X–ray spectra due to absorption by cold gas, can be grouped as type 2 objects, while types 1, 1.2 and 1.5, which have steep X–ray spectra without significant cold absorption ($`N_\mathrm{H}<10^{21}`$ cm<sup>-2</sup>), can be grouped as type 1s. Here it is noted that the relation between Seyfert type and X–ray absorption is not one-to-one. By observing with ROSAT the complete sample of Piccinotti et al. (1982), Schartel et al. (1997) showed that at least a fraction of type 1 AGNs suffer from X–ray absorption by more than $`N_\mathrm{H}=10^{21}`$ cm<sup>-2</sup>. However this fraction (grouping Seyfert 1, 1.2 and 1.5) is only 20%, and on average their $`N_\mathrm{H}`$ does not exceed $`10^{22}`$ cm<sup>-2</sup>. The inclusion of some moderate–absorption type 1s should not significantly change our results.
By considering Seyfert types 1.8, 1.9 and 2 as type 2s, and Seyfert types 1, 1.2 and 1.5 as type 1s, Maiolino & Rieke found $`R`$=4.0$`\pm `$0.9, in agreement with the results of Osterbrock & Martel (1993) and, more recently, Ho et al. (1997). From the Maiolino & Rieke sample Risaliti et al. (1999) have derived a distribution of X–ray column densities for Sey 2s. The selection of the sample by means of optical narrow emission lines, rather than in the X–rays, should avoid biases against X–ray absorbed sources. It turned out that most of the sources are affected by strong absorption, $`\sim 75\%`$ of the objects having $`N_\mathrm{H}>10^{23}`$ cm<sup>-2</sup>. Furthermore, a significant fraction of sources ($`>25\%`$) are absorbed by $`N_\mathrm{H}>10^{25}`$ cm<sup>-2</sup>. Their results are shown in Fig. 1 and compared with the $`N_\mathrm{H}`$ distribution assumed by Co95. In the limited luminosity range of Seyfert galaxies Risaliti et al. (1999) did not find evidence of a correlation between absorption and luminosity. In the wider luminosity domain which includes the QSOs the evidence is contradictory. Recent results from the IRAS 1–Jy survey (Kim & Sanders 1998) show that the space density of ULIRGs at $`z\stackrel{<}{_{}}0.1`$ is similar to that of optically selected QSOs of comparable bolometric luminosity. By considering that the number of obscured QSOs cannot exceed that of ULIRGs, and assuming that every ULIRG is powered only by nuclear activity, we can set a conservative upper limit of 2 on the $`R`$ ratio at high luminosities. The actual value of $`R`$ might be significantly lower. Indeed, there is evidence that the fraction of AGN–powered ULIRGs decreases from 50% for IR luminosities $`L_{\mathrm{IR}}\stackrel{>}{_{}}1.7\times 10^{46}`$ erg s<sup>-1</sup> to 15% below this value (Lutz et al. 1998), the remaining ones being dominated by starburst activity. ## 3 The model ### 3.1 The XRB spectrum Our models are completely analogous to those of the canonical lineage, the only differences arising from updated input parameters. A key set of such parameters is the one referring to the XLF and its cosmological evolution, and for model A1 we adopt the results of Mi99a. Strictly speaking, the XLF of Mi99a refers to the observed 0.5–2 keV luminosities and could be considered as the 0.5–2 keV XLF in the rest frame only by assuming simple power law spectra with energy index $`\alpha =1`$ (i.e. zero $`K`$–correction). Indeed, in the rest frame energy range seen by ROSAT at different redshifts, the spectra of type 1 AGNs assumed in our models do not differ significantly from a power law with $`\alpha =1`$. Since we refer the Mi99a XLF to unabsorbed AGNs, without correcting for the contribution of absorbed AGNs to the ROSAT counts, model A1 might be biased in favor of a soft XRB. We discuss the strength of this bias in connection with models A2 and B in the following. The absorption distribution in type 2 AGNs is no longer derived from best fitting; instead, it is taken equal to the local one, as measured by Risaliti et al. (1999). The objects for which only a lower limit is available, $`N_\mathrm{H}>10^{24}`$ cm<sup>-2</sup>, have been assigned to the bin $`10^{24}<N_\mathrm{H}<10^{25}`$ cm<sup>-2</sup>.
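A binned column-density distribution of this kind is easy to handle numerically. In the sketch below the fractions are NOT the measured Risaliti et al. values (which are not tabulated here); they are placeholder numbers chosen only to satisfy the two constraints quoted above (~75% above $`10^{23}`$ cm<sup>-2</sup> and >25% above $`10^{25}`$ cm<sup>-2</sup>), spread over the log$`N_\mathrm{H}`$ bins used in the modeling.

```python
import numpy as np

# Placeholder N_H distribution over the model's log N_H bins (assumption:
# illustrative fractions only, consistent with the quoted constraints).
LOG_NH_BINS = np.array([21.5, 22.5, 23.5, 24.5, 25.5])
FRACTIONS   = np.array([0.05, 0.20, 0.25, 0.25, 0.25])
assert abs(FRACTIONS.sum() - 1.0) < 1e-12

def sample_log_nh(rng, n):
    """Draw log N_H values for n type 2 objects from the binned distribution."""
    return rng.choice(LOG_NH_BINS, size=n, p=FRACTIONS)

rng = np.random.default_rng(0)
print(sample_log_nh(rng, 10))
```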
Because of the evidence that the $`R`$ ratio decreases with the intrinsic luminosity of the AGNs, at least locally, we introduce a change with respect to the canonical scenario: the XLF is divided into two luminosity regions as follows: $$\varphi (L_\mathrm{x})=\varphi (L_\mathrm{x})e^{-\frac{L_\mathrm{x}}{L_s}}+\varphi (L_\mathrm{x})\left(1-e^{-\frac{L_\mathrm{x}}{L_s}}\right),$$ with the 0.5–2 keV $`e`$–folding luminosity set equal to $`L_s`$=10<sup>44.3</sup> erg s<sup>-1</sup>, following Miyaji, Hasinger & Schmidt (1999b; hereafter Mi99b). The first and second terms represent the XLF of Sey 1s and QSO 1s, respectively, and, apart from the exponential factors, are equal to the Mi99a functions as given in Section 2.2. The XLFs of Sey 2s and QSO 2s are $`R_\mathrm{S}`$ and $`R_\mathrm{Q}`$ times the XLF of the corresponding type 1 objects. In this parametrization we can explore various hypotheses, including for instance the effects of eliminating the QSO 2s altogether. Following Co95, and the experimental evidence referred to in Section 2.1, we assume that the basic spectrum for type 1 AGNs is a power law with energy index $`\alpha =0.9`$ and exponential cut off with $`e`$–folding energy $`E_c=320`$ keV. Below 1.5 keV the soft excess is modeled with a power law of index 1.3. A reflection component from the accretion disc has been included for Sey 1s with relative normalization $`f_d=1.29`$ (Co95). Besides the disc, we have also included for Sey 1s a torus reflection component which is normalized in accordance with the prescriptions of Ghisellini, Haardt & Matt (1994). In type 1 AGNs the relative contribution of the torus at 30 keV is 29% and 55% for $`N_\mathrm{H}=10^{24}`$ and $`10^{25}`$ cm<sup>-2</sup>, respectively. If we assume that the column density of the torus is approximately the same for all obscured lines of sight, from the measured $`N_\mathrm{H}`$ distribution we find that the torus contributes on average 28% at 30 keV. The same disc and torus reflection components of Sey 1s have been included also in the QSO spectra: this is against the evidence at low redshifts, but mimics the harder power law seen at high redshifts (Vignali et al. 1999), where most of the XRB is produced. If anything, this assumption tends to reduce the need for additional hard spectrum sources, thus strengthening our results. Sey 2 spectra have been computed for different amounts of intrinsic absorption (log$`N_\mathrm{H}`$=21.5, 22.5, 23.5, 24.5, 25.5) to cover all the observed column densities. In the Compton thin regime the adopted spectrum is that of Sey 1s with a photoelectric cut off and a lower amount of disc reflection ($`f_d=0.88`$, Co95). In this regime the component reflected by the torus does not contribute significantly to the observed radiation (5% at 30 keV for log$`N_\mathrm{H}`$=23.5, inclusive of orientation effects). For the sources with log$`N_\mathrm{H}`$=25.5 we have adopted a pure reflection continuum. The normalization of the spectrum is determined so as to reproduce the contribution of thick tori to the flux of Sey 1s (55% at 30 keV) after correcting for orientation effects (Ghisellini, Haardt & Matt, 1994). This approach predicts that the 2–10 keV continuum luminosity of completely Compton thick sources is about 2% of the typical luminosity of Sey 1s, in agreement with the results of Maiolino et al. (1998).
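A minimal sketch of the Compton-thin absorbed spectra just described is given below. The photoelectric cross-section is a crude $`E^{-3}`$ stand-in for a real tabulation (e.g. of the Morrison & McCammon type) — an assumption of this sketch, not the paper's choice — and disc/torus reflection is omitted since, as stated above, it is a small correction in this regime.

```python
import numpy as np

SIGMA_1KEV = 2.0e-22   # cm^2 per H atom at 1 keV: crude stand-in for a real
                       # photoelectric cross-section table (assumption)

def sigma_pe(E):
    """Toy photoelectric cross-section, falling roughly as E^-3."""
    return SIGMA_1KEV * E ** (-3.0)

def sey2_thin(E, logNH, alpha=0.9, E_fold=320.0):
    """Compton-thin Sey 2 continuum: cutoff power law seen through N_H.

    Valid only for log N_H <~ 24; reflection and the soft scattered
    component are left out of this sketch. Energies in rest-frame keV.
    """
    primary = E ** (-alpha) * np.exp(-E / E_fold)
    return primary * np.exp(-sigma_pe(E) * 10.0 ** logNH)

E = np.logspace(-0.3, 2.0, 300)          # ~0.5 to 100 keV
spectra = {nh: sey2_thin(E, nh) for nh in (21.5, 22.5, 23.5)}
```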
A composite reflected/transmitted spectrum has been considered for Circinus–like sources with log$`N_\mathrm{H}`$=24.5, where the reflected and transmitted components have been normalized in analogy with the previous cases.<sup>1</sup> (<sup>1</sup>For log $`N_\mathrm{H}`$=24.5 the effects of Compton scattering begin to be important. We have checked the error introduced by our approximation with respect to the Monte Carlo simulations of Matt, Pompilio & La Franca (1999). The counts remain unaffected, while the model XRB at 30 keV should be reduced by 10–15%, and even more type 2s should be included in order to maintain the agreement with the data.) We have modeled the soft excess of Sey 2s with a power law of index 1.3 and a normalization at 1 keV which is 3% of the primary de–absorbed power law. In analogy with type 1s, the spectra of QSO 2s (if present at all) are assumed to be identical to those of Sey 2s. Finally, we have added to the input spectra an iron emission line at 6.4 keV. Following Gilli et al. (1999) we have considered lines with different equivalent widths according to the spectral absorption, and have not included the iron line in the spectra of QSOs. In our model A1 we assume $`R_\mathrm{Q}=0`$, i.e. we do not include QSO 2s. The Mi99a XLF is integrated over the range $`10^{41}<L_\mathrm{x}<10^{49}`$ erg s<sup>-1</sup> up to $`z_{max}=4.6`$. The contribution of clusters of galaxies has been included by considering thermal bremsstrahlung spectra with a distribution of temperatures. We have adopted the 2–10 keV luminosity vs temperature relation of David et al. (1993), and the 2–10 keV X–ray luminosity function of Ebeling et al. (1997). The cluster XLF is assumed not to evolve, and is integrated in the range $`10^{42}<L_\mathrm{x}<10^{47}`$ erg s<sup>-1</sup> up to $`z_{max}=2`$. The overall XRB spectrum resulting from the model is shown in Fig. 2 as a solid line, which is the sum of the contributions of the other labeled curves. In order to fit the observed XRB spectrum we need a ratio $`R_\mathrm{S}=4.2`$, in good agreement with the local value. Above $`\sim 1`$ keV, where the XRB is completely extragalactic, the model provides a good fit to the data from ASCA (Gendreau et al. 1995) and the compilation of Gruber (1992) based on HEAO–1 A2 measurements. The contribution of the AGN iron line to the model XRB is found to be less than 7% at $`6.4/(1+z_{cut})`$ keV, in agreement with the results of Gilli et al. (1999) obtained in a different framework (PLE). Clusters of galaxies are found to contribute to the model XRB by $`\sim 12\%`$ at 1 keV, in agreement with the results of Oukbir, Bartlett & Blanchard (1997). ### 3.2 The X–ray source counts We now compare the predictions of model A1 with the observed source counts in different X–ray bands. The results in the soft 0.5–2 keV band are shown in Fig. 3. The expected AGN counts, which are dominated by unabsorbed sources, agree with the data of Mi99a; the expected cluster counts agree with the data of Jones et al. (1998), Rosati et al. (1995), and De Grandi et al. (1999). Since the XLF and its evolution are derived from the ROSAT counts, this is no more than a self–consistency check; the slight overprediction at low fluxes ($`\sim `$30% at $`2\times 10^{-15}`$ erg cm<sup>-2</sup> s<sup>-1</sup>) is due to the $`K`$–displaced type 2 objects, as anticipated previously. In the hard 2–10 keV band the predictions of the model have to be compared with the results of HEAO–1 A2 (Piccinotti et al. 1982), ASCA (Cagnoni, Della Ceca & Maccacaro 1998; Ogasaka et al.
1998; Ueda et al. 1998) and BeppoSAX (Giommi et al. 1998). At the flux limit of $`S\sim 3\times 10^{-11}`$ erg s<sup>-1</sup> cm<sup>-2</sup> Piccinotti et al. (1982) found that AGNs and clusters of galaxies have the same surface density of $`1.1\times 10^{-3}`$ deg<sup>-2</sup>. However, the AGN density found by these authors is likely to be overestimated by $`\sim 20\%`$ due to the local supercluster (Co95). As shown in Fig. 4, after the Piccinotti et al. point is corrected by 20%, the model is in agreement with the data within $`1\sigma `$. At fainter fluxes, on the contrary, the disagreement cannot be removed. At $`S\sim 2\times 10^{-13}`$ erg s<sup>-1</sup> cm<sup>-2</sup> the AGN surface density expected in our model is about a factor of 2 lower than the measurement of Cagnoni et al. (1998). This corresponds to a $`\sim 2\sigma `$ discrepancy. When comparing the model with the data of the ASCA Large Sky Survey (Ueda et al. 1998) the discrepancy is even larger. The situation is worse still in the 5–10 keV band. The only available counts in this band are from the HELLAS survey performed by BeppoSAX (Giommi et al. 1998; Comastri et al. 1999). At the flux of $`2\times 10^{-13}`$ erg s<sup>-1</sup> cm<sup>-2</sup> the observed surface density is $`2.7\pm 0.7`$ deg<sup>-2</sup>, which is a factor of 4 ($`\sim 3\sigma `$) above the predictions (Fig. 5). ### 3.3 Correction for absorbed sources One of the main assumptions of model A1 is that the XLF and evolution derived by Mi99a refer to unabsorbed AGNs. However, as discussed in Sect. 2.2, this is likely not the case. In order to evaluate the effects of our assumption, we have computed a different variant, A2, which adopts the XLF and LDDE of Mi99b. These authors allow self–consistently within their model for the $`K`$–correction, and for the absorbed sources which, especially at faint fluxes and high redshifts, appear in the ROSAT counts; thus the parameters they provide refer to unabsorbed sources only. Of course, our model is different from theirs, and the self–consistency is lost; however, this is likely to be a higher order effect, and for a first order estimate we can include their parameters in our computation. The results are shown in Fig. 6. Note that around 1 keV the unabsorbed sources produce 30% of the XRB, exactly as in Mi99b. Note also that in order to fit the XRB spectrum a ratio $`R_\mathrm{S}=13`$ is now required, which is much higher than the local value and implies additional hard spectrum sources. Since the contribution of type 2s has increased with respect to A1, in order to make up for the reduced contribution of type 1s, the mean spectrum of the population producing the XRB is harder, and the discrepancies between the model predictions and the hard counts are somewhat reduced (though not completely eliminated).
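The kind of logN–logS integration behind these comparisons can be sketched compactly. The following toy calculation (my sketch, not the paper's code) counts type 1 AGNs only, with zero $`K`$–correction, in the $`q_0=0.5`$, $`H_0=50`$ cosmology, reusing `phi_local` and `evol` from the earlier snippet; absorbed sources and clusters are omitted, and the grids are deliberately coarse, so the numbers are indicative only.

```python
import numpy as np

C_KM_S, H0 = 2.998e5, 50.0
DH = C_KM_S / H0                       # Hubble length in Mpc
MPC_CM = 3.086e24

def d_com(z):
    """Comoving distance [Mpc] for q0 = 0.5 (Einstein-de Sitter)."""
    return 2.0 * DH * (1.0 - 1.0 / np.sqrt(1.0 + z))

def n_greater_s(S_lim, z_max=4.6):
    """Toy N(>S) [sr^-1] in 0.5-2 keV from the type 1 XLF alone."""
    zs = np.linspace(0.01, z_max, 150)
    lls = np.linspace(41.0, 49.0, 160)
    dz, dll = zs[1] - zs[0], lls[1] - lls[0]
    total = 0.0
    for z in zs:
        dl_cm = (1.0 + z) * d_com(z) * MPC_CM          # luminosity distance
        dVdz = d_com(z) ** 2 * DH * (1.0 + z) ** -1.5  # comoving Mpc^3/sr/dz
        for ll in lls:
            if 10.0 ** ll / (4.0 * np.pi * dl_cm ** 2) > S_lim:
                total += phi_local(ll) * evol(z, ll) * dVdz * dz * dll
    return total

print(n_greater_s(2e-13))   # sources per steradian above 2e-13 cgs
```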
## 4 Discussion The main difference between models A1 and A2 is the fractional contribution of type 1 AGNs to the XRB; this contribution is dominated by objects close to the XLF break at redshifts close to $`z_{cut}`$, and is not well constrained by the data. In the former model 60% of the 1–keV XRB is due to type 1s, so the local value of the type ratio $`R_\mathrm{S}`$ is sufficient to account for the entire XRB; the average spectrum, though, is too soft, and the softness shows up in a marginal discrepancy with the XRB spectrum at $`>40`$ keV (Fig. 2), and unacceptable discrepancies with the hard counts (Figs. 4 and 5). In the latter model the type 1s account for only 30% of the 1–keV XRB, and making up the entire XRB requires an $`R_\mathrm{S}`$ much larger than the local value; now the average spectrum is harder, the shape discrepancy disappears and the count discrepancies are reduced (Fig. 6). By extrapolating from these two models we can make qualitative predictions about still other parametrizations of the Mi99a sample: for instance, models adopting density evolution with a dependence on luminosity weaker than Mi99a would predict a type 1 soft X–ray contribution higher than 60%, and would miss the hard counts by larger factors than those in Figs. 4 and 5. Density evolution with a dependence on luminosity stronger than Mi99b (if at all acceptable) would require $`R_\mathrm{S}>13`$, which is already three times the local value. The main result of our analysis is precisely this: no matter which variant is adopted for the XLF and the evolution, the models which incorporate the most recent observations within the standard prescriptions always produce some discrepancy. The discrepancy may appear as an underprediction of the observed hard counts, or a type 2 to type 1 ratio higher than the observed local value, but in all cases it points to additional hard spectrum sources at intermediate or high redshifts. For the sake of completeness, we present in the Appendix a PLE model with $`R_\mathrm{Q}=R_\mathrm{S}`$ (model B): this region of the parameter space is not favored by the most recent data, but was adopted in practically all previous works on the XRB. Note that model B has the same type 1 soft X–ray contribution as model A2 (30%), but the type 2 contribution here is due to higher luminosity sources, which show up in higher flux bins: indeed, the XRB spectrum is well fitted, the counts in the ASCA band are matched, and the discrepancy with the HELLAS counts is reduced to $`\sim 2\sigma `$. The cost to be paid is a number density $`R_\mathrm{Q}=7.7`$, which, again, is definitely higher than the local upper limit $`R_\mathrm{Q}<2`$. Irrespective of the plausibility of PLE and QSO 2s, we stress that even model B results in a discrepancy, and the discrepancy is concordant with the results of the other, less controversial variants. A population of absorbed or hard spectrum AGNs evolving more rapidly than the type 1s could accommodate all the problems discussed above. In this context one should be reminded that the hard counts already resolve $`\sim `$30% of the XRB at fluxes $`\sim 5\times 10^{-14}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, so they must converge rapidly just below these values. The optical identifications of the counts in the 2–10 keV and 5–10 keV bands are still largely incomplete. Up to now 34 X–ray sources detected in the ASCA LSS survey (Ueda et al. 1998) have been identified (Akiyama et al. 1998), and 28 objects turned out to be AGNs. They are 22 broad line AGNs (type 1–1.5) with $`0<z<1.7`$, and 6 type 2 AGNs with $`0<z<0.7`$. The number of identified sources of the BeppoSAX HELLAS survey is lower, but the distribution of the AGNs seems similar to the ASCA one: 7 broad line QSOs with $`0.2<z<1.3`$ and 5 Seyferts 1.8–1.9 with $`0.04<z<0.34`$ (Fiore et al. 1999). If one accepts these low redshift type 2 identifications, one has to find a physical reason for a convergence so recent in comparison with all other AGNs (BL–Lacs excepted) and star forming galaxies.
Alternatively, one could rely on the poor statistics to maintain that the hard counts are mostly due to optically empty fields, containing very absorbed, very powerful sources at redshifts $`>1`$. There are prospective candidates in both scenarios. In the low–$`z`$ hypothesis one could assume that “normal” Seyfert 2 galaxies evolve more rapidly than type 1s, so that $`R_\mathrm{S}`$ increases with redshift up to the required value. Not only the number ratio, but also the $`N_\mathrm{H}`$ distribution could change with cosmic time (Franceschini et al. 1993). Local Sey 2s are associated with a star formation activity higher than in Sey 1 and normal galaxies (Maiolino et al. 1995; Rodríguez-Espinoza et al. 1986), so this assumption would have interesting implications for the star formation history. One could also invoke Advection Dominated Accretion Flows (ADAFs, Di Matteo et al. 1998) whose luminosity is proportional to $`\dot{M}^2`$, where $`\dot{M}`$ is the mass accretion rate, and which become normal QSOs at large $`\dot{M}`$: thus, they should evolve more rapidly than normal AGNs at intermediate redshifts, and should undergo a change of class at high redshifts. In the high–$`z`$ hypothesis one should resort to ULIRGs, which indeed are absorbed and powerful, and appear to evolve as fast as required \[$`(1+z)^{7.6\pm 3.2}`$, Kim & Sanders 1998\]. As mentioned before there is evidence that the IR emission of ULIRGs is powered both by starburst and AGN processes; Kim, Veilleux & Sanders (1998) and Lutz et al. (1998) find that the fraction of AGN–powered infrared luminous galaxies increases with the bolometric luminosity, and reaches 30–50% in the ULIRG range. While normal starbursts are inefficient emitters in the hard X–rays, obscured AGNs in ULIRGs could easily explain the hard counts. Finally, it should be noted that the optical identifications of the hard X–ray counts, scanty as they are, suggest that at a given X–ray flux the type 1s are more numerous than the type 2s. Concordant evidence is provided by recent BeppoSAX observations of the Marano field and the Lockman hole (Hasinger et al. 1999): most of the BeppoSAX sources have ROSAT counterparts, which in most cases are optically identified with type 1 AGNs. This type composition is in agreement with the predictions of, for instance, model A1 (Figs. 4 and 5). But if the hard counts in excess of the model are attributed entirely to obscured AGNs, then the predicted type ratio is reversed, with the type 2s more numerous than the type 1s. The numbers involved are too small to draw any conclusion; however, they seem to suggest that some of the hard counts are due to flat X–ray spectrum sources with type 1 optical spectra; a few similar sources might have been found already in the ASCA LSS (Akiyama et al. 1998). Clearly, decisive progress in this area will require more numerous and more secure identifications of hard X–ray counts; given the various hypotheses, counterparts should be looked for not only at optical wavelengths, but also in the infrared and submillimeter domains, where AGN–dominated ULIRGs should be conspicuous. ## 5 Summary and conclusions In this paper we have shown that the standard prescriptions for synthesizing the XRB from the integrated emission of AGNs are not consistent with a number of recent observational constraints, and some of them must be relaxed.
We have worked out models (A1 and A2) which take into account detailed input spectra of AGNs, the $`N_\mathrm{H}`$ distribution observed in local Seyfert 2s, and the XLF and evolution newly determined from the largest ROSAT sample. The latter data do not define a unique parametrization, and the two models explore different variants. As prescribed by the standard model, the XLF and evolution of type 2 AGNs are taken from type 1s, and the spectra of both types are taken independent of redshift; the only fitting parameter is the number ratio $`R`$ of type 2s to type 1s. We find that model A1 reproduces the XRB and the soft counts with a ratio $`R`$ compatible with the local value, but underestimates the hard counts. Model A2 is less discrepant as far as the counts are concerned, but requires a ratio $`R`$ definitely larger than observed locally. We have also computed a model adopting a canonical pure luminosity evolution (model B). In agreement with the results of Co95, model B can reproduce the XRB, the soft X–ray counts and the ASCA hard counts in the 2–10 keV band. It is also consistent within 2$`\sigma `$ (or discrepant at 2$`\sigma `$) with the preliminary BeppoSAX counts in the 5–10 keV band. Nevertheless, it requires a number of type 2 QSOs much higher than the local upper limit, one that is perhaps already ruled out by the deep X–ray surveys. The discrepancies found in all models are to some extent model dependent, but all of them point in the same direction, and suggest that hard spectrum sources at intermediate or high redshifts are needed in addition to the predictions of the standard scenario. The X–ray spectrum of these additional sources could be flattened by absorption, or could be intrinsically hard. In the former hypothesis reasonable candidate counterparts could be rapidly evolving, “normal” Seyfert 2s. One should also note that a fraction of ULIRGs seem to be powered by AGNs, and their cosmological evolution seems faster than that of unabsorbed QSOs. The alternative hypothesis could instead require the presence of ADAFs. Optical identifications of the hard X–ray sources are still largely incomplete and do not yet allow one to decide among the various possibilities. ###### Acknowledgements. We are grateful to A. Comastri and G. Zamorani for a careful reading of the manuscript, and to T. Miyaji, G. Hasinger and M. Schmidt for permission to use their LDDE model in advance of publication. Our presentation was greatly improved by the comments of the referee, Prof. G. Hasinger. This work was partly supported by the Italian Space Agency (ASI) under grant ARS–98–116/22 and by the Italian Ministry for University and Research (MURST) under grant Cofin98–02–32. ## Appendix A Comparison with a PLE model We have computed a canonical synthesis model of the XRB by adopting the XLF and PLE of Jones et al. (1997); since only AGNs with broad optical lines are included, there is no need to correct for the contribution of type 2 AGNs. This sample is smaller than the Mi99a one, and presumably low luminosity sources at high redshifts are underrepresented. Our model B assumes the XLF and PLE indicated above, includes QSO 2s as numerous as the Sey 2s, and adopts the absorption distribution of Risaliti et al. (1999) at all redshifts and all luminosities. In the cosmology adopted here, model B makes only $`\sim 30\%`$ of the soft XRB with type 1 AGNs, and one needs $`R_\mathrm{S}=R_\mathrm{Q}=7.7`$ to fit the overall background (Fig. A1). Due to the large contribution of type 2 AGNs, the model XRB spectrum is very hard.
Furthermore, due to the high “effective” luminosity implied by QSO 2s, the ASCA counts are reproduced. The discrepancy with the data in the 5–10 keV band, on the contrary, is not eliminated, although it is reduced to a $`\sim 2\sigma `$ level. Because of the preliminary nature of the HELLAS data one might debate the significance of this residual discrepancy. At any rate, one should stress that this marginal result can be obtained only by assuming a strong (a factor $`>`$4) differential evolution of QSO 2s with respect to QSO 1s, so that at $`z_{cut}`$ the former would outnumber the latter by a factor $`\sim `$8.
# Pion Scattering Revisited ## Abstract Chiral Ward identities lead to consistent accounting for the $`\sigma `$’s width in the linear sigma model’s Feynman rules. Reanalysis of pion scattering data at threshold implies a mass for the $`\sigma `$ of $`600\genfrac{}{}{0pt}{}{+200}{-100}`$MeV. This short talk (by M.R-A) reviews our recent work on the linear sigma model \[nos, mau\], where full references can be found. At low energies, chiral perturbation theory is supposed to yield good agreement with strong interaction data. Unfortunately, chiral perturbation theory gives rather poor results for the scattering lengths of pion–pion scattering, which are relevant experimental quantities in the limit of zero momentum, that is to say, where chiral perturbation theory should work best. A missing ingredient in the description at low energies of strong interactions is the $`\sigma `$ field, in addition to the Goldstone bosons of chiral symmetry (the pions). A wide scalar resonance in the vicinity of 600 MeV exists, and can be identified naturally with the $`\sigma `$ particle of the original linear $`\sigma `$–model. What are the phenomenological consequences of the linear $`\sigma `$–model in $`\pi \pi \to \pi \pi `$ scattering at very low energies? The sole guiding principle is chiral symmetry, whose Ward identities allow us to modify the various vertices to take into account the large width of the $`\sigma `$ resonance. The chiral symmetry breaking giving mass to the pions is soft, so that when we include the width $`\mathrm{\Gamma }_\sigma `$ of the $`\sigma `$ in its propagator, we can exploit the chiral Ward identities to modify the vertices accordingly. The chiral Ward identities are satisfied by the resulting lagrangian (with parameters $`m_\pi `$, $`f_\pi `$, $`m_\sigma `$), from which we compute the amplitudes in the various isospin and angular momentum channels of experimental relevance. We use the expression for $`\mathrm{\Gamma }_\sigma `$ from the decay $`\sigma \to \pi \pi `$ to perform a simple and successful one–parameter ($`m_\sigma `$) fit to data. The field $`\sigma `$ is very unstable: its tree–level width is $$\mathrm{\Gamma }\left(\sigma \to \pi \pi \right)=\frac{3m_\sigma ^3}{32\pi f_\pi ^2}(1-\epsilon )^2\sqrt{1-4\epsilon }$$ where we have introduced the convenient shorthand $`\epsilon =(m_\pi /m_\sigma )^2`$. In strict analogy with the Higgs field in the standard model, the $`\sigma `$ width $`\mathrm{\Gamma }_\sigma `$ grows very fast with its mass: $`\mathrm{\Gamma }_\sigma (350)=65`$, $`\mathrm{\Gamma }_\sigma (500)=310`$, $`\mathrm{\Gamma }_\sigma (650)=785`$, all in MeV. The effect of the width of the $`\sigma `$ field is to modify its propagator from the usual $`i\left(q^2-m_\sigma ^2\right)^{-1}`$ to $`\mathrm{\Delta }_\sigma (q)=i\left(q^2-m_\sigma ^2+i\mathrm{\Gamma }_\sigma m_\sigma \theta (q^2-4m_\pi ^2)\right)^{-1}`$, where the step function ensures that the imaginary piece in the denominator appears only when the momentum of the propagator is above the kinematical threshold for $`\sigma `$ decay. Thus, in the physical process of $`\pi \pi \to \pi \pi `$ scattering, which we shall consider shortly, the propagator of the $`\sigma `$ picks up the correction due to the width only in the $`s`$–channel, not in the $`u`$– or $`t`$–channels.
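The width formula and the dressed propagator above are simple enough to transcribe directly; the short sketch below (not from the paper) uses the standard inputs $`f_\pi \simeq 93`$ MeV and $`m_\pi \simeq 140`$ MeV, which are assumptions of the sketch, and recovers the quoted widths to within a few MeV (the exact numbers depend slightly on those inputs).

```python
import numpy as np

F_PI, M_PI = 93.0, 140.0          # MeV; standard values assumed here

def gamma_sigma(m_sig):
    """Tree-level width of sigma -> pi pi from the formula above (MeV)."""
    eps = (M_PI / m_sig) ** 2
    return (3.0 * m_sig ** 3 / (32.0 * np.pi * F_PI ** 2)
            * (1.0 - eps) ** 2 * np.sqrt(1.0 - 4.0 * eps))

for m in (350.0, 500.0, 650.0):
    print(m, gamma_sigma(m))      # ~63, ~304, ~778 MeV: close to the quoted
                                  # 65, 310, 785 MeV

def delta_sigma(q2, m_sig):
    """Dressed propagator i/(q^2 - m^2 + i Gamma m theta(q^2 - 4 m_pi^2));
    the width term switches on only above the two-pion threshold."""
    width_on = gamma_sigma(m_sig) * m_sig * (q2 > 4.0 * M_PI ** 2)
    return 1j / (q2 - m_sig ** 2 + 1j * width_on)
```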
Enforcing the chiral Ward identities on the vertices of the lagrangian implies that the latter pick up modifications related to the width $`\mathrm{\Gamma }_\sigma `$. These vertex corrections depend on the kinematical variables (the incoming momenta) in a particular way, dictated by chiral symmetry. For instance, the $`\sigma \pi ^i\pi ^j`$ Feynman rule now reads $$V_{\sigma \pi ^i\pi ^j}=\frac{i}{f_\pi }\delta ^{ij}\left(m_\sigma ^2-m_\pi ^2-i\mathrm{\Gamma }_\sigma m_\sigma \theta (q^2-4m_\pi ^2)\right)$$ where $`q^\mu `$ is the momentum of the $`\sigma `$. We find also $$V_{\pi ^i\pi ^j\sigma \sigma }=V_{\sigma \sigma \sigma }\mathrm{\Delta }_\sigma (p_j)V_{\sigma \pi ^i\pi ^j}$$ where $`p_j`$ is the momentum of a pion, so that $`p_j^2=m_\pi ^2`$ if it is on–shell. This equation defines the $`\pi \pi \sigma \sigma `$ vertex. Similarly, the chiral Ward identity satisfied by the $`\pi ^4`$ Feynman rule is $$V_{\pi ^i\pi ^j\pi ^k\pi ^{\ell}}=V_{\pi ^k\pi ^{\ell}\sigma }\mathrm{\Delta }_\sigma (p_j)V_{\sigma \pi ^i\pi ^j}+V_{\pi ^i\pi ^k\sigma }\mathrm{\Delta }_\sigma (p_k)V_{\sigma \pi ^j\pi ^{\ell}}+V_{\pi ^i\pi ^{\ell}\sigma }\mathrm{\Delta }_\sigma (p_{\ell})V_{\sigma \pi ^j\pi ^k}$$ Obviously, these relations hold at tree level before chiral symmetry breaking, that is to say, when $`m_\pi =0`$, and also $`\mathrm{\Gamma }_\sigma =0`$. Powerfully, they also hold when $`m_\pi \ne 0`$ and/or when $`\mathrm{\Gamma }_\sigma \ne 0`$, to all orders in perturbation theory. This can be proved easily using the enormous advantage that the linear sigma model is a well–defined (renormalizable) field theory. Since the vertex modifications ensure the preservation of exact chiral Ward identities, they also guarantee, for instance, that the pion couplings remain derivative as they should. To illustrate the power of this implementation of chiral symmetry, we evaluate, at tree level, the amplitude for $`\pi \pi `$ scattering. Clearly, we do not expect the result to be the perfect answer, since the only resonance we will take into account is the $`\sigma `$. In particular, not taking into account the vector meson $`\rho ^\mu `$ is a rather bad approximation in the $`I=1`$, $`\ell =1`$ amplitude. Nevertheless, our results are in better agreement with experimental data than those of chiral perturbation theory. Let us emphasize that the kinematical region where we compare both predictions, namely at very low momenta, is precisely where chiral perturbation theory should be exact. This lends further support to the real existence of $`\sigma `$ as a strong resonance. At tree level, four diagrams contribute to $`\pi \pi \to \pi \pi `$: the four–pion contact term, and the exchange of a $`\sigma `$ in the three $`s`$, $`t`$ and $`u`$ channels. Due to the structure of the Feynman rules dictated by chiral Ward identities, the width $`\mathrm{\Gamma }_\sigma `$ contributes, in the Born approximation, only to $`T_0^{(0)}`$. The experimental knowledge of pion scattering near threshold is rather poor. The relatively badly measured scattering lengths and ranges are $`a_0^{(0)}`$, $`b_0^{(0)}`$, $`a_0^{(2)}`$, $`b_0^{(2)}`$, $`a_1^{(1)}`$, $`a_2^{(0)}`$ and $`a_2^{(2)}`$. These seven numbers come out of our computation with only $`m_\sigma `$ as a free parameter. An overall fit to these seven numbers gives $`m_\sigma =700\genfrac{}{}{0pt}{}{+800}{-150}`$MeV. The $`\chi ^2`$ distribution is very flat towards increasing values of $`m_\sigma `$; $`m_\sigma \stackrel{>}{_{}}550`$ MeV is the only useful information.
Of the seven numbers, if we eliminate the worst one ($`a_1^{(1)}`$, presumably under strong influence from $`\rho `$ exchange, which we do not take into account), the fit improves and yields $`m_\sigma =590\genfrac{}{}{0pt}{}{+220}{-90}`$MeV. Nicely, the fit to only the scalar isoscalar values gives $`m_\sigma =525\genfrac{}{}{0pt}{}{+85}{-45}`$MeV. Overall, one may conclude that the data are consistent with a linear sigma resonance provided its mass is around 600 MeV (and thus its width also around 600 MeV). The errors on these numbers, from the pion data available, are substantial. Although the low–energy moments $`a_{\ell}^{(I)}`$ and $`b_{\ell}^{(I)}`$ are the relevant quantities for us, what is actually measured is a momentum–dependent phase shift, which can be split into various isospin and angular momentum channels. From the analysis of the data available, we fit $`m_\sigma =550\genfrac{}{}{0pt}{}{+450}{-80}`$MeV. Again the error on the heavy side is huge: the $`\chi ^2`$ distribution is very flat with increasing $`m_\sigma `$. Exact unitarity is achieved iff $$\mathrm{Im}T_{\ell}^{(I)}=\sqrt{\frac{s-4m_\pi ^2}{s}}\left|T_{\ell}^{(I)}\right|^2$$ from which the optical theorem can be derived. Since there are many other resonances in nature heavier than the $`\sigma `$, we should not worry much about possible unitarity violations at high momenta (say, above 1 GeV). It turns out that there is no problem with unitarity at center of mass momenta lower than $`\sim `$400 MeV. Unfortunately, unitarity does not constrain $`m_\sigma `$ from above in any meaningful way. We have enhanced the linear sigma model by enforcing chiral Ward identities which take into account the (large) sigma width. We have found that low energy pion scattering data supports the existence of a wide $`\sigma `$ field with mass around 600 MeV (actually $`m_\sigma =590\genfrac{}{}{0pt}{}{+220}{-90}`$MeV), provided we exclude the datum in the vector isovector channel. The advantage of keeping the $`\sigma `$ as a true resonance in the effective low energy theory of strong interactions is not only that its inclusion simulates more or less the results of chiral perturbation theory to one loop, but also, more crucially, that this opens the door to more industrious analyses of the whole scalar spectrum, including glueballs. Acknowledgements. This work was supported in part by CONACYT through projects 3979P-E9608, 25504-E, and Cátedra Patrimonial II de Apoyo a los Estados, and by DGAPA–UNAM through IN103997.
# The tetrahedral analog of Veneziano amplitude ## 1 Introduction It was shown in that the famous Veneziano amplitude, from which all the string theory starts, comes naturally from one of the simplest solutions of the functional pentagon equation (FPE). More generally, FPE is intimately connected with the duality condition for scattering processes. From the viewpoint of the theory of integrable models, FPE is a rather trivial equation whose solutions have transparent geometrical or group-theoretic meaning \[1, section 5\]. It looks natural to search for similar constructions with FPE replaced by the functional tetrahedron equation (FTE). As the relations between the pentagon and duality condition are like those between the tetrahedron and local Yang–Baxter equation (LYBE), the duality condition is likely to be replaced by LYBE. In this paper, I find such FTE and LYBE solutions that are described by formulas very similar to those describing Veneziano amplitude in , including the fundamental property of Möbius invariance. They are what I mean by the tetrahedral analog of Veneziano amplitude. ## 2 A functional transformation for edge variables from refactorization equation Consider the following “refactorization equation” for the product of three matrices: $`\left(\begin{array}{ccc}a_1& b_1& 0\\ c_1& d_1& 0\\ 0& 0& 1\end{array}\right)\left(\begin{array}{ccc}a_2& 0& b_2\\ 0& 1& 0\\ c_2& 0& d_2\end{array}\right)\left(\begin{array}{ccc}1& 0& 0\\ 0& a_3& b_3\\ 0& c_3& d_3\end{array}\right)=`$ $`=\left(\begin{array}{ccc}1& 0& 0\\ 0& a_3^{}& b_3^{}\\ 0& c_3^{}& d_3^{}\end{array}\right)\left(\begin{array}{ccc}a_2^{}& 0& b_2^{}\\ 0& 1& 0\\ c_2^{}& 0& d_2^{}\end{array}\right)\left(\begin{array}{ccc}a_1^{}& b_1^{}& 0\\ c_1^{}& d_1^{}& 0\\ 0& 0& 1\end{array}\right),`$ (1) ($`a_1,\mathrm{},d_3^{}`$ are numbers) for the case when all six submatrices $`\left(\begin{array}{cc}a_i^{()}& b_i^{()}\\ \\ c_i^{()}& d_i^{()}\end{array}\right)`$ have the form $$\left(\begin{array}{cc}a& b\\ c& d\end{array}\right)=\left(\begin{array}{cc}\alpha & 1\alpha \\ 1\beta & \beta \end{array}\right).$$ (2) In other words, each of the six matrices in (1) transforms the vector $`\left(\begin{array}{c}1\\ 1\\ 1\end{array}\right)`$ into itself. It is known from that each side of (1) determines the other side to within some “gauge freedom”, and one can verify that the additional conditions (2) are exactly good for fixing that freedom. The fate of an arbitrary vector $`\left(\begin{array}{c}p\\ q\\ r\end{array}\right)`$ under the action of both sides of (1) is more complicated. We present it in Figure 1, where we denote the matrices entering (1), in their order in that equation, by letters $`X_1`$, $`X_2`$, $`X_3`$, $`Y_3`$, $`Y_2`$, $`Y_1`$. The meaning of the LHS of Figure 1 is that $$X_3\left(\begin{array}{c}p\\ q\\ r\end{array}\right)=\left(\begin{array}{c}p\\ v\\ w\end{array}\right),X_2\left(\begin{array}{c}p\\ v\\ w\end{array}\right)=\left(\begin{array}{c}u\\ v\\ z\end{array}\right),X_1\left(\begin{array}{c}u\\ v\\ z\end{array}\right)=\left(\begin{array}{c}x\\ y\\ z\end{array}\right),$$ while the meaning of the RHS is that $$Y_1\left(\begin{array}{c}p\\ q\\ r\end{array}\right)=\left(\begin{array}{c}f\\ g\\ r\end{array}\right),Y_2\left(\begin{array}{c}f\\ g\\ r\end{array}\right)=\left(\begin{array}{c}x\\ g\\ h\end{array}\right),Y_3\left(\begin{array}{c}x\\ g\\ h\end{array}\right)=\left(\begin{array}{c}x\\ y\\ z\end{array}\right).$$ One can see that if, vice versa, all the values $`x,y,z,\mathrm{}`$ in e.g. 
the LHS of Figure 1 are given, then matrices $`X_1,X_2,X_3`$ of the form (2) are recovered unambiguously. So, we can take some given values of nine numbers in the LHS, get the triple of matrices $`X_1,X_2,X_3`$ from them, then get $`Y_1,Y_2,Y_3`$ by (1), and then get the missing values $`f,g,h`$ in the RHS from $`p,q,r`$ using $`Y_1,Y_2,Y_3`$. We will formulate this as follows: for any fixed “outer” variables $`x`$, $`y`$, $`z`$, $`p`$, $`q`$, $`r`$, the transformation $$R=R(x,y,z,p,q,r):(u,v,w)\to (f,g,h)$$ (3) is given. The transformations (3) satisfy the functional tetrahedron equation (FTE). To explain this, note that equation (1) can be naturally regarded as an equation in the direct sum of three one-dimensional complex linear spaces, each of the matrices acting nontrivially only in a direct sum of two of them. One can consider similar relations in a direct sum of four spaces (each of the matrices acting nontrivially again only in a direct sum of two spaces). Let us picture in Figure 2 the spaces as straight lines, put matrices at their intersections, and attach the results of matrix action upon some 4-vector to line segments like in Figure 1, and then consider the transition from the LHS of Figure 2 to its RHS as a composition of “elementary” transformations $`R`$ of type (3). As was explained in the paper (and the reader will verify it him-/herself easily), there exist two different compositions of four $`R`$s both transforming the LHS of Figure 2 into its RHS. The first of them starts with $`R_{356}`$, by which we mean “turning inside out” triangle $`356`$, while the other starts with $`R_{123}`$. We can write FTE in the same abstract form as in : $$R_{123}R_{145}R_{246}R_{356}=R_{356}R_{246}R_{145}R_{123},$$ but the sense of (4) is now different: $`R`$ is now a transformation of variables belonging to the edges rather than of matrices belonging to vertices. To prove FTE (4) for edge variables, note that the variables belonging to inner edges (i.e., say, edges $`12`$, $`13`$, …, $`56`$ in the LHS of Figure 2) are unambiguously recovered if variables at outer edges and matrices at vertices are given. The FTE for matrices, according to , does hold, while the variables at outer edges are not changed by the transformations. Thus, the variables at inner edges do not depend on the way of transformations either.
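The construction is easy to check numerically. The sketch below (mine, not code from the paper) builds $`X_1,X_2,X_3`$ of the form (2) from nine random edge values as in Figure 1, and then recovers $`(f,g,h)`$ by solving the refactorization condition with a generic least-squares routine; the starting guess matters, since the equations are nonlinear, and the inner edges of the LHS turn out to be a good one. Section 4 below gives the same triple in closed form.

```python
import numpy as np
from scipy.optimize import least_squares

def form2(alpha, beta, rows):
    """3x3 identity with a block of form (2) (rows summing to 1) at `rows`."""
    M = np.eye(3)
    M[np.ix_(rows, rows)] = [[alpha, 1 - alpha], [1 - beta, beta]]
    return M

def from_edges(pair_in, pair_out, rows):
    """Unique form-(2) matrix acting in `rows` mapping pair_in -> pair_out."""
    (a, b), (c, d) = pair_in, pair_out
    return form2((c - b) / (a - b), (d - a) / (b - a), rows)

rng = np.random.default_rng(7)
x, y, z, p, q, r, u, v, w = rng.uniform(1.0, 2.0, 9)

# LHS of Figure 1: X3:(q,r)->(v,w), X2:(p,w)->(u,z), X1:(u,v)->(x,y)
X1 = from_edges((u, v), (x, y), [0, 1])
X2 = from_edges((p, w), (u, z), [0, 2])
X3 = from_edges((q, r), (v, w), [1, 2])
M = X1 @ X2 @ X3
print(M @ [p, q, r], [x, y, z])     # both sides send (p,q,r) to (x,y,z)

def residual(fgh):
    f, g, h = fgh
    Y1 = from_edges((p, q), (f, g), [0, 1])   # (p,q,r) -> (f,g,r)
    Y2 = from_edges((f, r), (x, h), [0, 2])   # (f,g,r) -> (x,g,h)
    Y3 = from_edges((g, h), (y, z), [1, 2])   # (x,g,h) -> (x,y,z)
    return (Y3 @ Y2 @ Y1 - M).ravel()

sol = least_squares(residual, x0=[u, v, w])
print(sol.x, np.abs(residual(sol.x)).max())   # residual should be ~0
```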
## 3 Möbius invariance The same way as we have traced the fate of vector $`\left(\begin{array}{c}p\\ q\\ r\end{array}\right)`$ under the action of LHS and RHS of (1) in Figure 1, we can trace the fate of two more vectors, namely $$\left(\begin{array}{c}p_n\\ q_n\\ r_n\end{array}\right)=\kappa \left(\begin{array}{c}1\\ 1\\ 1\end{array}\right)+\lambda \left(\begin{array}{c}p\\ q\\ r\end{array}\right)\text{and}\left(\begin{array}{c}p_d\\ q_d\\ r_d\end{array}\right)=\mu \left(\begin{array}{c}1\\ 1\\ 1\end{array}\right)+\nu \left(\begin{array}{c}p\\ q\\ r\end{array}\right),$$ (5) where $`\kappa ,\lambda ,\mu ,\nu `$ are some constants (and subscripts $`n`$ and $`d`$ stand for “numerator” and “denominator”, see formula (6) below). I do not draw here corresponding diagrams, differing from Figure 1 only in that $`n`$ or $`d`$ is added to all small letters. Now let us do the following gauge transformations (in the sense of ) on matrices $`X_1,\mathrm{},Y_3`$: $$\left(\begin{array}{cc}\stackrel{~}{a}_1& \stackrel{~}{b}_1\\ \stackrel{~}{c}_1& \stackrel{~}{d}_1\end{array}\right)=\left(\begin{array}{cc}x_d^{-1}& 0\\ 0& y_d^{-1}\end{array}\right)\left(\begin{array}{cc}a_1& b_1\\ c_1& d_1\end{array}\right)\left(\begin{array}{cc}u_d& 0\\ 0& v_d\end{array}\right),$$ $$\left(\begin{array}{cc}\stackrel{~}{a}_2& \stackrel{~}{b}_2\\ \stackrel{~}{c}_2& \stackrel{~}{d}_2\end{array}\right)=\left(\begin{array}{cc}u_d^{-1}& 0\\ 0& z_d^{-1}\end{array}\right)\left(\begin{array}{cc}a_2& b_2\\ c_2& d_2\end{array}\right)\left(\begin{array}{cc}p_d& 0\\ 0& w_d\end{array}\right),$$ $$\left(\begin{array}{cc}\stackrel{~}{a}_3& \stackrel{~}{b}_3\\ \stackrel{~}{c}_3& \stackrel{~}{d}_3\end{array}\right)=\left(\begin{array}{cc}v_d^{-1}& 0\\ 0& w_d^{-1}\end{array}\right)\left(\begin{array}{cc}a_3& b_3\\ c_3& d_3\end{array}\right)\left(\begin{array}{cc}q_d& 0\\ 0& r_d\end{array}\right),$$ $$\left(\begin{array}{cc}\stackrel{~}{a}_1^{}& \stackrel{~}{b}_1^{}\\ \stackrel{~}{c}_1^{}& \stackrel{~}{d}_1^{}\end{array}\right)=\left(\begin{array}{cc}f_d^{-1}& 0\\ 0& g_d^{-1}\end{array}\right)\left(\begin{array}{cc}a_1^{}& b_1^{}\\ c_1^{}& d_1^{}\end{array}\right)\left(\begin{array}{cc}p_d& 0\\ 0& q_d\end{array}\right),$$ $$\left(\begin{array}{cc}\stackrel{~}{a}_2^{}& \stackrel{~}{b}_2^{}\\ \stackrel{~}{c}_2^{}& \stackrel{~}{d}_2^{}\end{array}\right)=\left(\begin{array}{cc}x_d^{-1}& 0\\ 0& h_d^{-1}\end{array}\right)\left(\begin{array}{cc}a_2^{}& b_2^{}\\ c_2^{}& d_2^{}\end{array}\right)\left(\begin{array}{cc}f_d& 0\\ 0& r_d\end{array}\right),$$ $$\left(\begin{array}{cc}\stackrel{~}{a}_3^{}& \stackrel{~}{b}_3^{}\\ \stackrel{~}{c}_3^{}& \stackrel{~}{d}_3^{}\end{array}\right)=\left(\begin{array}{cc}y_d^{-1}& 0\\ 0& z_d^{-1}\end{array}\right)\left(\begin{array}{cc}a_3^{}& b_3^{}\\ c_3^{}& d_3^{}\end{array}\right)\left(\begin{array}{cc}g_d& 0\\ 0& h_d\end{array}\right).$$
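A short numerical check of the mechanism behind these gauge transformations, for a single 2×2 block (my sketch, with arbitrary Möbius parameters): conjugation by the diagonal “$`d`$” factors preserves the form (2), and the tilded matrix acts on the Möbius-transformed edge variables.

```python
import numpy as np

kappa, lam, mu, nu = 0.3, 1.1, -0.2, 0.9        # arbitrary Moebius parameters
mob = lambda t: (kappa + lam * t) / (mu + nu * t)

alpha, beta, u, v = 0.4, 0.7, 1.3, 1.8
X = np.array([[alpha, 1 - alpha], [1 - beta, beta]])   # form (2)
x, y = X @ [u, v]

u_d, v_d, x_d, y_d = mu + nu * np.array([u, v, x, y])  # "denominator" parts
X_t = np.diag([1 / x_d, 1 / y_d]) @ X @ np.diag([u_d, v_d])

print(X_t @ [1, 1])              # (1, 1): the form (2) is preserved
print(X_t @ [mob(u), mob(v)])    # equals (mob(x), mob(y)): the tilded matrix
print(mob(x), mob(y))            # maps the Moebius-transformed edges
```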
One can say that the transformation of vectors corresponding to the above gauge matrix transformation has brought all variables with subscript $`d`$ to $`1`$, and hence the matrices with tildes have again the form (2). As for the variables with subscript $`n`$, they turned into $$x\to \stackrel{~}{x}=\frac{x_n}{x_d}=\frac{\kappa +\lambda x}{\mu +\nu x}\text{etc.}$$ (6) We see from here that a linear-fractional (Möbius) transformation of variables $`x,y,\mathrm{}`$ commutes with the transformation $`R`$. Clearly, the same conclusion could be drawn from the explicit formulas for $`R`$ given in Section 4. The Möbius invariance is an argument in support of the idea that $`R`$ really is an analog of the “pentagonal” transformation connected with the Veneziano amplitude. ## 4 Connection between volume elements and the explicit form of functional transformation Let us now vary the edge variables in Figure 1, with matrices $`X_1,\mathrm{},Y_3`$ fixed. For instance, consider the variables at outer edges of the LHS of that Figure as functions of three inner variables $`u,v,w`$, and calculate the corresponding partial derivatives. The reader will easily check that $$\frac{\partial x}{\partial u}=\frac{x-v}{u-v},\frac{\partial x}{\partial v}=\frac{x-u}{v-u},\frac{\partial y}{\partial u}=\frac{y-v}{u-v}$$ (7) and so on. Using formulas of the type (7), it is not hard to obtain the following relations for “volume elements”: $$\mathrm{d}x\mathrm{d}y\mathrm{d}z=\frac{x-y}{u-v}\frac{z-u}{w-u}\mathrm{d}u\mathrm{d}v\mathrm{d}w$$ (8) from the LHS of Figure 1 and similarly $$\mathrm{d}x\mathrm{d}y\mathrm{d}z=\frac{x-h}{f-h}\frac{y-z}{g-h}\mathrm{d}f\mathrm{d}g\mathrm{d}h$$ (9) from its RHS. The equality of the RHSs of (8) and (9) can be called “the relation between $`\mathrm{d}u\mathrm{d}v\mathrm{d}w`$ and $`\mathrm{d}f\mathrm{d}g\mathrm{d}h`$ obtained via $`\mathrm{d}x\mathrm{d}y\mathrm{d}z`$”. Similarly, the equality of the RHSs of relations $$\mathrm{d}y\mathrm{d}z\mathrm{d}p=\frac{y-u}{v-u}\frac{z-p}{w-u}\mathrm{d}u\mathrm{d}v\mathrm{d}w$$ (10) and $$\mathrm{d}y\mathrm{d}z\mathrm{d}p=\frac{y-z}{g-h}\frac{p-g}{f-g}\mathrm{d}f\mathrm{d}g\mathrm{d}h$$ (11) can be called “the relation between $`\mathrm{d}u\mathrm{d}v\mathrm{d}w`$ and $`\mathrm{d}f\mathrm{d}g\mathrm{d}h`$ obtained via $`\mathrm{d}y\mathrm{d}z\mathrm{d}p`$”. There are four more pairs of relations of the type (8–11) with $`\mathrm{d}z\mathrm{d}p\mathrm{d}q`$, $`\mathrm{d}p\mathrm{d}q\mathrm{d}r`$, $`\mathrm{d}q\mathrm{d}r\mathrm{d}x`$ and $`\mathrm{d}r\mathrm{d}x\mathrm{d}y`$ respectively in their LHSs.
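Relation (8) can be verified by brute force (a small sketch of mine, not from the paper): fix generic matrix parameters, regard $`(x,y,z)`$ as functions of $`(u,v,w)`$ — with $`p`$ eliminated through $`X_2`$ — and compare a finite-difference Jacobian determinant with the product of the two factors.

```python
import numpy as np

a1, b1, a2, b2 = 0.37, 0.62, 0.55, 0.71     # generic fixed matrix parameters

def outer(uvw):
    u, v, w = uvw
    x = a1 * u + (1 - a1) * v               # X1: (u,v) -> (x,y)
    y = (1 - b1) * u + b1 * v
    p = (u - (1 - a2) * w) / a2             # invert u = a2*p + (1-a2)*w
    z = (1 - b2) * p + b2 * w               # X2: (p,w) -> (u,z)
    return np.array([x, y, z])

uvw = np.array([1.3, 1.8, 1.1])
u, v, w = uvw
x, y, z = outer(uvw)

eps, J = 1e-6, np.zeros((3, 3))             # forward-difference Jacobian
for k in range(3):
    d = np.zeros(3); d[k] = eps
    J[:, k] = (outer(uvw + d) - outer(uvw)) / eps

print(np.linalg.det(J))
print((x - y) / (u - v) * (z - u) / (w - u))   # relation (8): same number
```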
Certainly, one can exclude the differentials from those relations and obtain formulas giving explicitly the connection between edge variables, i.e. the transformation $`R`$, namely $`\frac{x-y}{u-y}\frac{u-z}{p-z}=\frac{x-h}{f-h}\frac{f-g}{p-g},`$ (12) $`\frac{y-x}{v-x}\frac{v-r}{q-r}=\frac{y-h}{g-h}\frac{g-f}{q-f},`$ (13) $`\frac{z-p}{w-p}\frac{w-q}{r-q}=\frac{z-g}{h-g}\frac{h-f}{r-f},`$ (14) $`\frac{x-v}{u-v}\frac{u-w}{p-w}=\frac{x-r}{f-r}\frac{f-q}{p-q},`$ (15) $`\frac{y-u}{v-u}\frac{v-w}{q-w}=\frac{y-z}{g-z}\frac{g-p}{q-p},`$ (16) $`\frac{z-u}{w-u}\frac{w-v}{r-v}=\frac{z-y}{h-y}\frac{h-x}{r-x}.`$ (17) ## 5 Local Yang–Baxter equation The local Yang–Baxter equation (LYBE) dealt with in this section differs from the conventional Yang–Baxter equation, first, in its continuous (instead of the usual discrete) “set of colours” and, second (and this is what makes it “local”, or “twisted”), in that all six $`R`$-matrices (instead of which we will have, however, functions of 4 complex variables) entering it are different (in the usual Yang–Baxter equation, the LHS and RHS are made from the same 3 matrices, multiplied in different orders). Namely, our LYBE will have the following form (for real $`x,y,\mathrm{}`$): $$\int L(x,y,u,v)M(u,z,p,w)N(v,w,q,r)\mathrm{d}u\mathrm{d}v\mathrm{d}w=\int N^{\prime}(y,z,g,h)M^{\prime}(x,h,f,r)L^{\prime}(f,g,p,q)\mathrm{d}f\mathrm{d}g\mathrm{d}h.$$ (18) In the same way as the duality relation in , the equality (18) will hold if we require that the relation obtained from (18) by removing the integration signs hold, with the triples of variables $`u,v,w`$ and $`f,g,h`$ connected by some dependence. For such dependence, we will take the transformation $`R`$ from formula (3). Then, the following construction of functions $`L,\mathrm{},N^{\prime}`$ can be proposed. Take the relation $$\mathrm{d}x\mathrm{d}y\mathrm{d}z=\frac{x-y}{u-v}\frac{z-u}{w-u}\mathrm{d}u\mathrm{d}v\mathrm{d}w$$ (19) (see (8, 9)), and also the relations (12–17) raised to arbitrary powers (the relations (12–17) are not independent, so one of those powers can be set to zero). Then multiply separately the LHSs and RHSs of all so obtained relations (including (19)). The obtained LHS and RHS will be exactly the integrands in (18), and from them the multipliers $`L,\mathrm{},N^{\prime}`$ depending on the proper quadruples of variables are easily extracted. I leave for further work the problem of possible choices of integration domains in (18) and integral regularization (if needed). The explicit form of functions $`L,\mathrm{},N^{\prime}`$ will also be presented elsewhere. Let me just note that we can also regard all the variables $`x,y,\mathrm{}`$ as complex. In that case, we should multiply the integrands (including differentials) by their complex conjugates and integrate over some domains of six real dimensions. This will be the tetrahedral analog of Virasoro–Shapiro amplitude. ## 6 Discussion The LYBE of the form (18), as well as the duality equations from , are interesting because there is hope of constructing from them interesting “exactly solvable” functional integrals, perhaps connected with 3-dimensional statistical physics.
By the way, here I have presented the tetrahedral analog of one of the two models in , and it seems fascinatingly interesting to construct the analog of the other model (and their generalizations). It will also be very interesting to clarify the relations between the pentagon and tetrahedron equations, where, despite the presence of the excellent work , many things are unclear. ### Acknowledgements I am grateful to Satoru Saito for valuable discussions during my stay at Tokyo Metropolitan University, in the course of which the idea arose to revisit, from the modern integrability theory viewpoint, the algebraic structures from which string theory was born in its time. I am also grateful to Sergei Sergeev for many discussions on tetrahedron and pentagon equations. Finally, I am glad to thank the Russian Foundation for Basic Research for its (mostly moral) support under grant no. 98-01-00895.
# Superconductors of mixed order parameter symmetry in a Zeeman magnetic field ## 1 Introduction Although the nature of the superconducting pair wave function in high-$`T_c`$ cuprates is not yet known, strong evidence of a major $`d_{x^2-y^2}`$ symmetry exists. Experiments sensitive to the internal phase structure of the pair wave function reported a sign reversal of the order parameter, supporting $`d`$-wave symmetry. Most recently, from various experiments and theory it appears that the pairing symmetry of this family could be a mixed one like $`d_{x^2-y^2}+e^{i\theta }\alpha `$, where $`\alpha `$ could be something in the $`s`$-wave family or $`d_{xy}`$. There were early questions from tunneling experiments regarding pure $`d`$-wave symmetry, as the data support an admixture of $`d`$\- and $`s`$-wave components due to orthorhombicity in YBCO. The possibility of a minor but finite $`id_{xy}`$ symmetry along with the predominant $`d_{x^2-y^2}`$ has also been suggested in connection with magnetic defects or small fractions of a flux quantum $`\mathrm{\Phi }_0=hc/2e`$ in YBCO powders. Similar proposals came from various other authors in the context of magnetic fields, magnetic impurities, interface effects, etc. The experimental result by Krishana et al. was interpreted as a signature of the induction of a minor component, e.g., $`id_{xy}`$ or $`is`$, in a $`d`$-wave superconductor with the application of a magnetic field along the $`c`$ axis. In this work, we study in detail the effect of a weak Zeeman magnetic field on superconductors with mixed order parameter symmetry like $`\mathrm{\Delta }(k)=\mathrm{\Delta }_{d_{x^2-y^2}}+e^{i\theta }\alpha `$ with $`\alpha =d_{xy}`$, $`s`$ for arbitrary $`\theta `$. It is well known that such superconductors with $`\theta \ne 0`$ and $`\alpha \ne 0`$ correspond to broken time reversal states (BTRS). These BTRS states lift the directional degeneracy of charge currents by admixing a subdominant $`\alpha `$-wave component to the $`d`$-wave pairing state, and a spontaneous finite current appears. An application of a Zeeman magnetic field can lift the spin degeneracy, leading to suppression of BTRS; a pure $`d`$-wave state occurs with increasing magnetic field. Mixed-symmetry states as above with $`\theta =0`$, which preserve time reversal symmetry and are nodeful, respond differently to the Zeeman field compared to $`\theta \ne 0`$ states. For nodeful $`\theta =0`$ states, the local gap $`\mathrm{\Delta }(k)`$, wherever it is of small magnitude over the Fermi surface, may be destroyed by the application of the Zeeman field, leading to a paramagnetic pocket. Although this is also true for $`\theta \ne 0`$ states, such states correspond to a fully gapped situation all over the Fermi surface, which causes a weak response to the magnetic field. A clear picture of the above will be demonstrated in this article. It may be mentioned that the high temperature superconductors are quasi two dimensional in nature, and therefore a magnetic field parallel to the $`\mathrm{Cu}`$–$`\mathrm{O}`$ plane does not couple to the orbital motion of the electrons in the plane. Therefore, we shall not consider spin-orbit interaction in this work.
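Before turning to the model, the pair-breaking mechanism behind the “paramagnetic pocket” just described can be made concrete with the thermal factor that will enter the gap equations below. This is a sketch of mine with $`k_B=1`$ and all energies in the same units as $`T`$.

```python
import numpy as np

def thermal_factor(xi, gap, h, T):
    """0.5*[tanh(beta*E+/2) + tanh(beta*E-/2)], E± = sqrt(xi^2 + gap^2) ± h."""
    E = np.sqrt(xi ** 2 + gap ** 2)
    b = 1.0 / T
    return 0.5 * (np.tanh(0.5 * b * (E + h)) + np.tanh(0.5 * b * (E - h)))

# On the Fermi surface (xi = 0) at low T, a Zeeman field h larger than the
# local gap drives this factor to ~0 (pairing locally destroyed), while a
# fully gapped region with gap > h responds only weakly:
print(thermal_factor(0.0, 10.0, 30.0, 1.0))    # ~0: paramagnetic pocket
print(thermal_factor(0.0, 100.0, 30.0, 1.0))   # ~1: robust pairing
```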
In connection with the discussion of order parameter symmetry in cuprates, we would further like to mention that the proposal of mixed order parameter symmetry gained real momentum when experimental data on the longitudinal thermal conductivity of $`\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_8`$ compounds by Krishana et al. and by Movshovich et al. showed support for such proposals. There are experimental results related to interface effects as well as to the bulk that indicate mixed pairing symmetry (with dominant $`d`$-wave) , thus posing a serious challenge to pure $`d`$-wave models. There were early original works regarding the modification of superconductivity by an applied Zeeman magnetic field , and very recently the Zeeman suppression was discussed in mesoscopic systems . ## II Model Calculation The free energy of a two-dimensional planar superconductor with arbitrary pairing symmetry in the presence of a magnetic field may be written as, $$F_{k,k^{}}(h)=-\frac{1}{\beta }\underset{k,\sigma =\pm }{\sum }\mathrm{ln}(1+e^{-\sigma \beta E_k^\sigma })+\frac{\mathrm{\Delta }_k^2}{V_{kk^{}}}$$ (1) where $`E_k^\sigma =\sqrt{(ϵ_k-\mu )^2+\mathrm{\Delta }_k^2}+\sigma (g\mu _B/2)B`$ are the energy eigenvalues of a Hamiltonian that describes superconductivity and $`(g\mu _B/2)`$ is the magnetic moment of the electrons. This incorporates the assumption that the Zeeman field raises/lowers the energy of the spin up/down quasiparticle states. We minimize the free energy, Eq. (1), i.e., set $`\partial F/\partial \mathrm{\Delta }=0`$, to get the gap equation, $$\mathrm{\Delta }_k=\underset{k^{}}{\sum }\frac{V_{kk^{}}}{2}\frac{\mathrm{\Delta }_{k^{}}}{2E_{k^{}}}\left(\mathrm{tanh}(\frac{\beta E_{k^{}}^+}{2})+\mathrm{tanh}(\frac{\beta E_{k^{}}^{-}}{2})\right)$$ (2) where $`ϵ_k`$ is the dispersion relation taken from the ARPES data and $`\mu `$, the chemical potential, controls the band filling through the number-conserving equation given below. Since the applied Zeeman field modifies the SC quasiparticles of spin up and down differently, their occupation probabilities are also modified. The number-conserving equation that controls the band filling through the chemical potential $`\mu `$ in the presence of the Zeeman field is given by, $$\rho (\mu ,T,h)=\underset{k}{\sum }\left[1-\frac{1}{2}\frac{(ϵ_k-\mu )}{E_k}\left(\mathrm{tanh}\frac{\beta E_k^+}{2}+\mathrm{tanh}\frac{\beta E_k^{-}}{2}\right)\right]$$ (4) where $`h=(g\mu _B/2)B`$. Let us consider that the overlap of orbitals in different unit cells is small compared to the diagonal overlap. Then, in the spirit of a tight-binding lattice description, the matrix element of the pair potential used in the SC gap equation, Eq. (2), may be obtained as, $$V(\stackrel{}{q})=\underset{\stackrel{}{\delta }}{\sum }V_\stackrel{}{\delta }e^{i\stackrel{}{q}\cdot \stackrel{}{R}_\delta }=V_0+V_1f^d(k)f^d(k^{})+V_1g(k)g(k^{})+V_2f^{d_{xy}}(k)f^{d_{xy}}(k^{})+V_2f^{s_{xy}}(k)f^{s_{xy}}(k^{})$$ (6) where, in the first equality of Eq. (6), $`\stackrel{}{R}_\delta `$ locates the nearest and further neighbours labelled by $`\stackrel{}{\delta }`$, and $`V_n`$, $`n=1,2`$, represents the strength of attraction for the respective neighbour interaction. The first term, $`V_0`$, refers to the on-site interaction, which has an effective attractive value giving rise to an isotropic $`s`$ wave.
The second and third terms are responsible for $`d`$-wave and extended $`s`$-wave superconductivity, whereas the $`4^{th}`$ and $`5^{th}`$ terms are responsible for the $`d_{xy}`$ and $`s_{xy}`$ symmetries respectively. We restrict ourselves to singlet pairing states (i.e., $`\mathrm{\Delta }(-k)=\mathrm{\Delta }(k)`$), as applicable to the high-temperature superconductors. The momentum form factors are obtained as, $`f^d(k)=\mathrm{cos}(k_xa)-\mathrm{cos}(k_ya)`$ (7) $`g(k)=\mathrm{cos}(k_xa)+\mathrm{cos}(k_ya)`$ (8) $`f^{d_{xy}}(k)=2\mathrm{sin}(k_xa)\mathrm{sin}(k_ya)`$ (9) $`f^{s_{xy}}(k)=2\mathrm{cos}(k_xa)\mathrm{cos}(k_ya)`$ (10) For the two-component order parameter symmetries mentioned above, we substitute the required form of the potential and the corresponding gap structure into either side of Eq. (2), which yields an identity. Then, separating the real and imaginary parts and comparing the momentum dependences on either side, we get gap equations for the amplitudes in the different channels, $`\mathrm{\Delta }_j=\underset{k}{\sum }\frac{V_j}{2}\frac{\mathrm{\Delta }_j(f_k^j)^2}{2E_k}\left[\mathrm{tanh}(\frac{\beta E_k^+}{2})+\mathrm{tanh}(\frac{\beta E_k^{-}}{2})\right]`$ (11) where $`j=1,2`$ corresponds to the two components of the $`d`$ and $`s`$ or $`d`$ and $`d_{xy}`$ symmetries. Considering mixed symmetry of the form $`\mathrm{\Delta }(k)=\mathrm{\Delta }_{d_{x^2-y^2}}(0)f^d(k)+e^{i\theta }\mathrm{\Delta }_s(0)`$, one identifies $`\mathrm{\Delta }_1=\mathrm{\Delta }_{d_{x^2-y^2}}(0)`$, $`\mathrm{\Delta }_2=\mathrm{\Delta }_s(0)`$, $`f_k^1=f^d(k)`$, $`f_k^2=1`$ and $`V^1=V_1`$, $`V^2=V_0`$ in Eq. (11). Similarly, for mixed symmetries of the form $`\mathrm{\Delta }(k)=\mathrm{\Delta }_{d_{x^2-y^2}}(0)f^d(k)+e^{i\theta }\mathrm{\Delta }_{d_{xy}}(0)f^{d_{xy}}`$, one has $`\mathrm{\Delta }_2=\mathrm{\Delta }_{d_{xy}}(0)`$, $`f_k^2=f_k^{d_{xy}}`$ and $`V^1=V_1`$, $`V^2=V_2`$ in Eq. (11). The potentials required to obtain such pairing symmetries are given in Eq. (6). We solve the above three equations (Eqs. (11) and (4)) self-consistently in order to study the phase diagram of a mixed order parameter superconducting phase in the presence of a Zeeman magnetic field. The numerical results obtained for the gap amplitudes through Eqs. (11) and (4) will be compared with free-energy minimizations via Eq. (1) to get the phase diagrams. ## III Results and Discussions We present in this section our numerical results for a fixed set of parameters: a cut-off energy $`\mathrm{\Omega }_c`$= 500 K around the Fermi level, beyond which the superconducting condensate does not exist; a fixed transition temperature of the minor component, $`T_c^\alpha (h=0)=24`$ K; and the bulk $`T_c=85`$ K determined by the $`d`$-wave order parameter. In figures 1 and 2 we present results for $`\mathrm{\Delta }(k)=\mathrm{\Delta }_{d_{x^2-y^2}}(0)f^d(k)+e^{i\theta }\alpha `$ symmetries for $`\theta =\pi /2`$ and $`\theta =0`$ respectively. Such symmetries would arise from combinations of two-component pair potentials, the ($`2^{nd},1^{st}`$) and ($`2^{nd},4^{th}`$) terms in Eq. (6), for $`\alpha =s`$ and $`d_{xy}`$ respectively. The amplitudes of extended $`s`$-wave states like $`s_{x^2+y^2}`$, $`s_{xy}`$ are found to be finite only towards very low band filling, $`\rho \to 0`$, for $`\theta =\pi /2`$, and hence they do not cause any mixing with the predominant $`d`$-wave. Therefore, we shall discuss only the results for $`\theta =0`$ and $`\theta =\pi /2`$ with $`\alpha =s`$ and $`d_{xy}`$.
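To make the self-consistency procedure of Sec. II concrete, the following sketch (not the production code behind the figures) iterates the projected gap equations (11) for $`\mathrm{\Delta }(k)=\mathrm{\Delta }_df^d(k)+e^{i\theta }\mathrm{\Delta }_s`$ on a square-lattice Brillouin-zone grid. Several simplifying assumptions are made: a nearest-neighbour tight-binding band stands in for the ARPES-fit dispersion, the chemical potential is held fixed rather than updated through Eq. (4), and the hopping `t` and couplings `Vd`, `Vs` are invented, illustrative numbers.

```python
import numpy as np

# Square-lattice Brillouin-zone grid (lattice constant a = 1)
N = 128
ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
kx, ky = np.meshgrid(ks, ks)
t = 0.25                                      # illustrative hopping (eV)
eps = -2.0 * t * (np.cos(kx) + np.cos(ky))    # stand-in for the ARPES-fit band
fd = np.cos(kx) - np.cos(ky)                  # d_{x^2-y^2} form factor, Eq. (7)

def gaps(theta, h, T, Vd=0.6, Vs=0.5, mu=0.0, n_iter=500):
    """Iterate the gap equations (11) for Delta(k) = Dd*fd(k) + exp(i*theta)*Ds
    at fixed chemical potential mu; h in eV, T in kelvin."""
    beta = 1.0 / (8.617e-5 * T)               # 1/(k_B T) in 1/eV
    Dd, Ds = 0.02, 0.01                       # starting guesses for the amplitudes
    for _ in range(n_iter):
        gap2 = np.abs(Dd * fd + np.exp(1j * theta) * Ds) ** 2
        Ek = np.sqrt((eps - mu) ** 2 + gap2) + 1e-12
        # thermal factor tanh(beta*E^+/2) + tanh(beta*E^-/2), with E^{+-} = E_k +- h
        th = np.tanh(0.5 * beta * (Ek + h)) + np.tanh(0.5 * beta * (Ek - h))
        kernel = th / (2.0 * Ek) / fd.size    # 1/(2E_k) factor and BZ average
        Dd = 0.5 * Vd * Dd * np.sum(fd ** 2 * kernel)
        Ds = 0.5 * Vs * Ds * np.sum(kernel)
    return Dd, Ds

print(gaps(theta=np.pi / 2, h=0.0, T=10.0))   # d + is amplitudes at zero field
print(gaps(theta=np.pi / 2, h=0.02, T=10.0))  # same, in a finite Zeeman field
```

Tracking the two returned amplitudes while stepping `h` or `T` is all that is needed to trace out phase boundaries of the kind discussed below.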
These two phases of $`\theta `$ lead to important differences. It is known that for any $`\theta \ne 0`$, time-reversal symmetry is locally broken at lower temperatures with the onset of the secondary component, which corresponds, for $`h=0`$, to a phase transition from a partially ungapped phase of $`d_{x^2-y^2}`$ symmetry to a fully gapped phase. On the other hand, the $`\theta =0`$ phase remains nodeful, although the nodal lines shift considerably from the usual $`k_x=\pm k_y`$ lines of the $`d_{x^2-y^2}`$ state. In Figs. 1(a) and (b) we present the temperature dependences of the order parameters in the complex mixed symmetry. Curves corresponding to the $`d`$-wave channel are drawn with thinner lines of different styles, whereas the minor $`s`$ component is denoted by lines of the same style but thicker (the same convention is used in the other figures as well). For zero Zeeman field ($`h=0`$), the amplitude of the $`d`$-wave is suppressed with the onset of the minor $`s`$ component at $`T=24`$ K, which leads to a kink-like structure (cf. the thin solid curve in Fig. 1(a)). With the application of a weak Zeeman field the transition temperature of the minor $`s`$ state decreases while its zero-temperature magnitude remains the same. This shifts the kink-like structure in the thermal dependence of the $`d`$-wave channel towards lower temperature with increasing field, and hence a small enhancement of the $`d`$-wave with field occurs at lower temperature. This point will become clearer from Fig. 5(b), discussed later. At a field value $`h_c=0.016`$ eV, the $`s`$-wave component is completely suppressed, leaving a pure $`d`$-wave phase. Thus we have a magnetic-field-induced transition at lower temperature from the fully gapped phase of the $`d+is`$ state to the partially gapped phase of the $`d`$-wave. In other words, in the absence of a field or at very low fields, there is a transition from a partially gapped $`d`$-wave phase at higher temperature to a fully gapped $`d+is`$ phase at lower temperatures; with increasing field the temperature range of the fully gapped phase shrinks, bringing back the ungapped $`d`$-wave phase. These phase transitions will have important bearing on thermodynamic and transport properties. In Fig. 1(b) we present the paramagnetic state of the $`d`$-wave. This phase has also been investigated qualitatively by Yang and Sondhi recently, including the possibility of pairing with finite momentum. We therefore restrict ourselves to presenting the detailed thermal dependence of $`d`$-wave superconductivity in the presence of a Zeeman magnetic field, which was not discussed there. We show that with increasing field (at lower fields) the zero-temperature magnitude of the $`d`$-wave remains unchanged whereas $`T_c`$ is reduced. At higher fields, e.g., $`h=0.04`$, one sees a first-order transition from the superconducting to the normal state as a function of temperature. This behaviour causes a magnetic-field-induced enhancement of the $`2\mathrm{\Delta }/k_BT_c`$ ratio. This ratio is crucial for many physical properties, such as the specific-heat jump, which are hence expected to change drastically with magnetic field. In Fig. 2 we describe the thermal behaviour of superconductors with $`d+s`$ symmetry in the presence of a magnetic field. First of all, in the absence of a magnetic field, the thermal behaviour at lower temperatures is quite different (as we have seen in Fig. 1):
the $`s`$-wave gap opens very fast below $`T=24`$ K and also induces faster growth of the $`d`$-wave than above $`T=24`$ K. While the $`s`$-wave has about three times the zero-temperature value it has in the $`d+is`$ phase, the $`d`$-wave also has a considerably larger value. With the application of a very small magnetic field the $`s`$ component is suppressed strongly, both its $`T_c`$ and its zero-temperature gap being reduced. The $`d`$-wave gap magnitude is also suppressed, while its $`T_c`$ remains practically unchanged at such small values of the magnetic field. More importantly, although both the $`d`$ and the $`s`$ channels have larger magnitudes in the $`d+s`$ phase, the critical field at which the $`s`$ component vanishes completely ($`h_c=0.013`$ eV) is smaller than that for the $`\theta =\pi /2`$ phase of the mixed symmetry ($`h_c=0.016`$ eV). Distinctly, the response of $`d+s`$ superconductors to the Zeeman field is more pronounced than that of $`d+is`$ superconductors. This study has therefore also revealed, for the first time, the importance of the phase of the minor component in $`d`$-wave superconductors, with the effect of a Zeeman magnetic field as a practical example. In order to establish the importance of the phase of the minor component we also studied the effect of the Zeeman magnetic field in $`\mathrm{d}_{\mathrm{x}^2-\mathrm{y}^2}+\mathrm{id}_{\mathrm{xy}}`$ and $`\mathrm{d}_{\mathrm{x}^2-\mathrm{y}^2}+\mathrm{d}_{\mathrm{xy}}`$ symmetry superconductors respectively. The qualitative as well as quantitative behaviour remains almost the same as that shown in figures 1 and 2, establishing that the response of superconductors with mixed order parameter symmetry is stronger when the minor component has $`\theta =0`$. In figures 5 and 6 we concentrate on the behaviour of such mixed order parameter superconductors in a magnetic field as a function of band filling ($`\rho `$), for $`d_{xy}`$ and $`s`$ as minor components respectively. With the application of a small field the minor component $`\alpha =d_{xy}`$ or $`s`$ is suppressed only at lower fillings; with increasing field it is then suddenly suppressed in the optimal doping regime as well. Therefore, with increasing field one finds a mixed-symmetry region as well as a pure $`d`$-wave region as a function of filling – a non-uniform superconductivity. This behavior depends on the nature of the minor component. For $`\alpha =s`$, the mixed symmetry is possible around half filling and around $`\rho =0.7`$ at intermediate field values. With increasing field the mixed-symmetry regions around both band fillings shrink, and at a field value of $`h=0.03`$ the $`s`$-wave vanishes, leaving a pure $`d`$-wave phase. It may be noticed that the $`d`$-wave channel is enhanced (cf. Fig. 5(b)) in the optimal doping region at intermediate field values, as was also mentioned earlier while discussing Fig. 1(a). This, however, does not occur in the case of $`d_{xy}`$. For the $`d_{xy}`$ minor component, the suppression with increasing field is strong only from the lower fillings, and at an intermediate field the mixing is possible only near half filling. The $`d`$-wave boundary towards lower fillings as well as around half filling also shrinks with increasing field, in contrast to $`\alpha =s`$. One always sees a sharp transition from a mixed phase to a pure $`d`$-wave phase, or the other way around, as a function of band filling, irrespective of the minor component, in a given magnetic field.
At larger fields ($`h>0.03`$), when the minor component is suppressed completely, i.e., when one has a pure $`d`$-wave phase, the $`d`$-wave itself also shows a sharp transition from the superconducting to the normal state (cf. Fig. 6). In Fig. 6 we study the same as in Fig. 5 for the $`d+s`$ and $`d+d_{xy}`$ symmetries. In contrast to the $`\theta =\pi /2`$ phase of the minor component (as in Fig. 5), the minor components have very large values in the $`\theta =0`$ phase in the absence of the magnetic field, although they have the same $`T_c^\alpha `$ as mentioned earlier. With a very small magnetic field such large condensation amplitudes in the minor channel are strongly suppressed (see Fig. 6(b) especially). The nature of the suppression, as in Fig. 5, is different for the different minor components. For example, $`d_{xy}`$ is suppressed only from the lower fillings, whereas the $`s`$-wave is suppressed both from the lower fillings and around optimal doping. Overall, it is very distinct that the magnetic field affects the $`\theta =0`$ mixed phase more strongly than the $`\theta =\pi /2`$ phase. This is because the response to the applied Zeeman field is paramagnetic, with destruction of superconductivity over the parts of the Fermi surface where the Zeeman field exceeds the local magnitude of the $`k`$-dependent gap, resulting in a spin polarization of the normal electrons. For the $`\theta =\pi /2`$ phase the nodes are missing and the Fermi surface is gapped everywhere, although the gap has local minima; thus the response of the $`\theta =\pi /2`$ phase is weaker than that of the $`\theta =0`$ phase. Noticeably, the critical field at which the minor component vanishes completely at all band fillings is $`h_c=0.025`$ eV for the $`\theta =0`$ phase, whereas it is $`h_c=0.03`$ eV for $`\theta =\pi /2`$. (The value of $`h_c`$ depends on $`\rho `$, as well as on $`\alpha `$ and $`\theta `$.) Above $`h_c=0.025`$, the paramagnetic state of the $`d`$-wave superconductor is also shown. The region of $`d`$-wave superconductivity shrinks with increasing field around the band filling $`\rho =0.82`$, and the change from the $`d`$-wave to the spin-polarized normal state is very sharp as a function of band filling. In the $`d+s`$ state a plateau is observed in the $`d`$ channel in weak or zero field, similar to the behaviour known for the YBCO systems . It may be mentioned that Krishana et al. found that the longitudinal thermal conductivity of $`\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_{8+\delta }`$ at lower temperatures (5 K to 20 K) decreases with increasing magnetic field applied along the $`\mathrm{c}`$-axis. Above a critical value of the magnetic field, $`\mathrm{H}_\mathrm{k}(\mathrm{T})`$, the thermal conductivity ceases to change with the magnetic field and develops a plateau. It was proposed that the $`\mathrm{d}_{\mathrm{x}^2-\mathrm{y}^2}`$ pairing state is unstable against the formation of $`\mathrm{d}+\mathrm{e}^{\mathrm{i}\theta }\alpha `$ (where $`\alpha =\mathrm{s},\mathrm{d}_{\mathrm{xy}}`$) in the presence of $`\mathrm{H}_\mathrm{k}(\mathrm{T})`$, such that the loss of quasiparticle transport in the thermal conductivity can be explained.
In contrast, we started from a $`\mathrm{d}+\mathrm{e}^{\mathrm{i}\theta }\alpha `$ picture and, upon applying a Zeeman field (which may be mapped onto a magnetic field applied parallel to the 2D $`\mathrm{Cu}-\mathrm{O}`$ plane) without considering the orbital effect, found no enhancement of the condensation in the minor channel but rather a suppression, leading to a pure paramagnetic $`d`$ state. ## IV Summary In summary, we have performed a detailed study of the effect of a Zeeman magnetic field on mixed pairing symmetries with a predominant $`d`$-wave, which appear to be very promising symmetries for the high-$`T_c`$ systems. We have thus described the paramagnetic state in mixed-symmetry superconductors and subsequently in $`d`$-wave superconductors. In particular, we established that the phase of the minor component mixed with the predominant $`d`$-wave is of immense importance: the $`\theta =0`$ phase of the minor component responds to the Zeeman field more strongly than the $`\theta =\pi /2`$ phase. It will be very interesting to calculate the specific heat, magnetization, and density of states as functions of the magnetic field using this model. We argued that the orbital effect is secondary when a magnetic field is applied parallel to the conducting plane. This may indicate that the experimental observation by Krishana et al. involves strong coupling of spins to orbitals due to the application of a magnetic field perpendicular to the plane at lower temperatures. However, the order parameter alone does not completely determine the thermodynamic properties of a system. In Ref. , some estimates of, and scaling relations for, the changes of various physical properties due to a Zeeman field are given for a pure $`d`$-wave superconductor. It turns out that a weak Zeeman field does little to the order parameter but may profoundly affect the thermodynamic properties of a pure $`d_{x^2-y^2}`$ superconductor. Such effects will presumably also remain for mixed superconductors with $`\theta =0`$, since gap nodes similar to those of $`d_{x^2-y^2}`$ remain; their effect in the $`\theta =\pi /2`$ phase has to be studied more carefully. ## V Acknowledgments A large part of this work was carried out at the Instituto de Física, Universidade Federal Fluminense, Brazil, and was financially supported by the Brazilian funding agency FAPERJ, project no. E-26/150.925/96-BOLSA.
no-problem/9904/cond-mat9904343.html
ar5iv
text
# Triangular Trimers on the Triangular Lattice: an Exact Solution ## Abstract A model is presented consisting of triangular trimers on the triangular lattice. In analogy to the dimer problem, these particles cover the lattice completely without overlap. The model has a honeycomb structure of hexagonal cells separated by rigid domain walls. The transfer matrix can be diagonalised by a Bethe Ansatz with two types of particles. This leads to an exact expression for the entropy on a two-dimensional subset of the parameter space. In the course of the years a few exactly solvable lattice gas models have been found. The Ising model , proposed in 1920 by Lenz as a model of a ferromagnet, can be interpreted as a lattice gas with hard-core repulsion and short-range attraction. The (zero-field square lattice) Ising model was solved in 1944 by Onsager . It exhibits gas–liquid coexistence below a critical temperature and a single fluid phase above. Another lattice gas is the hard hexagon model , solved in 1980 by Baxter . It has a continuous fluid–solid transition. At high density (solid) the particles select one of three sub-lattices; at low density (fluid) these sub-lattices are evenly occupied. As a final example we mention the dimer problem. It was solved for planar lattices in 1961, independently by Kasteleyn and by Temperley and Fisher . A dimer is a particle that occupies two adjacent lattice sites. As in the Ising and the hard hexagon model, two particles cannot occupy the same lattice site. In contrast to these models it is also required that all sites are occupied. The configurations are coverings of a lattice with dimers, without empty sites or overlap. The dimer problem is reviewed in Ref. . We discuss the dimer model on the honeycomb lattice in some detail, because it has illustrative similarities to a new model we shall introduce below. A configuration of the honeycomb lattice dimer model can be viewed as a number of domains consisting of vertical dimers, separated by zigzagging domain walls made up of dimers of the other two orientations. This is illustrated in Fig. 1. The domain walls run from the bottom to the top of the lattice, so that any horizontal line through the system meets all domain walls. Hence the number of domain walls is the same in each horizontal slice; in other words, it is a conserved quantity. Consider the entropy for a fixed density $`\rho `$ of domain-wall dimers. From the exact solution of the model it can be calculated that for low $`\rho `$ the entropy per dimer is given by $$S\approx (\mathrm{log}2)\rho -\frac{\pi ^2}{24}\rho ^3.$$ The linear term reflects the zigzag freedom of the domain walls; each domain-wall dimer contributes $`\mathrm{log}2`$ to the entropy. The cubic term is due to the (repulsive) interaction between the domain walls: when two domain walls meet, some of the zigzag freedom is lost. Now give chemical potential $`\mu `$ to the dimers in the domain walls and $`0`$ to the vertical dimers. For $`\mu <-\mathrm{log}2`$ the free energy $`F=-\mu \rho -S(\rho )`$ is an increasing function of $`\rho `$ for small $`\rho `$, so no domain walls will be present. For $`\mu >-\mathrm{log}2`$ the free energy has a minimum at some small positive value of $`\rho `$. At $`\mu =-\mathrm{log}2`$ there is a transition between a frozen phase consisting of vertical dimers only and a rough phase where dimers of all three orientations are present.
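The location of the dimer transition is easy to check numerically. The sketch below (Python) minimises the low-density free energy $`F=-\mu \rho -S(\rho )`$ using the cubic expansion for $`S`$ quoted above; the expansion is only trustworthy for small $`\rho `$, so the search grid is capped at an arbitrary $`\rho =0.5`$.

```python
import numpy as np

def F(rho, mu):
    # Free energy F = -mu*rho - S(rho) with the low-density expansion
    # S(rho) ~ (log 2)*rho - (pi^2/24)*rho^3 quoted above.
    S = np.log(2.0) * rho - np.pi ** 2 / 24.0 * rho ** 3
    return -mu * rho - S

rho = np.linspace(0.0, 0.5, 50001)      # expansion only trusted for small rho
for mu in (-0.75, -np.log(2.0), -0.60):
    rho_star = rho[np.argmin(F(rho, mu))]
    print(f"mu = {mu:+.3f}  ->  minimising domain-wall density rho* = {rho_star:.3f}")
# Below mu = -log 2 ~ -0.693 the minimum sits at rho* = 0 (the frozen phase);
# just above it, rho* = sqrt(8*(mu + log 2))/pi grows continuously from zero.
```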
Inspired by the dimer model, we consider coverings of the triangular lattice by triangular trimers. A trimer is a particle that occupies three lattice sites. As in the dimer problem, we require that there are no empty sites and that there is no overlap. Fig. 2 shows a typical configuration. In this paper we present our main results on this model; we intend to publish a more detailed account later. The model admits very regular configurations where the trimers occupy a sub-lattice of the triangular faces. There are six such sub-lattices, which we number 0, 1, …, 5 as indicated in Fig. 2. Note that the up and down triangles make up the even-numbered and odd-numbered sub-lattices, respectively. Consider configurations where trimers on sub-lattice 0 predominate. They consist of hexagonal domains of trimers on this sub-lattice, separated by straight domain walls that form an irregular honeycomb network. There are three types of domain walls, of different orientations. The domain walls that run from lower right to upper left will be termed L; they are made up of trimers on sub-lattice 5. Those running from lower left to upper right will be called R; they consist of trimers on sub-lattice 1. The vertical domain walls are made up of trimers on sub-lattice 3. When domain walls of the three different types meet in a Y-shape, a trimer on sub-lattice 2 or 4 occurs. This is illustrated in Fig. 3. For the time being we require that the model be isotropic, in the sense that there are equal amounts of the three types of domain walls. Let $`\rho `$ denote the density of domain-wall trimers. The domain walls are rigid, so they have no zigzag freedom contributing to the entropy. Therefore the low-density expansion of the entropy contains no term linear in $`\rho `$. There is, however, freedom in the sizes of the domains . For example, it is possible to enlarge a single domain while simultaneously shrinking its six neighbours. The contribution per domain depends on the linear dimensions of the domains, and is roughly proportional to $`-\mathrm{log}\rho `$. The number of domains is approximately proportional to $`\rho ^2`$. Hence the “breathing” entropy is given for low $`\rho `$ by $$S\approx -K\rho ^2\mathrm{log}\rho ,$$ (1) where $`K`$ is some (positive) proportionality constant. If a chemical potential $`\mu `$ is given to the domain-wall trimers, the free energy for low $`\rho `$ is $$F\approx -\mu \rho +K\rho ^2\mathrm{log}\rho .$$ This is an increasing function of $`\rho `$ for $`\mu <0`$ and a decreasing function for $`\mu \ge 0`$. Hence the free energy takes its minimum either at $`\rho =0`$ or at a large value of $`\rho `$, for which the approximation (1) is not valid. For small $`\mu `$ there are no domain walls, but when $`\mu `$ passes some threshold, $`\rho `$ jumps to a positive value. Thus the phase transition is different from that in the honeycomb lattice dimer model, where the domain-wall density increases gradually at the transition. Now we return to the model without the isotropy requirement. Let $`N`$ denote the total number of trimers, $`N_i`$ the number of trimers on sub-lattice $`i`$, and $`\rho _i`$ the partial density $`N_i/N`$.
These six sub-lattice densities obviously satisfy $$\rho _0+\rho _1+\rho _2+\rho _3+\rho _4+\rho _5=1.$$ (2) It can be shown that, when toroidal boundary conditions are imposed, they also satisfy $$\rho _0\rho _2+\rho _2\rho _4+\rho _4\rho _0=\rho _1\rho _3+\rho _3\rho _5+\rho _5\rho _1.$$ (3) When the total density of down trimers $`\rho _{\bigtriangledown }=\rho _1+\rho _3+\rho _5`$ is small, it follows easily from (2) and (3) that one of $`\rho _0`$, $`\rho _2`$ and $`\rho _4`$, say $`\rho _0`$, is larger than the other two. If there is no further symmetry breaking, $`\rho _0>\rho _1=\rho _3=\rho _5>\rho _2=\rho _4`$. By the same token, when $`\rho _{\bigtriangledown }`$ is close to $`1`$ the symmetry between the down sub-lattices is broken. Therefore, when $`\rho _{\bigtriangledown }`$ is increased from $`0`$ to $`1`$, at least one phase transition is expected. Besides (2) and (3) we have found no more constraints on the sub-lattice densities. Therefore, of the six sub-lattice densities, four are independent. We would like to know the entropy (per trimer) as a function of these four parameters. We have been able to compute it for a two-dimensional subset of the four-dimensional parameter space. The calculation is rather lengthy, so here we only give an outline of the method and a description of the final result. View each vertical domain wall as a combination of one L domain wall and one R domain wall. Then the L and R domain walls run without interruption from the bottom to the top of the lattice. Therefore the number of L domain walls and the number of R domain walls are constant throughout the system. This is analogous to the situation described above for the dimer model on the honeycomb lattice, except that there are now two conserved quantities, $`n_\text{L}`$ and $`n_\text{R}`$, instead of a single one. We introduce the densities $`\rho _\text{L}=n_\text{L}/L`$ and $`\rho _\text{R}=n_\text{R}/L`$, where $`3L`$ is the number of sites in a horizontal row of the lattice. They can be expressed in terms of the sub-lattice densities: $`\rho _\text{L}`$ $`=`$ $`1-\rho _0-\rho _1+\rho _3+\rho _4,`$ (4) $`\rho _\text{R}`$ $`=`$ $`1-\rho _0+\rho _2+\rho _3-\rho _5.`$ (5) It is suggestive to interpret the vertical lattice direction as “time” and the horizontal direction as “space”. The domain walls are then viewed as world lines of two types of particles, L and R. In a vertical domain wall one L and one R particle form a “bound state”. The model can be formulated in terms of a transfer matrix, which describes the “time” evolution of the system of L particles and R particles in one “space” dimension. Solving the model boils down to determining the largest eigenvalue of this operator, or more precisely, its maximum over all particle numbers $`n_\text{L}`$ and $`n_\text{R}`$. This we have achieved using the coordinate Bethe Ansatz; the solution is similar to that of the square-triangle random tiling model, due to Widom and Kalugin . A model can be solved in this way only if, in some sense, the many-particle interactions factorise into two-particle interactions. It turns out that for the present model this is indeed the case. It is noteworthy that the L particles among themselves are free fermions, as are the R particles, but that the interaction between an L and an R particle is non-trivial. The Bethe Ansatz allows for numerical computations for the system on an infinitely long cylinder of finite circumference.
These computations can be done to arbitrary precision, and effectively for the full four-dimensional parameter space of the model. In the thermodynamic limit the Bethe Ansatz gives rise to a set of two coupled integral equations. The physical quantities we are interested in, such as the densities $`\rho _\text{L}`$ and $`\rho _\text{R}`$ and the entropy, can be expressed in terms of the functions satisfying these equations. These can be solved analytically in a special case . Thus we have obtained an exact expression for the entropy for a two-dimensional family of sub-lattice densities. It will be seen below that this family is given by $`\rho _1=\rho _3=\rho _5`$ (or $`\rho _0=\rho _2=\rho _4`$). This solution is parametrised by a complex number $`\widehat{b}`$ with $`\text{Im }\widehat{b}>0`$. Write $$\widehat{b}=b_\text{L}-b_\text{L}^{-1}=b_\text{R}-b_\text{R}^{-1}$$ (6) with $`\text{Re }b_\text{L}\ge 0`$ and $`\text{Re }b_\text{R}\le 0`$. It follows that $`\text{Im }b_\text{L}>0`$, $`\text{Im }b_\text{R}>0`$ and $`b_\text{L}b_\text{R}=-1`$. Take contours $`C_\text{L}`$ and $`D_\text{L}`$ running from $`b_\text{L}^{*}`$ to $`b_\text{L}`$, and $`C_\text{R}`$ and $`D_\text{R}`$ running from $`b_\text{R}`$ to $`b_\text{R}^{*}`$. The arrangement of these four curves must be as shown in Fig. 4, but their precise shape is immaterial. (In fact there are more solutions, corresponding to contour configurations different from that in Fig. 4. Since they are related by permutations of the six sub-lattices, we only treat one case here.) Define the complex function $$t(z)=\left(\frac{z-z^{-1}-\widehat{b}}{z-z^{-1}-\widehat{b}^{*}}\right)^{1/6}.$$ Fix a branch $`t_\text{L}(z)`$ with branch cuts $`C_\text{R}`$ and $`D_\text{L}`$ by $`t_\text{L}(0)=\mathrm{exp}(\pi \text{i}/3)`$, and a branch $`t_\text{R}(z)`$ with branch cuts $`C_\text{L}`$ and $`D_\text{R}`$ by $`t_\text{R}(0)=\mathrm{exp}(\pi \text{i}/3)`$. The domain-wall density $`\rho _\text{L}`$ is given by $$\rho _\text{L}=\frac{1}{2\pi \text{i}}\int _{C_\text{L}}\frac{t_\text{L}(z)+t_\text{L}(z)^{-1}}{z}\text{d}z,$$ and $`\rho _\text{R}`$ is given by the same equation with all subscripts L changed into R. The sub-lattice densities are $`\rho _0`$ $`=`$ $`1-\frac{1}{2}(\rho _\text{L}+\rho _\text{R})+\frac{1}{6}(\rho _\text{L}^2-\rho _\text{L}\rho _\text{R}+\rho _\text{R}^2),`$ (7) $`\rho _2`$ $`=`$ $`\frac{1}{2}(\rho _\text{R}-\rho _\text{L})+\frac{1}{6}(\rho _\text{L}^2-\rho _\text{L}\rho _\text{R}+\rho _\text{R}^2),`$ (8) $`\rho _4`$ $`=`$ $`\frac{1}{2}(\rho _\text{L}-\rho _\text{R})+\frac{1}{6}(\rho _\text{L}^2-\rho _\text{L}\rho _\text{R}+\rho _\text{R}^2),`$ (9) $`\rho _i`$ $`=`$ $`\frac{1}{6}(\rho _\text{L}+\rho _\text{R})-\frac{1}{6}(\rho _\text{L}^2-\rho _\text{L}\rho _\text{R}+\rho _\text{R}^2)\text{for odd }i.`$ (10)
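As a consistency check, the parametrisation (7)–(10) should satisfy the constraints (2) and (3) identically, and should reproduce Eq. (4) when $`\rho _1=\rho _3=\rho _5`$. A short symbolic verification (Python with sympy):

```python
import sympy as sp

rL, rR = sp.symbols('rho_L rho_R', positive=True)
Q = sp.Rational(1, 6) * (rL**2 - rL*rR + rR**2)

# Sub-lattice densities of the exact solution, Eqs. (7)-(10)
r0 = 1 - sp.Rational(1, 2) * (rL + rR) + Q
r2 = sp.Rational(1, 2) * (rR - rL) + Q
r4 = sp.Rational(1, 2) * (rL - rR) + Q
rodd = sp.Rational(1, 6) * (rL + rR) - Q     # rho_1 = rho_3 = rho_5

print(sp.simplify(r0 + r2 + r4 + 3 * rodd))              # constraint (2): -> 1
print(sp.simplify(r0*r2 + r2*r4 + r4*r0 - 3*rodd**2))    # constraint (3): -> 0
print(sp.simplify(1 - r0 - rodd + rodd + r4 - rL))       # Eq. (4):       -> 0
```

Both constraints reduce to exact identities in $`\rho _\text{L}`$ and $`\rho _\text{R}`$, as they must.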
Define auxiliary integrals $`\varphi _\text{L}`$ and $`\mathrm{\Sigma }_\text{L}`$ by $`\varphi _\text{L}`$ $`=`$ $`\frac{1}{2}\text{Re}\int _{b_\text{L}}^{b_\text{L}^{-1}}\frac{t_\text{L}(z)+t_\text{L}(z)^{-1}}{z}\text{d}z,`$ (11) $`\mathrm{\Sigma }_\text{L}`$ $`=`$ $`\frac{1}{4}\text{Re}\int _0^{\mathrm{\infty }}\frac{t_\text{L}(z)+t_\text{L}(z)^{-1}-1}{z}\text{d}z.`$ (12) The real parts of these integrals do not depend on the choice of the integration contours, which must not meet the branch cuts $`C_\text{R}`$ and $`D_\text{L}`$. The auxiliary integrals $`\varphi _\text{R}`$ and $`\mathrm{\Sigma }_\text{R}`$ are defined analogously. The entropy per trimer is given by $$S=\mathrm{\Sigma }_\text{L}+\mathrm{\Sigma }_\text{R}+\frac{1}{6}(2\rho _\text{R}-\rho _\text{L})\varphi _\text{L}+\frac{1}{6}(2\rho _\text{L}-\rho _\text{R})\varphi _\text{R}.$$ We started out with the model parametrised by the six sub-lattice densities satisfying the constraints (2) and (3). The exact solution described above has two parameters, so it covers a two-dimensional set of sub-lattice densities. The solution is, however, parametrised by the complex number $`\widehat{b}`$, and not in terms of these densities. It follows from (10) that $`\rho _1=\rho _3=\rho _5`$ for the exact solution. The space of sub-lattice densities satisfying this constraint as well as (2) and (3) is two-dimensional. In the limit $`\widehat{b}\to 0`$ the domain-wall densities $`\rho _\text{L}=\rho _\text{R}`$ vanish; the system is then filled with sub-lattice 0 trimers and its entropy is zero. At $`\widehat{b}=2\text{i}`$ the system is in a symmetric phase with all sub-lattice densities equal to $`1/6`$; its entropy is $`S_{\text{sym}}=\mathrm{log}(3\sqrt{3}/4)`$. When $`\widehat{b}`$ is taken between $`0`$ and $`2\text{i}`$ on the imaginary axis, $`b_\text{L}`$ and $`b_\text{R}`$ lie on the unit circle. Then $`\rho _\text{L}=\rho _\text{R}`$, so the sub-lattice densities satisfy $`\rho _2=\rho _4`$. For $`\rho _{\bigtriangledown }\le 1/2`$ this equation, together with $`\rho _1=\rho _3=\rho _5`$, describes the most symmetric case for the sub-lattice densities. Based on numerical Bethe Ansatz calculations we believe that for given $`\rho _{\bigtriangledown }\le 1/2`$ the system takes its maximum entropy at these sub-lattice densities. Fig. 5 shows the entropy $`S`$ as a function of $`\rho _{\bigtriangledown }`$. The entropy for $`\rho _{\bigtriangledown }\ge 1/2`$ was obtained from that for $`\rho _{\bigtriangledown }\le 1/2`$ using the symmetry between the up and the down trimers. The entropy $`S`$ is a convex function of $`\rho _{\bigtriangledown }`$ for $`0\le \rho _{\bigtriangledown }\le 1/2`$. A system with $`\rho _{\bigtriangledown }`$ in this interval is thermodynamically unstable: it would separate into a phase with $`\rho _{\bigtriangledown }=0`$ and a phase with $`\rho _{\bigtriangledown }=1/2`$, except for the fact that the model does not admit an interface between these two phases. Similarly, a system with $`1/2\le \rho _{\bigtriangledown }\le 1`$ would demix into phases with $`\rho _{\bigtriangledown }=1/2`$ and $`\rho _{\bigtriangledown }=1`$. The transition between these phases can also be controlled by assigning a chemical potential $`\mu `$ to the down trimers instead of imposing their density $`\rho _{\bigtriangledown }`$. From Fig. 5 it is seen that for $`\mu \le -2S_{\text{sym}}`$ the free energy $`F=-\mu \rho _{\bigtriangledown }-S(\rho _{\bigtriangledown })`$ takes its minimum at $`\rho _{\bigtriangledown }=0`$, so all trimers are on one of the up sub-lattices. For $`-2S_{\text{sym}}\le \mu \le 2S_{\text{sym}}`$ the minimum of $`F`$ is at $`\rho _{\bigtriangledown }=1/2`$, so all sub-lattices are equally occupied. For $`\mu \ge 2S_{\text{sym}}`$ the minimum of $`F`$ is at $`\rho _{\bigtriangledown }=1`$, so the system is again in a frozen phase. We thank Jan de Gier for fruitful discussions. This work is part of the research programme of the “Stichting voor Fundamenteel Onderzoek der Materie (FOM)”, which is financially supported by the “Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO)”.
no-problem/9904/cond-mat9904140.html
ar5iv
text
# Phonon-Induced Spin Relaxation of Conduction Electrons in Aluminum ## Abstract The spin-flip Eliashberg function $`\alpha _S^2F`$ and the temperature-dependent spin relaxation time $`T_1(T)`$ are calculated for aluminum using realistic pseudopotentials. The spin-flip electron-phonon coupling constant $`\lambda _S`$ is found to be $`2.5\times 10^{-5}`$. The calculations agree with experiments, validating the Elliott-Yafet theory and the spin-hot-spot picture of spin relaxation for polyvalent metals. Spin dynamics of itinerant electrons in metals and semiconductors is attracting increasing attention. Part of the reason for this interest is fundamental, arising from improved spin injection and detection techniques which now allow precise measurements of spin transport, relaxation, and coherence properties. But much of the recent interest is also motivated by the exciting potential of using electron spin as a building block in nanoelectronics (dubbed “spintronics”), where spin dynamics and transport are projected to be utilized in proposed novel device applications. The most ambitious such possibility is using electron spin as a qubit in a quantum computer architecture, but more modest proposals involving the use of spin injection and transport in new quantum transistor devices (“spin transistors”) have also been made . Electron spin already plays a fundamental, albeit passive, role in giant magnetoresistance-based memory devices. The current push for a better understanding of spin dynamics in electronic materials is, however, based on the hope that the electron spin could be used as an active element, where manipulation of spin in a controlled manner will lead to novel device applications which are not feasible in conventional microelectronics. This hope arises from two underlying concepts: the inherently quantum mechanical nature of spin (enabling the possibility of truly quantum devices which could not be envisioned within standard micro- or nanoelectronics), and, even more importantly, the inherently long relaxation or coherence time of spin eigenstates in metals and semiconductors (indeed, in a typical nonmagnetic metal at room temperature electron spins survive for hundreds of picoseconds; by comparison, momentum states live no longer than femtoseconds). This Letter provides the first realistic quantitative calculation of the temperature-dependent spin relaxation time (the so-called $`T_1`$ relaxation time) in an electronic material, namely, metallic aluminum. The calculation, for reasons to be explained below, is surprisingly subtle and extremely computationally demanding; it has therefore never been attempted before, although the basic theory of the phenomenon goes back more than thirty-five years . The mechanism behind spin relaxation in metals is believed to be the spin-flip scattering of electrons off phonons and impurities, as suggested by Elliott and Yafet . The periodic, ion-induced spin-orbit interaction causes electronic Bloch states to have both spin-up and spin-down amplitudes. The states can still be polarized by a magnetic field (so we can call them up and down) but, because of the spin mixing, even a spin-independent interaction with phonons or impurities (which are assumed to be nonmagnetic) leads to a transition from, say, up to down, degrading any unbalanced spin population.
(Note that the spin-orbit interaction by itself does not produce spin relaxation; what is needed is spin-orbit coupling to mix the up and down spins, together with a momentum-conservation-breaking mechanism such as impurities or phonons.) Although these arguments seem to be consistent with experimental findings, there has been to date no calculation of $`T_1`$ for a metal based on the Elliott-Yafet theory. In this Letter we calculate the phonon contribution to $`T_1`$ for aluminum, providing the first quantitative justification of the theory. (Impurities in real samples contribute only a temperature-independent background which can be subtracted from the measurement.) At temperatures $`T`$ above 100 K, where experimental data are not available, our calculation is a prediction which should be useful for designing room-temperature spintronic devices that use aluminum. We also calculate the spin-flip Eliashberg function $`\alpha _S^2F(\mathrm{\Omega })`$, which measures the ability of phonons with frequency $`\mathrm{\Omega }`$ to change electron momenta and spins. This function, which is an analogue of the ordinary (spin-conserving) Eliashberg function $`\alpha ^2F(\mathrm{\Omega })`$, is important in spin-resolved point-contact spectroscopy, where phonon-induced spin flips could be directly observed. (A recent effort to detect phonon-induced spin flips in aluminum failed because of the overwhelming spin-flip boundary scattering in the sample.) Aluminum belongs to the group of metals whose spin relaxation is strongly influenced by band-structure anomalies . Monod and Beuneu observed that while simple estimates based on the Elliott-Yafet theory work well for monovalent alkali and noble metals, they severely underestimate $`1/T_1`$ for polyvalent Al, Mg, Be, and Pd (the only polyvalent metals measured so far). Silsbee and Beuneu pointed out that in aluminum accidental degeneracies can significantly enhance $`1/T_1`$. We recently developed a general theory including band-structure anomalies like accidental degeneracies, crossings of Brillouin zone boundaries, and special symmetry points, and rigorously showed that they all enhance $`1/T_1`$. This explains the Monod-Beuneu finding, because these anomalies (which we named “spin hot spots”) are ubiquitous in polyvalent metals. The present calculation is consistent with the spin-hot-spot picture. The formula for the spin relaxation rate, first derived by Yafet, can be written in the more conventional electron-phonon terminology as $`1/T_1(T)=8\pi T{\displaystyle \int _0^{\mathrm{\infty }}}d\mathrm{\Omega }\,\alpha _S^2F(\mathrm{\Omega }){\displaystyle \frac{\partial N(\mathrm{\Omega })}{\partial T}},`$ (1) where $`N(\mathrm{\Omega })=[\mathrm{exp}(\hbar \mathrm{\Omega }/k_BT)-1]^{-1}`$ and $`\alpha _S^2F(\mathrm{\Omega })`$ is the spin-flip Eliashberg function. Before writing the expression for $`\alpha _S^2F`$ we introduce the following notation. Electron states $`\mathrm{\Psi }`$ (normalized to a primitive cell) in the periodic potential $`V`$ containing the spin-orbit interaction are labeled by lattice momentum $`𝐤`$, band index $`n`$, and spin polarization $`\uparrow `$ or $`\downarrow `$. If $`V`$ has inversion symmetry (as in aluminum), the states $`\mathrm{\Psi }_{𝐤n\uparrow }`$ and $`\mathrm{\Psi }_{𝐤n\downarrow }`$ are degenerate . The spin polarization then means that these two states are chosen to satisfy $`(\mathrm{\Psi }_{𝐤n\uparrow },\widehat{\sigma }_z\mathrm{\Psi }_{𝐤n\uparrow })=-(\mathrm{\Psi }_{𝐤n\downarrow },\widehat{\sigma }_z\mathrm{\Psi }_{𝐤n\downarrow })>0`$, with the off-diagonal terms vanishing .
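The spin-hot-spot mechanism can be illustrated with a deliberately crude two-level toy model (not the actual aluminum band structure): two bands of opposite spin character separated by $`\mathrm{\Delta }`$ and coupled by a spin-orbit matrix element $`\lambda `$. The Elliott spin-mixing weight $`b^2`$ is then of order $`(\lambda /\mathrm{\Delta })^2`$ at generic points but saturates at $`1/2`$ near an accidental degeneracy. All numbers in the sketch below are invented for illustration.

```python
import numpy as np

def spin_mixing(delta, lam):
    """Spin admixture b^2 in a 2x2 toy model: two bands of opposite spin
    character split by delta and coupled by a spin-orbit element lam."""
    H = np.array([[0.5 * delta, lam], [lam, -0.5 * delta]])
    w, v = np.linalg.eigh(H)          # eigenvalues ascending; columns = eigenvectors
    return v[1, 1] ** 2               # "wrong-spin" weight in the eigenstate
                                      # adiabatically connected to +delta/2

lam = 1e-3                            # ~meV spin-orbit matrix element (illustrative)
for delta in (1.0, 0.1, 0.01, 0.0):   # band separation in eV
    print(f"splitting {delta:5.2f} eV  ->  b^2 = {spin_mixing(delta, lam):.2e}")
# Generic points give b^2 ~ (lam/delta)^2 << 1; at an accidental degeneracy
# (delta -> 0) the mixing saturates at b^2 = 1/2 -- a spin hot spot.
```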
Lattice vibrations are represented by phonons with momentum $`𝐪`$ and polarization index $`\nu `$. The phonon frequency is $`\omega _{𝐪\nu }`$ and the polarization vector $`𝐮_{𝐪\nu }`$ (we consider a Bravais lattice). If $`𝐪=𝐤-𝐤^{}`$ and $`g_{𝐤n\uparrow ,𝐤^{}n^{}\downarrow }^\nu \equiv \left|𝐮_{𝐪\nu }\cdot (\mathrm{\Psi }_{𝐤n\uparrow },\mathrm{\nabla }V\,\mathrm{\Psi }_{𝐤^{}n^{}\downarrow })\right|^2,`$ (2) the spin-flip Eliashberg function is $`\alpha _S^2F(\mathrm{\Omega })={\displaystyle \frac{g_S}{2M\mathrm{\Omega }}}{\displaystyle \underset{\nu }{\sum }}\left\langle \left\langle g_{𝐤n\uparrow ,𝐤^{}n^{}\downarrow }^\nu \,\delta (\omega _{𝐪\nu }-\mathrm{\Omega })\right\rangle _{𝐤n}\right\rangle _{𝐤^{}n^{}}.`$ (3) Here $`g_S`$ is the number of states per spin and atom at the Fermi level, $`M`$ is the ion mass, and $`\left\langle \mathrm{\cdots }\right\rangle _{𝐤n}`$ denotes Fermi-surface averaging . We calculate $`\alpha _S^2F`$ and $`T_1`$ for aluminum by the pseudopotential method . The spin-independent part of the electron-ion pseudopotential is represented by the Mašović-Zeković semi-empirical form factor, which reproduces well the observed band gaps at the symmetry points of the Brillouin zone. This is a crucial feature, because the presence of spin hot spots makes $`T_1`$ sensitive to the band structure at the Fermi surface . The spin-orbit part of the pseudopotential comes from a fit of the first-principles Bachelet-Hamann-Schlüter pseudopotential to $`\alpha \widehat{𝐋}\cdot \widehat{𝐒}\,𝒫_1`$, where $`\widehat{𝐋}`$ ($`\widehat{𝐒}`$) is the orbital (spin) momentum operator and $`𝒫_l`$ is the operator projecting on the orbital momentum state $`l`$. The parameter $`\alpha =2.4\times 10^{-3}`$ a.u. (1 a.u. = 2 Ry) inside the ion core of twice the Bohr radius, $`r_c=2r_B`$; outside the core $`\alpha `$ vanishes. The cutoff for the plane-wave energy is 1 a.u. from the Fermi level . For the phonons we use the highly successful force-constant model of Cowley , which gives an excellent fit to the experimental spectrum. Finally, the sums over the Brillouin zone are done by the tetrahedron method, with a specially designed grid of more than 4000 points around the Fermi surface in an irreducible wedge of the Brillouin zone to accurately capture the contributions from the spin hot spots. Figure 1 shows the calculated spin relaxation time $`T_1`$ as a function of temperature. The agreement with experiment is evident. At high temperatures, where there are no experimental data, our calculation predicts $`T_1[\mathrm{ns}]\approx 24\,T^{-1}[\mathrm{K}]`$. This behavior is expected for phonon-induced relaxation above the Debye temperature, which for aluminum is about 400 K. As Fig. 1 shows, the $`T_1\propto T^{-1}`$ behavior starts already at 200 K. At very low temperatures the theory predicts the asymptotic temperature dependence $`T_1\propto T^{-5}`$ (the Yafet law ) purely on dimensional grounds. Our calculation gives rather a good fit to $`T_1\propto T^{-4.35}`$ between 2 and 10 K. At lower temperatures our results cease to be reliable because of the finite size (limited by the computing resources) of the tetrahedron blocks in the summations over the Brillouin zone. We anticipate that the asymptotic Yafet law would be reached at lower temperatures (much lower than 2 K), since we have verified numerically its origin, namely that $`g_{𝐤n\uparrow ,𝐤^{}n^{}\downarrow }^\nu \propto (𝐤-𝐤^{})^4`$ as $`𝐤\to 𝐤^{}`$ (a quadratic dependence would be expected for spin-conserving matrix elements).
In Fig. 1 we also plot an estimate of $`T_1`$ based on the simple formula $`T_1\approx \tau /4\left\langle b^2\right\rangle ,`$ (4) where $`\left\langle b^2\right\rangle `$ is the Fermi-surface average of the spin-mixing parameter, calculated in to be $`2.5\times 10^{-5}`$, and $`\tau `$ is the momentum relaxation time obtained from the Drude formula for the resistivity (resistivity data taken from Ref. ) with an electron thermal mass of 1.5 times the free-electron mass. This estimate of $`T_1`$ reproduces well the calculated functional temperature dependence, making Eq. (4) useful as a starting point for order-of-magnitude estimates. The calculated spin-flip Eliashberg function $`\alpha _S^2F`$ for aluminum is shown in Fig. 2, along with the phonon density of states $`F`$ and the spin-conserving Eliashberg function $`\alpha ^2F`$. The latter agrees very well with previous calculations . Transverse phonon modes, which dominate the low-frequency spectrum, are less effective in scattering electrons, with or without spin flip, than high-frequency longitudinal phonon modes. The behavior of $`\alpha _S^2F`$ at small $`\mathrm{\Omega }`$ that gives the Yafet law is predicted to be $`\alpha _S^2F\propto \mathrm{\Omega }^4`$. We are not able to reproduce this result, again because of the finite size of the tetrahedron blocks. It is a well-known problem that the asymptotic low-frequency behavior is hard to reproduce . From the Eliashberg function we can calculate the effective electron-phonon coupling constant $`\lambda _{(S)}=2{\displaystyle \int _0^{\mathrm{\infty }}}{\displaystyle \frac{d\mathrm{\Omega }}{\mathrm{\Omega }}}\alpha _{(S)}^2F(\mathrm{\Omega }).`$ (5) We obtain $`\lambda \approx 0.4`$ and $`\lambda _S\approx 2.5\times 10^{-5}`$. The spin-conserving $`\lambda `$ falls well within the interval of “recommended” values $`0.38-0.48`$ obtained by different methods . At high temperatures the phonon-induced relaxation is determined by $`\lambda _{(S)}`$, since in this regime $`\hbar /\tau \approx 2\pi \lambda k_BT`$ and $`\hbar /T_1\approx 4\pi \lambda _Sk_BT`$. The momentum-to-spin relaxation time ratio $`\tau /T_1`$ is $`2\lambda _S/\lambda \approx 1.24\times 10^{-4}`$. From the above ratio of $`\tau /T_1`$ we obtain an “effective” $`\left\langle b^2\right\rangle \approx 3.1\times 10^{-5}`$ in Eq. (4), not that different from its calculated value of $`2.0\times 10^{-5}`$ . Thus, our theory is internally consistent. We conclude with a remark on the accuracy of our calculation of $`\lambda _S`$. The numerical error is accumulated mostly during the summations over the Brillouin zone; this error was previously estimated to be about 10%. Another source of uncertainty, which is much more important here than in spin-conserving calculations, comes from the choice of the pseudopotentials. While the spin-orbit pseudopotential sets the overall scale ($`1/T_1\propto \alpha ^2`$), the scalar part of the pseudopotential determines the “band renormalization” of $`1/T_1`$, that is, the enhancement due to spin hot spots . Here we can only offer a guess: considering the spin-orbit part “fixed”, our semi-empirical scalar pseudopotential, which reproduces the experimental band gaps at symmetry points within 5%, should not introduce more than another 10% error, making $`\lambda _S`$ determined with 20% accuracy. As for the spin-orbit interaction, future experiments done in the regime where $`T_1\propto 1/T`$ (that is, above 200 K) will have the opportunity to set definite constraints on $`\alpha `$ through a direct comparison with our theory.
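The crossover encoded in Eq. (1) is easy to reproduce with a toy spin-flip Eliashberg function. The sketch below assumes the predicted $`\alpha _S^2F\propto \mathrm{\Omega }^4`$ form up to a sharp Debye cutoff and normalises it to $`\lambda _S=2.5\times 10^{-5}`$ through Eq. (5); both the sharp 400 K cutoff and the pure $`\mathrm{\Omega }^4`$ shape are caricatures of Fig. 2, not the computed spectrum.

```python
import numpy as np
from scipy.integrate import quad

lam_S = 2.5e-5            # calculated spin-flip coupling constant
Omega_D = 400.0           # sharp Debye cutoff in kelvin (caricature of Fig. 2)
kB_over_hbar = 1.3092e11  # s^-1 K^-1; converts a rate in kelvin to s^-1

def a2F_S(Om):
    # Toy alpha_S^2 F ~ Omega^4 below the cutoff, normalised so that
    # 2 * int_0^{Omega_D} dOm a2F_S(Om)/Om = lam_S, cf. Eq. (5)
    return 2.0 * lam_S * Om ** 4 / Omega_D ** 4

def rate(T):
    """Eq. (1): 1/T1 = 8*pi*T * int dOm a2F_S(Om) dN/dT (units hbar = k_B = 1)."""
    def integrand(Om):
        x = Om / T
        dNdT = (Om / T ** 2) * np.exp(-x) / (1.0 - np.exp(-x)) ** 2
        return a2F_S(Om) * dNdT
    val, _ = quad(integrand, 1e-9, Omega_D)
    return 8.0 * np.pi * T * val * kB_over_hbar      # in s^-1

for T in (2.0, 10.0, 100.0, 300.0):
    print(f"T = {T:6.1f} K   T1 = {1.0 / rate(T):.3e} s")
# At low T the rate scales as T^5 (Yafet law); at high T it approaches
# 4*pi*lam_S*k_B*T/hbar, i.e. T1[ns] ~ 24/T[K].
```

The low-temperature entries scale as $`T^5`$ in the rate, while the 300 K entry lands near the $`T_1[\mathrm{ns}]\approx 24\,T^{-1}[\mathrm{K}]`$ estimate quoted above.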
In summary, we have provided the first fully quantitative theory for the temperature-dependent spin relaxation rate in aluminum, taking into account spin-orbit coupling and the electron-phonon interaction within the Elliott-Yafet formalism using realistic pseudopotentials. Our theoretical results are in excellent agreement with the measured $`T_1(T)`$ in aluminum, and for $`T>`$ 100 K, where experimental results are currently non-existent, our theory provides specific predictions for comparison with future experiments. We thank P. B. Allen for helpful discussions. This work was supported by the U.S. ONR and the U.S. ARO.
no-problem/9904/nucl-th9904003.html
ar5iv
text
# Reply to Comment on “Determination of pion-baryon coupling constants from QCD sum rules” In his Comment , Kim criticises the sum rules obtained in Ref. for pion-baryon coupling constants. In particular, he suggests that the treatment of the continuum in our work is inconsistent, and he presents a different perturbative model for the continuum that leads to rather different results. However, Kim’s arguments rely heavily on the use of single dispersion relations to take into account the continuum contributions to the correlation function. These are assumed to take the form $$\mathrm{\Pi }^{\mathrm{cont}}(p^2)=\int _S^{\mathrm{\infty }}𝑑s\frac{\rho _{\mathrm{OPE}}(s)}{s-p^2}+\mathrm{\cdots },$$ (1) up to subtraction terms, which are needed to cancel divergences of the integral over $`s`$ and which form a polynomial in $`p^2`$. Here $`S`$ is some threshold above which the perturbative continuum from the operator-product expansion (OPE) is assumed to provide a good approximation to the true spectral density $`\rho (s)`$. This is the form familiar from the analysis of sum rules for hadron masses, which are obtained from vacuum-to-vacuum two-point correlators. In contrast, meson-to-vacuum correlators, or correlators in the presence of an external field, should be represented by a double dispersion relation. This is because the external meson or field can cause transitions between different states that are created or annihilated by the chosen interpolating fields. A separate dispersion relation is thus needed for each of the hadron “propagators” corresponding to the initial and final hadrons. (For more details, see Ref. .) In the general case where the external meson or field carries nonzero momentum $`q`$, the continuum contribution can be written as $$\mathrm{\Pi }^{\mathrm{cont}}(p_1^2,p_2^2,q^2)=\int _{S_1}^{\mathrm{\infty }}𝑑s_1\int _{S_2}^{\mathrm{\infty }}𝑑s_2\frac{\rho _{\mathrm{OPE}}(s_1,s_2,q^2)}{(s_1-p_1^2)(s_2-p_2^2)}+\mathrm{\cdots },$$ (2) where $`S_1`$ and $`S_2`$ are (possibly different) thresholds. To obtain the sum rules of Ref. we expanded around the chiral limit and so worked in the limit $`q^2\to 0`$. In this case $`p_1^2=p_2^2\equiv p^2`$ and a single momentum flows through the correlator. At first sight one might think that the use of the single dispersion relation in $`p^2`$ is legitimate, since one could split up the denominator in (2) and perform the integration over either $`s_1`$ or $`s_2`$ first. However, this impression is misleading: the integrals over $`s_1`$ and $`s_2`$ are ultraviolet divergent and, moreover, the subtraction terms that are needed for the integral over $`s_1`$ ($`s_2`$) appear multiplying an unknown function of $`p_2^2`$ ($`p_1^2`$). Hence, even in the limit $`q^2\to 0`$, one cannot cancel the divergences of the integral in Eq. (2) by a simple polynomial in $`p^2`$. The above representation of $`\mathrm{\Pi }^{\mathrm{cont}}(p^2,p^2,0)`$ is thus not equivalent to a single dispersion relation. As stressed by Ioffe , important features of QCD sum rules for coupling constants, such as the double pole at $`p^2=M_N^2`$ and the single pole due to nucleon-to-continuum transitions with the associated subtraction terms, rely on the use of a double dispersion relation. As an example of our treatment of the continuum, consider the dimension-5 term in our sum rule, which arises from a term of the form $`C\mathrm{ln}(-p^2)`$ in the OPE.
The corresponding spectral density that reproduces this logarithm (up to subtraction constants) has the form $$\rho _{\mathrm{OPE}}(s_1,s_2,0)=Cs_1\delta (s_1-s_2).$$ (3) If we use this perturbative density in Eq. (2), starting at some threshold $`S`$, as our model for the continuum on the phenomenological side of the sum rule, we obtain $$\mathrm{\Pi }^{\mathrm{cont}}(p^2,p^2,0)=C\int _S^{\mathrm{\infty }}𝑑s\frac{s}{(s-p^2)^2}=C\left[\mathrm{ln}(S-p^2)-\frac{S}{S-p^2}+\mathrm{\cdots }\right],$$ (4) up to terms that vanish after Borel transforming. Taking the Borel transform of Eq. (4) with respect to $`Q^2=-p^2`$ gives a perturbative continuum contribution of the form $$CM^2\left(1+\frac{S}{M^2}\right)\mathrm{exp}(-S/M^2).$$ When this is taken over to the OPE side of the sum rule, it leads to the replacement of $`CM^2`$ (the Borel transform of $`C\mathrm{ln}(-p^2)`$) by $`CM^2E_1(S/M^2)`$, as in our paper. (The functions $`E_n(x)`$ are defined in the usual way: $`E_n(x)=1-(1+x+\mathrm{\cdots }+x^n/n!)e^{-x}`$.) A similar treatment of the dimension-3 term generates a factor of $`E_2(S/M^2)`$. The approach outlined here, which was used in Ref. , is based on a simple perturbative model for the spectral density, which is assumed to start at some threshold $`S`$. This is then inserted in the double dispersion relation for the correlator, where it gives rise to logarithmic discontinuities, starting at the threshold $`p^2=S`$, as well as threshold singularities, which in this case are poles. Kim raises questions about the unphysical nature of these poles and their consistency with ideas of duality. However, duality tells us only that the hadronic spectral density at high energies can be well approximated by a spectral density of quarks and gluons. The whole threshold, together with any associated singularities, is an artefact of our crude modelling of the continuum at lower energies, where hadronic resonances are important. Moreover, any simple pole-plus-continuum ansatz for the phenomenological spectral density ignores many of the singularities (cuts and threshold singularities) that will be present in the real correlator. The whole sum-rule approach relies on the assumption that, in some averaged sense, the main features of the real correlator are reproduced by the ansatz used on the phenomenological side of the sum rule. Hence we believe that our treatment of the continuum is consistent with duality and, more importantly, with the fact that the correlator in the presence of an external meson or field should be represented by a double dispersion relation. Any model of this type for the continuum should be used only in the context of some procedure for averaging over $`p^2`$, such as the Borel transform; its detailed form as a function of $`p^2`$ should not be taken too seriously. Finally, we do acknowledge one correction which needs to be made to the results of Ref. . This concerns the contribution of the dimension-7 condensate. In Ref. a contribution of this term to the continuum was included, which led to a factor of $`E_0(S/M^2)`$ in that term in the sum rule. Since, as Kim points out, the corresponding term in the OPE has the form $`1/p^2`$, with no logarithm, it does not contribute to the perturbative continuum. The factor $`E_0`$ should therefore be replaced by 1. However, this term is small; indeed, it was included only in order to estimate the size of dimension-7 contributions to the sum rule. Hence the numerical results of Ref. remain practically unchanged.
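The continuum-subtraction bookkeeping above is easy to verify symbolically. In a convention where the Borel image of the double-pole weight in Eq. (4) is $`s\,e^{-s/M^2}/M^2`$ integrated above the threshold (the overall normalisation is convention dependent and is an assumption of this sketch), the integral reproduces the $`(1+S/M^2)\mathrm{exp}(-S/M^2)`$ factor, and the subtraction turns $`M^2`$ into $`M^2E_1(S/M^2)`$:

```python
import sympy as sp

s, S, M = sp.symbols('s S M', positive=True)
x = S / M**2

# Borelised model continuum of Eqs. (3)-(4), with the assumed weight
# s * exp(-s/M^2) / M^2 above the threshold S:
cont = sp.integrate(s * sp.exp(-s / M**2) / M**2, (s, S, sp.oo))

E1 = 1 - (1 + x) * sp.exp(-x)        # E_1 as defined in the text

print(sp.simplify(cont - M**2 * (1 + x) * sp.exp(-x)))  # -> 0
print(sp.simplify(M**2 - cont - M**2 * E1))             # -> 0
```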
# Electronic structure of spinel-type LiV2O4 ## Abstract The band structure of the cubic spinel compound LiV<sub>2</sub>O<sub>4</sub>, which has been reported recently to show heavy Fermion behavior, has been calculated within the local-density approximation using a full-potential version of the linear augmented-plane-wave method. The results show that the partially-filled V 3$`d`$ bands are located about 1.9 eV above the O 2$`p`$ bands and the V 3$`d`$ bands are split into a lower partially-filled $`t_{2g}`$ complex and an upper unoccupied $`e_g`$ manifold. The fact that the conduction electrons originate solely from the $`t_{2g}`$ bands suggests that the mechanism for the mass enhancement in this system is different from that in the 4$`f`$ heavy Fermion systems, where these effects are attributed to the hybridization between the localized 4$`f`$ levels and itinerant $`spd`$ bands. The recent discovery of heavy Fermion (HF) behavior in LiV<sub>2</sub>O<sub>4</sub> by Kondo et al. has significant importance because this is the first $`d`$-electron system that shows HF characteristics, a phenomenon that has previously been observed only in $`f`$-electron systems. Kondo et al. have reported a large electronic specific-heat coefficient of $`\gamma \approx `$ 0.42 J/mol K<sup>2</sup> and a crossover with decreasing temperature $`T`$ from local moment to renormalized Fermi-liquid behavior. Recently Takagi et al. have reported that the electrical resistivity $`\rho `$ of single crystals exhibits a $`T^2`$ temperature dependence $`\rho =\rho _0+AT^2`$ (Ref. ) with an enormous $`A`$, which is another HF characteristic. The Curie-Weiss law at high temperatures implies that each V ion has a local moment and that their coupling is antiferromagnetic, although no magnetic ordering is observed down to 0.02 K. The purpose of the present study is to determine from first principles the electronic band properties of LiV<sub>2</sub>O<sub>4</sub>, with particular emphasis on features near the Fermi level $`E_F`$. This provides a reference framework for evaluating the magnitude of the heavy Fermion mass-enhancement effects in this material. In addition, a simple tight-binding model which captures the essential features of the LiV<sub>2</sub>O<sub>4</sub> electronic structure is presented. In heavy Fermion systems, the enhanced electron mass manifests itself in terms of an exceptionally large value of the density of quasi-particles at $`E_F`$. Experimentally, this high density is reflected in the large specific-heat $`\gamma `$ and the large spin susceptibility $`\chi ^{\mathrm{spin}}`$ which is nearly $`T`$-independent. The calculated band DOS at $`E_F`$, $`D(E_F)`$, can be compared directly with the experimental results, and the ratio of the experimental and calculated band $`D(E_F)`$ values then provides a direct measure of the enhancement factors. LiV<sub>2</sub>O<sub>4</sub> forms with the well known cubic spinel structure in which the Li ions are tetrahedrally coordinated by oxygens while the V sites are surrounded by a slightly-distorted octahedral array of oxygens. The spinel structure features a face-centered-cubic Bravais lattice and a nonsymmorphic space group ($`Fd\overline{3}m`$) which is identical with that of the diamond structure. The primitive unit cell contains two LiV<sub>2</sub>O<sub>4</sub> formula units (14 atoms). One can construct the spinel structure by an alternate stacking along $`\langle 100\rangle `$-type directions of the two different kinds of cubes shown in Fig. 1.
The LiV<sub>2</sub> substructure is the same as the $`C15`$ structure $`AB_2`$, where the local moments at the $`B`$ sites are highly frustrated. The observed lattice constant of LiV<sub>2</sub>O<sub>4</sub> is 8.22672 Å at 4 K. The eight oxygen atoms in the primitive cell are situated at the 32$`e`$-type sites, at positions which are determined by the internal-position parameter $`x=0.2611`$. The VO<sub>6</sub> octahedra are trigonally distorted in the spinel structure unless the parameter $`x`$ is equal to the “ideal” value 0.25. The present band-structure calculations for LiV<sub>2</sub>O<sub>4</sub> have been carried out in the local-density approximation (LDA) using a full-potential, scalar-relativistic implementation of the linear augmented-plane-wave (LAPW) method. The LAPW basis has included plane waves with a 14-Ry cutoff (∼60 LAPW’s/atom) and spherical-harmonic terms up to $`l=6`$ inside the muffin tins. The crystalline charge density and potential have been expanded using ∼7300 plane waves (60 Ry cutoff) in the interstitial region and lattice harmonics with $`l_{\mathrm{max}}=6`$ inside the muffin-tin spheres ($`R_{\mathrm{Li}}=1.98`$ a.u., $`R_\mathrm{V}=2.16`$ a.u., $`R_\mathrm{O}=1.54`$ a.u.). Brillouin-zone (BZ) integrations have utilized a ten-point k sample in the 1/48 irreducible wedge. Exchange and correlation effects have been treated via the Wigner interpolation formula. The atomic Li($`2s^1`$), V($`3d^44s^1`$) and O($`2s^22p^4`$) states were treated as valence electrons in this study whereas the more tightly bound levels were included via a frozen-core approximation. The results of the present LAPW calculations for LiV<sub>2</sub>O<sub>4</sub> are plotted along selected BZ symmetry lines in Fig. 2(a). The O 2$`s`$ states, which are not shown, form narrow (∼1 eV) bands and are situated about 19 eV below the Fermi level $`E_F`$. The lowest band complex shown in this figure evolves from the O 2$`p`$ states: it contains 24 bands, and has an overall width of ∼4.8 eV. The 20 V 3$`d`$ bands lie above the O 2$`p`$ bands, separated by a 1.9-eV gap. A broad plane-wave conduction band evolves from the $`\mathrm{\Gamma }`$ point above 4.4 eV. These general features are quite similar to those exhibited by previous results for the closely related isostructural compound LiTi<sub>2</sub>O<sub>4</sub>. The octahedral crystal field at the V sites in the spinel structure splits the 20 V 3$`d`$ bands into twelve partially-filled $`t_{2g}`$ bands and eight $`e_g`$ states. The Fermi level lies within the $`t_{2g}`$ complex, and thus the transport properties of LiV<sub>2</sub>O<sub>4</sub> are solely associated with $`t_{2g}`$ bands. The formal valence of vanadium is 3.5+, yielding exactly six valence electrons per cell in a perfectly stoichiometric material. The point symmetry at the V sites is $`D_{3d}`$. In the fully localized limit, this allows the cubic $`t_{2g}`$ crystal-field-type states to be split into $`a_{1g}`$ and $`e_g`$ levels. As discussed below, tight-binding estimates of this splitting show that it is quite small (∼0.14 eV) in LiV<sub>2</sub>O<sub>4</sub>, about 6 % of the octahedral $`e_g`$-$`t_{2g}`$ splitting (∼2.5 eV). The 0.14-eV difference between the $`a_{1g}`$ and $`e_g`$ orbital energies is small compared to the overall $`t_{2g}`$ bandwidth (∼2.2 eV). As a result, this splitting of the $`a_{1g}`$ and $`e_g`$ orbital energies does not produce effects which are readily discernible in the band-dispersion curves for LiV<sub>2</sub>O<sub>4</sub>.
In order to provide a convenient starting point for future investigations of electron-correlation effects in the present LiV<sub>2</sub>O<sub>4</sub> system, we have applied a simple tight-binding (TB) model to fit the present LAPW results at six symmetry points in the BZ. This TB model has included the V 3$`d`$ as well as the O 2$`s`$ and 2$`p`$ orbitals. Using 12 independent TB parameters, a moderately accurate (rms error = 0.17 eV) fit has been obtained to the 52 LiV<sub>2</sub>O<sub>4</sub> valence and conduction bands. The fitting has involved two-center energy as well as overlap parameters. The TB parameters are listed in Table I. The TB representation of the LiV<sub>2</sub>O<sub>4</sub> bands is shown in Fig. 2(b). It is clear that, although the agreement is not perfect, the TB model captures the essential features of the LAPW results. A different TB model excluding the oxygen orbitals has been applied to the LAPW V 3$`d`$ band results in order to obtain estimates of the orbital energies for the individual $`a_{1g}`$ and $`e_g`$ subbands in this system. (Note that the TB parameters in Table I include only a single V 3$`d`$ orbital energy $`E_d`$). These TB orbital energies represent the mean band energy for each subband. They are in fact the crystal-field levels that come into play in the limit where the V 3$`d`$ electrons are localized by electron-correlation effects. According to the present analysis, the mean band energy for the upper $`e_g`$ complex is 2.82 eV while those for the $`t_{2g}`$-derived $`a_{1g}`$ and $`e_g`$ manifold are 0.43 and 0.29 eV, respectively. Thus, the crystal-field splitting that arises from the octahedral coordination of oxygens (2.48 eV) is more than an order-of-magnitude larger than that originating from the trigonal distortion (∼0.14 eV). This TB model, which also involved a total of 12 parameters, yielded a moderate fit (rms error = 0.15 eV) to the V 3$`d`$-band states. The effective $`d`$-$`d`$ hopping integrals \[$`(dd\sigma )`$, $`(dd\pi )`$, $`(dd\delta )`$\] over three shells of V-V neighbors \[$`d_1`$ = 2.91 Å, $`d_2`$ = 5.04 Å, $`d_3`$ = 5.82 Å\] that have been included in this fit have values (in eV) \[-0.425, 0.008, 0.152\], \[-0.034, 0.026, -0.021\], and \[-0.060, 0.064, -0.026\], respectively. These parameters may provide a useful starting point for future studies of electron-electron correlation effects in this system. A more detailed view of the $`t_{2g}`$ portion of the LAPW V 3$`d`$ energy-band results for LiV<sub>2</sub>O<sub>4</sub> is shown in Fig. 3. The six LiV<sub>2</sub>O<sub>4</sub> valence electrons per cell are sufficient to fill, on average, three of the 12 $`t_{2g}`$ conduction bands. In fact, the calculated LAPW band dispersion produces several partially-filled bands, leading to a rather complicated Fermi-surface topology in this system. While two bands are completely filled, the third conduction band contains holes near the $`X`$ and $`L`$ symmetry points ($`h_3`$) and electron pockets at $`\mathrm{\Gamma }`$ ($`e_4`$ and $`e_5`$) and $`W`$ ($`e_4`$). Since the unit cell contains an even number of electrons, compensation requires that there exist equal numbers (i.e. BZ volumes) of electrons and holes. The existence of Fermi-surface sheets with both electron and hole character in LiV<sub>2</sub>O<sub>4</sub> provides a ready explanation for the experimental observation that the Hall coefficient changes sign with temperature in accordance with a two-carrier model.
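The three V-V shell distances $`d_1`$, $`d_2`$, $`d_3`$ quoted above follow directly from the geometry of the 16$`d`$ (pyrochlore) vanadium sublattice. The short numpy sketch below recovers them by brute force; it is our illustration, not code from the paper, and the basis coordinates assume the conventional origin choice with V at (5/8,5/8,5/8)-type positions:

```python
import numpy as np

a = 8.22672  # lattice constant of LiV2O4 at 4 K, in Angstrom
# fcc primitive translations of the spinel Bravais lattice
A = 0.5 * a * np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
# the four V (16d) sites of the primitive cell, conventional fractional coords
basis = a * np.array([[5, 5, 5], [5, 7, 7], [7, 5, 7], [7, 7, 5]], float) / 8.0

# collect all V-V distances from one reference atom out to a few cells
ref = basis[0]
dists = []
for n in np.ndindex(5, 5, 5):
    t = (np.array(n) - 2) @ A          # lattice translation vector
    for b in basis:
        d = np.linalg.norm(b + t - ref)
        if d > 1e-6:
            dists.append(round(d, 4))

print(sorted(set(dists))[:3])  # -> [2.9086, 5.0378, 5.8172], i.e. d1, d2, d3
```

The shells correspond to $`a\sqrt{2}/4`$, $`a\sqrt{6}/4`$ and $`a\sqrt{2}/2`$, matching the 2.91, 5.04 and 5.82 Å neighbor distances used in the TB fit.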
The total DOS and the projected contributions to the DOS from various muffin-tins are shown in Fig. 4. The dominant contribution to the DOS over this energy range originates from the O 2$`p`$ and V 3$`d`$ states. The O 2$`p`$ bands are filled by electrons transferred from the Li 2$`s`$ and V 3$`d`$ orbitals so the system can be described by the ionic configuration Li<sup>+</sup>(V<sup>3.5+</sup>)<sub>2</sub>(O<sup>2-</sup>)<sub>4</sub>. Previous photoemission spectra are in general agreement with these results. However, the calculated band DOS and the photoemission spectra do not agree with each other in the V 3$`d`$ band region near $`E_F`$. In particular, the photoemission intensity at $`E_F`$ is an order of magnitude lower than the calculated DOS. On the other hand, the density of quasi-particles at $`E_F`$ derived from the specific heat is much higher than the calculated $`D(E_F)`$. According to our results, the calculated $`D(E_F)=`$ 7.1 states/(eV$``$formula-unit) corresponds to $`\gamma _{\mathrm{cal}}=`$ 17 mJ/mol K<sup>2</sup>; this is ∼25 times smaller than the experimental value of $`\gamma \approx `$ 0.42 J/mol K<sup>2</sup>. It is well-known that electron-phonon interactions enhance the electronic specific heat, as in the closely-related spinel compound LiTi<sub>2</sub>O<sub>4</sub>, where band calculations indicated strong electron-phonon-coupling effects, yielding a coupling parameter $`\lambda =`$ 1.8 and hence a mass enhancement factor of $`1+\lambda =`$ 2.8. In the present case, however, the deduced enhancement factor of ∼25 suggests that a different mechanism such as electron-correlation effects would be responsible for the observed HF behavior in LiV<sub>2</sub>O<sub>4</sub>. In comparison with the band structures of typical 4$`f`$-electron HF systems, our results show similar behavior in that the calculated density of states at $`E_F`$ is much smaller than those obtained from the analysis of experimental specific-heat data. For example, the ratio $`\gamma _{\mathrm{exp}}/\gamma _{\mathrm{cal}}`$ is ∼100 for CeCu<sub>2</sub>Si<sub>2</sub> and ∼70 for CeAl<sub>3</sub>. Also, the band structures of the $`f`$-electron HF and LiV<sub>2</sub>O<sub>4</sub> systems are similar in that there is a sharp DOS peak just above $`E_F`$ (∼0.3 eV for CeCu<sub>2</sub>Si<sub>2</sub> and ∼0.1 eV for LiV<sub>2</sub>O<sub>4</sub>, see inset of Fig. 4). In order to explain the high density of quasi-particles deduced from the electronic specific heats, a renormalized band picture was proposed, in which a strong renormalization of the $`f`$ band due to the strong correlation at the 4$`f`$ site was taken into account. In the renormalized band picture, it is considered that electrons with two different degrees of wave function localization, namely, the itinerant $`sd`$ conduction electrons and the well-localized $`f`$ electrons, hybridize with each other. In the case of LiV<sub>2</sub>O<sub>4</sub>, however, all the conduction bands crossing $`E_F`$ consist of V $`t_{2g}(3d)`$ orbitals, i.e., electrons with the same degree of localization, and therefore the mechanism of the mass enhancement may be totally different from the $`f`$-electron systems. The narrowing of the energy bands due to electron correlation within the $`t_{2g}`$ bands is likely. Given the high-temperature local moment behavior in a metallic system, a prerequisite for HF behavior is that the local moments do not show long-range order at low temperatures.
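The quoted correspondence between $`D(E_F)`$ and $`\gamma _{\mathrm{cal}}`$ is just the free-electron Sommerfeld relation $`\gamma =(\pi ^2/3)k_B^2D(E_F)`$. A minimal Python sketch of the unit conversion, added here as our cross-check rather than code from the paper:

```python
import numpy as np

k_B = 8.617333e-5   # Boltzmann constant, eV/K
N_A = 6.02214e23    # Avogadro's number, 1/mol
eV = 1.602177e-19   # J per eV

D_EF = 7.1          # calculated DOS at E_F, states/(eV * formula unit)
gamma = (np.pi**2 / 3) * k_B**2 * D_EF * N_A * eV   # J/(mol K^2)
print(f"gamma_cal = {1e3 * gamma:.1f} mJ/mol K^2")  # ~17 mJ/mol K^2

gamma_exp = 0.42    # J/mol K^2, experimental value
print(f"enhancement factor = {gamma_exp / gamma:.0f}")  # ~25
```

Running this reproduces both the 17 mJ/mol K<sup>2</sup> band value and the enhancement factor of about 25 discussed above.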
In the 4$`f`$-electron HF systems, long-range order is prohibited by the weakness of the coupling between the 4$`f`$ local moments. In the case of LiV<sub>2</sub>O<sub>4</sub>, the long-range order is disfavored by the magnetic frustration originating from the spinel structure. This phenomenological consideration remains to be justified by microscopic theories. In conclusion, the results of LAPW band calculations for the spinel-type compound LiV<sub>2</sub>O<sub>4</sub> show that the O 2$`p`$, V 3$`d`$ ($`t_{2g}`$) and V 3$`d`$ ($`e_g`$) bands as well as the higher-lying $`sp`$ conduction band are well separated from each other and that the Fermi level lies within the $`t_{2g}`$ bands. The electronic structure of LiV<sub>2</sub>O<sub>4</sub> can be well described by electrons in the triply-degenerate $`t_{2g}`$ bands, thereby indicating that the mechanism of the mass enhancement in this compound should be different from that in the $`f`$-electron heavy Fermion systems. Further experimental studies on various physical properties at low and high temperatures are necessary to characterize the nature of the heavy Fermion behavior in LiV<sub>2</sub>O<sub>4</sub>. High-resolution photoemission studies would be especially useful for clarifying the renormalization of quasi-particles in this system. The authors would like to thank Prof. Y. Ueda for valuable discussions. This work was supported by a Special Coordination Fund for Promoting Science and Technology from the Science and Technology Agency of Japan. L. F. M. is grateful to the Yamada Science Foundation for supporting his visit to the University of Tokyo. J. M. acknowledges support from the Japan Society for the Promotion of Science for Young Scientists.
# Nonpartonic components in the nucleon structure functions at small 𝑄² in a broad range of 𝑥 ## 1 Introduction The standard deep inelastic scattering picture applies when the four-momentum transfer squared from the lepton line to the hadron line ($`Q^2`$) is large. When the virtual photon wavelength increases and reaches the size of the nucleon one may expect a transition to another regime where the standard partonic model is no longer valid. In this region a kinematical constraint guarantees the vanishing of the $`F_2(x,Q^2)`$ structure function. This requirement is not embodied in the perturbative parton distributions. A phenomenological fit based on parton screening was proposed in to satisfy this condition by introducing an extra form factor. The recent low-$`Q^2`$ data from HERA have triggered many phenomenological analyses. Especially interesting is the unexplored transition region. At present there is no consensus on the details of the transition mechanism. Here we concentrate on the region of somewhat larger $`x`$ rather than the new HERA data. We shall demonstrate that also at larger $`x`$ a similar transition due to vanishing partonic components at small $`Q^2`$ takes place, although it is not directly seen from experimental data. It is common wisdom that the vector dominance model applies at low $`Q^2`$ while the parton model describes the region of large $`Q^2`$, leading at lowest order to Bjorken scaling, and to logarithmic scaling violation in higher orders of QCD. A proposal was made in Ref. to unify both limits in a consistent dispersion-method approach. In the traditional formulation of the VDM one is limited to large lifetimes of hadronic fluctuations of the virtual photon, i.e. small Bjorken $`x<0.1`$ for the existing data. It is the purpose of this paper to generalize the model to a full range of $`Q^2`$ and $`x`$ by introducing extra phenomenological form factors to be adjusted to the experimental data. Some authors believe that it is old-fashioned to talk about the VDM contribution in the QCD era. However, VDM effects appear naturally in the time-like region in the production of vector mesons in $`e^+e^{-}`$ collisions. These effects cannot be described in terms of perturbative QCD, as in the production of resonances many complicated nonperturbative effects take place. The physics must be similar in the space-like region. We shall demonstrate that it is essential to include this contribution explicitly in order to describe the structure functions at low Q<sup>2</sup>. In the next section we outline our model and discuss how to choose its basic parameters in a model independent way. In section 3 we discuss a fit of the remaining parameters to the fixed target data and present results of the fitting procedure for the proton and deuteron structure functions. In addition we compare the result of our model for $`F_2^p-F_2^n`$ and some subtle isovector higher-twist effects with another low-$`Q^2`$ model. Finally we discuss some interesting predictions which could be verified in the future. ## 2 The model As in Ref. the total nucleon structure function is represented as a sum of the standard vector dominance part, important at small $`Q^2`$ and/or small Bjorken $`x`$, and a partonic (part) piece which dominates over the vector dominance (VDM) part at large $`Q^2`$: $$F_2^N(x,Q^2)=F_2^{N,VDM}(x,Q^2)+F_2^{N,part}(x,Q^2).$$ (1) The standard range of applicability of the vector dominance contribution is limited to large invariant masses of the hadronic system ($`W`$), i.e.
small values of $`x`$. In the target (nucleon) reference frame the lifetime of the hadronic fluctuation is given according to the uncertainty principle as $`\tau \approx 1/\mathrm{\Delta }E`$ with $$\mathrm{\Delta }E=\sqrt{M_V^2+|𝐪|^2}-\sqrt{q^2+|𝐪|^2},$$ (2) where $`M_V`$ is the mass of the hadronic fluctuation (vector meson mass). In terms of the photon virtuality and Bjorken $`x`$ this can be expressed as $$\mathrm{\Delta }E=\frac{M_V^2+Q^2}{Q^2}M_Nx.$$ (3) As the energy transfer $`\nu \to \mathrm{\infty }`$ the lifetime of the hadronic fluctuation becomes $`\tau \approx \frac{Q^2}{(M_V^2+Q^2)m_Nx}`$. It is natural to expect a small VDM contribution when the lifetime of the hadronic fluctuation is small. We shall model this fact by introducing a form factor $`\mathrm{\Omega }(\tau )=\mathrm{\Omega }(x,Q^2)`$. Then the modified vector dominance contribution can be written as: $$F_2^{N,VDM}(x,Q^2)=\frac{Q^2}{\pi }\underset{V}{\sum }\frac{M_V^4\sigma _{VN}^{tot}(s^{1/2})}{\gamma _V^2(Q^2+M_V^2)^2}\mathrm{\Omega }_V(x,Q^2).$$ (4) In the present paper we take $`\gamma `$’s calculated from the leptonic decays of vector mesons which include finite width corrections: $`\gamma _\rho ^2/4\pi `$ = 2.54, $`\gamma _\omega ^2/4\pi `$ = 20.5, $`\gamma _\varphi ^2/4\pi `$ = 11.7. <sup>1</sup><sup>1</sup>1Please note the different normalization of $`\gamma `$’s in comparison to . In general one can try different functional forms for $`\mathrm{\Omega }`$. In the present analysis we shall use only exponential and Gaussian form factors $`\mathrm{\Omega }(x,Q^2)=\mathrm{exp}(-\mathrm{\Delta }E/\lambda _E),`$ $`\mathrm{\Omega }(x,Q^2)=\mathrm{exp}(-(\mathrm{\Delta }E/\lambda _G)^2).`$ (5) As in Ref. we take the partonic contribution as $$F_2^{N,part}(x,Q^2)=\frac{Q^2}{Q^2+Q_0^2}F_2^{asymp}(\overline{x},\overline{Q}^2),$$ (6) where $`\overline{x}=\frac{Q^2+Q_2^2}{W^2-m_N^2+Q^2+Q_2^2}`$ and $`\overline{Q}^2=Q^2+Q_1^2`$. The $`F_2^{asymp}(x,Q^2)`$ above denotes the standard partonic structure function which in the leading order can be expressed in terms of the quark distributions: $`F_2^{asymp}(x,Q^2)=x\underset{f}{\sum }e_f^2\left[q_f(x,Q^2)+\overline{q}_f(x,Q^2)\right]`$. The extra factor in front of Eq.(6) assures the correct kinematic behaviour in the limit $`Q^2\to 0`$. In general $`Q_0^2`$, $`Q_1^2`$ and $`Q_2^2`$ can be slightly different. In the following section we shall consider different options. At large Bjorken $`x`$ one has to include also the so-called target mass corrections. Their origin is mainly kinematic. In our approximate treatment we substitute the Bjorken variable $`x`$ in the partonic distributions by the Nachtmann variable $`\xi `$ given by: $$\xi =\frac{2x}{1+\sqrt{1+\frac{4M_N^2x^2}{Q^2}}},$$ (7) which is the dominant modification. In principle $`F_2^{asymp}(x,Q^2)`$ could be obtained in any realistic model of the nucleon combined with QCD evolution. We leave the rather difficult problem of modeling the partonic distributions for future studies. We expect that at not too small $`x>`$ 0.01, the region of interest of the present paper, the leading order Glück-Reya-Vogt (GRV) parametrization of $`F_2^{p,asymp}(x,Q^2)`$ and $`F_2^{n,asymp}(x,Q^2)`$ should be adequate. Furthermore in our opinion the parametrization with the valence-like input for the sea quark distributions and the $`\overline{d}`$–$`\overline{u}`$ asymmetry built in incorporates in a phenomenological way nonperturbative effects caused by the meson cloud in the nucleon. The total cross section for (vector meson) – (nucleon) collision is not well known.
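Before turning to the cross sections, note that the size of the target-mass substitution $`x\to \xi `$ in Eq. (7) is easy to quantify. A minimal Python sketch, ours rather than the authors' code:

```python
import numpy as np

M_N = 0.938  # nucleon mass, GeV

def nachtmann_xi(x, Q2):
    """Nachtmann variable of Eq. (7), the leading target-mass correction."""
    return 2.0 * x / (1.0 + np.sqrt(1.0 + 4.0 * M_N**2 * x**2 / Q2))

for Q2 in (1.0, 4.0, 10.0):  # GeV^2
    for x in (0.1, 0.45, 0.75):
        print(f"Q2={Q2:5.1f}  x={x:.2f}  xi={nachtmann_xi(x, Q2):.3f}")
```

At $`Q^2`$ = 1 GeV<sup>2</sup> and $`x`$ = 0.75 the shift is from 0.75 to about 0.55, which is why the target-mass treatment turns out to be essential in the fits described below.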
Above meson-nucleon resonances, one may expect the following approximation to hold: $`\sigma _{\rho N}^{tot}`$ $`=`$ $`\sigma _{\omega N}^{tot}={\displaystyle \frac{1}{2}}\left[\sigma _{\pi ^+p}^{tot}+\sigma _{\pi ^{-}p}^{tot}\right],`$ $`\sigma _{\varphi N}^{tot}`$ $`=`$ $`\sigma _{K^+p}^{tot}+\sigma _{K^{-}p}^{tot}-{\displaystyle \frac{1}{2}}\left[\sigma _{\pi ^+p}^{tot}+\sigma _{\pi ^{-}p}^{tot}\right].`$ (8) Using simple Regge-inspired parametrizations by Donnachie-Landshoff of the total $`\pi N`$ and $`KN`$ cross sections we get simple and economic parametrizations for energies above nucleon resonances $`\sigma _{\rho N}^{tot}`$ $`=`$ $`\sigma _{\omega N}^{tot}=13.63s^{0.0808}+31.79s^{-0.4525},`$ $`\sigma _{\varphi N}^{tot}`$ $`=`$ $`10.01s^{0.0808}+2.72s^{-0.4525},`$ (9) where the resulting cross sections are in mb. We expect that our model should be valid in a broad range of $`x`$ and $`Q^2`$ except for very small $`x<0.001`$, where genuine effects of BFKL pomeron physics could show up, and except for very large $`x`$, where the energy ($`s^{1/2}`$ in Eq.(4)) is small and the behaviour of the total $`VN`$ cross section is essentially unknown. Because our main interest is in the transition region, the large $`Q^2`$ data were not taken into account in the fit. There the partonic contribution is by far dominant and the GRV parametrization is known to provide a reliable description of the data. ## 3 Comparison to experimental data Most of the previous parametrizations in the literature centered on the proton structure function. In the present analysis we are equally interested in both proton and neutron structure functions. Achieving this goal requires a special selection of the experimental data with similar statistics and similar range in ($`x`$,$`Q^2`$) for the proton and deuteron structure function. In Fig.1 we display the experimental data for proton (left panel) and deuteron (right panel) structure functions chosen in our fit. We have selected only NMC, E665 and SLAC sets of data for both proton and deuteron structure functions, together amounting to 1833 experimental points: 901 for the proton structure function and 932 for the deuteron structure function. According to the arguments presented above, we have omitted BCDMS and HERA data in the fitting procedure but these will be compared to our parametrization when discussing the quality of the fit. The deuteron structure function has been calculated as $$F_2^d(x,Q^2)=\frac{1}{2}[F_2^p(x,Q^2)+F_2^n(x,Q^2)],$$ (10) i.e. we have neglected all nuclear effects such as shadowing, antishadowing due to excess mesons, Fermi motion, binding, etc, which are known to be relatively small for the structure function of the deuteron, one of the most loosely bound nuclear systems. In addition we have assumed isospin symmetry for the proton and neutron quark distributions, i.e. $`u_n(x,Q^2)=d_p(x,Q^2)`$, $`d_n(x,Q^2)=u_p(x,Q^2)`$ and $`s_n(x,Q^2)=s_p(x,Q^2)`$. The charm contribution, which in the GRV parametrization is due to the photon-gluon fusion, is in practice negligible in the region of $`x`$ and $`Q^2`$ taken in the fit, and therefore is omitted throughout the present analysis. The results of the fit are summarised in Table 1. Because in general $`Q_0^2`$, $`Q_1^2`$ and $`Q_2^2`$ can be different, there are 4 independent free parameters of the model. In order to limit the number of parameters we have imposed extra conditions as specified in Table 1. A series of seven fits has been performed.
In all cases considered the number of free parameters has been reduced to two: the cut-off parameter of the form factor and $`Q_0^2`$. Statistical and systematic errors were added in quadrature when calculating $`\chi ^2`$. Only data with $`Q^2>`$ 0.25 GeV<sup>2</sup> were taken in the fit, which is connected with the domain of applicability of the GRV parametrization. The values of the parameters found are given in each case in parentheses below the value of $`\chi ^2`$ per degree of freedom. In addition to the combined fit, which includes both proton $`F_2^p`$ and deuteron $`F_2^d`$ structure function data, we show the result of the fit separately for proton and deuteron structure functions. As can be seen from the table, fairly similar values of the parameters are found for the proton and deuteron structure function, and the $`\chi ^2`$ per degree of freedom is slightly worse in the latter case, which can be due to the omission of nuclear effects as mentioned above. The best fit (No 1 in the table) is obtained with $`Q_1^2=Q_2^2=0`$ (fits of similar quality can be obtained with very small values of $`Q_1^2\approx `$ 0.1 GeV<sup>2</sup> and $`Q_2^2\approx `$ 0.1 GeV<sup>2</sup>). While the value of $`\chi ^2`$ does not practically depend on the type of the form factor (exponential vs. Gaussian), a much larger value of $`Q_0^2`$ is found for the Gaussian ($`Q_0^2`$ = 0.84 GeV<sup>2</sup>) than for the exponential ($`Q_0^2`$ = 0.52 GeV<sup>2</sup>) parametrization. The value of $`Q_0^2`$ found here is smaller than in the original Badełek-Kwieciński model but larger than that found by the H1 collaboration in the fit to low-$`x`$ data. Although the resulting $`\chi ^2`$ is similar in both cases, the $`F_2^n(x)/F_2^p(x)`$ ratio for $`x\to 1`$ prefers the Gaussian form factor. While the vector meson contribution with the exponential form factor survives up to relatively large $`x`$, with the Gaussian form factor it is negligible at large $`x`$. For comparison the GRV parametrization of quark distributions alone yields: $`\chi ^2/N_{dof}`$ = 9.74 (21.48) (proton structure functions), $`\chi ^2/N_{dof}`$ = 13.73 (32.99) (deuteron structure functions), $`\chi ^2/N_{dof}`$ = 11.77 (27.33) (combined data), where the first numbers include target mass corrections and for illustration in parentheses their counterparts without target mass corrections are given. Clearly the inclusion of the target mass effects is essential and only such results will be discussed in the course of the present paper. The agreement of the CKMT parametrization is comparable to that obtained in our model. For instance for parametrization (b) in Table 2 in the second paper, which includes new HERA data: $`\chi ^2/N_{dof}`$ = 2.22 (1.00) (proton structure functions), $`\chi ^2/N_{dof}`$ = 3.54 (3.59) (deuteron structure functions), $`\chi ^2/N_{dof}`$ = 2.89 (2.33) (combined data), where in the parentheses we present $`\chi ^2`$ for $`Q^2<`$ 4 GeV<sup>2</sup>, i.e. in the region of applicability of the CKMT parametrization. <sup>2</sup><sup>2</sup>2The number of experimental points is reduced then to 354 and 373 for proton and deuteron structure functions, respectively. We note a much better description of the proton data in comparison to the deuteron data. The agreement of the Donnachie-Landshoff parametrization with the proton structure function data is of similar quality. In Fig.2 we present for completeness a map of $`\chi ^2`$ for our best fit as a function of the model parameters $`Q_0^2`$ and $`\lambda `$.
A well defined minimum of $`\chi ^2`$ for $`\lambda _G\approx `$ 0.5 GeV and $`Q_0^2\approx `$ 0.85 GeV<sup>2</sup> can be seen. The experimental statistical uncertainty of the obtained parameters $`\lambda _G`$ and $`Q_0^2`$ is less than 1 %. Some examples of the fit quality can be seen in Fig.3 ($`x`$-dependence for different values of $`Q^2`$ = 0.585, 1.1, 2.0, 3.5 GeV<sup>2</sup>) and in Figs.4, 5 ($`Q^2`$-dependence for different values of Bjorken $`x`$ = 0.00127, 0.0125, 0.05, 0.10, 0.18, 0.35, 0.55, 0.75). Shown are experimental data which differ from the nominal $`Q^2`$ or Bjorken $`x`$ specified in Figs.3, 4, 5 by less than $`\pm `$ 3 %. An excellent fit is obtained for $`Q^2>`$ 4 GeV<sup>2</sup> (not shown in Fig.3), although the VDM contribution stays large up to 10 GeV<sup>2</sup>. In comparison to the GRV parametrization (dashed line) our model describes much better the region of small $`Q^2<`$ 3 GeV<sup>2</sup>, especially at intermediate Bjorken $`x`$: 0.05 $`<x<`$ 0.4. The CKMT model (long-dashed line), shown according to the philosophy in for $`Q^2<`$ 10 GeV<sup>2</sup>, gives a better fit at very small Bjorken $`x`$. However, one may expect here a few more effects which will be discussed below. <sup>3</sup><sup>3</sup>3The use of the next-to-leading order structure functions in our model would improve the description of low-$`x`$ data, discussion of which is left for a separate analysis. It is however slightly worse as far as isovector quantities are considered, as will be discussed later. There seems to be a systematically small (up to about 5 %) discrepancy between our model and the data for $`Q^2<`$ 2 GeV<sup>2</sup> and $`x`$ = 0.1 - 0.3. This is caused by some higher-twist effects due to the production of the $`\pi N`$ and $`\pi \mathrm{\Delta }`$ exclusive channels and will be discussed elsewhere. A fit of similar quality is obtained in our model for the proton (left panels) and deuteron (right panels) structure functions. Rather good agreement of our model with the BCDMS data can be observed in Figs.4 and 5, in spite of the fact that the data were not used in the fitting procedure. At very small $`x<`$ 0.01 the description of the data becomes worse. This is partially due to the use of the leading order approximation. The fit to the fixed-target data prefers $`\overline{x}\approx x`$ and $`\overline{Q}^2\approx Q^2`$ (see Table 1 and the discussion therein). On the other hand, the HERA data would prefer $`Q_1^2\ne 0`$ and $`Q_2^2\ne 0`$. If we included the HERA data in the fit the description of the fixed target data would become worse. There are no fundamental reasons for the parameters in both regions to be identical. In addition at very small $`x`$ other effects of isoscalar character, not included here, such as heavy long-lived fluctuations of the incoming photon and/or BFKL pomeron effects, may become important. For illustration the VDM contribution modified by a form factor (5) is shown separately by the short-dashed line. The partonic component can be obtained as a difference between the solid and VDM line. It can be seen from Figs.3-5 that the partonic component decreases towards small $`Q^2`$. This decrease is faster than one could directly infer from the failure of the GRV parametrization at low $`Q^2`$ because in our model a part of the strength resides in the VDM contribution. The modified VDM contribution is sizeable for small values of Bjorken $`x`$ and not too large $`Q^2`$ and survives up to relatively large $`Q^2`$.
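For orientation, the absolute size of the modified VDM term can be reproduced from Eqs. (3)-(5) and (9) alone. The following Python sketch is our illustration, not code from the paper; it assumes the Gaussian form factor with the fitted $`\lambda _G\approx `$ 0.5 GeV quoted above and converts the cross sections with 1 mb $`\approx `$ 2.568 GeV<sup>-2</sup>:

```python
import numpy as np

m_N = 0.938                       # nucleon mass, GeV
mb = 2.568                        # 1 mb in GeV^-2
mesons = {"rho":   (0.770, 2.54),   # (M_V in GeV, gamma_V^2 / 4 pi)
          "omega": (0.782, 20.5),
          "phi":   (1.019, 11.7)}
lam_G = 0.5                       # GeV, fitted Gaussian cut-off

def sigma_VN(s, name):            # Eq. (9), total VN cross section in mb
    if name in ("rho", "omega"):
        return 13.63 * s**0.0808 + 31.79 * s**(-0.4525)
    return 10.01 * s**0.0808 + 2.72 * s**(-0.4525)

def F2_VDM(x, Q2):                # Eq. (4) with Eqs. (3) and (5)
    W2 = Q2 * (1.0 - x) / x + m_N**2          # hadronic invariant mass squared
    total = 0.0
    for name, (M, g2_over_4pi) in mesons.items():
        dE = (M**2 + Q2) / Q2 * m_N * x       # Eq. (3)
        Omega = np.exp(-(dE / lam_G)**2)      # Gaussian form factor, Eq. (5)
        g2 = 4.0 * np.pi * g2_over_4pi
        total += M**4 * sigma_VN(W2, name) * mb / (g2 * (Q2 + M**2)**2) * Omega
    return Q2 / np.pi * total

print(F2_VDM(0.01, 1.0))  # ~0.12: sizeable at small x, moderate Q^2
print(F2_VDM(0.45, 1.0))  # strongly suppressed by the form factor at large x
```

The numbers confirm the qualitative statement above: the modified VDM term is a substantial fraction of $`F_2`$ at small $`x`$ and moderate $`Q^2`$, and dies out at large $`x`$ where the fluctuation lifetime is short.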
At $`Q^2>`$ 3.5 GeV<sup>2</sup> the structure functions in our model almost coincide with those in the GRV parametrization, despite the fact that the VDM term is still not small. For $`Q^2\to \mathrm{\infty }`$ only the partonic contribution survives and $`F_2(x,Q^2)\to F_2^{part}(x,Q^2)\approx F_2^{GRV}(x,Q^2)`$. The deviations from the partonic model can be also studied in the language of higher-twist corrections. Then the structure function can be written formally as $$F_2^{p/n}(x,Q^2)=F_2^{p/n,LT}(x,Q^2)\left[1+\frac{c_2^{p/n}(x)}{Q^2}+\frac{c_4^{p/n}(x)}{Q^4}+\mathrm{\dots }\right].$$ (11) However, in empirical analyses one usually includes only one term in (11) $$F_2^{p/n}(x,Q^2)=F_2^{p/n,LT}(x,Q^2)\left[1+\frac{c^{p/n}(x)}{Q^2}\right].$$ (12) In our model for $`Q^2\sim M_V^2,Q_0^2`$ there are an infinite number of active terms in the expansion of the structure function (11). Therefore the coefficient $`c^{p/n}`$ (the same is true for the deuteron counterpart $`c^d`$) in (12) becomes effectively $`Q^2`$-dependent: $`c^{p/n}(x)=c^{p/n}(x,Q^2)`$. As an example in Fig.6 we show $`c^p`$ and $`c^d`$ as a function of Bjorken $`x`$ for three different values of $`Q^2`$ = 2, 4, 8 GeV<sup>2</sup> in the range relevant for the analysis in . In order to correctly compare our results for $`c^p`$ and $`c^d`$ with the results of the analysis in , the structure function $`F_2^{p/n,LT}(x,Q^2)`$ in Eq.(12) will include complete target mass corrections calculated according to Ref. A fairly similar pattern is obtained for $`c^p`$ and $`c^d`$, especially at small $`x`$. The rise of $`c^p`$ or $`c^d`$ for $`x\to 1`$ is caused by our treatment of the target mass corrections and partially by the VDM contribution which survives in our model up to relatively large $`x`$. We obtain small negative $`c^p`$ and $`c^d`$ for $`x<`$ 0.3 in agreement with . The smallness of $`c^p`$ and $`c^d`$ in our model for $`x<`$ 0.3 is due to the cancellation of the positive VDM contribution and a negative contribution caused by the external damping factor $`\frac{Q^2}{Q^2+Q_0^2}`$ of the partonic contribution in Eq.(6). The CKMT parametrization (shown only in its applicability range for $`Q^2`$ = 2, 4 GeV<sup>2</sup>) provides a very good explanation of $`c^p`$. It predicts, however, a somewhat larger negative $`c^d`$ for $`x<`$ 0.3. This will have some unwanted consequences for $`c^p-c^n`$ discussed below. In Fig.7 we show $`c^p-c^n`$ for $`Q^2`$ = 2, 4 GeV<sup>2</sup> together with empirical results from . A rather strong $`Q^2`$-dependence of $`c^p-c^n`$ is observed. Our model correctly describes the trend of the experimental data. For comparison we show also the result obtained by means of the CKMT parametrization (long-dashed lines) of the structure functions, which somewhat fails to reproduce the details of the empirical results from , especially for small Bjorken $`x`$. To our best knowledge no other model in the literature is able to describe quantitatively this subtle higher-twist effect. Our model seems to provide a very good description of some isovector quantities. As an example in Fig.8 we present $`F_2^p(x,Q^2)-F_2^n(x,Q^2)`$ at $`Q^2`$ = 4 GeV<sup>2</sup> obtained in our model (solid lines for different form factors), as well as the results obtained with the GRV parametrization (dashed line) and in the CKMT model (long-dashed line).<sup>4</sup><sup>4</sup>4No evolution of the CKMT quark distributions was done, but it is negligible between 2 and 4 GeV<sup>2</sup> for the nonsinglet quantity. The NMC data rather prefer our model.
As a consequence of the imperfect description of the deuteron data the CKMT model fails to describe the difference $`F_2^p(x)-F_2^n(x)`$ for $`x<`$ 0.3. The success of our model is related to the violation of the Gottfried Sum Rule and/or the $`\overline{d}-\overline{u}`$ asymmetry which is included in our model explicitly. In contrast to our model, in the CKMT model for $`Q^2>`$ 2 GeV<sup>2</sup> the Gottfried Sum Rule $`S_G=\frac{1}{3}`$. ## 4 Conclusions and discussion We have constructed a simple model incorporating the nonperturbative structure of the nucleon and photon. Our model is a generalization of the well known and successful Badełek-Kwieciński model. While the original Badełek-Kwieciński model is by construction limited to the small-$`x`$ region, our model is intended to be valid in a much broader range. The original VDM model assumes implicitly a large coherence length for the photon-hadron fluctuation, i.e. assumes that the hadronic fluctuation is formed far upstream of the target. When the fluctuation length becomes small the VDM is expected to break down. This effect has been modelled by introducing an extra form factor. As a result we have succeeded in constructing a physically motivated parametrization of both proton and deuteron structure functions. In comparison to the pure partonic models with QCD evolution our model leads to a much better agreement at low $`Q^2`$ in a broad range of $`x`$. With only two free parameters we have managed to describe well the transition from the high- to low-$`Q^2`$ region simultaneously for the proton and deuteron structure functions. Our analysis of the experimental data indicates that the QCD parton model begins to fail already at $`Q^2`$ as high as about 3 GeV<sup>2</sup>. This value is larger than commonly believed. In our discussion we have omitted the region of the HERA data. In our opinion the physics there may be slightly more complicated. Other effects of isoscalar character, not included in our analysis, for example the BFKL pomeron effects, may become important. In contrast to other models in the literature we obtain a very good description of the NMC $`F_2^p-F_2^n`$ data where the previously mentioned isoscalar effects cancel. Recently an intriguing, although small, difference between the $`\overline{d}-\overline{u}`$ asymmetry obtained from recent E866 Drell-Yan data and muon DIS NMC data has been observed. At least part of the difference can be understood in our model. We expect for $`Q^2`$ smaller than about 4 GeV<sup>2</sup> an extra $`Q^2`$ dependence of some parton model sum rules. We predict a rather strong $`Q^2`$ effect for the integrand of the Gottfried Sum Rule where in the first approximation the VDM contribution cancels. Recently in the literature there has been sizeable activity towards a better understanding of higher-twist effects. Some were estimated within the operator product expansion, some in terms of the QCD sum rules. It is, however, rather difficult to predict the absolute normalization of the higher-twist effects. Our model leads to relatively large higher-twist contributions. For some observables, like structure functions, they almost cancel. For other observables, like $`F_2^p-F_2^n`$, the cancellation is not so effective. Our model provides specific higher-twist effects not discussed to date in the literature. This will be a subject of a future separate analysis. Acknowledgments: We are especially indebted to J. Kwieciński for valuable discussions and suggestions and to J.
Outhwaite for careful reading of the manuscript. We would also like to thank C. Merino for the discussion of the details of the CKMT model. This work was supported partly by the German-Polish exchange program, grant No. POL-81-97.
# DETERMINATION OF THE WIGNER FUNCTION FROM PHOTON STATISTICS ## Abstract We present an experimental realisation of the direct scheme for measuring the Wigner function of a single quantized light mode. In this method, the Wigner function is determined as the expectation value of the photon number parity operator for the phase space displaced quantum state. submitted to acta physica slovaca (a) Wydział Fizyki, Uniwersytet Warszawski, Hoża 69, PL–00–681 Warszawa, Poland (b) Abteilung für Quantenphysik, Universität Ulm, D-89081 Ulm, Germany (c) Center for Laser and Photonics Research, Oklahoma State University, Stillwater, OK 74078, USA Submitted 26 April 1999 The Wigner function provides a complete representation of the quantum state in a form analogous to a classical phase space probability distribution. An interesting and nontrivial problem is how to relate the Wigner function to directly measurable quantities so that it can be determined from data collected in a feasible experimental scheme. The first answer was given by Vogel and Risken who noted that marginal distributions of the Wigner function can be measured with a balanced homodyne detector, and that the inverse transformation is possible by tomographic back-projection. This idea has been realized in a beautiful experiment by Smithey and co-workers, and it has quickly become a useful tool in studying quantum statistical properties of optical radiation. In this contribution we briefly review our recent experimental realization of the direct scheme for measuring the Wigner function of a light mode. This method, based on photon counting, provides the complete Wigner representation of the quantum state without using any numerical reconstruction algorithms. The basic idea underlying our experimental scheme is that the Wigner function $`W(\alpha )`$ at a given phase space point $`\alpha `$ is itself a well defined quantum observable. This observable can be represented as the expectation value of the displaced photon number parity operator: $$W(\alpha )=\frac{2}{\pi }\left\langle \widehat{D}(\alpha )\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}(-1)^n|n\rangle \langle n|\widehat{D}^{\mathrm{\dagger }}(\alpha )\right\rangle .$$ (1) Here $`\widehat{D}(\alpha )=\mathrm{exp}(\alpha \widehat{a}^{\mathrm{\dagger }}-\alpha ^{\ast }\widehat{a})`$ denotes the displacement operator, and $`|n\rangle \langle n|`$ are projections on Fock states. The two elements of the above expression: the displacement transformation and the projections on Fock states have a straightforward optical realisation. The displacement transformation can be implemented by interfering the signal field at a nearly fully transmitting beam splitter with a probe coherent field, and projections on Fock states are given by the photon statistics. Our experimental scheme, presented in Fig. 1, is constructed as an unbalanced Mach-Zehnder interferometer with the beams in the two arms serving as the signal and the probe fields. The source of light is an attenuated beam from a frequency-stabilized single-mode He:Ne laser. The quantum state is prepared as a weak coherent state using the neutral density filter ND. Additionally, the mirror mounted on a piezoelectric translator PZT can be used to generate a statistical mixture of coherent states with fluctuating phase. The displacement transformation $`\widehat{D}(\alpha )`$ is realized using the phase modulator EOM2 and the high-transmission beam splitter BS2 whose second input port is fed with a coherent probe beam.
The modulator EOM2 performs rotation in the phase space, while the probe beam effectively shifts the phase space in a fixed direction. The value of the shift is proportional to the probe beam amplitude, which is controlled in the setup using the Pockels cell EOM1 placed between the half-wave plate and a polarizer. Thus, the point of the phase space at which the Wigner function is measured is defined in our experiment by voltages applied to the modulators EOM1 and EOM2. The photon statistics of the displaced signal field is measured using an avalanche photodiode operated in the single photon counting mode. The count rate is adjusted to a level such that a negligible fraction of photons is missed due to the detector dead time. The experiment is controlled by a computer, which collects photon statistics on a polar grid in the phase space. In Fig. 2 we present experimental results obtained for the vacuum state, a coherent state, and a phase diffused coherent state. The scan for the vacuum state has been performed with the blocked signal path, and the phase diffused coherent state has been generated by applying a 400 Hz sine waveform to the piezoelectric translator. At each point of the grid, the photon statistics has been collected from 8000 counting intervals. The graphs are parameterized with the complex variable $`\beta =n_{\text{vac}}^{1/2}e^{i\varphi }`$, where $`n_{\text{vac}}`$ is the average number of photons registered for the blocked signal path, and $`\varphi `$ is the phase shift generated by the modulator EOM2. The photon statistics $`p_n(\beta )`$ collected at a point $`\beta `$ is processed to yield $$𝒫(\beta )=\frac{2}{\pi }\underset{n}{\sum }(-1)^np_n(\beta ).$$ (2) In the ideal, loss-free limit this quantity is equal to the Wigner function of the signal field. In a realistic case, $`𝒫(\beta )`$ can be related to a generalized $`s`$-ordered quasidistribution function $`W(\alpha ;s)`$: $$𝒫(\beta )=\frac{1}{\eta T}W\left(\frac{\beta }{\sqrt{\eta T}};-\frac{1-\eta T}{\eta T}\right),$$ (3) where $`\eta `$ is the quantum efficiency of the photodetector and $`T`$ is the power transmission of the beam splitter performing the phase space displacement. The right-hand side of Eq. (3) can be interpreted as the Wigner function of the signal field that has passed through a dissipation process with the losses characterized by the overall efficiency $`\eta T`$. For our setup, the efficiency of the photon counting module specified by the manufacturer is $`\eta =70\%`$, and the power transmission of the beam splitter BS2 is $`T=98.6\%`$. This gives the value of the ordering parameter equal to $`s=1-1/(\eta T)=-0.45`$. Further details and discussion of various aspects of the experiment can be found in Ref. An important factor that should be taken into account in the analysis of experimental data is the mode-mismatch between the fields interfering at the beam splitter BS2. The effect of the mode-mismatch is quite different from balanced homodyne detection, where it can be included in the overall efficiency parameter. In the photon counting scheme, the mode-mismatch generates a slowly decaying gaussian envelope centered at the phase space origin, which multiplies the Wigner function of the signal field. Acknowledgements This research is supported by the KBN grant 2P03B 002 14. K.W. thanks the Alexander von Humboldt Foundation for generous support and Prof. W. P. Schleich for his hospitality in Ulm.
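The parity formula can be checked end-to-end in a toy simulation: for a coherent signal state and an ideal, loss-free displacement ($`\eta T=1`$), the photon counts at each $`\beta `$ are Poissonian, and the weighted sum of Eq. (2) should converge to the coherent-state Wigner function $`(2/\pi )\mathrm{exp}(-2|\beta -\alpha _0|^2)`$. A minimal Python sketch, ours rather than the experiment's analysis code, with an arbitrary illustrative amplitude $`\alpha _0`$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha0 = 0.8 + 0.4j   # coherent signal amplitude (illustrative value)
N = 8000              # counting intervals per phase-space point, as in the text

def W_estimate(beta):
    # ideal displaced coherent state: photon number is Poisson, mean |alpha0-beta|^2
    n = rng.poisson(abs(alpha0 - beta)**2, size=N)
    return (2.0 / np.pi) * np.mean((-1.0)**n)   # Eq. (2)

def W_exact(beta):
    return (2.0 / np.pi) * np.exp(-2.0 * abs(beta - alpha0)**2)

for beta in (0.0, 0.5 + 0.5j, 1.0j):
    print(f"beta={beta}: estimate={W_estimate(beta):+.3f}  exact={W_exact(beta):+.3f}")
```

With 8000 counting intervals the statistical scatter of the estimate is already at the level of a few times 0.01, consistent with the quality of the experimental scans.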
# On the structure of multiple translational tilings by polygonal regions Mihail N. Kolountzakis<sup>1</sup><sup>1</sup>1Partially supported by the U.S. National Science Foundation, under grant DMS 97-05775. Department of Mathematics, University of Crete, Knossos Ave., 714 09 Iraklio, Greece. E-mail: kolount@math.uch.gr February 1999 ## Abstract We consider polygons with the following “pairing property”: for each edge of the polygon there is precisely one other edge parallel to it. We study the problem of when such a polygon $`K`$ tiles the plane multiply when translated at the locations $`\mathrm{\Lambda }`$, where $`\mathrm{\Lambda }`$ is a multiset in the plane. The pairing property of $`K`$ makes this question particularly amenable to Fourier Analysis. After establishing a necessary and sufficient condition for $`K`$ to tile with a given lattice $`\mathrm{\Lambda }`$ (which was first found by Bolle for the case of convex polygons–notice that all convex polygons that tile necessarily have the pairing property and, therefore, our theorems apply to them) we move on to prove that a large class of such polygons tiles only quasi-periodically, which for us means that $`\mathrm{\Lambda }`$ must be a finite union of translated $`2`$-dimensional lattices in the plane. For the particular case of convex polygons we show that all convex polygons which are not parallelograms tile necessarily quasi-periodically, if at all. §0. Introduction In this paper we study multiple tilings of the plane by translates of a polygonal region of a certain type, the polygons with the pairing property of Definition 1 below. Definition 1 (Tiling) Let $`K`$ be a measurable subset of $`\mathbb{R}^2`$ of finite measure and let $`\mathrm{\Lambda }\subset \mathbb{R}^2`$ be a discrete multiset (i.e., its underlying set is discrete and each point has finite multiplicity). We say that $`K+\mathrm{\Lambda }`$ is a (translational, multiple) tiling of $`\mathbb{R}^2`$, if $$\underset{\lambda \in \mathrm{\Lambda }}{\sum }\mathrm{𝟏}_K(x-\lambda )=w,$$ for almost all (Lebesgue) $`x\in \mathbb{R}^2`$, where the weight or level $`w`$ is a positive integer and $`\mathrm{𝟏}_K`$ is the indicator function of $`K`$. Definition 2 (Polygons with the Pairing Property) A polygon $`K`$ has the pairing property if for each edge $`e`$ there is precisely one other edge of $`K`$ parallel to $`e`$. Remarks. 1. Note that all symmetric convex polygons have the pairing property and it is not hard to see that all convex polygons that tile by translation are necessarily symmetric. 2. The polygonal regions we deal with are not assumed to be connected. Using Fourier Analysis we study the following two problems: (a) characterize the polygons that tile multiply with a lattice, and (b) determine which polygons tile necessarily in a “quasi-periodic” manner, if they tile at all. We restrict our attention to polygons with the pairing property. Definition 3 (Quasi-periodic multisets) A multiset $`\mathrm{\Lambda }\subset \mathbb{R}^d`$ is called quasi-periodic if it is the union of finitely many $`d`$-dimensional lattices (see Definition 2) in $`\mathbb{R}^d`$. In §1 we describe the general approach to translational tiling using the Fourier Transform of the indicator function of the tile and in particular its zero-set. This zero-set for polygons with the pairing property is calculated explicitly. In §2 we give a necessary and sufficient condition (Theorem 2) for a polygon $`K`$ with the pairing property to tile multiply with a lattice $`\mathrm{\Lambda }`$.
This has been proved previously by Bolle for the more special case of convex polygons (although his method might apply for the case of pairing polygons as well) who used a combinatorial method. Our approach is based on the calculation of §1. In §3 we find a very large class of polygons with the pairing property that tile only in a quasi-periodic manner. In particular we show that every convex polygon that is not a parallelogram can tile (multiply) only in a quasi-periodic way. Notation. 1. The Fourier Transform of a function $`f\in L^1(\mathbb{R}^d)`$ is normalized as follows: $$\widehat{f}(\xi )=\int _{\mathbb{R}^d}e^{-2\pi i\langle \xi ,x\rangle }f(x)dx.$$ It is extended to tempered distributions by duality. 2. The action of a tempered distribution $`\alpha `$ on a function $`\varphi `$ of Schwarz class is denoted by $`\alpha (\varphi )`$. The Fourier Transform $`\widehat{\alpha }`$ of $`\alpha `$ is defined by $$\widehat{\alpha }(\varphi )=\alpha (\widehat{\varphi }).$$ A tempered distribution $`\alpha `$ is supported on a closed set $`K`$ if for each smooth function $`\varphi `$ with $`\mathrm{supp}\varphi \subset K^c`$ we have $`\alpha (\varphi )=0`$. The intersection of all such closed sets $`K`$ is called the support of $`\alpha `$ and denoted by $`\mathrm{supp}\alpha `$. §1. The Fourier Analytic approach. 1.1 General It is easy to see that if a polygon $`K`$ with the pairing property tiles multiply then for each (relevant) direction the two edges parallel to it necessarily have the same length. For this, suppose that $`u`$ is a direction and that $`e_1`$ and $`e_2`$ are the two edges parallel to it. Let then $`\mu _u`$ be the measure which is equal to arc-length on $`e_1`$ and negative arc-length on $`e_2`$. Suppose also that $`K+\mathrm{\Lambda }`$ is a multiple tiling of $`\mathbb{R}^2`$. It follows then that $$\underset{\lambda \in \mathrm{\Lambda }}{\sum }\mu _u(x-\lambda )$$ is the zero measure in $`\mathbb{R}^2`$. This is so because each copy of edge $`e_1`$ in the tiling has to be countered by some copies of edge $`e_2`$. Hence the total mass of $`\mu _u`$ is $`0`$ and $`e_1`$ and $`e_2`$ have the same length. We can then write (here $`e_1`$ and $`e_2`$ are viewed as point-sets in $`\mathbb{R}^2`$ and $`\tau `$ as a vector) $$e_2=e_1+\tau ,$$ for some $`\tau \in \mathbb{R}^2`$. By the previous discussion, a polygon $`K`$ with the pairing property tiles multiply with a multiset $`\mathrm{\Lambda }`$ if and only if for each pair $`e`$ and $`e+\tau `$ of parallel edges of $`K`$ $$\underset{\lambda \in \mathrm{\Lambda }}{\sum }\mu _e(x-\lambda )=0,$$ (1) where $`\mu _e`$ is the measure in $`\mathbb{R}^2`$ that is arc-length on $`e`$ and negative arc-length on $`e+\tau `$. Write $$\delta _\mathrm{\Lambda }=\underset{\lambda \in \mathrm{\Lambda }}{\sum }\delta _\lambda ,$$ where $`\delta _a`$ is a unit point mass at $`a`$. Thus $`\delta _\mathrm{\Lambda }`$ is locally a measure but is globally unbounded when $`\mathrm{\Lambda }`$ is infinite. However, whenever $`K+\mathrm{\Lambda }`$ is a multiple tiling, it is obvious that $`\mathrm{\Lambda }`$ cannot have more than $`cR^2`$ points in any disc of radius $`R`$, $`R>1`$ ($`c`$ depends on $`K`$ and the weight of the tiling). This implies that $`\delta _\mathrm{\Lambda }`$ is a tempered distribution and we can take its Fourier Transform, denoted by $`\widehat{\delta _\mathrm{\Lambda }}`$.
Condition (1) then becomes $$\widehat{\mu _e}\widehat{\delta _\mathrm{\Lambda }}=0.$$ (2) When $`\mathrm{\Lambda }`$ is a lattice $`\mathrm{\Lambda }=A\mathbb{Z}^2`$, where $`A`$ is a $`2\times 2`$ invertible matrix, its dual lattice $`\mathrm{\Lambda }^{\ast }`$ is defined by $$\mathrm{\Lambda }^{\ast }=\{x\in \mathbb{R}^2:\langle x,\lambda \rangle \in \mathbb{Z},\text{ for all }\lambda \in \mathrm{\Lambda }\},$$ and we have $`\mathrm{\Lambda }^{\ast }=(A^{\mathrm{\top }})^{-1}\mathbb{Z}^2`$. The Poisson Summation Formula then takes the form $$\widehat{\delta _\mathrm{\Lambda }}=(\mathrm{det}\mathrm{\Lambda })^{-1}\delta _{\mathrm{\Lambda }^{\ast }},$$ (3) where $`\mathrm{det}\mathrm{\Lambda }=|\mathrm{det}A|`$ is the volume of a fundamental domain of $`\mathrm{\Lambda }`$. Since $`\widehat{\mu _e}`$ is a continuous function we have in this case, and whenever $`\widehat{\delta _\mathrm{\Lambda }}`$ is locally a measure, that condition (2) is equivalent to $$\mathrm{supp}\widehat{\delta _\mathrm{\Lambda }}\subset Z(\widehat{\mu _e}),$$ (4) where for every continuous function $`f`$ we write $`Z(f)`$ for the set where it vanishes. When $`\mathrm{\Lambda }`$ is a lattice (2) is equivalent to $$\widehat{\mu _e}(x)=0,\text{ for all }x\in \mathrm{\Lambda }^{\ast }.$$ So, to check if a given polygon $`K`$ with the pairing property tiles multiply $`\mathbb{R}^2`$ with the lattice $`\mathrm{\Lambda }`$, one has to check that $`\widehat{\mu _e}`$ vanishes on $`\mathrm{\Lambda }^{\ast }`$ for every edge $`e`$ of $`K`$. 1.2 The shape of the zero-set Here we study the zero-set of the Fourier Transform of the measure $`\mu _e`$ of §1 and determine its structure. We first calculate the Fourier Transform of $`\mu _e`$ in the particular case when $`e`$ is parallel to the $`x`$-axis, for simplicity. Let $`\mu \in M(\mathbb{R}^2)`$ be the measure defined by duality by $$\mu (\varphi )=\int _{-1/2}^{1/2}\varphi (x,0)dx,\varphi \in C(\mathbb{R}^2).$$ That is, $`\mu `$ is arc-length on the line segment joining the points $`(-1/2,0)`$ and $`(1/2,0)`$. Calculation gives $$\widehat{\mu }(\xi ,\eta )=\frac{\mathrm{sin}\pi \xi }{\pi \xi }.$$ Notice that $`\widehat{\mu }(\xi ,\eta )=0`$ is equivalent to $`\xi \in \mathbb{Z}\setminus \{0\}`$. If $`\mu _L`$ is the arc-length measure on the line segment joining $`(-L/2,0)`$ and $`(L/2,0)`$ we have $$\widehat{\mu _L}(\xi ,\eta )=\frac{\mathrm{sin}\pi L\xi }{\pi \xi }$$ and $$Z(\widehat{\mu _L})=\{(\xi ,\eta ):\xi \in L^{-1}(\mathbb{Z}\setminus \{0\})\}.$$ Write $`\tau =(a,b)`$ and let $`\mu _{L,\tau }`$ be the measure which is arc-length on the segment joining $`(-L/2,0)`$ and $`(L/2,0)`$ translated by $`\tau /2`$ and negative arc-length on the same segment translated by $`-\tau /2`$. That is, we have $$\mu _{L,\tau }=\mu _L\ast (\delta _{\tau /2}-\delta _{-\tau /2}),$$ and, taking Fourier Transforms, we get $$\widehat{\mu _{L,\tau }}(\xi ,\eta )=-2i\frac{\mathrm{sin}\pi L\xi }{\pi \xi }\mathrm{sin}\pi (a\xi +b\eta ).$$ Define $`u=\frac{\tau }{\left|\tau \right|^2}`$ and $`v=(1/L,0)`$. It follows that $$Z(\widehat{\mu _{L,\tau }})=(\mathbb{Z}u+u^{\mathrm{\perp }})\cup ((\mathbb{Z}\setminus \{0\})v+v^{\mathrm{\perp }}).$$ This is a set of straight lines of direction $`u^{\mathrm{\perp }}`$ spaced by $`\left|u\right|`$ and containing $`0`$ plus a similar set of lines of direction $`v^{\mathrm{\perp }}`$, spaced by $`\left|v\right|`$ and containing zero. However in the latter set of parallel lines the straight line through $`0`$ has been removed. We state this as a theorem for later use, formulated in a coordinate-free way. Definition 4 (Geometric inverse of a vector) The geometric inverse of a non-zero vector $`u\in \mathbb{R}^d`$ is the vector $$u^{\ast }=\frac{u}{\left|u\right|^2}.$$ ###### Theorem 1 Let $`e`$ and $`e+\tau `$ be two parallel line segments (translated by $`\tau `$, of magnitude and direction described by $`e`$, symmetric with respect to $`0`$).
We state this as a theorem for later use, formulated in a coordinate-free way.

Definition 4 (Geometric inverse of a vector) The geometric inverse of a non-zero vector $`u\in \mathbb{R}^d`$ is the vector

$$u^{*}=\frac{u}{\left|u\right|^2}.$$

###### Theorem 1

Let $`e`$ and $`e+\tau `$ be two parallel line segments, of magnitude and direction described by the vector $`e`$, the pair being translated by $`\tau `$ and symmetric with respect to $`0`$. Let also $`\mu _{e,\tau }`$ be the measure which charges $`e`$ with its arc-length and $`e+\tau `$ with negative its arc-length. Then

$$Z(\widehat{\mu _{e,\tau }})=(\mathbb{Z}\tau ^{*}+\tau ^{\perp })\cup \left((\mathbb{Z}\setminus \{0\})e^{*}+e^{\perp }\right).$$ (5)

§2. When does a polygon tile with a certain lattice?

The following theorem has been proved by Bolle \[Bo94\] who used combinatorial methods.

Theorem (Bolle) A convex polygon $`K`$, which is centrally symmetric about $`0`$, tiles multiply with the lattice $`\Lambda `$ (for some weight $`w`$) if and only if for each edge $`e`$ of $`K`$ the following two conditions are satisfied.

* (i) In the relative interior of $`e`$ there is a point of $`\frac{1}{2}\Lambda `$, and
* (ii) if the midpoint of $`e`$ is not in $`\frac{1}{2}\Lambda `$ then the vector $`e`$ is in $`\Lambda `$.

Remark. Notice that Bolle’s theorem implies that all centrally symmetric convex polygons with vertices in $`\Lambda `$ tile multiply with $`\Lambda `$ at some level.

We prove the following, which is easily seen to be a generalization of Bolle’s Theorem to polygons with the pairing property.

###### Theorem 2

If the polygon $`K`$ has the pairing property and $`\Lambda `$ is a lattice in $`\mathbb{R}^2`$ then $`K+\Lambda `$ is a multiple tiling of $`\mathbb{R}^2`$ if and only if for each pair of edges $`e`$ and $`e+\tau `$ of $`K`$

* (i) $`\tau \in \Lambda `$, or
* (ii) $`e\in \Lambda `$ and $`\tau +\theta e\in \Lambda `$, for some $`0<\theta <1`$.

Proof of Theorem 2. Once again we simplify matters and take the edge $`e`$ to be parallel to the $`x`$-axis, and follow the notation of §1. For an arbitrary non-zero vector $`w\in \mathbb{R}^2`$ define the group

$$G(w)=\mathbb{Z}w+w^{\perp },$$

which is a set of straight lines in $`\mathbb{R}^2`$ of direction $`w^{\perp }`$ spaced regularly at distance $`\left|w\right|`$. It follows that

$$Z(\widehat{\mu _{L,\tau }})\subseteq G(u)\cup G(v).$$

From (4) and Theorem 1 it follows that $`\Lambda ^{*}\subseteq Z(\widehat{\mu _{L,\tau }})\subseteq G(u)\cup G(v)`$, which implies that $`\Lambda ^{*}\subseteq G(u)`$ or $`\Lambda ^{*}\subseteq G(v)`$. This is a consequence of the following.

Observation 1 If $`G,H,K`$ are groups and $`G\subseteq H\cup K`$ then $`G\subseteq H`$ or $`G\subseteq K`$.

For, if $`a\in G\setminus K`$ and $`b\in G\setminus H`$, then $`a\in H`$ and $`b\in K`$; the product $`ab`$ lies in $`H`$ or in $`K`$, and if $`ab\in H`$ then $`b=a^{-1}(ab)\in H`$, while if $`ab\in K`$ then $`a=(ab)b^{-1}\in K`$, a contradiction in either case.

So we have the two alternatives

1. $`\Lambda ^{*}\subseteq G(u)`$,
2. $`\Lambda ^{*}\subseteq G(v)`$.

However, since not all of $`G(v)`$ is in $`Z(\widehat{\mu _{L,\tau }})`$ (the line $`v^{\perp }`$ has been removed from the second family), if alternative 2 holds and alternative 1 does not, it follows that

$$\Lambda ^{*}\subseteq \mathrm{span}_{\mathbb{Z}}\{v,w\},$$ (6)

where $`w`$ is the shortest non-zero vector parallel to $`v^{\perp }`$ which is in $`G(u)`$, i.e.,

$$w=(0,1/b)$$

(indeed, any point of $`\Lambda ^{*}`$ on the line $`v^{\perp }`$ must then belong to $`G(u)`$, that is, to $`\mathbb{Z}w`$). We have that (6) is equivalent to

$$\Lambda \supseteq \left(\mathrm{span}_{\mathbb{Z}}\{v,w\}\right)^{*}=\mathbb{Z}(L,0)+\mathbb{Z}(0,b),$$

which is in turn equivalent to

$$(L,0)\in \Lambda \quad \text{and}\quad (0,b)\in \Lambda .$$

Notice also that

$$\Lambda ^{*}\subseteq G(u)\iff \Lambda \supseteq G(u)^{*}=\mathbb{Z}\frac{u}{\left|u\right|^2}\iff \tau =\frac{u}{\left|u\right|^2}\in \Lambda .$$

We have therefore proved the following lemma.

###### Lemma 1

If $`\Lambda `$ is a lattice, $`u=\frac{(a,b)}{a^2+b^2}`$ and $`v=(L^{-1},0)`$, then

$$\Lambda ^{*}\subseteq (\mathbb{Z}u+u^{\perp })\cup \left((\mathbb{Z}\setminus \{0\})v+v^{\perp }\right)$$

if and only if

1. $`(a,b)\in \Lambda `$, or
2. $`(L,0)\in \Lambda `$ and $`(0,b)\in \Lambda `$.
Allowing for a general linear transformation, let $`\tau ,e\in \mathbb{R}^2`$, and let $`\mu _{e,\tau }`$ be the measure that “charges” with its arc-length the line segment $`e`$ translated so that its midpoint is at $`\tau /2`$, and charges with negative its arc-length the line segment $`e`$ with its midpoint at $`-\tau /2`$. We have proved the following:

$$\Lambda ^{*}\subseteq Z(\widehat{\mu _{e,\tau }})\iff \left\{\begin{array}{l}\tau \in \Lambda ,\ \text{or}\\ e\in \Lambda \ \text{and}\ \tau +\theta e\in \Lambda ,\ \text{for some }0<\theta <1.\end{array}\right.$$ (7)

This completes the proof of Theorem 2. $`\square `$
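(Illustration. Theorem 2 is effectively an algorithm: given the edge pairs $`(e,\tau )`$ of $`K`$ and a basis of $`\Lambda `$, conditions (i)–(ii) can be checked mechanically. The sketch below is our own; the grid search over $`\theta `$ is only adequate for rational data, and the polygon and lattices are hypothetical examples.)

```python
import numpy as np

def in_lattice(x, B, tol=1e-9):
    """Is the vector x in the lattice spanned by the columns of B?"""
    c = np.linalg.solve(B, np.asarray(x, float))
    return bool(np.allclose(c, np.round(c), atol=tol))

def tiles_with_lattice(edge_pairs, B, n_theta=10**4):
    """Theorem 2: for each pair (e, tau) require (i) tau in Lambda, or
    (ii) e in Lambda and tau + theta*e in Lambda for some 0 < theta < 1."""
    for e, tau in edge_pairs:
        e, tau = np.asarray(e, float), np.asarray(tau, float)
        if in_lattice(tau, B):                                  # condition (i)
            continue
        if in_lattice(e, B) and any(
                in_lattice(tau + (k / n_theta) * e, B)
                for k in range(1, n_theta)):                    # condition (ii)
            continue
        return False
    return True

square = [((1, 0), (0, 1)), ((0, 1), (1, 0))]   # unit square: pairs (e_i, tau_i)
Z2 = np.eye(2)                                   # Lambda = Z^2
print(tiles_with_lattice(square, Z2))            # True: the square tiles with Z^2
print(tiles_with_lattice(square, np.diag([2.0, 1.0])))   # False for 2Z x Z
```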
§3. Polygons that tile only quasi-periodically

3.1 Meyer’s theorem

We now deal with the following question: which polygons with the pairing property admit only quasi-periodic multiple tilings? The main tool here, as it was in \[KL96\], is the idempotent theorem of P.J. Cohen for general locally compact abelian groups, in the form of the following theorem of Y. Meyer \[M70\].

Definition 5 (The coset ring) The coset ring of an abelian group $`G`$ is the smallest collection of subsets of $`G`$ which is closed under finite unions, finite intersections and complements (that is, the smallest ring of subsets of $`G`$) and which contains all cosets of $`G`$.

Remark. When the group is equipped with a topology one usually only demands that the open cosets of $`G`$ be in the coset ring, but we take all cosets in our definition.

Theorem (Meyer) Let $`\Lambda \subseteq \mathbb{R}^d`$ be a discrete set and $`\delta _\Lambda `$ be the Radon measure

$$\delta _\Lambda =\sum_{\lambda \in \Lambda }c_\lambda \delta _\lambda ,\quad c_\lambda \in S,$$

where $`S\subseteq \mathbb{C}\setminus \{0\}`$ is a finite set. Suppose that $`\delta _\Lambda `$ is tempered, and that $`\widehat{\delta _\Lambda }`$ is a Radon measure on $`\mathbb{R}^d`$ which satisfies

$$\left|\widehat{\delta _\Lambda }\right|([-R,R]^d)\le CR^d,\quad \text{as }R\to \infty ,$$ (8)

where $`C>0`$ is a constant. Then, for each $`s\in S`$, the set

$$\Lambda _s=\{\lambda \in \Lambda :c_\lambda =s\}$$

is in the coset ring of $`\mathbb{R}^d`$.

A proof of Meyer’s theorem for $`d=1`$ can be found in \[KL96\]. The proof works verbatim for all $`d`$.

3.2 Discrete elements of the coset ring

In this section we determine the structure of the discrete elements of the coset ring of $`\mathbb{R}^d`$. In dimension $`d=1`$ we have the following characterization of the discrete elements of the coset ring of $`\mathbb{R}`$, due to Rosenthal \[R66\].

Theorem (Rosenthal) The elements of the coset ring of $`\mathbb{R}`$ which are discrete in the usual topology of $`\mathbb{R}`$ are precisely the sets of the form

$$F\mathbin{\triangle }\bigcup_{j=1}^{J}(\alpha _j\mathbb{Z}+\beta _j),$$ (9)

where $`F`$ is finite, $`\alpha _j>0`$ and $`\beta _j\in \mathbb{R}`$ ($`\triangle `$ denotes symmetric difference).

Rosenthal’s proof does not extend to dimension $`d\ge 2`$. Since we need to know what kind of sets the elements of the coset ring of $`\mathbb{R}^2`$ are, we prove the following general theorem.

###### Theorem 3

Let $`G`$ be a topological abelian group and let $`\mathcal{R}`$ be the least ring of sets which contains the discrete cosets of $`G`$. Then $`\mathcal{R}`$ contains all discrete elements of the coset ring of $`G`$. In other words, a discrete element of the coset ring can always be written as a finite union of sets of the type

$$A_1\cap \ldots \cap A_m\cap B_1^c\cap \ldots \cap B_n^c,$$ (10)

where the $`A_i`$ and $`B_i`$ are discrete cosets of $`G`$.

And, observing that the intersection of any two cosets is a coset, we may rewrite (10) as

$$A\cap B_1^c\cap \ldots \cap B_n^c,$$ (11)

where $`A`$ and all $`B_i`$ are discrete cosets. We need the following lemma.

###### Lemma 2

Suppose that $`A`$ is a non-discrete topological abelian group, $`F\subseteq A`$ is discrete and $`B_1,\ldots ,B_n`$ are cosets in $`A`$ disjoint from $`F`$. Then

$$A=F\cup B_1\cup \ldots \cup B_n$$ (12)

implies that $`F=\emptyset `$. This remains true if $`A`$ is a coset in a larger group.

Proof of Lemma 2. Write $`B_i=x_i+G_i`$ and let $`k`$ be the number of different subgroups $`G_i`$ appearing in (12). We do induction on $`k`$. Notice that the group $`G_1`$ may be assumed to be non-discrete, by the non-discreteness of $`A`$. When $`k=1`$ the theorem is true, as then $`F`$ is a union of cosets of $`G_1`$ and cannot be discrete unless it is empty. (Here is where the disjointness of $`F`$ from the $`B_i`$ is used.) Assume the theorem true for $`k\le n`$ and suppose that precisely $`n+1`$ groups appear in (12) and that $`F\ne \emptyset `$. Assume that the $`G_1`$-cosets in (12) are

$$x_1+G_1,\ldots ,x_r+G_1,$$

and let $`y\in F`$. We then have

$$y+G_1\subseteq F\cup (X_2+G_2)\cup \ldots \cup (X_{n+1}+G_{n+1}),$$

with all sets $`X_i`$, $`i=2,\ldots ,n+1`$, being finite. Hence

$$G_1\subseteq (-y+F)\cup (-y+X_2+G_2)\cup \ldots \cup (-y+X_{n+1}+G_{n+1})=F^{\prime }\cup (X_2^{\prime }+G_2)\cup \ldots \cup (X_{n+1}^{\prime }+G_{n+1}),$$

with $`F^{\prime }=-y+F`$, $`X_i^{\prime }=-y+X_i`$. Furthermore, one may take $`X_i^{\prime }\subseteq G_1`$, $`i=2,\ldots ,n+1`$ (possibly empty), to get

$$G_1\subseteq (F^{\prime }\cap G_1)\cup (X_2^{\prime }+(G_2\cap G_1))\cup \ldots \cup (X_{n+1}^{\prime }+(G_{n+1}\cap G_1)).$$

Since $`y\in F`$ we have that $`0\in F^{\prime }\cap G_1`$ (hence it is non-empty) and

$$(F^{\prime }\cap G_1)\cap (X_i^{\prime }+(G_i\cap G_1))=\emptyset ,\quad i=2,\ldots ,n+1.$$

By the induction hypothesis we get a contradiction. $`\square `$

Proof of Theorem 3. By Lemma 2, if $`A`$ is non-discrete then $`A\cap B_1^c\cap \ldots \cap B_n^c`$ is either non-discrete or empty. Hence a finite union of sets of type (11) can only be discrete if all participating $`A`$’s are discrete. Rewrite then

$$A\cap B_1^c\cap \ldots \cap B_n^c=A\cap (B_1\cap A)^c\cap \ldots \cap (B_n\cap A)^c$$

so as to have the arbitrary discrete element of the coset ring made up with finitely many operations from discrete cosets. $`\square `$

Definition 6 (Dimension, lattices) The dimension of a set $`A\subseteq \mathbb{R}^d`$ is the dimension of the smallest translated subspace of $`\mathbb{R}^d`$ that contains $`A`$. A lattice is a discrete subgroup of $`\mathbb{R}^d`$.

Remark. It is well known that all $`k`$-dimensional lattices in $`\mathbb{R}^d`$ are of the form $`A\mathbb{Z}^k`$, where $`A`$ is a $`d\times k`$ real matrix of rank $`k`$.

###### Theorem 4

Let $`C=A\cap B_1^c\cap \ldots \cap B_n^c`$, with $`A,B_i`$ being discrete cosets of $`\mathbb{R}^d`$. Then $`C`$ may be written as a finite (possibly empty) union of sets of the type

$$K\cap L_1^c\cap \ldots \cap L_m^c,\quad L_i\subseteq K\subseteq A,\quad m\ge 0,$$ (13)

where the $`K,L_i`$ are discrete cosets and, when $`C`$ is not empty,

$$\dim L_i<\dim K=\dim A=\dim C.$$

Observation 2 If $`A`$ and $`B`$ are discrete cosets in $`\mathbb{R}^d`$ with $`\dim A=\dim B=\dim (A\cap B)`$ then $`A\cap B^c`$ is a finite (possibly empty) union of disjoint cosets of $`A\cap B`$ and, therefore, $`\dim (A\cap B^c)=\dim A`$, except when $`A\cap B^c=\emptyset `$. Hence $`A`$ and $`B`$ can each be written as a finite disjoint union of translates of $`A\cap B`$.
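(Illustration. A toy instance of Observation 2, with $`A=\mathbb{Z}^2`$ and $`B=2\mathbb{Z}\times \mathbb{Z}`$ — our own example, checked inside a finite window.)

```python
# Observation 2 with A = Z^2, B = 2Z x Z: here A ∩ B^c is the single coset
# (1,0) + (A ∩ B), and A splits into two disjoint translates of A ∩ B.
N = 6
A = {(x, y) for x in range(-N, N + 1) for y in range(-N, N + 1)}
B = {p for p in A if p[0] % 2 == 0}            # here A ∩ B = B itself

shift = lambda S, t: {(x + t[0], y + t[1]) for (x, y) in S}
coset = shift(B, (1, 0)) & A                   # translate of A ∩ B, clipped to window

print(A - B == coset)                          # True: A ∩ B^c is one coset of A ∩ B
print(A == B | coset)                          # True: A is a disjoint union of translates
```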
Proof of Theorem 4. Notice that

$$C=A\cap (B_1\cap A)^c\cap \ldots \cap (B_n\cap A)^c.$$

Let

$$\alpha =\dim A=\dim (B_1\cap A)=\ldots =\dim (B_r\cap A)$$

and $`\dim (B_i\cap A)<\alpha `$ for $`i>r\ge 0`$. Let

$$C^{\prime }=A\cap (B_1\cap A)^c\cap \ldots \cap (B_r\cap A)^c.$$

By induction on $`r\ge 0`$ we prove that $`C^{\prime }`$ is a finite union of sets of type (13). For $`r=0`$ this is obvious. If it is true for $`r-1`$ then $`C^{\prime }`$ is a finite union of sets of type

$$K\cap L_1^c\cap \ldots \cap L_m^c\cap (B_r\cap A)^c,$$

with $`\alpha =\dim K>\dim L_i`$, $`i=1,\ldots ,m`$. Each of these sets falls into one of two categories:

Category 1: $`\dim (K\cap (B_r\cap A))=\alpha `$. Then, by Observation 2 above, $`K\cap (B_r\cap A)^c`$ is a finite union of cosets $`K_1,\ldots ,K_s`$ of dimension $`\alpha `$ and hence $`C^{\prime }`$ is a finite union of the sets $`K_i\cap L_1^c\cap \ldots \cap L_m^c`$.

Category 2: $`\dim (K\cap (B_r\cap A))<\alpha `$. Then

$$K\cap L_1^c\cap \ldots \cap L_m^c\cap (B_r\cap A)^c$$

is already of the desired form. $`\square `$

From Theorems 3 and 4 it follows for $`d=2`$ that every discrete element $`S`$ of the coset ring of $`\mathbb{R}^2`$ may be written as

$$S=\left(\left(\bigcup_{j=1}^{J}A_j\setminus (B_1^{(j)}\cup \ldots \cup B_{n_j}^{(j)})\right)\cup \bigcup_{l=1}^{L}L_l\right)\mathbin{\triangle }F,$$ (14)

where $`A_1,\ldots ,A_J`$ are $`2`$-dimensional translated lattices, $`L_l`$ and $`B_i^{(j)}`$ are $`1`$-dimensional translated lattices and $`F`$ is a finite set ($`J,L\ge 0`$). And, repeatedly using Observation 2, the lattices $`A_j`$ may be assumed to have pairwise intersections of dimension at most $`1`$.

3.3 Purely discrete Fourier Transform

Definition 7 (Uniform density) A multiset $`\Lambda \subseteq \mathbb{R}^d`$ has asymptotic density $`\rho `$ if

$$\underset{R\to \infty }{\lim }\frac{\left|\Lambda \cap B_R(x)\right|}{\left|B_R(x)\right|}=\rho $$

uniformly in $`x\in \mathbb{R}^d`$. We say that $`\Lambda `$ has (uniformly) bounded density if the fraction above is bounded by a constant $`\rho `$ uniformly for all $`x`$ and $`R>1`$. We say then that $`\Lambda `$ has density uniformly bounded by $`\rho `$.

Assume that $`\Lambda \subseteq \mathbb{R}^2`$ is a discrete multiset of bounded density which satisfies the assumptions of Meyer’s Theorem (we write $`c_\lambda `$ for the multiplicity of $`\lambda \in \Lambda `$). Then, if $`\Lambda _k`$ is the subset of $`\Lambda `$ of multiplicity $`k`$, $`\Lambda _k`$ is a discrete element of the coset ring and is of the form (14). Assume now in addition that $`\widehat{\delta _\Lambda }`$ has discrete support. We shall prove that all the sets $`F`$, $`L_l`$ and $`B_i^{(j)}`$ in (14) are then empty, and so

$$\Lambda =\bigcup_{j=1}^{J}A_j,$$

where the $`A_j`$ are translated $`2`$-dimensional lattices in $`\mathbb{R}^2`$. One can easily show that whenever $`\Omega \subseteq \mathbb{R}^d`$ of finite measure tiles with $`\Lambda `$ at level $`w`$ then $`\Lambda `$ has density $`w/\left|\Omega \right|`$.

###### Theorem 5

Suppose that $`\Lambda \subseteq \mathbb{R}^d`$ is a multiset with density $`\rho `$, $`\delta _\Lambda =\sum_{\lambda \in \Lambda }\delta _\lambda `$, and that $`\widehat{\delta _\Lambda }`$ is a measure in a neighborhood of $`0`$. Then $`\widehat{\delta _\Lambda }(\{0\})=\rho `$.
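(Illustration. Theorem 5 can be watched numerically. A minimal sketch, with $`\Lambda =\mathbb{Z}^2`$ — density $`\rho =1`$ — and a Gaussian in place of the compactly supported $`\varphi `$, so that $`\widehat{\varphi }`$ is explicit; these choices are our own.)

```python
import numpy as np

# delta_hat_Lambda({0}) as lim_t t^{-d} sum_lambda phi_hat(lambda / t) for
# Lambda = Z^2 (d = 2, density 1), phi(x) = exp(-pi |x|^2), phi_hat = phi.
def pairing(t, N=400):
    n = np.arange(-N, N + 1)
    X, Y = np.meshgrid(n, n)
    return np.exp(-np.pi * (X**2 + Y**2) / t**2).sum() / t**2

for t in (1.0, 5.0, 25.0):
    print(t, pairing(t))   # 1.1803..., then ~1.0, ~1.0: tends to rho = 1
```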
Proof of Theorem 5. Take $`\varphi \in C^{\infty }`$ of compact support with $`\varphi (0)=1`$. We have

$$\widehat{\delta _\Lambda }(\{0\})=\underset{t\to \infty }{\lim }\widehat{\delta _\Lambda }(\varphi (tx))=\underset{t\to \infty }{\lim }\delta _\Lambda (t^{-d}\widehat{\varphi }(\xi /t))=\underset{t\to \infty }{\lim }t^{-d}\sum_{\lambda \in \Lambda }\widehat{\varphi }(\lambda /t)=\underset{t\to \infty }{\lim }\sum_{n\in \mathbb{Z}^d}\sum_{\lambda \in Q_n}t^{-d}\widehat{\varphi }(\lambda /t),$$

where, for fixed and large $`T>0`$,

$$Q_n=[0,T)^d+Tn,\quad n\in \mathbb{Z}^d.$$

Since $`\Lambda `$ has density $`\rho `$ it follows that for each $`\epsilon >0`$ we can choose $`T`$ large enough so that for all $`n`$

$$\left|\Lambda \cap Q_n\right|=\rho \left|Q_n\right|(1+\delta _n),$$

with $`\left|\delta _n\right|\le \epsilon `$. For each $`n`$ and $`\lambda \in Q_n`$ we have

$$\widehat{\varphi }(\lambda /t)=\widehat{\varphi }(Tn/t)+r_\lambda $$

with $`\left|r_\lambda \right|\le CTt^{-1}\|\nabla \widehat{\varphi }\|_{L^{\infty }(t^{-1}Q_n)}`$. Hence

$$\widehat{\delta _\Lambda }(\{0\})=\underset{t\to \infty }{\lim }\sum_{n\in \mathbb{Z}^d}t^{-d}\sum_{\lambda \in Q_n}\left(\widehat{\varphi }(Tn/t)+r_\lambda \right)=\underset{t\to \infty }{\lim }\sum_{n\in \mathbb{Z}^d}t^{-d}\rho \left|Q_n\right|(1+\delta _n)\widehat{\varphi }(Tn/t)+\underset{t\to \infty }{\lim }\sum_{n\in \mathbb{Z}^d}t^{-d}\sum_{\lambda \in Q_n}r_\lambda =\underset{t\to \infty }{\lim }S_1+\underset{t\to \infty }{\lim }S_2.$$

We have

$$\left|S_1-\sum_{n}t^{-d}\rho \left|Q_n\right|\widehat{\varphi }(Tn/t)\right|\le \epsilon \sum_{n}t^{-d}\rho \left|Q_n\right|\left|\widehat{\varphi }(Tn/t)\right|.$$ (15)

The first sum in (15) is a Riemann sum for $`\rho \int_{\mathbb{R}^d}\widehat{\varphi }=\rho `$ and the second is a Riemann sum for $`\rho \int_{\mathbb{R}^d}|\widehat{\varphi }|<\infty `$. For $`S_2`$ we have

$$\left|S_2\right|\le C\sum_{n\in \mathbb{Z}^d}t^{-d}\rho \left|Q_n\right|(1+\delta _n)Tt^{-1}\|\nabla \widehat{\varphi }\|_{L^{\infty }(t^{-1}Q_n)}\le 2\rho CTt^{-1}\sum_{n\in \mathbb{Z}^d}t^{-d}\left|Q_n\right|\,\|\nabla \widehat{\varphi }\|_{L^{\infty }(t^{-1}Q_n)}.$$

The sum above is a Riemann sum for $`\int_{\mathbb{R}^d}|\nabla \widehat{\varphi }|`$, which is finite, hence $`\lim _{t\to \infty }S_2=0`$. Since $`\epsilon `$ is arbitrary the proof is complete. $`\square `$

Remark. The same proof as that of Theorem 5 shows that, if

$$\mu =\sum_{\lambda \in \Lambda }c_\lambda \delta _\lambda ,$$

with $`\left|c_\lambda \right|\le C`$, $`\Lambda `$ is of density $`0`$ and the tempered distribution $`\widehat{\mu }`$ is locally a measure in the neighborhood of some point $`a\in \mathbb{R}^2`$, then we have $`\widehat{\mu }(\{a\})=0`$.

###### Theorem 6

Suppose that $`\Lambda \subseteq \mathbb{R}^2`$ is a uniformly discrete multiset and that

$$\widehat{\delta _\Lambda }=\left(\sum_{\lambda \in \Lambda }\delta _\lambda \right)^{\wedge }$$

is locally a measure with

$$\left|\widehat{\delta _\Lambda }\right|(B_R(0))\le CR^2,$$

for some positive constant $`C`$. Assume also that $`\widehat{\delta _\Lambda }`$ has discrete support. Then $`\Lambda `$ is a finite union of translated lattices.
Proof of Theorem 6. Define the sets (not multisets)

$$\Lambda _k=\{\lambda \in \Lambda :\lambda \text{ has multiplicity }k\}.$$

By Meyer’s Theorem (applied to the base set of the multiset $`\Lambda `$ with the coefficients $`c_\lambda `$ equal to the corresponding multiplicities) each of the $`\Lambda _k`$ is in the coset ring of $`\mathbb{R}^2`$ and, being discrete, is of the type (14). We may thus write

$$\Lambda _k=A\mathbin{\triangle }B,$$ (16)

with $`A=\bigcup_{j=1}^{J}A_j`$, where the $`2`$-dimensional translated lattices $`A_j`$ have pairwise intersections of dimension at most $`1`$, and $`\mathrm{dens}\,B=0`$. Hence

$$\delta _{\Lambda _k}=\sum_{j=1}^{J}\delta _{A_j}+\mu ,$$

where $`\mu =\sum_{f\in F}c_f\delta _f`$, $`\mathrm{dens}\,F=0`$ and $`\left|c_f\right|\le C(J)`$. The set $`F`$ consists of $`B`$ and all points contained in at least two of the $`A_j`$. Combining for all $`k`$, and reusing the symbols $`A_j`$, $`\mu `$ and $`F`$, we get

$$\delta _\Lambda =\sum_{j=1}^{J}\delta _{A_j}+\mu .$$

But $`\widehat{\delta _\Lambda }`$ and $`\sum_{j=1}^{J}\widehat{\delta _{A_j}}`$ are both (by the assumption and the Poisson Summation Formula) discrete measures, and therefore so is $`\widehat{\mu }`$. However, $`\mathrm{dens}\,F=0`$ and the boundedness of the coefficients $`c_f`$ imply that $`\widehat{\mu }`$ has no point masses (see the Remark after the proof of Theorem 5), which means that $`\widehat{\mu }=0`$ and hence $`\mu =0`$. Hence $`\delta _\Lambda =\sum_{j=1}^{J}\delta _{A_j}`$, or

$$\Lambda =\bigcup_{j=1}^{J}A_j,\quad \text{as multisets}.$$

$`\square `$

Finally, we show that discrete support for $`\widehat{\delta _\Lambda }`$ implies that $`\widehat{\delta _\Lambda }`$ is locally a measure.

###### Theorem 7

Suppose that the multiset $`\Lambda \subseteq \mathbb{R}^d`$ has density uniformly bounded by $`\rho `$ and that, for some point $`a\in \mathbb{R}^d`$ and $`R>0`$,

$$\mathrm{supp}\,\widehat{\delta _\Lambda }\cap B_R(a)\subseteq \{a\}.$$

Then, in $`B_R(a)`$, we have $`\widehat{\delta _\Lambda }=w\delta _a`$, for some $`w`$ with $`\left|w\right|\le \rho `$.
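(Illustration. The engine of the proof below is a growth dichotomy — see (18) and (19): pairing a derivative of $`\delta _a`$ with the shrinking bumps $`\varphi _t`$ grows like a power of $`t`$, while pairing a point-mass distribution of bounded density stays bounded. A one-dimensional numerical sketch, with our own choice of $`\varphi `$:)

```python
import numpy as np

# phi(x) = (x + 1) exp(-pi x^2): phi(0) = 1, phi'(0) = 1, rapidly decaying.
# phi_t(x) = phi(t x) concentrates at 0 as t grows.
phi = lambda x: (x + 1) * np.exp(-np.pi * x**2)

n = np.arange(-10**5, 10**5 + 1)
for t in (1.0, 10.0, 100.0):
    bounded = np.sum(phi(t * n))   # (sum_n delta_n)(phi_t): stays bounded (~1)
    growing = -t * 1.0             # (delta_0')(phi_t) = -phi_t'(0) = -t phi'(0)
    print(t, bounded, growing)     # the derivative term outgrows any such measure term
```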
Proof of Theorem 7. It is well known that the only tempered distributions supported at a point $`a`$ are the finite linear combinations of the derivatives of $`\delta _a`$. So we may assume that, for smooth $`\varphi `$ supported in $`B_R(a)`$,

$$\widehat{\delta _\Lambda }(\varphi )=\sum_{\alpha }c_\alpha (D^\alpha \delta _a)(\varphi )=\sum_{\alpha }(-1)^{\left|\alpha \right|}c_\alpha D^\alpha \varphi (a),$$ (17)

where the sum extends over all values of the multiindex $`\alpha =(\alpha _1,\ldots ,\alpha _d)`$ with $`\left|\alpha \right|=\alpha _1+\ldots +\alpha _d\le m`$ (a finite degree) and $`D^\alpha =\partial _1^{\alpha _1}\cdots \partial _d^{\alpha _d}`$ as usual. We want to show that $`m=0`$. Assume the contrary and let $`\alpha _0`$ be a multiindex that appears in (17) with a non-zero coefficient and has $`\left|\alpha _0\right|=m`$. Pick a smooth function $`\varphi `$ supported in a neighborhood of $`0`$ which is such that for each multiindex $`\alpha `$ with $`\left|\alpha \right|\le m`$ we have $`D^\alpha \varphi (0)=0`$ if $`\alpha \ne \alpha _0`$, and $`D^{\alpha _0}\varphi (0)=1`$. (To construct such a $`\varphi `$, multiply the polynomial $`\frac{1}{\alpha _0!}x^{\alpha _0}`$ with a smooth function supported in a neighborhood of $`0`$ which is identically equal to $`1`$ in a neighborhood of $`0`$.) For $`t\to \infty `$ let $`\varphi _t(x)=\varphi (t(x-a))`$. Equation (17) then gives that

$$\widehat{\delta _\Lambda }(\varphi _t)=t^m(-1)^mc_{\alpha _0}.$$ (18)

On the other hand, using

$$\left(\varphi (t(x-a))\right)^{\wedge }(\xi )=e^{-2\pi i\langle a,\xi \rangle }t^{-d}\widehat{\varphi }(\xi /t)$$

we get

$$\widehat{\delta _\Lambda }(\varphi _t)=\sum_{\lambda \in \Lambda }e^{-2\pi i\langle a,\lambda \rangle }t^{-d}\widehat{\varphi }(\lambda /t).$$ (19)

Notice that (19) is a bounded quantity as $`t\to \infty `$, by a proof similar to that of Theorem 5, while (18) increases like $`t^m`$, a contradiction. Hence $`\widehat{\delta _\Lambda }=w\delta _a`$ in a neighborhood of $`a`$. The proof of Theorem 5 again gives that $`\left|w\right|\le \rho `$. $`\square `$

Using Theorem 7 we may drop from Theorem 6 the assumption that $`\widehat{\delta _\Lambda }`$ is locally a measure, as this is now implied by the discrete support which we assume for $`\widehat{\delta _\Lambda }`$. Summing up, we have the following.

###### Theorem 8

Suppose that the multiset $`\Lambda \subseteq \mathbb{R}^d`$ has uniformly bounded density, that $`S=\mathrm{supp}\,\widehat{\delta _\Lambda }`$ is discrete, and that

$$\left|S\cap B_R(0)\right|\le CR^d,$$

for some positive constant $`C`$. Then $`\Lambda `$ is a finite union of translated $`d`$-dimensional lattices.

3.4 Application to tilings by polygons

In this section we apply Theorem 8 and the characterization of the zero-sets of the functions $`\widehat{\mu _{e,\tau }}`$ (Theorem 1) in order to give very general sufficient conditions for a polygon $`K`$ to admit only quasi-periodic tilings, if it tiles at all.

###### Theorem 9

Let the polygon $`K`$ have the pairing property and tile the plane multiply with the multiset $`\Lambda `$. Denote the edges of $`K`$ by (we follow the notation of §1)

$$e_1,e_1+\tau _1,e_2,e_2+\tau _2,\ldots ,e_n,e_n+\tau _n.$$

Suppose also that

$$\{\stackrel{~}{e_1},\stackrel{~}{\tau _1}\}\cap \ldots \cap \{\stackrel{~}{e_n},\stackrel{~}{\tau _n}\}=\emptyset ,$$ (20)

where with $`\stackrel{~}{v}`$ we denote the orientation of the vector $`v`$. Then $`\Lambda `$ is a finite union of translated $`2`$-dimensional lattices.

Proof of Theorem 9. By Theorem 1 and the tiling assumption we get

$$\mathrm{supp}\,\widehat{\delta _\Lambda }\subseteq Z(\widehat{\mu _{e_1,\tau _1}})\cap \ldots \cap Z(\widehat{\mu _{e_n,\tau _n}}).$$

By Theorem 1, each of the sets in the intersection above is contained in a collection of lines orthogonal to $`e_i`$ union a collection of lines orthogonal to $`\tau _i`$. Because of assumption (20) these sets have a discrete intersection, as two lines of different orientations intersect at a single point: a full line contained in the intersection would have to be orthogonal to a vector from each pair, so that the sets $`\{\stackrel{~}{e_i},\stackrel{~}{\tau _i}\}`$ would all share a common orientation, contradicting (20). Furthermore, because of the regular spacing of these families of lines, it follows that the resulting intersection has at most $`CR^2`$ points in any disc of radius $`R`$. Theorem 8 now implies that $`\Lambda `$ is a finite union of translated $`2`$-dimensional lattices. $`\square `$
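(Illustration. Condition (20) is straightforward to check by machine. The code below is our own sketch; the hexagon — with vertices $`(1,0),(0,1),(-1,1),(-1,0),(0,-1),(1,-1)`$ — is a standard centrally symmetric example.)

```python
import numpy as np

def orientation(v):
    """Direction of a non-zero vector modulo pi, as a rounded angle in [0, pi)."""
    return round(float(np.arctan2(v[1], v[0]) % np.pi), 9)

def condition_20(pairs):
    """pairs = [(e_i, tau_i), ...]; True iff the orientation sets in (20) are disjoint."""
    sets = [{orientation(np.asarray(e, float)), orientation(np.asarray(t, float))}
            for e, t in pairs]
    return not set.intersection(*sets)

# Edge pairs (e_i, tau_i) of the hexagon above, read off its opposite edges:
hexagon = [((-1, 1), (-1, -1)), ((-1, 0), (1, -2)), ((0, -1), (2, -1))]
square  = [((1, 0), (0, 1)), ((0, 1), (1, 0))]

print(condition_20(hexagon))   # True: only quasi-periodic multiple tilings (Theorem 9)
print(condition_20(square))    # False: (20) fails, as it does for any parallelogram
```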
The condition (20) is particularly easy to check for convex polygons.

###### Theorem 10

Suppose that $`K`$ is a symmetric convex polygon which is not a parallelogram. Then $`K`$ admits only quasi-periodic multiple tilings.

Proof of Theorem 10. Suppose that (20) fails and that the intersection in (20) contains an orientation which is, say, that of the $`x`$-axis. It follows that each pair of edges $`e_i,e_i+\tau _i`$ of $`K`$ either (a) has both edges parallel to the $`x`$-axis, or (b) has the line joining the two midpoints parallel to the $`x`$-axis. As this latter line goes through the origin, and a line through the origin meets the boundary of $`K`$ in at most two (symmetric) points, (b) can only happen for one pair of edges; and since a convex polygon has at most two edges parallel to a given direction, (a) can hold for at most one pair as well. Hence $`K`$ has at most two pairs of edges, i.e., it is a centrally symmetric quadrilateral, which means that $`K`$ is a parallelogram. $`\square `$

Remarks.

1. It is clear that parallelograms admit tilings which are not quasi-periodic. Take for example the regular tiling by a square and move each vertical column of squares arbitrarily up or down.

2. Some very interesting classes of polygons are left out of reach of Theorem 9. An important class consists of all polygons whose edges are parallel either to the $`x`$\- or to the $`y`$-axis.

§4. Bibliography